C++ Program to Find the Size of Primitive Data Types
In this post, you’ll learn how to write a program to find the size of primitive data types in C++.
This lesson will help you learn how to find the size of primitive data types using the sizeof operator in C++. Let’s look at the source code below.
How to Find the Size of Primitive Data Types in C++?
Source Code
#include <iostream>
using namespace std;

int main() {
    cout << "Size of char: " << sizeof(char) << " byte" << endl;
    cout << "Size of int: " << sizeof(int) << " bytes" << endl;
    cout << "Size of float: " << sizeof(float) << " bytes" << endl;
    cout << "Size of double: " << sizeof(double) << " bytes";
    return 0;
}
Output
Size of char: 1 byte
Size of int: 4 bytes
Size of float: 4 bytes
Size of double: 8 bytes
Let’s break down the code and elaborate on the steps to understand it better.
#include <iostream>
- This line, called a header inclusion, appears in almost every C++ program. The #include directive tells the compiler to use an available file, and <iostream> is the name of the specific header used in this code.
- The <iostream> header provides the input and output stream objects.
- The key expression in this code is sizeof(DataType); using the sizeof operator, we can find the size of a data type in bytes.
- Display the output statements with cout; using the insertion operator ‘ << ‘, we can insert the sizeof expression to print the size of the data type.
- When using sentences, write them within quotes ” “, e.g. "Size of char: ", followed by ‘ << ‘ and the sizeof expression.
- When we run the program, each sizeof expression is evaluated and the answer is displayed in its place.
- End the function with return 0; this returns control from main( ) and signals successful completion to the operating system.
Now that you have understood how the source code performs its function, try it yourself with the following sizeof expressions to understand it better.
- sizeof (char)
- sizeof (int)
- sizeof (short int)
- sizeof (long int)
- sizeof (signed long int)
- sizeof (unsigned long int)
- sizeof (float)
- sizeof (double)
- sizeof (wchar_t)
Note: The ‘ << endl ‘ in the code ends the current line and moves output to the next line. To understand how it works, try removing it from the code or moving it around.
|
https://developerpublish.com/academy/courses/c-programming-examples-2/lessons/c-program-to-find-the-size-of-primitive-data-types/
|
CC-MAIN-2021-49
|
refinedweb
| 372
| 68.74
|
Hello all!
My summer has been filled with a bunch of projects that I’ll be offloading here soon! The first one I’ll be talking about is the RFID cloner I built. Making the code work took a lot longer than I expected because I haven’t used C++ in around 5 years, and Arduino is based on C++, so I had forgotten how to do simple things, but I eventually got it working. The basis of the project is the MFRC522 module and an Arduino Mega 2560. The goal here was to create a system that could scan and duplicate high-frequency RFID cards.
Before I begin, I think it’s necessary to cover some basic RFID details. There are 3 main types of RFID: low-frequency, high-frequency, and ultra-high-frequency. These types determine the radio frequency at which the RFID card or tag will ‘excite’ and turn on to let the reader access the information on the card. Low-frequency RFID systems usually operate at 125 kHz, which lets them stand up well against interference but gives them a short range. High-frequency RFID is what the MFRC522 reads; it usually operates at 13.56 MHz, which gives it better range but makes it more vulnerable to interference. And lastly there is ultra-high-frequency RFID, which operates from 300 MHz to 1 GHz, allowing an insane range of up to around 100 meters. I will be using a passive HF system, meaning that there is no battery powering my card; instead the reader emits an electromagnetic field that powers the card wirelessly through inductive coupling.
Onto the actual project! My first task was verifying that all my components worked and getting the proper setups for the pieces I was using. I settled upon 3 libraries that I needed to get everything running smoothly. The first is SPI.h, which is used for the serial interface to communicate to the MFRC522. This library was also massively useful for debugging because I could control the MFRC522 directly and monitor what I was doing, allowing me to get the main code working before my display even arrived. I also used the official MFRC522.h library for controlling the reader and the standard LiquidCrystal.h library for controlling the display. Importing libraries with Arduino is super easy; if you’re using the standard Arduino IDE just install the library through the library manager and then use the
#include <> directive. I also needed to declare the objects so that my components know which pins on the Arduino I’m routing them through, so the beginning of my code looks like this.
#include <SPI.h>           // Import Serial Peripheral Interface library for RC522
#include <MFRC522.h>       // Import library for RC522 RFID Module
#include <LiquidCrystal.h> // Import library for LCD Display

#define RST_PIN   5  // Reset pin for MFRC522
#define SS_PIN   53  // Serial pin for MFRC522
#define READ_PIN  2  // Pullup resistor pin for read button
#define WRITE_PIN 3  // Pullup resistor pin for write button

MFRC522 mfrc522(SS_PIN, RST_PIN);       // Define MFRC522 instance
LiquidCrystal lcd(7, 8, 9, 10, 11, 12); // Define lcd instance
byte NEW_UID[4]; // Create a 4-byte array for the UID to be stored in when reading
Next up we need to talk about how Arduino handles functions. When you start a sketch you get 2 main functions, setup() and loop(). The setup() function runs only once and is, as the name suggests, used for setting up variables, pin modes, etc. loop() runs over and over again, so you generally use it for everything else unless you want to make a custom function for something; for this project I didn’t have much need for that. During my setup function I needed to do a few key things, the most important of which were to initialize the serial bus and the LCD library with its resolution. After that I also defined my pins using pinMode(), which takes a pin and sets it up as either INPUT, INPUT_PULLUP, or OUTPUT. For this code I defined READ_PIN and WRITE_PIN as INPUT_PULLUP: an internal resistor holds the button’s line high, and when the button is pushed the line is connected to ground, so the pin reads LOW and the Arduino knows the button has been pushed.
void setup() {
  Serial.begin(9600);
  SPI.begin();
  lcd.begin(16, 2);
  mfrc522.PCD_Init(); // Init MFRC522 card
  pinMode(READ_PIN, INPUT_PULLUP);
  pinMode(WRITE_PIN, INPUT_PULLUP);
}
Next up we get into the fun part… the loop! This has 3 parts: the initial precursor stuff, then the read function and the write function. Properly I should have put the last two into their own functions, but instead I placed them into if statements because it was easier. So at the beginning of the loop function I clear the LCD and print "Standby" on it, then wait until the reader detects a new card in its range.
void loop() {
  lcd.clear();
  lcd.print("Standby");
  // Wait until new card is present
  if ( ! mfrc522.PICC_IsNewCardPresent() || ! mfrc522.PICC_ReadCardSerial() ) {
    delay(50);
    return;
  }
Once this section has run and the Arduino detects a new card it’ll jump into either the read or the write loop depending on which button is pressed. If the read button is pressed then the LCD will display “READING…” while a for loop grabs the UID from the card and prints it to the display, also placing the UID into the NEW_UID list we created earlier.
// Read function
if (digitalRead(READ_PIN) == LOW) {
  lcd.clear();
  lcd.setCursor(0, 0);
  lcd.print("READING...");
  delay(500);
  lcd.setCursor(0, 1);
  lcd.print("UID:");
  for (byte i = 0; i < mfrc522.uid.size; i++) {
    lcd.print(mfrc522.uid.uidByte[i] < 0x10 ? " 0" : " ");
    lcd.print(mfrc522.uid.uidByte[i], HEX);
    NEW_UID[i] = (mfrc522.uid.uidByte[i]);
  }
  lcd.setCursor(0, 0);
  delay(1000);
}
On the other hand when the write button is pressed the program will take the UID previously stored in NEW_UID and it will write it to the card. If it succeeds in writing the new UID then it declares “Written!” to the screen and reads the UID to display it on the screen again just as a sanity check.
// Write function
if (digitalRead(WRITE_PIN) == LOW) {
  lcd.clear();
  lcd.print("WRITING");
  delay(500);
  lcd.setCursor(0, 0);
  if ( mfrc522.MIFARE_SetUid(NEW_UID, (byte)4, true) ) {
    lcd.print("Written!");
  }
  lcd.setCursor(0, 1);
  for (byte i = 0; i < mfrc522.uid.size; i++) {
    lcd.print(NEW_UID[i], HEX);
  }
  delay(1000);
}
lcd.setCursor(0, 0);
}
And bam, we have a functioning RFID Cloner! By creating this I did learn quite a bit about RFID itself, as well as how to manipulate it. I hope you enjoyed the quick overview of current RFID tech and my attempt at a cloner.
Thanks for reading and have a wonderful day!
~ Corbin
|
https://maker.godshell.com/archives/tag/rfid
|
CC-MAIN-2020-45
|
refinedweb
| 1,158
| 63.09
|
| Arduino Forum :: Members :: Daanii
Show Posts
Pages: [1] 2 3 ... 18
1
Using Arduino
/
Motors, Mechanics, and Power
/
Re: Control 48v Go Kart motor
on: May 11, 2014, 01:00:09 pm
You have to be a little careful with your motors because the rated 1000 Watts of power is probably the continuous rating. The motors will have a stall current of well over the rated 27 Amps. That's why the Pololu controllers would not work. The Pololu and Adafruit boards are great, but your motors are a little big for them.
I think the scooter motor controller would have a tough time handling the current for your motors as well. Its technical specs suggest it for a 350 to 600 Watt motor, not a 1000 Watt motor like yours. (Though it does say peak current of 40 Amps, so it may work with your motors -- hard to say.)
If you can run both motors with the same controller, a golf cart controller like this one might work at a relatively cheap price. That controller should be heavy duty enough to handle any current your motors pull.
But that controller does not switch directions (forward and reverse). To enable that, you would need a reversible contactor like this one.
You can easily control the above motor controller with an Arduino PWM output pin. The contactor is harder to control electronically. You would need to wire up a circuit with a transistor or relay.
2
Using Arduino
/
General Electronics
/
Re: Help me improve my dire first attempt at soldering
on: April 30, 2014, 07:31:47 pm
Quote from: Oscar_Zeta_Acosta on April 30, 2014, 07:50:53 am
Thanks for the help guys. So, I won't get away with leaving it as it is?
I have some desoldering braid but I struggle to get the molten solder to soak into it.
You can probably get away with leaving it the way it is. I did not see any places where the solder bridged the gap between pins. The main problem with your soldering job is that you wasted solder. But practice makes perfect. Or at least practice makes you better. So I would just try to improve next time.
I'm surprised that you are having trouble with the desoldering braid. That stuff usually works pretty well.
Just put the braid on top of the solder you want to remove and press down on the braid with the tip of your soldering iron. Make sure you press down firmly, or the heat will not make it through the braid to melt the solder. And be sure to move to a new spot on the braid when the old spot has turned silver with solder.
3
Using Arduino
/
Programming Questions
/
Re: Compile errors when trying to use interrupt.h with a Pololu A-Star 32U4 Micro
on: April 26, 2014, 04:01:47 pm
I'm not sure exactly what is going on, but I found out that the problem is not with using the libraries with an ATmega32u4 board. They seem to work fine. The trouble was that I was trying to use Timer2, which apparently doesn't work on those boards. I switched to Timer3, and the code compiles fine now.
4
Using Arduino
/
Programming Questions
/
Re: Compile errors when trying to use interrupt.h with a Pololu A-Star 32U4 Micro
on: April 26, 2014, 01:29:30 pm
I see that the definitions that I need (TCNT2, TCCR2A, TCCR2B, WGM21, CS22, etc.) are defined in a header file that gets selected in avr/io.h.
But when I look at the header file io.h it seems to be a version from 12/4/2008. Are there no newer versions of avr/io.h? Or am I reading that date wrong?
Assuming that there is no version of io.h that will work for my board, I guess I can create a new version that I can include instead. But any suggestions would be appreciated.
5
Using Arduino
/
Programming Questions
/
Re: Compile errors when trying to use interrupt.h with a Pololu A-Star 32U4 Micro
on: April 26, 2014, 12:36:39 pm
Quote from: PaulS on April 26, 2014, 08:35:05 am
Why does that surprise you?
Because I haven't worked much with libraries. I didn't realize that many of them are not going to work with the Arduino Leonardo or similar boards. Now I know.
The 32u4 has three timers. I'll try to make my way through the code libraries and see where the timer-related variables are defined, and define them for the 32u4 chip.
6
Using Arduino
/
Programming Questions
/
Re: Compile errors when trying to use interrupt.h with a Pololu A-Star 32U4 Micro
on: April 25, 2014, 08:57:21 pm
Yes, I know it is not an Arduino. But it does run in the Arduino environment. I thought the program I wrote might not run on this board, but was surprised to find that it would not even compile. But I tried to compile the program for an Arduino Leonardo, which uses the same 32U4 chip, and got the same error.
So I guess the Arduino libraries only work for the 328 and 168 boards, or at least the libraries that use the timers. And I guess the environment is set up so that the libraries do not load, thus generating a compiler error to let people like me know that the libraries don't work. Is that right?
7
Using Arduino
/
Programming Questions
/
[Solved] Compile errors trying to use interrupt.h with Pololu A-Star 32U4 Micro
on: April 25, 2014, 05:39:55 pm
I wanted to try out the Pololu A-Star 32U4 Micro. I have a program written for the Arduino Uno that compiles and runs fine for it. But I get compiler errors when I select as my board "Pololu A-Star 32U4" instead of "Arduino Uno." It seems like the variable names in the libraries are not being loaded. My code is below:
Code:
#include <ServoTimer2.h>
#include "DualVNH5019MotorShield.h"
#include <io.h>
#include <interrupt.h>
#define BuzzerPin 5
#define LeftServoPin 11
#define RightServoPin 3
unsigned char cmd, value;
float Steering;
int throttle;
DualVNH5019MotorShield md;
ServoTimer2 leftServo; // create servo object to control a servo
// a maximum of eight servo objects can be created
ServoTimer2 rightServo;
int posLeft = 0; // variable to store the servo position
int posRight = 0;
void setup() {
// set up Arduino motor shield
md.init();
// set up servos for steering
leftServo.attach(LeftServoPin);
rightServo.attach(RightServoPin);
pinMode(BuzzerPin, OUTPUT);
pinMode(RightServoPin, OUTPUT);
pinMode(LeftServoPin, OUTPUT);
leftServo.write(1245);
rightServo.write(1606);
// set compare match register to desired timer count:
// OCR2A = 150624;
// start serial port at 115,200 bits per second
Serial.begin(115200);
// wait until serial contact is made
while (Serial.available() <= 0) {
delay(300);
}
}
// this interrupt service routine for timer 2 toggles the horn buzzer
ISR(TIMER2_COMPA_vect)
{
digitalWrite(BuzzerPin, !digitalRead(BuzzerPin));
// reset counter 2
TCNT2 = B00000000;
}
void loop() {
// if we get a valid byte, read throttle signal:
if (Serial.available() > 0) {
// get an incoming byte:
cmd = Serial.read();
// If it's not a command character, ignore it and wait for the next one
if(cmd != 'F' && cmd != 'B' && cmd != 'H' &&
cmd != 'R' && cmd != 'L')return;
// got the command, now wait for the data char
while(Serial.available() == 0);
value = Serial.read();
// got the command and value, now do the command
switch(cmd) {
case 'F':
value = constrain(value, 0, 255);
throttle = map(value, 0, 255, 0, 400);
md.setM2Speed(throttle);
// Serial.println(throttle);
break;
case 'B':
value = constrain(value, 0, 255);
throttle = map(value, 0, 255, 0, -400);
md.setM2Speed(throttle);
break;
case 'H':
if (value == 0) {
cli(); // disable global interrupts
// turn Timer2 off
TCCR2A = 0; // set entire TCCR2A register to 0
TCCR2B = 0;
// disable the timer compare interrupt:
TIMSK2 &= ~(1 << OCIE2A);
sei(); // enable global interrupts
} else {
// initialize Timer2
cli(); // disable global interrupts
TCCR2A = 0; // clear entire TCCR2A register
TCCR2B = 0;
// turn on CTC mode:
TCCR2B |= (1 << WGM21);
// Set timer clock speed:
TCCR2B |= (1 << CS22);
// enable timer compare interrupt:
TIMSK2 |= (1 << OCIE2A);
sei(); // enable global interrupts
}
break;
case 'R':
value = constrain(value, 0, 255);
Steering = float(value - 127);
if (Steering > 0) {
posRight = 103 - int(Steering * 0.258); // divide by 128 and times by 33
} else {
posRight = 103 + int(Steering * -0.449); // divide by -127 and times by 57
}
posRight = map(posRight, 0, 180, 544, 2400);
rightServo.write(posRight);
break;
case 'L':
value = constrain(value, 0, 255);
Steering = float(value - 127);
if (Steering > 0) {
posLeft = 68 - int(Steering * 0.453); // divide by 128 and times by 58
} else {
posLeft = 68 + int(Steering * -0.252); // divide by -127 and times by 32
}
posLeft = map(posLeft, 0, 180, 544, 2400);
leftServo.write(posLeft);
break;
}
}
}
The error messages I get are things like TCNT2 is not declared in this scope.
Does the problem lie in my trying to use these libraries with the A-Star 32U4? Or have I just somehow not put the libraries in the right place?
Thank you.
8
Using Arduino
/
Programming Questions
/
Re: how do i make motors run for a set amount of time
on: April 21, 2014, 05:15:45 pm
If you want your motors to turn on and stay on for 25 minutes, and then turn off and continue with your code, you can do something like this:
Code:
const unsigned X_AXIS_PIN = 0;
const unsigned Y_AXIS_PIN = 1;
const unsigned Z_AXIS_PIN = 2;
const int motor1 = 3;
const int motor2 = 5;
const int motor3 = 10;
int motorState = LOW;
unsigned long delay25minutes = 1500000UL; // 25 minutes = 1500 seconds = 1,500,000 ms
void setup()
{
Serial.begin(9600);
pinMode(motor1, OUTPUT);
pinMode(motor2, OUTPUT);
pinMode(motor3, OUTPUT);
}
int x,
y,
z;
void loop()
{
Serial.print(analogRead(X_AXIS_PIN));
Serial.print(" ");
x = analogRead(X_AXIS_PIN);
Serial.print(analogRead(Y_AXIS_PIN));
Serial.print(" ");
y = analogRead(Y_AXIS_PIN);
Serial.println(analogRead(Z_AXIS_PIN));
z = analogRead(Z_AXIS_PIN);
delay(5000);
if( x < 700 && y < 700 && z < 700)
{
if (motorState == LOW)
motorState = HIGH;
else
motorState = LOW ;
digitalWrite(motor1, motorState);
digitalWrite(motor2, motorState);
digitalWrite(motor3, motorState);
delay(delay25minutes);
motorState = LOW;
digitalWrite(motor1, motorState);
digitalWrite(motor2, motorState);
digitalWrite(motor3, motorState);
}
}
If you want your Arduino to be doing some other code during the 25 minutes, then you need the blink without delay approach.
9
Using Arduino
/
General Electronics
/
Re: Intermittent problem with battery -- pulling too much current or too much noise?
on: April 13, 2014, 04:38:32 pm
Quote from: Grumpy_Mike on April 13, 2014, 03:50:13 pm
Have you got good decoupling on your supply?
No, I had no decoupling on my supply. There are two USB ports on the battery. The line to the servos came out of one port while the line to the Raspberry Pi came out of the other. I'm not sure whether the two USB ports are decoupled inside the battery, but I added no decoupling.
To solve the problem without using an extra battery, I think I'll get a UBEC to power the servos off the 9.6 Volt traction battery.
10
Using Arduino
/
General Electronics
/
Intermittent problem with battery -- pulling too much current or too much noise?
on: April 13, 2014, 03:37:04 pm
I stripped down and replaced the electronics on a model car. Now I've got a Raspberry Pi on the car, which communicates with a laptop computer via a Wi-Fi adapter to get steering and throttle signals. (There are no brakes.) The Pi then sends a signal to an Arduino Uno which drives two Hitec HS-325HB servos to steer the front wheels (one servo for each wheel) and controls a Pololu Dual VNH5019 motor driver shield that drives the rear wheels.
I have two batteries on board. One battery is a 9.6 Volt nickel metal hydride battery that drives the rear wheels, through the motor drive shield. The other is a Duracell Powermat GoPower Longhaul backup battery, which is a 5 Volt, 8800 mAh lithium ion battery that says it can put out 2.1 Amps over 2 USB ports. I use the Duracell battery to power the Raspberry Pi through one USB port and the two servos through the other USB port. The Pi then powers the Arduino Uno and the Wi-Fi adapter through the Pi's USB port.
The problem is that the Raspberry Pi will sometimes reset while I'm driving the car. It usually happens after two or three minutes. The car will stop working, and it looks like the Pi reboots itself. (It's hard to tell exactly what the Pi is doing, since I am monitoring it on the laptop using Window's Remote Desktop Connection and the Wi-Fi connection simply goes dead.)
To see what causes the problem, I added a third battery. That battery, also a lithium ion 5V battery with a USB port, powers just the servo motors. So the servo motors are now powered by a separate battery from the Raspberry Pi. That seems to solve the problem. I can now run the car for many minutes with no problem.
To me, that indicates that the problem is one of two things: I'm pulling too much current from the battery so the Raspberry Pi resets, or I'm getting too much noise on the line so the Pi resets. If it is one of those problems, I wonder which one. I'm not sure how to tell.
As far as current drain, the servo specs say that the current drain at 4.8V is 7.4mA/idle and 160mA no load operating. With the two of them, that's maybe 500 mA max. The Raspberry Pi pulls less than 700 mA. The Wi-Fi adapter is a TP-LINK TL-WN823N 300Mbps Wireless Mini USB Adapter that seems to pull less than 100 mA. I think the Arduino Uno also pulls less than 100 mA. So I would think the current draw should be well below 2.1 Amps.
So the problem may be noise. I know that motors can make a power line very noisy, but could the line noise from two servos be enough to cause the Raspberry Pi to reset?
Any thoughts?
Thank you.
11
Using Arduino
/
Programming Questions
/
Re: Servo library and motor controller library fighting over pins 9 and 10
on: April 12, 2014, 05:06:45 pm
Robin2, thanks for suggesting ServoTimer2. I tried it out and it didn't work as a drop-in replacement for Servo.
Then I realized that ServoTimer2 takes microseconds of pulse width instead of degrees like Servo. So I had to write a line of code that would convert the degrees to microseconds:
Code:
posLeft = map(posLeft, 0, 180, 544, 2400);
where 0 degrees equals 544 microseconds and 180 degrees equals 2400 microseconds.
The code now works like a charm. I can now use my two servos (which steer independently the two front wheels of a toy car) and my motor shield (which drives the back wheels) at the same time. Thanks again.
12
Using Arduino
/
Programming Questions
/
Servo library and motor controller library fighting over pins 9 and 10 [solved]
on: April 12, 2014, 02:30:37 pm
I'm using an Arduino Uno with a Pololu Dual VNH5019 Motor Shield. The library for that motor shield uses pins 9 and 10 as PWM pins, one for each of the two motors it drives.
I'm also using the Uno to control two servo motors, using the Servo library. I found out that the Servo library disables the PWM function on pins 9 and 10, even though I am using pins 11 and 13 to control the two servos.
So both libraries are fighting over pins 9 and 10. The motor shield library lets me re-map all pins but 9 and 10. I can't figure out how to modify it to let me use other pins.
But the Servo library seems to suggest that timer pins other than pins 9 and 10 can be used. It seems that Timer1 uses pins 9 and 10, but Timer2 uses pins 3 and 11, which would work for me. So I want to have the Servo library use Timer2 instead of Timer1. So far, I have not figured out how to do that. Anyone know?
Thank you.
13
Using Arduino
/
General Electronics
/
Re: How can I find out why a small DC motor won't work?
on: April 06, 2014, 11:39:21 pm
I'll try to take apart the motor tomorrow to see what happened. I suspect the motor is toast, though, and destined for the garbage can.
I need to be more careful. Not only have I burned out a motor, but I seem to have burned out a 1 1/2" speaker as well. It was working fine (I added it to the car for a horn) until I suddenly saw a wisp of smoke. I seem to have fried it. Perhaps I needed a resistor on it.
14
Using Arduino
/
General Electronics
/
Re: How can I find out why a small DC motor won't work?
on: April 06, 2014, 04:55:50 pm
I see. When my motor is burned out, does that mean that the brushes are no longer in contact with the rotor? With this small a motor I suspect that is irreparable.
I ask because I'm surprised that this motor burned out, and wonder what caused it. These little motors seem to be indestructible in the various toys I've scavenged them from. I wonder if I abused it somehow. Maybe too high a voltage. Or too much current (stall current) for too long. Those are the two possibilities I've thought of.
15
Using Arduino
/
General Electronics
/
Re: How can I find out why a small DC motor won't work?
on: April 06, 2014, 02:20:13 pm
Quote from: Grumpy_Mike on April 06, 2014, 01:40:54 pm
That is very high, are you sure that is not just your skin resistance? Were you touching both terminals of the meter when you made the measurements?
I get varying results, but it's usually at least 80 k Ohms, and I am not touching the probes. It's funny, though: I can get the meter to settle on a resistance with the faulty motor, even though I get a slightly different reading every time. But with a similar motor that works fine, the meter will not settle on a resistance, but jumps around from 20 or so Ohms to 200 or 300 Ohms. Almost always less than 0.5 k Ohms, but I thought it would settle down to a small number.
Quote
It sounds like it is open circuit and quite burned out.
Yes, I'm afraid the motor is burned out. That's a shame, since I have no idea how to replace it. I wonder if the 9.6 Volts I used (which usually measured out at well over 10 Volts) was too high for this motor. Though it did work for quite a long time.
|
SMF © 2013, Simple Machines
Newsletter
©2014 Arduino
|
http://forum.arduino.cc/index.php?action=profile;u=17226;sa=showPosts
|
CC-MAIN-2014-41
|
refinedweb
| 3,219
| 72.16
|
Now let’s look at some examples of fetching HTTP pages and invoking CGI scripts and servlets from MIDlets using the connection framework.
Example 7-1 shows how to read the contents of a file referenced by a URL, using a StreamConnection. An HttpConnection can also be used, but since no HTTP-specific behavior is needed here, the StreamConnection is used. The application is very simple. The Connector.open( ) method opens a connection to the URL and returns a StreamConnection object. Then an InputStream is opened through which to read the contents of the file, one character at a time, until the end of the file (signaled by a character value of -1) is reached. In the event that an exception is thrown, both the stream and connection are closed.
Example 7-1. Fetching a page referenced by a URL
import java.io.*;
import javax.microedition.io.*;
import javax.microedition.lcdui.*;
import javax.microedition.midlet.*;

public class FetchPageMidlet extends MIDlet {
    private Display display;
    String url = "";

    public FetchPageMidlet( ) {
        display = Display.getDisplay(this);
    }

    /**
     * This will be invoked when we start the MIDlet
     */
    public void startApp( ) {
        try {
            getViaStreamConnection(url);
        } catch (IOException e) {
            // Handle exceptions any other way you like.
            System.out.println("IOException " + e);
            e.printStackTrace( );
        }
    }

    /**
     * Pause, discontinue ....
     */
    public void pauseApp( ) {
    }

    /**
     * Destroy must cleanup ...
|
https://www.oreilly.com/library/view/wireless-java/0596002432/ch07s04.html
|
CC-MAIN-2019-43
|
refinedweb
| 215
| 57.27
|
Type: Posts; User: hermann.rangamana
Hello,
The latest version on maven repo is 3.0.0-rc2
Could you fix please
thanks
Hermann
Hello,
I think the Component class would benefit from implementing the HasEnabled interface (there'd be no API change, since Component already has setEnabled(boolean)/isEnabled()). The benefit is that when i...
Hi Colin,
I'm not sure you understand my issue. Let me explain it in a simple example.
Consider you write an application that archives stock data (last quote, and change in % comparing to the...
Hi there,
I have a simple grid, with some editable columns. Autocommit is false. When i edit the column, i call a rpc which validate the data and then respond with a full object (that is,...
Thanks for your reply.
HR
Hi Colin,
Both names (repo1.maven.org and repo.maven.apache.org) direct me to 89.167.251.252.
HR
Thanks for your prompt reply!
Using the ip address, i can see the 3.0.0-rc version (@ ), so i guess it's a sync problem on some of their servers. I...
Absolutely sure.
I tried to get those file using my corporate access network or my home internet access, both fails. And when i try to list the content of the directory...
Guys,
NPE occurs in com.sencha.gxt.data.shared.Store.java, at line 126 (method isCurrentValue(M model))
public boolean isCurrentValue(M model) {
return (access.getValue(model) ==...
I encounter the same problem when building with maven using RC
From my pom.xml
<dependency>
<groupId>com.sencha.gxt</groupId>
<artifactId>gxt</artifactId>
...
Hello,
I am on GWT 3 beta 3.
I created a BorderLayoutContainer with collapsible panel on the west side. Here is the code snippet
westPanel = new ContentPanel();
centerPanel...
Thanks for your reply, steven.
Hermann
Thanks for your reply.
But what if I want my container to fill the whole browser window? I'd expected the same behavior as straight GWT panels, which automatically fill the browser window when I...
On GXT 3.0.0 beta 2, I have a simple CenterLayoutContainer with a single panel inside it (a copy-paste from the showcase, actually). But I was surprised: the content of my container is not centered,...
|
https://www.sencha.com/forum/search.php?s=ffb684085386d14f88446fc4e98c53a4&searchid=19067317
|
CC-MAIN-2017-13
|
refinedweb
| 387
| 61.22
|
#include <deal.II/base/thread_management.h>
Inherits mutex.
A class implementing a mutex. Mutexes are used to lock data structures to ensure that only a single thread of execution can access them at the same time.
This class is like
std::mutex in almost every regard and in fact is derived from
std::mutex. The only difference is that this class is copyable when
std::mutex is not. Indeed, when copied, the receiving object does not copy any state from the object being copied, i.e. an entirely new mutex is created. These semantics are consistent with the common use case if a mutex is used as a member variable to lock the other member variables of a class: in that case, the mutex of the copied-to object should only guard the members of the copied-to object, not the members of both the copied-to and copied-from object. Since at the time when the class is copied, the destination's member variable is not used yet, its corresponding mutex should also remain in its original state.
Definition at line 88 of file thread_management.h.
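Those copy semantics can be sketched in a few lines. CopyableMutex below is a stand-in written purely for illustration, not the actual deal.II class; it shows both the copy-creates-a-fresh-mutex behaviour and the std::lock_guard RAII style that replaces the deprecated ScopedLock:

```cpp
#include <cassert>
#include <mutex>

// Stand-in for Threads::Mutex: derives from std::mutex but is copyable.
// Copying deliberately copies no state; the copy is a brand-new mutex.
class CopyableMutex : public std::mutex
{
public:
  CopyableMutex() = default;
  CopyableMutex(const CopyableMutex &) : std::mutex() {}
  CopyableMutex &operator=(const CopyableMutex &) { return *this; }
};

int           counter = 0;
CopyableMutex counter_mutex;

void increment()
{
  // RAII locking, as recommended instead of the deprecated ScopedLock:
  // the mutex is released when `lock` goes out of scope, even on
  // exceptions.
  std::lock_guard<std::mutex> lock(counter_mutex);
  ++counter;
}
```

Locking the original and then copying it shows that the copy starts out unlocked, exactly as described above.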
This declaration introduces a scoped lock class. It is deprecated: use std::lock_guard<std::mutex> or any of the other, related classes offered by C++ instead.
When you declare an object of this type, you have to pass it a mutex, which is locked in the constructor of this class and unlocked in the destructor. The lock is thus held during the entire lifetime of this object, i.e. until the end of the present scope, which explains where the name comes from. This pattern of using locks with mutexes follows the resource-acquisition-is-initialization pattern, and was used first for mutexes by Doug Schmidt. It has the advantage that locking a mutex this way is thread-safe, i.e. when an exception is thrown between the locking and unlocking point, the destructor makes sure that the mutex is unlocked; this would not automatically be the case when you lock and unlock the mutex "by hand", i.e. using Mutex::acquire() and Mutex::release().
Definition at line 111 of file thread_management.h.
Default constructor.
Copy constructor. As discussed in this class's documentation, no state is copied from the object given as argument.
Definition at line 122 of file thread_management.h.
Copy operators. As discussed in this class's documentation, no state is copied from the object given as argument.
Definition at line 132 of file thread_management.h.
Acquire a mutex.
Definition at line 146 of file thread_management.h.
Release the mutex again.
Definition at line 159 of file thread_management.h.
https://www.dealii.org/current/doxygen/deal.II/classThreads_1_1Mutex.html
Istio adds support to services by deploying a special sidecar proxy to each of your application's Pods. The proxy intercepts all network communication between microservices and is configured and managed using Istio's control plane functionality.
This codelab shows you how to install and configure Istio on Kubernetes Engine, deploy an Istio-enabled multi-service application, and dynamically change request routing.
You need to make sure that you have the Kubernetes Engine API enabled:
gcloud services enable container.googleapis.com
Choose a region for your cluster using the following command:
gcloud compute regions list
Set your region to one from the above list. For example:
REGION=us-central1
To create a new cluster with Istio enabled with mutual TLS between sidecars enforced by default, run this command:
gcloud beta container clusters create hello-istio --project=$PROJECT_ID \
  --addons=Istio --istio-config=auth=MTLS_STRICT \
  --cluster-version=latest \
  --machine-type=n1-standard-2 \
  --num-nodes=4 \
  --region=$REGION
Wait a few moments while your cluster is set up for you. It will be visible in the Kubernetes Engine section of the Google Cloud Platform console.
Once the cluster is created, click on the "Connect" command, copy the command and run in Cloud Shell. This will make sure that kubectl is setup to access the cluster.
At the end of the cluster creation, there will be an istio-system namespace created along with the required RBAC permissions, and the five primary Istio control plane components deployed:
First, ensure the following Kubernetes services are deployed: istio-pilot, istio-ingressgateway, istio-egressgateway, istio-telemetry, istio-policy, istio-citadel, prometheus and istio-sidecar-injector.
kubectl get svc -n istio-system
Your output should look like this:
NAME                      TYPE          CLUSTER-IP     EXTERNAL-IP    PORT(S)
istio-citadel             ClusterIP     30.0.0.119     <none>         8060/TCP,9093/TCP
istio-egressgateway       ClusterIP     30.0.0.11      <none>         80/TCP,443/TCP
istio-ingressgateway      LoadBalancer  30.0.0.39      9.111.255.245  80:31380/TCP,443:31390/TCP,31400:31400/TCP
istio-pilot               ClusterIP     30.0.0.136     <none>         15003/TCP,15005/TCP,15007/TCP,15010/TCP,15011/TCP,8080/TCP,9093/TCP
istio-policy              ClusterIP     30.0.0.242     <none>         9091/TCP,15004/TCP,9093/TCP
istio-statsd-prom-bridge  ClusterIP     30.0.0.111     <none>         9102/TCP,9125/UDP
istio-telemetry           ClusterIP     30.0.0.246     <none>         9091/TCP,15004/TCP,9093/TCP,42422/TCP
prometheus                ClusterIP     30.0.0.253     <none>         9090/TCP
istio-sidecar-injector    ClusterIP     10.23.242.122  <none>         443/TCP
Next, make sure that the corresponding Kubernetes pods are deployed and all containers are up and running: pods named istio-* and istio-sidecar-injector-*.
kubectl get pods -n istio-system
When all the pods are running, you can proceed.
NAME                                      READY  STATUS
istio-citadel-7bdc7775c7-22dxq            1/1    Running
istio-egressgateway-78dd788b6d-ld4qx      1/1    Running
istio-ingressgateway-7dd84b68d6-smqbt     1/1    Running
istio-pilot-d5bbc5c59-sv6ml               2/2    Running
istio-policy-64595c6fff-sqbz7             2/2    Running
istio-sidecar-injector-dbd67c88d-4jxqj    1/1    Running
istio-statsd-prom-bridge-949999c4c-fbfzg  1/1    Running
istio-telemetry-cfb674b6c-kk98w           2/2    Running
prometheus-86cb6dd77c-z2tmq               1/1    Running
istio-pilot-2275554717-93c43              2/2    Running
Now that Istio is installed and verified, you can deploy one of the sample applications provided with the installation: BookInfo.
Let's first download the sample. The Istio release page offers download artifacts for several OSs. In our case, we can use a convenient command to download and extract a specific release:
curl -L | ISTIO_VERSION=1.0.0 sh -
The installation directory contains sample applications in samples/. You will find the source code and all the other files used in this example in your Istio samples/bookinfo directory.
There are 3 versions of the reviews microservice:
The end-to-end architecture of the application is thus:
First, have a look at the YAML which describes the bookinfo application:
less samples/bookinfo/platform/kube/bookinfo.yaml
Note how there are standard Deployments and Services to deploy the Bookinfo application and nothing Istio-specific here at all. To start making use of Istio functionality, no application changes are needed. When we configure and run the services, Envoy sidecars are injected automatically into each application Pod, here by way of the istio-injection label applied to the default namespace.
You can verify that the label was successfully applied:
kubectl get namespace -L istio-injection

NAME           STATUS   AGE   ISTIO-INJECTION
default        Active   34m   enabled
istio-system   Active   32m
kube-public    Active   34m
kube-system    Active   34m
Now we can simply deploy the services to the default namespace with
kubectl:
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
Look at one of the pods. You will see that it now contains a second container, the Istio sidecar, along with all of the necessary configuration:
kubectl get pod

NAME                             READY  STATUS   RESTARTS  AGE
details-v1-64b86cd49-jqq4g       2/2    Running  0         46s
productpage-v1-84f77f8747-6vg6l  0/2    Pending  0         45s
ratings-v1-5f46655b57-h4zfw      2/2    Running  0         46s
reviews-v1-ff6bdb95b-hqm89       2/2    Running  0         46s
reviews-v2-5799558d68-6wsz6      0/2    Pending  0         45s
reviews-v3-58ff7d665b-rjpbn      0/2    Pending  0         45s

kubectl describe pod details-v1-64b86cd49-jqq4g
...
To allow 'ingress' traffic to reach the mesh we need to create a 'Gateway' (to configure a load balancer) and a 'VirtualService' (which controls the forwarding of traffic from the gateway to our services). You can read more about gateways in the Istio documentation. To create the gateway:
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
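For reference, the Gateway that bookinfo-gateway.yaml creates looks roughly like the fragment below (a sketch: the field values shown here are illustrative rather than copied verbatim from the sample, so consult the file itself for the exact contents):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  # Use Istio's default ingress gateway implementation
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
```

The accompanying VirtualService in the same file then binds the application's routes to this gateway.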
Finally, confirm that the application has been deployed correctly by running the following commands:
kubectl get services
kubectl get pods
When all the pods have been created, you should see five services and six pods:
NAME         CLUSTER-IP  EXTERNAL-IP  PORT(S)
details      10.0.0.31   <none>       9080/TCP
kubernetes   10.0.0.1    <none>       443/TCP
productpage  10.0.0.120  <none>       9080/TCP
ratings      10.0.0.15   <none>       9080/TCP
reviews      10.0.0.170  <none>       9080/TCP

NAME                            READY  STATUS   RESTARTS
details-v1-1520924117-48z17     2/2    Running  0
productpage-v1-560495357-jk1lz  2/2    Running  0
ratings-v1-734492171-rnr5l      2/2    Running  0
reviews-v1-874083890-f0qf0      2/2    Running  0
reviews-v2-1343845940-b34q5     2/2    Running  0
reviews-v3-1813607990-8ch52     2/2    Running  0
Congratulations: you have deployed an Istio-enabled application. Next, let's see the application in use.
Now that it's deployed, let's see the BookInfo application in action. First, you need to get the external IP of the gateway:
kubectl get svc istio-ingressgateway -n istio-system
NAME                  TYPE          CLUSTER-IP    EXTERNAL-IP     PORT(S)
istio-ingressgateway  LoadBalancer  10.23.251.44  35.204.239.131  80:31380/TCP,443:31390/TCP,31400:31400/TCP
Copy the EXTERNAL-IP value and paste it into the GATEWAY_URL environment variable:
export GATEWAY_URL=<your gateway IP>
Once you have the address and port, check that the BookInfo app is running with curl:
curl -o /dev/null -s -w "%{http_code}\n" http://${GATEWAY_URL}/productpage
Check that you get an HTTP 200 response.
You can now point your browser to http://<your gateway IP>/productpage to view the BookInfo web page.
Refresh the page several times. Notice how you see three different versions of reviews shown in the product page? If you refer back to the diagram on the previous page, you will see we have three different book review services, which are called in a round-robin style - showing black stars, red stars, or no stars at all. This is the normal Kubernetes balancing behavior.
We can use Istio to do something different — to control which users are routed to which version of the services.
The BookInfo sample deploys three versions of the reviews microservice. When you accessed the application several times, you will have noticed that the output sometimes contains star ratings and sometimes it does not. This is because, without an explicit default version set, Istio will route requests to all available versions of a service in a round-robin fashion.
Routes control how requests are routed within an Istio service mesh. Requests can be routed based on the source and destination, HTTP paths and header fields, and weights associated with individual service versions. Before defining routes, apply the default destination rules (the mutual-TLS variant, since this cluster enforces mTLS):

kubectl apply -f samples/bookinfo/networking/destination-rule-all-mtls.yaml
destinationrule "productpage" created
destinationrule "reviews" created
destinationrule "ratings" created
destinationrule "details" created
First, let's add rules to make traffic go to v1 of each service.
Verify that you don't have any routes for the services yet apart from the one that allows the gateway to route to the top-level ‘productpage' service:
kubectl get virtualservices

NAME       AGE
bookinfo   2m
We will create a VirtualService for each microservice. A VirtualService defines the rules that control how requests for the service are routed. Each rule corresponds to one or more request destination hosts. In our case we are routing to other services within our mesh so we can use the internal mesh name (e.g. ‘reviews') as the host.
Here's how a rule can route all traffic for a reviews virtual service to Pods running v1 of that service, as identified by Kubernetes labels.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
The rule refers to a subset called v1, which is defined for the underlying reviews service instances as part of a DestinationRule:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
As can be seen above, a subset specifies one or more labels that identify version-specific instances. As the VirtualService above specifies the subset called v1, it will only send traffic to instances with the label "version: v1".
Bookinfo includes a sample with rules for all four services. Let's install it:
kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
Note that we used the ‘mtls' version of the file because we are running optional Istio auth components. The file includes tls traffic policies so that the communication between the Envoy sidecars for service to service traffic is encrypted. This all happens without changes to application code.
Confirm that the four routes were created; together with the earlier bookinfo gateway route there should be five in total. You can add -o yaml to view the actual configuration.
kubectl get virtualservices
Also check the corresponding DestinationRules and their subset definitions:
kubectl get destinationrules
Go back to the Bookinfo application in your browser. Refresh a few times. Do you see any stars? You should see the book review with no rating stars, as reviews:v1 does not access the ratings service.
As the mesh operates at Layer 7, we can use HTTP attributes (paths or cookies) to decide on how to route a request.
We can route certain users to a service by applying a regex to a header (e.g. cookie) like this:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        cookie:
          regex: "^(.*?;)?(user=jason)(;.*)?$"
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1
Create the route:
kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml
View it in the list, or add -o yaml to see the full output.
kubectl get virtualservices reviews
We now have a way to route some requests to the reviews:v2 service. Can you guess how? (Hint: no passwords are needed.) See how the page behaviour changes if you are logged in as no-one, 'jason', or 'kylie'.
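You can check the cookie regex from the rule above locally before relying on it, for instance with a quick Python snippet (routes_to_v2 is just an illustrative helper name, not part of Istio):

```python
import re

# The same match expression used in the VirtualService rule above.
pattern = re.compile(r"^(.*?;)?(user=jason)(;.*)?$")

def routes_to_v2(cookie_header):
    # True when Istio's header match would send the request to reviews:v2.
    return pattern.match(cookie_header) is not None

print(routes_to_v2("session=abc;user=jason"))  # True
print(routes_to_v2("user=kylie"))              # False
```

Note that the regex expects the cookie pairs to be separated by bare semicolons; a space after the semicolon would defeat the match.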
Once the v2 version has been canary tested to our satisfaction by jason or another subset of our users, we can use Istio to progressively send more and more traffic to our new service.
Let's try that by sending 50% of the traffic to v3 by using weight-based version routing. v3 of the service shows red stars. Replace the reviews route:
kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-50-v3.yaml
Confirm that the route was replaced:
kubectl get virtualservice reviews -o yaml

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 50
    - destination:
        host: reviews
        subset: v3
      weight: 50
The implementation of the routing in the Envoy proxy sidecar means that you may need to refresh your browser many times before seeing the results. With significant traffic there will be a 50% split. Send some extra traffic to the service like this:
watch -n 0.2 curl -o /dev/null -s -w "%{http_code}"
Now refresh the productpage in your browser and you should now see red colored star ratings about 50% of the time.
In a normal canary rollout you would want to use much smaller increments and then increase the amount of traffic gradually by progressively increasing the weighting for v3. Now let's send 100% of the traffic to v3:
kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-v3.yaml
Now when you refresh your browser you should see the red stars 100% of the time.
For now, let's clean up the routing rules (don't worry if you see an error):
kubectl delete -f samples/bookinfo/networking/virtual-service-all-v1.yaml
Congratulations; you've reached the end of the Istio 'Hello World'. For now, you can uninstall Istio and delete your cluster; watch this space, because very soon you will be able to continue straight to our Istio 201 codelab.
The Istio site contains guides and samples with fully working example uses for Istio that you can experiment with. These include:
Here's how to uninstall Istio.
kubectl delete -f samples/bookinfo/networking/bookinfo-gateway.yaml kubectl delete -f samples/bookinfo/platform/kube/bookinfo.yaml kubectl delete -f install/kubernetes/istio-demo-auth.yaml
In addition to uninstalling Istio, you can also delete the Kubernetes cluster created in the setup phase (to save on cost and to be a good Cloud citizen):
gcloud container clusters delete hello-istio
Note: If you get a "Not Found" error, it could be because you didn't set the region for your project in the gcloud command line tool. The cluster delete command with the --region option should work.
The following clusters will be deleted.
 - [hello-istio] in [us-central1-f]
Do you want to continue (Y/n)? Y
Deleting cluster hello-istio...done.
Deleted [].
https://codelabs.developers.google.com/codelabs/cloud-hello-istio/index.html?index=..%2F..springone
I have modified my own image by following your instructions:
1). Open the Terminal on your Raspberry Pi and clone the Dexter Industries GoPiGo Repository
git clone
2). A folder named GoPiGo should appear in your current working directory
3). Open it and go to the Setup Files directory (the quotes matter because of the space in the name)
cd "Setup Files"
4). The install script is called install.sh . Make it executable
sudo chmod +x install.sh
5). Run the script
./install.sh
When I attempt to run a script such as basic_robot.py, I get :
Traceback (most recent call last):
File "/home/pi/GoPiGo/Software/Python/Examples/Basic_Robot_Control_GUI/basic_robot_gui.py", line 45, in <module>
from gopigo import * #Has the basic functions for controlling the GoPiGo Robot
ImportError: No module named gopigo
I have read other posts and done the following:
try running the install script from here following the directions on the Readme
and
looks like the install script did not work properly for you. Can you try running it again sudo chmod +x install.sh and then sudo ./install.sh and then try again
However, I continue to get the error. Please advise.
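One way to see where Python is (or is not) finding the module is to ask the import machinery directly. A sketch, assuming Python 3; on the Python 2 images that shipped with older GoPiGo software, imp.find_module plays the same role:

```python
import importlib.util
import sys

def module_location(name):
    # Returns the file a module would be imported from, or None if the
    # import system cannot find it anywhere on sys.path.
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# If this prints None, the install script never copied gopigo.py into a
# directory on sys.path; printing sys.path shows where Python looked.
print(module_location("gopigo"))
print(sys.path)
```

If the location is None, re-running the install script (or copying gopigo.py into one of the sys.path directories) is the usual fix.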
https://forum.dexterindustries.com/t/no-module-named-gopigo/1309
Hello Devs,
I am going to explain how to remove non-ASCII characters from input text or content. Let's first get to know what non-ASCII characters are.
What are non ascii characters ?
You might have faced an issue while copy-pasting text from a document (docx) into an HTML input element or any editor. Sometimes the format of a symbol is not supported in a particular input area. For example, the double quote used in a docx file is different from the one in a code editor or HTML input element; see below 👇🏻
“Example Text”  - in a docx file
"Example Text"  - in an editor or HTML input element
When you try to bring docx-formatted text into HTML, those characters are treated as non-ASCII or junk characters. Generally they can be saved into the database, but sometimes encoding or signature calculation will throw an error because of the unsupported string. One real scenario I faced: calculating an AWS signature before passing a request to the API gateway. AWS's signature calculation mechanism removes those characters before computing the signature; if your code does not do the same, the two signatures will not match and the request fails.
How to solve this issue then ?
Below is Python script to remove those non ascii characters or junk characters.
Prerequisites:
- Python, any version (3.x recommended)
- The regular expression operations library (re), which ships with the Python standard library, so no pip install is needed
import re

ini_string = "'technews One lone dude awaits iPad 2 at Apple\x89Ûªs SXSW store"

# Keep only alphanumeric runs, joined with single spaces
res1 = " ".join(re.split("[^A-Za-z0-9]+", ini_string))
print(res1)

# Use re.search (not re.match) so non-ASCII characters are detected
# anywhere in the string, not only at the very start
if re.search("[^\t\r\n\x20-\x7E]+", ini_string):
    print("found")

# Replace every undecodable character with a placeholder
result = ini_string.encode().decode("ascii", "replace").replace(u"\ufffd", "`")
print(result)

# Or target the specific junk sequence directly (note: the trailing
# encode() returns bytes, not str)
result2 = ini_string.encode().decode("utf-8").replace(u"\x89Ûª", "`").encode("utf-8")
print(result2)
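If you simply want to drop, rather than replace, every non-ASCII character, a shorter approach than the script above is to round-trip the string through ASCII with the "ignore" error handler. A minimal sketch:

```python
def strip_non_ascii(text):
    # encode() with "ignore" drops every code point outside the 7-bit
    # ASCII range; decode() turns the bytes back into a str.
    return text.encode("ascii", "ignore").decode("ascii")

# Curly quotes pasted from a docx file disappear entirely:
print(strip_non_ascii("\u201cExample Text\u201d"))  # Example Text
```

This silently discards characters instead of marking them, so prefer the "replace" handler when you need to see where junk characters occurred.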
https://practicaldev-herokuapp-com.global.ssl.fastly.net/this-is-learning/removal-of-non-ascii-characters-using-python-8mn
There seems to be a difference in how stdout is buffered on Linux and Windows. Consider this script:

import time
for i in xrange(10):
    time.sleep(1)
    print "Working" ,

On Windows, Working is printed once per second. On Linux, nothing appears until the script finishes, when all the output arrives at once. A plain print "Working" (with its trailing newline) makes the output appear immediately on Linux too, as does running Python with the -u flag:

-u Force stdin, stdout and stderr to be totally unbuffered.

Why does -u affect stdout this way, and is stdout guaranteed to be buffered when redirected to a file?
Assuming you're talking about CPython (likely), this has to do with the behaviour of the underlying C implementations.
The ISO C standard mentions (C11 7.21.3 Files /3) three modes: unbuffered (characters appear as soon as they are written), fully buffered (characters are transmitted when the buffer fills), and line buffered (characters are transmitted when a newline is written).
There are other triggers that cause the characters to appear (such as buffer filling up even if no newline is output, requesting input under some circumstances, or closing the stream) but they're not important in the context of your question.
What is important is 7.21.3 Files /7 in that same standard:
As initially opened, the standard error stream is not fully buffered; the standard input and standard output streams are fully buffered if and only if the stream can be determined not to refer to an interactive device.
Note the wiggle room there. Standard output can either be line buffered or unbuffered unless the implementation knows for sure it's not an interactive device.
In this case (the console), it is an interactive device so the implementation is not permitted to use unbuffered. It is, however allowed to select either of the other two modes which is why you're seeing the difference.
Unbuffered output would see the messages appear as soon as you output them (a la your Windows behaviour). Line-buffered would delay until output of a newline character (your Linux behaviour).
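To see which mode an implementation is likely to pick, you can ask the stream whether it refers to an interactive device, which is exactly the determination the standard lets implementations make. A minimal sketch (guess_buffering is a hypothetical helper, not a standard API, and CPython's actual choice can vary by version and platform):

```python
import sys

def guess_buffering(stream):
    # Follows the C rule quoted above: if the stream refers to an
    # interactive device, the implementation may not use full buffering
    # (CPython picks line buffering for a tty); otherwise the stream is
    # typically fully buffered.
    try:
        interactive = stream.isatty()
    except (AttributeError, ValueError):
        interactive = False
    return "line-buffered" if interactive else "fully-buffered"

# On a console this prints "line-buffered"; redirected to a file or a
# pipe it prints "fully-buffered".
print(guess_buffering(sys.stdout))
```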
If you really want to ensure your messages are flushed regardless of mode, just flush them yourself:
import time, sys
for i in xrange(10):
    time.sleep(1)
    print "Working",
    sys.stdout.flush()
print
In terms of guaranteeing that output will be buffered when redirecting to a file, that would be covered in the quotes from the standard I've already shown. If the stream can be determined to be using a non-interactive device, it will be fully buffered. That's not an absolute guarantee since it doesn't state how that's determined but I'd be surprised if any implementation couldn't figure that out.
In any case, you can test specific implementations just by redirecting the output and monitoring the file to see if it flushes once per output or at the end.
https://codedump.io/share/osu2whvxhhsv/1/difference-in-buffering-of-stdout-on-linux-and-windows
Changes & Migration from Kodein-DI 6 to Kodein-DI 7
This guide will help you migrate from Kodein-DI 6 to Kodein-DI 7+.
The first part explains the changes and their reasoning.
The second part gives steps to follow to port your code to Kodein-DI 7+.
Changes
New type system
As Kotlin evolves, we are trying to keep the pace. So, we have introduced a new type system to empower Kodein-DI and other libraries, using new features of Kotlin. You can find it on Github.
It gives us the ability to handle generic types on all platforms, including Kotlin/Native and Kotlin/JS, by using the inline function typeOf(). For the JVM, the mechanism remains the same as before, using reflection.
We still have both implementations, with generic and erased types, but by default Kodein-DI uses the generic version. This means that Native and JS targets are now compatible with generics, so bind<List<String>>() and bind<List<Int>>() will represent two different bindings.
New modules
This new type system has allowed us to simplify the core libraries.
Before 7.0 you had to use kodein-di-core and either kodein-di-erased (for all platforms) or kodein-di-generic-jvm (for the JVM / Android).
Now you just need to import one single module to get the core features of Kodein-DI:
implementation("org.kodein.di:kodein-di:7.5.0")
Deprecated packages: erased / generic
Another side effect of the new type system: there are no longer many differences between the erased and generic implementations, as generic is now the default in Kodein-DI.
So, the whole packages org.kodein.di.erased and org.kodein.di.generic have been deprecated.
Implementations are now in the root package org.kodein.di and, as said before, are based on the generic counterpart of the type system.
Class renaming: Kodein → DI
New libraries have been released under the Kodein Framework, like Kodein-DB.
Thus, we thought it was legitimate to change the Kodein part of our class names in favor of a more explicit one: DI.
Basically, every class that was named after Kodein has been renamed with DI, like KodeinAware → DIAware.
Migration
This section aims to guide you through the migration to Kodein-DI 7.0+.
New modules
Module names have changed, and so has their number. If you want to use the core features of Kodein-DI, you no longer have to import multiple modules.
Just grab the new kodein-di module! If you are using frameworks, you will find more details down below.
If you have enabled the use of Gradle Metadata with enableFeaturePreview("GRADLE_METADATA"), or are on Gradle 6+, just use this import:
implementation("org.kodein.di:kodein-di:7.5.0")
Otherwise, use the specific import, depending on your target:
implementation("org.kodein.di:kodein-di-jvm:7.5.0")
implementation("org.kodein.di:kodein-di-js:7.5.0")
implementation("org.kodein.di:kodein-di-linuxx64:7.5.0")
implementation("org.kodein.di:kodein-di-macosx64:7.5.0")
implementation("org.kodein.di:kodein-di-mingwx64:7.5.0")
// ...
Deprecated packages: erased / generic
The packages org.kodein.di.erased and org.kodein.di.generic are fully deprecated.
Everything that was inside those packages has been moved into org.kodein.di.
You should get a replace action from IDEA for most of those implementations. But if, for some reason, IDEA does not provide you this action, just remove those packages from your imports.
Class renaming: Kodein → DI
Here comes the tricky part of the migration. Our internals use companion objects and nested classes, so we could not just duplicate them and deprecate the old ones. We had to use some tricks, like typealiases. Doing so, we may have broken some (old) APIs, and the migration from 6.0 to 7.0 made those changes mandatory.
Sometimes IDEA will provide you with some nice actions like replacing all the occurrences automatically, but sometimes you may have to do this manually. Here is a bunch of use cases that you may encounter:
Kodein interface
When migrating to Kodein-DI 7+, you will quickly notice some compile / deprecation errors.
The first one might be on the most important type in Kodein-DI: Kodein.
Sometimes, IntelliJ IDEA will encourage you to refactor your code with some actions, with Alt+Enter / Cmd+Enter.
KodeinAware interface
The second most important type in Kodein-DI is KodeinAware, which we will need to refactor to DIAware.
If you were using KodeinAware in your projects, you might end up with something like this:
First, replace KodeinAware with DIAware (Alt+Enter / Cmd+Enter is your best friend here).
After that, you will need to make changes to the class that implements DIAware.
Because we also renamed the properties of KodeinAware, you might have some errors on the following properties.
Unfortunately, IntelliJ IDEA won't help you migrate those properties; you will have to do it manually.
The same manipulation goes for kodeinContext and kodeinTrigger.
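Put together, a migrated class might look like this (Controller and Service are hypothetical names used only for illustration; the Kodein types and package names are the real ones):

```kotlin
// Before, with Kodein-DI 6:
// import org.kodein.di.Kodein
// import org.kodein.di.KodeinAware
// import org.kodein.di.generic.instance
//
// class Controller(override val kodein: Kodein) : KodeinAware {
//     private val service: Service by instance()
// }

// After, with Kodein-DI 7:
import org.kodein.di.DI
import org.kodein.di.DIAware
import org.kodein.di.instance

class Controller(override val di: DI) : DIAware {
    private val service: Service by instance()
}
```

Note that both the type names (Kodein → DI, KodeinAware → DIAware), the overridden property name (kodein → di), and the import package (org.kodein.di.generic → org.kodein.di) change.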
Binding & retrieval
Binding and retrieval are really easy, as they have just been deprecated and moved to org.kodein.di.
Just re-import the right package!
What about your favorite framework?
Each one of the framework modules relies heavily on the core library of Kodein-DI, kodein-di.
So, there is not much migration here, mostly extension functions to easily access the DI container.
You will find the table of correspondences for each framework right below.
Android
Importing the Android modules of Kodein-DI is now easier. You don't need to choose between erased and generic anymore.
A simple Gradle dependency will do :)
implementation("org.kodein.di:kodein-di-framework-android-core:7.5.0") // OR implementation("org.kodein.di:kodein-di-framework-android-support:7.5.0") // OR implementation("org.kodein.di:kodein-di-framework-android-x:7.5.0")
Here is the table of all the correspondences, for the public classes / functions, by module:
https://docs.kodein.org/kodein-di/7.5/migration/migration-6to7.html
sem_unlink - remove a named semaphore (REALTIME)
#include <semaphore.h>

int sem_unlink(const char *name);
The sem_unlink() function removes the semaphore named by the string name. If the semaphore named by name is currently referenced by other processes, destruction of the semaphore is postponed until all references have been destroyed; calls to sem_open() to re-create or re-connect to the semaphore refer to a new semaphore after sem_unlink() is called. The sem_unlink() call does not block until all references have been destroyed; it returns immediately.
Upon successful completion, the function returns a value of 0. Otherwise, the semaphore is not changed and the function returns a value of -1 and sets errno to indicate the error.
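A small usage sketch (the semaphore name /demo_sem is arbitrary): the name can be unlinked immediately after opening, and the already-open descriptor keeps working until sem_close() destroys the last reference.

```c
#include <fcntl.h>
#include <semaphore.h>

/* Open a named semaphore, unlink its name right away, and check that
   the already-open descriptor still works. Returns 0 on success. */
int demo_unlink(const char *name)
{
    sem_t *sem = sem_open(name, O_CREAT, 0644, 1);
    if (sem == SEM_FAILED)
        return -1;
    if (sem_unlink(name) != 0) {    /* the name is removed immediately */
        sem_close(sem);
        return -1;
    }
    if (sem_wait(sem) != 0 || sem_post(sem) != 0) {
        sem_close(sem);
        return -1;
    }
    return sem_close(sem);          /* last reference destroyed here */
}
```

This open-then-unlink pattern is a common idiom for getting automatic cleanup: once every process closes the semaphore, it disappears without any explicit removal step.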
The sem_unlink() function will fail if:
- [EACCES]
- Permission is denied to unlink the named semaphore.
- [ENAMETOOLONG]
- The length of the name string exceeds {NAME_MAX} while {POSIX_NO_TRUNC} is in effect.
- [ENOENT]
- The named semaphore does not exist.
- [ENOSYS]
- The function sem_unlink() is not supported by this implementation.
None.
None.
None.
semctl(), semget(), semop(), sem_close(), sem_open(), <semaphore.h>.
Derived from the POSIX Realtime Extension (1003.1b-1993/1003.1i-1995)
http://pubs.opengroup.org/onlinepubs/7908799/xsh/sem_unlink.html
I have a button component that has a single image and some color filters to make it change colors for the different states and I need a way to change the image source at runtime based on some data. Is this possible? If not, do I really have to make a custom component just to do this? Because that seems like way too much work for something so simple.
That custom component that is "way too much work" probably would have taken the same amount of time to create as you spent creating this thread.
package {
    import mx.binding.utils.BindingUtils;
    import spark.components.Button;
    import spark.components.Image;

    public class MyButton extends Button {
        public function MyButton() {
            super();
        }

        [SkinPart]
        public var imageDisplay:Image;

        [Bindable]
        public var icon:Object;

        override protected function partAdded(partName:String, instance:Object):void {
            super.partAdded(partName, instance);
            if (instance == this.imageDisplay) {
                BindingUtils.bindProperty(this.imageDisplay, "source", this, "icon");
            }
        }
    }
}
Now just assign "imageDisplay" as the id of the image in your skin and you're done.
That wasn't so much work now, was it?
Sweet, thanks.
https://forums.adobe.com/thread/855950
Question is best asked in video.
Sorry can’t upload attachments (whomp) #newuserprobs
Welcome to the community!
The first thing: your string input of "Phase Created" has a carriage return at the end. Drop that to get the phase value. Parameters are case-sensitive and the name needs to be the same.
Once you do that, you can get the phase name from the GetParameterValueByName output and from your select phase node. Then you can compare the names.
I kind of mocked it up here, if it is still unclear you should be able to upload files now.
Hi, here is a simple solution for your task
import clr
clr.AddReference('ProtoGeometry')
from Autodesk.DesignScript.Geometry import *
clr.AddReference('RevitAPI')
from Autodesk.Revit.DB import BuiltInParameter

elements = []
for i in IN[0]:
    phase = UnwrapElement(i).get_Parameter(BuiltInParameter.PHASE_CREATED).AsValueString()
    if phase == IN[1].Name:
        elements.append(i)
OUT = elements
The first string was wrong so good call.
However, I'm still getting results [false, false] where it should be [true, false]; as you can see in the photo, the highlighted wall's Phase Created is 01 - Existing, which is the phase in the "select phase" node.
You still need to add the Element.Name node to the graph like I indicated in my screenshot. That way you are comparing two string (text) values.
Got it: it was failing because I wasn't comparing two of the same data types.
Hi All, there is a video about filtering elements by phase.
Dynamo - Filter Elements by Phase
Hope this is helpful!
https://forum.dynamobim.com/t/filtering-elements-by-phase/37139
C++11 introduced a pretty nice change to enum types in C++: the scoped enumeration. They mostly supersede the old unscoped enumeration, which was inherited from C and had a few shortcomings. For example, the names in the enumeration were added to its parent scope. This means that given an enum colors {red, green, blue}; you can simply say auto my_color = red;. This can, of course, lead to ambiguities and people using some weird workarounds like putting the enums in namespaces or prefixing all elements à la Hungarian notation. Also, unscoped enumerations are not particularly type-safe: they can be converted to integer types and back without any special consideration, so you can write things like int x = red; without the compiler complaining.
Scoped enumerations improve both of these aspects: with
enum class colors {red, green, blue};, you have to use
auto my_color = colors::red;, and
int x = colors::red; will simply not compile.
To get the second part to compile, you need to insert a static_cast:
int x = static_cast<int>(colors::red);, which is purposefully a lot more verbose. Now this is a bit of a blessing and a curse. Of course, it is a lot more type-safe, but it makes one really common usage pattern with enums very cumbersome: bit flags.
Did this get worse?
While you could previously use the bit operators to combine different bitmasks defined as enums, scoped enumerations will only let you do that if you cast them first. In other words, type-safety prevents us from combining flags because the result might, of course, no longer be a valid enum.
However, we can still get the convenience and compactness of bit flags with a type that represents combinations of bitmasks from a specific enum type. Oh, this reeks of a template. I give you
scoped_flags, which you can use like this:
enum class window_flags {
    has_border  = 1 << 0,
    has_caption = 1 << 1,
    is_child    = 1 << 2,
    /* ... */
};

void create_window(scoped_flags<window_flags> flags);

int main() {
    create_window({window_flags::has_border, window_flags::has_caption});
}

scoped_flags<window_flags> something = /* ... */;

// Check a flag
bool is_set = something.test(window_flags::is_child);

// Remove a flag
auto no_border = something.without(window_flags::has_border);

// Add a flag
auto with_border = something.with(window_flags::has_border);
Current implementation
You can find my current implementation on this github gist. Even in its current state, I find it a nifty little utility class that makes unscoped enumerations all but legacy code.
I opted not to replicate the bitwise operator syntax, because
&~ for “without” is so ugly, and
~ alone makes little sense. A non-explicit single-argument constructor makes usage with a single flag as convenient as the old C-style variant, while the list construction is just a tiny bit more complicated.
The implementation is not complete or final yet; for example without is missing an overload that gets a list of flags. After my previous adventures with initializer_lists, I’m also not entirely sure whether std::initializer_list should be used anywhere but in the c’tor. And maybe CTAD could make it more comfortable? Of course, everything here can be
constexpr‘fied. Do you think this is a useful abstraction? Any ideas for improvements? Do tell!
Name | Synopsis | Description | Return Values | Errors | Attributes | See Also
#include <stdlib.h>

size_t wcstombs(char *restrict s, const wchar_t *restrict pwcs, size_t n);
The wcstombs() function converts the sequence of wide-character codes from the array pointed to by pwcs into a sequence of characters and stores these characters into the array pointed to by s, stopping if a character would exceed the limit of n total bytes or if a null byte is stored. Each wide-character code is converted as if by a call to wctomb(3C).
The behavior of this function is affected by the LC_CTYPE category of the current locale.
No more than n bytes will be modified in the array pointed to by s. If copying takes place between objects that overlap, the behavior is undefined. If s is a null pointer, wcstombs() returns the length required to convert the entire array regardless of the value of n, but no values are stored.
The wcstombs() function will fail if:
EILSEQ: A wide-character code does not correspond to a valid character.
See attributes(5) for descriptions of the following attributes:
mblen(3C), mbstowcs(3C), mbtowc(3C), setlocale(3C), wctomb(3C), attributes(5), standards(5)
Interface definition for property list io. More...
#include <simgear/compiler.h>
#include <simgear/props/props.hxx>
#include <stdio.h>
#include <string>
#include <vector>
#include <map>
#include <iosfwd>
Interface definition for property list io.
Started Fall 2000 by David Megginson, david@megginson.com This code is released into the Public Domain.
See props.html for documentation [replace with URL when available].
Definition in file props_io.hxx.
Copy properties from one node to another.
Definition at line 628 of file props_io.cxx.
Read properties from an in-memory buffer.
Definition at line 395 of file props_io.cxx.
What Readers Are Saying about
Practical Programming.
➤ Philip Guo
Creator of Online Python Tutor (), Assistant Professor, Department.
➤ Kathleen Freeman
Director of Undergraduate Studies, Department of Computer and Information
Science, University of Oregon
Practical Programming, Third Edition
An Introduction to Computer Science Using Python 3.6
Paul Gries
Jennifer Campbell
Jason Montojo
Development Editor: Tammy Coron
Indexing: Potomac Indexing
Copy Editor: Liz Welch
Layout: Gilson Graphics
For sales, volume licensing, and support, please contact support@pragprog.com.
For international rights, please contact rights@pragprog.com.
ISBN-13: 978-1-6805026-8-8
Encoded using the finest acid-free high-entropy binary digits.
Book version: P1.0—December 2017
Contents
Acknowledgments . . . . . . . . . . . xi
Preface . . . . . . . . . . . . . . xiii
1. What’s Programming? . . . . . . . . . . 1
Programs and Programming 2
What’s a Programming Language? 3
What’s a Bug? 4
The Difference Between Brackets, Braces, and Parentheses 5
Installing Python 5
2. Hello, Python . . . . . . . . . . . . . 7
How Does a Computer Run a Python Program? 7
Expressions and Values: Arithmetic in Python 9
What Is a Type? 12
Variables and Computer Memory: Remembering Values 15
How Python Tells You Something Went Wrong 22
A Single Statement That Spans Multiple Lines 23
Describing Code 25
Making Code Readable 26
The Object of This Chapter 27
Exercises 27
3. Designing and Using Functions . . . . . . . . 31
Functions That Python Provides 31
Memory Addresses: How Python Keeps Track of Values 34
Defining Our Own Functions 35
Using Local Variables for Temporary Storage 39
Tracing Function Calls in the Memory Model 40
Designing New Functions: A Recipe 47
Writing and Running a Program 58
Omitting a return Statement: None 60
Dealing with Situations That Your Code Doesn’t Handle 61
What Did You Call That? 62
Exercises 63
4. Working with Text . . . . . . . . . . . 65
Creating Strings of Characters 65
Using Special Characters in Strings 68
Creating a Multiline String 70
Printing Information 70
Getting Information from the Keyboard 73
Quotes About Strings 74
Exercises 75
5. Making Choices . . . . . . . . . . . . 77
A Boolean Type 77
Choosing Which Statements to Execute 86
Nested if Statements 92
Remembering Results of a Boolean Expression Evaluation 92
You Learned About Booleans: True or False? 94
Exercises 94
6. A Modular Approach to Program Organization . . . . 99
Importing Modules 100
Defining Your Own Modules 104
Testing Your Code Semiautomatically 110
Tips for Grouping Your Functions 112
Organizing Our Thoughts 113
Exercises 113
7. Using Methods . . . . . . . . . . . . 115
Modules, Classes, and Methods 115
Calling Methods the Object-Oriented Way 117
Exploring String Methods 119
What Are Those Underscores? 123
A Methodical Review 125
Exercises 126
8. Storing Collections of Data Using Lists . . . . . . 129
Storing and Accessing Data in Lists 129
Type Annotations for Lists 133
Modifying Lists 133
Contents • vi
Operations on Lists 135
Slicing Lists 137
Aliasing: What’s in a Name? 139
List Methods 141
Working with a List of Lists 142
A Summary List 145
Exercises 145
9. Repeating Code Using Loops . . . . . . . . 149
Processing Items in a List 149
Processing Characters in Strings 151
Looping Over a Range of Numbers 152
Processing Lists Using Indices 154
Nesting Loops in Loops 156
Looping Until a Condition Is Reached 160
Repetition Based on User Input 162
Controlling Loops Using break and continue 163
Repeating What You’ve Learned 167
Exercises 168
10. Reading and Writing Files . . . . . . . . . 173
What Kinds of Files Are There? 173
Opening a File 175
Techniques for Reading Files 179
Files over the Internet 183
Writing Files 185
Writing Example Calls Using StringIO 186
Writing Algorithms That Use the File-Reading Techniques 188
Multiline Records 195
Looking Ahead 198
Notes to File Away 200
Exercises 201
11. Storing Data Using Other Collection Types . . . . . 203
Storing Data Using Sets 203
Storing Data Using Tuples 209
Storing Data Using Dictionaries 214
Inverting a Dictionary 222
Using the in Operator on Tuples, Sets, and Dictionaries 223
Comparing Collections 224
Creating New Type Annotations 224
A Collection of New Information 226
Exercises 226
12. Designing Algorithms . . . . . . . . . . 229
Searching for the Two Smallest Values 230
Timing the Functions 238
At a Minimum, You Saw This 240
Exercises 240
13. Searching and Sorting . . . . . . . . . . 243
Searching a List 243
Binary Search 250
Sorting 256
More Efficient Sorting Algorithms 265
Merge Sort: A Faster Sorting Algorithm 266
Sorting Out What You Learned 270
Exercises 272
14. Object-Oriented Programming . . . . . . . . 275
Understanding a Problem Domain 276
Function isinstance, Class object, and Class Book 277
Writing a Method in Class Book 280
Plugging into Python Syntax: More Special Methods 285
A Little Bit of OO Theory 288
A Case Study: Molecules, Atoms, and PDB Files 293
Classifying What You’ve Learned 297
Exercises 298
15. Testing and Debugging . . . . . . . . . . 303
Why Do You Need to Test? 303
Case Study: Testing above_freezing 304
Case Study: Testing running_sum 309
Choosing Test Cases 315
Hunting Bugs 316
Bugs We’ve Put in Your Ear 317
Exercises 317
16. Creating Graphical User Interfaces . . . . . . . 321
Using Module tkinter 321
Building a Basic GUI 323
Models, Views, and Controllers, Oh My! 327
Customizing the Visual Style 331
Introducing a Few More Widgets 335
Object-Oriented GUIs 338
Keeping the Concepts from Being a GUI Mess 339
Exercises 340
17. Databases . . . . . . . . . . . . . 343
Overview 343
Creating and Populating 344
Retrieving Data 348
Updating and Deleting 351
Using NULL for Missing Data 352
Using Joins to Combine Tables 353
Keys and Constraints 357
Advanced Features 358
Some Data Based On What You Learned 364
Exercises 365
Bibliography . . . . . . . . . . . . 369
Index . . . . . . . . . . . . . . 371
Acknowledgments
This book would be confusing and riddled with errors if it weren’t for a bunch
of awesome people who patiently and carefully read our drafts.
We had a great team of people provide technical reviews for this edition and
previous editions: in no particular order, Frank Ruiz, Stefan Turalski, Stephen
Wolff, Peter W.A. Wood, Steve Wolfman, Adam Foster, Owen Nelson, Arturo
Martínez Peguero, C. Keith Ray, Michael Szamosi, David Gries, Peter Beens,
Edward Branley, Paul Holbrook, Kristie Jolliffe, Mike Riley, Sean Stickle, Tim
Ottinger, Bill Dudney, Dan Zingaro, and Justin Stanley. We also appreciate
all the people who reported errata: your feedback was invaluable.
Greg Wilson started us on this journey when he proposed that we write a
textbook, and he was our guide and mentor as we worked together to create
the first edition of this book.
Finally, we would like to thank our editor Tammy Coron, who set up a workflow
that made the tight timeline possible. Tammy, your gentle nudges kept us on
track (squirrel!) and helped us complete this third edition in record time.
report erratum • discuss
Preface
This book uses the Python programming language to teach introductory
computer science topics and a handful of useful applications. You’ll certainly
learn a fair amount of Python as you work through this book, but along the
way you’ll also learn about issues that every programmer needs to know:
ways to approach a problem and break it down into parts, how and why to
document your code, how to test your code to help ensure your program does
what you want it to, and more.
We chose Python for several reasons:
• It is free and well documented. In fact, Python is one of the largest and
best-organized open source projects going.
• It runs everywhere. The reference implementation, written in C, is used
on everything from cell phones to supercomputers, and it’s supported by
professional-quality installers for Windows, macOS, and Linux.
• It has a clean syntax. Yes, every language makes this claim, but during
the several years that we have been using it at the University of Toronto,
we have found that students make noticeably fewer “punctuation” mistakes
with Python than with C-like languages.
• It is relevant. Thousands of companies use it every day: it is one of the
languages used at Google, Industrial Light & Magic uses it extensively,
and large portions of the game EVE Online are written in Python. It is
also widely used by academic research groups.
• It is well supported by tools. Legacy editors like vi and Emacs all have
Python editing modes, and several professional-quality IDEs are available.
(We use IDLE, the free development environment that comes with a
standard Python installation.)
Our Approach
We have organized the book into two parts. The first covers fundamental pro-
gramming ideas: how to store and manipulate information (numbers, text, lists,
sets, dictionaries, and files), how to control the flow of execution (conditionals
and loops), how to organize code (functions and modules), how to ensure your
code works (testing and debugging), and how to plan your program (algorithms).
The second part of the book consists of more or less independent chapters
on more advanced topics that assume all the basic material has been covered.
The first of these chapters shows how to create and manage your own types
of information. It introduces object-oriented concepts such as encapsulation,
inheritance, and polymorphism. The other chapters cover testing, databases,
and graphical user interface construction.
Further Reading
Lots of other good books on Python programming exist. Some are accessible
to novices, such as Introduction to Computing and Programming in Python: A
Multimedia Approach [GE13] and Python Programming: An Introduction to
Computer Science [Zel03]; others are for anyone with any previous programming
experience (How to Think Like a Computer Scientist: Learning with Python
[DEM02], Object-Oriented Programming in Python [GL07], and Learning Python
[Lut13]). You may also want to take a look at Python Education Special Interest
Group (EDU-SIG) [Pyt11], the special interest group for educators using Python.
Python Resources
Information about a variety of Python books and other resources is available at.
After you have a good grasp of programming in Python, we recommend that
you learn a second programming language. There are many possibilities, such
as well-known languages like C, Java, C#, and Ruby. Python is similar in
concept to those languages. However, you will likely learn more and become
a better programmer if you learn a programming language that requires a
different mindset, such as Racket,1 Erlang,2 or Haskell.3 In any case, we
strongly recommend learning a second programming language.
1. See.
2. See.
3. See.
What You’ll See
In this book, we’ll do the following:
• We’ll show you how to develop and use programs that solve real-world
problems. Most of the examples will come from science and engineering,
but the ideas can be applied to any domain.
• We’ll start by teaching you the core features of Python. These features
are included in most modern programming languages, so you can use
what you learn no matter what you work on next.
• We’ll also teach you how to think methodically about programming. In
particular, we will show you how to break complex problems into simple
ones and how to combine the solutions to those simpler problems to create
complete applications.
• Finally, we’ll introduce some tools that will help make your programming
more productive, as well as some others that will help your applications
cope with larger problems.
Online Resources
All the source code, errata, discussion forums, installation instructions, and
exercise solutions are available at.
CHAPTER 1
What’s Programming?
(Photo credit: NASA/Goddard Space Flight Center Scientific Visualization Studio)
Take a look at the pictures above. The first one shows forest cover in the
Amazon basin in 1975. The second one shows the same area twenty-six years
later. Anyone can see that much of the rainforest has been destroyed, but
how much is “much”?
Now look at this:
(Photo credit: CDC)
Are these blood cells healthy? Do any of them show signs of leukemia? It
would take an expert doctor a few minutes to tell. Multiply those minutes by
the number of people who need to be screened. There simply aren’t enough
human doctors in the world to check everyone.
This is where computers come in. Computer programs can measure the dif-
ferences between two pictures and count the number of oddly shaped platelets
in a blood sample. Geneticists use programs to analyze gene sequences;
statisticians, to analyze the spread of diseases; geologists, to predict the effects
of earthquakes; economists, to analyze fluctuations in the stock market; and
climatologists, to study global warming. More and more scientists are writing
programs to help them do their work. In turn, those programs are making
entirely new kinds of science possible.
Of course, computers are good for a lot more than just science. We used
computers to write this book. Your smartphone is a pretty powerful computer;
you’ve probably used one today to chat with friends, check your lecture notes,
or look for a restaurant that serves pizza and Chinese food. Every day,
someone figures out how to make a computer do something that has never
been done before. Together, those “somethings” are changing the world.
This book will teach you how to make computers do what you want them to
do. You may be planning to be a doctor, a linguist, or a physicist rather than
a full-time programmer, but whatever you do, being able to program is as
important as being able to write a letter or do basic arithmetic.
We begin in this chapter by explaining what programs and programming are.
We then define a few terms and present some useful bits of information for
course instructors.
Programs and Programming
A program is a set of instructions. When you write down directions to your
house for a friend, you are writing a program. Your friend “executes” that
program by following each instruction in turn.
Every program is written in terms of a few basic operations that its reader already
understands. For example, the set of operations that your friend can understand
might include the following: “Turn left at Darwin Street,” “Go forward three
blocks,” and “If you get to the gas station, turn around—you’ve gone too far.”
Computers are similar but have a different set of operations. Some operations
are mathematical, like “Take the square root of a number,” whereas others
include “Read a line from the file named data.txt” and “Make a pixel blue.”
The most important difference between a computer and an old-fashioned
calculator is that you can “teach” a computer new operations by defining
them in terms of old ones. For example, you can teach the computer that
“Take the average” means “Add up the numbers in a sequence and divide by
the sequence’s size.” You can then use the operations you have just defined
to create still more operations, each layered on top of the ones that came
before. It’s a lot like creating life by putting atoms together to make proteins
and then combining proteins to build cells, combining cells to make organs,
and combining organs to make a creature.
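The “take the average” operation described above can be sketched directly in Python, the language this book teaches (the function name and the sample list are our own choices, not the book's):

```python
def average(numbers):
    """Add up the numbers in a sequence and divide by the sequence's size."""
    return sum(numbers) / len(numbers)

print(average([1, 2, 3, 4]))  # 2.5
```

Once average is defined, it can be combined with other operations just like the ones Python provides, which is exactly the layering the paragraph above describes.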
Defining new operations and combining them to do useful things is the heart
and soul of programming. It is also a tremendously powerful way to think
about other kinds of problems. As Professor Jeannette Wing wrote in
Computational Thinking [Win06], computational thinking is about the following:
• Conceptualizing, not programming. Computer science isn’t computer pro-
gramming. Thinking like a computer scientist means more than being
able to program a computer: it requires thinking at multiple levels of
abstraction.
• A way that humans, not computers, think. Computational thinking is a
way humans solve problems; it isn’t trying to get humans to think like
computers. Computers are dull and boring; humans are clever and
imaginative. We humans make computers exciting. Equipped with com-
puting devices, we use our cleverness to tackle problems we wouldn’t dare
take on before the age of computing and build systems with functionality
limited only by our imaginations.
• For everyone, everywhere. Computational thinking will be a reality when
it becomes so integral to human endeavors it disappears as an explicit
philosophy.
We hope that by the time you have finished reading this book, you will see
the world in a slightly different way.
What’s a Programming Language?
Directions to the nearest bus station can be given in English, Portuguese,
Mandarin, Hindi, and many other languages. As long as the people you’re
talking to understand the language, they’ll get to the bus station.
In the same way, there are many programming languages, and they all can
add numbers, read information from files, and make user interfaces with
windows and buttons and scroll bars. The instructions look different, but
they accomplish the same task. For example, in the Python programming
language, here’s how you add 3 and 4:
3 + 4
But here’s how it’s done in the Scheme programming language:
(+ 3 4)
They both express the same idea—they just look different.
Every programming language has a way to write mathematical expressions,
repeat a list of instructions a number of times, choose which of two instruc-
tions to do based on the current information you have, and much more. In
this book, you’ll learn how to do these things in the Python programming
language. Once you understand Python, learning the next programming lan-
guage will be much easier.
What’s a Bug?
Pretty much everyone has had a program crash. A standard story is that you
were typing in a paper when, all of a sudden, your word processor crashed.
You had forgotten to save, and you had to start all over again. Old versions
of Microsoft Windows used to crash more often than they should have,
showing the dreaded “blue screen of death.” (Happily, they’ve gotten a lot
better in the past several years.) Usually, your computer shows some kind of
cryptic error message when a program crashes.
What happened in each case is that the people who wrote the program told
the computer to do something it couldn’t do: open a file that didn’t exist,
perhaps, or keep track of more information than the computer could handle,
or maybe repeat a task with no way of stopping other than by rebooting the
computer. (Programmers don’t mean to make these kinds of mistakes; they
are just part of the programming process.)
Worse, some bugs don’t cause a crash; instead, they give incorrect information.
(This is worse because at least with a crash you’ll notice that there’s a prob-
lem.) As a real-life example of this kind of bug, the calendar program that one
of the authors uses contains an entry for a friend who was born in 1978. That
friend, according to the calendar program, had his 5,875,542nd birthday this
past February. Bugs can be entertaining, but they can also be tremendously
frustrating.
Every piece of software that you can buy has bugs in it. Part of your job as a
programmer is to minimize the number of bugs and to reduce their severity.
In order to find a bug, you need to track down where you gave the wrong
instructions, then you need to figure out the right instructions, and then you
need to update the program without introducing other bugs.
Every time you get a software update for a program, it is for one of two reasons:
new features were added to a program or bugs were fixed. It’s always a game
of economics for the software company: are there few enough bugs, and are
they minor enough or infrequent enough in order for people to pay for the
software?
In this book, we’ll show you some fundamental techniques for finding and
fixing bugs and also show you how to prevent them in the first place.
The Difference Between Brackets, Braces, and Parentheses
One of the pieces of terminology that causes confusion is what to call certain
characters. Several dictionaries use these names, so this book does too:
Parentheses()
Brackets[]
Braces (Some people call these curly brackets or curly braces, but we’ll
stick to just braces.)
{}
Installing Python
Installation instructions and use of the IDLE programming environment are
available on the book’s website:.
CHAPTER 2
Hello, Python
Programs are made up of commands that tell the computer what to do. These
commands are called statements, which the computer executes. This chapter
describes the simplest of Python’s statements and shows how they can be
used to do arithmetic, which is one of the most common tasks for computers
and also a great place to start learning to program. It’s also the basis of almost
everything that follows.
How Does a Computer Run a Python Program?
In order to understand what happens when you’re programming, it helps to
have a mental model of how a computer executes a program.
The computer is assembled from pieces of hardware, including a processor
that can execute instructions and do arithmetic, a place to store data such
as a hard drive, and various other pieces, such as a screen, a keyboard, an
Ethernet controller for connecting to a network, and so on.
To deal with all these pieces, every computer runs some kind of operating
system, such as Microsoft Windows, Linux, or macOS. An operating system,
or OS, is a program; what makes it special is that it’s the only program on
the computer that’s allowed direct access to the hardware. When any other
application (such as your browser, a spreadsheet program, or a game) wants
to draw on the screen, find out what key was just pressed on the keyboard,
or fetch data from storage, it sends a request to the OS (see the top image on
page 8).
This may seem like a roundabout way of doing things, but it means that only
the people writing the OS have to worry about the differences between one
graphics card and another and whether the computer is connected to a
network through Ethernet or wireless. The rest of us—everyone analyzing
[Figure: applications send requests to the operating system, which controls the storage device and screen]
scientific data or creating 3D virtual chat rooms—only have to learn our way
around the OS, and our programs will then run on thousands of different
kinds of hardware.
Today, the picture looks like this:
[Figure: the Python interpreter is another application that runs on top of the operating system; your Python program runs on top of the interpreter]
There are two ways to use the Python interpreter. One is to tell it to execute
a Python program that is saved in a file with a .py extension. Another is to
interact with it in a program called a shell, where you type statements one at
a time. The interpreter will execute each statement when you type it, do what
the statement says to do, and show any output as text, all in one window.
We will explore Python in this chapter using a Python shell.
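As a minimal sketch of the first way (the file name hello.py is our own choice), a saved program can contain just a single statement:

```python
# hello.py -- a complete one-line Python program
print("Hello, Python")
```

Running the file through the interpreter (for example, python hello.py on the command line) executes it; typing the same statement at the shell prompt instead executes it immediately and shows the output in the same window.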
Install Python Now (If You Haven’t Already)
If you haven’t yet installed Python 3.6, please do so now. (Python 2 won’t do; there
are significant differences between Python 2 and Python 3, and this book uses Python
3.6.) Locate installation instructions on the book’s website:
practical-programming.
Programming requires practice: you won’t learn how to program just by reading this
book, much like you wouldn’t learn how to play guitar just by reading a book on how
to play guitar.
Python comes with a program called IDLE, which we use to write Python programs.
IDLE has a Python shell that communicates with the Python interpreter and also
allows you to write and run programs that are saved in a file.
We strongly recommend that you open IDLE and follow along with our examples.
Typing in the code in this book is the programming equivalent of repeating phrases
back to an instructor as you’re learning to speak a new language.
Expressions and Values: Arithmetic in Python
You’re familiar with mathematical expressions like 3 + 4 (“three plus four”)
and 2 - 3 / 5 (“two minus three divided by five”); each expression is built out of
values like 2, 3, and 5 and operators like + and -, which combine their operands
in different ways. In the expression 4 / 5, the operator is “/” and the operands
are 4 and 5.
Expressions don’t have to involve an operator: a number by itself is an
expression. For example, we consider 212 to be an expression as well as a
value.
Like any programming language, Python can evaluate basic mathematical
expressions. For example, the following expression adds 4 and 13:
>>> 4 + 13
17
The >>> symbol is called a prompt. When you opened IDLE, a window should
have opened with this symbol shown; you don’t type it. It is prompting you
to type something. Here we typed 4 + 13, and then we pressed the Return (or
Enter) key in order to signal that we were done entering that expression.
Python then evaluated the expression.
When an expression is evaluated, it produces a single value. In the previous
expression, the evaluation of 4 + 13 produced the value 17. When you type the
expression in the shell, Python shows the value that is produced.
Subtraction and multiplication are similarly unsurprising:
>>> 15 - 3
12
>>> 4 * 7
28
The following expression divides 5 by 2:
>>> 5 / 2
2.5
The result has a decimal point. In fact, the result of division always has a
decimal point even if the result is a whole number:
>>> 4 / 2
2.0
Types
Every value in Python has a particular type, and the types of values determine
how they behave when they’re combined. Values like 4 and 17 have type int
(short for integer), and values like 2.5 and 17.0 have type float. The word float
is short for floating point, which refers to the decimal point that moves around
between digits of the number.
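You can ask Python for the type of any value with the built-in type function; this is a quick check you can try in the shell (the particular values here are our own examples):

```python
print(type(17))     # <class 'int'>
print(type(2.5))    # <class 'float'>
print(type(4 / 2))  # <class 'float'>, since division always produces a float
```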
An expression involving two floats produces a float:
>>> 17.0 - 10.0
7.0
When an expression’s operands are an int and a float, Python automatically
converts the int to a float. This is why the following two expressions both return
the same answer:
>>> 17.0 - 10
7.0
>>> 17 - 10.0
7.0
If you want, you can omit the zero after the decimal point when writing a
floating-point number:
>>> 17 - 10.
7.0
>>> 17. - 10
7.0
However, most people think this is bad style, since it makes your programs
harder to read: it’s very easy to miss a dot on the screen and see 17 instead
of 17..
Integer Division, Modulo, and Exponentiation
Every now and then, we want only the integer part of a division result. For
example, we might want to know how many 24-hour days there are in 53
hours (which is two 24-hour days plus another 5 hours). To calculate the
number of days, we can use integer division:
>>> 53 // 24
2
We can find out how many hours are left over using the modulo operator,
which gives the remainder of the division:
>>> 53 % 24
5
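Putting the two operators together in a short sketch (using the same 53-hour example; the variable names are ours):

```python
hours = 53
days = hours // 24       # integer division drops the fraction: 2 days
leftover = hours % 24    # modulo keeps the remainder: 5 hours
print(days, "days and", leftover, "hours")
```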
Python doesn’t round the result of integer division. Instead, it takes the floor
of the result of the division, which means that it rounds down to the nearest
integer:
>>> 17 // 10
1
Be careful about using % and // with negative operands. Because Python takes
the floor of the result of an integer division, the result is one smaller than
you might expect if the result is negative:
>>> -17 // 10
-2
When using modulo, the sign of the result matches the sign of the divisor
(the second operand):
>>> -17 % 10
3
>>> 17 % -10
-3
For the mathematically inclined, the relationship between // and % comes from
this equation, which holds for any numbers a and b as long as b is not zero:
(b * (a // b) + a % b) is equal to a
For example, -17 // 10 is -2 and -17 % 10 is 3, so 10 * (-17 // 10) + -17 % 10
is the same as 10 * -2 + 3, which is -17.
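We can ask Python to confirm the identity for all four sign combinations; this check is ours, not from the book:

```python
# Check that b * (a // b) + a % b gives back a for every sign combination.
for a, b in [(17, 10), (-17, 10), (17, -10), (-17, -10)]:
    assert b * (a // b) + a % b == a
print("the identity holds in all four cases")
```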
Floating-point numbers can be operands for // and % as well. With //, division
is performed and the result is rounded down to the nearest whole number,
although the type is a floating-point number:
>>> 3.3 // 1
3.0
>>> 3 // 1.0
3.0
>>> 3 // 1.1
2.0
>>> 3.5 // 1.1
3.0
>>> 3.5 // 1.3
2.0
The following expression calculates 3 raised to the 6th power:
>>> 3 ** 6
729
Operators that have two operands are called binary operators. Negation is a
unary operator because it applies to one operand:
>>> -5
-5
>>> --5
5
>>> ---5
-5
What Is a Type?
We’ve now seen two types of numbers (integers and floating-point numbers),
so we ought to explain what we mean by a type. In Python, a type consists
of two things:
• A set of values
• A set of operations that can be applied to those values
For example, in type int, the values are …, -3, -2, -1, 0, 1, 2, 3, … and we have seen
that these operators can be applied to those values: +, -, *, /, //, %, and **.
The values in type float are a subset of the real numbers, and it happens that
the same set of operations can be applied to float values. We can see what
happens when these are applied to various values in Table 1, Arithmetic
Operators, on page 13. If an operator can be applied to more than one type
of value, it is called an overloaded operator.
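If you're curious which type a value has, Python's built-in type function (which we'll use more later) will tell you:

```python
print(type(17))        # <class 'int'>
print(type(2.5))       # <class 'float'>
print(type(17 - 10.0)) # mixing an int and a float produces a float
```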
Symbol   Operator           Example     Result
-        Negation           -5          -5
+        Addition           11 + 3.1    14.1
-        Subtraction        5 - 19      -14
*        Multiplication     8.5 * 4     34.0
/        Division           11 / 2      5.5
//       Integer Division   11 // 2     5
%        Remainder          8.5 % 3.5   1.5
**       Exponentiation     2 ** 5      32

Table 1—Arithmetic Operators

Finite Precision
Floating-point numbers are not exactly the fractions you learned in grade
school. For example, look at Python's version of the fractions 2⁄3 and 5⁄3:
>>> 2 / 3
0.6666666666666666
>>> 5 / 3
1.6666666666666667
The first value ends with a 6, and the second with a 7. This is fishy: both of
them should have an infinite number of 6s after the decimal point. The
problem is that computers have a finite amount of memory, and (to make
calculations fast and memory efficient) most programming languages limit
how much information can be stored for any single number. The number
0.6666666666666666 turns out to be the closest value to 2⁄3 that the computer
can actually store in that limited amount of memory, and 1.6666666666666667
is as close as we get to the real value of 5⁄3.
Operator Precedence
Let’s put our knowledge of ints and floats to use in converting Fahrenheit to
Celsius. To do this, we subtract 32 from the temperature in Fahrenheit and
then multiply by 5⁄9:
>>> 212 - 32 * 5 / 9
194.22222222222223
Python claims the result is 194.22222222222223 degrees Celsius, when in fact it
should be 100. The problem is that multiplication and division have higher
precedence than subtraction; in other words, when an expression contains
a mix of operators, the * and / are evaluated before the - and +. This means
that what we actually calculated was 212 - ((32 * 5) / 9): the subexpression 32 * 5
is evaluated before the division is applied, and that division is evaluated before
the subtraction occurs.
More on Numeric Precision
Integers (values of type int) in Python can be as large or as small as you like. However,
float values are only approximations to real numbers. For example, 1⁄4 can be stored
exactly, but as we’ve already seen, 2⁄3 cannot. Using more memory won’t solve the
problem, though it will make the approximation closer to the real value, just as
writing a larger number of 6s after the 0 in 0.666… doesn’t make it exactly equal to 2⁄3.
The difference between 2⁄3 and 0.6666666666666666 may look tiny. But if we use
0.6666666666666666 in a calculation, then the error may get compounded. For example,
if we add 1 to 2⁄3, the resulting value ends in …6665, so in many programming lan-
guages, 1 + 2⁄3 is not equal to
5⁄3:
>>> 2 / 3 + 1
1.6666666666666665
>>> 5 / 3
1.6666666666666667
As we do more calculations, the rounding errors can get larger and larger, particularly
if we’re mixing very large and very small numbers. For example, suppose we add
10000000000 (10 billion) and 0.00000000001 (there are 10 zeros after the decimal point):
>>> 10000000000 + 0.00000000001
10000000000.0
The result ought to have twenty zeros between the first and last significant digit, but
that’s too many for the computer to store, so the result is just 10000000000—it’s as if
the addition never took place. Adding lots of small numbers to a large one can
therefore have no effect at all, which is not what a bank wants when it totals up the
values of its customers’ savings accounts.
It’s important to be aware of the floating-point issue. There is no magic bullet to solve
it, because computers are limited in both memory and speed. Numerical analysis,
the study of algorithms to approximate continuous mathematics, is one of the largest
subfields of computer science and mathematics.
Here’s a tip: If you have to add up floating-point numbers, add them from smallest
to largest in order to minimize the error.
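Here is a small demonstration of that tip, assuming standard 64-bit floats: ten 1.0s vanish when added one at a time to 1e16, but survive when summed first:

```python
large = 1e16

# Largest first: each 1.0 is too small to change 1e16.
large_first = large
for _ in range(10):
    large_first += 1.0

# Smallest first: the 1.0s accumulate to 10.0 before the big number arrives.
small_first = 0.0
for _ in range(10):
    small_first += 1.0
small_first += large

print(large_first)  # 1e+16 -- the ten additions were lost
print(small_first)  # 1.000000000000001e+16
```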
We can alter the order of precedence by putting parentheses around
subexpressions:
>>> (212 - 32) * 5 / 9
100.0
Table 2, Arithmetic Operators Listed by Precedence from Highest to Lowest, on
page 15 shows the order of precedence for arithmetic operators.
Operators with higher precedence are applied before those with lower prece-
dence. Here is an example that shows this:
>>> -2 ** 4
-16
>>> -(2 ** 4)
-16
>>> (-2) ** 4
16
Because exponentiation has higher precedence than negation, the subexpres-
sion 2 ** 4 is evaluated before negation is applied.
Operation                                                   Operator      Precedence
Exponentiation                                              **            Highest
Negation                                                    -
Multiplication, division, integer division, and remainder   *, /, //, %
Addition and subtraction                                    +, -          Lowest

Table 2—Arithmetic Operators Listed by Precedence from Highest to Lowest
Operators on the same row have equal precedence and are applied left to
right, except for exponentiation, which is applied right to left. So, for example,
because binary operators + and - are on the same row, 3 + 4 - 5 is equivalent
to (3 + 4) - 5, and 3 - 4 + 5 is equivalent to (3 - 4) + 5.
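Both rules are easy to verify in the shell; a quick sketch:

```python
print(2 ** 3 ** 2)    # 512: exponentiation groups right to left, 2 ** (3 ** 2)
print((2 ** 3) ** 2)  # 64: parentheses force the other grouping
print(3 - 4 + 5)      # 4: binary + and - are applied left to right
```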
It’s a good rule to parenthesize complicated expressions even when you don’t
need to, since it helps the eye read things like 1 + 1.7 + 3.2 * 4.4 - 16 / 3. On the
other hand, it’s a good rule to not use parentheses in simple expressions such
as 3.1 * 5.
Variables and Computer Memory: Remembering Values
Like mathematicians, programmers frequently name values so that they can
use them later. A name that refers to a value is called a variable. In Python,
variable names can use letters, digits, and the underscore symbol (but they
can’t start with a digit). For example, X, species5618, and degrees_celsius are all
allowed, but 777 isn’t (it would be confused with a number), and neither is
no-way! (it contains punctuation). Variable names are case sensitive, so ph and
pH are two different names.
You create a new variable by assigning it a value:
>>> degrees_celsius = 26.0
This statement is called an assignment statement; we say that degrees_celsius is
assigned the value 26.0. That makes degrees_celsius refer to the value 26.0. We can
use variables anywhere we can use values. Whenever Python sees a variable in
an expression, it substitutes the value to which the variable refers:
>>> degrees_celsius = 26.0
>>> degrees_celsius
26.0
>>> 9 / 5 * degrees_celsius + 32
78.80000000000001
>>> degrees_celsius / degrees_celsius
1.0
Variables are called variables because their values can vary as the program
executes. We can assign a new value to a variable:
>>> degrees_celsius = 26.0
>>> 9 / 5 * degrees_celsius + 32
78.80000000000001
>>> degrees_celsius = 0.0
>>> 9 / 5 * degrees_celsius + 32
32.0
Assigning a value to a variable that already exists doesn’t create a second
variable. Instead, the existing variable is reused, which means that the variable
no longer refers to its old value.
We can create other variables; this example calculates the difference between
the boiling point of water and the temperature stored in degrees_celsius:
>>> degrees_celsius = 15.5
>>> difference = 100 - degrees_celsius
>>> difference
84.5
Warning: = Is Not Equality in Python!
In mathematics, = means “the thing on the left is equal to the thing on the right.” In
Python, it means something quite different. Assignment is not symmetric: x = 12
assigns the value 12 to variable x, but 12 = x results in an error. Because of this, we
never describe the statement x = 12 as “x equals 12.” Instead, we read this as “x gets
12” or “x is assigned 12.”
Values, Variables, and Computer Memory
We’re going to develop a model of computer memory—a memory model—that will
let us trace what happens when Python executes a Python program. This memory
model will help us accurately predict and explain what Python does when it exe-
cutes code, a skill that is a requirement for becoming a good programmer.
The Online Python Tutor
Philip Guo wrote a web-based memory visualizer that matches our memory model
pretty well. Here’s the URL:. It can trace both Python 2
and Python 3 code; make sure you select the correct version. The settings that most
closely match our memory model are these:
• Hide exited frames
• Render all objects on the heap
• Use text labels for pointers
We strongly recommend that you use this visualizer whenever you want to trace
execution of a Python program.
In case you find it motivating, we weren’t aware of Philip’s visualizer when we devel-
oped our memory model (and vice versa), and yet they match extremely closely.
Every location in the computer’s memory has a memory address, much like
an address for a house on a street, that uniquely identifies that location.
We’re going to mark our memory addresses with an id prefix (short for identi-
fier) so that they look different from integers: id1, id2, id3, and so on.
Here is how we draw the floating-point value 26.0 using the memory model:
id1:  ┌───────┐
      │ float │
      │ 26.0  │
      └───────┘
This image shows the value 26.0 at the memory address id1. We will always
show the type of the value as well—in this case, float. We will call this box an
object: a value at a memory address with a type. During execution of a pro-
gram, every value that Python keeps track of is stored inside an object in
computer memory.
In our memory model, a variable contains the memory address of the object
to which it refers:
degrees_celsius │ id1 │        id1:  ┌───────┐
                                     │ float │
                                     │ 26.0  │
                                     └───────┘
In order to make the image easier to interpret, we usually draw arrows from
variables to their objects.
We use the following terminology:
• Value 26.0 has the memory address id1.
• The object at the memory address id1 has type float and the value 26.0.
• Variable degrees_celsius contains the memory address id1.
• Variable degrees_celsius refers to the value 26.0.
Whenever Python needs to know which value degrees_celsius refers to, it looks
at the object at the memory address that degrees_celsius contains. In this
example, that memory address is id1, so Python will use the value at the
memory address id1, which is 26.0.
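The labels id1, id2, and so on are invented for the memory model. If you want to peek at a real identifier, Python's built-in id function returns one (in CPython it happens to be the object's memory address, though that's an implementation detail):

```python
degrees_celsius = 26.0
print(id(degrees_celsius))  # a large integer; it will differ on every run
```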
Assignment Statement
Here is the general form of an assignment statement:
«variable» = «expression»
This is executed as follows:
1. Evaluate the expression on the right of the = sign to produce a value. This
value has a memory address.
2. Store the memory address of the value in the variable on the left of the =.
Create a new variable if that name doesn’t already exist; otherwise, just reuse
the existing variable, replacing the memory address that it contains.
Consider this example:
>>> degrees_celsius = 26.0 + 5
>>> degrees_celsius
31.0
Here is how Python executes the statement degrees_celsius = 26.0 + 5:
1. Evaluate the expression on the right of the = sign: 26.0 + 5. This produces
the value 31.0, which has a memory address. (Remember that Python
stores all values in computer memory.)
2. Make the variable on the left of the = sign, degrees_celsius, refer to 31.0 by
storing the memory address of 31.0 in degrees_celsius.
Reassigning to Variables
Consider this code:
>>> difference = 20
>>> double = 2 * difference
>>> double
40
>>> difference = 5
>>> double
40
This code demonstrates that assigning to a variable does not change any
other variable. We start by assigning value 20 to variable difference, and then
we assign the result of evaluating 2 * difference (which produces 40) to variable
double.
Next, we assign value 5 to variable difference, but when we examine the value
of double, it still refers to 40.
Here’s how it works according to our rules. The first statement, difference = 20,
is executed as follows:
1. Evaluate the expression on the right of the = sign: 20. This produces the
value 20, which we’ll put at memory address id1.
2. Make the variable on the left of the = sign, difference, refer to 20 by storing
id1 in difference.
Here is the current state of the memory model. (Variable double has not yet
been created because we have not yet executed the assignment to it.)
The second statement, double = 2 * difference, is executed as follows:
1. Evaluate the expression on the right of the = sign: 2 * difference. As we see
in the memory model, difference refers to the value 20, so this expression
is equivalent to 2 * 20, which produces 40. We’ll pick the memory address
id2 for the value 40.
2. Make the variable on the left of the = sign, double, refer to 40 by storing id2
in double.
Here is the current state of the memory model:
When Python executes the third statement, double, it merely looks up the value
that double refers to (40) and displays it.
The fourth statement, difference = 5, is executed as follows:
1. Evaluate the expression on the right of the = sign: 5. This produces the
value 5, which we’ll put at the memory address id3.
2. Make the variable on the left of the = sign, difference, refer to 5 by storing
id3 in difference.
Here is the current state of the memory model:
Variable double still contains id2, so it still refers to 40. Neither variable refers
to 20 anymore.
The fifth and last statement, double, merely looks up the value that double refers
to, which is still 40, and displays it.
We can even use a variable on both sides of an assignment statement:
>>> number = 3
>>> number
3
>>> number = 2 * number
>>> number
6
>>> number = number * number
>>> number
36
We’ll now explain how Python executes this code, but we won’t explicitly
mention memory addresses. Trace this on a piece of paper while we describe
what happens; make up your own memory addresses as you do this.
Python executes the first statement, number = 3, as follows:
1. Evaluate the expression on the right of the = sign: 3. This one is easy to
evaluate: 3 is produced.
2. Make the variable on the left of the = sign, number, refer to 3.
Python executes the second statement, number = 2 * number, as follows:
1. Evaluate the expression on the right of the = sign: 2 * number. number cur-
rently refers to 3, so this is equivalent to 2 * 3, and 6 is produced.
2. Make the variable on the left of the = sign, number, refer to 6.
Python executes the third statement, number = number * number, as follows:
1. Evaluate the expression on the right of the = sign: number * number. number
currently refers to 6, so this is equivalent to 6 * 6, and 36 is produced.
2. Make the variable on the left of the = sign, number, refer to 36.
Augmented Assignment
In this example, the variable score appears on both sides of the assignment
statement:
>>> score = 50
>>> score
50
>>> score = score + 20
>>> score
70
This is so common that Python provides a shorthand notation for this
operation:
>>> score = 50
>>> score
50
>>> score += 20
>>> score
70
An augmented assignment combines an assignment statement with an oper-
ator to make the statement more concise. An augmented assignment statement
is executed as follows:
1. Evaluate the expression on the right of the = sign to produce a value.
2. Apply the operator attached to the = sign to the variable on the left of the
= and the value that was produced. This produces another value. Store
the memory address of that value in the variable on the left of the =.
Note that the operator is applied after the expression on the right is evaluated:
>>> d = 2
>>> d *= 3 + 4
>>> d
14
All the operators (except for negation) in Table 2, Arithmetic Operators Listed
by Precedence from Highest to Lowest, on page 15, have shorthand versions.
For example, we can square a number by multiplying it by itself:
>>> number = 10
>>> number *= number
>>> number
100
This code is equivalent to this:
>>> number = 10
>>> number = number * number
>>> number
100
Table 3 contains a summary of the augmented operators you’ve seen plus a
few more based on arithmetic operators you learned about in Expressions
and Values: Arithmetic in Python, on page 9.
Symbol   Example           Result
+=       x = 7; x += 2     x refers to 9
-=       x = 7; x -= 2     x refers to 5
*=       x = 7; x *= 2     x refers to 14
/=       x = 7; x /= 2     x refers to 3.5
//=      x = 7; x //= 2    x refers to 3
%=       x = 7; x %= 2     x refers to 1
**=      x = 7; x **= 2    x refers to 49

Table 3—Augmented Assignment Operators
How Python Tells You Something Went Wrong
Broadly speaking, there are two kinds of errors in Python: syntax errors,
which happen when you type something that isn’t valid Python code, and
semantic errors, which happen when you tell Python to do something that it
just can’t do, like divide a number by zero or try to use a variable that doesn’t
exist.
Here is what happens when we try to use a variable that hasn’t been created yet:
>>> 3 + moogah
Traceback (most recent call last):
File "", line 1, in
NameError: name 'moogah' is not defined
This is pretty cryptic; Python error messages are meant for people who already
know Python. (You’ll get used to them and soon find them helpful.) The first
two lines aren’t much use right now, though they’ll be indispensable when
we start writing longer programs. The last line is the one that tells us what
went wrong: the name moogah wasn’t recognized.
Here’s another error message you might sometimes see:
>>> 2 +
File "", line 1
2 +
^
SyntaxError: invalid syntax
The rules governing what is and isn’t legal in a programming language are
called its syntax. The message tells us that we violated Python’s syntax
rules—in this case, by asking it to add something to 2 but not telling it what
to add.
Earlier, in Warning: = Is Not Equality in Python!, on page 16, we claimed that
12 = x results in an error. Let’s try it:
>>> 12 = x
File "", line 1
SyntaxError: can't assign to literal
A literal is any value, like 12 and 26.0. This is a SyntaxError because when Python
examines that assignment statement, it knows that you can’t assign a value
to a number even before it tries to execute it; you can’t change the value of
12 to anything else. 12 is just 12.
A Single Statement That Spans Multiple Lines
Sometimes statements get pretty intricate. The recommended Python style is
to limit lines to 80 characters, including spaces, tabs, and other whitespace
characters, and that’s a common limit throughout the programming world.
Here’s what to do when lines get too long or when you want to split it up for
clarity.
In order to split up a statement into more than one line, you need to do one
of two things:
1. Make sure your line break occurs inside parentheses.
2. Use the line-continuation character, which is a backslash, \.
Note that the line-continuation character is a backslash (\), not the division
symbol (/).
Here are examples of both:
>>> (2 +
... 3)
5
>>> 2 + \
... 3
5
Notice how we don’t get a SyntaxError. Each triple-dot prompt in our examples
indicates that we are in the middle of entering an expression; we use them
to make the code line up nicely. You do not type the dots any more than you
type the greater-than signs in the usual >>> prompt, and if you are using
IDLE, you won’t see them at all.
Here is a more realistic (and tastier) example: let’s say we’re baking cookies.
The authors live in Canada, which uses Celsius, but we own cookbooks that
use Fahrenheit. We are wondering how long it will take to preheat our oven.
Here are our facts:
• The room temperature is 20 degrees Celsius.
• Our oven controls use Celsius, and the oven heats up at 20 degrees per
minute.
• Our cookbook uses Fahrenheit, and it says to preheat the oven to 350
degrees.
We can convert t degrees Fahrenheit to degrees Celsius like this: (t - 32) * 5 /
9. Let's use this information to try to solve our problem.
>>> room_temperature_c = 20
>>> cooking_temperature_f = 350
>>> oven_heating_rate_c = 20
>>> oven_heating_time = (
... ((cooking_temperature_f - 32) * 5 / 9) - room_temperature_c) / \
... oven_heating_rate_c
>>> oven_heating_time
7.833333333333333
Not bad—just under eight minutes to preheat.
The assignment statement to variable oven_heating_time spans three lines. The
first line ends with an open parenthesis, so we do not need a line-continuation
character. The second ends outside the parentheses, so we need the line-
continuation character. The third line completes the assignment statement.
That’s still hard to read. Once we’ve continued an expression on the next line,
we can indent (by pressing the Tab key or by pressing the spacebar a bunch)
to our heart’s content to make it clearer:
>>> oven_heating_time = (
... ((cooking_temperature_f - 32) * 5 / 9) - room_temperature_c) / \
... oven_heating_rate_c
Or even this—notice how the two subexpressions involved in the subtraction
line up:
>>> oven_heating_time = (
... ((cooking_temperature_f - 32) * 5 / 9) -
... room_temperature_c) / \
... oven_heating_rate_c
In the previous example, we clarified the expression by working with indenta-
tion. However, we could have made this process even clearer by converting
the cooking temperature to Celsius before calculating the heating time:
>>> room_temperature_c = 20
>>> cooking_temperature_f = 350
>>> cooking_temperature_c = (cooking_temperature_f - 32) * 5 / 9
>>> oven_heating_rate_c = 20
>>> oven_heating_time = (cooking_temperature_c - room_temperature_c) / \
... oven_heating_rate_c
>>> oven_heating_time
7.833333333333333
The message to take away here is that well-named temporary variables can
make code much clearer.
Describing Code
Programs can be quite complicated and are often thousands of lines long. It
can be helpful to write a comment describing parts of the code so that when
you or someone else reads it the meaning is clear.
In Python, any time the # character is encountered, Python will ignore the
rest of the line. This allows you to write English sentences:
>>> # Python ignores this sentence because of the # symbol.
The # symbol does not have to be the first character on the line; it can appear
at the end of a statement:
>>> (212 - 32) * 5 / 9 # Convert 212 degrees Fahrenheit to Celsius.
100.0
Notice that the comment doesn’t describe how Python works. Instead, it is
meant for humans reading the code to help them understand why the code
exists.
Making Code Readable
Much like there are spaces in English sentences to make the words easier to
read, we use spaces in Python code to make it easier to read. In particular,
we always put a space before and after every binary operator. For example,
we write v = 4 + -2.5 / 3.6 instead of v=4+-2.5/3.6. There are situations where it
may not make a difference, but that’s a detail we don’t want to fuss about,
so we always do it: it’s almost never harder to read if there are spaces.
Psychologists have discovered that people can keep track of only a handful
of things at any one time (Forty Studies That Changed Psychology [Hoc04]).
Since programs can get quite complicated, it’s important that you choose
names for your variables that will help you remember what they’re for. id1,
X2, and blah won’t remind you of anything when you come back to look at your
program next week: use names like celsius, average, and final_result instead.
Other studies have shown that your brain automatically notices differences
between things—in fact, there’s no way to stop it from doing this. As a result,
the more inconsistencies there are in a piece of text, the longer it takes to
read. (JuSt thInK a bout how long It w o u l d tAKE you to rEa d this cHaPTer
iF IT wAs fORmaTTeD like thIs.) It’s therefore also important to use consistent
names for variables. If you call something maximum in one place, don’t call it
max_val in another; if you use the name max_val, don’t also use the name maxVal,
and so on.
These rules are so important that many programming teams require members
to follow a style guide for whatever language they’re using, just as newspapers
and book publishers specify how to capitalize headings and whether to use
a comma before the last item in a list. If you search the Internet for program-
ming style guide (), you’ll
discover links to hundreds of examples. In this book, we follow the style guide
for Python from.
You will also discover that lots of people have wasted many hours arguing
over what the “best” style for code is. Some of your classmates (and your
instructors) may have strong opinions about this as well. If they do, ask them
what data they have to back up their beliefs. Strong opinions need strong
evidence to be taken seriously.
The Object of This Chapter
In this chapter, you learned the following:
• An operating system is a program that manages your computer’s hardware
on behalf of other programs. An interpreter or virtual machine is a program
that sits on top of the operating system and runs your programs for you.
The Python shell is an interpreter, translating your Python statements
into language the operating system understands and translating the
results back so you can see and use them.
• Programs are made up of statements, or instructions. These can be simple
expressions like 3 + 4 and assignment statements like celsius = 20 (which
create new variables or change the values of existing ones). There are
many other kinds of statements in Python, and we’ll introduce them
throughout the book.
• Every value in Python has a specific type, which determines what opera-
tions can be applied to it. The two types used to represent numbers are
int and float. Floating-point numbers are approximations to real numbers.
• Python evaluates an expression by applying higher-precedence operators
before lower-precedence operators. You can change that order by putting
parentheses around subexpressions.
• Python stores every value in computer memory. A memory location con-
taining a value is called an object.
• Variables are created by executing assignment statements. If a variable
already exists because of a previous assignment statement, Python will
use that one instead of creating a new one.
• Variables contain memory addresses of values. We say that variables refer
to values.
• Variables must be assigned values before they can be used in expressions.
Exercises
Here are some exercises for you to try on your own. Solutions are available
at.
1. For each of the following expressions, what value will the expression give?
Verify your answers by typing the expressions into Python.
a. 9 - 3
b. 8 * 2.5
c. 9 / 2
d. 9 / -2
e. 9 // -2
f. 9 % 2
g. 9.0 % 2
h. 9 % 2.0
i. 9 % -2
j. -9 % 2
k. 9 / -2.0
l. 4 + 3 * 5
m. (4 + 3) * 5
2. Unary minus negates a number. Unary plus exists as well; for example,
Python understands +5. If x has the value -17, what do you think +x should
do? Should it leave the sign of the number alone? Should it act like
absolute value, removing any negation? Use the Python shell to find out
its behavior.
3. Write two assignment statements that do the following:
a. Create a new variable, temp, and assign it the value 24.
b. Convert the value in temp from Celsius to Fahrenheit by multiplying
by 1.8 and adding 32; make temp refer to the resulting value.
What is temp’s new value?
4. For each of the following expressions, in which order are the subexpres-
sions evaluated?
a. 6 * 3 + 7 * 4
b. 5 + 3 / 4
c. 5 - 2 * 3 ** 4
5. a. Create a new variable x, and assign it the value 10.5.
b. Create a new variable y, and assign it the value 4.
c. Sum x and y, and make x refer to the resulting value. After this state-
ment has been executed, what are the values of x and y?
6. Write a bullet list description of what happens when Python evaluates
the statement x += x - x when x has the value 3.
7. When a variable is used before it has been assigned a value, a NameError
occurs. In the Python shell, write an expression that results in a NameError.
8. Which of the following expressions results in SyntaxErrors?
a. 6 * -----------8
b. 8 = people
c. ((((4 ** 3))))
d. (-(-(-(-5))))
e. 4 += 7 / 2
CHAPTER 3
Designing and Using Functions
Mathematicians create functions to make calculations (such as Fahrenheit-
to-Celsius conversions) easy to reuse and to make other calculations easier
to read because they can use those functions instead of repeatedly writing
out equations. Programmers do this too, at least as often as mathematicians.
In this chapter we will explore several of the built-in functions that come with
Python, and we’ll also show you how to define your own functions.
Functions That Python Provides
Python comes with many built-in functions that perform common operations.
One example is abs, which produces the absolute value of a number:
>>> abs(-9)
9
>>> abs(3.3)
3.3
Each of these statements is a function call.
Keep Your Shell Open
As a reminder, we recommend that you have IDLE open (or another Python editor)
and that you try all the code under discussion; this is a good way to cement your
learning.
The general form of a function call is as follows:
«function_name»(«arguments»)
An argument is an expression that appears between the parentheses of a
function call. In abs(-9), the argument is -9.
Here, we calculate the difference between a day temperature and a night
temperature, as might be seen on a weather report (a warm weather system
moved in overnight):
>>> day_temperature = 3
>>> night_temperature = 10
>>> abs(day_temperature - night_temperature)
7
In this call on function abs, the argument is day_temperature - night_temperature.
Because day_temperature refers to 3 and night_temperature refers to 10, Python
evaluates this expression to -7. This value is then passed to function abs,
which then returns, or produces, the value 7.
Here are the rules to executing a function call:
1. Evaluate each argument one at a time, working from left to right.
2. Pass the resulting values into the function.
3. Execute the function. When the function call finishes, it produces a value.
Because function calls produce values, they can be used in expressions:
>>> abs(-7) + abs(3.3)
10.3
We can also use function calls as arguments to other functions:
>>> pow(abs(-2), round(4.3))
16
Python sees the call on pow and starts by evaluating the arguments from left
to right. The first argument is a call on function abs, so Python executes it.
abs(-2) produces 2, so that’s the first value for the call on pow. Then Python
executes round(4.3), which produces 4.
Now that the arguments to the call on function pow have been evaluated,
Python finishes calling pow, sending in 2 and 4 as the argument values. That
means that pow(abs(-2), round(4.3)) is equivalent to pow(2, 4), and 2⁴ is 16.
Here is a diagram indicating the order in which the various pieces of this
expression are evaluated by Python:
We have underlined each subexpression and given it a number to indicate
when Python executes or evaluates that subexpression.
Some of the most useful built-in functions are ones that convert from one
type to another. Type names int and float can be used as functions:
>>> int(34.6)
34
>>> int(-4.3)
-4
>>> float(21)
21.0
In this example, we see that when a floating-point number is converted to an
integer, it is truncated, not rounded.
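The contrast between truncating and rounding is worth seeing side by side. Here is a quick sketch (our addition, not one of the book's examples):

```python
# int() truncates toward zero, dropping the fractional part, while round()
# goes to the nearest integer.
print(int(4.7))      # 4
print(round(4.7))    # 5
print(int(-4.3))     # -4
print(round(-4.3))   # -4 (here the nearest integer happens to equal truncation)
```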
If you’re not sure what a function does, try calling built-in function help, which
shows documentation for any function:
>>> help(abs)
Help on built-in function abs in module builtins:
abs(x, /)
Return the absolute value of the argument.
The first line states which function is being described and which module
it belongs to. Here, the module name is builtins. Modules are an organizational
tool in Python and are discussed in Chapter 6, A Modular Approach, on
page 99.
The next part describes what the function does. The form of the function
appears first: function abs expects one argument. (The / indicates that there
are no more arguments.) After the form is an English description of what the
function does when it is called.
Another built-in function is round, which rounds a floating-point number to
the nearest integer:
>>> round(3.8)
4
>>> round(3.3)
3
>>> round(3.5)
4
>>> round(-3.3)
-3
>>> round(-3.5)
-4
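One subtlety that these examples don't reveal (this note is ours, not the book's): Python 3 rounds halfway cases to the nearest even integer, sometimes called banker's rounding.

```python
# Halfway cases go to the nearest even integer, which is why round(3.5)
# and round(-3.5) above produce 4 and -4.
print(round(2.5))   # 2, not 3
print(round(3.5))   # 4
print(round(4.5))   # 4, not 5
```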
The function round can be called with one or two arguments. If called with one, as
we’ve been doing, it rounds to the nearest integer. If called with two, it rounds to
a floating-point number, where the second argument indicates the precision:
>>> round(3.141592653, 2)
3.14
The documentation for round indicates that the second argument is optional
by surrounding it with brackets:
>>> help(round)
Help on built-in function round in module builtins:

round(...)
    round(number[, ndigits]) -> number

    Round a number to a given precision in decimal digits (default 0 digits).
    This returns an int when called with one argument, otherwise the same type as
    the number. ndigits may be negative.
Let’s explore built-in function pow by starting with its help documentation:
>>> help(pow)
Help on built-in function pow in module builtins:

pow(...)
    pow(x, y[, z]) -> number

    With two arguments, equivalent to x**y.  With three arguments,
    equivalent to (x**y) % z, but may be more efficient (e.g. for ints).
This shows that the function pow can be called with either two or three arguments.
The English description mentions that when called with two arguments
it is equivalent to x ** y. Let’s try it:
>>> pow(2, 4)
16
This call calculates 2⁴. So far, so good. How about with three arguments?
>>> pow(2, 4, 3)
1
We know that 2⁴ is 16, and evaluation of 16 % 3 produces 1.
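In other words, three-argument pow computes a power and then a remainder in a single call; a quick check (our sketch):

```python
# pow(x, y, z) is equivalent to (x ** y) % z, though it can be computed
# more efficiently for ints.
print(pow(2, 4, 3))    # 1
print((2 ** 4) % 3)    # 1, the same result computed manually
```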
Memory Addresses: How Python Keeps Track of Values
Back in Values, Variables, and Computer Memory, on page 16, you learned that
Python keeps track of each value in a separate object and that each object has a
memory address. You can discover the actual memory address of an object using
built-in function id:
>>> help(id)
Help on built-in function id in module builtins:
id(obj, /)
Return the identity of an object.
This is guaranteed to be unique among simultaneously existing objects.
(CPython uses the object's memory address.)
How cool is that? Let’s try it:
>>> id(-9)
4301189552
>>> id(23.1)
4298223160
>>> shoe_size = 8.5
>>> id(shoe_size)
4298223112
>>> fahrenheit = 77.7
>>> id(fahrenheit)
4298223064
The addresses you get will probably be different from what’s listed here since
values get stored wherever there happens to be free space. Function objects
also have memory addresses:
>>> id(abs)
4297868712
>>> id(round)
4297871160
Defining Our Own Functions
The built-in functions are useful but pretty generic. Often there aren't built-in
functions that do what we want, such as calculate mileage or play a game
of cribbage. When we want functions to do these sorts of things, we have to
write them ourselves.
Because we live in Toronto, Canada, we often deal with our neighbor to the
south. The United States typically uses Fahrenheit, so we convert from
Fahrenheit to Celsius and back a lot. It sure would be nice to be able to do
this:
>>> convert_to_celsius(212)
100.0
>>> convert_to_celsius(78.8)
26.0
>>> convert_to_celsius(10.4)
-12.0
Python Remembers and Reuses Some Objects
A cache is a collection of data. Because small integers—up to about 250 or so,
depending on the version of Python you’re using—are so common, Python creates
those objects as it starts up and reuses the same objects whenever it can. This speeds
up operations involving these values. The function id reveals this:
>>> i = 3
>>> j = 3
>>> k = 4 - 1
>>> id(i)
4296861792
>>> id(j)
4296861792
>>> id(k)
4296861792
What that means is that variables i, j, and k refer to the exact same object. This is
called aliasing.
Larger integers and all floating-point values aren’t necessarily cached:
>>> i = 30000000000
>>> j = 30000000000
>>> id(i)
4301190928
>>> id(j)
4302234864
>>> f = 0.0
>>> g = 0.0
>>> id(f)
4298223040
>>> id(g)
4298223016
Python decides for itself when to cache a value. The only reason you need to be aware
of it is so that you aren’t surprised when it happens; the output of your program is
not affected by when Python decides to cache.
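You can observe this caching directly with id (a sketch of ours; remember that the exact cached range is an implementation detail of CPython):

```python
# In CPython, two variables assigned the same small int refer to the same
# cached object, so their ids are equal.
i = 3
j = 3
print(id(i) == id(j))   # True in CPython
```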
However, the function convert_to_celsius doesn’t exist yet, so instead we see this
(focus only on the last line of the error message for now):
>>> convert_to_celsius(212)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'convert_to_celsius' is not defined
To fix this, we have to write a function definition that tells Python what to do
when the function is called.
We’ll go over the syntax of function definitions soon, but we’ll start with an
example:
>>> def convert_to_celsius(fahrenheit):
... return (fahrenheit - 32) * 5 / 9
...
The function body is indented. Here, we indent four spaces, as the Python
style guide recommends. If you forget to indent, you get this error:
>>> def convert_to_celsius(fahrenheit):
... return (fahrenheit - 32) * 5 / 9
  File "<stdin>", line 2
return (fahrenheit - 32) * 5 / 9
^
IndentationError: expected an indented block
Now that we’ve defined function convert_to_celsius, our earlier function calls will
work. We can even use built-in function help on it:
>>> help(convert_to_celsius)
Help on function convert_to_celsius in module __main__:
convert_to_celsius(fahrenheit)
This shows the first line of the function definition, which we call the function
header. (Later in this chapter, we'll show you how to add more help
documentation to a function.)
Here is a quick overview of how Python executes the following code:
>>> def convert_to_celsius(fahrenheit):
... return (fahrenheit - 32) * 5 / 9
...
>>> convert_to_celsius(80)
26.666666666666668
1. Python executes the function definition, which creates the function object
(but doesn’t execute it yet).
2. Next, Python executes function call convert_to_celsius(80). To do this, it assigns
80 to fahrenheit (which is a variable). For the duration of this function call,
fahrenheit refers to 80.
3. Python now executes the return statement. fahrenheit refers to 80, so the
expression that appears after return is equivalent to (80 - 32) * 5 / 9. When
Python evaluates that expression, 26.666666666666668 is produced. We use
the word return to tell Python what value to produce as the result of the
function call, so the result of calling convert_to_celsius(80) is 26.666666666666668.
4. Once Python has finished executing the function call, it returns to the
place where the function was originally called.
Here is an image showing this sequence: (1) the definition of
convert_to_celsius, (2) the call convert_to_celsius(80), (3) the execution of
the return statement in the function body, and (4) the jump back to the rest
of the program.
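The same pattern gives us the reverse conversion. Here is a sketch; convert_to_fahrenheit is our own name, not a function defined in the book:

```python
# Invert the Celsius formula: multiply by 9/5, then add 32.
def convert_to_fahrenheit(celsius):
    return celsius * 9 / 5 + 32

print(convert_to_fahrenheit(100.0))   # 212.0
print(convert_to_fahrenheit(0.0))     # 32.0
```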
A function definition is a kind of Python statement. The general form of a
function definition is as follows:
def «function_name»(«parameters»):
«block»
Keywords Are Words That Are Special to Python
Keywords are words that Python reserves for its own use. We can’t use them except
as Python intends. Two of them are def and return. If we try to use them as either
variable names or as function names (or anything else), Python produces an error:
>>> def = 3
  File "<stdin>", line 1
def = 3
^
SyntaxError: invalid syntax
>>> def return(x):
  File "<stdin>", line 1
def return(x):
^
SyntaxError: invalid syntax
Here is a complete list of Python keywords (we’ll encounter most of them in this book):
False assert del for in or while
None break elif from is pass with
True class else global lambda raise yield
and continue except if nonlocal return
as def finally import not try
The function header (that’s the first line of the function definition) starts with
def, followed by the name of the function, then a comma-separated list of
parameters within parentheses, and then a colon. A parameter is a variable.
You shouldn't have two functions with the same name in the same file. It isn't an
error, but if you do it, the second function definition replaces the first one, much
like assigning a value to a variable a second time replaces the first value.
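Here is a small sketch (our own) showing that silent replacement:

```python
# The second def replaces the first, just as a second assignment to a
# variable replaces its first value.
def greet():
    return "hello"

def greet():
    return "bonjour"

print(greet())   # bonjour -- the first definition is gone
```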
Below the function header and indented (four spaces, as per Python’s style
guide) is a block of statements called the function body. The function body
must contain at least one statement.
Most function definitions will include a return statement that, when executed,
ends the function and produces a value. The general form of a return statement
is as follows:
return «expression»
When Python executes a return statement, it evaluates the expression and then
produces the result of that expression as the result of the function call.
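A consequence worth knowing (our note; the book covers it in more depth later): if execution reaches the end of a function body without hitting a return statement, the call produces the special value None.

```python
# The multiplication happens, but its result is discarded because there is
# no return statement, so the call produces None.
def double_no_return(x):
    x * 2

print(double_no_return(5))   # None
```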
Using Local Variables for Temporary Storage
Some computations are complex, and breaking them down into separate steps
can lead to clearer code. In the next example, we break down the evaluation
of the quadratic polynomial ax² + bx + c into several steps. Notice that all the
statements inside the function are indented the same number of spaces so that
they are aligned with each other. You may want to type this example into
an editor first (without the leading >>> and ...) and then paste it to the Python
shell. That makes fixing mistakes much easier:
>>> def quadratic(a, b, c, x):
... first = a * x ** 2
... second = b * x
... third = c
... return first + second + third
...
>>> quadratic(2, 3, 4, 0.5)
6.0
>>> quadratic(2, 3, 4, 1.5)
13.0
Variables like first, second, and third that are created within a function are called
local variables. Local variables get created each time that function is called, and
they are erased when the function returns. Because they only exist when the
function is being executed, they can’t be used outside of the function. This means
that trying to access a local variable from outside the function is an error, just
like trying to access a variable that has never been defined is an error:
>>> quadratic(2, 3, 4, 1.3)
11.280000000000001
>>> first
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'first' is not defined
A function’s parameters are also local variables, so we get the same error if
we try to use them outside of a function definition:
>>> a
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'a' is not defined
The area of a program that a variable can be used in is called the variable's
scope. The scope of a local variable runs from the statement that creates it to
the end of the function.

If we call a function with the wrong number of arguments, we get a TypeError
(the argument values shown here are a representative example):

>>> quadratic(1, 2, 3)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: quadratic() takes exactly 4 arguments (3 given)
Remember that you can call built-in function help to find out information
about the parameters of a function.
Tracing Function Calls in the Memory Model
Read the following code. Can you predict what it will do when we run it?
>>> def f(x):
... x = 2 * x
... return x
...
>>> x = 1
>>> x = f(x + 1) + f(x + 2)
That code is confusing, in large part because x is used all over the place.
However, it is pretty short and it only uses Python features that we have seen
so far: assignment statements, expressions, function definitions, and function
calls. We’re missing some information: Are all the x’s the same variable? Does
Python make a new x for each assignment? For each function call? For each
function definition?
Here’s the answer: whenever Python executes a function call, it creates a
namespace (literally, a space for names) in which to store local variables for
that call. You can think of a namespace as a scrap piece of paper; Python
writes down the local variables on that piece of paper, keeps track of them
as long as the function is being executed, and throws that paper away when
the function returns.
Separately, Python keeps another namespace for variables created in the
shell. That means that the x that is a parameter of function f is a different
variable than the x in the shell!
Reusing Variable Names Is Common
Using the same name for local variables in different functions is quite common. For
example, imagine a program that deals with distances—converting from meters to
other units of distance, perhaps. In that program, there would be several functions
that all deal with these distances, and it would be entirely reasonable to use meters
as a parameter name in many different functions.
Let’s refine our rules from Functions That Python Provides, on page 31, for
executing a function call to include this namespace creation:
1. Evaluate the arguments left to right.
2. Create a namespace to hold the function call’s local variables, including
the parameters.
3. Pass the resulting argument values into the function by assigning them
to the parameters.
4. Execute the function body. As before, when a return statement is executed,
execution of the body terminates and the value of the expression in the
return statement is used as the value of the function call.
From now on in our memory model, we will draw a separate box for each
namespace to indicate that the variables inside it are in a separate area of
computer memory. The programming world calls this box a frame. We separate
the frames from the objects by a vertical dotted line:
Frames Objects
Frames for namespaces
go here
Objects go here
Using our newfound knowledge, let’s trace that confusing code. At the
beginning, no variables have been created; Python is about to execute the
function definition. We have indicated this with an arrow:
>>> def f(x):➤
... x = 2 * x
... return x
...
>>> x = 1
>>> x = f(x + 1) + f(x + 2)
As you’ve seen in this chapter, when Python executes that function definition,
it creates a variable f in the frame for the shell’s namespace plus a function
object. (Python didn’t execute the body of the function; that won’t happen
until the function is called.) Here is the result:
Frames: the frame for the shell contains f, which refers to id1.
Objects: id1 is a function object, f(x).
Now we are about to execute the first assignment to x in the shell.
>>> def f(x):
... x = 2 * x
... return x
...
>>> x = 1➤
>>> x = f(x + 1) + f(x + 2)
Once that assignment happens, both f and x are in the frame for the shell:
f refers to the function object id1, and x refers to the int 1.
Now we are about to execute the second assignment to x in the shell:
>>> def f(x):
... x = 2 * x
... return x
...
>>> x = 1
>>> x = f(x + 1) + f(x + 2)➤
Following the rules for executing a function call, Python evaluates the argument,
x + 1. In order to find the value for x, Python looks in the current frame.
The current frame is the frame for the shell, and its variable x refers to 1, so
x + 1 evaluates to 2.
Now we have evaluated the argument to f. The next step is to create a
namespace for the function call. We draw a frame, write in parameter x, and
assign 2 to that parameter:
Frames: the frame for the shell contains f (referring to id1) and x (referring
to id2); the frame for the call on f contains x, referring to id3.
Objects: id1 is the function f(x), id2 is the int 1, and id3 is the int 2.
Notice that there are two variables called x, and they refer to different values.
Python will always look in the current frame, which we will draw with a
thicker border. Python now executes the first statement of the function body,
x = 2 * x. In the current frame, x refers to 2, so that
expression evaluates to 4. Python finishes executing that assignment statement
by making x refer to that 4:
Frames: the frame for the shell contains f (id1) and x (id2); the frame for the
call on f now has x referring to id4.
Objects: id1 is the function f(x), id2 is the int 1, id3 is the int 2, and id4
is the int 4.
We are now about to execute the second statement of function f:
>>> def f(x):
... x = 2 * x
... return x➤
...
>>> x = 1
>>> x = f(x + 1) + f(x + 2)
This is a return statement, so we evaluate the expression, which is simply x.
Python looks up the value for x in the current frame and finds 4, so that is
the return value:
Frames: the frame for the shell contains f (id1) and x (id2); the frame for the
call on f has x referring to id4, and its return value is also id4.
Objects: id1 is the function f(x), id2 is the int 1, id3 is the int 2, and id4
is the int 4.
When the function returns, Python comes back to this expression: f(x + 1) +
f(x + 2). Python just finished executing f(x + 1), which produced the value 4. It
then executes the right function call: f(x + 2).
Following the rules for executing a function call, Python evaluates the argu-
ment, x + 2. In order to find the value for x, Python looks in the current frame.
The call on function f has returned, so that frame is erased: the only frame
left is the frame for the shell, and its variable x still refers to 1, so x + 2 evalu-
ates to 3.
Now we have evaluated the argument to f. The next step is to create a
namespace for the function call. We draw a frame, write in the parameter x,
and assign 3 to that parameter:
Again, we have two variables called x. Python now executes the first statement
of the function body, x = 2 * x. In the current frame, x refers to 3, so that
expression evaluates to 6. Python finishes executing that assignment statement
by making x refer to that 6:
Frames: the frame for the shell contains f (id1) and x (id2); the frame for the
second call on f has x referring to id6.
Objects: id1 is the function f(x), id2 is the int 1, id3 is the int 2, id4 is
the int 4, id5 is the int 3, and id6 is the int 6.
We are now about to execute the second statement of function f:
>>> def f(x):
... x = 2 * x
... return x➤
...
>>> x = 1
>>> x = f(x + 1) + f(x + 2)
This is a return statement, so we evaluate the expression, which is simply x.
Python looks up the value for x in the current frame and finds 6, so that is
the return value (as shown in the figure on page 46).
Frames: the frame for the shell contains f (id1) and x (id2); the frame for the
second call on f has x referring to id6, and its return value is also id6.
Objects: id1 is the function f(x), id2 is the int 1, id3 is the int 2, id4 is
the int 4, id5 is the int 3, and id6 is the int 6.
When the function returns, Python comes back to this expression: f(x + 1) +
f(x + 2). Python just finished executing f(x + 2), which produced the value 6.
Both function calls have been executed, so Python applies the + operator to
4 and 6, giving us 10.
We have now evaluated the right side of the assignment statement; Python
completes it by making the variable on the left side, x, refer to 10:
Frames: the frame for the shell contains f (referring to id1) and x (now
referring to id7).
Objects: id1 is the function f(x), id2 is the int 1, id3 is the int 2, id4 is
the int 4, id5 is the int 3, id6 is the int 6, and id7 is the int 10.
Phew! That’s a lot to keep track of. Python does all that bookkeeping for us,
but to become a good programmer it's important to understand each individual
step.
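Running the traced code confirms the bookkeeping above (a quick check of ours):

```python
# The confusing code from the trace: the first call computes f(2) == 4, the
# second computes f(3) == 6, and the shell's x ends up referring to 10.
def f(x):
    x = 2 * x
    return x

x = 1
x = f(x + 1) + f(x + 2)
print(x)   # 10
```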
Designing New Functions: A Recipe
Writing a good essay requires planning: deciding on a topic, learning the
background material, writing an outline, and then filling in the outline until
you’re done.
Similarly, writing a good function also requires planning. You have an idea
of what you want the function to do, but you need to decide on the details.
Every time you write a function, you need to figure out the answers to the
following questions:
• What do you name the function?
• What are the parameters, and what types of information do they refer to?
• What calculations are you doing with that information?
• What information does the function return?
• Does it work like you expect it to?
The function design recipe helps you find answers to all these questions.
This section describes a step-by-step recipe for designing and writing a
function. Part of the outcome will be a working function, but almost as
important is the documentation for the function. Python uses three double
quotes to start and end this documentation; everything in between is meant
for humans to read. This notation is called a docstring, which is short for
documentation string.
Here is an example of a completed function. We’ll show you how we came up
with this using a function design recipe (FDR), but it helps to see a completed
example first:
>>> def days_difference(day1: int, day2: int) -> int:
...     """Return the number of days between day1 and day2, which are
...     both in the range 1-365 (thus indicating the day of the
...     year).
...
...     >>> days_difference(200, 224)
...     24
...     >>> days_difference(50, 50)
...     0
...     >>> days_difference(100, 99)
...     -1
...     """
...     return day2 - day1
...
Here are the parts of the function, including the docstring:
• The first line is the function header. We have annotated the parameters
with the types of information that we expect to be passed to them (we
expect both day1 and day2 to refer to values of type int), and the int after the
-> is the type of value we expect the function to return. These type anno-
tations are optional in Python, but we will use them throughout the book.
• The second line has three double quotes to start the docstring, which begins
with a description of what the function will do when it is called. The description
mentions both parameters and describes what the function returns.
• Next are some example calls and return values as we would expect to see
in the Python shell. (We chose the first example because that made day1
smaller than day2, the second example because the two days are equal,
and the third example because that made day1 bigger than day2.)
• Next are three double quotes to end the docstring.
• The last line is the body of the function.
There are five steps to the function design recipe. It may seem like a lot of
work at first, and you will often be able to write a function without rigidly
following these steps, but this recipe can save you hours of time when you’re
working on more complicated functions.
1. Examples. The first step is to figure out what name you want to give to
your function, what arguments it should have, and what information it
will return. This name is often a short answer to the question, “What does
your function do?” Type a couple of example calls and return values.
We start with the examples because they’re the easiest: before we write
anything, we need to decide what information we have (the argument
values) and what information we want the function to produce (the return
value). Here are the examples from days_difference:
... >>> days_difference(200, 224)
... 24
... >>> days_difference(50, 50)
... 0
... >>> days_difference(100, 99)
... -1
2. Header. The second step is to decide on the parameter names, parameter
types, and return type and write the function header. Pick meaningful
parameter names to make it easy for other programmers to understand
what information to give to your function. Include type annotations: Are
you giving it integers? Floating-point numbers? Maybe both? We’ll see a
lot of other types in the upcoming chapters, so practicing this step now
while you have only a few choices will help you later. If the answer is,
“Both integers and floating-point numbers,” then use float because integers
are a subset of floating-point numbers.
Also, what type of value is returned? An integer, a floating-point number,
or possibly either one of them?
The parameter types and return type form a type contract because we are
claiming that if you call this function with the right types of values, we’ll
give you back the right type of value. (We’re not saying anything about
what will happen if we get the wrong kind of values.)
Here is the header from days_difference:
>>> def days_difference(day1: int, day2: int) -> int:
3. Description. Write a short paragraph describing your function: this is what
other programmers will read in order to understand what your function
does, so it’s important to practice this! Mention every parameter in your
description and describe the return value. Here is the description from
days_difference:
... """Return the number of days between day1 and day2, which are
... both in the range 1-365 (thus indicating the day of the
... year).
4. Body. By now, you should have a good idea of what you need to do in
order to get your function to behave properly. It’s time to write some code!
Here is the body from days_difference:
... return day2 - day1
5. Test. Fire up the Python shell, run your example calls, and check that
you get back the values you expect.

Designing Three Birthday-Related Functions

How do we figure out what day of the week a given birthday will fall on, if
we know what day of the week it is today, what day of the year today is,
and what day of the year the birthday is on? For example, if today is the third
day of the year and it's a Thursday, and a birthday is on the 116th day of the
year, what day of the week will it be on that birthday?
We’ll design three functions that together will help us do this calculation.
We’ll write them in the same file; until we get to Chapter 6, A Modular
Approach, on page 99, we’ll need to put functions that we write in the same
file if we want to be able to have them call one another.
We will represent the day of the week using 1 for Sunday, 2 for Monday, and
so on:
Number   Day of the Week
1        Sunday
2        Monday
3        Tuesday
4        Wednesday
5        Thursday
6        Friday
7        Saturday
We are using these numbers simply because we don’t yet have the tools to
easily convert between days of the week and their corresponding numbers.
We’ll have to do that translation in our heads.
For the same reason, we will also ignore months and use the numbers 1
through 365 to indicate the day of the year. For example, we’ll represent
February 1st as 32, since it’s the thirty-second day of the year.
How Many Days Difference?
We’ll start by seeing how we came up with function days_difference. Here are
the function design recipe steps. Try following along in the Python shell.
1. Examples. We want a clear name for the difference in days; we’ll use
days_difference. In our examples, we want to call this function and state
what it returns. If we want to know how many days there are between
the 200th day of the year and the 224th day, we can hope that this will
happen:
... >>> days_difference(200, 224)
... 24
What are the special cases? For example, what if the two days are the
same? How about if the second one is before the first?
... >>> days_difference(50, 50)
... 0
... >>> days_difference(100, 99)
... -1
Now that we have a few examples, we can move on to the next step.
2. Header. We have a couple of example calls. The arguments in our function
call examples are all integers, and the return values are integers too, so
that gives us the type contract. In the examples, both arguments represent
a number of days, so we’ll name them day1 and day2:
>>> def days_difference(day1: int, day2: int) -> int:
3. Description. We’ll now describe what a call on the function will do. Because
the documentation should completely describe the behavior of the function,
we need to make sure that it’s clear what the parameters mean:
... """Return the number of days between day1 and day2, which are
... both in the range 1-365 (thus indicating the day of the
... year).
4. Body. We’ve laid everything out. Looking at the examples, we see that we
can implement this using subtraction. Here is the whole function again,
including the body:
>>> def days_difference(day1: int, day2: int) -> int:
...     """Return the number of days between day1 and day2, which are
...     both in the range 1-365 (thus indicating the day of the
...     year).
...
...     >>> days_difference(200, 224)
...     24
...     >>> days_difference(50, 50)
...     0
...     >>> days_difference(100, 99)
...     -1
...     """
...     return day2 - day1
...
5. Test. To test it, we fire up the Python shell and copy and paste the calls
into the shell, checking that we get back what we expect:
>>> days_difference(200, 224)
24
>>> days_difference(50, 50)
0
>>> days_difference(100, 99)
-1
Here’s something really cool. Now that we have a function with a docstring,
we can call help on that function:
>>> help(days_difference)
Help on function days_difference in module __main__:

days_difference(day1: int, day2: int) -> int
    Return the number of days between day1 and day2, which are
    both in the range 1-365 (thus indicating the day of the
    year).

    >>> days_difference(200, 224)
    24
    >>> days_difference(50, 50)
    0
    >>> days_difference(100, 99)
    -1
What Day Will It Be in the Future?
It will help our birthday calculations if we write a function to calculate what
day of the week it will be given the current weekday and how many days
ahead we’re interested in. Remember that we’re using the numbers 1 through
7 to represent Sunday through Saturday.
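Before walking through the recipe, here is a sketch of the modular arithmetic such a function needs; this get_weekday body is our own guess, not yet the book's finished version:

```python
# Weekdays repeat every 7 days: shift to a 0-based number, wrap around with
# %, then shift back to the 1 (Sunday) through 7 (Saturday) numbering.
def get_weekday(current_weekday, days_ahead):
    return (current_weekday + days_ahead - 1) % 7 + 1

print(get_weekday(5, 3))   # 1: three days after Thursday is Sunday
print(get_weekday(1, 7))   # 1: a week after Sunday is Sunday again
```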
Again, we’ll follow the function design recipe:
1. Examples. We want a short name for what it means to calculate what
weekday it will be in the future. We could choose something like
which_weekday or what_day; we’ll use get_weekday. There are lots of choices.
We
From: Victor A. Wagner Jr. (vawjr_at_[hidden])
Date: 2004-07-25 16:54:25
At Sunday 2004-07-25 13:45, you wrote:
>Hi Gennadiy
>
>----- Mensaje original -----
>De: Gennadiy Rozental <gennadiy.rozental_at_[hidden]>
>Fecha: Domingo, Julio 25, 2004 9:55 pm
>Asunto: [boost] Re: [test] regression with latest change
>tolibs/test/src/test_tools.cpp
>
> > > FYI, latest changes to the aforementioned
> > > file seem to break things for msvc-stlport,
> > > as shown for instance at:
> >
> > Dave tried to use "using namespace std" solution. It's already
> > fixed in cvs.
> >
>
>Thanks for the quick response! Let's see if this
>fixes things up in the next testing cycle.
The results at metacomm seem to be somewhat out of date. According to my
logs I've run the tests both yesterday and today.
Also, I now have the vc8.0 beta and would be happy to add those to my
testing if I only knew how to go about adding a toolset. Any assistance
would be greatly appreciated.
>Best,
>
>Joaquín M López Muñoz
>Telefónica, Investigación y Desarrollo
Now this is kind of dangerous to do since there is no compile time check (Like most things set in markup) but say you want to sort a collection, using the Linq extension methods, but you don’t know what you what to sort on at any given time. On top of that, you have a datagrid and a bunch of sort expressions to deal with. Now you could do something like create a hashtable full of lambda expressions that the key is the sort expression:
Dictionary<String, Func<User, IComparable>> list;

userList = User.GetUserList();
list = new Dictionary<String, Func<User, IComparable>>();
list.Add("UserName", currentUser => currentUser.UserName);
list.Add("UserID", currentUser => currentUser.UserID);

userList.OrderBy(list["UserID"]);
Works just fine, and might be preferable to what I’m about to show. OooOoOO sound eerie?
//This is just to get the property info using reflection. In order to get the value
//from a property dynamically, we need the property info from the class
public static PropertyInfo[] GetInfo<K>(K item) where K : class
{
    PropertyInfo[] propertyList;
    Type typeInfo;

    typeInfo = item.GetType();
    propertyList = typeInfo.GetProperties();
    return propertyList;
}

//This is the dynamic order by func that the OrderBy method needs to work
public static IComparable OrderByProperty<T>(String propertyName, T item) where T : class
{
    PropertyInfo[] propertyList;

    propertyList = GetInfo(item);
    //Here we get the value of that property of the passed in item and make sure
    //to type the object (Which is what GetValue returns) into an IComparable
    return (IComparable)propertyList
        .First(currentProperty => currentProperty.Name == propertyName)
        .GetValue(item, null);
}
And use:
//This takes the current user and calls the OrderByProperty method which in turn
//gives us the Func OrderBy is requesting.
var test = userList.OrderBy(currentUser => DynamicPropertySort.OrderByProperty("UserID", currentUser)).ToList();
Ok so what the hell? I mean intellisense on the OrderBy method doesn’t give much help. Func<User, TKey>. Yeah ok. So basically the return type is open. Well this kind of sucks right? Because I would have to return a Func that already knows the return type. (Be it string, int, etc.) Of course, this would mean we would have to handle each sort expression in code. NOT VERY DYNAMIC IS IT? Well f that. Truth is, what the OrderBy is looking for is a Func that takes in a User and returns something it can compare. This is where IComparable comes in.
The OrderBy has to take the returned value, say UserID which is an int, and figure out how to compare it to another value. Pretty simple. So as long as the property you are ordering by uses IComparable, you’re good to go. Pretty nice huh?
Now I would suggest, if you use this (HAHAHAHA), that you cache a dictionary of the property info with the class type as the key, so that you don’t have to use as much reflection every time. I just didn’t put that in.
USING
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Text;
Source: http://byatool.com/tag/order-by/
BadNameReuse
Bad Name Reuse is the practice or act of using a name (such as a URI in RDF) in a way which is incompatible or inconsistent with prior use.
We often imagine the first several uses of a name being made by some self-consistent entity, the "namer", who is likely to use the name "properly" for at least a little while. After that... when time goes by and others start to use the name in different ways... that's when bad reuse can start to occur.
It can be done by accident (a misunderstanding, a typo), with benign purpose (trying to move the meaning in a better direction), or with malice (trying to redefine the terms of a contract).
It is different from NamespaceSquatting, but they are intertwined.
It is even slightly different from NamespaceDistortion, which is based on the idea of incompatibility with the namespace document, instead of with all prior documents.
Source: https://www.w3.org/wiki/BadNameReuse
The bigger version of the problem you describe is often called "configuration management". In addition to the items you list, the better approaches also consider versioning issues and the need to run multiple versions of libraries in the same system.
One approach that seems to have some traction at the moment is OSGI.
but there is more. OSGi as far as I can read from Wikipedia seems to handle the "other" end of the spectrum from what I had in mind. Bundling code into components that can be started, stopped, installed, uninstalled. Seems like it solves dependency problems using some kind of name server functionality, which is a nice way to solve it for dynamic behaviors.
But the kinds of dependencies I was thinking of are more of the kind "this object is now faulty, what other objects are affected by it?", as an example.
I guess you could divide it into two categories:
The first bullet is what would need language support. The second would probably be a compiler/interpreter concern only.
Making dependencies first-class seems to be at odds with the abstraction principle: to be able to observe them, let alone pass them around, you'll need to break the boundary of the thing that encapsulates them (package, executable, closure, process, etc.). Of course, if the binding is immutable (à la lambda calculus, static linkage) then it's a non-problem. Plan 9 namespaces are another example, where a parent can set up a namespace for the child, after which it becomes private to the child.
If bindings are mutable by 3rd parties then I would stick with the abstraction principle (assuming no harm by default) rather than deal with the fall-out from a dependency mess that would ensue if abstraction is broken...
external dependencies define many dimensions like
- component version number
- package / namespace
- interfaces exported by component ;
- platform description / required operating system / JVM version ; etc. etc.
- other packages required by the component
- documentation of interfaces.
So you are more tied to the execution environment (JVM, .NET, native OS) than to the language.
With Java you have that the JVM is a more general and fundamental setting than the language, this applies to most cases.
You might take a look how Maven does a good job at managing dependencies for Java and co.
Most aspects of a dependency are external to the language; so they can't be expressed in a programming language.
... but quite a lot exists within the language itself too. Imagine types for example. Using an object oriented model, you can easily see "is-a" relationships, which also translates into dependencies. Any object aggregated into another object is another type of dependency. An object using another object is yet another type of dependency.
When objects are shared in a big network, the dependency information might be interesting to have. I think this is mostly useful in error situations, but it might be useful for other situations too. Combined with the possibility to interrogate versions would make it even more useful. Then you could place restrictions on what version of certain objects that are allowed to be combined with what version of other objects.
I was just thinking someone might be triggered by the idea and come up with some useful scenarios where dependencies as language feature would be good to have. But it seems like this didn't happen. At least not yet...
In the java world they have annotations
An annotation adds metadata to the defined class / type; the metadata can be looked up/checked at run time. It is possible to define new kinds of annotations.
Do you think that this kind of per class metadata could express version information adequately for your purposes ?
so I guess the answer is yes. But to make this useful the compiler needs to be aware of it and what it is, so inheritance etc propagates the information correctly.
A possible implementation would be to have each object hold a reference to the object(s) and functions that lead to that object. Of course, such scheme only works when objects are immutable (=values).
Capturing runtime dependencies would allow us to do online (deep) impact analysis or offline postmortem analysis.
Another plus is full traceability. We would ultimately know the sources of all important (derived) data - including software- and how they interplay. This is something that banks, and their regulators, are very keen on.
Having dependencies as first class entities in a language makes it possible to store dependency information in some kind of external storage (e.g. database), which might be nice for banking application for example.
Using a compiler switch you could also store all externally visible dependencies in a file suitable for a build system. This way, you could have full dependency tracking not only for file dependencies but also for configurations (in C this would be macros), compiler switches etc. This by itself does not require dependencies to be first class entities though.
... I'd say there are some things that could become easier with language support. Let's say you have something you call "configuration", which is basically an object with a name, description and value. Wouldn't it be nice to be able to tell the user where this configuration comes from? A text file? Command line? Etc.
Sure, simpler cases are easy to program if you design for it beforehand. But for more complex cases, it would be nice to have some language support. So you can just write
print myConf.origin
And be away with it...
My intention wasn't primarily to be able to change the dependencies in weird ways. Rather, I envisioned a more introspective, maybe read-only, way of gathering dependency information. It could be nice to write:
if a isDependentOn b then
I.e., if b is needed to calculate a, then maybe I want to take another route. I don't see why this would break boundaries of things, just by asking how things relates.
And yes, versioning would be nice to have as a language feature too. But I'll leave that for another thread sometime in the future; it's a bit too big a topic to throw into the mix.
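A toy, read-only version of the `isDependentOn` query sketched above is easy to build when values are immutable and each derived value records what produced it. All names here (`Value`, `derive`, `is_dependent_on`) are invented for illustration, not from any of the systems discussed:

```python
class Value:
    """An immutable value that remembers the function and inputs that produced it."""
    def __init__(self, data, origin=None, inputs=()):
        self.data = data
        self.origin = origin          # name of the producing function, if any
        self.inputs = tuple(inputs)   # the Values this one was derived from

def derive(fn, *inputs):
    """Apply fn to the raw data and record the provenance on the result."""
    return Value(fn(*(v.data for v in inputs)), origin=fn.__name__, inputs=inputs)

def is_dependent_on(a, b):
    """True if value b appears anywhere in a's derivation history."""
    return any(v is b or is_dependent_on(v, b) for v in a.inputs)

def add(x, y):
    return x + y

price = Value(100)
tax = Value(19)
total = derive(add, price, tax)

print(total.data, total.origin)        # 119 add
print(is_dependent_on(total, price))   # True
print(is_dependent_on(price, tax))     # False
```

This is exactly the `if a isDependentOn b` test: a recursive walk over recorded inputs, with no ability to mutate the bindings, so no abstraction boundary is broken by asking how things relate.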
I think you cannot have a dependency system without also having codependencies.
Dependencies form a graph from a set of pairs (A,B) meaning A depends on B and throw in R, the root. A simple example is control flow which tells that some function f calls g so f depends on g. Given some library of functions we know how to start with R and get the transitive closure (the algorithm is called garbage collection :)
Codependencies are the dual. The elements are a set of triples (C, A, B) meaning if C then A else B. A simple example is conditional compilation. Codependencies are evaluated by starting with a set of base conditions (macros in C) and flowing towards the root, which is the resultant program.
In the language C, codependencies are done first (by the pre-processor) and then the dependencies (by the linker searching libraries).
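The dependency half of that model is straightforward to sketch. Assuming pairs (A, B) mean "A depends on B", computing the transitive closure from the root R is exactly the mark phase the comment above compares to garbage collection (the names below are illustrative):

```python
def reachable(pairs, root):
    """Transitive closure of 'depends on' edges, starting from the root."""
    graph = {}
    for a, b in pairs:
        graph.setdefault(a, set()).add(b)
    seen = set()
    stack = [root]
    while stack:
        node = stack.pop()
        if node in seen:
            continue        # already marked
        seen.add(node)
        stack.extend(graph.get(node, ()))
    return seen

# R calls f, f calls g, g calls h; k also uses h but is never reached from R,
# so k is "garbage" with respect to this root.
deps = [("R", "f"), ("f", "g"), ("g", "h"), ("k", "h")]
print(sorted(reachable(deps, "R")))  # ['R', 'f', 'g', 'h']
```

Codependencies would need the extra conditional triples (C, A, B) and a solver that flows from base conditions toward the root, which this sketch deliberately leaves out.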
There's been some work on constraint languages for configuration management. In general, solving package constraints is an NP-complete problem. You might look at OPIUM, which is a constraint-based package manager. The original paper is available here. They appear to have incorporated it into Eclipse.
Beyond just instantiating/installing the correct dependencies, there is the issue of configuration value dependencies. For example, if a component is parameterized by a value which it externalizes, then that value needs to be propagated to any client components. I've looked at this issue in the context of configuration languages. I borrowed some ideas from programming language module systems when designing the configuration language for Engage.
Engage was implemented as an external DSL. I've thought about how it might be more tightly integrated into a programming language. A big challenge, as mentioned in other comments, is that many dependencies involve the external environment. I suggest providing some constructs in the language to make the high level decisions along with extensible hooks that can be used by the programmer to interface to external tools. This is similar to the approach used by Java class-loaders. As for specific language constructs, perhaps first-class modules along with a constraint notation to specify module dependencies/imports.
FYI Felix has two kinds of dependencies specifiable in the language. First you need to know it's a cross-cross-compiler: it generates C++ as an intermediate language then compiles and links that. The first kind of dependency you can state is for the first stage:
header gmpxx = '#include "gmpxx.h"';
type mpz = "mpz_t" requires gmpxx;
The requires clause here specifies that if you use the type mpz the compiler has to emit an include for the header file defining it. If you do not use the type mpz the header may not be included: whether you use the type is determined by the dependencies implicit in the program call graph.
The second type of dependency you can specify is like this,
which is a more correct version of the above:
header gmpxx = '#include "gmpxx.h"' requires package "gmpxx";
This says that if you use the header, you need the package too. The package is an abstract name which is the basename of a configuration file containing instructions on how to link the gmpxx library. It can also include the header information and if so the include spec can be removed from the program.
There's more, but these are the two basic language features involved. Actually the requirement clause allows expressions with alternatives but I could not figure out how to make that work (lacking a SAT solver).
The utility of this setup is that provided you construct your configuration database correctly, the program requires only that you label types (and occasionally functions) with an abstract dependency, and the compiler driver does all the rest, allowing Felix programs to be run without needing any compiler switches, linker instructions, or other crud, whilst the program source remains relatively portable because the dependencies are only specified in the abstract.
The big problem I have with this isn't the lack of a constraint solver to handle alternatives, but the fact that I can still only specify *codependencies* like this:
macro val WINDOWS = true;
macro val UNIX = false;
..
if WINDOWS do
fun dlopen : string -> address = "LoadLibrary($1)";
elif UNIX do
fun dlopen : string -> address = "dlopen($1)" requires package "dlfcn";
else ERROR;
done
In other words, traditional conditional compilation. The point here is that the OS macro tag is driving the code choice, so the code generated depends on the OS tag, which is the inverse of saying that the LoadLibrary version depends on WINDOWS, so I cannot use the requires clause. In other words I have no decent language construct here for codependencies (conditional compilation does not rate as "decent" :) Suggestions would be welcome!
Have a look at CC. I think you'll like it.
I haven't read it completely yet, only skimmed it a bit. But it seems like something that I would like to be able to support. The question for me is in what way. I guess this is always the question when it comes to how to implement a calculus...
I wonder whether dependency tracking would be able to solve those kinds of problems implicitly. Imagine a function returning current OS variant (Windows, Linux, FreeBSD, ...), and then some conditional module/header inclusions depending on what variant the function returned.
With full dependency tracking the compiler should be able to figure out that for this particular compile this function will never return anything else then say Linux. Because of this all else will come naturally since the modules/headers specifies which OS variant(s) they support.
The challenge will probably be that sometimes you want a generic result supporting as many different environments as possible for the same binary, sometimes you only want to support a specific machine but often something in between. Should be possible to solve with dependency description, but in this case it needs to be specified by the programmer rather than implicitly determined.
Managed to post same comment twice, but haven't figured out how to remove the copy...
Mark Burgess, the author of cfengine, has written quite a bit about configuration management. Please see:
Testable System Administration, in Communications of the ACM. This is an opinion piece, with a rant containing Burgess's hatred for package-based approaches to configuration management, notably cfengine's biggest competitor, puppet.
I have a hard time understanding anything Burgess writes, so god bless you if you can understand him easily. e.g. the documentation for cfengine literally says things like, "We speak of a promiser (the abstract object making the promise), the promisee is the abstract object to whom the promise is made, and then there is a list of associations that we call the `body' of the promise, which together with the promiser-type tells us what it is all about.". I have always had negative surface impressions of cfengine as:
That said, the major motivation for considering promise theory is that it is designed for decentralized authority. In other words, autonomous agents and that even touches concepts like "independent compilation" as opposed to "separate compilation".
It is also really queer that Mark claims his approach to be simpler, but he can't describe his ideas in anything graphically simpler than a rat's nest of a cyclical graph structure. There are more scalable approaches to rendering dependencies, from design theory, known as dependency structure matrices, and already have been applied to software. I would like for languages to have a library that allows analyzing programs in the language for meaningful dependency properties, like type dependencies. Even better if the language had a library for micro-refactoring to fix dependency issues.
You also probably want to consider how to manage all this state you want to track. I know you may not look at this as an End-to-end arguments problem, but at large scales, it looks to me as such. One approach worth looking at is Trickles.
You may also want to use the search term "provenance" as well.
That said, for questions of provenance, good system design should mitigate its necessity. For example, many so-called Dependency Injection frameworks allow multiple processing phases, from local to external. For example, an XML configuration file, followed by configuration within the module e.g. via attributes/annotations. Scattering such configuration generally implies poor system design, as the system's configuration is dependent on dynamic dispatch. It is far better to have a static configuration for a system.
After having read debates about whether lazy or eager evaluation makes the best resource utilization, I'm wondering whether dependency tracking that also includes resource usage could help in finding the best compromise?
My idea would be that the application would be able for each moment describe its resource needs to advance a bit further, and could also give a number of combinations in falling usefulness to help a scheduler. The scheduler could then decide which alternative to provide and when based on fairness and priority. This could span over several CPUs, could be used in NUMA environments or even distributed over a network.
Would this be too complicated to implement, or would this be possible?
Source: http://lambda-the-ultimate.org/node/4657
But today, I will not explain the routing concept in detail (what routing is, how it works, how routes are created, and so on). Instead, I will try to explain the custom routing concept. When you request an Asp.Net MVC page, the request passes through the routing architecture. The default route pattern is {controller}/{action}/{id}, but sometimes we need to create a custom route that fulfills our requirement.
When you create an Asp.Net MVC project, it automatically creates a default route inside RouteConfig.cs, which is located in the App_Start folder.
Default Route
public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
        );
    }
}
Note: you can also put this RegisterRoutes method directly inside the Global.asax.cs file.
This RouteConfig is registered at application start, inside the Global.asax.cs file.
Custom Routing
You can develop an Asp.Net MVC application without using custom routing, but sometimes there is a requirement to shape the URL in a way that is not possible with simple routing. In that scenario, we use custom routing.
I will give you two examples where simple routing cannot achieve the task but custom routing can.
Example I
Now, let's take a simple example of custom routing. Look at the URL below; can it be produced by the default routing alone?
Here you can see that after my domain name the URL is something like “/articles/categoryname/urlofarticles”. So, how can we achieve this? This is where you need custom routing.
You need to add a new route inside the RouteConfig.cs.
routes.MapRoute("ArticlesPost",
    "articles/{category}/{url}",
    new { controller = "Article", action = "ViewArticle" },
    new { category = @"\S+", url = @"\S+" });
So, a question arises here: how do we use it with a link or something else? You can use this custom route in the following way.
@Html.RouteLink(Model.PostTitle, "ArticlesPost", new { category = Model.postCategory.Category, url = Model.PostUrl }, new { @class = "title" })
Here, if we pass Model.postCategory.Category = ”csharp” and Model.PostUrl = ”mytesturl”, it will generate a URL like the one below.
Example II
Now, let's take one more example to understand custom routing. Look at the URL below; can it be produced by the default routing alone?
Here you can see that after my domain name the URL is something like “/articles/dateofarticle”. So, how can we achieve this? Here, too, you need custom routing.
You need to add a new route inside the RouteConfig.cs.
routes.MapRoute("ArticlesPostByDate",
    "articles/{postdate}",
    new { controller = "Article", action = "ArticleByDate" },
    new { postdate = @"\d{2}-\d{2}-\d{4}" });
So, how can we use it?
@Html.RouteLink(Model.PostDate, "ArticlesPostByDate", new { postdate = Model.PostDate }, new { @class = "title" })
Here, if we pass Model.PostDate = ”16-09-2015”, it will generate a URL like the one below.
Regular Expression Constraints
Sometimes you need to prevent invalid requests from matching a route. You can use regular expression constraints to reject wrong or invalid requests. Regular expression constraints are used to check patterns such as text, integer values, dates, times, currency, etc.
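A language-neutral sketch of what such a constraint does (a hypothetical helper, not the MVC framework's internals): before a route is considered a match, each constrained parameter value must match its whole pattern.

```python
import re

# Route-parameter constraints, mirroring the "ArticlesPostByDate" route above.
constraints = {"postdate": r"\d{2}-\d{2}-\d{4}"}

def satisfies_constraints(route_values):
    """True only if every constrained parameter matches its whole pattern."""
    return all(
        re.fullmatch(pattern, route_values.get(name, "")) is not None
        for name, pattern in constraints.items()
    )

print(satisfies_constraints({"postdate": "16-09-2015"}))  # True
print(satisfies_constraints({"postdate": "2015/09/16"}))  # False
```

A request whose postdate segment fails the pattern simply falls through to the next registered route (or to a 404), which is how invalid requests are kept away from the action.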
Conclusion:
Today we learned how to create custom routing in Asp.Net MVC, with examples.
Mukund Narayan Jha Posted : Last Year
I think what ever you post you also post concept with demo project.
Source: http://www.mukeshkumar.net/articles/mvc/getting-started-with-custom-routing-in-asp-net-mvc
The final modifier can be applied to a variable, method, or class.

If we declare a variable as final, its content cannot be modified; the variable acts like a constant. A final field must be initialized when it is declared.

If we declare a method as final, it cannot be overridden.

If we declare a class as final, it cannot be inherited. We cannot inherit a final class in Java.
In this example, we declare a final variable and later try to modify its value. A final variable cannot be reassigned, so we get a compile-time error.
public class Test
{
    final int a = 10;

    public static void main(String[] args)
    {
        Test test = new Test();
        test.a = 15; // compile error
        System.out.println("a = " + test.a);
    }
}
error: The final field Test.a cannot be assigned
A method declared using the final keyword is known as a final method. It is useful when we want to prevent a method from being overridden.
In this example, we are creating a final method learn() and trying to override it but due to final keyword compiler reports an error.
class StudyTonight
{
    final void learn()
    {
        System.out.println("learning something new");
    }
}

// concept of Inheritance
class Student extends StudyTonight
{
    void learn()
    {
        System.out.println("learning something interesting");
    }

    public static void main(String args[])
    {
        Student object = new Student();
        object.learn();
    }
}
Cannot override the final method from StudyTonight
This will give a compile time error because the method is declared as final and thus, it cannot be overridden. Don't get confused by the extends keyword, we will learn about this in the Inheritance tutorial which is next.
Let's take another example: this time, a final class.
We can create our own final class so that no other class can inherit it.
In this example, we create a final class ABC and try to extend it from the Demo class, but because the class is final the compiler reports an error. See the example below.
final class ABC
{
    int a = 10;

    void show()
    {
        System.out.println("a = " + a);
    }
}

public class Demo extends ABC
{
    public static void main(String[] args)
    {
        Demo demo = new Demo();
    }
}
The type Demo cannot subclass the final class ABC
A final variable that is not initialized at the time of declaration is called a blank final variable. Java allows us to declare a final variable without initialization, but it must then be initialized in the constructor. This means we can set the value of a blank final variable only in a constructor.
In this example, we created a blank final variable and initialized it in a constructor which is acceptable.
public class Demo
{
    // blank final variable
    final int a;

    Demo()
    {
        // initialized blank final
        a = 10;
    }

    public static void main(String[] args)
    {
        Demo demo = new Demo();
        System.out.println("a = " + demo.a);
    }
}
a = 10
A blank final variable declared using the static keyword is called a static blank final variable. It can be initialized only in a static block.
Static blank final variables are used to create static constant for the class.
In this example, we create a static blank final variable which is initialized within a static block. Note that we use the class name to access the variable, because accessing a static variable does not require creating an object of the class.
public class Demo
{
    // static blank final variable
    static final int a;

    static
    {
        // initialized static blank final
        a = 10;
    }

    public static void main(String[] args)
    {
        System.out.println("a = " + Demo.a);
    }
}
a = 10
Source: https://www.studytonight.com/java/final-in-java.php
All the code we need will be in one file, main.cpp. Create that file with the code below:
#include <KApplication>
#include <KAboutData>
#include <KCmdLineArgs>
#include <KMessageBox>
#include <KLocale>
The KAction is built up in a number of steps. The first is including the KAction library and then creating the KAction:
#include <KAction>

...

KAction* clearAction = new KAction(this);
This creates a new KAction called clearAction.
Since the description of the UI is defined with XML, the layout must follow strict rules. This tutorial will not go into great depth on this topic, but for more information, see the detailed XMLGUI page (here is an older tutorial: [1]).
Source: https://techbase.kde.org/User:Icwiener/Test
Here’s some of the recent news on ISO Schematron!
- My XSLT “skeleton” implementation (the latest version of the most commonly used implementation of Schematron) is available in beta from Schematron.com, as open source, non-viral. This version fully supports ISO Schematron (except for abstract patterns, for which a preprocessor has been contributed) and has had a lot of input from members of the schematron-love-in mailing list; it is shaping up nicely I think. (Notable contributions are from Ken Holman, Dave Pawson and Florent Georges.) A variety of different output formats are available as backends, including an ISO SVRL (Schematron Validation Report Language) XML format and a terminate-on-first-error backend.
- Topologi’s Ant Task for Schematron is available now in beta from Schematron.com. The code will be available as open source, non-viral. Thanks to Allette Systems’ Christophe Lauret and Willi Ekasalim for doing the programming on this. It can output text to standard error or collate all the SVRLs into a single XML file.
- Dave Pawson is writing a little online book ISO Schematron tutorial concentrating on using Schematron with XSLT2. I haven’t reviewed it thoroughly yet, but Dave has a good track record.
- Mitre’s Roger Costello has written up two pages, Usage and Features of Schematron and Best way to phrase the Schematron assertion text, that seem pretty sensible to me. Roger followed his usual method of asking people on the XML-DEV maillist and compiling the results.
- Murata Makoto has been preparing the Japanese translation of ISO Schematron, to be used as the text for the Japanese Industrial Standard. He has also been translating other parts of ISO DSDL. The great thing about diligent translators such as Dr Murata and Dr Komachi is that they uncover many practical issues; in Schematron’s case there are a couple of paragraphs in the ISO standard that seem completely reasonable when you know what they are supposed to mean, but actually are pretty cryptic. Murata-san also has pointed out an improvement to the formal specification of Schematron in predicate logic. These are corrections not changes to the semantics, so they might be put through as a corrigendum (corrections procedure) at ISO; however, if the query binding for XSLT2 and EXLT becomes clear in the next month, I might just go for an addendum (a slightly different procedure) or newly dated version. (No existing stylesheets using the default bindings would become incorrect.) Of course, where a country’s national standards system works by adopting translations of international standards, it becomes really important to keep standards in synch: this requires both prompt action to correct problems in the original standard that the translation uncovers, as well as basic stability of the original standard: a moving target complicates life for translators and standardizers.
Thanks for this useful stuff.
How can I send feedback to Roger Costello the "Best way to phrase the Schematron assertion text"? (without subscribing to yet another dev list)
His summary says, "In the case where the Assertion Text is to be seen and used only by developers, alternative (a) may be appropriate: 1. The meeting's start time is before its end time."
I would add that, depending on how the assertion text is displayed, alternative (d), 'should', can be less confusing (even to developers) and thus safer. In a common case, violated assertions are presented in a validation report without any explicit indication that the assertions were violated. Thus if you have an assertion with text "The meeting's start time is before its end time" and this assertion is violated by the document under validation, the developer will see simply "The meeting's start time is before its end time", which is false. If the developer does not realize from the context that "these statements are false", the developer will be very confused (or else may assume that all is well and the document is valid). This is especially problematic if the assertion text is expressed in domain-oriented terms, whose correspondence to elements and attributes is not completely unambiguous.
The wording "The meeting's start time should be before its end time" is much less misleading in this situation. Even in a report that explicitly says, "The follow assertions were violated", the 'should' wording remains clear and accurate.
My 2c, from my experience using Schematron validation in a real-world project.
Rick,
I realize that an XSLT1 template should work fine in an XSLT2 processor, but I was wondering whether you or anyone has built a Schematron processor specifically on XSLT2? I ask only because I suspect it'd be lexically cleaner and terser. If you don't know of any efforts underway to do that, I may write one myself based upon your code, if that is all right with you.
Kurt Cagle
Kurt: I think Florent Georges has a variant that runs the compiler on XSLT2. The main difference is the namespace handling is one line rather than a couple of lines: no dummy attributes needed. If you are thinking of doing something, actually the thing that I really would like to see is someone maintaining Jing's Schematron stylesheet to bring it up to ISO Schematron. James used a different implementation technique than my one, and it would be great for Jing users.
Another thing to consider is how to combine Schematron, NVDL, DSRL and DTTL processing into one set of stylesheets. That is ultimately where things may head. Indeed, compiling RELAX NG into XSLT would be neat too, and usually possible.
Source: http://www.oreillynet.com/xml/blog/2007/02/schematron_news.html
Tools that honor the Output has M Values environment will control whether the output geodataset will store m-values.
Usage notes
- Feature vertices that do not have an m-value will be assigned a value of NaN (Not a Number).
- For shapefiles, storage of m- and z-values is closely connected; if the output has z-values, then regardless of this environment setting, the output will also have m-values.
Dialog syntax
- Same as Input—If the input has m-values, the output will also have m-values. If the input does not have m-values, the output will not have m-values. This is the default value.
- Enabled—The output will have m-values.
- Disabled—The output will not have m-values.
Scripting syntax
arcpy.env.outputMFlag = output_m_flag
Script example
import arcpy

# Set the outputMFlag environment to Enabled
arcpy.env.outputMFlag = "Enabled"
https://desktop.arcgis.com/en/arcmap/latest/tools/environments/output-has-m-values.htm
After the control has been added to the document or spreadsheet, the experience of using the control on the design surface should be very close to that of working with a standard Windows Form. However, there are some differences. The biggest difference appears when you click a Windows Forms control in the document and use the categorized view in the Properties window. If you compare a Windows.Forms.Controls.Button with a Microsoft.Office.Tools.Excel.Controls.Button, you will see the extra properties merged in from the OLEObject. These properties are listed in the Misc category to denote that these properties are coming from OLEObject.
Excel Control Properties That Are Added from OLEObject
The OLEObject merge done for controls in the Microsoft.Office.Tools.Excel.Controls namespace adds several properties to VSTO extended controls that are not in the base Windows.Forms controls. Table 14-1 shows the most important properties that are added for controls in Excel.
Word Control Properties Added from OLEControl
The OLEControl merge done for controls in the Microsoft.Office.Tools.Word.Controls namespace adds several properties to VSTO extended controls that are not in the base Windows.Forms controls. Table 14-2 shows the most important properties that are added for controls in Word.
Many of the properties for controls running in Word are dependent on the wrapping style of the control. If the control is inline with text, the Left, Bottom, Right, Top, and Width properties will throw an exception. Why? Word represents ActiveX controls as either Shapes or InlineShapes depending on how the control is positioned on the document, and the positioning properties are only available on Shapes that are controls whose wrapping style is Behind text or In front of text.
Word controls also have an InlineShape and Shape property that provide you with access to the InlineShape or Shape object in the Word object model corresponding to the control.
https://flylib.com/books/en/2.53.1/properties_merged_from_oleobject_or_olecontrol.html
C API: UCollationElements. More...
#include "unicode/utypes.h"
#include "unicode/ucol.h"
Go to the source code of this file.
C API: UCollationElements.
The UCollationElements API, for example:

    UErrorCode status = U_ZERO_ERROR;
    UCollator *coll;
    UCollationElements *c;
    UChar *str;
    int32_t order;

    str = (UChar *)malloc(sizeof(UChar) * (strlen("This is a test") + 1));
    u_uastrcpy(str, "This is a test");
    coll = ucol_open(NULL, &status);
    c = ucol_openElements(coll, str, u_strlen(str), &status);
    order = ucol_next(c, &status);
    ucol_reset(c);
    order = ucol_prev(c, &status);
    free(str);
    ucol_close(coll);
    ucol_closeElements(c);
Definition in file ucoleitr.h.
This indicates that an error has occurred during processing, or that no more collation elements are to be returned.
Definition at line 30 of file ucoleitr.h.
The UCollationElements struct.
For usage in C programs.
Definition at line 1 of file ucoleitr.h.
Get the maximum length of any expansion sequences that end with the specified comparison order.
This is useful for ....
Get the offset of the current source character.
This is an offset into the text of the character containing the current collation elements.
Get the ordering priority of the next collation element in the text.
A single character may contain more than one collation element.
Open the collation elements for a string.
The UCollationElements retains a pointer to the supplied text. The caller must not modify or delete the text while the UCollationElements object is used to iterate over this text.
Get the ordering priority of the previous collation element in the text.
A single character may contain more than one collation element. Note that internally a stack is used to store buffered collation elements.
Reset the collation elements to their initial state.
This will move the 'cursor' to the beginning of the text. Property settings for collation will be reset to the current status.
Set the offset of the current source character.
This is an offset into the text of the character to be processed. Property settings for collation will remain the same. In order to reset the iterator to the current collation property settings, the API reset() has to be called.
Set the text containing the collation elements.
Property settings for collation will remain the same. In order to reset the iterator to the current collation property settings, the API reset() has to be called.
The UCollationElements retains a pointer to the supplied text. The caller must not modify or delete the text while the UCollationElements object is used to iterate over this text.
https://unicode-org.github.io/icu-docs/apidoc/dev/icu4c/ucoleitr_8h.html
The reality is that create-react-app will be fine for most apps, especially if you're new to React.
As you gain more experience with React, you might have certain requirements for your apps that need custom configuration of the setup files. In this case, you'd need to be able to set up React build tools manually, as create-react-app hides these from you by default.
In this tutorial I'll show you how to set up a React app by manually configuring build tools as we go. This will hopefully give you the confidence to go on and experiment with more complex setups.
Although it may seem a little daunting in the beginning, you'll enjoy all the benefits of having total control over every single configuration setting. And you can decide exactly which tools get included in your app, which may vary from project to project. This approach also allows you to easily incorporate new build tools as they come along (which they do frequently).
Are you ready to create your first React app completely from scratch? Let's do it.
Create the App File Structure
To demonstrate how to set up a React app via manual configuration of the build tools, we'll be building the same, very simple, React app from previous tutorials in this series.
Start by creating a folder called my-first-components-build, and then open a command-line window pointing to this folder.

Type npm init to create a package.json file. This file will contain all the information about the tools used to build your app, plus associated settings. Accept all the default settings and just keep hitting Enter (around ten times) until complete.

If you accepted all the defaults, package.json will look like this:

{
  "name": "my-first-components-build",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}
We now need to add the React and ReactDOM scripts to our project. We'll do this via npm, the package manager for Node.js.
Inside the same command-line directory, enter:
npm install --save react react-dom
This installs both React and ReactDOM, plus any dependencies required by those two modules. You'll notice we now have a new node_modules directory, which is where the modules have been added.

If you take a look at the package.json file, a new dependencies property has been added containing information about the node modules we installed.

"dependencies": {
  "react": "^15.6.1",
  "react-dom": "^15.6.1"
}
This happened because we specified the --save option in our npm install command. This notified npm that we wanted to keep track of our installed project dependencies. This is important if we want to share our project.

Typically, because the node_modules folder is so large, you don't want to try to share it directly. Instead, you share your project without the node_modules folder. Then, when someone downloads your project, all they have to do is type npm install to duplicate the setup directly from package.json.
Note: In npm 5.x, installed modules are automatically saved to package.json. You no longer have to manually specify the --save option.
Inside the my-first-components-build folder, create a new src folder, and add an index.js file to it. We'll come back to this later as we start to create our React app, once we've configured the project setup files.

Add an index.html file inside the same folder with the following code:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Creating a React App Manually, Using Build Tools</title>
</head>
<body>
  <div id="app"></div>
</body>
</html>
We want to be able to compile our app down to a single JavaScript file, and also make use of JSX and ES6 classes and modules. To do this, we need to install Webpack and Babel modules via npm.
Let's install Babel first. Type the following into the command-line window:
npm install --save-dev babel-core babel-loader babel-preset-env babel-preset-react
This installs all of the modules needed for Babel to compile ES6 and JSX code down to standard JavaScript.
Now, let's install Webpack, again via the command line:
npm install --save-dev html-webpack-plugin webpack webpack-dev-server
This installs all of the modules needed for Webpack, a local web server, and enables us to direct Webpack to create a dynamic index.html file in the public folder based on the one we added to the src folder. We can also add a dynamic reference to the bundled JavaScript file inside the HTML file every time the app is built.

After these new modules have been installed, your package.json file will now look like this:

"dependencies": {
  "react": "^15.6.1",
  "react-dom": "^15.6.1"
},
"devDependencies": {
  "babel-core": "^6.25.0",
  "babel-loader": "^7.1.0",
  "babel-preset-env": "^1.5.2",
  "babel-preset-react": "^6.24.1",
  "html-webpack-plugin": "^2.28.0",
  "webpack": "^3.0.0",
  "webpack-dev-server": "^2.5.0"
}

This time, though, the Webpack and Babel dependencies are saved to package.json as dev dependencies. This means these particular modules are needed during the development (i.e. build) phase of the app. On the other hand, the dependencies (such as React and ReactDOM) are required during runtime, and so will be included directly along with our custom app code. Next, create a new file called webpack.config.js in the root project folder.

Inside webpack.config.js, add:

var path = require('path');
var HtmlWebpackPlugin = require( 'html-webpack-plugin' );

module.exports = {
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'public'),
    filename: 'build.js'
  },
  module: {
    rules: [
      {
        test: /\.(js)$/,
        use: 'babel-loader'
      }
    ]
  },
  plugins: [new HtmlWebpackPlugin({
    template: 'src/index.html'
  })]
}
Don't worry too much about the syntax used here; just understand the overview of what's going on.
All we're doing is exporting a JavaScript object with certain properties that control how Webpack builds our app. The entry property specifies the starting point of our React app, which is index.js. Next, the output property defines the output path, and filename, of the bundled JavaScript file.

As for the build process itself, we want Webpack to pass all JavaScript files through the Babel compiler to transform JSX/ES6 to standard JavaScript. We do this via the module property. It simply specifies a regular expression that runs Babel transformations only for JavaScript files.

To complete the Babel setup, we need to add an entry to the package.json file to specify which Babel transformations we want to perform on our JavaScript files. Open up package.json and add a babel property:

"babel": {
  "presets": [
    "env",
    "react"
  ]
},
This will run two transformations on each JavaScript file in our project. The env transformation will convert ES6 JavaScript to standard JavaScript that's compatible with all browsers, and the react transformation will compile JSX code down to createElement() function calls, which is perfectly valid JavaScript.
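To make that concrete, here is a rough sketch of the shape JSX compiles to — the createElement stub below stands in for React's real one, purely to show the structure of the compiled calls:

```javascript
// Stub standing in for React.createElement, just to show the compiled shape.
const React = {
  createElement: (type, props, ...children) => ({ type, props, children })
};

// JSX source:    <div id="app"><h2>Hello World!</h2></div>
// compiles to roughly:
const element = React.createElement(
  'div',
  { id: 'app' },
  React.createElement('h2', null, 'Hello World!')
);

console.log(element.type);             // 'div'
console.log(element.children[0].type); // 'h2'
```

Real React returns opaque element objects rather than this plain literal, but the nesting of createElement() calls is the same.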
Now, back to our webpack.config.js file.
The last property we have is plugins, which contains any special operations we want performed during the build process. In our case, we need Webpack to create an index.html file which includes a reference to the bundled JavaScript file. We also indicate an existing index.html file (the one we created earlier) to be used as a template to create the final bundled index.html file.
Build and Test
Let's now add a script property to package.json. By the way, you can add as many scripts as you like to perform various tasks. For now, we just want to be able to run Webpack, so in package.json delete the "test" script and replace it with:

"scripts": {
  "build": "webpack"
},
Before we test the build process, let's add a React component to index.js so we have something to render.

import React, { Component } from 'react';
import ReactDOM from 'react-dom';

class App extends Component {
  render() {
    return (
      <div>
        <h2>Hello World!</h2>
      </div>
    )
  }
}

ReactDOM.render( <App />, document.querySelector( '#app' ) );
This should look very familiar by now if you've followed along with the previous tutorials in this series.
From the command line, run:
npm run build
After a little while, you should see a new public folder created inside my-first-components-build, containing index.html and build.js. Open up index.html to see the output of our test React app.
Notice the bundled JavaScript file has been added for us, and the test component is rendered to the correct DOM element.
Automate the Compilation Process
Once you start making multiple changes to your app, you'll soon learn that it's rather tedious to have to manually edit a file, save it, run the build command, and then reload the browser window to see the changes.
Fortunately, we can use the Webpack mini server that we installed earlier to automate this process. Add a second script to package.json so the 'scripts' property looks like this:

"scripts": {
  "build": "webpack",
  "dev": "webpack-dev-server --open"
},
Now run:
npm run dev
After a few seconds, you'll see a new browser tab open with your web app running. The URL is now pointing to a local server instead of pointing to a specific local file. Make a minor change to index.js in the src folder and save. Notice that your app automatically updates in the browser almost instantly to reflect the new changes.
Webpack will now monitor the files in your app for changes. When any change is made, and saved, Webpack will recompile your app and automatically reload the browser window with the new updates.
Note: The Webpack server will not rebuild your app as such; rather, it stores changes in a cache, which is why it can update the browser so quickly. This means you won't see the updates reflected in the public folder. In fact, you can delete this folder entirely when using the Webpack server.

When you need to build your app, you can simply run npm run build to create the public folder again (if necessary) and output your app files, ready for distribution.
Finishing Up Our App
For completeness, let's add the two simple components we've been using in previous tutorials.
Add two new files called MyFirstComponent.js and MySecondComponent.js to the src folder. In MyFirstComponent.js, add the following code:

import React, { Component } from 'react';

class MyFirstComponent extends Component {
  render() {
    return (
      <p>{this.props.number}: Hello from React!</p>
    )
  }
}

export default MyFirstComponent;

And in MySecondComponent.js, add:

import React, { Component } from 'react';

class MySecondComponent extends Component {
  render() {
    return (
      <p>{this.props.number}: My Second React Component.</p>
    )
  }
}

export default MySecondComponent;

To use these components in our app, update index.js to the following:

import React, { Component } from 'react';
import ReactDOM from 'react-dom';
import MyFirstComponent from './MyFirstComponent';
import MySecondComponent from './MySecondComponent';

class App extends Component {
  render() {
    return (
      <div>
        <h2>My First React Components!</h2>
        <MyFirstComponent number="1st" />
        <MySecondComponent number="2nd" />
      </div>
    )
  }
}

ReactDOM.render( <App />, document.querySelector( '#app' ) );
This results in the same output as we've seen before, except this time via setting up the React app 100% manually.
Reusable React Setup Templates
Once you've gone through this manual setup once and created configuration setup files, this is the only time you'll need to do this completely from scratch. For future projects, you can reuse one or more of your existing setup files, making subsequent React projects much quicker to set up.
You could even create a set of purpose-built React starter templates and host them on GitHub. It would then be a simple case of cloning a starter project and running npm install to install the required Node.js modules.
Download and Install the Project
The React project for this tutorial is available for download, so you can play around with it or use it as a template for new projects.
Click the Download Attachment link in the right sidebar to access the project .zip file. Once downloaded, extract it and open a command-line window. Make sure you're in the my-first-components-build directory.
Enter the following commands to install and compile the React app.
npm install
npm run dev
The first command will download all the Node.js modules needed for the project, which will take a minute or two. The second command will compile the React app and run the mini web server, displaying it in the browser.
Try making some changes to your React app. Every time you save changes, your app will be recompiled, and the browser window automatically updates to reflect the new version of your app.
When you want to build your project for distribution, just run the following command.
npm run build
Conclusion
Throughout this tutorial series, we've looked at several ways you can approach setting up React apps, each one progressively requiring more setup tasks up front. But the long term benefit is that you have far more control and flexibility over exactly how the project is set up.
Once you've mastered setting up React, I think you'll find developing apps a lot of fun. I'd love to hear your comments. Let me know what you plan to build next with React!
https://code.tutsplus.com/tutorials/setup-a-react-environment-part-4--cms-29108
In this project I implemented OpenCV color recognition on the Raspberry Pi that uses PID to control the pan-tilt servo system. In this post, I will explain briefly how color tracking works, and how to use PID control algorithm to improve tracking performance. Like my previous face recognition tutorial, I will be using the Wall-E robot in this Raspberry Pi Color Tracking project as an example.
The Raspberry Pi has relatively small computational capacity compared to a laptop or PC. Because of that, you might notice sluggish results in face recognition. But in color tracking, the result is quite smooth and satisfactory because the computational process is not as complex as face recognition.
We can use the Raspberry Pi to control the servos directly using an interface like ServoBlaster. But instead, I use the Arduino as a servo controller. Not only is it easier to manage, but there is also a PID library already available for the Arduino.
So this is how it works: the Raspberry Pi detects the color, work out the coordinates and send to the Arduino via I2C. The Arduino will then feed the data to the PID controller to calculate how much and to which direction to turn the servos.
What is PID and How to Use It?
PID is a closed-loop control system that tries to minimize error. In simple words, it takes the input from your sensor, and you tell it what your target set-point is; it will then come up with an output adjustment which aims to bring your system closer to the set-point.

The most important factors of an optimal PID controller are its three constant parameters. To achieve good performance, we need to play around with different values to see different results. I will also talk about some tuning practices that I found helpful.
How to Use Arduino PID Controller
Using PID on Arduino is very easy, simply follow the instructions on this page to setup the library and we are good to go!
Following the examples provided in the library, it's not hard to see how to use it. Basically, we need to:
- include the library
#include <PID_v1.h>
- define the necessary variables
double Setpoint, Input, Output;
- Construct the PID controller and establish links with the variables, specify the tuning constants
PID myPID(&Input, &Output, &Setpoint, 0.4, 0.4, 0, DIRECT);
- Measure input and feed into the PID controller, and retrieve output
Input = measurement;
myPID.Compute();
Adjustment = Output;
Tips on Tuning the PID constants
Depending on how you are using the output to adjust your system, the PID constant parameters will be different, so make sure you have settled on that before you start tuning. Sometimes not all three constants are needed (they can be zero); it all comes down to your requirements and performance. If you don't think one of the constants is helping, set it to zero.

To start with, I usually set all PID constant parameters to 0, then tune each constant in order, and finally fine-tune each one.

P (proportional). The key here is to get a quick, strong response without any shake or vibration. Start from a small number and work your way up. The error rate at this point will be high and final accurate settling will be slow. When this is set too high, you will produce a high-speed shake.

I (integral). The integral algorithm will add more and more to the corrective action. This can help to balance inherent inconsistencies in the system, smoothing out errors over time. When this is set too high it will produce a slow wobble or oscillation (overshoot?); when it's set too low, errors will occur (damping effect?).

D (derivative). This parameter can have a positive effect on the overall stability of a mechanical system, i.e. it helps to overcome inertia faster, etc.
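To make the three terms concrete, here is a minimal discrete PID loop in Python. This is an illustrative sketch only, not the Arduino library's implementation, and the gains are arbitrary demo values:

```python
class PID:
    """Minimal discrete PID controller (illustrative sketch only)."""

    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def compute(self, measurement, dt=1.0):
        error = self.setpoint - measurement          # P: how far off we are now
        self.integral += error * dt                  # I: accumulated past error
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error                      # D: how fast the error is changing
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a simulated pan position toward the centre of a 320-pixel-wide frame.
pid = PID(kp=0.4, ki=0.05, kd=0.1, setpoint=160)
pos = 40.0
for _ in range(50):
    pos += pid.compute(pos)
print(round(pos))  # converges to 160
```

With these gains the simulated position settles on the set-point; raise kp or ki too far and the same loop will overshoot and oscillate, which is exactly the behaviour described in the tuning tips above.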
Raspberry Pi Color Tracking and Source Code
Color tracking using OpenCV is really simple. We basically need to go through these steps on the Raspberry Pi every time:
- Capture Image
- Throw away the pixels which do not fall in the range and highlight the pixels which are in the range, so you will see a black image with white dots and puddles.
- When the detected color has a large enough area, calculate the center position of that color using image moments.
- Send the position off to the Arduino via I2C.
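The centroid step above is just a ratio of raw image moments. Here is a plain-Python sketch of the same calculation the OpenCV moments code performs (illustrative only — the project code below uses cv.Moments):

```python
def centroid(mask):
    """Centre of a binary blob from raw image moments M00, M10, M01."""
    m00 = m10 = m01 = 0
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            m00 += v          # zeroth moment: blob area
            m10 += x * v      # first moment about x
            m01 += y * v      # first moment about y
    return (m10 / m00, m01 / m00)  # (posX, posY)

# A 2x2 white blob whose top-left pixel is at (1, 1):
mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(centroid(mask))  # (1.5, 1.5)
```

posX = M10/M00 and posY = M01/M00, which is exactly what the main loop below computes with GetSpatialMoment and GetCentralMoment.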
In the source code, you can choose whatever color you want to track. Look for the InRangeS() function. It takes a source image, a lower bound color, an upper bound color, and a destination image. Just replace the lower bound and upper bound colors with the HSV values of your target color.
You can find this value of your favorite color using Gimp or MS Paint. Note that software like Gimp and MS paint use Hue value ranging from 0-360, Saturation and Value from 0-100%. But OpenCV uses 0-180 for Hue and 0-255 for Saturation and Value. So you need to do some conversion before plugging the values in.
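The conversion is a simple rescale; here is a hypothetical helper (not part of the project code) that maps Gimp-style values to OpenCV's ranges:

```python
def gimp_to_opencv_hsv(h, s, v):
    """Hypothetical helper: H in 0-360 and S, V in 0-100%
    rescaled to OpenCV's H 0-180 and S, V 0-255."""
    return (round(h / 2), round(s * 255 / 100), round(v * 255 / 100))

# Pure blue in Gimp (H=240, S=100%, V=100%) becomes:
print(gimp_to_opencv_hsv(240, 100, 100))  # (120, 255, 255)
```

Run your chosen color through a conversion like this before plugging the bounds into InRangeS().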
[sourcecode language=”python”]
# Raspbery Pi Color Tracking Project
# Code written by Oscar Liang
# 30 Jun 2013
import cv2.cv as cv
import smbus
bus = smbus.SMBus(1)
address = 0x04
def sendData(value):
bus.write_byte(address, value)
# bus.write_byte_data(address, 0, value)
return -1
def readData():
state = bus.read_byte(address)
# number = bus.read_byte_data(address, 1)
return state
def ColorProcess(img):
# returns thresholded image
imgHSV = cv.CreateImage(cv.GetSize(img), 8, 3)
# converts BGR image to HSV
cv.CvtColor(img, imgHSV, cv.CV_BGR2HSV)
imgProcessed = cv.CreateImage(cv.GetSize(img), 8, 1)
# converts the pixel values lying within the range to 255 and stores it in the destination
cv.InRangeS(imgHSV, (100, 94, 84), (109, 171, 143), imgProcessed)
return imgProcessed
def main():
# captured image size, change to whatever you want
width = 320
height = 240
capture = cv.CreateCameraCapture(0)
# Over-write default captured image size
cv.SetCaptureProperty(capture,cv.CV_CAP_PROP_FRAME_WIDTH,width)
cv.SetCaptureProperty(capture,cv.CV_CAP_PROP_FRAME_HEIGHT,height)
cv.NamedWindow("output", 1)
cv.NamedWindow("processed", 1)
while True:
frame = cv.QueryFrame(capture)
cv.Smooth(frame, frame, cv.CV_BLUR, 3)
imgColorProcessed = ColorProcess(frame)
mat = cv.GetMat(imgColorProcessed)
# Calculating the moments
moments = cv.Moments(mat, 0)
area = cv.GetCentralMoment(moments, 0, 0)
moment10 = cv.GetSpatialMoment(moments, 1, 0)
moment01 = cv.GetSpatialMoment(moments, 0,1)
# Finding a big enough blob
if(area > 60000):
# Calculating the center postition of the blob
posX = int(moment10 / area)
posY = int(moment01 / area)
# check slave status and send coordinates
state = readData()
if state == 1:
sendData(posX)
sendData(posY)
print 'x: ' + str(posX) + ' y: ' + str(posY)
# update video windows
cv.ShowImage(“processed”, imgColorProcessed)
cv.ShowImage(“output”, frame)
if cv.WaitKey(10) >= 0:
break
return;
if __name__ == "__main__":
main()
[/sourcecode]
Sourcecode on The Arduino
The code on the Arduino is mainly about how the servos are controlled by PID and how to use PID. I think I have explained pretty much everything here in the “How PID works in Arduino” section.
One thing I noticed about the Arduino PID library is that you have to explicitly specify both the negative and positive range of the output, like this: myPIDX.SetOutputLimits(-255, 255); Otherwise the output will only be positive, which means in our case the servo will only be told to turn right and never the other way.
A last note is on the variable called 'status'. It will be set to 0 while we are receiving and executing a command from the Raspberry Pi, telling the Pi we are busy and not ready for the next command yet. This is because color tracking is so fast that commands are sent more frequently than the Arduino can handle them, a problem we didn't have in face recognition.
[sourcecode language=”cpp”]
// Raspbery Pi Color Tracking Project
// Code written by Oscar Liang
// 30 Jun 2013
#include <Wire.h>
#include <Servo.h>
#include <PID_v1.h>
#define SLAVE_ADDRESS 0x04
#define NUM_DATA 2
byte data[NUM_DATA];
byte cur_data_index;
byte state;
Servo servoNeckX;
Servo servoNeckY;
const byte servoNeckX_pin = 3;.4, 0.4, 0,[cur_data_index++] =);
}
[/sourcecode]
Conclusion
Hope you enjoyed this post and found it helpful :-). Leave me a comment if you have any suggestion or question.
hi Can you share circuit connections?
Hi oscar
Great project but I tried to compile opencv code many times but every time I got error can you please give the correct code. Really I will appreciate your help so much.
Hi Nice tutorial.I tried it with only x axis but my servo is turning only one side left or right.
this is my arduino code
// Raspbery Pi Color Tracking Project
// Code written by Oscar Liang
// 30 Jun 2013
#include
#include
#include
#define SLAVE_ADDRESS 0x04
#define NUM_DATA 2
byte data[NUM_DATA];
byte cur_data_index;
byte state;
Servo servoNeckX;
//Servo servoNeckY;
const byte servoNeckX_pin = 9;
/.17, 0.015, 0.001,[0] =);
}
could you please post the rasberry pi code in opencv3 with proper indentation ? I would greatly appreciate it :)
hi, i am working on similar application, i want the robot to follow a virtual point( (x,y) coordinates).
Do you have any idea how can i implement thee pid on 2 dc motor?
like how do i map the controller action to it.
do i need 2 controllers or one is would suffice?
thanks :)
I am trying to implement the above code with opencv 3.1, but there are many errors for the modules used in the code. please can I get to know what version of opencv you are using.
Hi Oscar, great project.
I am currently to get the RPi color tracking part with a GPIO camera.
Do you have any clues on how to modify the code?
Thanks!
Hi Oscar, this is a great project.
I am trying to get the RPi color tracking part to work with a camera plugged in the GPIO.
Do you have any idea how I can achieve this?
Thanks!
Hi Oscar
i was wondering why my rpi slows down after running this code for 5 or 10min? and any longer, it just freezes. how can i solve this issue?
Thanks a lot for sharing,
Mehdi
Cause of freezing: Memory Leak
where in code: mat = cv.GetMat(imgColorProcessed)
FIX: Upgrade OpenCV and write a new code base on the OpenCV you upgrade to
excuse me! Can you help me something :)
In your code “cv.InRangeS(imgHSV, (100, 94, 84), (109, 171, 143), imgProcessed)”
Do “(100,94,84),(109,171,143)” is range of blue in BGR ?
thank you so much!
i think i was tracking the red dot on the ruler :)
can you tell me how to firgure out range of red. i want to tracking a red ball. Thanks so much!
Hi ocsar!!
Great work done!!
actually when i tried to compile the arduino code that u had given, in the arduino ide, i get the following error..
arduino_pid_i2c.ino:15:20: fatal error: PID_v1.h: No such file or directory
compilation terminated.
Error compiling.
and i have also included the pid library in the ide..but still i get the above error..Kindly help me out with this issue..
Thank u
Just a little bit curious about a couple of things.
First of all:
myPIDX.SetOutputLimits(-255, 255);
myPIDY.SetOutputLimits(-255, 255)
I took a look to the explanation that they give in the Arduino library and they say the following about the SetOutputLimits function:
The PID controller is designed to vary its output within a given range. By default this range is 0-255: the arduino PWM range. There’s no use sending 300, 400, or 500 to the PWM. Depending on the application though, a different range may be desired. So, basically, the value that you are setting here is not your output to the actuator (in this case your servo). Am I right?
Also, I notice in the PID explanation the following:
Input = meansurement;
myPID.Compute();
Adjustment = Output;
My question in this part would be. Is the output a delta of the PID, or the real output. It’s because I paying attention to your code and you have the following.
posX = constrain(posX + OutputX, lrServoMin, lrServoMax);
posY = constrain(posY – OutputY, udServoMin, udServoMax);
From what I see here, you are adding a delta to your position, am I right?
BTW, congratulations, great work. Keep me posted please.
Super Bro……
hello, Oscar, very luvly work, however i need your help with object tracking or color tracking with c++ on the raspberry pi, i have tried so many methods but still get errors, the opencv has been installed and it is working properly, please if you could be of any assistant, i will be very grateful. thanks
Hi Oscar, thank you for such a wonderful post. I am inspired by your post and in the exitment I ordered the raspberry pi camera module, only to discover open cv doesn’t work with raspberry pi cam.
After some looking around I found this post picamera.readthedocs.org/en/release-1.6/recipes1.html#capturing-to-an-opencv-object so I tried to replace your cv.CreateCameraCapture(0) with
cv2.imdecode(data, 1) function. But the got error invalid capture. Do you have any sugestion ?
hey can you send me this code with i2c communication with the arduino, i can’t see it anywhere :/
the code is all on this page. The i2c uses this function “sendData()”.
Good info. Lucky me I found your website by chance (stumbleupon).
I have saved it for later!
Sorry my mistake i forgot the i2c basics…sorry..;)
Senddata function is not called any where..then how could raspberry know the state?
Sorry, I’m a rookie I want to know how to connect wire between raspi – arduino – servo .
Thanks, for this : )
Servo is connected to the Arduino, like described here
Arduino is connected to the RPi via I2C, just like described here.
Very Useful, thank! :-)
Source: https://oscarliang.com/raspberry-pi-color-tracking-opencv-pid/
Overview
Just In Time Debugging (JITD) is a way for Fuchsia to suspend processes that crash so that interested parties can debug/process them later. This permits interesting flows such as attaching zxdb to a program that crashed overnight, when the debugger was not attached/running.
This is done by storing processes that are in exceptions in a special place called the "Process Limbo", which keeps those processes suspended until some other agent comes and releases them.
See Implementation for more details about how it works.
How to enable it
One of the great benefits of the Process Limbo is being able to catch crashing processes in the wild, without needing a debugger to be already running. This is especially useful for situations where the debugger cannot be running, such as driver startup. For such cases, having an active Process Limbo can provide an invaluable source of debugging information.
There are two ways of enabling the Process Limbo:
Manual activation
The Process Limbo comes with a CLI tool that permits the user to query the current state of the limbo:
$ run fuchsia-pkg://fuchsia.com/limbo-client#meta/limbo_client.cmx
Usage: limbo [--help] <option>

The process limbo is a service that permits the system to suspend any
processes that throws an exception (crash) for later processing/debugging.
This CLI tool permits to query and modify the state of the limbo.

Options:
  --help: Prints this message.
  enable: Enable the process limbo. It will now begin to capture crashing processes.
  disable: Disable the process limbo. Will free any pending processes waiting in it.
  list: Lists the processes currently waiting on limbo. The limbo must be active.
  release: Release a process from limbo. The limbo must be active.
           Usage: limbo release <pid>.
Enable on startup
Manual activation works only if you have a way to send commands to the system. But some development environments run software before the user can interact with it (or run a debugger). Drivers are a good example of this. For those cases, having the Process Limbo active from the start lets you catch driver crashes as they occur while the driver is spinning up, which is normally the hardest part to debug.
In order to do this, there is a configuration that has to be set into the build:
fx set <YOUR CONFIG> --with-base //src/developer/forensics:exceptions_enable_jitd_on_startup
Or add this label to the base_package_labels in your build args. You can still use the Process Limbo CLI tool to disable and manipulate the limbo afterwards. You will then need to push an update to your device for this to take effect.
NOTE: Driver initialization is finicky, and freezing crashing processes can leave the system in an undefined state and "hang" it, so your mileage may vary when using this feature, especially for very early drivers.
How to use it
zxdb
The main user of JITD is zxdb, which is able to attach to a process waiting in the limbo. When starting zxdb, it will display the processes that are waiting in it:
> fx debug
Checking for debug agent on [fe80::2e0:4cff:fe68:8d%3]:2345.
Debug agent not found. Starting one.
Connecting (use "disconnect" to cancel)...
Connected successfully.
👉 To get started, try "status" or "help".
Processes waiting on exception:
  272401: crasher
Type "attach <pid>" to reconnect.
[zxdb] attach 272401
Process 1 [Running] koid=272401 crashed
Attached Process 1 [Running] koid=272401 crasher
[Warning] Received thread exception for an unknown thread.
[zxdb] thread
  # State                 Koid Name
▶ 1 Blocked (Exception) 272403 initial-thread
[zxdb] frame
▶ 0 blind_write(volatile unsigned int*) • crasher.c:22 (inline)
  1 main(int, char**) • crasher.c:201
  2 start_main(const start_params*) • __libc_start_main.c:93
  3 __libc_start_main(zx_handle_t, int (*)(int, char**, char**)) • __libc_start_main.c:165
  4 _start + 0x14
[zxdb] list
   17 int (*func)(volatile unsigned int*);
   18 const char* desc;
   19 } command_t;
   20
   21 int blind_write(volatile unsigned int* addr) {
 ▶ 22   *addr = 0xBAD1DEA;
   23   return 0;
   24 }
   25
   26 int blind_read(volatile unsigned int* addr) { return (int)(*addr); }
   27
   28 int blind_execute(volatile unsigned int* addr) {
   29   void (*func)(void) = (void*)addr;
   30   func();
   31   return 0;
Within zxdb you can also run help process-limbo to get more information about how to use it.
Process Limbo FIDL Service
The Process Limbo presents itself as a FIDL service, which is what the Process Limbo CLI tool and zxdb use. The FIDL protocol is defined in zircon/system/fidl/fuchsia-exception/process_limbo.fidl.

A good example of how to use the API is the Process Limbo CLI tool itself: src/developer/forensics/exceptions/limbo_client/limbo_client.cc.
Implementation
Crash Service
When a process throws an exception, Zircon generates an associated exception handle. It then looks for listeners on any associated exception channels that might be interested in handling that exception. That is how debuggers such as zxdb get exceptions from running processes. See the exception handling documentation for more details.
But when there are no exception handlers left, either because there weren't any or because they all declined to handle it, the root job has an exclusive handler called crashsvc, the Crash Service. Once an exception has reached the Crash Service, it is understood that the process has "crashed" and that no program was able to handle it. The Crash Service then dumps the crashing stack trace to the logs and passes the exception over to the Exception Broker.
Exception Broker
The Exception Broker is in charge of deciding what is to be done with a crashing exception, depending on the actual system configuration. It might decide to create a minidump file and dump a crash report, send the exception over to the Process Limbo or kill the process.
The Exception Broker is aware of the Process Limbo and whether it is active or not. When it receives an exception, it will check whether the Process Limbo is enabled. If so, it will pass the exception handle over to it. This is the same Process Limbo exposed by the FIDL service.
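Schematically, the broker's decision described above can be summarised like this (plain Python as pseudocode, not actual Fuchsia code; the option names are illustrative):

```python
def route_exception(limbo_enabled, crash_reporting_enabled=True):
    """Schematic of the Exception Broker's decision for a crashing exception."""
    if limbo_enabled:
        # Pass the exception handle to the Process Limbo for later debugging.
        return "process-limbo"
    if crash_reporting_enabled:
        # Create a minidump and file a crash report.
        return "crash-report"
    # Otherwise the process is simply killed.
    return "kill-process"
```

The key point is that the limbo check comes first: when it is enabled, crashing processes are held for a debugger instead of being reported or killed.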
Source: https://fuchsia.dev/fuchsia-src/development/debugging/just_in_time_debugging
I have a simple test application following the directions from . When I attempt to drag the data source onto the design surface I get the following error:
Cannot add the control to the design surface or bind to the control because the type TestApp.AdventureWorksEntities cannot be resolved. Please try to build the project or add necessary assembly references.
The project has been built and the data source is in the same namespace. I did a search for this error, and the only hit I found was here; it mentioned Tools -> Options -> Extension Manager, 'Load per user extensions...'. This option is checked.
I've tried unchecking and restarted with the same behavior.
Any ideas?
Source: http://www.dotnetspark.com/links/34645-cannot-drag-entity-framework-data-source.aspx
|
Okey-doke, Dean, I'll rustle something up. Might take me a couple of days, so please be patient.
Best wishes
Richard
Hi Dean,
we don't have any methods to search by spacegroup symbol or formula, so iterating over hit structures would be the only way to do it. If you have to do many of these searches it would be fairly simple to make an SQLite database containing terms of interest, then to join the results of a ReducedCellSearch with a query of this database.
If you like I can provide a prototype of how to go about this.
Best wishes
Richard
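A standard-library sketch of that SQLite idea (the refcodes and spacegroup symbols below are made up for the example, and the hit list stands in for the identifiers returned by a ReducedCellSearch):

```python
import sqlite3

# Build a small database of terms of interest, keyed by CSD refcode.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE terms (refcode TEXT PRIMARY KEY, spacegroup TEXT)')
conn.executemany('INSERT INTO terms VALUES (?, ?)',
                 [('AAAAAA', 'P21/c'), ('BBBBBB', 'P-1'), ('CCCCCC', 'P21/c')])

def filter_hits(hit_refcodes, spacegroup):
    """Join a list of search-hit refcodes against the terms table."""
    placeholders = ','.join('?' * len(hit_refcodes))
    rows = conn.execute(
        'SELECT refcode FROM terms WHERE spacegroup = ? '
        'AND refcode IN (%s)' % placeholders,
        [spacegroup] + list(hit_refcodes))
    return [r[0] for r in rows]

print(filter_hits(['AAAAAA', 'BBBBBB'], 'P21/c'))  # ['AAAAAA']
```

In practice the table would be populated once from the CSD, and each search's hit identifiers would then be filtered with a query like the one above.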
Hi Dean,
I'm not entirely clear what you are trying to do here. Let me know if I've got the wrong end of the stick:
You run a reduced cell search on the CSD, or another database of structures, retrieving some hits. You then wish to filter these results according to further criteria, e.g. chemical formula, or space groups.
You can do a simple filter of the hits, assuming there are not too many of them, simply by iterating over the hits:
for h in hits:
    c = h.crystal
    if c.spacegroup_symbol == ...
Alternatively you can use any of the search classes except TextNumericSearch on an individual crystal structure.
Hope this is helpful; if not please ask again.
Richard
Hi Chris.
The HBond in CATKIT is not found because the default path length range for detecting hydrogen bonds is set to (4, 999), so excluding contacts between separate components of the molecule. You can include such contacts by setting the path_length_range to (-1, 999), i.e:
from ccdc import io
csd = io.MoleculeReader('csd')
catkit = csd.molecule('CATKIT')
print catkit.hbonds(path_length_range=(-1, 999))
(HBond(Atom(N2)-Atom(H2)-Atom(O2)),)
The value -1 is used to cope with both options to the 'require_hydrogens' parameter of the hbonds() method. I appreciate that this is not clear from the documentation, and this will be rectified in a forthcoming release.
I think the default behaviour is somewhat counterintuitive; I shall discuss with colleagues whether the default should be made more permissive.
Hope this is helpful.
Richard
Hi Dave,
I agree - or rather a friendly chemist agrees - that the structure is a bit rubbish. The first kekulize misassigns the double bonds in the carbon you mentioned, so the second aromatic assignment does not regard these bonds as aromatic, then the second kekulize does not operate on the same structure as the first. I agree that this is not ideal behaviour, but it is comprehensible.
The only solution I can think of is to assign all bond types:
mol.assign_bond_types('all')
where the double bond to the phosphorus is detected, the five membered ring is no longer aromatic and the kekulisation works as expected.
I have mailed the database group to see if they want to fix the bonds in the structure, but this will be too late for the forthcoming November release.
Best wishes
Richard
Hi David,
I'm afraid you have unearthed a genuine bug. Discussions are underway here to see if it can be fixed in the forthcoming API version 1.3 release.
In the meantime you can work around the problem by using the internal API:
mol = Molecule.from_string(...)
mol = Molecule(mol.identifier, _molecule=mol._molecule.create_editable_molecule())
Sorry about this.
Richard
Thanks, Christian.
Please carry on raising any difficulties you have, and making suggestions for ways in which we may improve the API.
Cheers
Richard
Here's the slightly modified script, testing for 3D coordinates.
Richard
I've attached a table of spacegroup, average void space (as a percentage of the unit cell volume), number of observations from the 673,606 structures of CSD V536 with 3D coordinates. I'll leave it to the crystallographically adept to extract any meaning there is in the table.
Richard
I've attached a script which will do this over the whole CSD. It's running on my desktop at the moment, but I don't expect the results until tomorrow - the void calculation is computationally fairly heavy.
I'll let you know the results when I get them.
Richard
Source: https://www.ccdc.cam.ac.uk/public/5dd7c9a3-9189-e311-8b17-005056868fc8/forum-posts?page=4
Hi Eric, Ivan,

On 28 June 2011 18:32, Erik de Castro Lopo <mle+hs at mega-nerd.com> wrote:
> The hlint program would have flagged both of those and possibly
> others. See:

Cool! It didn't flag either for me, but it recommended replacing ++ (show port) with ++ show port, if then else with unless, putStrLn (show x) with print x, and do stuff with stuff. All useful to know.

On 28 June 2011 18:16, Ivan Lazar Miljenovic <ivan.miljenovic at gmail.com> wrote:
> I don't think you need all those return () everywhere...

You're right. At some point I added it in to (try to) make the compiler happy, but it must have been or become unnecessary. I still need two though, because forkIO (and therefore my processLine function) returns IO ThreadId, but the last line of do notation must be return () (see below).

On 28 June 2011 18:16, Ivan Lazar Miljenovic <ivan.miljenovic at gmail.com> wrote:
> And at the end, why do you do "line <- getLine" when you don't use the
> result?

Oh that. I was trying to figure out a way to terminate my program. I've now changed it to exit on EOF.

Here is my second attempt. Is it much better?

            print e
          else ioError e
      processLine
      return ()

      putStrLn "Press <CTRL-D> to quit."
      let processStdIn = do
            lineResult <- try getLine
            case lineResult of
              Right line -> processStdIn
              Left e -> unless (isEOFError e) $ ioError e
      processStdIn

Thanks for the suggestions.

Cheers,
-John
Source: http://www.haskell.org/pipermail/haskell-cafe/2011-June/093560.html
direct.distributed.TimeManager
from direct.distributed.TimeManager import TimeManager
Inheritance diagram
class TimeManager(cr)

Bases: direct.distributed.DistributedObject.DistributedObject
This DistributedObject lives on the AI and on the client side, and serves to synchronize the time between them so they both agree, to within a few hundred milliseconds at least, what time it is.
It uses a pull model where the client can request a synchronization check from time to time. It also employs a round-trip measurement to minimize the effect of latency.
delete(self)
This method is called when the DistributedObject is permanently removed from the world and deleted from the cache.
disable(self)
This method is called when the DistributedObject is removed from active duty and stored in a cache.
generate(self)
This method is called when the DistributedObject is reintroduced to the world, either for the first time or from the cache.
serverTime(self, int8 context, int32 timestamp)
This message is sent from the AI to the client in response to a previous requestServerTime. It contains the time as observed by the AI.
The client should use this, in conjunction with the time measurement taken before calling requestServerTime (above), to determine the clock delta between the AI and the client machines.
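As a sketch of the arithmetic (not actual TimeManager code), the round-trip measurement gives a clock delta like this, assuming latency is symmetric in both directions:

```python
def clock_delta(t_request, t_response, server_time):
    """Estimate the AI-client clock delta from one round trip.

    t_request / t_response: client clock when requestServerTime was sent
    and when serverTime arrived; server_time: the AI's reported time.
    The server observation is assumed to correspond to the round-trip
    midpoint, which halves the effect of latency on the estimate.
    """
    midpoint = (t_request + t_response) / 2.0
    return server_time - midpoint
```

Adding the resulting delta to the client clock then yields the AI's notion of the current time, to within roughly half the round-trip time.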
synchronize(self, string description)
Call this function from time to time to synchronize watches with the server. This initiates a round-trip transaction; when the transaction completes, the time will be synced.
The description is the string that will be written to the log file regarding the reason for this synchronization attempt.
The return value is true if the attempt is made, or false if it is too soon since the last attempt.
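The "too soon since the last attempt" behaviour can be mimicked with a simple guard (a standalone sketch; the minimum interval is an illustrative value, not TimeManager's actual one):

```python
import time

class SyncThrottle:
    """Allow a synchronize() attempt at most once per min_wait seconds."""

    def __init__(self, min_wait=15.0):
        self.min_wait = min_wait
        self.last_attempt = None

    def try_sync(self, now=None):
        """Return True if an attempt is made, False if it is too soon."""
        now = time.time() if now is None else now
        if self.last_attempt is not None and now - self.last_attempt < self.min_wait:
            return False  # too soon since the last attempt
        self.last_attempt = now
        return True
```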
Source: https://docs.panda3d.org/1.10/python/reference/direct.distributed.TimeManager
Asked by:
SmartCard framework in Windows Store Apps
Hi,
Is there any way to use the SmartCard framework, or any other APIs, to issue APDU commands to smart cards from a Windows Store app? We have a requirement to authenticate record access via a smart card before it can be displayed, and the abstracted nature of the Proximity APIs prohibits us from doing this.
Any recommendations would be appreciated.
Thanks,
Lewis
Follow Me on Twitter: @LewisBenge. Or check out my blog. December 11, 2012 11:36 PM
Question
All replies
- The only smart card support currently in WinRT is via the Windows.Security.Cryptography namespace. Similar to other WinRT APIs, it is abstracted and only allows PKI scenarios on cards that utilize the in-box smart card cryptographic providers with a smart card mini-driver. There are no current WinRT APIs that allow direct smart card access similar to WinSCard.dll.
Jeff Shipman [MSFT] -- This posting is provided "AS IS" with no warranties, and confers no rights.
Friday, December 28, 2012 6:56 PM
- Proposed as answer by Jeff Shipman - MSFT Friday, December 28, 2012 6:57 PM
- Thanks Jeff. Just to confirm: is there also no way of issuing APDU commands to smart cards accessible via the Proximity APIs?
Follow Me on Twitter: @LewisBenge Or check out my blog:, January 23, 2013 4:21."
I am not sure that you can send APDUs to the card though.
Thursday, June 27, 2013 2:54 AM
- Edited by Andrew7Webb Thursday, June 27, 2013 3:13 AM Checked out sample
Hello Lewis,
I was caught up in a similar issue as yours and finally, I've managed to come up with something productive now. Please have a look at it (Smart Cards framework for WinRT) and you can play more with it.
Regards
Arafat
Saturday, April 12, 2014 8:52 PM
- I realize this is a bit of an old topic, but this is possible on Windows Phone 8.1 with both Silverlight and Universal apps:
- Heath @; Visual Studio Professional Deployment
Wednesday, January 7, 2015 1:04 AM
Source: https://social.msdn.microsoft.com/Forums/en-US/c6220a2f-db62-4a70-82d6-eb98336fa925/smartcard-framework-in-windows-store-apps?forum=tailoringappsfordevices
f3ndot (Justin Bull), 01/24/2020 03:19 PM
# +dest+ argument is obsolete.
# It still works but you must not use it.
#
# If called before the connection has started, this method will open the
# connection and finish it once complete.
# This method never raises an exception.
# response = http.get('/index.html')
# This method returns a Net::HTTPResponse object.
# response = nil
# the socket. Note that in this case, the returned response
# object will *not* contain a (meaningful) body.
# the block can process it using HTTPResponse#read_body,
# if desired.
# This method never raises Net::* exceptions.
def request(req, body = nil, &block) # :yield: +response+
unless started?
start {
req['connection'] ||= 'close'
return request(req, body, &block)
}
start { return request(req, body, &block) }
end
if proxy_user()
req.proxy_basic_auth proxy_user(), proxy_pass() unless use_ssl?
end
if not req.response_body_permitted? and @close_on_empty_response
req['connection'] ||= 'close'
req.update_uri address, port, use_ssl?
req['host'] ||= addr_port()
end
Source: https://bugs.ruby-lang.org/attachments/8260
Can anybody help with a field calculator expression which reverses the direction of a COGO direction field, where N 90 0 0 E becomes S 90 0 0 W? Any help is greatly appreciated.
Interesting problem- you want to reverse all bearings and these values are all contained in one text field, correct?...in addition to the example you gave, you want to reverse, say, N 40 0 0 E to S 40 0 0 W and (one more example) S 60 0 0 E to N 60 0 0 W ?
If so, then you only need to change the beginning and ending characters, am I reading you correctly?
def revBearing(bearStr):
    orientation = bearStr[0:1] + bearStr[-1:]
    if orientation == 'NE':
        testStr = bearStr.replace('N','S')
        testStr = testStr.replace('E','W')
    elif orientation == 'NW':
        testStr = bearStr.replace('N','S')
        testStr = testStr.replace('W','E')
    elif orientation == 'SE':
        testStr = bearStr.replace('S','N')
        testStr = testStr.replace('E','W')
    elif orientation == 'SW':
        testStr = bearStr.replace('S','N')
        testStr = testStr.replace('W','E')
    else:
        # handle something unexpected here
        testStr = ''
    return testStr
That should do it. There may be a shorter way, but the logic is clear. To test this, copy/paste the def statement into the field calculator's code block, following the example in the aforementioned pic on the web help page. To call the function, enter the following in the text box at the bottom of the code block, replacing < your field > with the name of your text field. (The function has been tested in IDLE, but I didn't go as far as to test it in the field calculator, so let me know how it goes.)
revBearing(!< your field >!)
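As the answer notes, there may be a shorter way; in Python 3 a character translation table does the same quadrant swap in one line (a sketch, not tested inside the field calculator itself):

```python
def rev_bearing(bear_str):
    # Swap N<->S and E<->W; digits, spaces and anything else pass through.
    return bear_str.translate(str.maketrans('NSEW', 'SNWE'))

print(rev_bearing('N 90 0 0 E'))  # S 90 0 0 W
```

This handles all four quadrants without branching, since only the compass letters are remapped.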
Source: https://community.esri.com/thread/65713-reverse-bearing-calls-in-attribute-table
I am trying out BDD in Visual Studio 2013. I have started a fresh, blank project. I have written the feature file and step definitions. I added SpecFlow using NuGet packages. It installed SpecFlow version 2.1.0.

When I build the solution, it looks for SpecFlow version 1.9.0.77.

When I searched for SpecFlow in NuGet packages, only one was listed. I installed it, and I believe it was the latest version, 2.1.0.

Why is the solution looking for the older version 1.9.0.77?

If the resolution is to install 1.9.0.77, how do I install it, and how do I find this version in NuGet?
The full error trace is:
Custom tool error: Generation error: Could not load file or assembly 'TechTalk.SpecFlow, Version=1.9.0.77, Culture=neutral, PublicKeyToken=0778194805d6db41' or one of its dependencies. The system cannot find the file specified. E:\RL Fusion\projects\BDD\C# BDD\youtubetutorial2\specflowfirst\SpecflowFirst\SpecflowFirst\Features\GoogleSearch.feature 2 2 SpecflowFirst
using Baseclass.Contrib.SpecFlow.Selenium.NUnit.Bindings;
using OpenQA.Selenium;
using System;
using System.Collections.Generic;
using System.Configuration;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using TechTalk.SpecFlow;
using TechTalk.SpecFlow.Assist;
Baseclass.Contrib.SpecFlow.Selenium.NUnit only works with SpecFlow 1.9.0. It claims to work with versions after 1.9, but this is not true, because we changed the plugin interface in SpecFlow. Did you want to use it because of the ability to test against multiple browsers with only one scenario?

If so, have a look at this example, which uses the SpecFlow+Runner as the test runner.

If you follow that, you have the same functionality and you can use the latest version of SpecFlow.
Full Disclosure: I am one of the developers of SpecFlow and SpecFlow+.
Source: https://codedump.io/share/FgovCJR6Ah7L/1/could-not-load-file-or-assembly-techtalkspecflow-version19077
SIGWAIT(3) BSD Programmer's Manual SIGWAIT(3)
sigwait - synchronously accept a signal
#include <signal.h>

int sigwait(const sigset_t *set, int *sig);

The sigwait() function suspends the calling thread until at least one of the signals in set becomes pending. The signals defined by set should have been blocked at the time of the call to sigwait(); otherwise the behaviour is undefined. The effect of sigwait() on the signal actions for the signals in set is unspecified. If more than one thread is using sigwait() to wait for the same signal, no more than one of these threads shall return from sigwait() with the signal number. Which thread returns from sigwait() if more than a single thread is waiting is unspecified. Note: code using the sigwait() function must be compiled and linked with the -pthread option to gcc(1).
Upon successful completion, sigwait() stores the signal number of the received signal at the location referenced by sig and returns zero.
On error, sigwait() returns one of these error values: [EINVAL] The set argument contains an invalid or unsupported signal number.
sigaction(2), sigpending(2), sigsuspend(2), pause(3), pthread_sigmask(3), pthreads(3)
sigwait() conforms to ISO/IEC 9945-1:1996 ("POSIX"). MirOS BSD #10-current August.
Source: https://www.mirbsd.org/htman/i386/man3/sigwait.htm
Pet peeve with the for-each loop: it throws a NullPointerException when the collection is null. Here is an example:
import java.util.List;
public class ForLoopTest {
public static void main(String[] args) {
List<String> items = null;
for (String item : items) {
System.out.println("Item:" + item);
}
}
}
When this program is run, it will throw a null pointer exception.
I think it would have been better if the for loop did a null pointer check on the collection before trying to iterate over it. Granted, it would have a little bit of overhead, but I think the cost is justified because it is really common for people not to check for null values before invoking the for loop. I don't expect the semantics to get changed now because of backward-compatibility concerns, so don't hold your breath expecting a change. But still, what do you think? Did the language designers make a mistake? Share your thoughts as comments to the blog.
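One common defensive idiom (a workaround at the call site, not a change to the language semantics) is to substitute an empty list before iterating:

```java
import java.util.Collections;
import java.util.List;

public class SafeForLoop {
    static void printItems(List<String> items) {
        // Treat a null collection as empty instead of throwing NullPointerException.
        List<String> safe = (items == null) ? Collections.<String>emptyList() : items;
        for (String item : safe) {
            System.out.println("Item:" + item);
        }
    }

    public static void main(String[] args) {
        printItems(null); // prints nothing, does not throw
    }
}
```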
- inder's blog
Source: https://weblogs.java.net/blog/inder/archive/2007/05/pet_peeve_with.html
declaration of ‘mqd_t mq_open(const char*, int, ...)’ throws different exceptions
Bug Description
I'm getting a weird error when compiling a trivial test using mq_open(), when using both -pedantic and -O2:
$ cat mqtest.cpp
#include <mqueue.h>
int main()
{
return 0;
}
$ g++ -pedantic -O2 -c mqtest.cpp
In file included from /usr/include/
/usr/include/
/usr/include/
/usr/include/
Using just -O2 or just -pedantic works fine:
$ g++ -O2 -c mqtest.cpp
$ g++ -pedantic -c mqtest.cpp
$
Using a Debian unstable system to compile, works fine with both -O2 -pedantic at the same time.
Ubuntu 10.4, x86_64, libc6-dev 2.11.1-0ubuntu7, g++ 4:4.4.3-1ubuntu1
The difference between Debian and Ubuntu is Ubuntu defines the macro _FORTIFY_SOURCE, but I can't see where (and that triggers the inclusion of bits/mqueue2.h). I think bits/mqueue2.h is broken in both though.
The problem is explained in more detail in the Debian bug:
http://
Is this so hard to fix? It's really annoying to have to change the compilation options when I have to compile on Ubuntu. It seems like a very silly error, but a very annoying one.
Source: https://bugs.launchpad.net/ubuntu/+source/glibc/+bug/581871
As sad, desperate and/or pathetic as it may sound, I often times will find myself rooting around the Mono Project SVN repository looking for buried treasure; One of the intended side effects of open source software is the freedom and encouragement to experiment, so there’s a tendency for those willing to dig to find things that haven’t made it into an official release, but they’re both useful and useable tools, libraries, applications, etc. none-the-less.
Today, apparently, is my lucky day (though I’m surprised I hadn’t noticed this before given Eno did the initial check in 7 months ago),
Assembly/                       81031  7 months  atsushi  initial checkin.
Mono.Xml/                       81031  7 months  atsushi  initial checkin.
Mono.XsltDebugger/              81155  6 months  atsushi  2007-07-02 Atsushi Enomoto <atsushi@ximian.com> * XsltDebugger.cs XsltDebugg...
ChangeLog                       81031  7 months  atsushi  initial checkin.
Makefile                        81031  7 months  atsushi  initial checkin.
Mono.XsltDebugger.dll.sources   81031  7 months  atsushi  initial checkin.
To get the debugger to work you’ll need to check-out two folders,
… and …
… run make; sudo make install in both, at which point you will have Mono.XsltDebugger and xslt-debugger.exe in <mono-prefix-root>/lib/mono/{3.0|3.5} (it wound up in the ~/3.0 folder on Linux and ~/3.5 on OS X; not sure why, though I'm sure the reason is easy enough to track down). By then running,
mdavid$ mono /usr/local/lib/mono/3.0/xslt-debugger.exe page/controller/atomictalk/base.xsl index.page
… results in,
[xslt]
… which, as it turns out, is a rudimentary interactive XSLT debugging console,
[xslt] help
help                      Show this help.
quit (q) exit             Quits the debugger.
run                       Runs a transform.
continue (cont)           Continues an interrupted transform.
break <xpath>             Sets a breakpoint for output to hit XPath match.
break <line> <column> [uri]
                          Sets a breakpoint to match specified stylesheet
                          element at (<line>, <column>) ([uri] is the primary
                          stylesheet by default).
break list                Lists the registered breakpoints.
break remove <index>      Removes specified breakpoints.
break clear               Removes all of the registered breakpoints.
xmlns add [prefix] <uri>  Adds a namespace mapping from [prefix] ('' by
                          default) to <uri> used to resolve prefixes in XPath.
xmlns remove [prefix]     Removes a namespace mapping by [prefix] ('' by
                          default) used to resolve prefixes in XPath.
xmlns list                Lists a namespace mapping used to resolve prefixes
                          in XPath.
output [lines]            Shows the transformation result within [lines]
                          (default = 10).
batch <filename>          Processes batch commands listed in <filename>.
load stylesheet <filename>
                          Loads a stylesheet <filename>.
load input <filename>     Loads an input document <filename>.
clear [number]            Removes the specified breakpoint by breakpoint index.
list breakpoints (bp)     Shows the list of breakpoints.
While it seems the functionality at the moment is limited, there are some basic operations that will run successfully. For example,
[xslt] break /html/head
[xslt] run
Breakpoint matched
[xslt] output
<head />
[xslt] cont
[xslt] Transform finished
[xslt] output
<html>
  <head>
    <style type="text/css" />
  </head>
</html>
[xslt]
As I did above, you can specify the transformation xml and data xml when launching the application, which then pre-loads both into memory. According to the help output you can apparently load them into memory from within the console via,
load stylesheet <filename>
… and …
load input <filename>
… but apparently the resolver code needs some attention,
[xslt] load stylesheet page/controller/atomictalk/base.xsl File stylesheet does not exist
Just to be sure,
[xslt] load stylesheet /Users/mdavid/Projects/Nuxleus/Public/Web/Development/page/controller/atomictalk/base.xsl File stylesheet does not exist
… and …
[xslt] load stylesheet File stylesheet does not exist
In poking around a bit more, it’s obvious there’s still work to be done, but the fact that the project even exists is pretty cool, and given the open source nature of — well, open source ;-) — it certainly opens up the door for anyone who might feel so inclined to jump in and start hacking away at things.
Maybe that person will be you? Don’t know, but I for one would think you were one of the coolest people who’s ever walked the planet if you did.
Okay, so maybe that would be a deterrent rather than encouragement, but regardless, the offers on the table. ;-) :D
Oh, and special thanks to Eno (Atsushi Enomoto) for what looks to be the original author of the code base! (there’s no notes attached to the source so I’m going directly from SVN log entries.)
Source: http://www.oreillynet.com/xml/blog/2008/01/mono_projecthidden_gems_monoxs.html
03 August 2012 07:10 [Source: ICIS news]
SINGAPORE (ICIS)--
The official could not ascertain the actual level of operating rate and for how long it will continue but added that the I-4 No 2 ethane cracker was operating at 90-95% capacity in July.
The cracker was taken off line on 29 June for repairs. It was restarted earlier than expected on 6 July because the technical problems had been resolved, the source said.
The cracker was initially expected to be shut for around two weeks.
Separately, PTTGC's 515,000 tonne/year mixed-feed I-4 No 1 cracker at the same site is now running normally at 85%.
Source: http://www.icis.com/Articles/2012/08/03/9583544/thailands-ptt-global-chemical-runs-i-4-no-2-cracker-at-lower-rate.html
Getting Started with React Native
In this article, we'll cover:

- what React Native is
- what Expo is
- how to set up a React Native development environment
- how to create an app with React Native
Want to learn React Native from the ground up? This article is an extract from our Premium library. Get an entire collection of React Native books covering fundamentals, projects, tips and tools & more with SitePoint Premium.

To install Chocolatey, run the following from an elevated command prompt:

"...\WindowsPowerShell\v1.0\powershell.exe" -NoProfile -InputFormat None -ExecutionPolicy Bypass -Command "iex ((New-Object System.Net.WebClient).DownloadString(''))" && SET "PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin"

Then add the Android SDK paths to your PATH environment variable: C:\users\myUsername\AppData\Local\Android\Sdk and C:\users\myUsername\AppData\Local\Android\Sdk\platform-tools. Note that this is also where you add the path to the JDK if it isn't already added:
Setting Up on Linux
This section will show you how to install and configure the tools required for developing React Native apps on Linux. I’ve specifically used Ubuntu 18.04 for testing things out, but you should be able to translate the commands to the Linux distribution you’re using.
Install Prerequisite Tools
The first step is to install the following tools. The first line installs the tools required by Node, and the second line is required by Watchman, which we’ll also install later:
sudo apt-get install build-essential libssl-dev curl
sudo apt-get install git autoconf automake python-dev
Install NVM
NVM allows us to install and use multiple versions of Node. You can install it with the following commands:
curl -o- | bash
source ~/.profile
Note: be sure to check out the latest version from the releases page to ensure the NVM version you’re installing is updated.
Install JDK
As seen earlier, React Native actually compiles the corresponding code to each of the platforms you wish to deploy to. The JDK enables your computer to understand and run Java code. The specific JDK version required by React Native is JDK version 8.
sudo apt-get install openjdk-8-jdk
Install Watchman
Watchman is a tool for watching changes in the file system. It’s mainly used to speed up the compilation process. If you’ve enabled live preview on the app that you’re developing, the changes you make to the app will be reflected faster in the live preview. The following steps require Git to already be installed on your system:
git clone
cd watchman
git checkout v4.9.0
./autogen.sh
./configure
make
sudo make install
You may encounter an issue which looks like the following:
CXX scm/watchman-Mercurial.o
scm/Mercurial.cpp: In constructor ‘watchman::Mercurial::infoCache::infoCache(std::__cxx11::string)’:
scm/Mercurial.cpp:16:40: error: ‘void* memset(void*, int, size_t)’ clearing an object of non-trivial type ‘struct watchman::FileInformation’; use assignment or value-initialization instead [-Werror=class-memaccess]
   memset(&dirstate, 0, sizeof(dirstate));
                                        ^
In file included from scm/Mercurial.h:10,
                 from scm/Mercurial.cpp:3:
./FileInformation.h:18:8: note: ‘struct watchman::FileInformation’ declared here
 struct FileInformation {
        ^~~~~~~~~~~~~~~
cc1plus: all warnings being treated as errors
make: *** [Makefile:4446: scm/watchman-Mercurial.o] Error 1
Try the following command instead:
./configure --without-python --without-pcre --enable-lenient
Update the Environment Variables
Updating the environment variables is necessary in order for the operating system to be aware of the tools you installed, so you can use them directly from the terminal. Note that this is the final step for setting up all the tools required by React Native. Follow this right before the step for installing the React Native CLI.
To update the environment variables, open your
.bash_profile file:
sudo nano ~/.bash_profile
Add the following at the beginning then save the file:
export ANDROID_HOME=$HOME/Android/Sdk
export PATH=$PATH:$ANDROID_HOME/emulator
export PATH=$PATH:$ANDROID_HOME/tools
export PATH=$PATH:$ANDROID_HOME/tools/bin
export PATH=$PATH:$ANDROID_HOME/platform-tools
Note that the path above assumes that the Android SDK is installed on your user’s home directory:
echo $HOME
Setting up on macOS
Having a Mac allows you to develop both Android and iOS apps with React Native. In this section, I’ll show how you can set up the development environment for both Android and iOS.
Installing prerequisite tools
Since macOS already comes with Ruby and cURL by default, the only tool you need to install is Homebrew, a package manager for macOS:
/usr/bin/ruby -e "$(curl -fsSL)"
If you already have it installed, simply update it with the following:
brew update
For iOS, the following are required:
- Latest version of Xcode: installs the tools required for compiling iOS apps
- Watchman: for watching file changes
- NVM: for installing Node
Install JDK
Install JDK version 8 for macOS, as that’s the one required by React Native:
brew tap AdoptOpenJDK/openjdk
brew cask install adoptopenjdk8
Install Watchman
Watchman speeds up the compilation process of your source code. Installing it is required for both Android and iOS:
brew install watchman
Install NVM
NVM allows you to install multiple versions of Node for testing purposes:
brew install nvm
echo "source $(brew --prefix nvm)/nvm.sh" >> ~/.bash_profile
Update Environment Variables
After you’ve installed all the required tools, and right before you install the React Native CLI, it’s time to update the environment variables. This is an important step, because without doing it, the operating system won’t be aware of the tools required by React Native.
To update it, open your
.bash_profile file:
sudo nano ~/.bash_profile
Then add the path to the Android SDK and platform tools:
export ANDROID_HOME=/Users/$USER/Library/Android/sdk
export PATH=${PATH}:$ANDROID_HOME/tools:$ANDROID_HOME/platform-tools:$ANDROID_HOME/emulator
Setting up Android Studio
Android Studio is the easiest way to install the tools required for Android (Android SDK, Android Emulator) so it’s the method I usually recommend for beginners. You can download the installer for your specific platform here.
On Windows and macOS, it has a setup wizard which you can just run, clicking on Next until the install is complete. Be sure to select the
.dmg file for macOS or the
.exe file for Windows from here if the default download button downloads something different. You can use the following screenshots as a basis for your install.
If you’re on Linux, you need to follow these steps first before you can proceed to installing Android Studio:
Download and extract the
.tar.gz file:
tar -xzvf android-studio-ide-183.5522156-linux.tar.gz
Navigate to the extracted directory and go inside the
bin directory.
Execute
studio.sh. This opens up the installer for Android Studio:
./studio.sh
The Setup Wizard will first greet you with the following screen:
Just click on Next until you see the screen below. The Android SDK, and the latest Android API version, are checked by default. You can also check the Android Virtual Device if you want. This installs the default Android emulator for testing your apps. But I generally recommend using Genymotion instead, as it comes with better tools for testing out different device features:
Once it has installed everything, it will show the following. Just click on Finish to close the Setup Wizard:
The next step is to open Android Studio to configure the SDK platform and tools:
Note: if you’ve previously installed Android Studio, you might have an existing project opened already. In that case, you can launch the SDK Manager from the Tools → SDK Manager top menu. Then check the Show Package Details at the bottom right. This way, you could choose only the sub-components instead of installing the whole thing.
Under the SDK Platforms tab, make sure that the following are checked:
- Android 9.0 (Pie)
- Android SDK Platform 28
- Google APIs Intel x86 Atom_64 System Image or Intel x86 Atom_64 System Image
Under the SDK Tools tab, check the Show Package Details again and make sure that the following are checked:
- Android SDK Build-Tools 28.0.3
- Android SDK Platform-Tools
- Android SDK Tools
- Android Support Repository
- Google Repository
Check the following if you decided to install the Android Emulator:
- Intel x86 Emulator Accelerator (HAXM installer)
That will optimize the emulator’s performance.
Install Node
Execute the following commands to install a specific version of Node and set it as the default:
nvm install 11.2.0
nvm alias default 11.2.0
Once installed, you can verify that it works by executing the following:
node --version
npm --version
Here’s what the output will look like:
Setting Up Expo
In this section, we’ll set up Expo, an alternative way of developing React Native apps. This section assumes that you already have Node and Watchman installed.
To set up Expo, all you have to do is install their command line tool via npm:
npm install -g expo-cli
That’s really all there is to it! The next step is to download the Expo client App for Android or iOS. Note that this is the only way you can run Expo apps while you’re still on development. Later on, you can build the standalone version.
From here, you can either proceed to the Hello World App section if you plan on running apps on your device, or the Setting up Emulators section if you want to run it on the Android Emulator or iOS Simulator.
Setting Up Emulators
Emulators allow you to test out apps you’re developing right from your development machine.
Note: emulators require your machine to have at least 4GB of RAM. Otherwise, they’ll really slow down your machine to the point where you get nothing done because of waiting for things to compile or load.
iOS Simulator
Xcode already comes pre-installed with iOS simulators, so all you have to do is launch it before you run your apps. To launch an iOS simulator from the command line, you first have to list the available simulators:
xcrun simctl list
Take note of the UUID of the device you wish to run, and substitute it for UUID in the command below:
open -a Simulator --args -CurrentDeviceUDID "UUID"
Genymotion
As mentioned earlier, I recommend Genymotion as the emulator for Android, as it has more device features that you can test out — for example, when testing out apps that make use of GPS. Genymotion allows you to select a specific place via a map interface:
To install Genymotion, you first have to download and install VirtualBox. Once that’s done, sign up for a Genymotion account, log in, and download the installer. Windows and macOS come with a corresponding installer. But for Linux, you have to download the installer and make it executable:
chmod +x genymotion-<version>_<arch>.bin
After that, you can now run the installer:
./genymotion-<version>_<arch>.bin
Once it’s done installing, you should be able to search for it on your launcher.
Android Emulator
Even though Genymotion is the first thing I recommend, I believe that Android Emulator has its merits as well. For example, it boots up faster and it feels faster in general. I recommend it if your machine has lower specs or if you have no need for Genymotion’s additional features.
When you launch Android Studio, you can select AVD Manager from the configuration options (or Tools → AVD Manager if you currently have an open project).
Click on the Create Virtual Device button on the window that shows up. It will then ask you to choose the device you wish to install:
You can choose any that you want, but ideally, you want to have the ones which already have a Play Store included. It will be especially useful if your app integrates with Google Sign in or other apps, as it will allow you to install those apps with ease.
Next, it will ask you to download the version of Android you wish to install on the device. Simply select the latest version of Android that’s supported by React Native. At the time of writing this tutorial, it’s Android Pie. Note that this is also the same version that we installed for the Android SDK Platform earlier:
Once installed, click Finish to close the current window then click on Next once you see this screen:
Click on Finish on the next screen to create the emulator. It will then be listed on Android Virtual Device Manager. Click on the play button next to the emulator to launch it.
Install React Native CLI
The final step is to install the React Native CLI. This is the command line tool that allows you to bootstrap a new React Native project, link native dependencies, and run the app on a device or emulator:
npm install -g react-native-cli
Once it’s installed, you can try creating a new project and run it on your device or emulator:
react-native init HelloWorldApp
cd HelloWorldApp
react-native run-android
react-native run-ios
Here’s what the app will look like by default:
At this point, you now have a fully functional React Native development environment set up.
Troubleshooting Common Errors
In this section, we’ll look at the most common errors you may encounter when trying to set up your environment.
Could Not Find tools.jar
You may get the following error:
Could not find tools.jar. Please check that /usr/lib/jvm/java-8-openjdk-amd64 contains a valid JDK installation
This means that the system doesn’t recognize your JDK installation. The solution is to re-install it:
sudo apt-get install openjdk-8-jdk
SDK Location Not Found
You may get the following error when you run your app:
FAILURE: Build failed with an exception.

* What went wrong:
A problem occurred configuring project ':app'.
> SDK location not found. Define location with sdk.dir in the local.properties file or with an ANDROID_HOME environment variable.
This means that you haven’t properly added the path to all of the Android tools required by React Native. You can check it by executing the following:
echo $PATH
It should show the following path:
Android/sdk/tools
Android/sdk/tools/bin
Android/sdk/platform-tools
If not, then you have to edit either your
.bashrc or
.bash_profile file to add the missing path. The config below adds the path to the platform tools:
sudo nano ~/.bash_profile
export PATH=$PATH:$ANDROID_HOME/tools:$ANDROID_HOME/platform-tools
Unable to Find Utility “instruments”
If you’re developing for iOS, you might encounter the following error when you try to run the app:
Found Xcode project TestProject.xcodeproj
xcrun: error: unable to find utility "instruments", not a developer tool or in PATH
The problem is that Xcode command line tools aren’t installed yet. You can install them with the following command:
xcode-select --install
“Hello World” App
Now that your development environment is set up, you can start creating the obligatory “hello world” app. The app you’re going to create is a Pokemon search app. It will allow the user to type the name of a Pokemon and view its details.
Here’s what the final output will look like:
You can find the source code on this GitHub repo.
Bootstrapping the App
On your terminal, execute the following command to create a new React Native project:
react-native init RNPokeSearch
For those of you who decided to use Expo instead, here’s the equivalent command for bootstrapping a new React Native project on Expo. Under Managed Workflow, select blank, enter “RNPokeSearch” for the project name, and install dependencies using Yarn:
expo init RNPokeSearch
Once that's done, navigate into the project directory and install the dependencies: the pokemon library for working with Pokemon names, and axios for making HTTP requests to the PokeAPI:
yarn add pokemon axios
Note that the above command works on both standard React Native projects and Expo’s managed workflow, since they don’t have any native dependencies. If you’re using Expo’s managed workflow, you won’t be able to use packages that have native dependencies.
React Native Project Directory Structure
Before we proceed to coding, let’s first take a look at the directory structure of a standard React Native project:
Here’s a break down of the most important files and folders that you need to remember:
App.js: the main project file. This is where you’ll start developing your app. Any changes you make to this file will be reflected on the screen.
index.js: the entry point file of any React Native project. This is responsible for registering the
App.js file as the main component.
src: acts as the main folder which stores all the source code related to the app itself. Note that this is only a convention. The name of this folder can be anything. Some people use
app as well.
android: where the Android-related code is. React Native isn’t a native language. That’s why we need this folder to bootstrap the Android app.
ios: where the iOS-related code is. This accomplishes the same thing as the
android folder, but for iOS.
Don’t mind the rest of the folders and files for now, as we won’t be needing them when just getting started.
For Expo users, your project directory will look like this:
As you can see, it’s pretty much the same. The only difference is that there are no
android and
ios folders. This is because Expo takes care of running the app for you on both platforms. There’s also the addition of the
assets folder. This is where app assets such as icons and splash screens are stored.
Running the App
At this point, you can now run the app. Be sure to connect your device to your computer, or open your Android emulator or iOS simulator before doing so:
react-native run-android
react-native run-ios
You already saw what the default screen looks like earlier.
If you’re on Expo, you can run the project with the following command. Be sure you’ve already installed the corresponding Expo client for your phone’s operating system before doing so:
yarn start
Once it’s running, it will display the QR code:
If you’re testing on a real device, shake it so the developer menu will show up:
Click on the following:
- Enable Live Reload: automatically reloads your app when you hit save on any of its source code.
- Start Remote JS Debugging: for debugging JavaScript errors on the browser. You can also use
react-native log-android or react-native log-ios for this, but remote debugging has a nicer output, so it’s easier to inspect.
If you want, you can also set the debug server. This allows you to disconnect your mobile device from your computer while you develop the app. You can do that by selecting Dev Settings in the developer menu. Then under the Debugging section, select Debug server host & port for device. This opens up a prompt where you can enter your computer’s internal IP address and the port where Metro Bundler runs on (usually port
8081).
Once you’ve set that, select Reload from the developer menu to commit the changes. At this point, you can now disconnect your device from the computer. Note that any time you install a package, you have to connect your device, quit Metro Bundler, and run the app again (using
react-native run-android or
react-native run-ios). That’s the only way for the changes to take effect.
Coding the App
Both standard React Native projects and Expo have built-in components which you can use to accomplish what you want. Simply dig through the documentation and you’ll find information on how to implement what you need. In most cases, you either need a specific UI component or an SDK which works with a service you plan on using. You can use Native Directory to look for those or just plain old Google. More often than not, here’s what your workflow is going to look like:
- Look for an existing package which implements what you want.
- Install it.
- Link it — only for native modules. If you’re on Expo, you don’t really need to do this because you can only install pure JavaScript libraries — although this might change soon because of the introduction of unimodules and bare workflow.
- Use it on your project.
Now that you’ve set up your environment and learned a bit about the workflow, we’re ready to start coding the app.
Start by replacing the contents of the
App.js file with the following code:
import React from 'react';
import Main from './src/Main';

function App() {
  return <Main />;
}

export default App;
The first line in the code above imports React. You need to import this class any time you want to create a component.
The second line is where we import a custom component called
Main. We’ll create it later. For now, know that this is where we’ll put the majority of our code.
After that, we create the component by creating a new function. All this function does is return the
Main component.
Lastly, we export the class so that it can be imported somewhere else. In this case, it’s actually imported from the
index.js file.
Next, create the
src/Main.js file and add the following:
// src/Main.js
import React, { Component } from 'react';
import {
  SafeAreaView,
  View,
  Text,
  TextInput,
  Button,
  Alert,
  StyleSheet,
  ActivityIndicator
} from 'react-native';
The second line imports the components that are built into React Native. Here’s what each one does:
SafeAreaView: for rendering content within the safe area boundaries of a device. This automatically adds a padding that wraps its content so that it won’t be rendered on camera notches and sensor housing area of a device.
View: a fundamental UI building block. This is mainly used as a wrapper for all the other components so they’re structured in such a way that you can style them with ease. Think of it as the equivalent of
<div>: if you want to use Flexbox, you have to use this component.
Text: for displaying text.
TextInput: the UI component for inputting text. This text can be plain text, email, password, or a number pad.
Button: for showing a platform-specific button. This component looks different based on the platform it runs on. If it’s Android, it uses Material Design. If it’s iOS, it uses Cupertino.
Alert: for showing alerts and prompts.
ActivityIndicator: for showing a loading animation indicator.
StyleSheet: for defining the component styles.
Next, import the libraries we installed earlier:
import axios from 'axios'; import pokemon from 'pokemon';
We’ll also be creating a custom
Pokemon component later. This one is used for displaying Pokemon data:
import Pokemon from './components/Pokemon';
Because getting the required Pokemon data involves making two API requests, we have to set the API’s base URL as a constant:
const POKE_API_BASE_URL = "";
Next, define the component class and initialize its state:
export default class Main extends Component {
  state = {
    isLoading: false, // decides whether to show the activity indicator or not
    searchInput: '', // the currently input text
    name: '', // Pokemon name
    pic: '', // Pokemon image URL
    types: [], // Pokemon types array
    desc: '' // Pokemon description
  };

  // next: add render() method
}
In the code above, we’re defining the main component of the app. You can do this by defining an ES6 class and having it extend React’s
Component class. This is another way of defining a component in React. In the
App.js file, we created a functional component. This time we’re creating a class-based component.
The main difference between the two is that functional components are used for presentation purposes only. Functional components have no need to keep their own state because all the data they require is just passed to them via props. On the other hand, class-based components maintain their own state and they’re usually the ones passing data to functional components.
If you want to learn more about the difference between functional and class-based components, read this tutorial: Functional vs Class-Components in React.
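As a rough sketch in plain JavaScript (no JSX, and not actual React — the names here are made up purely for illustration), the distinction looks like this:

```javascript
// A functional "component" is just a function of its props: same input,
// same output, no state of its own.
function label(props) {
  return `Name: ${props.name}`;
}

// A class-based "component" carries its own state and renders from it.
class NameCard {
  constructor() {
    this.state = { name: '' };
  }
  setState(next) {
    this.state = { ...this.state, ...next };
  }
  render() {
    return label({ name: this.state.name });
  }
}

const card = new NameCard();
card.setState({ name: 'Pikachu' });
console.log(card.render()); // 'Name: Pikachu'
```

The class owns and mutates its state, then passes the relevant pieces down to the stateless function — the same flow the app below uses between Main and Pokemon.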
Going back to the code, we’re initializing the state inside our component. You define it as a plain JavaScript object. Any data that goes into the state should be responsible for changing what’s rendered by the component. In this case, we put in
isLoading to control the visibility of the activity indicator and
searchInput to keep track of the input value in the search box.
This is an important concept to remember. React Native’s built-in components, and even the custom components you create, accept properties that control the following:
- what’s displayed on the screen (data source)
- how they present it (structure)
- what it looks like (styles)
- what actions to perform when user interacts with it (functions)
We’ll go through those properties in more detail in the next section. For now, know that the value of those properties are usually updated through the state.
The rest of the state values are for the Pokemon data. It’s a good practice to set the initial value with the same type of data you’re expecting to store later on — as this serves as documentation as well.
Anything that’s not used for rendering or data flow can be defined as an instance variable, like so:
class Main extends Component {
  appTitle = "RNPokeSearch";
}
Alternatively, it can be defined as an outside-the-class variable, just like what we did with the
POKE_API_BASE_URL earlier:
const POKE_API_BASE_URL = '';

class Main extends Component {
}
Structuring and Styling Components
Let’s return to the component class definition. When you extend React’s
Component class, you have to define a
render() method. This contains the code for returning the component’s UI and it’s made up of the React Native components we imported earlier.
Each component has its own set of props. These are basically attributes that you pass to the component to control a specific aspect of it. In the code below, most of them have the
style prop, which is used to modify the styles of a component. You can pass any data type as a prop. For example, the
onChangeText prop of the
TextInput is a function, while the
types prop in the
Pokemon is an array of objects. Later on in the
Pokemon component, you’ll see how the props will be used:
render() {
  // extract the Pokemon data from the state
  const { name, pic, types, desc, searchInput, isLoading } = this.state;

  return (
    <SafeAreaView style={styles.wrapper}>
      <View style={styles.container}>
        <View style={styles.headContainer}>
          <View style={styles.textInputContainer}>
            <TextInput
              style={styles.textInput}
              onChangeText={(searchInput) => this.setState({ searchInput })}
              value={this.state.searchInput}
              placeholder={"Search Pokemon"}
            />
          </View>
          <View style={styles.buttonContainer}>
            <Button onPress={this.searchPokemon} title="Search" />
          </View>
        </View>
        <View style={styles.mainContainer}>
          { isLoading && <ActivityIndicator size="large" color="#0064e1" /> }
          { !isLoading && <Pokemon name={name} pic={pic} types={types} desc={desc} /> }
        </View>
      </View>
    </SafeAreaView>
  );
}
Breaking down the code above, we first extract the state data:
const { name, pic, types, desc, searchInput, isLoading } = this.state;
Next, we return the component’s UI, which follows this structure:
SafeAreaView.wrapper View.container View.headContainer View.textInputContainer TextInput View.buttonContainer Button View.mainContainer ActivityIndicator Pokemon
The above structure is optimized for using Flexbox. Go ahead and define the component styles at the bottom of the file:
const styles = StyleSheet.create({
  wrapper: {
    flex: 1
  },
  container: {
    flex: 1,
    padding: 20,
    backgroundColor: '#F5FCFF'
  },
  headContainer: {
    flex: 1,
    flexDirection: 'row',
    marginTop: 100
  },
  textInputContainer: {
    flex: 2
  },
  buttonContainer: {
    flex: 1
  },
  mainContainer: {
    flex: 9
  },
  textInput: {
    height: 35,
    marginBottom: 10,
    borderColor: "#ccc",
    borderWidth: 1,
    backgroundColor: "#eaeaea",
    padding: 5
  }
});
In React Native, you define styles by using
StyleSheet.create() and passing in the object that contains your styles. These style definitions are basically JavaScript objects, and they follow the same structure as your usual CSS styles:
element: { property: value }
The
wrapper and
container are set to
flex: 1, which means they will occupy the entirety of the available space, because they have no siblings. React Native defaults to
flexDirection: 'column', which means it will lay out the flex items vertically, like so:
In contrast, (
flexDirection: 'row') lays out items horizontally:
It’s different for
headContainer, because even though it’s set to
flex: 1, it has
mainContainer as its sibling. This means that
headContainer and
mainContainer will both share the same space.
mainContainer is set to
flex: 9 so it will occupy the majority of the available space (around 90%), while
headContainer will only occupy about 10%.
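The proportion math behind those flex values can be sketched in plain JavaScript (this helper isn't part of React Native — it just illustrates how siblings split space):

```javascript
// Siblings split the available space in proportion to their flex values.
// For example, headContainer (flex: 1) and mainContainer (flex: 9) share 1:9.
function flexShares(flexValues) {
  const total = flexValues.reduce((sum, f) => sum + f, 0);
  return flexValues.map(f => Math.round((f / total) * 100) + '%');
}

console.log(flexShares([1, 9])); // ['10%', '90%']
console.log(flexShares([2, 1])); // ['67%', '33%']
```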
Let’s move on to the contents of
headContainer. It has
textInputContainer and
buttonContainer as its children. It’s set to
flexDirection: 'row', so that its children will be laid out horizontally. The same principle applies when it comes to space sharing:
textInputContainer occupies two thirds of the available horizontal space, while
buttonContainer only occupies one third.
The rest of the styles are pretty self explanatory when you have a CSS background. Just remember to omit
- and set the following character to uppercase. For example, if you want to set
background-color, the React Native equivalent is
backgroundColor.
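For example, here's how a few familiar CSS declarations translate into a React Native style object (the property values are chosen just for illustration):

```javascript
// CSS:
//   background-color: #eaeaea;
//   font-size: 16px;
//   border-width: 1px;
// React Native equivalent — omit the dash, uppercase the next letter,
// and use plain numbers instead of "px" units:
const styles = {
  box: {
    backgroundColor: '#eaeaea',
    fontSize: 16,
    borderWidth: 1
  }
};

console.log(styles.box.backgroundColor); // '#eaeaea'
```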
Note: not all CSS properties that are available on the Web are supported on React Native. For example, things like floats or table properties aren’t supported. You can find the list of supported CSS properties in the docs for View and Text components. Someone has also compiled a React Native Styling Cheat Sheet. There’s also a style section in the documentation for a specific React Native component that you want to use. For example, here are the style properties that you can use for the Image component.
Event Handling and Updating the State
Let’s now break down the code for the
TextInput and
Button components. In this section, we’ll talk about event handling, making HTTP requests, and updating the state in React Native.
Let’s start by examining the code for
TextInput:
<TextInput
  style={styles.textInput}
  onChangeText={(searchInput) => this.setState({ searchInput })}
  value={searchInput}
  placeholder={"Search Pokemon"}
/>
In the above code, we’re setting the function to execute when the user inputs something in the component. Handling events like these is similar to how it’s done in the DOM: you simply pass the event name as a prop and set its value to the function you wish to execute. In this case, we’re inlining it because we’re just updating the state. The value input by the user is automatically passed as an argument to the function you supply, so all you have to do is update the state with that value. Don’t forget to set the value of the
TextInput to that of the state variable. Otherwise, the value input by the user won’t show as they type on it.
Next, we move on to the
Button component. Here, we’re listening for the
onPress event:
<Button onPress={this.searchPokemon} title="Search" />
Once pressed, the
searchPokemon() function is executed. Add this function right below the
render() method. This function uses the async/await pattern because performing an HTTP request is an asynchronous operation. You can also use Promises, but to keep our code concise, we’ll stick with async/await instead. If you’re not familiar with it, be sure to read this tutorial:
render() {
  // ...
}

searchPokemon = async () => {
  try {
    // check if the entered Pokemon name is valid
    const pokemonID = pokemon.getId(this.state.searchInput);

    this.setState({
      isLoading: true // show the loader while request is being performed
    });

    const { data: pokemonData } = await axios.get(`${POKE_API_BASE_URL}/pokemon/${pokemonID}`);
    const { data: pokemonSpecieData } = await axios.get(`${POKE_API_BASE_URL}/pokemon-species/${pokemonID}`);

    const { name, sprites, types } = pokemonData;
    const { flavor_text_entries } = pokemonSpecieData;

    this.setState({
      name,
      pic: sprites.front_default,
      types: this.getTypes(types),
      desc: this.getDescription(flavor_text_entries),
      isLoading: false // hide loader
    });
  } catch (err) {
    Alert.alert("Error", "Pokemon not found");
  }
}
Breaking down the code above, we first check if the entered Pokemon name is valid. If it’s valid, the National Pokedex ID (if you open the link, that’s the number on top of the Pokemon name) is returned and we supply it as a parameter for the HTTP request. The request is made using axios’
get() method, which corresponds to an HTTP GET request. Once the data is available, we extract what we need and update the state.
Here’s the
getTypes() function. All it does is reassign the
slot and
type properties of the Pokemon types to
id and
name:
getTypes = (types) => {
  return types.map(({ slot, type }) => {
    return {
      "id": slot,
      "name": type.name
    };
  });
}
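To see what getTypes() does, here's a quick sketch run against a hand-made sample of the API's types field (the sample data is assumed for illustration, not an actual API response):

```javascript
const getTypes = (types) => {
  return types.map(({ slot, type }) => {
    return { "id": slot, "name": type.name };
  });
};

// Shape loosely based on a Pokemon's "types" field:
const sampleTypes = [
  { slot: 1, type: { name: 'grass' } },
  { slot: 2, type: { name: 'poison' } }
];

console.log(getTypes(sampleTypes));
// [ { id: 1, name: 'grass' }, { id: 2, name: 'poison' } ]
```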
Here’s the
getDescription() function. This finds the first English version of the
flavor_text:
getDescription = (entries) => {
  return entries.find(item => item.language.name === 'en').flavor_text;
}
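A similar sketch for getDescription(), again with made-up entries:

```javascript
const getDescription = (entries) => {
  return entries.find(item => item.language.name === 'en').flavor_text;
};

// Entries in several languages; only the first English one is returned:
const sampleEntries = [
  { flavor_text: 'Une description', language: { name: 'fr' } },
  { flavor_text: 'An English description', language: { name: 'en' } }
];

console.log(getDescription(sampleEntries)); // 'An English description'
```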
Pokemon Component
Earlier, we imported and used a component called
Pokemon, but we haven’t really created it yet. Let’s go ahead and do so. Create a
src/components/Pokemon.js file and add the following:
// src/components/Pokemon.js
import React from 'react';
import { View, Text, Image, FlatList, StyleSheet } from 'react-native';

const Pokemon = ({ name, pic, types, desc }) => {
  if (!name) {
    return null;
  }

  return (
    <View style={styles.mainDetails}>
      <Image source={{uri: pic}} style={styles.image} resizeMode={"contain"} />
      <Text style={styles.mainText}>{name}</Text>

      <FlatList
        columnWrapperStyle={styles.types}
        data={types}
        numColumns={2}
        keyExtractor={(item) => item.id.toString()}
        renderItem={({item}) => {
          return (
            <View style={[styles[item.name], styles.type]}>
              <Text style={styles.typeText}>{item.name}</Text>
            </View>
          )
        }}
      />

      <View style={styles.description}>
        <Text>{desc}</Text>
      </View>
    </View>
  );
}

const styles = StyleSheet.create({
  mainDetails: {
    padding: 30,
    alignItems: 'center'
  },
  image: {
    width: 100,
    height: 100
  },
  mainText: {
    fontSize: 25,
    fontWeight: 'bold',
    textAlign: 'center'
  },
  description: {
    marginTop: 20
  },
  types: {
    flexDirection: 'row',
    marginTop: 20
  },
  type: {
    padding: 5,
    width: 100,
    alignItems: 'center'
  },
  typeText: {
    fontWeight: 'bold'
  }
});

export default Pokemon;
In the code above, we first check whether name has a falsy value. If it does, we simply return null, as there's nothing to render.
We're also using two new built-in React Native components:

- Image: used for displaying images from the Internet or from the file system
- FlatList: used for displaying lists
As we saw earlier, we're passing in the Pokemon data as props for this component. We can extract those props the same way we extract individual properties from an object:
const Pokemon = ({ name, pic, types, desc }) => { // .. }
The Image component requires a source to be passed to it. The source can either be an image from the file system or, in this case, an image from the Internet. The former requires the image to be included using require(), while the latter requires the image URL to be used as the value of the uri property of the object you pass to it.
resizeMode allows you to control how the image will be resized based on its container. We used contain, which means the image is resized to fit within its container while maintaining its aspect ratio. Note that the container is the Image component itself: we've set its width and height to 100, so the image will be resized to those dimensions. If the original image is wider than it is tall, a width of 100 is used and the height adjusts accordingly to maintain the aspect ratio. If the original image is smaller than the container, it simply keeps its original size:
<Image source={{uri: pic}} style={styles.image} resizeMode={"contain"} />
Next is the FlatList component, which is used for rendering a list of items. In this case, we're using it to render the Pokemon's types. It requires data, an array containing the items you want to render, and renderItem, the function responsible for rendering each item in the list. The item in the current iteration can be accessed the same way props are accessed in a functional component:
<FlatList
  columnWrapperStyle={styles.types}
  data={types}
  numColumns={2}
  keyExtractor={(item) => item.id.toString()}
  renderItem={({item}) => {
    return (
      <View style={[styles[item.name], styles.type]}>
        <Text style={styles.typeText}>{item.name}</Text>
      </View>
    )
  }}
/>
In the code above, we also supplied the following props:
- columnWrapperStyle: used for specifying the styles for each column. In this case, we want to render each list item inline, so we've specified flexDirection: 'row'.
- numColumns: the maximum number of columns to render for each row of the list. In this case, we've specified 2, because a Pokemon can have at most two types.
- keyExtractor: the function used for extracting the keys for each item. You can actually omit this one if you pass a key prop to the outermost component of each list item.
At this point, you can now test the app on your device or emulator:
react-native run-android
react-native run-ios
yarn start
Conclusion and Next Steps
That’s it! In this tutorial, you’ve learned how to set up the React Native development environment using both the standard procedure and Expo. You also learned how to create your very first React Native app.
To learn more, check out these resources:
You can find the source code used in this tutorial on this GitHub repo.
|
https://www.sitepoint.com/getting-started-with-react-native/
|
CC-MAIN-2020-40
|
refinedweb
| 6,127
| 54.52
|
While people are likely to bunch video games together as a single industry, the video game industry is very diverse from a technological standpoint. Like athletic brands that separate their product lines to cater to different sports and activities, video game development is as varied as the number of platforms used to play them on. From consoles like Xbox and PlayStation, to PCs and mobile platforms, game developers have to consider their audience and what they use to play before writing the first lines of code.
Mobile platforms such as iOS and Android comprise the biggest market for video games. According to Statista, there were around 900+ million smartphones sold worldwide in 2014—a huge number that makes up a considerable chunk of the human population. The video game industry is aware of this, which is why everyone is making an effort to get a piece of the mobile gaming pie.
Are you thinking of developing your own games? Whether you want to write for consoles, PCs, or mobile platforms, you're going to have to learn about the programming languages you need to work with first. In this article, we will review the most common programming languages used for video game development, looking at each platform and the technologies and most widely used frameworks behind it.
It is important to note that video game development isn't just about coding. Preparing resources such as textures, sound, and images, along with the game design, often takes more time than the actual programming phase.
Console Game Development
Microsoft's Xbox and Sony's PlayStation are the two big players in the console industry. For PlayStation, there's a mobile developer line called PlayStation Mobile, whose SDK is based on Mono, so the programming language used most of the time is C#. As for traditional PlayStation game development, C++ is one of the most widely used languages. As you might already know, additional development tools are used for bigger games; there's a separate section at the end of the article for these. Go ahead and scroll down if you are interested in that part.
Desktop Game Development
There's a rich history of game development for desktop computers. I think many of us remember the good old Lotus car game:
The game was designed to run on x286 machines (image source vogons.org):
Back then, there were some games created using Pascal and Delphi. Delphi was the popular programming language at that time, but C was there too. Of course, most of these game development environments have since become obsolete, replaced by more modern dev languages.
Nowadays, programming platforms like CUDA are gaining more and more acceptance among game developers, mostly because of the demand for realistic graphics and textures, as well as high levels of animation and motion. Meeting these expectations requires a lot of fast video calculation. CUDA makes it possible to parallelize the execution of mathematical operations, and the programmer plays a big role in this.
Here is some CUDA code, just to illustrate what game developers have to deal with. The basic idea behind programming for the CUDA platform is that everything should be able to run in parallel and computations need to be optimized as much as possible. The code looks very much like C and C++ code, but it does have some specific elements.
#include <iostream>

__global__ void sum( int a, int b, long *c ) {
    *c = a + b;
}

int main( void ) {
    long c;
    long *sum_c;

    // allocate device memory for sum_c
    HANDLE_ERROR( cudaMalloc( (void**)&sum_c, sizeof(long) ) );

    // launch the kernel with a single thread
    sum<<<1,1>>>( 134, 2237, sum_c );

    // copy the result back to the host variable c
    HANDLE_ERROR( cudaMemcpy( &c, sum_c, sizeof(long), cudaMemcpyDeviceToHost ) );

    // display the sum
    printf( "134 + 2237 = %ld\n", c );

    // free the device memory
    cudaFree( sum_c );
    return 0;
}
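To make the parallel idea concrete without any GPU code, here is a rough analogy (not CUDA itself): the classic first CUDA exercise is an elementwise vector add, with one GPU thread per element, and NumPy expresses the same data-parallel operation in a single vectorized statement.

```python
import numpy as np

# Elementwise vector add: conceptually what a CUDA kernel does with one
# GPU thread per element, here expressed as a single vectorized operation.
a = np.arange(5, dtype=np.int64)    # [0, 1, 2, 3, 4]
b = np.full(5, 10, dtype=np.int64)  # [10, 10, 10, 10, 10]
c = a + b                           # one operation applied across all elements

print(c.tolist())  # [10, 11, 12, 13, 14]
```

The point is the programming model: instead of looping element by element, you describe one operation applied across the whole array, and the runtime decides how to execute it in parallel.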
Although it's not as popular as C++ with DirectX and OpenGL, Python does support game development. For example, few people know that EVE Online and Disney’s Pirates of the Caribbean were developed using Python (there's more listed on this page). PyGame is a library that is developer-friendly and easy to use for building games. Python is an easy language to start with, so building games in Python is not a hard thing to do either.
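To sketch what a library like PyGame handles for you, here is a minimal fixed-timestep game loop in plain Python. The update and render steps are placeholder comments, and the run_game_loop function and its parameters are our own illustration, not part of PyGame's API.

```python
import time

def run_game_loop(total_frames=5, fps=60):
    """Run a fixed-timestep loop for total_frames frames; return the frame indices."""
    frame_time = 1.0 / fps
    rendered = []
    for frame in range(total_frames):
        start = time.perf_counter()
        # update(): advance the game state here
        rendered.append(frame)
        # render(): draw the current state here
        elapsed = time.perf_counter() - start
        if elapsed < frame_time:
            time.sleep(frame_time - elapsed)  # cap the frame rate
    return rendered

print(run_game_loop())  # [0, 1, 2, 3, 4]
```

Real engines wrap exactly this kind of loop with event handling, a frame-rate clock, and drawing surfaces, so you rarely write it by hand.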
If we take the Web as a platform, which is used mostly on desktop, then until a couple of years ago Flash was the leading platform for creating games. However, WebGL and JavaScript are being used more and more, mostly because Flash came to be considered insecure and slow. With the evolution of browsers, JavaScript engines got fast enough to handle the amount of calculation needed for games. There is a webpage called html5gameengine.com which compares the available JS-based game engines. This can be a good place to start when you want to hack something together or build a cross-platform game for desktop and mobile devices. Engines like Phaser, EaselJS, CraftyJS, BabylonJS, and PixiJS are all good starting points for Web game development.
Here is a sample from CraftyJS (source GitHub), which implements a ping-pong game with some basic navigation.
var Crafty = require('craftyjs');

Crafty.init(600, 300);
Crafty.background('rgb(127,127,127)');

// Paddles
Crafty.e("Paddle, 2D, DOM, Color, Multiway")
  .color('rgb(255,0,0)')
  .attr({ x: 20, y: 100, w: 10, h: 100 })
  .multiway(4, { W: -90, S: 90 });

Crafty.e("Paddle, 2D, DOM, Color, Multiway")
  .color('rgb(0,255,0)')
  .attr({ x: 580, y: 100, w: 10, h: 100 })
  .multiway(4, { UP_ARROW: -90, DOWN_ARROW: 90 });

// Ball
Crafty.e("2D, DOM, Color, Collision")
  .color('rgb(0,0,255)')
  .attr({
    x: 300, y: 150, w: 10, h: 10,
    dX: Crafty.math.randomInt(2, 5),
    dY: Crafty.math.randomInt(2, 5)
  })
  .bind('EnterFrame', function () {
    // hit floor or roof
    if (this.y <= 0 || this.y >= 290)
      this.dY *= -1;

    if (this.x > 600) {
      this.x = 300;
      Crafty("LeftPoints").each(function () {
        this.text(++this.points + " Points");
      });
    }

    if (this.x < 10) {
      this.x = 300;
      Crafty("RightPoints").each(function () {
        this.text(++this.points + " Points");
      });
    }

    this.x += this.dX;
    this.y += this.dY;
  })
  .onHit('Paddle', function () {
    this.dX *= -1;
  });
WebGL (Web Graphics Library) is a technology used by most of the mentioned JS frameworks. This is an API based on OpenGLES, which supports 3D rendering within the browser with the help of the HTML canvas element. You can read more details about WebGL Fundamentals in this article.
Mobile Game Development
The mobile market has three big players from a platform point of view: Android (Java), iOS (Objective-C), and Windows Phone (C#). As Android is based on Java, there are game engines written for the Android platform, like the multi-platform Cocos2d for Android, and Unity supports the Android platform as well. Note: there are some extra details about Unity available at the end of the article.
iOS offers more alternatives for developers, including Cocos2d, SpriteKit, Sparrow, and many others.
Additional Development Tools
Blender is one of the most widely used tools for creating textures, sprites, and animations, and for rendering. It is an open source project, works very well, and offers a wide variety of functionality.
The Unreal Engine really stunned the game development industry when it first appeared in 1998. The engine has been developed mostly in C++, and the current major version is Unreal Engine 4. Epic Games also developed a programming language to be used with the engine, called UnrealScript (UScript), though Unreal Engine 4 has since replaced it with C++ and Blueprints. Below you can see a sample of UnrealScript source code:
state() UpdatePlayer
{
    function DrawFace( String playerName, float updateInterval )
    {
        if (playerName != "")
        {
            waitFor(updateInterval);
            InternalDraw(playerName, BODY.Face);
        }
    }

    function DrawHand( String playerName, float updateInterval )
    {
        if (playerName != "")
        {
            waitFor(updateInterval);
            InternalDraw(playerName, BODY.Hand);
        }
    }
}
Microsoft adopted Unity for Xbox development, but the game engine is available across many platforms. The engine was written in C++ and C#, and currently supports a wide range of platforms, from PS3, PS4, and Xbox to PS Vita, Apple TV, Android, and even Windows Phone. It offers a full ecosystem for creating games. Below is some source code taken from a Unity tutorial:
using UnityEngine;
using System.Collections;

public class TransformFunctions : MonoBehaviour
{
    public float moveSpeed = 10f;
    public float turnSpeed = 50f;

    void Update ()
    {
        if (Input.GetKey(KeyCode.UpArrow))
            transform.Translate(Vector3.forward * moveSpeed * Time.deltaTime);

        if (Input.GetKey(KeyCode.DownArrow))
            transform.Translate(-Vector3.forward * moveSpeed * Time.deltaTime);

        if (Input.GetKey(KeyCode.LeftArrow))
            transform.Rotate(Vector3.up, -turnSpeed * Time.deltaTime);

        if (Input.GetKey(KeyCode.RightArrow))
            transform.Rotate(Vector3.up, turnSpeed * Time.deltaTime);
    }
}
As you can see, the code is very easy to read; it's almost like reading ordinary text. The Unity engine provides wrappers and constants. For example, the Input.GetKey() method checks whether the key pressed by the user matches the one given as a parameter, and KeyCode.UpArrow is a constant.
As you can see, developers have a wide range of tools and programming languages to choose from when they start developing their games. But before choosing an engine or programming language, you must think about the target platforms and the target audience. Could you imagine a teenager now sitting in front of a PC, playing with mouse and keyboard? That may have been the standard a decade ago, but it's not that common anymore. Teenagers and gamers today are more into "touch" interfaces: smartphones, tablets, and notebooks with touchscreens. They grew up with touchscreens everywhere, so when you start to develop your game, please keep these details in mind before selecting a technology.
|
https://www.freelancer.com.ru/community/articles/top-7-programming-languages-used-in-video-games
|
CC-MAIN-2018-47
|
refinedweb
| 1,593
| 56.76
|
Tk_SetWindowVisual - change visual characteristics of window
#include <tk.h>
int Tk_SetWindowVisual(tkwin, visual, depth, colormap)
tkwin: Token for window.
visual: New visual type to use for tkwin.
depth: Number of bits per pixel desired for tkwin.
colormap: New colormap for tkwin, which must be compatible with visual and depth.
When Tk creates a new window it assigns it the default visual characteristics (visual, depth, and colormap) for its screen. Tk_SetWindowVisual may be called to change them. Tk_SetWindowVisual must be called before the window has actually been created in X (e.g. before Tk_MapWindow or Tk_MakeWindowExist has been invoked for the window). The safest thing is to call Tk_SetWindowVisual immediately after calling Tk_CreateWindow. If tkwin has already been created before Tk_SetWindowVisual is called then it returns 0 and doesn't make any changes; otherwise it returns 1 to signify that the operation completed successfully.
Note: Tk_SetWindowVisual should not be called if you just want to change a window's colormap without changing its visual or depth; call Tk_SetWindowColormap instead.
colormap, depth, visual
|
http://search.cpan.org/~srezic/Tk-804.031_503/pod/pTk/SetVisual.pod
|
CC-MAIN-2016-22
|
refinedweb
| 167
| 50.33
|
Cleanup API for core cursors. All the functions here are located in the RDM DB Engine Library. Linker option: -lrdmrdm
See cursor for a more detailed description of a cursor.
#include <rdmcursorapi.h>
Close an RDM_CURSOR.
This function will close a cursor and free any status and row sets associated with the cursor. After a cursor has been closed it can continue to be used and associated with other row sets. Once a cursor has been freed with rdm_cursorFree() it can no longer be used.
#include <rdmcursorapi.h>
Free an RDM_CURSOR.
This function will free all resources associated with a cursor. Attempting to use a cursor after it has been freed may result in an application crash.
|
https://docs.raima.com/rdm/14_1/group__cursor__cleanup.html
|
CC-MAIN-2019-18
|
refinedweb
| 117
| 68.26
|
Play and Record Sound with Python
This Python module provides bindings for the PortAudio library and a few convenience functions to play and record NumPy arrays containing audio signals.
- Documentation:
- Source code repository and issue tracker:
sounddevice.RawStream, sounddevice.RawInputStream and sounddevice.RawOutputStream use plain Python buffer objects and don't need NumPy at all.

To install the module for the current user, run:
python3 -m pip install sounddevice --user
If you want to install it system-wide for all users (assuming you have the necessary rights), you can just drop the --user option.
To un-install, use:
python3 -m pip uninstall sounddevice
If you are using Windows, you can alternatively install one of the packages provided at. The PortAudio library is also included in the package and you can get the rest of the dependencies on the same page.
Usage
First, import the module:
import sounddevice as sd
Recording

To record audio data from your sound device into a NumPy array, use sounddevice.rec():

duration = 10.5  # seconds
fs = 44100       # sample rate in Hz
myrecording = sd.rec(int(duration * fs), samplerate=fs, channels=2)
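Audio data in this API is just a NumPy array, so you can also synthesize it directly. Below is a sketch (the fs, duration, and tone names are our own) that builds a one-second 440 Hz sine tone suitable for sd.play(); the playback call itself is commented out so the snippet runs without audio hardware.

```python
import numpy as np

fs = 44100         # sample rate in Hz
duration = 1.0     # seconds
frequency = 440.0  # A4

# One value per sample; sounddevice plays float arrays with values in [-1.0, 1.0].
t = np.arange(int(duration * fs)) / fs
tone = (0.2 * np.sin(2 * np.pi * frequency * t)).astype(np.float32)

print(tone.shape)  # (44100,)
# sd.play(tone, fs)  # uncomment to send it to the default output device
```

Scaling by 0.2 keeps the amplitude well inside the valid range to avoid clipping.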
Callback “wire” with sounddevice.Stream:
import sounddevice as sd

duration = 5.5  # seconds

def callback(indata, outdata, frames, time, status):
    if status:
        print(status)
    outdata[:] = indata

with sd.Stream(channels=2, callback=callback):
    sd.sleep(int(duration * 1000))
Same thing with sounddevice.RawStream:
import sounddevice as sd

duration = 5.5  # seconds

def callback(indata, outdata, frames, time, status):
    if status:
        print(status)
    outdata[:] = indata

with sd.RawStream(channels=2, dtype='int24', callback=callback):
    sd.sleep(int(duration * 1000))
Note
We are using 24-bit samples here for no particular reason (just because we can).
Blocking Read/Write Streams
Release History
Download Files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
|
https://pypi.org/project/sounddevice/
|
CC-MAIN-2017-47
|
refinedweb
| 283
| 50.94
|
[flexcoders] Changing font size of button bar
( fontSize, ); - Change fontSize style property of the button bar/buttons Are there any better approaches in Flex to do the same? Thanks, Weyert de Boer
Re: [flexcoders] EPUB reader in Flex
Well, ePub is basically just HTML. As far as I am aware off... What about using the new text engine and load the text as HTML? That might work.
Re: [flexcoders] .csv import stuff
(values); } Weyert de Boer is there any nice and easy way to 'Browse' .csv file on flex and after that send 'Import Request' to php and php will import this .csv file's data to database.? any thought on this stuff!!? From Chandigarh to Chennai - find friends all over India. Click here.
Re: [flexcoders] Re: how open .doc (words )file
By unzipping it. On 7/05/2009, at 5:03 AM, securenetfreedom wrote: How would you gain access to the .doc XML?
Re: [flexcoders] how open .doc (words )file
I would think that using the Word COM+ objects would be the easiest way to read Word documents. Of course, you can also only support the XML-based version of Word documents Office 2007. On 6/05/2009, at 6:23 PM, farid wl wrote: Hi Dear friends anybody knows how we can read and open
Re: [flexcoders] Printing on Mac
And it's better to add the sprites offscreen to the stage to avoid all kind of other printing issues (mainly also blank pages). The problem I am having that the result is crap on the Mac the same code looks all sharp and nice but when I printing on the Mac it's a bit blurry. Really
Re: [flexcoders] Control remote desktop with Flash
You can have a look at the FVNC project? It's a VNC client for Flash/ Flex. You are currently unable to with a VNC server in Flash because you can't access the desktop and mouse/keyboard outside the Flash region. And the Flash Player lacks support for server sockets...
Re: [flexcoders] fvnc
Yes, you can control a remote computer using FVNC. You only can't host a VNC server from Flash. Technically impossible because of lacking features in the Flash Player On 27/04/2009, at 9:14 AM, venkat eswar wrote: Can we do remote desktop feature using fvnc(flash vnc client).Please
Re: [flexcoders] Re: Sending a HEAD request
Today on Twitter I heard about this library as3httpclient. Appears to support doing HEAD requests. See:
Re: [flexcoders] Flex Interview Questions
Why would you want to ask about specific components which can be easily learned after you have hired the guy? Better to ask question about programming principles and maybe Flash specific questions. But asking about specific components is a bit odd in my opinion. Anyways, Sidney had some nice
Re: [flexcoders] virtual keyboard
Please keep in mind that a lot of the virtual keyboard designs are patented in the U.S. As long you don't have to deal in the U.S. you can ignore my comment.
Re: [flexcoders] Merapi and execute file from AIR
You can find a .NET example of communicating with LocalConnection in one of the Flex books. Pro Flex 2. I think it was.
Re: [flexcoders] Merapi and execute file from AIR
No, I wouldn't use Java for this. This will increase the deployment size of your product. Because you would need to have the Java runtime installed.
Re: [flexcoders] Merapi and execute file from AIR
I would've have just written to simple C++ applications which start those applications. If you only want to execute files. I won't have a huge memory hog for such things then.
Re: [flexcoders] Merapi and execute file from AIR
Yeah, working on a C or Freepascal version of Merapi ;)
[flexcoders] AMF and sending/receiving pictures
Does anyone know if there are any solutions to allow sending pictures from the AMF server-side to the Flash/Flex client? Or do I need to go the Base64 or supply as image url way instead?
Re: [flexcoders] Open Source Library for Draw Graph
Have a look at BirdEye: Hi, someone know an open source library for writing VISUALLY graph ?? Thanks.
Re: [flexcoders] Launching Air application from ASP .NET
Why would you want to run the AIR application on the webserver where the ASP.NET project is deployed on?
Re: [flexcoders] adobe TV - QR code reader
It's the session by Mario Klinggemann. He did that session during Aral's online conference.
Re: [flexcoders] Re: try, catch, finally ...
I respectfully disagree with not handling exceptions and let them raised in the player. Of course, eating exceptions is terribly bad. Yes, raising exceptions because user input is bad is a long stretch. Hi Kevin, Try-Catch blocks are an absolute necessity as without them, you are putting
Re: [flexcoders] distributing air apps without install
Yes, you only need to have a freely redistrubition license from Adobe which also comes with the information to install your AIR app and the runtime silently. See: Is it possible to distribute an AIR app that can be run without
Re: [flexcoders] Thumbnail from screen (contents)
slide you can reuse it by it like: slideThumb.attachBitmap( thumbnailData, 1); Yours, Weyert de Boer
Re: [flexcoders] Re: Reading the output of an AIR application in C#
You could check if AIR allows to output to the output stream (or System.Out) while running and you can catch that easily under Windows. See for more information: Of course, can also add some
Re: [flexcoders] Re: Reading the output of an AIR application in C#
You can find most of the information you need at:.
Re: [flexcoders] syncing AIR/Flex app with mobile phones
Hi, You could consider to use the SyncML protocol. Now you probably would still need some program in the middle who can push the SyncML data to the mobile phone like via Bluetooth. Of course, you can also generate an iCal file or a vCard file. Maybe? Of course, you can just push to something
Re: [flexcoders] Displaying Extremely large fonts
Hmm. Did you try to scale it? Maybe that works?!?!
Re: [flexcoders] Re: Socket communications in Flex
It's \0 or null character for the XMLSocket class. This limitation doesn't exist when you use the binary sockets.
Re: [flexcoders] Re: Flas 10 on mobile is a go
Carlos, just get a Palm Pre ;) Same coolness but with Flash.
Re: [flexcoders] Re: Socket communications in Flex
Have a look at: 2007 Leveraging Apollo Runtime.zip Shows how you can make a socket server in Ruby and talk with a Bluetooth GPS device.
Re: [flexcoders] Alternative Rich Text Editor
You could consider using Flash Player 10 and the new text engine. The best you can get:
Re: [flexcoders] GPS in Flex
During WebDU 2007 I presented a similar solution only using Ruby instead. If you want you can have it. As part of the example during the presentation I used one of those Palm GPS mouses. But it should work with all GPS devices which support NMEA standard.
Re: [flexcoders] Using ActiveXObject with Adobe AIR and Flex
You can't really use ActiveX objects from some other different then IE. Firefox, Opera don't support it. Ofcourse, also it's only support under Windows. What you could consider is let the ActiveX (if possible) register an url scheme and use that to trigger it somehow together.
Re: [flexcoders] Using ActiveXObject with Adobe AIR and Flex
Something like: I like that idea to use something like to launch application from AIR and then let the handler check if it's coming from an AIR application ;)
Re: [flexcoders] Any Developers on a Mac?
Wow. Currently, each notebook I have had from Apple needed fixes via warranty. My G4 got is screen replaced and the MBP is still a big drama with two topcase replacements, two screens and keyboard key fix. Basically it needs to a new warranty because the last time they broke the keyboard
Re: [flexcoders] Shape Detection in BitmapData
You might want to check out the last session form Mario Klingemann at MAX. You can find it at:
Re: [flexcoders] Identifying AIR application instance
You could make a constant class and then re-compile and package a AIR file for each download of the application. Or you can the easy way and generate a unique identifier and store it somewhere like encrypted store when the application is first launched. After you can always pass along the
Re: [flexcoders] Question about a component Kap Lab Diagrammer
Nope, but I have experiences with yFiles Flex and that's a real nice component package.
Re: [flexcoders] How to extract and save the contents of an image ,audio or video file ?
Just put the file in a blob field of the database. If you really want to store it in the database (sounds silly too me).
Re: [flexcoders] Re: How to extract and save the contents of an image ,audio or video file ?
You can just try to compress the file first and then just send the bytearray to the blob. :)
Re: [flexcoders] 64bit Flash player released, Linux only for ow
What's the added value of having the problem in 64bits? Better to concentrate on a Flash Player performance improvement on the Mac to decrease the CPU usages in such manner the fans ain't sounding like a flying by airplane :)
Re: [flexcoders] Re: Flex and Screen Readers
Hi Gus, MacOSX comes with a nice screen reader out of the box. It's called VoiceOver. Yours, Weyert de Boer
Re: [flexcoders] Re: Do you use a Mac?
Well, Leopard Server runs in a virtual machine. It's supported by VMWare Fusion and Parallels under OSX. Fotis Chatzinikos wrote: Are you sure that you cannot run Leopard (MacOSX) on a vm?
Re: [flexcoders] Flex and Screen Readers
Hey Gus, This page has some interesting information about this combination: Yours, Weyert de Boer
Re: [flexcoders] Do you use a Mac?
Then there's the free(as in beer) Apple Developer Tools, which you need to install, but include: XCode and all the other good stuff it brings, including WebObjects, IPhone SDK, general Cocoa SDK tools, various profilers etc etc. the list just goes on an on.. In my experience IDEs like
Re: [flexcoders] Do you use a Mac?
I have been moving towards OSX three years ago. Only I have the feeling you replacing the BSOD with annoying rotating beachballs. You are having a lot of beachballs under OSX sometimes at odd times. Anyways, beside of things like Disco, QuickSilver and TextMate are nice applications which ain't
Re: [flexcoders] form to pdf
Hi Gustavo, You could consider using the AlivePDF! solution which allows you to generate PDF documents from inside Flash. Have a look at this example too: Yours, Weyert de Boer
Re: [flexcoders] Re: Can AIR call local DLLs?
You can use AlivePDF to generate PDF files.
Re: [flexcoders] BSS is Looking someone
Does it really has to be PHP? Not like .NET, Delphi or Ruby (RoR)? PHP is just driving me insane.
[flexcoders] Custom itemRenderer for Tree
can make this using MXML? Yours, Weyert de Boer
Re: [flexcoders] Re: Custom itemRenderer for Tree
Sorry, the code was I was using can be found below now. I am using: ?xml version=1.0 encoding=utf-8? mx:VBox xmlns:mx=; xmlns:degrafa=com.degrafa.* xmlns:paint=com.degrafa.paint.* xmlns:geometry=com.degrafa.geometry.* height=22 degrafa:Surface
[flexcoders] Problem with the DataGrid column keeps switching location
Hello, I have been working on a solution to make some sort of property grid in Flex and I have been using the DataGrid. Only I am experiencing some problems with it. The datagrid consists of two columns called name and value. The second value is a column where I used a custom itemRenderer to
Re: [flexcoders] Problem with the DataGrid column keeps switching location
The problem is shown when you click a few times on the buttons.
Re: [flexcoders] Best way to deliver data via Rails
You can consider using RubyAMF.
[flexcoders] Flex 4 and Flex 3 mixable?
Hello! I am currently working on a project where one third-party component is written for Flex 3. Only I am curious if it's possible to combine Flex 3 parts with Flex 4. Anyone know if this possible or having experience with this? Yours, Weyert de Boer
Re: [flexcoders] Re: Open pdf from ByteArray in new browser window (not AIR)
You can find information about the Excel file spec at Microsoft.com (msdn probably) or at openoffice:
[flexcoders] Template Engine for Flash?
Hello! Does anyone know some simple text-based template engine for Flash? I would like to generate some text files based on code snippets/templates in my Flash project. Only I am curious if anyone know some simple template engine written in ActionScript which might support token replacement
Re: [flexcoders] Re: What is Flexcoders?
Did you guys already invade New Zealand? As announced last year?
Re: [flexcoders] Results of Direct Phone Call to Scene7.com
No troll. Just commenting on the earlier Apple=quality statement. Sent from my iPhone On Aug 29, 2008, at 4:59 AM, Howard Fore [EMAIL PROTECTED] wrote: Please don't feed the trolls... On Thu, Aug 28, 2008 at 10:04 PM, Weyert de Boer [EMAIL PROTECTED] wrote: Apple don't even want to fix
Re: [flexcoders] Results of Direct Phone Call to Scene7.com
Apple don't even want to fix the keyboard lightning that there tech people broke when they replaced the display for the second time. Warranty or AppleCare is a bitch. Even Acer or Dell are better in that regard.
Re: [flexcoders] Changing itemEditors of a DataGrid a row basis
, Weyert de Boer
Re: [flexcoders] Changing itemEditors of a DataGrid a row basis
Setting the properties property of the ClassFactory also worked: properties.dataProvider = new Array( 1, 2, 3, 4, 5 ); Subclass ComboBox and preset its dataprovider *From:* flexcoders@yahoogroups.com [mailto:[EMAIL PROTECTED] *On Behalf Of *Weyert de Boer *Sent
[flexcoders] Changing itemEditors of a DataGrid a row basis
Hello! I am trying to make a property grid as seen in other applications in Flex. Only I am having trouble to change the itemEditors use by the current selected row. I am having the following ArrayCollection which consists of the class: public class PropertyItem { [Bindable] public
Re: [flexcoders] Changing itemEditors of a DataGrid a row basis
Hi Alex! Thanks I will give it a shot :)
Re: [flexcoders] Custom Container component and drag 'n drop support
Thanks! Somehow it worked after I woke up this morning. Now I only need to battle against clipping problems :)
[flexcoders] Custom Container component and drag 'n drop support
I have been working on a custom component which descends from the Container class. Now I am trying to implement drag 'n drop support. I have enabled dragEnabled-property on my other List-component in the MXML file. I also have add the following listeners in my custom component:
Re: [flexcoders] Gumbo on AIR
Nope, Gumbo requires the Flash Player 10. Currently, there is no AIR version which supports this player.
Re: [flexcoders] Custom component and clipping
Alex Harui wrote: I’d use mask or scrollrect Thanks. I will give it a try! -- Flexcoders Mailing List FAQ:
[flexcoders] Custom component and clipping
it's something common, though. Yours, Weyert de Boer
[flexcoders] Child items in custom component
what should be done to get this working? Yours, Weyert de Boer
Re: [flexcoders] Child items in custom component
, this is not the case. Does anyone know what should be done to get this working? Yours, Weyert de Boer
Re: [flexcoders] Child items in custom component
Thanks! I will give it a shot tomorrow. Sounds, logical somehow. I suppose it should work.
[flexcoders] Flex Builder and Ant
Does anyone know how I can combine the use of Flex Builder and Ant to compile Flex/AIR projects?
Re: [flexcoders] Re: Flex Builder and Ant
Hi! I have received a working apache build script from James Ward via the ApolloCoders list. Now I will see if I can somehow combine Flex Builder and this build script together. Yes, mainly for the profiler and the debugging.
Source: https://www.mail-archive.com/search?l=flexcoders%40yahoogroups.com&q=from:%22Weyert+de+Boer%22&o=newest
OLE error code 0x80040154. Class is not registered. OLE Object is being ignored
Question
Answers
All replies
I opened a sample with an OLE control in VFP 9.0 (\samples\solution\solution.scx) and right-clicked the control to open its builder. The error notice: There are no registered builders of this type.
The builder works fine except for ActiveX controls (OleControl), which give this error!
The Solution sample folder of VFP 9 is installed in the path: drive\VF9\samples\solution\solution.scx
Please assist me with this problem: which file (.ocx or .dll) do I register, and how?
Thanks!
- Which:
USE "C:\Program Files\MSVFP9\Samples\Solution\solution.scx"
You see - in the field Ole2 some rows have "Memo" with upper "M". Open them with double-click (or just point with mouse).
You will see: "OLEObject = C:\WINNT\System32\MSCOMCTL.OCX"
How to register:
Start - Run:
regsvr32 "C:\WINNT\System32\MSCOMCTL.OCX" (or some other path).
Hi All,
I am working on a VB.NET application and encountered this unhandled exception in my application.
The error occurred creating the form. The error is: Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)).
I tried to use the regsvr32 command to register the components (any .dlb, .ocx or unmanaged DLL in the folder). I put my program on D:\ and ran regsvr32.exe from a command prompt:
Same for the rest of the DLL files (AxMSComCtl2, AxMSCommLib, AxMSFlexGridLib, MSComCtl2, MSCommLib and MSFlexGridLib).
I get the same error. E.g. it prompts: LoadLibrary("AxMSComCtl2.dll") failed - The specified module could not be found.
Currently I am modifying existing VB6 code to use the Microsoft FlexGrid control by putting "Imports MSFlexGridLib" into my IniFile form, using MS VS 2008. The properties indicate the file type as ActiveX.
I also observe that, e.g., under AxInterop.MSComCtl2, the namespace AxMSComCtl2 is a member of AxInterop.MSComCtl2.
From what I have found on the internet so far, it seems I need to install VB6 to be able to install or register MSFlexGridLib. Is that true? Or is there another method I can use to register it?
Hoping to try all methods to get me moving. Thanks.
Hi all,
I reinstalled the VS 2008 application on another laptop and went back to the older version of the project code to start developing again. I have no idea why it works now, with no errors or warnings appearing...
Lesson learned: always back up your project, and create a brand new setup project to rebuild it when you start developing new code.
Anyway, thanks for all the advice and comments given.
Source: https://social.msdn.microsoft.com/Forums/en-US/35a85fea-bbf6-422a-8346-204a0f7d16d0/ole-error-code-0x80040154-class-is-not-registered-ole-object-is-being-ignored?forum=visualfoxprogeneral
How to set QWebView content from QHelpEngineCore to enable CSS links?
As the title suggests, I am setting the content of a QWebView from QHelpEngineCore content. The problem is that in order to load the data into QWebView I have to get the content from QHelpEngine::fileData, which returns the HTML but does not give QWebView a valid URL to base further loads on.
The question is: how to either get URLs from QHelpEngineCore that QWebView can use, or how to set all the data on QWebView so that it will display properly (including CSS files referenced in a <link/>), or maybe a third option?
Thanks for the help.
Let me ask this another way. Is there any way to intercept url's QWebView or QWebPage is trying to load and set the content the link would have loaded manually?
Another way that would work if I could figure out how to do it is for me to manually add the css file to a page; while not ideal this would be acceptable. .. anyone? :)
Thanks.
Hi,
how do you set the HTML content for the QWebView? If you're using QWebView::setHtml(QString html, QUrl url), you just have to provide the URL you used for QHelpEngineCore::fileData(QUrl url).
Or if the css file is not at the same location as the html file just extract the file location as specified in the html and use this location. To get the location of the css as specified in the html you could do this:
@
QStringList cssFiles;
QWebElementCollection styleSheetLink = m_MyHtmlView()->page()->mainFrame()->findAllElements("link[rel=stylesheet]");
for(int i = 0; i < styleSheetLink.count(); ++i)
{
cssFiles << styleSheetLink.at(i).attribute("href");
}@
I tried that, unfortunately QWebView does not (as far as I can tell) recognize qthelp:// url schemes.
Sorry, I'm not familiar with the mechanics or use of the HelpEngine. How exactly do the returned URLs look? In QHelpEngineCore's documentation it says:
I don't know if that might be of any help.
I can load URLs, no problem. The problem is the URLs within the page itself: QWebView does not recognize QtHelp URLs, and there appears to be no way to intercept the URLs and load the data yourself to pass on to the web view, as you can with QTextBrowser.
If you could read the content of the css files you could then create a new file at any location insert the css content and set your own Url for the WebView. Bit of an overkill but I can't think of anything else. Sorry.
Unfortunately that is what I ended up doing, while it is working ok for now since I am only looking for a single css file; this approach will limit me when it comes to inserting other things, such as Images. :(
How does QHelpEngineCore store the file connected to a Url internally? Are the files stored in a Resource file, or are they actually stored at a specific location like maybe "..\QHelpEngineCoreInstallationDir\files\someFile.css" so you could map the QUrl provided by QHelpEngineCore to a representation you can use for your WebView.
They are stored in a sqlite database.
Can i bring this topic back to life?
I thought that with this code it would work:
@QByteArray helpData = m_helpEngine->fileData(url);
QByteArray styles = m_helpEngine->fileData(QUrl("qthelp://com.mynamespace/doc/styles/style.css"));
if(!helpData.isEmpty())
{
ui->webView->setStyleSheet(styles);
ui->webView->setHtml(helpData);
}@
Even using CSS inline in the HTML file isn't working.
You would think the code above should give me a decent webpage with css.
But no, even my second option isn't showing me anything.
the "setHtml(..)": function needs the Html as QString and to be able to load all the resources linked in the Html (e.g. the style sheet) you must provide the BaseUrl. If your Html has no stylesheet resource defined (<link rel="stylesheet" type="text/css" href="myStyleSheet.css">) you may need to add that to make it work.
Also the "setStyleSheet(..)": function is inherited from QWidget and is used to set the style for the widget (your WebView) and not the Html content of the WebView.
Its just a shot in the dark and I haven't tried it myself, but maybe this will work:
@
QString myHtmlContent = QString( m_helpEngine->fileData(url) );
QUrl myBaseUrl = QUrl("qthelp://com.mynamespace/doc/styles")
if(!myHtmlContent.isEmpty())
{
ui->webView->setHtml(myHtmlContent, myBaseUrl);
}
@
It is showing me more (e.g. an icon where the image has to be) still no styles though.
This is what is in my head-tag in the html-file:
<link href="../style/styles.css" rel="stylesheet" type="text/css">
my map-structure like this:
doc
- images
- style
styles.css
- ...
so the baseUrl should be right with:
@QUrl("qthelp://com.mynamespace/doc");@
But it's doing nothing. Are there maybe things QWebView can't display? Because I even have problems loading a simple online webpage :s
If you have problems displaying images, this might result from missing plugins. You need to have the following structure in the directory your running your code from: plugins\imageformats\qjpeg4.dll ... and all the other plugins you need for the different image datatypes. You can find the plugins folder in your Qt folder.
I really don't know if the combination of qthelp URLs and HTML can work. Maybe you can find out more by using the WebInspector on your WebPage.
I don't think I'm understanding you well.
I know where to find the plugins (and they are there), but do I need to put the DLLs in my debug/release folder or in my main project's folder? E.g. myapplication\debug\plugins\imageformats... or myapplication\plugins\imageformats...
It's a pity you can't debug Assistant, because that is almost what I need: what it displays.
But it doesn't seem to work in my QWebView.
The idea with the plugins was just a guess. I think enabling the QWebInspector is the first thing you should try.
And how can I use that? I never worked with QWebview (apart from this subject), let alone that I worked with QWebInspector.
I've tried to create a QWebPage like:
@QWebPage *page = ui->webView->page();@
and a QWebinspector like so:
@QWebInspector *inspector = new QWebInspector;@
When I try to set my page, nothing happens; my QWebPage is empty...
I haven't used the WebInspector myself but the documentation of it is pretty much selfexplanatory. After you have set the Html content for your WebView, you just need to get the WebPage pointer from the WebView, create the WebInspector and set the page for the WebInspector.
@
ui->yourWebView->setHtml(yourContent, yourBaseUrl);
QWebInspector *inspector = new QWebInspector(ui->yourWebView);
inspector->setPage(ui->yourWebView->page());
inspector->setVisible(true);
@
If this is not working, maybe you also need to set some settings to make it work (QWebSettings::DeveloperExtrasEnabled). Have a look at QWebSettings; this class also has a function called setUserStyleSheetUrl(..).
Sorry if I have to bother you again :s
But nothing is happening. It's not even showing the inspector anymore.
And this is what i'm trying to do:
@QString myHtmlContent = QString(m_helpEngine->fileData(url));
QUrl myBaseUrl = QUrl("qthelp://com.mynamespace/doc");
if(!myHtmlContent.isEmpty())
{
// ui->webView->setContent(helpData, QString(), myBaseUrl);
ui->webView->setHtml(myHtmlContent, myBaseUrl);
}
//QWebPage *page = ;
ui->webView->page()->settings()->DeveloperExtrasEnabled;
QWebInspector *inspector = new QWebInspector(ui->webView);
inspector->setPage(ui->webView->page());
inspector->setVisible(true);@
[EDIT] It works: @ui->webView->page()->settings()->setAttribute(QWebSettings::DeveloperExtrasEnabled, true);@
Do the style sheets and images also work now ?
Ok now that I got that, nothing seems to be loaded. No images, no stylesheets...
As you can see, no content is loaded (I think).
Maybe you can first test with a local HTML file, CSS file, and image, get that working, and then proceed with the qthelp URLs, just to pinpoint the error source.
Edit: Have you checked the QString holding the HTML content?
That indeed could be a good idea, i'll keep you posted
With local files it works fine, it shows my pictures and the css i made.
I have checked the QString, it's filled with the code I programmed and stored in the QHelpEngine.
Could there be something wrong with my qch, qhc, qhcp or qhp files?
I took as an example the code from "Using Qt Assistant as a Custom Help Viewer".
But I don't know if that was the best idea.
I'm really sorry, but I have no experience using the QHelpEngine and thus am unable to help you with anything related to this component. I fear the problem is that the qthelp URLs cannot be used with HTML, but this is just a guess. Is there maybe another way to display the help files other than using QtWebKit?
No problem, maybe I better open a new topic to bring this thing up:) But thanks for the help! You did a great job helping me understand some handy tools ;)
There is a way, using Assistant, but the problem is that the user can browse there (even when you turn this function off). We don't want them to browse, only to view the pages that are related to the topic.
Source: https://forum.qt.io/topic/10329/how-to-set-qwebview-content-from-qhelpenginecore-to-enable-css-links
WsReadQualifiedName function
Reads a qualified name and separates it into its prefix, localName and namespace based on the current namespace scope of the XML_READER. If the ns parameter is specified, then the namespace that the prefix is bound to will be returned, or WS_E_INVALID_FORMAT will be returned. (See Windows Web Services Return Values.) The strings are placed in the specified heap.
Syntax
Parameters
- reader [in]
The reader which should read the qualified name.
- heap [in]
The heap on which the resulting strings should be allocated.
- prefix
The prefix of the qualified name is returned here.
- localName [out]
The localName of the qualified name is returned here.
- ns
The namespace to which the qualified name is bound is returned here.
- error [in, optional]
If the localName is missing the function will return WS_E_INVALID_FORMAT. If the ns parameter is specified, but the prefix is not bound to a namespace, WS_E_INVALID_FORMAT will be returned.
Return value
This function can return one of these values.
Requirements
Source: https://msdn.microsoft.com/en-us/library/windows/apps/dd430597.aspx
The polymorphic algorithms described in this section are pieces of reusable functionality provided by the Java 2 SDK. All of them come from the Collections class, and all take the form of static methods whose first argument is the collection on which the operation is to be performed. The great majority of the algorithms provided by the Java platform operate on List objects, but a couple of them (min and max) operate on arbitrary Collection objects. This section describes the following algorithms:
Sorting
The sort algorithm reorders a List so that its elements are in ascending order according to an ordering relation. Two forms of the operation are provided. The simple form takes a List and sorts it according to its elements' natural ordering. If you're unfamiliar with the concept of natural ordering, read the section Object Ordering (page 496).
The sort operation uses a slightly optimized merge sort algorithm. This algorithm is fast (guaranteed n log(n) performance) and stable (equal elements are not reordered).
Here's a trivial program that prints out its arguments in lexicographic (alphabetical) order:
    import java.util.*;

    public class Sort {
        public static void main(String args[]) {
            List l = Arrays.asList(args);
            Collections.sort(l);
            System.out.println(l);
        }
    }

Suppose that you wanted to print out the permutation groups from our earlier example in reverse order of size, largest permutation group first. The following example shows you how to achieve this with the help of the second form of the sort method.
Recall that the permutation groups are stored as values in a Map, in the form of List objects. The revised printing code iterates through the Map's values view, putting every List that passes the minimum-size test into a List of Lists. Then the code sorts this List, using a Comparator that expects List objects, and implements reverse-size ordering. Finally, the code iterates through the sorted List, printing its elements (the permutation groups). The following code replaces the printing code at the end of Perm's main method:
    // Make a List of all permutation groups above size threshold.
    List winners = new ArrayList();
    for (Iterator i = m.values().iterator(); i.hasNext(); ) {
        List l = (List) i.next();
        if (l.size() >= minGroupSize)
            winners.add(l);
    }

    // Sort permutation groups according to size (largest first).
    Collections.sort(winners, new Comparator() {
        public int compare(Object o1, Object o2) {
            return ((List)o2).size() - ((List)o1).size();
        }
    });

    // Print permutation groups.
    for (Iterator i = winners.iterator(); i.hasNext(); )
        System.out.println(i.next());
Running the program on the same dictionary as in the section Map Interface (page 487), with the same minimum permutation group size, prints the permutation groups in reverse order of size, largest first.
Shuffling
The shuffle algorithm does the opposite of what sort does, destroying any trace of order that may have been present in a List. That is to say, it reorders the List based on input from a source of randomness, such that all possible permutations occur with equal likelihood (assuming a fair source of randomness). This operation has two forms. The first takes a List and uses a default source of randomness. The second requires the caller to provide a Random object to use as a source of randomness. The code for this algorithm is used as an example in the section List Interface (page 479).
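Both forms described above can be seen in a small self-contained sketch (the list contents here are arbitrary, not from the book):

```java
import java.util.*;

public class ShuffleDemo {
    public static void main(String[] args) {
        List<Integer> l = new ArrayList<>(Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8));

        // Form 1: default source of randomness.
        Collections.shuffle(l);

        // Form 2: caller-supplied Random, useful for reproducible shuffles
        // (the same seed always produces the same permutation).
        Collections.shuffle(l, new Random(42));

        System.out.println(l);
    }
}
```

Whichever form is used, the result is a permutation of the original list; only the order changes.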
Routine Data Manipulation
The Collections class provides three algorithms for doing routine data manipulation on List objects. All these algorithms are pretty straightforward.
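The three algorithms are not named in this excerpt (only fill is referenced later), but the routine List manipulations in the Collections class include reverse, fill, and copy. A brief sketch:

```java
import java.util.*;

public class RoutineOps {
    public static void main(String[] args) {
        List<String> l = new ArrayList<>(Arrays.asList("a", "b", "c"));

        Collections.reverse(l);          // l is now [c, b, a]
        System.out.println(l);

        // copy requires dest to be at least as long as the source.
        List<String> dest = new ArrayList<>(Arrays.asList("x", "y", "z"));
        Collections.copy(dest, l);       // dest is now [c, b, a]
        System.out.println(dest);

        Collections.fill(l, "-");        // every element becomes "-"
        System.out.println(l);
    }
}
```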
Searching
The binary search algorithm searches for a specified element in a sorted List. This algorithm has two forms. The first takes a List and an element to search for (the "search key"). This form assumes that the List is sorted into ascending order according to the natural ordering of its elements. The second form takes a Comparator in addition to the List and the search key, and assumes that the List is sorted into ascending order according to that Comparator. The return value is the index of the element if it is found; otherwise it is a negative number encoding the insertion point, (-(insertion point) - 1). This makes it easy to keep a List sorted as you insert new keys:

    int pos = Collections.binarySearch(l, key);
    if (pos < 0)
        l.add(-pos-1, key);
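The keep-it-sorted idiom can be wrapped into a runnable sketch (class and method names here are illustrative):

```java
import java.util.*;

public class SearchDemo {
    // Insert key into an already-sorted list, keeping it sorted, using the
    // negative "insertion point" encoding returned by binarySearch.
    static void insertSorted(List<Integer> l, int key) {
        int pos = Collections.binarySearch(l, key);
        if (pos < 0)
            l.add(-pos - 1, key);
    }

    public static void main(String[] args) {
        List<Integer> l = new ArrayList<>(Arrays.asList(10, 20, 40));
        insertSorted(l, 30);
        System.out.println(l);  // [10, 20, 30, 40]
    }
}
```

Note that binarySearch's result is only meaningful if the list really is sorted; on an unsorted list the result is undefined.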
Finding Extreme Values
The min and the max algorithms return, respectively, the minimum and maximum element contained in a specified Collection. Both of these operations come in two forms. The simple form takes only a Collection and returns the minimum (or maximum) element according to the elements' natural ordering. The second form takes a Comparator in addition to the Collection and returns the minimum (or maximum) element according to the specified Comparator.
These are the only algorithms the Java platform provides that work on arbitrary Collection objects, as opposed to List objects. Like the fill algorithm, these algorithms are quite straightforward to implement and are included in the Java platform solely as a convenience to programmers.
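A short sketch of both forms of min and max (the strings are arbitrary sample data):

```java
import java.util.*;

public class ExtremesDemo {
    public static void main(String[] args) {
        Collection<String> c = Arrays.asList("pear", "fig", "apple");

        // Simple form: natural (lexicographic) ordering of the elements.
        System.out.println(Collections.min(c));  // apple
        System.out.println(Collections.max(c));  // pear

        // Comparator form: shortest and longest string instead.
        Comparator<String> byLength = (a, b) -> a.length() - b.length();
        System.out.println(Collections.min(c, byLength));  // fig
        System.out.println(Collections.max(c, byLength));  // apple
    }
}
```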
Source: https://flylib.com/books/en/2.33.1/algorithms.html
Export internal table to Excel with XLSX Workbench
NOTE: Before beginning, the XLSX Workbench functionality must be available in the your system.
Let’s use the standard demo report BCALV_GRID_DEMO, which is available in every system (the report demonstrates how to use the ALV Grid control). We will export its output table to an Excel-based form.
1 PREPARE A PRINTING PROGRAM.
1.1 Copy standard demo report BCALV_GRID_DEMO to the customer namespace, for example Z_BCALV_GRID_DEMO :
1.2 In the copied report, add new button to GUI-status ‘MAIN100’:
1.3 In the report line number 40, insert next code to processing the new OK-code :
1.4 Activate objects:
2 PREPARE A FORM.
2.1 Launch XLSX Workbench, and in the popup window specify a form name TEST_GRID , and then press the button «Create»:
Empty form will be displayed:
2.2 Push the button to save the form.
2.3 Assign context FLIGHTTAB to the form:
Herewith, you will be prompted to create a form’s structure automatically (based on context):
We will create the form structure manually, therefore we should press the corresponding button.
2.4 Add a «Grid» component into the Sheet:
2.5 Make markup in the Excel template:
2.6 Select the «Grid» component node and go to the Properties tab:
- Select the marked cell range in the Excel template and press the button located in the item «Area in the template».
- Press the button in the item «Value» and pick the context table (FLIGHTTAB) from the popup list.
- Press the button in the item «Layout options» and choose the columns for output.
2.7 Activate the form by pressing the activation button.
3 EXECUTION.
Run your report Z_BCALV_GRID_DEMO; ALV-grid will be displayed :
Press the button to export the grid to the Excel form:
Looks great. Hope to have some time soon an test it myself. Thank you.
Igor - you are just amazing..
Thank you, Raghavendra
Great job!
It's the greatest open source project in ABAP environment.
Thanks for your share.
Hailong, thank You!
Source: https://blogs.sap.com/2014/11/20/export-internal-table-to-excel-with-xlsx-workbench/
Originally Posted by fifo_thekid
...
5- Called the function using:
Code:
NASPlatformUtil:: openUrl("");
Does the space between :: and openUrl really exist? What is NASPlatformUtil, a class or a namespace? If it is a class, is openUrl a static method?
NASPlatformUtil:: openUrl("");
Victor Nijegorodov
1- No, it doesn't exist
2- It's a class
3- Yes, it is a static method
Last edited by fifo_thekid; January 18th, 2013 at 01:49 AM.
Originally Posted by fifo_thekid
1- No, it doesn't exist
2- It's a class
3- Yes, it is a static method
That error looks like a linker error, not a compiler error. Is it a linker error?
I can right click on the function, click Open Declaration and get it without any problem
That doesn't prove anything. That function could have been eliminated from the final static library when the library was being built.
Regards,
Paul McKenzie
Last edited by Paul McKenzie; January 18th, 2013 at 05:25 AM.
Source: http://forums.codeguru.com/showthread.php?533043-Windows-strcpy-overflow-question&goto=nextnewest
Difference between revisions of "ListT done right alternative"
From HaskellWiki
Latest revision as of 06:31, 12 May 2015
The following is an alternative implementation for ListT done right. You will find a similar implementation in the "list-t" package.
    import Control.Monad.State
    import Control.Monad.Reader
    import Control.Monad.Error
    import Control.Monad.Cont
    import Control.Arrow

    newtype ListT m a = ListT { runListT :: m (Maybe (a, ListT m a)) }

    foldListT :: Monad m => (a -> m b -> m b) -> m b -> ListT m a -> m b
    foldListT c n (ListT m) = maybe n (\(x,l) -> c x (foldListT c n l)) =<< m

    -- In ListT from Control.Monad this one is the data constructor ListT,
    -- so sadly, this code can't be a drop-in replacement.
    liftList :: Monad m => [a] -> ListT m a
    liftList []     = ListT $ return Nothing
    liftList (x:xs) = ListT . return $ Just (x, liftList xs)

    instance Functor m => Functor (ListT m) where
        fmap f (ListT m) = ListT $ fmap (fmap $ f *** fmap f) m

    instance (Monad m) => Monad (ListT m) where
        return x = ListT . return $ Just (x, mzero)
        m >>= f  = ListT $ foldListT (\x l -> runListT $ f x `mplus` ListT l) (return Nothing) m

    instance MonadTrans ListT where
        lift = ListT . liftM (\x -> Just (x, mzero))

    instance Monad m => MonadPlus (ListT m) where
        mzero = ListT $ return Nothing
        ListT m1 `mplus` ListT m2 =
            ListT $ maybe m2 (return . Just . second (`mplus` ListT m2)) =<< m1

    -- These things typecheck, but I haven't made sure what they do is sensible.
    instance (MonadCont m) => MonadCont (ListT m) where
        callCC f = ListT $ callCC $ \c ->
            runListT . f $ \a -> ListT . c $ Just (a, ListT $ return Nothing)

    instance (MonadError e m) => MonadError e (ListT m) where
        throwError = lift . throwError
        -- I can't really decide between those two possible implementations.
        -- The first one is more like the IO monad works, the second one catches
        -- all possible errors in the list.
        -- ListT m `catchError` h = ListT $ m `catchError` \e -> runListT (h e)
        (m :: ListT m a) `catchError` h = deepCatch m
          where
            deepCatch :: ListT m a -> ListT m a
            deepCatch (ListT xs) = ListT $ liftM (fmap $ second deepCatch) xs
                                     `catchError` \e -> runListT (h e)
Source: https://wiki.haskell.org/index.php?title=ListT_done_right_alternative&diff=59731
Before we append text to an existing file, we assume we have a file named test.txt in our src folder.
Here's the content of test.txt
This is a Test file.
Example 1: Append text to existing file
    import java.io.IOException
    import java.nio.file.Files
    import java.nio.file.Paths
    import java.nio.file.StandardOpenOption

    fun main(args: Array<String>) {
        val path = System.getProperty("user.dir") + "\\src\\test.txt"
        val text = "Added text"

        try {
            Files.write(Paths.get(path), text.toByteArray(), StandardOpenOption.APPEND)
        } catch (e: IOException) {
        }
    }
When you run the program, the test.txt file now contains:
This is a Test file.Added text
In the above program, we use System's user.dir property to get the current directory, stored in the variable path. Check Kotlin Program to get the current directory for more information.

Likewise, the text to be added is stored in the variable text. Then, inside a try-catch block, we use Files' write() method to append text to the existing file.

The write() method takes the path of the given file, the text to be written, and how the file should be opened for writing. In our case, we used the APPEND option for writing.

Since the write() method may throw an IOException, we use a try-catch block to catch the exception properly.
Example 2: Append text to an existing file using FileWriter
    import java.io.FileWriter
    import java.io.IOException

    fun main(args: Array<String>) {
        val path = System.getProperty("user.dir") + "\\src\\test.txt"
        val text = "Added text"

        try {
            val fw = FileWriter(path, true)
            fw.write(text)
            fw.close()
        } catch (e: IOException) {
        }
    }
The output of the program is the same as in Example 1, but here the append mode is selected by passing true as FileWriter's second constructor argument.
Here's the equivalent Java code: Java program to append text to an existing file.
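The linked Java program isn't reproduced on this page; a comparable Java sketch (not necessarily identical to the linked one) uses the same Files.write with the APPEND option. The file path here is illustrative:

```java
import java.io.IOException;
import java.nio.file.*;

public class AppendDemo {
    // APPEND adds to the end of the file instead of overwriting it.
    static void appendText(Path path, String text) throws IOException {
        Files.write(path, text.getBytes(), StandardOpenOption.APPEND);
    }

    public static void main(String[] args) throws IOException {
        Path path = Paths.get("test.txt");  // illustrative path
        Files.write(path, "This is a Test file.".getBytes());
        appendText(path, "Added text");
        System.out.println(new String(Files.readAllBytes(path)));
    }
}
```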
Source: https://www.programiz.com/kotlin-programming/examples/append-text-existing-file
Support Forum » Realtimeclock1.c program fails to display result on LCD with ATmega328P
The interrupt handler sei() seems to disable the LCD display when using the ATmega328P. I have just changed over to the ATmega328P from the ATmega168. I have tested most of the basic programs that came with the NerdKit and they work, but the display failed to work in a program that I am writing to test the capacity of rechargeable batteries. After much troubleshooting I have discovered that the failure comes from the code that I am using from the realtimeclock1.c program. So I loaded realtimeclock1.c into the 328 and it also fails to display a result on the LCD. If I comment out the sei() about six lines from the end of the program, the display is active and displays zero. Of course the program will not function as intended with that line commented out.
It seems that somehow sei() disables the LCD. The display remains completely blank if sei() is not commented out.
What can I do to make this program work with the ATmega328P?
"LCD with sei() commented pic"
// realtimeclock1.c
// for NerdKits with ATmega328p
#include "../libnerdkits/io_328p.h"
Hi vgphotog,
Change line 56
SIGNAL(SIG_OUTPUT_COMPARE0A)
TO:
SIGNAL(TIMER0_COMPA_vect)
I just had that problem about an hour ago ,too.
-Dan
Dan, thank you very much, that fixed it. Where would I have been able to find this information, and possibly other fixes for the 328?
Well, Ralph and I had some dialog a while back in which we discovered this same issue: Forum Topic - Nerdy Stopwatch / Kitchen Timer.
Here I posted my modified version of the realtimeclock.c project... Trouble was, the code worked for me but not Ralph.
Noter figured it out for us and offered the same fix that Dan provided you. And yes, part of the issue involved the fact that I was using the '168, and Ralph was using the '328P...
Then Rick posted some additional information to the thread. He suggested that we replace
with
ISR(TIMER0_COMPA_vect)
What we learned from Rick was that "The SIGNAL syntax has been depreciated ( LINK ) and replaced by ISR."
Rick also stated "that is also why the vector name changed. The reason SIGNAL still works is that interrupt.h still handles both but it may not in the future."
Rick referenced this snippet from interrupt.h -
/** \def SIGNAL(vector)
\ingroup avr_interrupts
\code #include <avr/interrupt.h> \endcode
Introduces an interrupt handler function that runs with global interrupts
initially disabled.
This is the same as the ISR macro without optional attributes.
\deprecated Do not use SIGNAL() in new code. Use ISR() instead.
Please check out the "Nerdy Stopwatch / Kitchen Timer" thread for an interesting and more detailed explanation of this topic!
Source: http://www.nerdkits.com/forum/thread/1785/
Tags: classic followup guest nostalgia oldschool toolbox.
Includes and constants and globals, oh my
In the Toolbox days, there were no all-encompassing headers where you do one #import and you have everything you need. Right up until Carbon, this simple truth remained as a holdover from the times when compilers took a major speed penalty for every single header they had to read. Nor did #import exist. Just #include, with all the slings and arrows that flesh was heir to. At least the names were usually straightforward.
    #include <Dialogs.h>     // Dialog Manager (for InitDialogs())
    #include <Fonts.h>       // Font Manager (for InitFonts())
    #include <MacWindows.h>  // Window Manager
    #include <Menus.h>       // Menu Manager
    #include <QuickDraw.h>   // QuickDraw
    #include <TextEdit.h>    // TextEdit (for TEInit())
    #include <Controls.h>    // Control Manager
There also weren't nice string names and outlet connections from nibs to give you access to your resources, so you were best off defining a constant for every resource in your application to make your code even remotely readable. Resource IDs for applications started from 128, and you had better believe following that rule was important. Because resource files were opened as overlays atop each other, the system's resources sat right underneath yours, and if you overrode a system resource by defining another with the same type and ID, chaos could and generally did ensue.
    enum {
        kMenuBarID = 128,
        kAppleMenu = 128,
        kFileMenu  = 129,
        kEditMenu  = 130,
        kWindowID  = 128
    };
Then you had global variables. Generally frowned upon in any modern program of today, they were all but a requirement for Toolbox development when not using C++ or some other object-oriented language. There was just no other efficient way to get data where it was going. Certainly the global variable for exiting the event loop was a lot better than returning Boolean from every single event handling function in your code on the off chance an action might lead to a Quit. Not to mention your aevt/quit Apple Event handler (more on that below).
static AEEventHandlerUPP oappUPP, odocUPP, pdocUPP, quitUPP; static Boolean gDone = false;
Alerts and StandardFile
Check out the fancy function for handling an about box! Later versions of OS 8 and 9 brought a better API for presenting simple alerts, but before that you just defined an ALRT (ALeRT) and DITL (Dialog ITem List) pair which described your about box and displayed them. I already violated my own rule about defining a constant for the resource ID, something that would tend to happen a lot when you needed just one more resource for this one little bit of code.
Alert() and its cousins such as StopAlert() were used quite a bit for such things as error messages as well.
    static void DoAboutCommand(void)
    {
        Alert(128, NULL);
    }
The API for asking the user to select a file from the hard drive went through quite a few evolutions, along with the data type for storing the location of a file. My knowledge only goes back so far as the System 6 APIs, of which SFGetFile() was the relevant member. This returned an SFReply, containing a volume reference number, directory ID, and a 63-character MacRoman-encoded file name string. This was superseded in System 7 with StandardGetFile(), whose StandardFileReply instead contained an FSSpec, which was really just the same three values packaged up as a fancy structure for easier passing around.
In OS 8.1 (or 8.5, I've forgotten which), Navigation Services and the modern File Manager were introduced, which provided a considerably more featured API, an interface whose merits versus the old StandardFile dialogs were debated, and the
FSRef datatype which survived right into some Cocoa code until URLs became the norm for referring to files on disk. A particular quirk of
FSRef was that it could not refer to files which did not exist, prompting many coders to come up with their own solutions for holding either a parent directory
FSRef and file name (an
HFSUniStr255, a brutally contrived data type if ever there was one) or falling back on
FSSpec. The latter was common for its simplicity, making
FSRef a bit of a mess.
I did not use Navigation Services here, as I was trying to stick to System 7 APIs. Instead, I call into StandardFile, telling it to let me pick only files with the type
TEXT, and checking for whether or not the user clicked OK before opening the file (or more exactly, calling the stub function which should have opened it). Note: This is different from the version in the original article, which didn't limit the file type.
Even many Cocoa programmers are still familiar with the
OSType concept. In OS 9 and earlier, files typically did not have extensions, and even where they did, only the most modern versions of OS 9 made significant decisions based on them. UTIs did not exist at all. Instead, files had (and indeed, still have, though they're much rarer now) "creator code"s and "type"s. These were ostensibly four-letter codes uniquely identifying the owner and type of a file, of which all-lowercase codes were reserved by Apple. In truth, they were just 32-bit integers (not strings) whose uniqueness was guaranteed by convention at best, and by nothing most of the time. Collisions in creator codes happened all the time, and there was no standardization of file types either. It was little different than problems with three-letter file extensions, just with a little extra space to work in. UTIs were created specifically to address these problems. I leave it to the reader to decide whether that's met with any success.
Note: The practiced C programmer will wonder how you can have a four-character character literal, as these are not supported by the language specification. The spec technically allows for these literals, but to my knowledge they were only ever widely used in Toolbox and Carbon programming, and have no place in Cocoa. Modern compilers will warn when they're used unless explicitly told not to.
static void DoOpenCommand(void) { StandardFileReply stdReply; SFTypeList theTypeList = { 'TEXT', 0, 0, 0 }; StandardGetFile(NULL, 1, theTypeList, &stdReply); if (stdReply.sfGood) DoOpenDocument(stdReply.sfFile); }
Apple Events
Apple Events were, and still remain to this day, the mechanism by which AppleScript functions. They were also the first real form of direct interprocess communication, an area severely lacking in the Toolbox before their introduction for the primary reason that before MultiFinder and System 7, only one application ever ran at a time. Ever! Desk accessories notwithstanding. Many applications were written with this in mind, and the cooperative multitasking environment provided by System 7 necessitated a lot of fundamental re-engineering for programs which had gotten too aggressive about owning the whole machine.
Apple Events had a "class" and "event code", which were both another use of those 32-bit character literals. They evolved much later into Carbon Events, becoming the primary mechanism by which the OS communicated with applications, but in the early days existed as the only "asynchronous" way for the OS to tell a program much of anything. Though, as they were dispatched via the event loop like everything else, this was little more than a convenient metaphor.
Four "core" events were "required" for every application. I put "required" in quotes because, in practice, applications could get away without even knowing they existed. A more accurate description would be that they were required for any application which claimed to be "high-level event aware" in its
'SIZE' resource (more on that later).
The first required Apple Event was the "open application" event, corresponding more or less to Cocoa's
-applicationDidFinishLaunching: application delegate method. Since this event was not sent if the user had opened the application by double-clicking one of its documents (or later, dragging and dropping a document icon onto the application icon), it was a good place for doing your "default" thing. In this case, I have it do a "new document".
What's this
pascal keyword, you might wonder? This was a requirement for just about any callback the Toolbox might call, and told the compiler, "emit code to make this function comply with Pascal's calling conventions instead of C's." Because the Toolbox itself was written in Pascal (or in conforming assembly language), anything it called had to accept parameters and return values as specified by Pascal, or havoc would ensue. Every Toolbox routine was declared
pascal in the C headers, when it wasn't just a bit of inline 68K machine code.
static pascal OSErr AEOpenApplication(const AppleEvent *theAE, AppleEvent *reply, UInt32 refCon) { DoNewCommand(); return noErr; }
For the "open documents" Apple Event, corresponding to Cocoa's
-application:openFiles: method, I get the "direct object" parameter as a list from the event, count the items in that list, then loop over the list getting each item as a FSSpec pointer and opening the resulting spec. Then I dispose of the list.
In all of this, I ignore quite a few error checks which would make the code all but unreadable. This is almost entirely pure boilerplate code.
Perhaps you can see how all of this was intended to produce a type-agnostic flexible event system, but in practice all it produced was a complicated batch of APIs which were fiendishly difficult to use correctly in all cases. Maybe the OS sent me an FSRef instead of an FSSpec, and this was an OS version which didn't include a working coercion handler to turn one into the other. Maybe there was no direct object, or the direct object was just a single file. Most of the assumptions this code makes were only safe for System 7.5 and earlier, and check out how many parameters I have to declare junk variables for because
NULL was not a safe thing to pass.
I didn't even think about supporting printing (a truly monstrous task in Toolbox days), so I just return an error from the "print documents" handler. If I'd chosen to allow it, I could have used the same boilerplate from the open documents event to get the list of files. Yes, the exact same code, save that I'd call
DoPrintDocument(file) instead.
static pascal OSErr AEPrintDocuments(const AppleEvent *theAE, AppleEvent *reply, UInt32 refCon) { return errAEEventNotHandled; // Don't support printing }
Finally, the "quit application" event. This is not equivalent to
-applicationWillTerminate:, because if this event handler doesn't do anything, the application won't quit! I exit the event loop from here, of course.
Mind you, I wouldn't get this event if I chose "Quit" from the "File" menu (not the application menu, there was no such thing), unless I dispatched it to myself manually.
static pascal OSErr AEQuitApplication(const AppleEvent *theAE, AppleEvent *reply, UInt32 refCon) { gDone = true; return noErr; }
Initialization
Surely, you must be wondering, at least the system initializes itself on startup. Maybe there's just one function that needs calling, as with
NSApplicationMain() or
RunApplicationEventLoop()!
Sorry, no. In the Toolbox days, you did everything yourself. Let's take a look at the breakdown:
static void Initialize(void) {
First off, maximize my application's zone (this does not mean what you think it does) and ask for more master pointers.
Maximizing my zone means that I'm asking for my heap to be as large as it can be before doing anything else. This saves the Memory Manager from having to grow it later as my allocations increase, a potentially time-consuming task. Why isn't this the default? Old design decisions from the days of tiny memory space.
Master pointers were used to implement handles. A handle is a double-pointer which allows a block of memory to be relocated at need without having to update every reference to that block. This theoretically prevented memory fragmentation, but it could get ugly with the need to lock handles before accessing their contents. There was no reference or depth count on locking, making passing handles around a very fussy business.
MaxApplZone(); MoreMasters(); MoreMasters();
Now, initialize all the basic Toolbox managers. Start with QuickDraw, and give it the address of the
GrafPort passed by the OS. That's the very ancient equivalent of an
NSGraphicsContext, for you Cocoa devs. You'd think if it were passed by the OS, you wouldn't need to give it back to the OS, but the nature of global variables in the Toolbox world made this the easier way. In theory, you could have passed a different port, but I've never seen nor heard of such a thing, and I can't imagine what the use would have been. By the time Carbon rolled around, this entire initialization had been completely done away with.
InitGraf(&qd.thePort);
Initialize the Font Manager, the Window Manager, the Menu Manager, and TextEdit. These all do what they say.
InitFonts(); InitWindows(); InitMenus(); TEInit();
Initialize the Dialog Manager. Pass
NULL so there's no resume procedure. A resume procedure was, pre-System 7, a function that could be called by the OS when the infamous bomb dialog popped up and the user clicked "Resume". Generally, the function would do a jump to
main(), restarting the application. This was the only safe thing to do, though in olden times programs would sometimes actually attempt recovery. This is the equivalent of trying to restore normal operations from a
SIGSEGV handler in Cocoa code. Good luck with that. From System 7 on, it was unsupported to pass anything other than
NULL for this, and I believe that doing otherwise had no effect anyway.
InitDialogs(NULL);
Finally, set the cursor to the standard arrow and show it.
InitCursor();
Now make sure there's no leftover cruft in the event queue from anything the user might have done between launch time and initialization. Computers were slow enough back then to make this worth doing.
FlushEvents(everyEvent, 0);
Install handlers for the four "required" Apple Events. What is a UPP, you might ask? A UPP is a "Universal Procedure Pointer", a convoluted bit of trickery and hacking that the Mixed Mode Manager used to make sure the Toolbox could talk to 68K and PowerPC programs equally regardless of which architecture the program itself or the particular Toolbox routines involved was using. Apple sidestepped the whole problem during the PowerPC to Intel transition by just recompiling everything as a fat binary, a much more viable option in the Panther and Tiger days than in times when hard drive sizes were still measured in the tens of megabytes.
Finally, load up our
'MBAR' resource and set the menu bar to hold the menus it lists. Add the installed desk accessories to the Apple menu, and draw the menu bar.
AppendResMenu() is a bit interesting. The original implementation of this Toolbox call did exactly what it said: Appended the names of every resource of the given type to the given menu. With '
DRVR', this meant each installed desk accessory, as those were stored as
DRVR resources. However, in System 7, it became possible to put other things in the Apple menu. So,
AppendResMenu() had its meaning overloaded so that when drivers were added to an Apple menu, it would instead add all the items that belonged in an Apple menu.
SetMenuBar(GetNewMBar(kMenuBarID)); AppendResMenu(GetMenuHandle(kAppleMenu), 'DRVR' ); DrawMenuBar(); }
Menu commands
Menu commands are passed around in their raw form as 32-bit integers encoding the menu's resource ID in the high 16 bits and the item number in the low 16 bits. To figure out what to do, one extracts these two words and branches based on them. Yes, this means your code is intimately and directly tied to the exact layout of items in your menu resources. You remembered to make a constant for every menu ID and item number, right? No? Wow, are you in for a fun time.
You may notice that for all items in the Apple menu that I didn't define, I call the
OpenDeskAcc() function with the name of the item. This is the second half of the overloaded
AppendResMenu() call from earlier; it was overloaded similarly to open the appropriate Apple menu item rather than just desk accessories.
At the end, regardless of what happened, I call
HiliteMenu(0). Yes, that's right, the Toolbox doesn't un-hilite the menu title for you. You have to do it yourself.
Event handling
There are lots of different kinds of events the OS can tell you about! In Carbon, there were literally hundreds of potential Carbon Events you might be sent. But even in the Toolbox, there were plenty of things that might happen...
static void HandleMouseDown(EventRecord *event) { WindowPartCode part; WindowPtr window;
The mouse was clicked? Well, let's figure out which window it was clicked in and what part of that window.
part = FindWindow(event->where, &window); switch (part) {
It was in the menu bar! Call a Toolbox function to synchronously track the menu drag and return the 32-bit response, and pass that off to our menu command handler.
case inMenuBar: HandleMenuSelection(MenuSelect(event->where)); break;
It was in a window the application doesn't own? Well, let the system handle it, then...
case inSysWindow: SystemClick(event, window); break;
It was in a window the application does own! Pass it off to our own window click handler.
case inContent: DoWindowClick(window, event); break;
In the title bar? Better track that window drag, and make sure to tell the Toolbox what the boundaries of the drag have to be.
case inDrag: DragWindow(window, event->where, &qd.screenBits.bounds); break;
Oh, the window's close box? Track that drag, and if the mouse is released therein, call the close handler.
case inGoAway: if (TrackGoAway(window, event->where)) DoCloseCommand();
Were there other places it could land? Plenty, but we don't have to worry about them in a really simple program. Handling the zoom box, or the expand/collapse box, or in later versions of the OS, the proxy or toolbar icons, was more trouble than it was worth!
} }
Hey, a key was pressed! If the command key was down, treat it as a menu equivalent and handle that.
static void HandleKeyPress(EventRecord *event) { if ((event->modifiers & cmdKey)) HandleMenuSelection(MenuEvent(event)); }
Okay, now for the more general cases of events:
static void HandleEvent(EventRecord *event) { switch (event->what) { case mouseDown: { HandleMouseDown(event); break; } case keyDown:
autoKey? Oh, that's just if a key was held down.
case autoKey: { HandleKeyPress(event); break; }
Update event! That's the result of
[view setNeedsDisplay:YES] on an entire-window scale. If you don't call
BeginUpdate() and
EndUpdate(), even if there's nothing to actually do, you'll just keep getting update events ad infinitum.
case updateEvt: BeginUpdate((WindowPtr)event->message); EndUpdate((WindowPtr)event->message); break;
Hey, the user put a disk in the floppy drive! If something went wrong, you'd better make sure they get the message, and be sure to pick some arbitrary point on the screen to show the error dialog at. Be sure to load up the Disk Initialization package first, and to unload it when you're done. Can't have it taking up memory you're not using.
The Disk Initialization package could be used for manually initializing floppies, verifying the formatting, and wiping them with zeroes. There were also a couple of extended routines in System 7.5+ for formatting disks with different filesystems.
case diskEvt: if ((event->message & 0xFFFF0000) != noErr) { Point pt = { 100, 100 }; DILoad(); DIBadMount(pt, event->message); DIUnload(); } break;
Activate events just mean your window was brought forward or moved backward, usually by your own code.
case activateEvt: break;
OS events had two subtypes: suspend (you're no longer the frontmost application) and resume (you're the frontmost again). There was also a "convert clipboard" flag in System 7+ which told the application to make sure the contents of the clipboard made sense, as they might have changed.
case osEvt: break;
kHighLevelEvent is the generic dispatch point for Apple Events.
AEProcessAppleEvent() calls through to any registered Apple Event handlers.
case kHighLevelEvent: AEProcessAppleEvent(event); break;
A null event just means nothing's happening! Since nothing happens on its own in Toolbox-land, we check whether there's a frontmost window, and if there is, make sure any controls it contains get their share of idle time. This, for example, makes insertion points in text fields blink and indeterminate progress bars do their barbershop animation.
case nullEvent: if (FrontWindow()) IdleControls(FrontWindow()); break; } }
And finally, the event loop itself. Herein we find the infamous
WaitNextEvent() function, System 7's answer to the inadequacies of the older, not-multitasking-aware
GetNextEvent(). WNE fetched the next event from the event queue which matched the requested set of events, allowed the specified amount of time to background processes, and returned whether or not there was any event to handle.
If there was no event to handle, we do the same thing as we do for a
nullEvent. I don't actually remember what circumstances would distinguish the false return from
WaitNextEvent() from receiving a valid
nullEvent, I'm afraid. In either case, loop until something says it's time to quit.
static void RunEventLoop(void) { EventRecord event; while (!gDone) { if (WaitNextEvent(everyEvent, &event, 30L, nil)) HandleEvent(&event); else { if (FrontWindow()) IdleControls(FrontWindow()); } } }
main
And finally, the ever-important entry point of the application,
main() itself!
Oh. All it has to do is call the initialization and the event loop. Even in System 7 days, there wasn't usually much for
main() to clean up once the event loop was done.
main() didn't take any parameters in the Toolbox universe. There was no command line and no concept of environment variables, so there was really nothing to pass. What little information the OS passed directly to the program was in that
qd global.
void main(void) { Initialize(); RunEventLoop(); }
Conclusion
All I can say is, it really took a lot to get simple things done in the old days. But, it also felt a lot closer to the machine itself, with control possible over just about everything if you wanted it. Cocoa's really gotten away from that concept, as has programming in general. I don't miss the extra work, but I sometimes get nostalgic for the feeling of speaking the OS' own language.
That's all from me this week, I'll be back next week with a Friday Q&A. Meanwhile, enjoy Mike's article for this week, and as always, thanks for reading!
This seems to be a criticism of procedural programming languages, not the Mac Toolbox, and it's also an incorrect generalization; you are taking poor programming practices that many programs used and extrapolating them to all. There's no good reason to keep the standard AppleEvent handler UPPs as global variables, it's pointless. A boolean for exiting from your event loop ("gDone") can be replaced by any number of superior constructs.
Care to elaborate? You got collapse/expand for free unless you asked to handle it specially. The proxy icon was something like an additional inProxyIcon: case, where you then passed the event to a Window Manager function named something like HandleProxyIconEvent(). Not worse than diskEvt handling, surely.
Of course, the Carbon Event Manager eliminated most of this tedium, while still allowing you to hook in at a low level if you needed it.
Ken: I didn't know about that document; thanks for the link!
another nice installment in the series, thank you! I have a few more corrections:
1) Stop bashing globals. They're the right tool for the right job. [NSApplication sharedApplication] is effectively a struct of globals. It also contains its own gDone-equivalent.
2) One great advantage of UPPs in fat binaries was that they let you mix-and-match emulated 680x0 code and PowerPC code in a single application (something the "Universal Binaries" of the PowerPC/Intel switch didn't support). So you could load a plug-in that is written in 680x0 into a PowerPC app or vice versa.
This is actually what Apple did: They only ported the most important system routines to PowerPC, and everything else still ran in emulation. Then with every system release, more was ported.
3) There is a usability reason that HiliteMenu(0) needed to be called: Menus were supposed to remain highlighted while a menu command was busy doing its work. E.g. if you selected "About" the Apple menu would not un-highlight until you closed the about panel. This was a nice bit of breadcrumbs-style feedback, reminding you where you got a window from.
However, it became problematic when alerts and dialogs started handling the menu bar (originally, you couldn't use menus while a modal dialog was up, not even copy and paste).
4) You should probably add a HiliteWindow() call to the activate event, otherwise your app is going to look really weird. Well, it would, if you had any SelectWindow() calls anywhere that re-order windows on clicks (which also needed to be done manually on content- and title bar clicks).
1) I admit, I've been a little cruel to globals, mostly because I've been bitten by their misuse in the past.
2) The Mixed Mode Manager was extremely clever in making things transparent that way, and I duly salute its ingenuity :). But in my opinion, the extra complexity it added to any code that used callbacks (and there was a lot of such code in the Toolbox) was only a reasonable tradeoff versus recompilation (such as in PPC/Intel) when space limits were considered.
3,4) I'd forgotten about both those bits :). As I mentioned, it's been a long time since I used this stuff, and I only wish I could justify the time it'd take to become proficient again.
1) Much of the toolbox was written in 68k assembly
2) Much of the toolbox was written in such a way that it had to be shared between all running programs
Re: four-character character literals:
I think you're a little off-base here; you can definitely find these in Cocoa, and I've never had a compiler complain about them (though admittedly I probably have only used them in Apple gcc/clang).
For example, you can specify OSTypes with four-character literals. Also they're used extensively in Cocoa's support for Apple Events; types like AEEventClass and DescType are found in NSAppleEventDescriptor, NSAppleEventManager, SBObject, etc.
I think you still have to use FSRefs if you want to write code that does an asynchronous filesystem copy in the background and calls back the client with progress reports (with FSCopyObjectAsync()); also the old Carbon File Manager remains the only way to "set" filesystem comments without ducking into Apple Events or scripting, which work, but have, at best, same-day service.
A quick overview of the super useful Collections module of Python.
If the implementation is hard to explain, it’s a bad idea. (The Zen of Python)
Python is a pretty powerful language and a large part of this power comes from the fact that it supports modular programming. Modular programming is essentially the process of breaking down a large and complex programming task into smaller and more manageable subtask/module. Modules are like LEGO bricks which can be bundled up together to create a larger task.
Modularity has a lot of advantages when writing code like:
- Reusability
- Maintainability
- Simplicity
Functions, modules and packages are all constructs in Python that promote code modularization.
Objective
Through this article, we will explore Python’s Collections Module. This module improves on the functionality of, and provides alternatives to, Python’s general-purpose built-in containers such as dict, list, set, and tuple.
Introduction
Let’s begin the article with a quick glance at the concept of modules and packages.
Module
A module is nothing but a .py script that can be called in another .py script. A module is a file containing Python definitions and statements which helps to implement a set of functions. The file name is the module name with the suffix
.py appended. Modules are imported from other modules using the
import command. Let’s import the math module.
# import the library
import math
#Using it for taking the log
math.log(10)
2.302585092994046
Python’s in-built modules
Python has innumerable inbuilt modules and there are packages already created for almost any use case you can think of. Check out the complete list here.
Two very important functions come in handy when exploring modules in Python — the
dir and
help functions.
- The built-in function
dir() is used to find out which functions are implemented in each module. It returns a sorted list of strings:
print(dir(math))
- After locating our desired function in the module, we can read more about it using the
help function, inside the Python interpreter:
help(math.factorial)
Packages
Packages are a collection of related modules stacked up together. NumPy and SciPy, the core machine learning packages, are made up of a collection of hundreds of modules. Here is a partial list of sub-packages available within SciPy.
Let us now hop over to the actual objective of this article which is to get to know about the Python’s Collection module. This is just an overview and for detailed explanations and examples please refer to the official Python documentation.
Collections Module
Collections is a built-in Python module that implements specialized container datatypes providing alternatives to Python’s general purpose built-in containers such as
dict,
list,
set, and
tuple.
Some of the useful data structures present in this module are:
1. namedtuple()
The data stored in a plain tuple can only be accessed through indexes as can be seen in the example below:
plain_tuple = (10,11,12,13)
plain_tuple[0]
10
plain_tuple[3]
13
We can’t give names to individual elements stored in a tuple. This might not matter in simple cases, but when a tuple has many fields, remembering the right index becomes a burden and hurts the code’s readability.
It is here that namedtuple’s functionality comes into the picture. It is a function for tuples with Named Fields and can be seen as an extension of the built-in tuple data type. Named tuples assign meaning to each position in a tuple and allow for more readable, self-documenting code. Each object stored in them can be accessed through a unique (human-readable) identifier and this frees us from having to remember integer indexes. Let’s see its implementation.
from collections import namedtuple
fruit = namedtuple('fruit','number variety color')
guava = fruit(number=2,variety='HoneyCrisp',color='green')
apple = fruit(number=5,variety='Granny Smith',color='red')
We construct the namedtuple by first passing the object type name (fruit) and then passing a string with the variety of fields as a string with spaces between the field names. We can then call on the various attributes:
guava.color
'green'
apple.variety
'Granny Smith'
Namedtuples are also a memory-efficient option when defining an immutable class in Python.
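That memory claim can be sanity-checked with sys.getsizeof: a namedtuple instance stores its fields like a plain tuple, with no per-instance dictionary, so it comes out smaller than an equivalent dict (the exact byte counts vary by Python version):

```python
import sys
from collections import namedtuple

fruit = namedtuple('fruit', 'number variety color')
guava = fruit(number=2, variety='HoneyCrisp', color='green')
as_dict = {'number': 2, 'variety': 'HoneyCrisp', 'color': 'green'}

# The namedtuple instance is consistently the smaller of the two.
print(sys.getsizeof(guava), sys.getsizeof(as_dict))
```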
2. Counter
Counter is a dict subclass which helps to count hashable objects. The elements are stored as dictionary keys while the object counts are stored as the value. Let’s work through a few examples with Counter.
#Importing Counter from collections
from collections import Counter
- With Strings
c = Counter('abcacdabcacd')
print(c)
Counter({'a': 4, 'c': 4, 'b': 2, 'd': 2})
- With Lists
lst = [5,6,7,1,3,9,9,1,2,5,5,7,7]
c = Counter(lst)
print(c)
Counter({5: 3, 7: 3, 1: 2, 9: 2, 6: 1, 3: 1, 2: 1})
- With Sentence
s = 'the lazy dog jumped over another lazy dog'
words = s.split()
Counter(words)
Counter({'another': 1, 'dog': 2, 'jumped': 1, 'lazy': 2, 'over': 1, 'the': 1})
Counter objects support three methods beyond those available for all dictionaries:
- elements()
Returns an iterator over the elements, repeating each one as many times as its count. Elements with a count less than one are ignored.
c = Counter(a=3, b=2, c=1, d=-2)
sorted(c.elements())
['a', 'a', 'a', 'b', 'b', 'c']
- most_common([n])
Returns a list of the n most common elements with their counts, ordered from most to least common. If n is not specified, it returns all elements.
s = 'the lazy dog jumped over another lazy dog'
words = s.split()
Counter(words).most_common(3)
[('lazy', 2), ('dog', 2), ('the', 1)]
Common patterns when using the Counter() object
sum(c.values()) # total of all counts
c.clear() # reset all counts
list(c) # list unique elements
set(c) # convert to a set
dict(c) # convert to a regular dictionary
c.items() # convert to a list of (elem, cnt) pairs
Counter(dict(list_of_pairs)) # convert from a list of (elem, cnt) pairs
c.most_common()[:-n-1:-1] # n least common elements
c += Counter() # remove zero and negative counts
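Counters also support arithmetic and set-style operations, which make a few of the patterns above concrete (the counters and values here are made up for illustration):

```python
from collections import Counter

c = Counter(a=3, b=1)
d = Counter(a=1, b=2)

print(c + d)                    # add counts: Counter({'a': 4, 'b': 3})
print(c - d)                    # subtract, keeping only positive counts
print(c & d)                    # intersection: minimum of each count
print(c.most_common()[:-3:-1])  # the 2 least common elements
```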
3. defaultdict
Dictionaries are an efficient way to store data for later retrieval, consisting of an unordered set of key: value pairs. Keys must be unique and immutable objects.
fruits = {'apple':300, 'guava': 200}
fruits['guava']
200
Things are simple if the values are ints or strings. However, if the values are in the form of collections like lists or dictionaries, the value (an empty list or dict) must be initialized the first time a given key is used. defaultdict automates and simplifies this stuff. The example below will make it more obvious:
d = {}
print(d['A'])
Here, the regular dictionary raises a KeyError since ‘A’ is not currently in the dictionary. Let us now run the same example with defaultdict.
from collections import defaultdict
d = defaultdict(object)
print(d['A'])
<object object at 0x7fc9bed4cb00>
The
defaultdict in contrast will simply create any items that you try to access (provided of course they do not exist yet). The defaultdict is also a dictionary-like object and provides all methods provided by a dictionary. However, the point of difference is that it takes the first argument (default_factory) as a default data type for the dictionary.
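A classic use of the default_factory argument is grouping, with list as the factory so every missing key starts out as an empty list:

```python
from collections import defaultdict

pairs = [('yellow', 1), ('blue', 2), ('yellow', 3), ('blue', 4)]
grouped = defaultdict(list)
for key, value in pairs:
    grouped[key].append(value)  # no manual initialization needed

print(dict(grouped))  # {'yellow': [1, 3], 'blue': [2, 4]}
```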
4. OrderedDict
An OrderedDict is a dictionary subclass that remembers the order in which that keys were first inserted. When iterating over an ordered dictionary, the items are returned in the order their keys were first added. Since an ordered dictionary remembers its insertion order, it can be used in conjunction with sorting to make a sorted dictionary:
- sorted by the length of the key string:
d = {'banana': 3, 'apple': 4, 'pear': 1, 'orange': 2}
OrderedDict(sorted(d.items(), key=lambda t: len(t[0])))
OrderedDict([('pear', 1), ('apple', 4), ('banana', 3), ('orange', 2)])
A point to note here is that in Python 3.6, regular dictionaries became insertion-ordered, i.e. dictionaries remember the order in which items were inserted. Read the discussion here.
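Even with regular dicts keeping insertion order, OrderedDict retains order-manipulating methods that plain dicts lack, such as move_to_end:

```python
from collections import OrderedDict

od = OrderedDict([('pear', 1), ('apple', 4), ('banana', 3)])
od.move_to_end('pear')                # send 'pear' to the last position
print(list(od))                       # ['apple', 'banana', 'pear']
od.move_to_end('banana', last=False)  # or send a key to the front
print(list(od))                       # ['banana', 'apple', 'pear']
```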
Conclusion
The Collections module also contains some other useful datatypes like deque, ChainMap, UserString and a few more. However, I have shared the ones which I use in my day-to-day programming to keep things simple. For a detailed explanation and usage, visit the official Python documentation page.
Originally published here
Hi Miles,
I want to calculate the Sx^2 for a spin 2 system and the operator is defined in the header file spintwo.h. For the simple code shown below, the compile was successful but when I ran the code, I got the error message:
"From line 161, file itdata/qdense.cc
Setting IQTensor element non-zero would violate its symmetry.
Setting IQTensor element non-zero would violate its symmetry.
Aborted (core dumped)"
using namespace itensor;
int main()
{
auto sites = SpinTwo(4);
ITensor op = sites.op("Sx2",2);
PrintData(op);
return 0;
}
I got the same error message for operators Sy^2 and S^2 but the code worked perfectly for Sz^2. Could you please help me resolve this issue?
P.S. A year ago, an ITensor user named alesa ran into a similar problem for spin 1 system and you mentioned that there was a bug associated with spin 1 siteset which you later fixed:
Hi, thanks for the question. So the issue is a rather technical one and has to do with a tricky design issue we have with using a single site set type for both ITensor and IQTensor tensor types. We have a plan to fix this more deeply in the library, but for now you have to do the following steps:
when defining an operator such as Sx^2 inside your site set, add the line
Op = mixedIQTensor(dag(s),sP);
before setting the elements of your operator. See the file itensor/mps/sites/spinhalf.h and where the "Sx" operator is defined for an example.
when retrieving your operator from the sites.op("Sx2",j) method, make sure to convert it to an ITensor
You will need to use ITensors, and not IQTensors, to work with operators such as Sx^2 in your own code since such an operator does not conserve the total Sz quantum number.
Best regards,
Miles
|
http://itensor.org/support/1127/error-message-regarding-the-operator-sx2-spintwo-siteset
|
CC-MAIN-2020-34
|
refinedweb
| 318
| 68.7
|
Log actions to Elastic Search via bulk upload
Project description
bes is a flexible bulk uploader for Elastic Search. At the moment, it only supports index uploads over UDP. You can retrieve the logged data from Elastic Search and analyze it dynamically using Kibana (source). This package just makes the initial upload to Elastic Search as painless as possible.
The connection parameters and other configuration options are stored in bes.DEFAULT. Override them as you see fit (e.g. to connect to a remote Elastic Search server):
>>> import bes
>>> bes.DEFAULT['host'] = 'my-es-server.example.com'
Then log away:
>>> bes.log(type='record', user='jdoe', action='swims')
If you are so inclined, you can override the DEFAULT index on a per-call basis:
>>> bes.log(index='my-index', type='my-type', user='jdoe', action='bikes')
The log generated by the above will look like:
{
  "@timestamp": "2013-09-26T16:34:09.179048",
  "@version": 1,
  "action": "bikes",
  "user": "jdoe"
}
following Jordan Sissel’s new format for Logstash:
You should specify a unique type for each record you create, because Elastic Search doesn’t recalculate its mapping if you post a new event that uses an old key with a new data type. If you are only logging a single record type, you can configure a global default:
>>> bes.DEFAULT['type'] = 'record'
>>> bes.log(user='jdoe', action='runs')
Django
Although the core of the library is framework-agnostic, we’ll be using this to log events in a Django website. There are a few helpers for extracting information from HttpRequest to make this as easy as possible. Drop it into your views with something like:
import django.http
import django.shortcuts

import bes.django
from polls.models import Poll

def detail(request, poll_id):
    try:
        p = Poll.objects.get(pk=poll_id)
    except Poll.DoesNotExist:
        bes.django.log_user_request_path(
            request=request, type='404', poll_id=poll_id)
        raise django.http.Http404
    bes.django.log_user_request_path(
        request=request, type='poll-detail', poll_id=poll_id, poll_name=p.name)
    return django.shortcuts.render_to_response(
        'polls/detail.html', {'poll': p})
You can also override bes’ defaults in your Django config:
BULK_ELASTIC_SEARCH_LOGGING_HOST = 'my-es-server.example.com'
The Django config names match the uppercased DEFAULT keys with a BULK_ELASTIC_SEARCH_LOGGING_ prefix (for example, host → BULK_ELASTIC_SEARCH_LOGGING_HOST).
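That mapping is mechanical enough to express in a line of Python (an illustrative helper, not part of bes’ API):

```python
def django_setting_name(key):
    """Map a bes DEFAULT key to its Django setting name (illustrative only)."""
    return 'BULK_ELASTIC_SEARCH_LOGGING_' + key.upper()

print(django_setting_name('host'))  # BULK_ELASTIC_SEARCH_LOGGING_HOST
```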
Tracing
To log activity for critical functions, bes has tracing decorators:
import bes.trace

@bes.trace.trace()
def my_function():
    return 1 + 2 / 3
By default this logs with bes.log using the decorated function’s name (e.g. my_function) as the log type. You can override this if you want something different:
def my_logger(*args, **kwargs):
    print((args, kwargs))

@bes.trace.trace(type='my-type', logger=my_logger)
def my_function():
    return 1 + 2 / 3
Additional trace arguments are passed straight through to the logger:
@bes.trace.trace(index='my-index', type='my-type', sport='triathalon')
def my_function():
    return 1 + 2 / 3
By default, you’ll get separate logged messages before the decorated function starts, after it completes, or if it raises an exception. An action argument is passed to the logger in each case, with values of start, complete, and error respectively. In the error case, the logger is also passed the string-ified exception (error=str(e)) to help with debugging the problem. If you don’t want one or more of these logged messages, you can turn them off:
@bes.trace.trace(start=False, error=False, complete=False)
def my_function():
    return 1 + 2 / 3
although if you turn them all off, you’re not going to get any messages at all ;).
The trace decorator uses wrapt (if it’s installed) to create introspection-preserving decorators. If wrapt is not installed, we fall back to the builtin functools.wraps, which handles the basics like docstrings, but doesn’t do as well for more involved introspection (e.g. inspect.getsource).
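For comparison, the start/complete/error pattern described above can be sketched with nothing but the standard library; this is an illustration of the idea, not bes’ actual implementation:

```python
import functools

def trace(logger):
    """Log before a function runs, after it completes, or when it raises."""
    def decorator(func):
        @functools.wraps(func)  # preserve __name__, docstring, etc.
        def wrapper(*args, **kwargs):
            logger(type=func.__name__, action='start')
            try:
                result = func(*args, **kwargs)
            except Exception as e:
                logger(type=func.__name__, action='error', error=str(e))
                raise
            logger(type=func.__name__, action='complete')
            return result
        return wrapper
    return decorator

events = []

@trace(logger=lambda **kw: events.append(kw))
def my_function():
    return 1 + 2 / 3

my_function()
print([e['action'] for e in events])  # ['start', 'complete']
```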
Testing
Testing uses unittest’s automatic test discovery. Run the test suite with:
$ python -m unittest discover
For Python <3.3, the test suite requires the external mock package, which is bundled as unittest.mock in Python 3.3.
|
https://pypi.org/project/bes/
|
CC-MAIN-2018-22
|
refinedweb
| 692
| 50.23
|
So I'm new to Java and learning new stuff here and there. I decided to test out my abilities (I just started yesterday) and create an extremely simple RPG character-creation text game. I wrote all the code, and when I run it, it works fine for the first 3 questions. Then when I get to the fourth question, it asks 2 questions at the same time and only lets me answer one. After that, all the rest of the questions are screwed up and not answerable.
Here is the code:
import java.util.Scanner;
public class TextGame {
public static void main(String args[]){
Scanner player = new Scanner(System.in);
String name, gender, race, build, path;
double height;
int age;
System.out.println("What is your name?");
name = player.nextLine();
System.out.println("What is your gender?");
gender = player.nextLine();
System.out.println("How old are you?");
age = player.nextInt();
System.out.println("What race are you?");
race = player.nextLine();
System.out.println("What type of body build do you have?(Fat, skinny, muscular, etc...)");
build = player.nextLine();
System.out.println("How tall are you?(Feet)");
height = player.nextDouble();
System.out.println("What class are you?(Warrior, mage, archer, etc...)");
path = player.nextLine();
System.out.println("So you are a " + age + "year-old" + build + gender + race + path + "that is " + height + "ft tall, named " + name + "?" );
player.close();
}
}
//End of RPG test
Now when I run it, this is what I get in the console. I put x and 1 as placeholders:
What is your name?
x
What is your gender?
x
How old are you?
1
What race are you?
What type of body build do you have?(Fat, skinny, muscular, etc...)
x
How tall are you?(Feet)
1
What class are you?(Warrior, mage, archer, etc...)
x
So you are a 1 year-old x x x that is 1.0 ft tall, named x.
Can someone explain to me why its doing this and tell me how to fix this? Please! Thanks for reading!
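For readers hitting the same issue: Scanner.nextInt() and nextDouble() read only the number, leaving the trailing newline in the input buffer, so the following nextLine() immediately returns the empty remainder of that line. A minimal sketch of the usual fix — add a nextLine() after each numeric read (class and method names here are mine, not from the thread):

```java
import java.util.Scanner;

class ScannerFix {
    static String readRaceAfterAge(Scanner in) {
        int age = in.nextInt();
        in.nextLine();               // consume the leftover newline after the number
        String race = in.nextLine(); // now this really reads the next line
        return age + ":" + race;
    }

    public static void main(String[] args) {
        // Simulate typed input: an age, then a race on the next line
        Scanner in = new Scanner("21\nElf\n");
        System.out.println(readRaceAfterAge(in)); // prints "21:Elf"
    }
}
```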
|
http://forums.devshed.com/java-help-9/help-please-dont-whats-wrong-950297.html
|
CC-MAIN-2014-42
|
refinedweb
| 338
| 79.46
|
Dear Reader,
Further to my first post here, today I noticed something cool with respect to using blocks in C#. We all know that in a using block, we usually create a type which implements IDisposable, as shown below:
public class MyClass : IDisposable
{
    public void Dispose()
    {
        Console.WriteLine("Disposed");
    }
}
And the usage code looks like:
using (MyClass m = new MyClass())
{
}
In fact, all these days I have been doing the same w.r.t. using() blocks in all my code. But today I learned (yes, a real shame) that you need not always create the instance inline; you can actually call a method that returns one. Let me show you some code on the same:
public class MyClass : IDisposable
{
    public void Dispose()
    {
        Console.WriteLine("Disposed");
    }
}

public class UsingClass
{
    public static MyClass SomeMethod()
    {
        return new MyClass();
    }
}

public partial class Program
{
    public static void Main()
    {
        using (UsingClass.SomeMethod())
        {
        }
        Console.ReadLine();
    }
}
Here the compiler is being a bit smart: it sees that the return type of SomeMethod() implements IDisposable, hence it does not issue any error. Just change the method to return a type that does not implement IDisposable, and you would get an error.
Your comments are welcome 🙂
Happy Coding.
6 thoughts on “Method call in Using block TIP”
Using just using (new MyClass()) { } should work as well, because normally all you do is store it in a variable and use that all over the place. Nothing, however, is stopping you from writing the creation without a variable declaration.
You could use this as well:
using(new MyClass())
{
Console.WriteLine(“lol”);
SomeMethodDefinedOnMyClass();
}
Yes, you could do that too, but then you're not making much use of the created instance, right?
Specifically, the using() block will accept *any* expression that is implicitly convertible to IDisposable, including variable-declaration expressions.
The only exception I know of is “using (null) {}” even though “null” is implicitly convertible to any reference type. (Note that “using ((IDisposable)null) {}” will compile and execute without error, as expected.)
Yep, even when I made the method return null instead of the type instance, the compiler still just passes it. And strangely, there is no runtime exception thrown either. I guess one has to be careful about this.
There is no runtime exception because the using() block includes a null check. “using (foo) { … }” is syntactic sugar for:
IDisposable temp = foo;
try {
…
} finally {
if (temp != null) temp.Dispose();
}
The temporary local is only generated when the target expression of the using block is not a variable declaration, but the null check is emitted in both cases.
Yep, you're right. 🙂
|
https://adventurouszen.wordpress.com/2011/11/15/method-call-in-using-block-tip/
|
CC-MAIN-2018-05
|
refinedweb
| 429
| 62.38
|
Dear listmembers,

I found the root cause for the problem reading the symbol table. I have a patch for the kernel and I would appreciate your comments.

Sparc64 uses a wrapper for get_symbol_table that in fact uses a 64-bit kernel call to perform the action. In the 32-bit world the structure kernel_sym consists of an unsigned long (4 bytes) and a char array (60 bytes), summing up to 64 bytes total size. When wrapping this call to the 64-bit kernel the size of the struct grows: first of all our unsigned long turns into 8 bytes, then things have to be word aligned. So the struct grows up to 72 bytes in total.

This is where trouble begins. The kmalloc() call in use here is capable of allocating not more than 131048 bytes, and this boundary is hit here. Erroneously I had thought this was related to SMP - it isn't. Accidentally the map size was a little smaller for the non-SMP kernel, thereby fitting into the available memory.

What I suggest now (as a first step) is to modify in linux/include/linux/module.h:

struct kernel_sym {
#ifdef __ARCH_SPARC64_ATOMIC__
	unsigned int value;
#else
	unsigned long value;
#endif
	char name[60];	/* should have been 64-sizeof(long); oh well */
};

Thereby the size of kernel_sym remains 64 bytes and we do not need 4 additional alignment bytes to make things work again. So the limitation is pushed upwards from 1820 symbols (now) up to 2047 symbols (see above). I honestly see no real reason for using an unsigned long here since it is almost always 4 bytes long - except on 64-bit architectures.

I am 100% sure that the kernel junkees know a better switch than __ARCH_SPARC64_ATOMIC__, which I took from atomic.h, knowing the latter is included into module.h. So, who would be the right person to address this to? As I initially assumed, this is not a klogd issue - apart from the fact that klogd relies on get_kernel_symbols.

Take care

--
HARMAN BECKER AUTOMOTIVE SYSTEMS
Dr.-Ing.
Dieter Jurzitza
Manager Hardware Systems, System Development
|
https://lists.debian.org/debian-sparc/2004/12/msg00120.html
|
CC-MAIN-2017-13
|
refinedweb
| 366
| 61.97
|
CodePlex Project Hosting for Open Source Software
I have some public domain code in my project and I would like to suppress all StyleCop checking for a whole class or namespace. What is the proper syntax to suppress all checking of a file?
Please see
I'm not aware how one can suppress at the logic (namespace or class) level as opposed to the physical (file) level, but hopefully this is sufficient for you.
You're probably looking for file lists:
|
https://stylecop.codeplex.com/discussions/399772
|
CC-MAIN-2017-22
|
refinedweb
| 117
| 80.21
|
RE: SoapExtension execution on clientside
- From: "Hans" <Hans@xxxxxxxxxxxxxxxxxxxxxxxxx>
- Date: Thu, 30 Jun 2005 01:41:03 -0700
Hi,
Ok.. It was a fairly general question the last one hard to answer so I have
a new one instead...
I created the reflector ond the importer... I put the dll in the gac.
I adjusted the .config file to reflect the version and the token.
I browse to the ws and it works fine. I look (through the browser) at the
WSDL and it looks okay with the namespace for my extension...
In my world, the only thing I have to do is to add a webreference and all
should be fine...?
But NO.
I've got the message "Custom tool warning: At least one optional import
ServiceDescriptionFormatExtension has been ignored"
When I look at my WSDL file in the .NET environment my namespace is gone...
Why? What I'm I missing?
Anybody?
TIA
BR
Hans
"Hans" wrote:
> Hi
>
> Okay, I understand why I don't get any replies... RTFM...
>
> So I now created a SoapExtensionReflector and a SoapExtensionImporter class
> so the proxy can get hold of the info when its created.
>
> When I browse to the webservice it shows up nicely, but when I want the WSDL I
> only got a Server Internal Error 500.
>
> Is there any good examples on how to do this (I cant find any...)?
>
> TIA
> BR
> Hans
>
> "Hans" wrote:
>
> > Hi,
> >
> > I have created an extension that only runs on the serverside and not on both
> > the client and the server side.
> >
> > In BeforeDeserialize I have to fix my dateTime so the deserialization
> > doesn't destroy my time when using a timezone.
> > I do this by modifying the timezone info in the XML stream before
> > deserialization.
> >
> > I am executing my webservice from a Windows app.
> >
> > But I can't get it to run on the clientside.
> >
> >
> > I added the following to my web.config
> > <webServices>
> > <soapExtensionTypes>
> > <add type="ClassLibrary1.MyExtension,ClassLibrary1" />
> > </soapExtensionTypes>
> > </webServices>
> >
> > I added the class(MyExtension) as a reference on the client.
> >
> > My reference.cs on the client side does not include any information on my
> > "MyExtension" class. Should it? How to get the proxy generator to discover
> > the extension?
> >
> >
> > The extension works well on the serverside.
> >
> > TIA for any help
> >
> > Br
> > Hans
> >
- References:
- SoapExtension execution on clientside
- From: Hans
- RE: SoapExtension execution on clientside
- From: Hans
|
http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.framework.webservices/2005-06/msg00238.html
|
crawl-002
|
refinedweb
| 427
| 65.93
|
#include <engine.h>
List of all members.
A feature in this context means an ability of the engine which can be either available or not.
All Engine subclasses should consider overloading some or all of the following methods.
Definition at line 102 of file engine.h.
Enables the subtitle speed and toggle items in the Options section of the global main menu.
'Return to launcher' feature is supported, i.e., EVENT_RETURN_TO_LAUNCHER is handled either directly, or indirectly (that is, the engine calls and honors the result of the Engine::shouldQuit() method appropriately).
Loading savestates during runtime is supported, that is, this engine implements loadGameState() and canLoadGameStateCurrently().
If this feature is supported, then the corresponding MetaEngine *must* support the kSupportsListSaves feature.
Loading savestates during runtime is supported, that is, this engine implements saveGameState() and canSaveGameStateCurrently().
Changing the game settings during runtime is supported.
This enables showing the engine options tab in the config dialog accessed through the Global Main Menu.
Arbitrary resolutions are supported, that is, this engine allows the backend to override the resolution passed to OSystem::setupScreen.
The engine will need to read the actual resolution used by the backend using OSystem::getWidth and OSystem::getHeight.
Engine must receive joystick events because the game uses them.
For engines which do not have this feature, joystick events are converted to mouse events.
Definition at line 166 of file engine.h.
Definition at line 140 of file engine.cpp.
[virtual]
Definition at line 185 of file engine.cpp.
[inline, virtual]
Notify the engine that the settings editable from the game tab in the in-game options dialog may have changed and that they need to be applied if necessary.
Definition at line 298 of file engine.h.
Indicates whether a game state can be loaded.
Reimplemented in Grim::GrimEngine, Myst3::Myst3Engine, Stark::StarkEngine, and Wintermute::WintermuteEngine.
Definition at line 748 of file engine.cpp.
Indicates whether an autosave can currently be saved.
Definition at line 476 of file engine.h.
Indicates whether a game state can be saved.
Reimplemented in Myst3::Myst3Engine, Stark::StarkEngine, and Wintermute::WintermuteEngine.
Definition at line 775 of file engine.cpp.
On some systems, check if the game appears to be run from CD.
Definition at line 451 of file engine.cpp.
Prepare an error string, which is printed by the error() function.
Definition at line 558 of file engine.cpp.
Flip mute all sound option.
Definition at line 708 of file engine.cpp.
Returns the slot that should be used for autosaves.
Definition at line 484 of file engine.h.
Return the engine's debugger instance, if any.
Definition at line 254 of file engine.h.
[inline]
Definition at line 455 of file engine.h.
[static]
Definition at line 869 of file engine.cpp.
Return the engine's debugger instance, or create one if none is present.
Used by error() to invoke the debugger when a severe error is reported.
Definition at line 853 of file engine.cpp.
Definition at line 456 of file engine.h.
Generates the savegame filename.
Definition at line 308 of file engine.h.
Definition at line 454 of file engine.h.
Get the total play time.
Definition at line 649 of file engine.cpp.
Checks for whether it's time to do an autosave, and if so, does it.
Definition at line 520 of file engine.cpp.
Determine whether the engine supports the specified feature.
Definition at line 274 of file engine.h.
Init SearchMan according to the game path.
By default it adds the directory in non-flat mode with a depth of 4 as priority 0 to SearchMan.
Definition at line 197 of file engine.cpp.
Return whether the engine is currently paused or not.
Definition at line 422 of file engine.h.
Shows the ScummVM Restore dialog, allowing users to load a game.
Definition at line 780 of file engine.cpp.
Load a game state.
Definition at line 723 of file engine.cpp.
Definition at line 743 of file engine.cpp.
Run the Global Main Menu Dialog.
Definition at line 592 of file engine.cpp.
Pause the engine.
This should stop any audio playback and other stuff. Called right before the system runs a global dialog (like a global pause, main menu, options or 'confirm exit' dialog).
Returns a PauseToken. Multiple pause tokens may exist. The engine will be resumed when all associated pause tokens reach the end of their lives.
Definition at line 562 of file engine.cpp.
[protected, virtual]
Actual implementation of pauseEngine by subclasses.
See there for details.
Reimplemented in Grim::GrimEngine, Myst3::Myst3Engine, and Stark::StarkEngine.
Definition at line 587 of file engine.cpp.
Request the engine to quit.
Sends a EVENT_QUIT event to the Event Manager.
Definition at line 841 of file engine.cpp.
[private]
Resume the engine.
This should resume any audio playback and other stuff.
Only PauseToken is allowed to call this member function. Use the PauseToken that you got from pauseEngine to resume the engine.
Definition at line 575 of file engine.cpp.
[pure virtual]
Init the engine and start its main loop.
Implemented in Grim::GrimEngine, ICB::IcbEngine, Myst3::Myst3Engine, Stark::StarkEngine, and Wintermute::WintermuteEngine.
Definition at line 667 of file engine.cpp.
Does an autosave immediately if autosaves are turned on.
Definition at line 529 of file engine.cpp.
Shows the ScummVM save dialog, allowing users to save their game.
Definition at line 809 of file engine.cpp.
Save a game state.
Definition at line 753 of file engine.cpp.
Definition at line 770 of file engine.cpp.
Sets the engine's debugger.
Once set, the Engine class is responsible for managing the debugger, and freeing it on exit
Definition at line 260 of file engine.h.
Sets the game slot for a savegame to be loaded after global main menu execution.
This is to avoid loading a savegame from inside the menu loop which causes bugs like #2822778.
Definition at line 674 of file engine.cpp.
Set the game time counter to the specified time.
This can be used to set the play time counter after loading a savegame for example. Another use case is in case the engine wants to exclude time from the counter the user spent in original engine dialogs.
Definition at line 656 of file engine.cpp.
Definition at line 488 of file engine.h.
Return whether the ENGINE should quit respectively should return to the launcher.
Definition at line 848 of file engine.cpp.
Notify the engine that the sound settings in the config manager may have changed and that it hence should adjust any internal volume etc.
values accordingly. The default implementation sets the volume levels of all mixer sound types according to the config entries of the active domain. When overwriting, call the default implementation first, then adjust the volumes further (if required).
Reimplemented in Myst3::Myst3Engine.
Definition at line 678 of file engine.cpp.
Display a warning to the user that the game is not fully supported.
Definition at line 638 of file engine.cpp.
[friend]
Definition at line 415 of file engine.h.
Autosave interval.
Definition at line 140 of file engine.h.
Optional debugger for the engine.
Reimplemented in Wintermute::WintermuteEngine.
Definition at line 157 of file engine.h.
The time when the engine was started.
This value is used to calculate the current play time of the game running.
Definition at line 135 of file engine.h.
[protected]
Definition at line 109 of file engine.h.
The last time an autosave was done.
Definition at line 145 of file engine.h.
Definition at line 112 of file engine.h.
Definition at line 105 of file engine.h.
The pause level, 0 means 'running', a positive value indicates how often the engine has been paused (and hence how often it has to be un-paused before it resumes running).
This makes it possible to nest code which pauses the engine.
Definition at line 124 of file engine.h.
The time when the pause was started.
Reimplemented in Grim::GrimEngine.
Definition at line 129 of file engine.h.
Definition at line 110 of file engine.h.
Save slot selected via global main menu.
This slot will be loaded after main menu execution (not from inside the menu loop, to avoid bugs like #2822778).
Definition at line 152 of file engine.h.
Definition at line 104 of file engine.h.
Definition at line 115 of file engine.h.
Definition at line 108 of file engine.h.
|
https://doxygen.residualvm.org/d1/db6/classEngine.html
|
CC-MAIN-2020-40
|
refinedweb
| 1,408
| 61.93
|
With PythonAnywhere I develop an application that works with a social network, and it would be great if this social network were in the whitelist.
@noTformaT: Welcome to PA!!!
It should be in the list now. Give it a go and let me know.
@glenn I have the following problem. This code:
import requests
r = requests.post("")
print r.text
output text:
ERROR The requested URL could not be retrieved
The following error was encountered while trying to retrieve the URL:
Unsupported Request Method and Protocol
Squid does not support all request methods for all access protocols. For example, you can not POST a Gopher request.
Your cache administrator is webmaster. Generated Thu, 11 Oct 2012 17:04:13 GMT by glenn-liveproxy (squid/2.7.STABLE9)
That's odd. My results were different running your example...
==========================================================================
a2j@ssh.pythonanywhere.com's password:
21:32 ~ $ python2.7
Python 2.7.3 (default, Oct 4 2012, 11:28:36)
[GCC 4.4.5] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import requests
>>> r = requests.post("")
>>> print(r.text)
{"error":"invalid_client","error_description":"client_secret is undefined"}
>>>
Results when browsing to the URL via Aurora:
{"error":"invalid_client","error_description":"client_secret is undefined"}
I ran the code again from a non paid account and got the same results as noTformaT
Is that the same bug in requests/urllib3 that we keep running into?
It looks like it is the same one. Of course in that case the user bought a premium account, so the thread action stopped. In this case they haven't. I guess the question is on noTformaT...Are you planning to upgrade or do we need to try to figure out a fix for you?
@a2j I develop non-commercial applications without a system of donations, so finding the funds to purchase a paid account is difficult for me. Probably I will have to cut back on some of the applications' functionality.
@noTformaT: Nobody's saying you have to upgrade to a paid account. Just that you wouldn't have a problem if you did, so why try and fix it if you were going to upgrade. Now we know we need to help you figure out a solution while using a free account.
I think that the problem isn't present if you use urllib instead of requests, is that possible?
|
https://www.pythonanywhere.com/forums/topic/286/
|
CC-MAIN-2018-43
|
refinedweb
| 421
| 67.35
|
Hi all,
I modified the original workflow and code to make it work. I removed the 1 minute timer and use only the interrupt input to switch the LEDs on, from LED1 to LED4, one by one.
I think the original code toggles the LED on/off every 1 minute regardless of the timer. When the interrupt button event happens, it checks the timer 1 trigger (which is always false). The original code did not make sense, so I decided to rewrite the Python code to save the last time the interrupt happened.
When a new interrupt occurs, I check the time difference between the previously saved interrupt time and the current interrupt's observed time. If the time difference is less than 7 sec, my code ignores the current interrupt event. Otherwise, the code saves the current interrupt's observed time and turns on the next LED light.
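The debounce logic just described can be boiled down to a small standalone sketch (plain Python, independent of the Medium One Store/IONode APIs; class and variable names are mine):

```python
from datetime import datetime, timedelta

WAIT = timedelta(seconds=7)

class Debouncer:
    """Ignore events that arrive within WAIT of the last accepted one."""
    def __init__(self):
        self.last_accepted = None

    def accept(self, observed_at):
        if self.last_accepted is not None and observed_at - self.last_accepted < WAIT:
            return False          # too soon: ignore this interrupt
        self.last_accepted = observed_at
        return True

d = Debouncer()
t0 = datetime(2016, 12, 13, 12, 0, 0)
print(d.accept(t0))                          # True  (first event always passes)
print(d.accept(t0 + timedelta(seconds=3)))   # False (within the 7 s window)
print(d.accept(t0 + timedelta(seconds=8)))   # True  (window elapsed)
```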
I look forward to reading Dan's detailed post regarding the GPIO tutorial to see if I have missed something. I want to post my solution for feedback from any experienced developer. I welcome any comments.
Best,
Michael
'''
Last Updated: Dec 13, 2016
Author: Medium One
'''
import MQTT
import Store
import Analytics
import DateConversion
import datetime
import Filter
import json
# pinmap dictionary allows you to map the physical pin to the logical map address in read_pin/write_pin functions.
pinmap = {'1':0, '8':5, '9':6, '10':7}
waittime = 7
def read_pin(pin, tag):
send_mqtt_msg(u'0;4;{};{};'.format(pin, tag));
def write_pin(pin, val):
send_mqtt_msg(u'0;3;{};{};'.format(pin, val));
def send_mqtt_msg(msg):
mqtt_msg = msg
if not isinstance(msg, basestring):
mqtt_msg = ''.join(map(unichr, msg))
MQTT.publish_event_to_client('s3a7', mqtt_msg, 'latin1')
# debounce
log ("Detect an input trigger. It could be a timer or interrupt trigger.")
in1 = IONode.get_input('in1')
log ('The current interrupt (in1) : {0}'.format(in1))
log ('The current interrupt (in1) observed at {0}'.format(DateConversion.to_py_datetime(in1['observed_at'])))
# Testing purpose only
Store.delete('intrtime')
# get the previous interrupt time
log ('Retrieve the previous interrupt date/time when LED toggles')
try:
prev_time = DateConversion.to_py_datetime(json.loads(Store.get('intrtime')))
except (ValueError, TypeError):
log ('No previous interrupt time. Assign utc_now datetime as the previous date and time.')
#prev_time = datetime.datetime.utcnow() - datetime.timedelta(seconds=3) # guarantee to toggle led
#prev_time = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(seconds=3) # guarantee to toggle led
prev_time = DateConversion.to_py_datetime(in1['observed_at']) - datetime.timedelta(seconds=waittime+1)
log('From Store, in1 observed at {0}.'.format(prev_time))
timediff = DateConversion.to_py_datetime(in1['observed_at']) - prev_time
log ("Time Difference between in1(timer1) and last_2_interrupt_events[1] is {0} ".format(timediff))
if DateConversion.to_py_datetime(in1['observed_at']) - prev_time < datetime.timedelta(seconds=waittime):
log ('Difference in time between the timer trigger and the last interrupt trigger is less than 7 sec.')
log ('Escape to skip the rest of the code.')
escape()
else:
log ('Proceed to toggle the LED state.')
log ("Execute try")
log ("Store the current interrupt time.")
Store.set_data('intrtime',json.dumps(in1['observed_at'])) # store the time.
try:
prev_state = int(Store.get('prev_state'))
except (ValueError, TypeError):
log ("prev_state = None. Initialize prev_state to zero.")
prev_state = 0
log ("prev_state : {0}".format(prev_state))
# toggle LED state
next_state = 0
if prev_state == 3:
next_state = 0
else:
next_state = prev_state + 1
write_pin(pinmap['1'], 1 if next_state == 0 else 0 )
write_pin(pinmap['8'], 1 if next_state == 1 else 0 )
write_pin(pinmap['9'], 1 if next_state == 2 else 0 )
write_pin(pinmap['10'], 1 if next_state == 3 else 0 )
# Not working code.
#
#if next_state == 0:
# read_pin(pinmap['1'],'GPIO12')
#elif next_state == 1:
# read_pin(pinmap['8'],'GPIO13')
#elif next_state == 2:
# read_pin(pinmap['9'],'GPIO14')
#elif next_state == 3:
# read_pin(pinmap['10'],'GPIO15')
#else:
# log ("Something wrong.")
#Store.set_data('prev_state', str(next_state))
Wow, thanks for the contribution. I'm going to go through this!
It might be hard for me to go through this today as I'm off to IoT World in the afternoon and I have a meeting with David Thai, the CEO of Medium One in this morning.
What is the module you have next to the BMC 150 accelerometer?
It is a GPS module from Digilent. It gives me the exact coordinates of the IoT board's location. I plan to use it to get temperature and air quality data in different cities in the San Francisco Bay Area.
That may be my next fun project.
Best regards, Michael
Today the Renesas support team kindly provided valuable feedback on my question about their GPIO tutorial's Python code. They confirmed my findings and have updated the Python code in their blog. They also said that they want the LED to toggle every 1 minute; it is intentional. So they want to keep the 1 minute timer, which I wanted to remove, as I show in my example in this post.
Here are the changes:
1) Connect raw.interrupt to IN1 of Base Python. So, the LED toggle will occur whenever the interrupt button is pressed.
2) Leave the 1 minute timer alone, unconnected. It still can trigger the Python code every 1 minute to toggle the LED.
3) They agreed to use the Analytics.events() function call now to get the interrupt event with the observed time stamp. Their old code, which used the Analytics.last_n_values function, will evoke a syntax error.
The above post is very long because I was learning their code and was trying to make sense of it. Hope you don't mind reading this long post. You can go to the end of the discussion to find their response.
http://learn.iotcommunity.io/t/gpio-tutorial-modified-workflow-and-python-code-4-led/794
In competitive programming, reading files is a basic technique that we have to implement. So, in this article, we will discuss reading files in competitive programming in C++.
Table of contents
Use fast I/O
When we're working with files, it is often recommended to use scanf / printf instead of cin / cout for fast input and output. However, we can still make cin / cout just as fast as scanf / printf by including two lines at the beginning of the main function:
ios_base::sync_with_stdio(false);
It toggles the synchronization of all the C++ standard streams with their corresponding standard C streams on or off. It must be called before the program performs its first input or output operation.
Adding ios_base::sync_with_stdio(false); (synchronization is enabled by default) before any I/O operation avoids this synchronization. It is a static member function of std::ios_base.
cin.tie(NULL);
tie() is a method which guarantees the flushing of std::cout before std::cin accepts an input. This is useful for interactive console programs, which require the console to be updated constantly, but it also slows down the program for large I/O.
For example:
#include <iostream>
using namespace std;

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    // ...
}
Some ways to read data in C++
Assuming that our data has the following format:
5 3
6 4
7 1
10 5
11 6
12 3
12 4
Below are two ways to read the data:
Read token by token
#include <fstream>

std::ifstream infile("file.txt");
if (!infile.is_open()) {
    return;
}

int a, b;
while (infile >> a >> b) {
    // process data here
}
Line-based parsing, using string stream
#include <fstream>
#include <sstream>
#include <string>

std::ifstream infile("file.txt");
if (!infile.is_open()) {
    return;
}

std::string line;
while (std::getline(infile, line)) {
    std::stringstream iss(line);
    int a, b;
    if (!(iss >> a >> b)) {
        break;
    }
    // process data here
}
Use Boost's file_descriptor_source
#include <boost/iostreams/device/file_descriptor.hpp>
#include <boost/iostreams/stream.hpp>
#include <fcntl.h>

namespace io = boost::iostreams;

void readLineByLineBoost() {
    int fdr = open(FILENAME, O_RDONLY);
    if (fdr >= 0) {
        io::file_descriptor_source fdDevice(fdr, io::file_descriptor_flags::close_handle);
        io::stream<io::file_descriptor_source> in(fdDevice);
        if (fdDevice.is_open()) {
            std::string line;
            while (std::getline(in, line)) {
                // using printf() in all tests for consistency
                printf("%s", line.c_str());
            }
            fdDevice.close();
        }
    }
}
Use C code
FILE* fp = fopen(FILENAME, "r");
if (fp == NULL)
    exit(EXIT_FAILURE);

char* line = NULL;
size_t len = 0;
while ((getline(&line, &len, fp)) != -1) {  // POSIX getline
    // using printf() in all tests for consistency
    printf("%s", line);
}
fclose(fp);
if (line)
    free(line);
Some pitfalls of cin / cout
By default, cin / cout waste time synchronizing themselves with the C library’s stdio buffers, so that we can freely intermix calls to scanf / printf with operations on cin / cout.
std::ios_base::sync_with_stdio(false);
Many C++ tutorials tell us to write std::cout << std::endl; instead of cout << "\n";. But std::endl is actually slower because it forces a flush, which is usually unnecessary. (We'd need to flush if we were writing an interactive progress bar, but not when writing a million lines of data.)
There was a bug in very old versions of GCC (pre-2004) that significantly slowed down C++ iostreams. Don't use ancient compilers.
Avoid these pitfalls, and cin / cout will be just as fast as scanf / printf. This is probably because scanf / printf need to interpret their format string argument at runtime and incur the overhead of varargs for the other arguments, while the overload resolution for cin / cout all happens at compile time. In any case, the difference is small enough that we do not have to care either way, since almost no reasonable code performs so much input / output for that difference to matter.
Advantages of <iostream> over <cstdio>
==> Increase type safety, reduce errors, allow extensibility, and provide inheritability.
printf() is arguably not broken, and scanf() is perhaps livable despite being error-prone; however, both are limited with respect to what C++ I/O can do. C++ I/O (using << and >>) is, relative to C (using printf() and scanf()), more type-safe, less error-prone, extensible, and inheritable.
Thanks for reading.
Reference:
https://ducmanhphan.github.io/2019-03-29-Reading-file-in-competitive-programming-in-C++/
#include <wx/nonownedwnd.h>
Common base class for all non-child windows.
This is the common base class of wxTopLevelWindow and wxPopupWindow and is not used directly.
Currently the only additional functionality it provides, compared to base wxWindow class, is the ability to set the window shape.
If the platform supports it, sets the shape of the window to that depicted by region.
The system will not display or respond to any mouse event for the pixels that lie outside of the region. To reset the window to the normal rectangular shape simply call SetShape() again with an empty wxRegion. Returns true if the operation is successful.
This method is available in this class only since wxWidgets 2.9.3, previous versions only provided it in wxTopLevelWindow.
Set the window shape to the given path.
Set the window shape to the interior of the given path and also draw the window border along the specified path.
For example, to make a clock-like circular window you could use
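The example itself did not survive extraction; a sketch of what it could look like follows, assuming code inside a wxNonOwnedWindow-derived class (the 30-pixel radius and the origin placement are arbitrary illustrative choices):

```cpp
// Illustrative fragment: clip the window to a circle.
wxGraphicsPath path = wxGraphicsRenderer::GetDefaultRenderer()->CreatePath();
path.AddCircle(0, 0, 30);  // circle of radius 30 around the origin
SetShape(path);            // pixels outside the circle no longer belong to the window
```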
Like the overload above, this method is not guaranteed to work on all platforms, but it currently does work in the wxMSW, wxOSX/Cocoa and wxGTK (with the appropriate, but almost always present, X11 extensions) ports.
http://docs.wxwidgets.org/trunk/classwx_non_owned_window.html
Local's improvements to Web Workers open doors to new kinds of web apps
Paul Frazee's side project, Local, is not just a library for getting better utility out of Web Workers. It is that, of course, but it's also a different way to think about web apps.
Local allows servers to run in browser threads where they host HTML and act as proxies between the main thread and remote services. The server threads run in their own Web Worker namespaces and communicate via Local's implementation of HTTP over the postMessage API.
This architecture presents many interesting opportunities for new kinds of web apps. Want to learn more? Paul wrote an article outlining four potential use cases that Local enables. Give it a read, check out the docs (which are built using Local), or view the project's source code on GitHub.
https://changelog.com/news/locals-improvements-to-web-workers-open-doors-to-new-kinds-of-web-apps-XrV
The front-end development space is nothing short of electrifying. It feels like every other week I find a tool, methodology or framework that intrigues me. With all of these “shiny things,” I tend to remind myself to always practice clever adoption methods.
This week, while working on an SVG animation demo, I discovered an interesting way to manipulate SVG elements using the SVG <filter> element. As a developer who specializes in interactive design and UX, I found this sparked a variety of ideas on animated transitions, hover effects and other ways to enhance the presentation of web applications.
Structuring a Basic SVG
Before I get into filters, I’d like to explain a bit about the structure of an SVG.
An SVG in its essence is an XML-based markup language for describing 2D graphics. MDN's definition puts it well: "SVG is, essentially, to graphics what HTML is to text."
An extremely basic structure of an SVG document looks like this:
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <rect x="10" y="10" width="80" height="80" fill="green" />
</svg>
The SVG starts with the <svg> tag, which can carry a few attributes. Similar to other HTML tags, attributes such as “width” and “height” can be added to the <svg> tag as well.
xmlns: Stands for XML Namespace. Since SVG is XML-based, it needs to declare its namespace to identify its elements as SVG elements. Note, if an SVG is inside HTML it does not need this attribute; the namespace is already provided by the HTML parser.
rect: This is the shape that is being rendered within the SVG. Check out MDN for more shapes.
viewBox: The viewBox attribute defines the dimensions and position of the SVG. You can think of the viewBox as a telescope lens. You're looking through the viewBox to see the image inside. The attribute takes a few values:
<min-x> <min-y> // pans within the viewBox
<width> <height> // zooms in or out of the image within the viewBox
min-x and min-y: Moves or pans the viewBox within the scope of the SVG.
width and height: Zooms in or out of the image within the viewBox (ex. width = 100 and height = 100 views the whole image).
Filters
Now that we've established what a basic SVG structure looks like, we can dive into filters! Just like the <rect> element, which displays a rectangle, SVGs can hold many other elements. One of those elements is a <filter>.
Filter: This is an SVG element that wraps a group of Filter Primitive Elements.
Filter Primitive Elements: Are elements that can both create effects and effect other SVG elements.
A good metaphor for this would be the act of painting a picture. The filter is the palette that holds the paint. The filter primitives are the colors that can be mixed and added to your image. To better show this, here is a good example of a basic filter from CoDrops:
[embedded example omitted]
In this example, they have a basic SVG element with an image and a filter element contained in it. The filter element is holding a filter primitive called `feGaussianBlur`. This filter primitive does exactly what the name says. It produces a gaussian blur on whatever element it’s mapped to within the SVG element.
Under the filter element, there is an image — yes you can put images within SVGs. The image tag has a couple of attributes. The height and width should be familiar but there are a couple of attributes that stand out.
Let’s look at this again:
[embedded example omitted]
xlink:href: This attribute points to the image file chosen (ex. /myImage.png).
x and y: Positions the image x-horizontally and y-vertically from its origin.
Filter: Specifies the filter effects that you want on your image based on the filter’s id.
There are many other filters besides `feGaussianBlur` that you can use in tandem with others to create some interesting effects.
My Demo
Now you've learned a little about how filters can be used to affect images, shapes or other elements within your SVG. The kicker is that these filter primitives all come with their own attributes. The real power comes from your ability to change filter primitive values using JavaScript, CSS or the SVG animate element.
For my demo, I wanted to give an image a “liquid-like” warped animation (no coincidence to my company name). I will eventually use that animation to warp images during the hover state.
A Quick Fact
I began doing some research on the types of filter primitives and came across the feTurbulence filter. The feTurbulence filter produces a cloudy effect on whatever element it's mapped to. The cool part of learning about these filters is that they don't just apply to front-end development. The turbulence filter derives from a computer graphics algorithm that generates a type of gradient noise. This type of noise was dubbed Perlin noise, named after its inventor, Ken Perlin. The algorithm is now used a lot in motion picture visual effects. I only mention that because learning these filters can help you understand more about computer graphics in general.
Back to It!
To create my warped image effect, I decided to mess with the feTurbulence filter. Here is the statically rendered SVG, which was initially created dynamically via JavaScript.
View the filter here
<svg viewBox="0 0 180 100" width="1920">
  <filter width="100%" height="100%" x="0%" y="0%" id="noise">
    <feTurbulence type="turbulence" baseFrequency="0.0547184" id="turbulence" numOctaves="1" result="turbulence" seed="5">
      <animate id="noiseAnimate" attributeName="baseFrequency" values="0;.1;0,0" from="0" to="100" dur="10s" repeatCount="indefinite"></animate>
    </feTurbulence>
    <feDisplacementMap in="SourceGraphic" in2="turbulence" scale="30" xChannelSelector="R" yChannelSelector="R"></feDisplacementMap>
  </filter>
  <foreignObject width="100%" height="100%">
    <img src="/img/img1.jpg" />
  </foreignObject>
</svg>
As you can see this SVG structure looks similar to the earlier structure.
1. There is an SVG element that contains a filter and an image element.
2. The image element is wrapped within a foreignObject tag, which is used to display elements that aren't naturally placed in the SVG namespace.
3. The filter takes in two filter primitives: feTurbulence, which creates the Perlin noise effect mentioned earlier, and feDisplacementMap, which takes in two inputs (here, the turbulence result and the image) and outputs the combination of the two. If you're a Dragon Ball Z fan, think of it as the act of fusing two things. Your filter is referenced in the in2 attribute and the other input (in this case the image) is referenced in the in attribute.
4. Within the feTurbulence element, I added an animate element. The attributeName attribute on the animate element targets which attribute you’d like to animate. In this case, it was the “baseFrequency” to add more noise to the filter.
SVGs have lots of surprises which can enhance our knowledge of both web development and computer graphics. So before you reach for that animation library or plugin, check out what SVGs have to offer.
Happy coding!
-lwd
Like Water Design, LLC is a multidisciplinary design studio specialized in eCommerce, modern web experiences and product design
https://medium.com/swlh/using-the-svg-feturbulence-filter-for-wave-effects-2b8cb2546ee6
Created attachment 25896 [details]
Contains 3 unit tests written using your framework to show the bug
Overview:
Part of our software takes spreadsheets created by a 3rd party scientific device that are generated using POI. These spreadsheets at times can contain the equivalent of Double.NaN. The files are written successfully, can be read by Excel, saved by Excel and still work properly. However, when you try to read these files (written by POI) a RuntimeException is thrown.
This means not all files written by POI can be read by POI.
Steps to Reproduce:
Read in any cell within an Excel file containing the equivalent of Double.NaN.
Additionally, you could also run the attached Unit Test class. It saves Double.NaN into an excel file (which passes). It fails to read using the Event based and direct methods of reading the file.
Actual Results:
RuntimeException is thrown
Expected Results:
We expected Double.NaN to be returned when a cell containing the equivalent was encountered.
Build Date & Platform:
Every build / platform since.
Date: Sat Oct 4 21:43:48 2008
New Revision: 701747
Additional Information:
While this bug may seem trivial, it is a bit of a blocker for our software.
When reading in a file using POI that contains Double.NaN, the software specifically throws a RuntimeException during the initial reading that we can not recover from.
The fix that would help us out the best would be to return Double.NaN instead of throwing the RuntimeException. Since Double.NaN can be written by POI, you should also be able to read it.
This RuntimeException is thrown in the following method:
public double readDouble() {
    long valueLongBits = readLong();
    double result = Double.longBitsToDouble(valueLongBits);
    if (Double.isNaN(result)) {
        throw new RuntimeException("Did not expect to read NaN"); // (Because Excel typically doesn't write NaN)
    }
    return result;
}
Log results for Unit test: This was run against 3.6
Testsuite: org.apache.poi.hssf.record.TestDoubleNotANumber
Tests run: 3, Failures: 0, Errors: 2, Time elapsed: 0.009 sec
------------- Standard Output ---------------
the sheet [1]:
------------- ---------------- ---------------
Testcase: testWriteNaNToFileSystem took 0.001 sec
Testcase: testEventBasedDoubleNaNError took 0.00.eventusermodel.HSSFEventFactory.genericProcessEvents(HSSFEventFactory.java:122)
at org.apache.poi.hssf.eventusermodel.HSSFEventFactory.processEvents(HSSFEventFactory.java:85)
at org.apache.poi.hssf.eventusermodel.HSSFEventFactory.processWorkbookEvents(HSSFEventFactory.java:56)
at org.apache.poi.hssf.record.TestDoubleNotANumber$NaNSpreadsheetParser.process(TestDoubleNotANumber.java:206)
at org.apache.poi.hssf.record.TestDoubleNotANumber.testEventBasedDoubleNaNError(TestDoubleNotANumber.java:68))
Testcase: testDirectDoubleNaNError took 0.00:392):317)
at org.apache.poi.hssf.usermodel.HSSFWorkbook.<init>(HSSFWorkbook.java:298)
at org.apache.poi.hssf.HSSFTestDataSamples.openSampleWorkbook(HSSFTestDataSamples.java:46)
at org.apache.poi.hssf.record.TestDoubleNotANumber.testDirectDoubleNaNError(TestDoubleNotANumber.java:83))
If you create a file using Excel, and put in that a NaN, can poi read that, or does it fail in the same way as a poi written NaN ?
Also, if you could upload an excel created file with a NaN in it, that'd be great as we can use it for a basis of additional unit tests once this is fixed
Created attachment 25898 [details]
Spreadsheet containing NaN
Attached is a POI created excel file containing Double.NaN
Created attachment 25899 [details]
Excel Generated Excel file with NaN
This was created using Excel
Q: If you create a file using Excel, and put in that a NaN, can poi read that, or
does it fail in the same way as a poi written NaN ?
A: It fails the same way as if it was written in POI.
Attached are both a spreadsheet written by POI and one written by Excel for testing purposes.
Thanks for all the digging and the files!
If no-one beats me to it, I'll take a look when I'm next near a computer with eclipse on it
A very interesting case, thanks for your investigations.
The point is that Excel's implementation of floating-point arithmetic does not fully adhere to IEEE 754. In particular, Excel does not support the notion of Positive/Negative Infinities and Not-a-Number (NaN).
In the case of Infinities, Excel generates a #DIV/0! error. This typically occurs when you divide by 0.
In the case of NaN, Excel generates a #NUM! error, which indicates an invalid number. For example, SQRT(-1) will result in a #NUM! error.
More details can be found at
POI allows you to set Double.NaN, but Excel displays an unexpected value of 2.69653970229347E+308. If the result is referenced by an Excel formula then your scientific software may give incorrect results because any math operation involving NaN should result in NaN.
To make POI compatible with Excel the following rules must be followed:
- setting a cell value to Double.NaN should change the cell type to CELL_TYPE_ERROR and error value #NUM!
- setting a cell value to Double.POSITIVE_INFINITY or Double.NEGATIVE_INFINITY should change the cell type to CELL_TYPE_ERROR and error value #DIV/0!
The rules should work both in HSSF and XSSF.
I applied this fix in r992591.
If you process the generated workbooks in Java you should check the type of the cells, because a double can be retrieved only from numeric cells. The code may look as follows:
double value;;
}
Regards,
Yegor
Hi
I had the same problem and saw that a bugfix was coming in 3.7 beta3. Now I downloaded the fix, tried it, and it didn't work. Since English is not my mother tongue, either I didn't understand the fix or this seems to be only a partial fix. My stack trace would be the same as the reporter's, but the bug description would be slightly different.
What I try to do is create a HSSFWorkbook from an existing xls-file:
"
POIFSFileSystem poifs = new POIFSFileSystem(fin);
fin.close();
workbook = new HSSFWorkbook(poifs);
"
This operation fails already in the reading of the InputStream.
As the original reporter posted in the class
org.apache.poi.hssf.record.RecordInputStream
there is the posted readDouble() method, which doesn't expect Excel to deliver a double with the value NaN.
If somehow it does get a NaN it throws a RuntimeException.
Since the xls file I try to read seems to have a cell containing NaN
the whole process fails and the HSSFWorkbook is not created.
As I understand the bugfix, it fixes the setCellValue(double value)
of the HSSFCell to accept NaN.
But this method is never called since already reading the cell from the
Stream throws a RuntimeException.
I privately rewrote org.apache.poi.hssf.record.RecordInputStream.readDouble() to not check for NaN and it worked for my case, but I don't know if there are any side effects, so I'd rather post it here.
public double readDouble() {
    long valueLongBits = readLong();
    double result = Double.longBitsToDouble(valueLongBits);
    return result;
}
ps.
Sorry for maybe posting about an already fixed bug, but
the stack trace of the bug is really like mine and it is still
happening with 3.7 beta 3.
Best Regards,
Theo
A unit test was added along with the fix, which shows the problem fixed for the original use case. If you're still having problems, please can you upload a file that demonstrates the problem when running with 3.7 beta 3, then we can use that for further testing + unit tests
Created attachment 26165 [details]
This is the xls file I tested with that threw the exception.
That is the xls file.
Here is a sample code that throws the exception I'll post at the bottom.
The stack trace is identical to the one posted by the creator of the thread.
poi-3.7-beta3-20100924.jar is the jar I have added to my classpath.
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import org.apache.poi.hssf.usermodel.HSSFWorkbook;
import org.apache.poi.poifs.filesystem.POIFSFileSystem;
public class XlsError {
public static void main(String[] args) {
HSSFWorkbook workbook = null;
FileInputStream fin = null;
try {
fin = new FileInputStream("capacityAnalysis.xls");
} catch (FileNotFoundException fnf){
fnf.printStackTrace();
}
try{
POIFSFileSystem poifs = new POIFSFileSystem(fin);
fin.close();
workbook = new HSSFWorkbook(poifs);
}catch(Exception e){
e.printStackTrace();
}
}
}:263)
at org.apache.poi.hssf.usermodel.HSSFWorkbook.<init>(HSSFWorkbook.java:188)
at org.apache.poi.hssf.usermodel.HSSFWorkbook.<init>(HSSFWorkbook.java:170)
at XlsError.main(XlsError.java:18)
Caused by: java.lang.RuntimeException: Did not expect to read NaN
at org.apache.poi.hssf.record.RecordInputStream.readDouble(RecordInputStream.java:276)
at org.apache.poi.hssf.record.NumberRecord.<init>(NumberRecord.java:43).poi.hssf.record.RecordFactory$ReflectionConstructorRecordCreator.create(RecordFactory.java:57)
... 8 more
Created attachment 26170 [details]
3 Unit Tests showing error still occurs
I downloaded the entire file structure of the poi-3.7-beta3 tag from SVN and put my test into src/testcases (org.apache.poi.hssf.record) so it could be run as part of "ant test" when building.
We have a scientific instrument that exports Double.NaN into an Excel file. We can NOT upgrade the software or change how it works. We must be able to read this file into our application. The way you are handling Double.NaN is keeping this from happening. The prior version of POI we were using allowed the reading of Double.NaN.
The error is still occurring in both the Event Based and direct modes of reading an excel file.
Test 1: testWriteNaNToFileSystem()
- Shows that you can create a file with Double.NaN. Properly saves the file with no errors.
- This test passes.
Test 2: testEventBasedDoubleNaNError()
- Creates the same file and reads it using the event based model. Saves the Double read into a variable as part of the listener. Asserts that the value read in is not null.
- This test fails.
Test 3: testDirectDoubleNaNError()
- Creates the same file and reads it using a direct approach. Asserts that the value read in from the cell is equal to the value put into the file.
- This test fails.
What version of Apache POI is the scientific instrument using? I am not saying that you must upgrade it, I just want to know which version so we can understand better what happened at r701747 that really causes this trouble.
We have these numbers coming from various places
Instrument 1: POI 3.2
Instrument 2: POI 3.6
Instrument 3: Not POI, but gives an excel file we need to read
All of them put the Excel equivalent of Double.NaN into the files. For Excel, this is 2.6965E+308 or 2.69653970229347E+308.
An excel file can be created separately with that value and still cause the same exception.
What caused this is a change at some point in POI where, instead of returning Double.NaN, an exception is thrown. If you look at the first comment, you can see where the check for Double.NaN was first added. I'm not sure whether it has changed since that point or not. But an exception for Double.NaN as a value is definitely breaking things for us.
The failing test cases demonstrate expected behavior, they are not bugs.
When dealing with NaNs and Infinities POI mimics Excel, see my comment above. Setting Double.NaN changes cell type to FORMULA and cell.getNumericCellValue() can only be called for numeric cells. The correct version of testDirectDoubleNaNError() is as follows:
public void testDirectDoubleNaNError() throws IOException {
// Write the file with Double.NaN in it
createFile(SPREADSHEET_FILE_NAME);
// Read the file
HSSFWorkbook workbook = HSSFTestDataSamples.openSampleWorkbook(SPREADSHEET_FILE_NAME);
HSSFCell cell = workbook.getSheet(THE_SHEET).getRow(currentSheetRow).getCell(currentSheetRow);
Double value = null;;
}
// We should be getting back the exact same value we put in.
Assert.assertTrue(value.equals(VALUE_PRINTED));
}
testEventBasedDoubleNaNError() fails for the same reason - Setting Double.NaN results in a formula cell and a NumberRecord is not written in the binary stream.
The real issue is that POI prior to 3.7-beta3 allowed writing NaNs and you want to process these files. I'm inclined to comment out the exception in RecordInputStream.readDouble, but this fix will come after 3.7-FINAL.
Yegor
I fixed POI to tolerate Double.NaN when reading .xls file. The fix was committed in r1033004.
The fix is provided for backward compatibility. POI 3.7+ never writes Double.NaN, instead it converts the cell type to error. See my previous posts.
Yegor
https://bz.apache.org/bugzilla/show_bug.cgi?id=49761
Red Hat Bugzilla – Bug 52472
printconf-gui crashes with an xml error
Last modified: 2008-05-01 11:38:00 EDT
From Bugzilla Helper:
User-Agent: Mozilla/4.78 [en] (X11; U; Linux 2.4.7-2 i686)
Description of problem:
when calling printconf-gui OR tui from an xterm it fails. the error appears to be a missing xml file being called by the python script. see below
[root@dustpuppy root]# printconf-gui
Traceback (innermost last):
File "/usr/sbin/printconf-gui", line 7, in ?
import printconf_gui
File "/usr/share/printconf/util/printconf_gui.py", line 1750, in ?
File "/usr/share/printconf/util/printconf_conf.py", line 1069, in
foomatic_init_overview
file = open(foomatic.overview_file_path)
IOError: [Errno 2] No such file or directory:
'/usr/share/foomatic/db/compiled/overview.xml'
happens when calling printconf-gui, printconf-tui or printtool
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1.open xterm, su -
2.type printconf-gui
3.watch the error
Actual Results: traceback error from python script
Expected Results: the printer config tool should appear
Additional info:
Had the same problem.
Creating overview.xml as an empty file doesn't fix this.
We (Red Hat) really need to fix this before next release.
Had this problem with Roswell2 packages (foomatic-1.1-0.20010717.4.noarch.rpm,
printconf-0.3.23-1.i386.rpm, printconf-gui-0.3.23-1.i386.rpm). Running "rpm
--verify foomatic" reveals a large number of missing files. Downgrading
foomatic to a previous Rawhide package (foomatic-1.1-0.20010717.3.noarch.rpm)
seems to have fixed the problem, but I have only tested one printer driver (ljet3).
Verified.
backreved to foomatic-1.1-0.20010717.3.noarch.rpm and tested with all available
drivers for HP697C (including DJ6xxP) and it works flawlessly
this is an 'upgrade over betas are not supported' bug.
the 'fix':
rm -rf /usr/share/foomatic/db
rpm -Uvh --force foomatic-?latest?.rpm
https://bugzilla.redhat.com/show_bug.cgi?id=52472
Any rectangle whose sides are parallel to the grid is uniquely defined by its top-left and bottom-right points. There are $O(n^2)$ choices for the top-left point and $O(n^2)$ choices for the bottom-right, for a total of $O(n^4)$ possible rectangles. Since $n$ is at most 20, this means that an overall $O(n^6)$ algorithm will be fast enough to get full credit. We have $O(n^4)$ rectangles total, so this means an $O(n^2)$ algorithm for checking whether each rectangle is a PCL will be fast enough to pass.
We can implement such an algorithm using a flood-fill. For each rectangle we check, we first confirm that it only contains two different colors; if it does, then we perform a flood-fill starting from each of the $O(n^2)$ unit squares in the rectangle. However, if a square has already been processed as part of another flood fill, then we skip it. A rectangle will be a valid PCL if and only if it contains exactly three distinct components -- since it contains exactly two colors, this means one of them must have two distinct components and the other must have one. Since each individual flood-fill won't intersect any of the other flood-fills that we start, the total runtime is $O(n^2)$ as each unit square is processed exactly once.
We don't know the total number of PCLs yet, though! We still need to check whether there is a larger PCL that contains the rectangle we're currently considering. It's possible to do this with some clever ordering of the PCLs, processing them in order from largest to smallest, but this can be made significantly easier by noting that there aren't very many rectangles that will be PCLs in the first place. The absolute maximum number of rectangles is ${21 \choose 2}^2$, which is around 44,000. However, the actual number of rectangles will likely be significantly less, as any rectangles without exactly three connected components will be discarded. Therefore, we can simply keep track of all existing PCLs, and to check whether a given PCL is invalid, we test whether any PCL that we have recorded completely contains it.
Our final runtime is $O(n^6 + |PCLs|^2)$.
Here is Brian Dean's code:
#include <iostream>
#include <fstream>
#include <cmath>
#include <vector>
using namespace std;

int N;
string img[20];
struct PCL { int i1,j1,i2,j2; };
vector<PCL> V;
bool beenthere[20][20];

void visit(int i, int j, int c, int i1, int j1, int i2, int j2)
{
  beenthere[i][j] = true;
  if (i > i1 && img[i-1][j]-'A'==c && !beenthere[i-1][j]) visit(i-1,j,c,i1,j1,i2,j2);
  if (i < i2 && img[i+1][j]-'A'==c && !beenthere[i+1][j]) visit(i+1,j,c,i1,j1,i2,j2);
  if (j > j1 && img[i][j-1]-'A'==c && !beenthere[i][j-1]) visit(i,j-1,c,i1,j1,i2,j2);
  if (j < j2 && img[i][j+1]-'A'==c && !beenthere[i][j+1]) visit(i,j+1,c,i1,j1,i2,j2);
}

bool is_PCL(int i1, int j1, int i2, int j2)
{
  int num_colors = 0;
  int color_count[26] = {0};
  for (int i=i1; i<=i2; i++)
    for (int j=j1; j<=j2; j++)
      beenthere[i][j] = false;
  for (int i=i1; i<=i2; i++)
    for (int j=j1; j<=j2; j++)
      if (!beenthere[i][j]) {
        int c = img[i][j] - 'A';
        if (color_count[c] == 0) num_colors++;
        color_count[c]++;
        visit(i,j,c,i1,j1,i2,j2);
      }
  if (num_colors != 2) return false;
  bool found_one=false, found_many=false;
  for (int i=0; i<26; i++) {
    if (color_count[i] == 1) found_one = true;
    if (color_count[i] > 1) found_many = true;
  }
  return found_one && found_many;
}

// is x in y?
bool PCL_in_PCL(int x, int y)
{
  return V[x].i1 >= V[y].i1 && V[x].i2 <= V[y].i2 &&
         V[x].j1 >= V[y].j1 && V[x].j2 <= V[y].j2;
}

bool PCL_maximal(int x)
{
  for (int i=0; i<V.size(); i++)
    if (i!=x && PCL_in_PCL(x,i)) return false;
  return true;
}

int main(void)
{
  ifstream fin ("where.in");
  ofstream fout ("where.out");
  fin >> N;
  for (int i=0; i<N; i++) fin >> img[i];
  for (int i1=0; i1<N; i1++)
    for (int j1=0; j1<N; j1++)
      for (int i2=i1; i2<N; i2++)
        for (int j2=j1; j2<N; j2++)
          if (is_PCL(i1,j1,i2,j2)) {
            PCL p = {i1,j1,i2,j2};
            V.push_back(p);
          }
  int answer = 0;
  for (int i=0; i<V.size(); i++)
    if (PCL_maximal(i)) answer++;
  fout << answer << "\n";
  return 0;
}
http://usaco.org/current/data/sol_where_silver_open17.html
Thank you for taking the time to read this cry for help.
I have just started learning Python (version 3.3) mainly to manipulate data from a SQL Server 2005 database as I find the SQL language a bit limiting.
I have cobbled together a few lines of code to return rows in a small SQL Server table using the pyodbc module. When run line by line in IDLE I get the results I expect, however when run as a program in my IDE (Aptana Studio) I get an AttributeError. With no experience with Python I'm not sure how to resolve this problem. I will greatly appreciate any help from you wizened Python coders out there.
The code is as follows:
import pyodbc

def main():
    cnxn = pyodbc.connect('DSN=SQL2005;UID=xxx;PWD=xxx')
    cursor = cnxn.cursor()
    cursor.execute("select * from Users")
    row = cursor.fetchone()
    while row:
        print(row)
        row = cursor.fetchone()

if __name__ == "__main__":
    main()
The error the interpreter rudely throws back is as follows:
Traceback (most recent call last):
File "C:\Users\administrator\Desktop\Exercise Files\16 Databases\pyodbc.py", line 17, in <module>
if __name__ == "__main__": main()
File "C:\Users\administrator\Desktop\Exercise Files\16 Databases\pyodbc.py", line 9, in main
cnxn = pyodbc.connect('DSN=SQL2005;UID=xxx;PWD=xxx')
AttributeError: 'module' object has no attribute 'connect'
Kind regards
Grant aka gunglichen
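One detail worth noticing in the traceback above: the script itself is named pyodbc.py. A Python script whose filename matches a module it imports will shadow that module, because the script's own directory is searched before the installed packages. The following sketch (not from the original post) demonstrates the same effect using the standard-library json module as a stand-in, so it can be run without pyodbc installed:

```python
import os
import subprocess
import sys
import tempfile

# A script named json.py that itself runs "import json" ends up importing
# *itself* (the script's directory comes first on sys.path), so attributes
# of the real module appear to be missing -- exactly the AttributeError
# pattern seen when a script named pyodbc.py does "import pyodbc".
shadowing_script = "import json\nprint(hasattr(json, 'dumps'))\n"

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "json.py")
    with open(path, "w") as f:
        f.write(shadowing_script)
    out = subprocess.run([sys.executable, path],
                         capture_output=True, text=True)

# The script reports that json.dumps is missing, because "json" resolved
# to the script itself rather than the standard-library module.
print(out.stdout.strip().splitlines()[-1])
```

Renaming the script (and deleting any stale pyodbc.pyc next to it) removes the shadowing.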
http://www.python-forum.org/viewtopic.php?f=6&t=7439&p=9552
This is my fastest one yet! May clean up the solution tomorrow, but:
import re
import time


class CoordinatePlane:
    def __init__(self, values):
        self.values = [Point(*value) for value in values]

    def x_vals(self):
        vals = [val.x for val in self.values]
        return min(vals), max(vals)

    def y_vals(self):
        vals = [val.y for val in self.values]
        return min(vals), max(vals)

    def draw(self):
        min_x, max_x = self.x_vals()
        min_y, max_y = self.y_vals()
        if max_x - min_x > 100 or max_y - min_y > 100:
            return
        grid = [['.' for _ in range(min_x, max_x + 1)] for _ in range(min_y, max_y + 1)]
        for value in self.values:
            grid[value.y - min_y][value.x - min_x] = 'X'
        for row in grid:
            print(''.join(row))
        time.sleep(2)
        print('\n\n\n')

    def increment(self):
        for value in self.values:
            value.move()


class Point:
    def __init__(self, x, y, x_speed, y_speed):
        self.x = x
        self.y = y
        self.x_speed = x_speed
        self.y_speed = y_speed

    def move(self):
        self.x += self.x_speed
        self.y += self.y_speed


with open('sample-input.txt', 'r') as f:
    values = []
    for line in f:
        x, y, x_speed, y_speed = [int(n) for n in re.findall(r'[+-]?\d+', line)]
        values.append((x, y, x_speed, y_speed))

plane = CoordinatePlane(values)
speed = 0
while True:
    print(speed)
    plane.draw()
    plane.increment()
    speed += 1
https://practicaldev-herokuapp-com.global.ssl.fastly.net/aspittel/comment/7ea0
Fun talk:New Conservapedia
Any reason why this is in the Fun namespace?--Colonel Sanders (talk) 16:07, 29 December 2010 (UTC)
- Can I take it a step further and ask why is it here at all? I say get rid of it. --Ψ GremlinHable! 16:17, 29 December 2010 (UTC)
- Yeah, I agree. And I just did.--Colonel Sanders (talk) 16:20, 29 December 2010 (UTC)
- A bit quick perhaps. There are a lot of strange things in fun and it's not as if they do any harm here.--BobSpring is sprung! 16:31, 29 December 2010 (UTC)
- Yes, I see. Perhaps I should just restore it?--Colonel Sanders (talk) 16:33, 29 December 2010 (UTC)
- Ah, was this one of those "Remember, pillage BEFORE you burn!" moments? --Ψ GremlinFale! 16:34, 29 December 2010 (UTC)
- Perhaps. But, I know you don't like it and I particularly don't either. But maybe Bob is right. Is it really doing harm? This is an article that needs to get deleted. A lot of fun articles like this really don't hurt anything. Meh.--Colonel Sanders (talk) 16:38, 29 December 2010 (UTC)
- Personally I'd restore it. We don't actually have any criteria for "Fun" as far as I'm aware. At first it was just a dumping ground for articles which just didn't make the grade or were pointless (but under the name ACD). Later it was renamed to "fun" and people started to think that things in "fun" had to be "funny", but that was never the original intention. It's an article graveyard really - though with some really funny stuff just to confuse the issue.--BobSpring is sprung! 16:40, 29 December 2010 (UTC)
- Thanks for clearing the issue on the "Fun" namespace, Bob! I restored the article and will give mercy to the other I linked to.--Colonel Sanders (talk) 16:44, 29 December 2010 (UTC)
Dead[edit]
This insignificant site is sadly among the dead (with Liberapedia, Libertapedia, EvoWiki among others). Do we need this article anymore?--Colonel Sanders (talk) 22:57, 16 June 2011 (UTC)
YES![edit]
Holy crap! This looks awesome! Can I add some Poe's Law-esque articles on it? ...Damn I need a life.Moderateman3345 (talk) 03:26, 19 June 2011 (UTC)
Where'd it go?[edit]
The site appears to be down, main page deleted, can't view any other pages or navigate the site. Wikkii.com is not down. I take it the joke got old fast? Secret Squirrel (talk) 12:46, 7 October 2011 (UTC)
- It moved back to its old server here. User:Mr. Berty crashed the Wikkii site.--Colonel Sanders (talk) 01:02, 9 October 2011 (UTC)
- Ah, thanks :" Lol Secret Squirrel (talk) 21:38, 11 October 2011 (UTC)
- Although parodying Ken, as I've figured out, just cannot be done. Secret Squirrel (talk) 01:30, 12 October 2011 (UTC)
I remember these guys![edit]
I'm one of the unofficial community staff over at Wikkii, and there was drama on our support forum because one of the bureaucrats there went rouge and closed down the Conservapedia wiki there. (He couldn't delete it, but set it so no one but administrators can access it.) Inquisitor Ehrenstein (talk) 22:40, 9 August 2012 (UTC)
- Went rouge?
ГенгисIs the Pope a Catholic?
18:36, 13 August 2012 (UTC)
Parody?[edit]
Is this actually a parody? It's hard to tell these things sometimes; you can never know whether people are serious just by them being extreme. It looks like maybe it might actually be a parody. I believe I saw one of their admins here, so probably isn't real. Inquisitor Sasha Ehrenstein des Sturmkrieg Sector (talk) 23:53, 9 August 2012 (UTC)
Alive?[edit]
I think this site is editing again. Not sure.--Seonookim (talk) 05:05, 27 February 2013 (UTC)
https://rationalwiki.org/wiki/Fun_talk:New_Conservapedia
The JAX-WS RI consists of the following major modules:
Runtime
The Runtime module is available at application runtime and provides the actual core Web Services framework.
JAX-WS is the aggregating component of what is called the integrated Stack (I-Stack). The I-Stack consists of JAX-WS, JAXB, StAX, SAAJ and Fast Infoset. JAXB is the databinding component of the stack. StAX is the Streaming XML parser used by the stack. SAAJ is used for its attachment support with SOAP messages and to allow handler developers to gain access to the SOAP message via a standard interface. Fast Infoset is a binary encoding of XML that can improve performance.
Tools
Tools for converting WSDLs and Java source/class files to Web Services.
APT
A Java SE tool and framework for processing annotations. APT will invoke a JAX-WS AnnotationProcessor for processing Java source files with javax.jws.* annotations and making them web services. APT will compile the Java source files and generate any additional classes needed to make a javax.jws.WebService-annotated class a Web service.
WsGen
Tool to process a compiled javax.jws.WebService annotated class and to generate the necessary classes to make it a Web service.
WsImport
Tool to import a WSDL and to generate an SEI (a javax.jws.WebService) interface that can be either implemented on the server to build a web service, or can be used on the client to invoke the web service.
APT
A Java SE tool and framework for processing annotations.
apt [-classpath classpath] [-sourcepath sourcepath] [-d directory] [-s directory]
    [-factorypath path] [-factory class] [-print] [-nocompile] [-Akey[=val] ...]
    [javac option] sourcefiles [@files]

-s dir
    Specify the directory root under which processor-generated source files will be
    placed; files are placed in subdirectories based on package namespace.
-nocompile
    Do not compile source files to class files.
-d dir
    Specify where to place processor and javac generated class files.
-cp path or -classpath path
    Specify where to find user class files and annotation processor factories. If
    -factorypath is given, the classpath is not searched for factories.
Annotation Processor
An APT AnnotationProcessor for processing Java source files with javax.jws.* annotations and making them web services.
Runtime SPI
A part of JAX-WS that defines the contract between the JAX-WS RI runtime and Java EE.
Tools SPI
A part of JAX-WS that defines the contract between the JAX-WS RI tools and Java EE.
JAXB XJC-API
The schema compiler.
JAXB runtime-API
A part of the JAXB runtime that defines the contract between the JAXB RI and the JAX-WS RI.
http://java.boot.by/scdjws5-guide/ch04s06.html
Subject: Re: [OMPI users] Finalize() does not return (UNCLASSIFIED)
From: Hazelrig, Chris CTR (US) (christopher.c.hazelrig.ctr_at_[hidden])
Date: 2013-08-22 14:33:43
Classification: UNCLASSIFIED
Caveats: NONE
Thank you, Jeff and Eloi, for your help. Yes, any suggestions regarding
profiling tools would be appreciated.
I was also wondering if there are any MPI functions that can be used to
assess communications status, too. The only MPI calls I am using are
Init(), Bcast(), Barrier(), and Finalize(). The Bcast() call is being used
to transfer a single boolean value from the rank 0 process to the others.
The Barrier() calls are used to resync the otherwise independent processes
at various stages during program execution. It seems unlikely there is a
communication issue since any rank that does not receive the Boolean value
would not be able to proceed as needed and the next Barrier() call would
effectively stall the program while the other ranks waited on the one to
catch up, but they are all reaching the Finalize() routine at the end of the
run.
Thanks again,
Chris
-----Original Message-----
From: users [mailto:users-bounces_at_[hidden]] On Behalf Of Eloi Gaudry
Sent: Wednesday, August 21, 2013 8:08 AM
To: Open MPI Users
Subject: Re: [OMPI users] Finalize() does not return
>> Could you advise a tool or set of options to perform such a check?
_______________________________________________
users mailing list
users_at_[hidden]
Classification: UNCLASSIFIED
Caveats: NONE
http://www.open-mpi.org/community/lists/users/2013/08/22541.php
There seems to be a problem with the module scapy.
The module does not seem to be functioning properly whether I use PyCharm in Kali in VirtualBox or VS Code on a full Kali install. I have tried importing scapy.all as in your videos, and have also tried:
from scapy.all import *
As soon as I want to scan an entire range such as “10.0.2.1/24” the app breaks. The only way I can get any output is to target a single IP such as 10.0.2.1
Also, I have verified that my full install will ping this Windows PC when running the network_scanner.py file from my full Kali install, as long as I only use this PC's IP. Again, the problem only seems to happen when scanning a range.
Is there an update or different way to import scapy?
I really enjoy your courses and teaching style and hope there is a quick/efficient resolution.
Thanks again for your help.
https://zsecurity.org/forums/reply/29261/
Hi Andreas,

All quilt commands start by sourcing scripts/patchfns, but before doing so, they check whether this file has already been sourced. Looking at the code, I can't see any case where the file would have been sourced already (it is being sourced exactly once in each command-specific file). Can you remember why you added that check?

The only thing that came to my mind was that maybe you wanted to make it possible to pre-source the file so that individual commands do not need to do it again. That would make sense from a performance perspective, however I don't think this can be done at the moment, because a number of checks in scripts/patchfns are context-dependent (sourcing of $QUILTRC, addition of default arguments for the command, discovery of the source tree's root, setting of $QUILT_PATCHES, $QUILT_SERIES and $SERIES, version check and now series check). So it would only work if we would split non-context-dependent code to a separate file and pre-source only that file. Whether we ever want to do that or not, I'm not sure, as it would seriously clutter the shell's environment, especially considering the lack of namespace marker for all functions in this file.

So I am considering removing the check for double inclusion. Do you have any objection to me doing that? If the check is really needed then I'd rather move it to scripts/patchfns itself so that it isn't duplicated in every command-specific file, pretty much as is traditionally done for C header files.

Thanks,
--
Jean Delvare
SUSE L3 Support
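For illustration, moving the guard into the sourced file itself could look like the following minimal sketch (the variable name and the demo file are hypothetical, not quilt's actual code); a counter stands in for the function definitions so the effect is observable:

```shell
# C-header-style inclusion guard inside a sourced shell library.
lib=$(mktemp)
cat > "$lib" <<'EOF'
if [ -n "$PATCHFNS_INCLUDED" ]; then
    return 0                        # already sourced: skip everything below
fi
PATCHFNS_INCLUDED=1
SOURCE_COUNT=$((SOURCE_COUNT + 1))  # stands in for the function definitions
EOF

SOURCE_COUNT=0
. "$lib"
. "$lib"              # second source hits the guard and is a no-op
echo "$SOURCE_COUNT"  # prints 1: the body ran only once
rm -f "$lib"
```

This mirrors the `#ifndef`/`#define` idiom for C headers: callers may source the file unconditionally and the file itself guarantees idempotence.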
https://lists.gnu.org/archive/html/quilt-dev/2020-07/msg00000.html
DelegateRecycler
#include <delegaterecycler.h>
Detailed Description
This class may be used as the delegate of a ListView or a GridView when the intended delegate is heavy, with many objects inside.
It ensures that delegate instances are put back into a common pool after destruction, so when scrolling a big list, the delegates for deleted items are taken from the pool and reused, minimizing the need to instantiate new objects and delete old ones. This makes scrolling lists with heavy delegates smoother and helps with memory fragmentation as well.
NOTE: CardListView and CardGridView are already using this recycler, so do NOT use it as a delegate for those 2 views. Also, do NOT use this with a Repeater.
- Since
- 2.4
Definition at line 48 of file delegaterecycler.h.
Property Documentation
The Component the actual delegates will be built from.
Note: the component may not be a child of this object, therefore it can't be declared inside the DelegateRecycler declaration. The DelegateRecycler will not take ownership of the delegate Component, so it's up to the caller to delete it (usually with the normal child/parent relationship)
Definition at line 59 of file delegaterecycler.h.
The documentation for this class was generated from the following files:
Documentation copyright © 1996-2020 The KDE developers.
https://api.kde.org/frameworks/kirigami/html/classDelegateRecycler.html
#include <hallo.h>
* Joey Hess [Tue, Mar 25 2003, 12:59:14PM]:
> David Z Maze wrote:
> > I'm actually somewhat curious to hear how apt-src changes the world in
> > this case. My impression when it first came up was that it was
> > intended to people who run "stable with one unstable package" to track
> > the source for that package without having sid in their APT sources.
> > Would you use apt-src to get source for all of the kernel modules you
> > care about under $MODULE_LOC, and then use kernel-package as normal?
>
> You can do that, or you can tell apt-src where your kernel source is and
> if the module package supports apt-src natively (ie, linux-wlan-ng), you
> can use apt-src -b upgrade to rebuild debs of the modules anytime a new
> version is released. Or you can do some mixture of both.

Yes, _you_ can. I am talking about facts, about how things have been done till now. It was pretty consistent for make-kpkg and dozens of pieces of documentation around the net; people got used to it, and the method did what it was supposed to: pull the source, which is often rebuilt on user machines, together with the main distribution files (as said, most CD vendors do not sell source CDs in the cheap versions).

And now you tell linux-wlan-ng users to use apt-src (which is not better at all than "apt-get source" for this purpose). If you wish to experiment a bit and create the infrastructure for _this_ purpose, use "experimental". Do not force a half-cooked solution at any price.

> > Also, does this approach make it easier or harder to come up with binary
> > modules for all of the Debian stock kernels?
>
> Unfortunately it doesn't really help.

Well, no offending comment from me here. It is bad enough that you are playing prima donna on -devel.

Gruss/Regards,
Eduard.
--
Mother: "Fritzchen, what was that loud crash just now?"
Fritzchen: "A car wanted to turn into a side street."
Mother: "But that doesn't make such a racket!"
Fritzchen: "There was no side street!"
Attachment:
pgpjoJnypFIyx.pgp
Description: PGP signature
https://lists.debian.org/debian-devel/2003/03/msg01496.html
#include <coherence/util/LongArrayIterator.hpp>
Inherits Muterator.
List of all members.
getIndex
    Returns the index of the current value, which is the value returned by the most recent call to the next method.

getValue
    Returns the current value, which is the same value returned by the most recent call to the next method, or the most recent value passed to setValue if setValue was called after the next method.

setValue
    Stores a new value at the current value index, returning the value that was replaced. The index of the current value is obtainable by calling the getIndex method.

remove
    Removes from the underlying collection the last element returned by the iterator (optional operation). This method can be called only once per call to next. The behavior of an iterator is unspecified if the underlying collection is modified while the iteration is in progress in any way other than by calling this method.
http://docs.oracle.com/cd/E24290_01/coh.371/e22845/classcoherence_1_1util_1_1_long_array_iterator.html
RL-ARM User's Guide (MDK v4)
#include <rtl.h>
U8 tcp_get_socket (
U8 type, /* Type of TCP socket. */
U8 tos, /* Type Of Service. */
U16 tout, /* Idle timeout period before disconnecting. */
U16 (*listener)( /* Function to call when a TCP event occurs. */
U8 socket, /* Socket handle of the local machine. */
U8 event, /* TCP event such as connect, or close. */
U8* ptr, /* Pointer to IP address of remote machine, */
/* or to buffer containing received data. */
U16 par )); /* Port number of remote machine, or length */
/* of received data. */
The tcp_get_socket function allocates a free TCP socket.
The function initializes all the state variables of the TCP socket to
the default state.
The argument type specifies the type of the TCP socket.
The argument tos specifies the IP Type Of Service. The most
common value for tos is 0.
The argument tout specifies the idle timeout in seconds.
The TCP connection is supervised by the keep alive timer. When the
connection has been idle for more than tout seconds, TCPnet
disconnects the TCP connection or sends a keep-alive packet if the
TCP_TYPE_KEEP_ALIVE attribute is set.
The argument listener is the event listening
function of the TCP socket. TCPnet calls the listener
function whenever a TCP event occurs. The arguments to the
listener function are:
TCPnet uses the return value of the callback function
listener only when the event is TCP_EVT_CONREQ. It uses
the return value to decide whether to accept or reject an incoming
connection when the TCP socket is listening. If the
listener function returns 1, TCPnet accepts the
incoming connection. If the listener function returns
0, TCPnet rejects the incoming connection. You can thus define the
listener function to selectively reject incoming
connections from particular IP addresses.
The tcp_get_socket function is in the RL-TCPnet library.
The prototype is defined in rtl.h.
note
The tcp_get_socket function returns the handle of the
allocated TCP socket. If the function could not allocate a socket, it
returns 0.
tcp_connect, tcp_listen, tcp_release_socket, tcp_reset_window
#include <rtl.h>
U8 tcp_soc;
U16 tcp_callback (U8 soc, U8 event, U8 *ptr, U16 par) {
/* This function is called on TCP event */
..
switch (event) {
case TCP_EVT_CONREQ:
/* Remote host is trying to connect to our TCP socket. */
/* 'ptr' points to Remote IP, 'par' holds the remote port. */
..
/* Return 1 to accept connection, or 0 to reject connection */
return (1);
case TCP_EVT_ABORT:
/* Connection was aborted */
..
break;
case TCP_EVT_CONNECT:
/* Socket is connected to remote peer. */
..
break;
case TCP_EVT_CLOSE:
/* Connection has been closed */
..
break;
case TCP_EVT_ACK:
/* Our sent data has been acknowledged by remote peer */
..
break;
case TCP_EVT_DATA:
/* TCP data frame has been received, 'ptr' points to data */
/* Data length is 'par' bytes */
..
break;
}
return (0);
}
void main (void) {
init ();
/* Initialize the TcpNet */
init_TcpNet ();
tcp_soc = tcp_get_socket (TCP_TYPE_SERVER, 0, 30, tcp_callback);
if (tcp_soc != 0) {
/* Start listening on TCP port 80 */
tcp_listen (tcp_soc, 80);
}
  while (1) {
    /* Run main TcpNet 'thread' */
    main_TcpNet ();
  }
}
http://www.keil.com/support/man/docs/rlarm/rlarm_tcp_get_socket.htm
Give better diagnostic when arguments are omitted to a function call in do-notation
When using any Monad other than (->) e, it is almost always an error for a function call to yield another function, rather than a monadic value. For example:
import Control.Monad.Trans.State

main = flip runStateT 10 $ do
    print
    print "Hello"
The error you get from this code is (with GHC 7.4.2):
Couldn't match expected type `StateT b0 m0 a0' with actual type `a1 -> IO ()'
While this is fully correct, I think the compiler could do much better. The fact that the naked call to "print" doesn't return an IO value, but rather a function type, could be immediately detected as an error, allowing GHC to say something like:
Detected function type (a -> IO ()) returned when IO () was expected, perhaps missing an argument in call to "print"?
My example involves StateT and IO. Since I can't think of a case (outside the function monad) where statements within a do-notation block should yield a function type rather than a monadic value appropriate to that Monad, perhaps we could do better here in guiding the user to the real source of the problem.
https://gitlab.haskell.org/ghc/ghc/-/issues/7851
Versioning in the Java platform
By abuckley on Jul 31, 2009
The best-versioned artifact in the Java world today is the ClassFile structure. Two numbers that evolve with the Java platform (as documented in the draft Java VM Specification, Third Edition) are found in every .class file, governing its content. But what determines the version of a particular .class file, and how is the version really used? The answer turns out to be tricky because there are many interesting versionable artifacts in the Java platform.
The source language is the most obvious. A compiler doesn't have to accept multiple versions of a source language, though javac does, via the -source flag. (-source works on a global basis; it is also conceivable to work on a local basis, accepting different versions of the source language for different compilation units.) Less obvious versioned artifacts are hidden in plain sight: character sets and compilation strategies. And .class files themselves sometimes have their versions used in surprising ways. Let's see how javac handles all these versions, and make some claims about how an "ideal" compiler might work.
In the remainder, X and Y are versions. "source language X" means "version X of a source language". "Java X" means "version X of Java SE". "javac X" means "the javac that ships in version X of the JDK".
Character set
Happily, the Java platform has used the Unicode character set from day one. Unhappily, when javac for source language X is configured to accept an earlier source language Y, it uses the Unicode version specified for source language X rather than Y. For example, javac 1.4 -source 1.3 uses Unicode 3.0, since that was the Unicode specified for Java 1.4. It should use Unicode 2.1 as specified for Java 1.3.
Claim: A compiler configured to accept source language X should use the Unicode version specified for source language X.
It is difficult for javac to use multiple Unicode versions since the standard library (notably java.lang.Character) effectively controls the version of Unicode available, and only one version of the standard library is usually available. We will return to the issue of multiple standard libraries later.
Sidebar: You may be surprised to discover that some other languages don't use Unicode by default. A factoid from 2008's JVM Language Summit was the existence of a performance bottleneck in converting 8-bit ASCII strings (used by dynamic languages' libraries) to and from UTF-8 strings (used by canonical JVM libraries). Who knows what the 2009 JVM Language Summit will reveal?
Compilation strategy
A compilation strategy is the translation of source language constructs to idiomatic bytecode, flags, and attributes in a ClassFile. As the Java platform evolves by changing the source language and ClassFile features, a compilation strategy can evolve too. For example, javac 1.4 may compile an inner class one way when accepting the Java 1.3 source language and another way when accepting the Java 1.4 source language.
Claim: A compiler may use a different compilation strategy for each source language.
The javac flag '-target' selects the compilation strategy associated with a particular source language. This mainly has the effect of setting the version of the emitted ClassFile: 46.0 for Java 1.2, 47.0 for Java 1.3, 48.0 for Java 1.4, 49.0 for Java 1.5, 50.0 for Java 1.6. For example, javac 1.4 could compile an inner class the same way when configured with a Java 1.3 target versus a Java 1.4 target *, but emit 47.0 and 48.0 ClassFiles respectively:
javac 1.4 -source 1.3 -target 1.3 -> 47.0
javac 1.4 -source 1.3 -target 1.4 -> 48.0
* It doesn't, as per Neal's comment, but suppose for sake of argument it does.
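As an aside (not from the original post), the ClassFile version being discussed is just the third and fourth 16-bit fields of any .class file, following the 0xCAFEBABE magic number, so it is easy to inspect directly. A minimal sketch, with a fabricated header standing in for a real file:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.IOException;

public class ClassFileVersion {
    // Reads the 8-byte ClassFile header: 0xCAFEBABE magic,
    // then minor_version, then major_version.
    public static int readMajor(DataInput in) throws IOException {
        if (in.readInt() != 0xCAFEBABE)
            throw new IOException("not a class file");
        in.readUnsignedShort();         // minor version (e.g. 0)
        return in.readUnsignedShort();  // major: 48 = -target 1.4, 49 = 1.5, 50 = 1.6
    }

    public static void main(String[] args) throws IOException {
        // Header of a hypothetical 48.0 (Java 1.4) class file.
        byte[] header = {(byte) 0xCA, (byte) 0xFE, (byte) 0xBA, (byte) 0xBE, 0, 0, 0, 48};
        System.out.println(readMajor(
                new DataInputStream(new ByteArrayInputStream(header))));  // prints 48
    }
}
```

Pointing the same reader at real compiler output is a quick way to confirm what a given -target actually emitted.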
However, ClassFile version should be orthogonal to compilation strategy. For example, javac 1.4 could conceivably compile an inner class to a 48.0 ClassFile in two ways, one when configured to accept the Java 1.3 source language and another when configured to accept the Java 1.4 source language:
javac 1.4 -source 1.3 -target 1.4 -> 48.0
javac 1.4 -source 1.4 -target 1.4 -> 48.0
You would have to inspect the ClassFiles carefully to see the difference, since their versions wouldn't - don't - reveal the compilation strategy. Of course, the ClassFile version "dominates" a compilation strategy, since a strategy can only use artifacts legal in a given ClassFile version, even though the concepts are different. Joe has written more about the history of -source and -target.
The combination missing above is:
javac 1.4 -source 1.4 -target 1.3 -> 47.0
or, given that the target could refer strictly to compilation strategy and not ClassFile version:
javac 1.4 -source 1.4 -target 1.3 -> 48.0
javac does not accept a target (or compilation strategy) lower than the source language it is configured to accept. Each new version of the source language is generally accompanied by a new ClassFile version that allows the ClassFile to give meaning to new bytecode instructions, flags, and attributes. Encoding new source language constructs in older ClassFile versions is likely to be difficult. How would javac encode annotations from the Java 1.5 source language without the Runtime[In]Visible[Parameter]Annotations attributes that appeared in the 49.0 ClassFile?
Claim: A compiler configured to accept source language X should not support a compilation strategy corresponding to a source language lower than X.
This policy can be rather restrictive. There were no changes ** between the Java 1.5 and 1.6 source languages, and only minor changes in the 49.0 and 50.0 ClassFiles that accompany those languages (really, platforms). Nevertheless, javac 1.6 does not accept -source 1.6 -target 1.5.
** Except for a minor change in the definition of @Override to do what we meant, not what we said. Unfortunately, the definition changed in javac 1.6 but not in the JDK6 javadoc. Happily, javac 1.7 and the JDK7 javadoc are consistent.
The famous example of the restriction is that javac 1.5 does not accept -source 1.5 -target 1.4, so source code using generics cannot be compiled for pre-Java 1.5 VMs even though the generics are erased. This is partly because the compilation strategy for class literals changed between Java 1.4 and 1.5, to use the upgraded ldc instruction in the 49.0 ClassFile rather than call Class.forName. If javac's compilation strategy was more configurable, it would be conceivable to produce a 48.0 ClassFile from generic source code. There is however another reason why -source 1.5 -target 1.4 is disallowed ... read on.
Environment
Prior to JDK7, if javac for source language X was configured to accept an earlier source language Y, it used the ClassFile definition associated with source language X. For example, if javac 1.5 -source 1.2 reads a 46.0 ClassFile, it treats the ClassFile as a 49.0 ClassFile. This is unfortunate because user-defined attributes in the 46.0 ClassFile could share the names of attributes defined in the 49.0 ClassFile spec, and interpreting them as authentic 49.0 attributes is unlikely to succeed.
Even if javac 1.5 -source 1.2 reads a 49.0 ClassFile, there is little point in reading 49.0-defined attributes since they had no semantics in the Java 1.2 platform. This holds for non-attribute artifacts such as bridge methods too; if physically present in a 49.0 ClassFile, they should be logically invisible from a Java 1.2 point of view. In summary:
javac 1.5 -source 1.2 reading a Java 1.5 ClassFile -> should interpret as Java 1.2
javac 1.5 -source 1.5 reading a Java 1.2 ClassFile -> should interpret as Java 1.2
Claim: A compiler configured to accept source language X should interpret a ClassFile read during compilation as if the ClassFile's version is the smaller of a) the ClassFile version associated with source language X, and b) the actual ClassFile version.
In JDK7, javac behaves as per the claim. First, it interprets a ClassFile according to the ClassFile's actual version, regardless of the configured source language. For example, a 46.0 ClassFile is interpreted as it would have been in Java 1.2, ignoring attributes corresponding to a newer source language. Second, when the configured source language is older than a ClassFile, javac ignores ClassFile features newer than the source language it is configured to accept.
An important part of a compiler's environment is the standard library it is configured to use. The standard library used by javac can be configured by setting the bootclasspath. In future, a module system shipped with the JDK will allow a dependency on a particular standard library to be expressed directly.
Note that running against standard library X is deeply different from compiling against standard library X. Consider the Unicode issue raised earlier: javac implicitly uses the
java.lang.Character from the standard library against which it runs, but should use the class in the standard library for the configured source language. For example, javac 1.6 -source 1.2 should use the Unicode in effect for Java 1.2 not Java 1.6. In this case, suitable versioning can only be achieved at the application level, by javac either reflecting over the appropriate
java.lang.Character class or using overloaded
java.lang.Character.isJavaIdentifierStart/Part methods that each take a version parameter.
Things also get tricky when compiling an older source language to a newer target ClassFile version (and hence a later JVM with a newer standard library). For example, should javac 1.6 -source 1.2 -target 1.5 compile against the Java 1.2 or 1.5 standard library? Both answers have merit, which suggests further concepts are needed to disambiguate.
Using the right libraries matters at runtime too. The introduction of a source language feature in Java 1.5 - enums - added constraints on the standard library against which ClassFiles produced from the Java 1.5 source language can run. The
java.lang.Enum class must be present, and you can read the code of ObjectInputStream and ObjectOutputStream to see for yourself the mechanism for serializing enum constants. The simple way to guarantee that a suitable standard library is available for enum-using code at runtime is to ensure that only 49.0 ClassFiles are produced from the Java 1.5 source language. Such ClassFiles will not run on a Java 1.4 VM since it only accepts <=48.0 ClassFiles.
In a nutshell, the compilation strategy for enums is erasure++: an enum type compiles to an ordinary ClassFile with ordinary static members for the enum constants and ordinary static methods to list and compare constants. With a few changes in that strategy (to not extend
java.lang.Enum) and a serious amount of magic in the Java 1.5 VM (to track reflection and serialization of objects of enum type), the ClassFiles emitted by a compiler for the Java 1.5 source language could run safely enough on a Java 1.4 VM. But the drawbacks to such hackery are enormous, so erasure++ it was.
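To make the erasure++ idea concrete, here is a hedged sketch of the hypothetical 1.4-friendly variant floated above: an enum-like class that does not extend java.lang.Enum. (Real javac 1.5 output does extend java.lang.Enum and carries additional synthetic members; this is only an illustration of the "ordinary class with ordinary static members" shape.)

```java
// Hedged sketch: roughly what `enum Color { RED, GREEN }` could compile to
// under the hypothetical 1.4-targeting strategy that avoids java.lang.Enum.
public final class Color {
    public static final Color RED = new Color("RED", 0);
    public static final Color GREEN = new Color("GREEN", 1);
    private static final Color[] VALUES = { RED, GREEN };

    private final String name;
    private final int ordinal;

    private Color(String name, int ordinal) {
        this.name = name;
        this.ordinal = ordinal;
    }

    public String name() { return name; }
    public int ordinal() { return ordinal; }

    // defensive copy, as the real values() does
    public static Color[] values() { return VALUES.clone(); }

    public static Color valueOf(String name) {
        for (int i = 0; i < VALUES.length; i++)
            if (VALUES[i].name.equals(name))
                return VALUES[i];
        throw new IllegalArgumentException(name);
    }
}
```

What this sketch cannot supply is exactly the "serious amount of magic" the entry mentions: singleton-preserving serialization and reflective support, which in the real platform live in the VM and in java.lang.Enum.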
Thus, the reason why one new language feature implemented by erasure - generics - cannot run on earlier JVMs is because another new language feature - enums - is implemented by erasure. Such is life at the foundation of the Java platform.
Thanks to "Mr javac" Jon Gibbons for feedback on this entry.
You claimed that "For example, javac 1.4 compiles an inner class the same way when configured with targets 1.3 and 1.4". That isn't true, as illustrated by the following program.
public class ThreeFour {
    static class C1 {
        C1() { pow(); }
        void pow() {}
    }
    public static void main(final String[] args) {
        class C2 extends C1 {
            void pow() {
                System.out.println(args == null);
            }
        }
        new C2();
    }
}
Posted by Neal Gafter on July 31, 2009 at 08:11 AM PDT #
It is partially true and partially false.
If my memory is correct, the compiler changed the way it compiles back references between 1.4.1 and 1.4.2.
And the compiler also changed the way the most specific method is found, before the type of the receiver was included in the signature of the method.
The same problem exists nowadays with generics and wildcards.
The good question is: what does "javac 1.4" mean?
Posted by Rémi Forax on July 31, 2009 at 09:02 AM PDT #
Thanks Neal, I knew I should have stuck with 1.1 and 1.2-era examples! I will make the text more hypothetical.
Posted by Alex Buckley on July 31, 2009 at 09:43 AM PDT #
Source: https://blogs.oracle.com/abuckley/entry/versioning_in_the_java_platform
Prime String
January 10, 2017
My first version of the program prebuilt the prime string to some large length and simply indexed into that string. That worked, but I was dissatisfied because it required a large amount of storage, and inevitably fails when a too-large n is requested. But it did make a nice way to check results of my finished algorithm, so it wasn’t a total loss.
My second version of the program tried to calculate the lengths of the primes and work out where exactly the index occurred within the sequence of prime numbers, but that didn’t work; more precisely, I never convinced myself that I had found the last bug in the program. The problem is that there are lots of edge cases where the requested index doesn’t point to the beginning of a prime, and there are other edge cases where the next five characters spill across a boundary from one prime length to another, and my code kept getting more and more complicated to handle those edge cases, so eventually I threw away that version.
The third version of the program keeps a sliding window on the string, generating primes as necessary. If the window is too small, it is extended. If the index of the first character in the window is less than the target index, the window slides one character to the right. When the window finally reaches the target index, there are guaranteed to be enough characters to form a result, so the function returns. There are no special edge cases at the beginning of the prime string, or when the number of digits in the current prime is greater than its predecessor, or when the target index points to the middle of a prime, or anything else:
(define (prime-substring n)
  (let ((ps (primegen)))
    (let loop ((i 0) (str ""))
      (cond ((< (string-length str) 6)
             (loop i (string-append str (number->string (ps)))))
            ((< i n)
             (loop (+ i 1) (substring str 1 (string-length str))))
            (else (substring str 0 5))))))
Here are some examples:
> (prime-substring 50)
"03107"
> (prime-substring 1000)
"98719"
> (prime-substring 10000)
"02192"
We used the prime generator
primegen from a previous exercise. You can run the program at.
In Python; primegen is a lazy prime generator.
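In the same spirit, here is a self-contained Python sketch of the sliding-window algorithm. The post's primegen lives behind a link, so a simple trial-division generator stands in for it here; any lazy prime generator would do.

```python
from itertools import count

def primegen():
    """Lazy prime generator by incremental trial division — a simple
    stand-in for the sieve-based primegen the post links to."""
    primes = []
    for n in count(2):
        if all(n % p for p in primes if p * p <= n):
            primes.append(n)
            yield n

def prime_substring(n):
    """Slide a small window along the concatenated-primes string.

    Keep the window at least 6 characters long, slide it right until its
    first character sits at index n, then return the first 5 characters.
    """
    ps = primegen()
    i, s = 0, ""
    while True:
        if len(s) < 6:
            s += str(next(ps))       # extend the window with the next prime
        elif i < n:
            i, s = i + 1, s[1:]      # slide one character to the right
        else:
            return s[:5]
```

For example, prime_substring(50) returns "03107", matching the Scheme version above.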
Decoupling digit generation from slice selection, in Python.
A Haskell version.
This is Julia 0.5, with its new generator expressions. I didn’t find an unbounded prime generator (one could be written) so I just fake it with a sufficiently large pool of primes, which is readily available. Julia’s iteration protocol is similar to but different from that of Python, ISWIM. In particular, the value of a generator expression does not itself have state.
(Re my Julia above, a nicer way to count the total number of digits in primes below 2017.)
Here’s some JS using the new ES6 generators. This one only starts building up the string once the first required prime is reached; before that it just keeps a character count for the primes seen. Uses a simple incremental sieve of Eratosthenes – not the most efficient, but it does the job.
package prime.string;

import static java.lang.System.out;

import java.util.Scanner;

public class PrimeString {
    public static void main(String[] args) {
        Scanner keyboard = new Scanner(System.in);
        out.print("Enter the index: ");
        int index = keyboard.nextInt();
        out.println();
        StringBuilder b = new StringBuilder();
        int a = 2;
        while (b.length() < index + 5) {
            if (isPrime(a)) {
                b.append(a);
            }
            a++;
        }
        out.println(b.substring(index, index + 5));
    }

    // Trial division. The original test only checked divisibility by
    // 2, 3, 5 and 7, which wrongly admits composites such as 121.
    private static boolean isPrime(int n) {
        for (int d = 2; d * d <= n; d++) {
            if (n % d == 0) {
                return false;
            }
        }
        return n >= 2;
    }
}
Source: https://programmingpraxis.com/2017/01/10/prime-string/2/
Context managers were introduced in PEP 343.
They’re an incredibly useful construct for patterns of code that involve any sort of “cleanup” at the end of an execution of some code block.
In this article, I’m going to show the specification of context managers as laid out in PEP 343 in terms of actual code.
According to the spec, the following
with syntax:
with EXPR as VAR:
    BLOCK
Translates into an expanded try/finally form that calls __enter__ and __exit__ on the context manager. Here's some code to demonstrate this equivalence.
First, this is a custom context manager I wrote for handling files opened for reading:
class MyFileContextManager(object):
    def __init__(self, name):
        self.name = name

    def __enter__(self):
        self.file = open(self.name, 'r')
        return self.file

    def __exit__(self, *exc_info):
        self.file.close()
Using the
with EXPR as VAR syntax:
with MyFileContextManager("http-request.py") as f:
    print(f.read())
And the “unpacked” form:
import sys

mgr = MyFileContextManager('http-request.py')
exit = type(mgr).__exit__
value = type(mgr).__enter__(mgr)
exc = True
try:
    try:
        f = value
        print(f.read())
    except:
        exc = False
        if not exit(mgr, *sys.exc_info()):
            raise
finally:
    if exc:
        exit(mgr, None, None, None)
Looking at actual code, we can see how much work is done behind the scenes when using
with. In a nutshell, it:
- Invokes __enter__ on the context manager object
- Executes the block of code nested inside the with statement
- Invokes __exit__ on the context manager object
If you look more closely, this implementation has many implications. For example, if you implement a custom context manager like I did and return a truthy value for
__exit__, the originating exception from the code block can get swallowed.
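A minimal sketch of that swallowing behavior: because __exit__ returns a truthy value, the translation above never re-raises, and execution simply continues after the with block.

```python
class Swallow:
    """Context manager whose __exit__ reports every exception as handled."""

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # A truthy return value tells the interpreter the exception was
        # handled, so nothing propagates past the with block.
        return True

with Swallow():
    raise ValueError("never escapes")

print("execution continues")
```

This is usually a bug rather than a feature, which is why most __exit__ implementations return None (falsy) and let exceptions propagate.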
It’s not a ton of code, but the
with syntax is another one of those language constructs that make Python a joy to work with.
Source: https://www.linisnil.com/articles/context-manager-specification-with-python-code/
[Solved] How to limit QtQuick ComboBox display item count
If I have hundreds of items in a ComboBox, the drop-down window takes the whole screen height and there is no scrollbar to navigate. I want to limit the number of items that appear in the ComboBox drop-down window (as most native ComboBoxes do). How can I do that? Here is some sample code to reproduce the issue. I'm using Qt 5.1.0.
@import QtQuick 2.1
import QtQuick.Controls 1.0
import QtQuick.Window 2.1
ApplicationWindow {
title: "My Application"
minimumHeight: 600
minimumWidth: 800
visible: true
ComboBox {
model:100
anchors.centerIn: parent
}
}@
Sorry for troubling you guys. I just found out it's a known issue and will be addressed in the next release (5.2).
Source: https://forum.qt.io/topic/32776/solved-how-to-limit-qtquick-combobox-display-item-count
Tax
Have a Tax Question? Ask a Tax Expert
Hi,
...
New York is one of the many states that use Federal AGI as its starting point.
Because the form 982 filed with your federal return allows you to exclude (not include the Cancellation of Debt income on line 21 at all) the income ... your Federal AGI would already have that reduced (debt income excluded from) income.
You should use the Federal return (and send in a copy along with form 982) to show that the COD income was excluded
Make Sense?
If you used, say, turbo tax (or one of the other packages) to do the federal return ... that income will simply not flow TO the state return because it would not be in your AGI, (because it would not have ever been entered ON the 1040) ... the 982 is the answer to why you do not INCLUDE the income on the 1040
Hope this helps
Let me know if you have questions
Lane
If this HAS helped, and you don't have additional questions on this, I'd appreciate a positive rating (by clicking the stars or smiley faces on your screen) ... that's the only way I'll be credited for the work here....
Did you see my answer?
Do you need more here?
Let me know
.
Source: http://www.justanswer.com/tax/9c0m1-filing-nys-personal-income-tax-form.html
Hello,
I asked some flashers to take a look at my problem, but they did not know what happened or why it happened.
If you apply a color to a movieclip within a timeline, you also destroy the animation so it seems.
I enclosed the fla file for test purposes. But what I am doing is:
I have a main clip called animation and in this clip the playhead goes from frame 1 to 30
Inside this animationclip i have a square called myclip that also does some animation.
But when you apply a color to this square and you run the animationclip, you will notice that
the square does not animate anymore. But the playhead still runs, so that is weird isn't it?
Does anyone know the answer to this bug ?
The code i used is:
function applyColor(mc:MovieClip, col:Number) {
var tempColor:Color = new Color(mc);
tempColor.setRGB(col);
}
mybutton.onRelease = function() {
applyColor(animation.myclip, 0xFFCC00);
}
mybutton2.onRelease = function() {
animation.gotoAndPlay(2);
}
Regards,
Chris
Don't know if you have already solved this, but the issue is you have a timeline animation and when you change a property like the color of the movieclip you are actually changing the movieclip in the first frame of the animation which causes a disassociation between the first frame and the rest of the animation and destroys the tween.
2 solutions.
1. just double click into myClip and highlight the gray block and make it into a movieclip called colorBlock with an instance name of cb. Then use the following code.
import flash.geom.ColorTransform;
import flash.geom.Transform;
function applyColor(mc:MovieClip, col:Number) {
var trans:Transform = new Transform(mc);
var ct:ColorTransform = new ColorTransform();
ct.rgb = col;
trans.colorTransform = ct;
}
mybutton.onRelease = function() {
applyColor(animation.myclip.cb, 0xFFCC00);
}
mybutton2.onRelease = function() {
animation.gotoAndPlay(2);
}
2. Use programmatic tweens. I find them to be more robust and almost impossible to break. I would advise using Moses Gunesch's tweening engine for AS2 called the FuseKit. You can find it at
Thanks for your information! This workaround works great!
But I agree with the fact that tween engines are more powerful. I never used FuseKit, but it looks very clean and effective.
Regards,
Chris.
Source: https://forums.adobe.com/thread/465372
public class ActorReceiver<T> extends Receiver<T> implements Logging
Actors can also be used to receive data from almost any stream source. A nice set of abstractions for actors as receivers is already provided for a few general cases. It is thus exposed as an API where users may supply their own Actor to run as a receiver for a Spark Streaming input source.
This starts a supervisor actor which starts workers and also provides fault tolerance.
Here's a way to start more supervisors/workers as its children.
Constructor: ActorReceiver(akka.actor.Props props, String name, StorageLevel storageLevel, akka.actor.SupervisorStrategy receiverSupervisorStrategy, scala.reflect.ClassTag<T> evidence$1)
public void onStop()
Called by the system when the receiver is stopped. All resources set up in onStart() must be cleaned up in this method.
Overrides:
onStop in class Receiver<T>
Source: https://spark.apache.org/docs/1.2.1/api/java/org/apache/spark/streaming/receiver/ActorReceiver.html
GETC(3V) GETC(3V)
NAME
getc, getchar, fgetc, getw - get character or integer from stream
SYNOPSIS
#include <stdio.h>
int getc(stream)
FILE *stream;
int getchar()
int fgetc(stream)
FILE *stream;
int getw(stream)
FILE *stream;
DESCRIPTION
getc() returns the next character (that is, byte) from the named input
stream, as an integer. It also moves the file pointer, if defined,
ahead one character in stream. getchar() is defined as getc(stdin).
getc() and getchar() are macros.
fgetc() behaves like getc(), but is a function rather than a macro.
fgetc() runs more slowly than getc(), but it takes less space per invo-
cation and its name can be passed as an argument to a function.
getw() returns the next C int (word) from the named input stream.
getw() increments the associated file pointer, if defined, to point to
the next word. The size of a word is the size of an integer and varies
from machine to machine. getw() assumes no special alignment in the
file.
RETURN VALUES
On success, getc(), getchar() and fgetc() return the next character
from the named input stream as an integer. On failure, or on EOF, they
return EOF. The EOF condition is remembered, even on a terminal, and
all subsequent operations which attempt to read from the stream will
return EOF until the condition is cleared with clearerr() (see fer-
ror(3V)).
getw() returns the next C int from the named input stream on success.
On failure, or on EOF, it returns EOF, but since EOF is a valid inte-
ger, use ferror(3V) to detect getw() errors.
SYSTEM V RETURN VALUES
On failure, or on EOF, these functions return EOF. The EOF condition
is remembered, even on a terminal, however, operations which attempt to
read from the stream will ignore the current state of the EOF indica-
tion and attempt to read from the file descriptor associated with the
stream.
SEE ALSO
ferror(3V), fopen(3V), fread(3S), gets(3S), putc(3S), scanf(3V),
ungetc(3S)
WARNINGS
If the integer value returned by getc(), getchar(), or fgetc() is
stored into a character variable and then compared against the integer
constant EOF, the comparison may never succeed, because sign-extension
of a character on widening to integer is machine-dependent.
BUGS
Because it is implemented as a macro, getc() treats a stream argument
with side effects incorrectly. In particular, getc(*f++) does not work
sensibly. fgetc() should be used instead.
Because of possible differences in word length and byte ordering, files
written using putw() are machine-dependent, and may not be readable
using getw() on a different processor.
21 January 1990 GETC(3V)
Source: http://modman.unixdev.net/?sektion=3&page=getc&manpath=SunOS-4.1.3
Non-Programmer's Tutorial for Python 3/FAQ
< Non-Programmer's Tutorial for Python 3(Redirected from Non-Programmer's Tutorial for Python 3.0/FAQ)Jump to navigation Jump to search
- How do I make a GUI in Python?
- You can use one of these libraries: Tkinter, PyQt, PyGObject. For really simple graphics, you can use the turtle graphics mode
import turtle
- How do I make a game in Python?
- The best method is probably to use PyGame at
- How do I make an executable from a Python program?
- Short answer: Python is an interpreted language, so that is impossible. The long answer is that something similar to an executable can be created by taking the Python interpreter and the file and joining them together and distributing that. For more on that problem see
- (IFAQ) Why do you use first person in this tutorial?
- Once upon a time in a different millennium (1999, to be exact), an earlier version was written entirely by Josh Cogliati, and it was up on his webpage and it was good. Then the server rupert, like all good things that have a beginning, came to an end, and Josh moved it to Wikibooks, but the first-person writing stuck. If someone really wants to change it, I will not revert it, but I don't see much point. (The webpage has since moved to and )
- My question is not answered.
- Ask on the discussion page or add it to this FAQ, or email one of the Authors.
For other FAQs, you may want to see the Python 2.6 version of this page Non-Programmer's Tutorial for Python 2.6/FAQ, or the Python FAQ.
Source: https://en.wikibooks.org/wiki/Non-Programmer%27s_Tutorial_for_Python_3.0/FAQ