Reading 1-Wire iButtons using a UART

This post takes a look at a simple 1-Wire protocol reader/writer that uses a standard UART to detect and communicate with 1-Wire devices. It is written in C for Linux, running on an Intel Galileo, and could easily be ported to most other platforms. The iButton I am using is the DS1990A, a common electronic key: a 1-Wire read-only device containing a unique serial number. These are used for access control, vending machines, point-of-sale terminals and so on. They use what's called the "1-Wire" protocol, meaning that a single wire both powers the device and carries communication with it. The 1-Wire protocol, including its timings, is described here. There are many different types of 1-Wire devices, including simple ROMs with unique serial numbers and data loggers. In this article I present a method of communicating with these devices using a standard UART, as found on just about any modern microcontroller and/or dev board: Raspberry Pi, Intel Galileo, any PIC from Microchip, mbed, ATmega, Arduino and just about any other microcontroller you can think of. Reading and writing an iButton doesn't use standard UART timings, so normally you would use either a dedicated I/O pin, where you have direct control over the port timings, or a dedicated iButton interface chip. If you can't control the timing on an I/O pin down to an accuracy of about 6 µs (microseconds), you need another approach.

Hardware

To write to and read from a 1-Wire device, I'm using the UART pins at CMOS levels, not driven to RS232 levels by a driver chip. So +5 V is a logic '1' and 0 V is a logic '0'. I tie the TX and RX lines directly together; since one is an output and the other an input, this is perfectly safe to do. In my case I'm using the Arduino Uno compatible Intel Galileo board, with pins 0 (RX) and 1 (TX) joined directly together and then taken to the iButton device along with a GND connection.
Software

What I've done with this code is use the UART to generate the pulses on the wire. I'm not using the UART to actually read and write normal bytes, but to generate pulses on the wire, and read back pulses, with deterministic timings. By tying the TX and RX lines of the UART together we can read back our outgoing pulses, and any pulses coming from the iButton.

Detecting the iButton

I found that transmitting 0xF0 at 9,600 bps generates an acceptable reset pulse, which the iButton replies to with its own presence pulse. We write 0xF0 to the wire, which is preceded by a start bit generated by the UART. Since UARTs send data least significant bit first, on the wire the 9 bits are transmitted as 000001111 - this gives an almost perfect reset pulse to the device. The device reacts by pulling our line low after about 6 microseconds. This corrupts the byte which we transmitted, and when we read it back in we get 000000111 - one bit was pulled low by the iButton. So we transmitted 0xF0, but we read back 0xE0. Thus we know that there is something on the line. The detection code works exactly this way (fd is the file descriptor obtained from an 'open' call on the UART); the full code is at the end of the post.

Now that we have detected the iButton, to read its contents we need to send the single-byte 'Read ROM' command, which is 0x33 (hex).

Transmitting bytes

Transmitting on the 1-Wire interface is also quite simple. We take the line low for 60 microseconds to write a '0', or for just 6 microseconds to write a '1'. So we take our command byte, split it into 8 bits, and transmit long or short pulses representing the 1s and 0s of the byte. The iButton does not respond to these writes, so we just flush our incoming buffer to keep it clear. 9,600 bps is too slow to generate the ~6 µs pulses we need, so we up the speed to 115,200 bps now.
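The detection routine itself did not survive in this copy of the post, so here is a minimal sketch of the logic just described. The helper name and shape are mine, not the author's: it only interprets the echoed byte, with the surrounding serial I/O (setBaud / write_port / read_port from the full listing) described in the comment.

```c
/* Hypothetical helper (not the author's original code): after
 * setBaud(fd, 9600), writing the single reset byte 0xF0 with
 * write_port() and reading one byte back with read_port(), the
 * echoed byte is interpreted like this. */
int OW_presence_from_echo(unsigned char echoed)
{
    if (echoed == 0xF0)   /* byte came back unchanged: nothing answered */
        return 0;
    if (echoed == 0x00)   /* line stuck low: shorted bus, not a presence pulse */
        return 0;
    return 1;             /* e.g. 0xE0: a device pulled some bits low */
}
```

A detect function would simply return this value on the byte it read back, e.g. `if (OW_presence_from_echo(in)) ...`.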
The following code transmits any number of complete bytes, one bit at a time, onto the 1-Wire interface:

void OW_WriteByte(int fd, unsigned char dataByte)
{
    int i, res;
    unsigned char out;
    unsigned char inBuf[64]; // for flushing our echoes

    setBaud(fd, 115200);
    for(i=0; i<8; i++) // 8 bits in a byte.
    {
        out = (dataByte & 0x01) ? 0xFF : 0x00; // short low pulse = '1', long low pulse = '0'
        dataByte >>= 1;
        res = write_port(fd, &out, 1);
        usleep(100);
        res = read_port(fd, inBuf, sizeof(inBuf)); // flush our echo
    }
}

Receiving Bytes

In order to read bytes back from the device, we again need to create short pulses on the wire. The device sees a short ~6 µs pulse as a sign to transmit one bit of data. If that bit is a 0 it pulls the line low for a short period; if it's a 1 it does nothing. We can detect the 0s because they corrupt the byte we send on the UART to create the start pulse. Since the timing of this corruption is deterministic, it is the same every time, so we can rely on it to read data back from the device. A function, OW_ReadBytes, reads any number of bytes from the device into a buffer, one bit at a time.

Putting it all together

So, now we can detect a device, write to it, and read back from it. The iButtons I am using in my tests always reply with exactly 8 bytes of data:

DeviceFamily - 1 byte (always 0x01)
DeviceID - 6 bytes of unique serial number.
CRC - 1 byte of computed CRC data.

We can determine that we have read the data correctly by examining the device family byte and calculating the checksum. The checksum (CRC) is not the usual rolling-addition checksum, but a more complex routine, which is fortunately quite simple to code up. The following calculates the checksum over the given buffer of bytes:

unsigned char OW_CRC(unsigned char *pBuf, int len)
{
    unsigned char loop, i, bit;
    unsigned char crc = 0x00;

    for(loop=0; loop<len; loop++)
    {
        crc = (crc ^ pBuf[loop]);
        for(i=8; i>0; i--)
        {
            bit = (crc & 0x01);
            crc >>= 1;
            if(bit)
            {
                crc = (crc ^ 0x8c);
            }
        }
    }
    return crc;
}

So, that's it - using a UART at standard speeds to detect, write to and read from a 1-Wire interface device such as an iButton.
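The byte-reading function itself was lost from this copy of the post, so here is a sketch of the decoding step the text describes (my naming, not the original code). Each read slot is created by writing 0xFF at 115,200 bps; the echoed byte is then decoded, and the bits are assembled least significant bit first, just as 1-Wire transmits them.

```c
/* Sketch (hypothetical helpers, not the author's code).
 * Decode one read slot: if our 0xFF came back intact the device left
 * the line alone ('1'); any corruption means it pulled the line low ('0'). */
int OW_bit_from_echo(unsigned char echoed)
{
    return echoed == 0xFF;
}

/* Assemble a byte, least significant bit first, from 8 slot echoes. */
unsigned char OW_byte_from_echoes(const unsigned char echoes[8])
{
    unsigned char b = 0;
    int i;

    for (i = 0; i < 8; i++)
        if (OW_bit_from_echo(echoes[i]))
            b |= (unsigned char)(1 << i);
    return b;
}
```

In a real OW_ReadBytes, each element of `echoes` would come from a `write_port()` of 0xFF followed by a `read_port()` of the echoed byte, exactly as in OW_WriteByte.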
Here is the complete code listing, including some stuff specific to the Intel Galileo for setting up the pins correctly. Porting it to other platforms should be pretty simple as it only uses standard POSIX calls.

Complete code listing.

//
//  OWUart.c
//  OWUart
//
//  Created by Kenny Milar on 1/June/2014.
//  Copyright (c) 2014 SpiderElectron. All rights reserved.
//

#include <stdio.h>
#include <stdlib.h>
#include <string.h>   /* String function definitions */
#include <unistd.h>   /* UNIX standard function definitions */
#include <fcntl.h>    /* File control definitions */
#include <errno.h>    /* Error number definitions */
#include <termios.h>  /* POSIX terminal control definitions */

#define DEVICE_FAMILY_IBUTTON 0x01

/* errExit - helper function, log message and quit. */
void errExit(char *p)
{
    if(p)
    {
        printf("Exiting due to: %s\n", p);
    }
    else
    {
        printf("Error. Exit. Sorry.\n");
    }
    exit(-1);
}

/* Set specified GPIO pin for use. */
void setGPIOPin(char* pin, char* dir, char* drive, char* val)
{
    char buf[256];
    int fd;

    // Open the GPIO export file
    fd = open("/sys/class/gpio/export", O_WRONLY);
    if(fd == -1) errExit("GPIO Export");
    write(fd, pin, strlen(pin));  // Export the required GPIO pin
    close(fd);

    // Open the exported pin's direction file
    sprintf(buf, "/sys/class/gpio/gpio%s/direction", pin);
    fd = open(buf, O_WRONLY);
    if(fd == -1) errExit("Gpio Direction");
    write(fd, dir, strlen(dir));  // set GPIOxx direction
    close(fd);

    // Open the drive file
    sprintf(buf, "/sys/class/gpio/gpio%s/drive", pin);
    fd = open(buf, O_WRONLY);
    if(fd == -1) errExit("Gpio Drive");
    write(fd, drive, strlen(drive));  // set GPIO drive
    close(fd);

    // Open the initial value file.
    sprintf(buf, "/sys/class/gpio/gpio%s/value", pin);
    fd = open(buf, O_WRONLY);
    if(fd == -1) errExit("Gpio Value");
    write(fd, val, strlen(val));  // set GPIO initial value
    close(fd);
}

/* setMux - set up the pin multiplexing so ttyS0 is routed to the header pins. */
void setMux(void)
{
    // See the Intel Galileo Port Mapping document for details of GPIO numbers.
    // Switch the UART pins through to the header pins, and enable the level shifter.
    setGPIOPin("40", "out", "strong", "0");  // ttyS0 connects to RX (Arduino 0)
    setGPIOPin("41", "out", "strong", "0");  // ttyS0 to TX (Arduino 1)
    setGPIOPin("4",  "out", "strong", "1");  // Level shifter enabled (enable the final driver).
}

/* Open a file descriptor to the specified path. */
int open_port(char *path)
{
    int fd; /* File descriptor for the port */

    fd = open(path, O_RDWR | O_NOCTTY | O_NDELAY);
    if (fd == -1)
    {
        perror("open_port: Unable to open specified port. ");
    }
    else
        fcntl(fd, F_SETFL, 0);  // clear O_NDELAY: blocking access.
    return (fd);
}

int setBaud(int fd, unsigned int baudrate)
{
    struct termios options;

    // Get the current port options.
    tcgetattr(fd, &options);

    // Set the baud rate to one of the rates we use.
    if(baudrate == 9600)
    {
        cfsetispeed(&options, B9600);
        cfsetospeed(&options, B9600);
    }
    else if (baudrate == 57600)
    {
        cfsetispeed(&options, B57600);
        cfsetospeed(&options, B57600);
    }
    else if (baudrate == 115200)
    {
        cfsetispeed(&options, B115200);
        cfsetospeed(&options, B115200);
    }
    else
    {
        printf("I didn't bother setting the port speed.\n");
    }

    // Enable the receiver and set local mode...
    options.c_cflag |= (CLOCAL | CREAD);

    // 8N1
    options.c_cflag &= ~PARENB;
    options.c_cflag &= ~CSTOPB;
    options.c_cflag &= ~CSIZE;
    options.c_cflag |= CS8;

    // No flow control
    options.c_cflag &= ~CRTSCTS;
    options.c_iflag &= ~(IXON | IXOFF | IXANY);

    // Raw input
    options.c_lflag &= ~(ICANON | ECHO | ECHOE | ISIG);

    // Raw output
    options.c_oflag &= ~OPOST;

    // Set the new options for the port
    tcsetattr(fd, TCSADRAIN, &options);

    // No waiting for characters
    fcntl(fd, F_SETFL, FNDELAY);
    return 0;
}

// Write a buffer of data to the port.
int write_port(int fd, unsigned char *pData, int len)
{
    int bytesWritten;

    bytesWritten = write(fd, pData, len);
    // printf("Write of %d bytes returned %d\n", len, bytesWritten);
    return bytesWritten;
}

// Read from the port into a buffer of data.
int read_port(int fd, unsigned char *buf, int len)
{
    int bytesRead;
    int totalBytes = 0;

    while (1)
    {
        bytesRead = read(fd, buf, len);
        if(bytesRead <= 0) break;
        totalBytes += bytesRead;
    }
    return totalBytes;
}

// 1-Wire protocol byte write.
// You must understand the 1-Wire protocol to see what's happening here.
// We are using the UART to create pulses, NOT to send actual bytes.
// A short pulse is a '1' bit and a longer pulse is a '0' bit,
// so we use the start bit, followed by 0 or more '0' bits, to
// send a pulse of the required length.
//
// Effectively we are running a software UART over a faster hardware UART.
//
void OW_WriteByte(int fd, unsigned char dataByte)
{
    int i, res;
    unsigned char out;
    unsigned char inBuf[64];  // for flushing our echoes

    setBaud(fd, 115200);
    for(i=0; i<8; i++)
    {
        out = (dataByte & 0x01) ? 0xFF : 0x00;  // short low pulse = '1', long low pulse = '0'
        dataByte >>= 1;
        res = write_port(fd, &out, 1);
        usleep(100);
        res = read_port(fd, inBuf, sizeof(inBuf));  // flush our echo
        // printf("Flushed %d bytes\n", res);
    }
}

// Using the UART to create a read time slot, then
// detecting if the iButton created a 0-pulse or not.
// Effectively creating a software UART over the hardware UART.

unsigned char OW_CRC(unsigned char *pBuf, int len)
{
    unsigned char loop, i, shiftedBit;
    unsigned char crc = 0x00;

    for(loop=0; loop<len; loop++)
    {
        crc = (crc ^ pBuf[loop]);
        for(i=8; i>0; i--)
        {
            shiftedBit = (crc & 0x01);
            crc >>= 1;
            if(shiftedBit)
            {
                crc = (crc ^ 0x8c);
            }
        }
    }
    return crc;
}

int main(int argc, char *argv[])
{
    int fd;
    unsigned char inBuf[512] = {0,};

    setMux();
    if(argc != 2)
    {
        printf("Usage: %s <path to serialPort>\n", argv[0]);
        exit(-1);
    }
    fd = open_port(argv[1]);
    if(fd == -1)
    {
        printf("Failed to open serial port at %s\n", argv[1]);
        exit(-1);
    }
    while(1)  // one infinite loop
    {
        if(OW_detectKey(fd))                     // is there a key on the probe?
        {
            OW_WriteByte(fd, 0x33);              // if so, write the 'Read ROM' command to it.
            if(OW_ReadBytes(fd, inBuf, 8) == 8)  // then try to read back 8 bytes.
            {
                if(inBuf[0] == DEVICE_FAMILY_IBUTTON)  // 1st byte is the device family. We want family 1.
                {
                    if(OW_CRC(inBuf, 7) == inBuf[7])   // if the CRC matches the last byte in the buffer
                    {
                        printf("Valid key detected with ID:");
                        printf("%02x%02x%02x%02x%02x%02x\n",
                               inBuf[6], inBuf[5], inBuf[4], inBuf[3], inBuf[2], inBuf[1]);
                        sleep(1);  // Wait 1 second before trying again.
                    }
                }
            }
        }
        usleep(1000*100);
    }
    close(fd);
}
/* end of file */
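As a sanity check on the CRC routine, the classic worked example from Maxim's CRC application note can be used: for the ROM bytes 02 1C B8 01 00 00 00 (family code first, as read from the wire), the CRC comes out as 0xA2. Here is a standalone copy of the routine, renamed ow_crc8 to avoid clashing with the listing, so it can be checked off-target:

```c
/* Standalone copy of the listing's OW_CRC (Dallas/Maxim CRC-8,
 * reflected polynomial 0x8C), for checking on a desktop machine. */
unsigned char ow_crc8(const unsigned char *pBuf, int len)
{
    unsigned char crc = 0x00;
    int loop, i;

    for (loop = 0; loop < len; loop++)
    {
        crc ^= pBuf[loop];
        for (i = 8; i > 0; i--)
        {
            unsigned char bit = crc & 0x01;
            crc >>= 1;
            if (bit)
                crc ^= 0x8C;
        }
    }
    return crc;
}
```

Running it over all 8 bytes (data plus CRC) returns 0, which is a handy way to validate a complete ROM read.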
https://wphost.spider-e.com/?p=231%3Fshared%3Demail&msg=fail
Opened 5 years ago
Closed 5 years ago
Last modified 5 years ago

#21023 closed Cleanup/optimization (invalid)

Your documentation is poor at best.

Description (last modified by )

Here is how you do it: URLS:

from django.conf.urls import patterns, url

urlpatterns = patterns('',
    url(r'^articles/(\d{4})/$', 'news.views.year_archive'),
    url(r'^articles/(\d{4})/(\d{2})/$', 'news.views.month_archive'),
    url(r'^articles/(\d{4})/(\d{2})/(\d+)/$', 'news.views.article_detail'),
)

You go line by line and say r = raw. This means no escape characters, the compiler processes what it sees. ^ from Regular expressions means the beginning of the line beginning with the word articles? The slash is: Then you put parenthesis because?: GET IT? then there is another slash AND SO ON!!!! ONE STEP at a time THIS IS HOW YOU EXPLAIN THINGS. Make believe the person reading it doesnt know anything about Regular expressions. These are not standard Regex.. not like Perl

Change History (5)

comment:1 Changed 5 years ago by
comment:2 Changed 5 years ago by
comment:3 Changed 5 years ago by
comment:4 Changed 5 years ago by
comment:5 Changed 5 years ago by

Can we trade a lesson of "how to report a ticket without pissing off maintainers" against a lesson of "how to explain things in technical documentation"? ;-) The tutorial aims to strike a balance between newbie developers and new to django developers. It is explicitly not a *python* tutorial, and a detailed analysis of the structure of a python regex is not helpful in this context. The current version of the docs has a number of links for understanding regexes better already.
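As an illustrative aside (not part of the ticket): the pattern under discussion can be unpacked with plain Python, since Django URL regexes are ordinary `re` patterns, not a special dialect.

```python
import re

# The ticket's first pattern, piece by piece:
#   r'...'      raw string: backslashes pass straight through to the regex engine
#   ^           anchor: the match must start at the beginning of the path
#   articles/   literal text
#   (\d{4})     capture group: exactly four digits (the year), which Django
#               passes as a positional argument to the view
#   /$          a literal slash, then the end of the path
pattern = re.compile(r'^articles/(\d{4})/$')

match = pattern.match('articles/2013/')
assert match is not None
assert match.group(1) == '2013'          # the captured year

# Paths that don't fit the shape are rejected:
assert pattern.match('articles/13/') is None
```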
https://code.djangoproject.com/ticket/21023
Scala support

The 1.1 release of play will include support for the Scala programming language. Thanks to the flexibility of the play framework architecture, the Scala support is provided as a simple module. You just need to enable the scala module in the conf/application.conf file:

module.scala=${play.path}/modules/scala

Then you can write all or parts of your play application using Scala. You can of course mix it with Java. We are in very, very active development on this stuff. You can try it for now as an experimental feature; don't expect to write a complete play application in Scala right now. For a quick overview of the Scala support, you can watch this Scala screencast.

Create a new application, with Scala support

You can automatically create a Scala-ready application by using the --with option of the play new command. Just try:

play new myApp --with scala

The play application will be created as usual, but if you look at the controllers package, the Application.java file is now replaced by an Application.scala file:

package controllers

import play._
import play.mvc._

object Application extends Controller {
    def index = render()
}

It is very close to the Java version of the default Application controller. Now just run the application as usual using play run and it will display the standard welcome page. Now just edit the Application.scala file to replace the render() call:

def index = "Hello scala !"

Refresh the page, and see the magic. As always, if you make a mistake, play will just show you the error in a perfect way (it's just more difficult now to forget the trailing semicolon).

Direct return types

As shown above, for simple action methods you can directly use the inferred return type to send the action result.
For example, using a String:

def index = "<h1>Hello world</h1>"

And you can even use the built-in XML support to write XHTML in a literal way:

def index = <h1>Hello world</h1>

If the return type looks like a binary stream, play will automatically use renderBinary(). So generating a captcha image using the built-in Captcha helper can be written as:

def index = Images.captcha

Action parameters, and Scala default arguments

You can declare action parameters the same way you do it in Java:

def index(name: String) = <h1>Hello {name}</h1>

The big plus of Scala is the ability to define default values for these parameters:

def index(name: String = "Guest") = <h1>Hello {name}</h1>

This way, if the name HTTP parameter is missing, play will use the default argument value.

Controllers composition using traits

A controller can use several traits to combine several interceptors. Let's define a Secure trait:

package controllers

import play._
import play.mvc._

trait Secure extends Controller {
    @Before def check {
        session("user") match {
            case name: String => info("Logged as %s", name)
            case _ => Security.login
        }
    }
}

And you can then use it in the Application controller:

package controllers

object Application extends Controller with Secure {
    def index = "Hello world"
}

How to define and access Models

Models can be defined not only in Java but in Scala as well. Unfortunately, due to the differences between the two languages, the Model API somewhat differs from the Java version.
Main differences:

- fields are passed as constructor arguments
- each and every class needs to extend Model[T]
- helper methods should be defined in the companion object
- companion objects need to extend Model[T]

Here is an example:

@Entity
class User(
    //fields
    @Email @Required var email: String,
    @Required var password: String,
    var fullname: String
) extends Model[User] {

    //instance methods
    var isAdmin = false
    override def toString = email
}

//finder methods
object User extends Model[User] {
    def connect(email: String, password: String) = {
        User.find("byEmailAndPassword", email, password).first
    }
}

Running queries against Scala Models from Scala classes

The following methods are available when running queries against Scala models:

def count(implicit m: M[T]) = i.count(m)
def count(q: String, ps: AnyRef*)(implicit m: M[T])
def findAll(implicit m: M[T])
def findById(id: Any)(implicit m: M[T])
def findBy(q: String, ps: AnyRef*)(implicit m: M[T])
def find(q: String, ps: AnyRef*)(implicit m: M[T])
def all(implicit m: M[T])
def delete(q: String, ps: AnyRef*)(implicit m: M[T])
def deleteAll(implicit m: M[T])
def findOneBy(q: String, ps: AnyRef*)(implicit m: M[T]): T
def create(name: String, ps: play.mvc.Scope.Params)(implicit m: M[T]): T

As you can see, it's really similar to the Java API, so for example to count the number of users, you can just call count on the User class:

User.count

One known limitation of the Scala Model API is that the save method does not work in a chained-call fashion, so you always need to execute it on an instance, as you can see later in the unit testing section.

Running queries against Java Models from Scala classes

In certain situations it might be desirable to query models written in Java from Scala. Since Java models do not extend the Scala Model trait, play provides an alternative query interface in the form of the QueryRunner trait and the corresponding companion object.
In order to utilize this feature you either need to import the query methods:

import play.db.jpa.QueryRunner._

or you can mix in the trait:

class MyController extends Controller with QueryRunner {...}

The API is defined like this:

def count[T](implicit m: M[T]) = i.count(m)
def count[T](q: String, ps: AnyRef*)(implicit m: M[T])
def findAll[T](implicit m: M[T])
def findById[T](id: Any)(implicit m: M[T])
def findBy[T](q: String, ps: AnyRef*)(implicit m: M[T])
def find[T](q: String, ps: AnyRef*)(implicit m: M[T])
def all[T](implicit m: M[T])
def delete[T](q: String, ps: AnyRef*)(implicit m: M[T])
def deleteAll[T](implicit m: M[T])
def findOneBy[T <: JPASupport](q: String, ps: AnyRef*)(implicit m: M[T]): T
def create[T <: JPASupport](name: String, ps: play.mvc.Scope.Params)(implicit m: M[T]): T

Using the previous example, User.count becomes count[User].

Unit Testing

ScalaTest support is integrated into play, so one can easily write unit tests using ScalaTest. For example:

class SpecStyle extends UnitTest with FlatSpec with ShouldMatchers {
    "Creating a user" should "be successful" in {
        val user = new User("bob@gmail.com", "secret", "Bob")
        user.save
        val bob = User.find("byEmail", "bob@gmail.com").first
        bob should not be (null)
        bob.fullname should be ("Bob")
    }
}

Tutorial

We are currently writing a version of the play tutorial for Scala. Read the play tutorial (Scala version).
https://www.playframework.com/documentation/1.1.1/scala
I'm new to using Java and I'm having trouble with this code. Eclipse tells me there is an issue with some conversion. I have no idea what that means, but I'm pretty sure it has to do with printf. Here is the code:

package test;

import java.util.*;

public class TestPrintF {

    private int number;
    Scanner scr = new Scanner(System.in);

    public TestPrintF() {
        number = 0;
    }

    public void changeNumber() {
        System.out.print("Enter an integer: ");
        number = scr.nextInt();
    }

    public void output() {
        System.out.printf("%10", number);
    }
}

In this specific example, I know there are ways to do it without using printf, but I'm trying to work out how to use printf specifically.
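The conversion complaint comes from `"%10"` being an incomplete format specifier: `10` is only a field width, and printf still needs a conversion character after it, such as `d` for a decimal integer. A minimal illustration (the class name here is mine, not from the question):

```java
public class PrintfFix {
    public static void main(String[] args) {
        int number = 42;

        // "%10" has a width but no conversion character, so Java
        // rejects it at run time with a format conversion exception.
        // "%10d" means: decimal integer, right-justified in a
        // 10-character field.
        System.out.printf("%10d%n", number);   // prints "        42"

        // The '-' flag left-justifies instead:
        System.out.printf("%-10d%n", number);  // prints "42        "
    }
}
```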
http://forums.devshed.com/java-help-9/printf-941325.html
http://roseindia.net/tutorialhelp/comment/4132
CC-MAIN-2014-42
refinedweb
1,947
65.83
This post assumes you have a basic understanding of MSPL, so if you haven’t worked with MSPL at all you may want to start by reading my previous posts on it, which you can find here.

How Managed SIP Applications Work

These applications start up in a slightly different way from the script-only applications that you can create with MSPL. For MSPL applications, there is a service that loads in all of the MSPL applications that have been registered, compiles them, and runs them. This is RtcSpl.exe, or the Lync Server Script-Only Applications Service. With an application built with the Managed SIP Application API (should we call it MSAA?) you need to perform these steps yourself. The managed SIP application consists of a few things. There is an executable of some kind, which can be a console app, a Windows service, a WinForms app, whatever. Within the executable there must be a class with methods that handle SIP messages using the classes from Microsoft.Rtc.Sip. There must also be an application manifest file, which, as with script-only MSPL applications, identifies to Lync Server which messages the application will take. This manifest also must contain an MSPL script which acts as a sort of “first line of defense,” deciding which messages are important enough to be sent over to your managed code. When the executable starts up, it loads in and compiles the application manifest file, creates an instance of your handler class, and establishes a connection with Lync Server. It then waits for messages to come through.

The Application Manifest

I’ll start by showing a very simple application manifest that you might use with a SIP application. By the way, there is a collection of sample applications that come along with the Lync Server 2010 SDK, and I highly recommend looking at these as well; they don’t come with a whole lot of documentation, but they’ll give you some ideas of what you can do with the API.
If you used the default install path for the SDK, you can find them at C:\Program Files\Microsoft Lync Server 2010\SDK\Samples\. Here’s the content of my application manifest, which I’ve called ModifyHeaders.am:

<?xml version="1.0"?>
<r:applicationManifest
    r:appUri="http://www.example.com/ModifyHeaders"
    xmlns:r="http://schemas.microsoft.com/lcs/2006/05">
  <r:requestFilter methodNames="INVITE"/>
  <r:responseFilter reasonCodes="ALL"/>
  <r:splScript><![CDATA[
    if (sipRequest) {
      Dispatch("OnRequest");
    }
    else {
      Dispatch("OnResponse");
    }
  ]]></r:splScript>
</r:applicationManifest>

As you can see, this application manifest indicates that the application will handle SIP INVITE requests, and all SIP responses. The script itself is extremely simple, and calls the Dispatch method with a different parameter depending on whether the application is handling a request or a response. The Dispatch method in MSPL is a gateway of sorts to the managed code portion of the application, which we’ll look at in a moment. It passes the SIP message to the corresponding method in your handler class. The script above is about as basic as you can get, simply passing requests and responses to different managed code methods. For this example I’m keeping things simple, but I’ll quickly call out a few modifications you could make here to make the script more useful. For one thing, dispatching a message to managed code is a relatively expensive operation from a performance standpoint, so you want to minimize the frequency with which you have to do it. You definitely want to avoid dispatching any messages which your application isn’t going to do anything with. Let’s say, for instance, that your application modifies a particular SIP header on the message if it’s there. Before dispatching the message, you should check that the header is present. This way you dispatch only the messages that have the SIP header, and avoid unnecessary performance degradation. You might also be looking for messages from or to a particular SIP URI, which again you can check in MSPL.
There are also some things, such as adding SIP headers, that you may be able to do entirely in MSPL, so you can avoid involving the managed code. You can also dispatch messages to different managed code methods depending on conditions you check in the MSPL script. For example, you might have separate methods for messages from internal users vs. external or PSTN users. Finally, you can pass an unlimited number of parameters to the Dispatch method. These go after the method name, and can be nearly anything that converts into a string. So you can do something like this (assuming you’ve stored something in variables called data1, data2, and data3):

Dispatch("HandleMessage", data1, data2, data3);

The Executable

The other piece to the application, as I mentioned earlier, is an executable. The easiest way to start out is just to create a console app project in Visual Studio, but you could use a Windows service as well. You’ll need to add a reference to ServerAgent.dll, which is installed by default at C:\Program Files\Microsoft Lync Server 2010\SDK\Bin\ServerAgent.dll. Next, let’s take a look at a handler class in managed code. This one doesn’t do anything very exciting, but it’s easy to understand. For requests, it first looks for the Ms-Sensitivity header and removes it if it’s there. For both requests and responses, it adds a ModifyHeadersSample header with the host name of the server. Note that we can end up with multiple instances of this header if the script runs on more than one server. Here’s the class:

using System.Net;
using Microsoft.Rtc.Sip;

namespace ModifyHeadersSample
{
    public class ModifyHeaders
    {
        public void OnRequest(object sender, RequestReceivedEventArgs e)
        {
            // Enable simple proxy mode and disable forking.
            e.Request.SimpleProxy = true;
            e.ServerTransaction.EnableForking = false;

            // Get a collection of all headers on the request.
            HeaderCollection headers = e.Request.AllHeaders;

            // Remove the Ms-Sensitivity header if it is present.
            Header sensitivityHeader = headers.FindFirst("Ms-Sensitivity");
            if (sensitivityHeader != null)
            {
                headers.Remove(sensitivityHeader);
            }

            // Add a ModifyHeadersSample header.
            Header newHeader = new Header("ModifyHeadersSample",
                Dns.GetHostEntry("localhost").HostName);
            headers.Add(newHeader);

            // Send the request along.
            e.ServerTransaction.CreateBranch().SendRequest(
                e.Request);
        }

        public void OnResponse(object sender, ResponseReceivedEventArgs e)
        {
            // Get a collection of all headers on the response.
            HeaderCollection headers = e.Response.AllHeaders;

            // Add a ModifyHeadersSample header.
            Header newHeader = new Header("ModifyHeadersSample",
                Dns.GetHostEntry("localhost").HostName);
            headers.Add(newHeader);

            // Send the response along.
            e.ClientTransaction.ServerTransaction.SendResponse(
                e.Response);
        }
    }
}

I won’t explain every bit of this class, since some of it is fairly self-explanatory, although I will go into more detail on the classes and methods available in a future post. But I do want to draw attention to a few parts that may not be so clear. First, let’s look at this bit here:

// Enable simple proxy mode and disable forking.
e.Request.SimpleProxy = true;
e.ServerTransaction.EnableForking = false;

The first part deals with the Request class, which represents the SIP request itself. It sets a property called SimpleProxy to true. Unfortunately, I haven’t been able to find any documentation on this property, so I can’t give you the full details on what it does, but I know from looking through sample code that it helps improve performance when turned on. My guess would be that you can only use it if you’re not really modifying the routing of the message, and you’re simply changing or inspecting the message itself. I’ve turned it on here since we’re not doing anything with routing in this application. The second part is to say that forking (sending the message to two possible destinations) is disabled for this transaction. We’ll simply be passing the message along to wherever it was already going. After that we do some SIP header manipulation, and then there is this bit of code:

// Send the request along.
e.ServerTransaction.CreateBranch().SendRequest(
    e.Request);

Essentially what we’re doing here is taking the server transaction (the transaction where we are acting as the server, receiving the request) and using it to create a client transaction (one where we’re acting as the client, sending the request along somewhere else). Then we’re calling the SendRequest method on that ClientTransaction object to send the request along to its destination. The OnResponse method is a simpler version of the same — in this case, the client and server parts are reversed because we’re receiving a response (as the client) and sending the same response back to the origin (as the server). Let me say that another way to make sure I’m being clear. For each request-response pair, the application gets to wear two hats: the server hat and the client hat. When a request first comes in, the app is wearing its server hat. It takes the message and does something with it. Then it puts on its client hat and sends it along to another user. That user sends back a response, which the app receives, still wearing its client hat. Then it switches back to its server hat and sends the response along to the original sender. Sorry for the awful hat; I couldn’t find any clip art. We need one last thing to run this application: a Program class to be the entry point for our executable. Here’s an example:

using System;
using System.Threading;
using Microsoft.Rtc.Sip;

namespace ModifyHeadersSample
{
    public class Program
    {
        static void Main(string[] args)
        {
            ModifyHeaders serverApplication = new ModifyHeaders();

            try
            {
                // Try to connect to the server 5 times.
                ServerAgent.WaitForServerAvailable(5);
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex);
            }

            Environment.CurrentDirectory =
                System.AppDomain.CurrentDomain.BaseDirectory;

            // Load the app manifest from a file.
            ApplicationManifest manifest =
                ApplicationManifest.CreateFromFile("ModifyHeaders.am");

            try
            {
                // Try to compile the manifest.
                manifest.Compile();
            }
            catch (CompilerErrorException ex)
            {
                Console.WriteLine(ex);
            }

            ServerAgent agent = null;

            try
            {
                // Create the new server agent object, setting
                // the ModifyHeaders object as the handler for messages.
                agent = new ServerAgent(serverApplication, manifest);
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex);
            }

            if (agent != null)
            {
                Console.WriteLine("Server application started.");

                while (true)
                {
                    // Wait for a message to arrive and then handle it.
                    agent.WaitHandle.WaitOne();
                    ThreadPool.QueueUserWorkItem(
                        new WaitCallback(agent.ProcessEvent));
                }
            }
            else
            {
                Console.WriteLine("Server application failed to start.");
            }
        }
    }
}

Nothing shocking here — it just waits for the server to be available, compiles the app manifest, and then connects to the server using a new instance of our handler class and the manifest. If you look at the samples that come with the SDK, there is a utility class in there that does some of this for you, which you may want to use.

Building, Installing, and Testing the Application

At this point, the application is ready to be built. Make sure you target x64 or Any CPU as the platform, not x86, or your application won’t work. Move the compiled application, along with the manifest file, to a Lync Front End Server. (I probably don’t need to say this, but please don’t try this on a production Lync server — you can seriously screw things up. Use an isolated test environment with no real users.) You can put it pretty much anywhere on the server; it doesn’t matter where the executable is located. The next very important step is to add the user under whose identity the application will run to the RTC Server Application local group on the Front End Server. If you don’t do this, your application will crash with an UnauthorizedException. Last but not least, we need to register the application with Lync Server. You can do this with the New-CsServerApplication PowerShell command.
You would enter something like this:

New-CsServerApplication -Identity Service:Registrar:lync-se.domain.local/ModifyHeaders -Uri -Enabled $true -Critical $false

The idea is the same as with MSPL script-only apps, so you can refer back to those instructions if you’re not sure how to do this. The only difference is that you don’t specify any value for the ScriptName parameter. That parameter is specifically for script-only applications, and if you put a value in here, Lync will treat your application as script-only and get confused. Once all of this is done, cross your fingers and run the .exe. (It’s usually easier to run from an existing console window so you can see any exceptions that get spit out if the app fails.) If all goes well, it will pause for a moment and then print a message to the console saying it has started. You can double-check that the application has connected to Lync Server by going into Event Viewer and looking for an event like the one I’ve selected here: If all is well at this point, you’re pretty much done. Start a logging session on your Front End Server, then open up the Lync client and send an IM to another user. Stop the log and look at the messages. You should see the ModifyHeadersSample header tacked on to the INVITE when it is going outbound from the Front End Server. This has been a very quick overview of the Managed SIP Application API, but should at least start you out if you are curious about using it for your own development. Stay tuned for future posts that delve into more specific topics on how to use the API.

I actually quite like the hats 🙂 We’ve done quite a lot of work with this API over the last few years, and hit the same question mark over SimpleProxy as you did. I think your explanation pretty much covers it, but there is some more info for anyone interested here:

Thanks, Paul!
gr8 topic as usual, but is it possible to modify a "not available" (or any) response and send it as 200 OK? I’ve been trying to do this for a while now. I’ve used my request headers, replacing some headers’ values like Contact and so on, and tried the Blueface libraries, but still no success. Is there any way to send a 200 OK response using C#?

Would it be possible to use MSPL (with or without using managed code) to modify responses from the mediation server? Since OCS 2007 R2, many people have complained that the early media SDP that the mediation server sends in the 183 Progress message causes the [non-Lync] caller to experience no ring-back. A script that identifies the 183 response from the mediation server before it’s sent and deletes the SDP would really clear up some headaches. Thx, Sam

Interesting – I’m pretty sure you could do that with a managed SIP application. I don’t know what side effects it might have, though. Let me see what I can come up with. Are you looking for specifics on how to do this, or do you already have an approach in mind?

Hi, good explanations (as in your book), a question please: How can I monitor A/V call status from a certain group? Transferred/answered/declined etc. Thanks again.

I’m not quite sure what you mean by a certain group – could you explain a bit more?

I’m getting UnauthorizedException when I run the service, which worked fine on my development environment, but on production (different domain & environment) it doesn’t work. I added the user on BOTH Lync FE servers to the RTC Server Application local group. Any ideas? Thanks

Hi, I have made a service that runs a MSPL on a new thread, but I got a problem. When I do Stop-CsWindowsService while _agent.WaitHandle.WaitOne(); is activated, the thread is blocked, and Stop-CsWindowsService won’t free it. How can I set/kill _agent.WaitHandle? (_agent.WaitHandle as AutoResetEvent).Set(); doesn’t help either. Thanks
I add a History-Info header to a request I Retarget and Send. I can see the header in the header collection of the request, before the call to ClientTransaction.SendRequest(), but when I look at the SIP invite in OCSLogger, the header is not there. Can it be that Lync will remove headers in some cases? As @Sven wrote, I’m encountering the same issue. Following your example, but when I’m forwarding the call out of SFB, the ModifyHeadersSample header is stripped away. It seems to me like the Mediation server is stripping those headers away. Is there any way to prevent this? Running SFB 2015.
What is the difference between with (*) forEach and without? What is the output and why? I know without forEach nothing will be printed... but why is peek needed?

public class NaturalNumbers implements Supplier<Integer> {
    private int i = 0;

    public Integer get() {
        return ++i;
    }
}

public static void main(String[] args) {
    Stream<Integer> s = Stream.generate(new NaturalNumbers());
    s.limit(5)
     .peek(System.out::println)
     .forEach(System.out::println); // (*)
}

peek is just a way to peek inside the stream; it doesn't change it, but the function passed to peek will receive all the elements that are in the stream. forEach is a terminal operation that will consume all the data in the stream and return void.

The result of the above code is each number printed twice; when you remove peek you will get the numbers printed only once. If you remove forEach you won't get any output, because a stream won't execute its actions until a terminal operation (e.g. forEach, count) is encountered.
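To make the double visit observable without reading console output, here is a self-contained variant of the snippet above that records everything peek and forEach see in a list (the class and method names here are mine, not from the original question):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;
import java.util.stream.Stream;

public class PeekVsForEach {
    // Same supplier as in the question: 1, 2, 3, ...
    static class NaturalNumbers implements Supplier<Integer> {
        private int i = 0;
        public Integer get() { return ++i; }
    }

    // Collects every element peek and forEach each observe.
    public static List<Integer> run() {
        List<Integer> seen = new ArrayList<>();
        Stream.generate(new NaturalNumbers())
              .limit(5)
              .peek(seen::add)     // intermediate: observes each element in passing
              .forEach(seen::add); // terminal: actually pulls elements through
        return seen;
    }

    public static void main(String[] args) {
        // Elements flow one at a time, so peek and forEach alternate:
        System.out.println(run()); // [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
    }
}
```

Dropping the .peek(...) line yields [1, 2, 3, 4, 5]; dropping the .forEach(...) line yields an empty list, because without a terminal operation the pipeline never runs.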
Ordering output. Send licence blurb to the top of the file. How to?

The order of files included should be determined by Cmd based on your requires.

Aw cmon. Two js files. Concatenate them in a specific order. Without dragging in the rest of the world, cooking supper, organizing a union and calling China. "The one required parameter is out". Which is not shown in the example. I had it working in an older version. Here I am guessing again. A license blurb is not a component to be required. In fact I generate it:

Code:
<!-- dsm 5 apr 13 Sets the DSTAMP, TSTAMP, and TODAY properties in the current project -->
<tstamp>
    <format property="TODAY_BLURB" pattern="EEE, d MMM yyyy HH:mm:ss Z" />
</tstamp>

<!-- dsm 5 apr 13 Create a blurb with today's date/time stamp -->
<target name="blurb" description="Write blurb to startup.js">
    <echo file="${app.dir}/startup/blurb.js">/**
 * (c) Copyright 2013 steward. All Rights Reserved.
 * Generated for my.site.com on ${TODAY_BLURB}
 */
</echo>
</target>

But I understand the concepts; I'm just annoyed at constantly playing and guessing the syntax. I am trolling for samples. Simple examples. Please!

Maybe this is a better question. Can anybody translate this?

Code:
<!-- The order files are listed here is the honest to god order of the concatenated output -->
<target name="final" description="">
    <concat destfile="test">
        <filelist dir="startup" files="blurb.js"/>
        <filelist dir="${build.dir}" files="${build.classes.name}"/>
    </concat>
</target>

I seem to have a gap in my head that prevents me from getting it right. Unless I am confused, I have this notion: the classpath contains many files. Excluding them by namespace and tags etc. is okay but tedious, when it would be faster to go the other way around: I only want to deal with two files. Perhaps what I want to do is override the classpath for a single compile statement: shrink my universe.
I think I can do this easily outside the project. But that ultimately won't be useful. The docs fail for me at this point. I have read them nineteen times over two months. Why is it they are no good for reference? Because they lack the series of simple examples (cf. Apache Ant pages).

If all you want to do is concatenate two files, couldn't you use the "sencha fs concat" command? For example:

Code:
sencha fs concat -to=output.js input1.js input2.js

The heart of that algorithm is based on a topological sort, which is a fancy way to say that unless the two files have an explicit order specified between them, any order is valid. Stated another way: the classpath is not an ordered sequence in itself - it is an unordered set of files. The final order of the files is determined by the relationships internal to those files (as Mitchell pointed out). So, if file B.js must follow file A.js and you are not using Ext.define/requires to specify the relationship, you can do this in B.js:

Code:
//@require A.js
alert('beta');

Don Griffin
Director of Engineering - Frameworks (Ext JS / Sencha Touch)

Because I didn't spot it, because I had a different notion in my head, and because initial tests got me nothing but error messages. The correct command line for 3.1.0.256 is:

Code:
sencha fs concat -to c.js -from b.js, a.js

and it works great. Thank you very much for your reply.

- // require A.js

What happened is that I had problems building (still do) and found that I could release by compiling only. In some earlier version it so happened that the classpath determined the order of inclusion. Guess I leaned on that. I grok your viewpoint. By using requires we get the dependencies ordered within a (terminology here: page? workspace? temporary snapshot?). But a next step in the build is to string those together. I think that was done in the past by ordering script tags in the markup.
The examples in the docs are sparse and incorrect, and I am getting very old. That's all. Thank you both for responding. It helped.

We do it by using a modified build.xml, and include the blurb there rather than in an external file. Here's the pertinent section to add (this works with Cmd v3.0.2.288... and earlier):

Code:
<target name="-after-page">
    <move file="${build.classes.file}" tofile="${build.classes.file}.tmp"/>
    <concat destfile="${build.classes.file}">
        <header filtering="no" trimleading="yes">
/* Your blurb here will be at the top of the all-classes.js file. */
        </header>
        <fileset file="${build.classes.file}.tmp"/>
    </concat>
    <delete file="${build.classes.file}.tmp" />
</target>

@cmeans: Perfect. Thank you. Exactly what I was trying to work out. To my verbiage I added a time stamp. Now I have the blurb I want at the top of the file, and I know when that file was built:

Code:
<target name="-after-page">
    <tstamp>
        <format property="MYtimestamp" pattern="yyyy.MM.dd-hh.mm.ss" locale="en,UK"/>
    </tstamp>
    <tstamp>
        <format property="MYyyyy" pattern="yyyy" locale="en,UK"/>
    </tstamp>
    <move file="${build.classes.file}" tofile="${build.classes.file}.tmp"/>
    <concat destfile="${build.classes.file}">
        <header filtering="no" trimleading="yes">
/*
  Copyright 2002 - ${MYyyyy} All Rights Reserved
  ${MYtimestamp}
  MY COMPANY
  My verbiage
*/
        </header>
        <fileset file="${build.classes.file}.tmp"/>
    </concat>
    <delete file="${build.classes.file}.tmp" />
</target>
NAME

pthread_join - wait for thread termination

SYNOPSIS

#include <pthread.h>

int pthread_join(pthread_t thread, void **value_ptr);

DESCRIPTION

The pthread_join() function suspends execution of the calling thread until the target thread terminates, unless the target thread has already terminated. On return from a successful pthread_join() call with a non-NULL value_ptr argument, the value passed to pthread_exit() by the terminating thread is made available in the location referenced by value_ptr. When a pthread_join() returns successfully, the target thread has been terminated. The results of multiple simultaneous calls to pthread_join() specifying the same target thread are undefined. If the thread calling pthread_join() is cancelled, then the target thread will not be detached.

It is unspecified whether a thread that has exited but remains unjoined counts against _POSIX_THREAD_THREADS_MAX.

RETURN VALUE

If successful, the pthread_join() function returns zero. Otherwise, an error number is returned to indicate the error.

ERRORS

The pthread_join() function may fail if:

[EINVAL]
    The implementation has detected that the value specified by thread does not refer to a joinable thread.
[ESRCH]
    No thread could be found corresponding to that specified by the given thread ID.
[EDEADLK]
    A deadlock was detected or the value of thread specifies the calling thread.

The pthread_join() function will not return an error code of [EINTR].

EXAMPLES

None.

APPLICATION USAGE

None.

FUTURE DIRECTIONS

None.

SEE ALSO

pthread_create(), wait(), <pthread.h>.

CHANGE HISTORY

Derived from the POSIX Threads Extension (1003.1c-1995)
What are Editions?

Rust ships releases on a six-week cycle. This means that users get a constant stream of new features. This is much faster than updates for other languages, but this also means that each update is smaller. After a while, all of those tiny changes add up. But, from release to release, it can be hard to look back and say "Wow, between Rust 1.10 and Rust 1.20, Rust has changed a lot!" Every two or three years, we'll be producing a new edition of Rust. Each edition brings together the features that have landed into a clear package, with fully updated documentation and tooling. New editions ship through the usual release process.

Compatibility

When a new edition becomes available in the compiler, crates must explicitly opt in to it to take full advantage. This opt in enables editions to contain incompatible changes, like adding a new keyword that might conflict with identifiers in code, or turning warnings into errors. A Rust compiler will support all editions that existed prior to the compiler's release, and can link crates of any supported editions together. Edition changes only affect the way the compiler initially parses the code. Therefore, if you're using Rust 2015, and one of your dependencies uses Rust 2018, it all works just fine. The opposite situation works as well. Just to be clear: most features will be available on all editions. People using any edition of Rust will continue to see improvements as new stable releases are made. In some cases however, mainly when new keywords are added, but sometimes for other reasons, there may be new features that are only available in later editions. You only need to upgrade if you want to take advantage of such features.
Creating a new project

When you create a new project with Cargo, it will automatically add configuration for the latest edition:

> cargo +nightly new foo
     Created binary (application) `foo` project
> cat .\foo\Cargo.toml
[package]
name = "foo"
version = "0.1.0"
authors = ["your name <you@example.com>"]
edition = "2018"

[dependencies]

That edition = "2018" setting will configure your package to use Rust 2018. No more configuration needed! If you'd prefer to use an older edition, you can change the value in that key, for example:

[package]
name = "foo"
version = "0.1.0"
authors = ["your name <you@example.com>"]
edition = "2015"

[dependencies]

This will build your package in Rust 2015.

Transitioning an existing project to a new edition

New editions might change the way you write Rust – they add new syntax, language, and library features, and also remove features. For example, try, async, and await are keywords in Rust 2018, but not Rust 2015. If you have a project that's using Rust 2015, and you'd like to use Rust 2018 for it instead, there are a few steps that you need to take. It's our intention that the migration to new editions is as smooth an experience as possible. If it's difficult for you to upgrade to Rust 2018, we consider that a bug. If you run into problems with this process, please file a bug. Thank you!

Here's an example. Imagine we have a crate that has this code in src/lib.rs:

trait Foo {
    fn foo(&self, Box<Foo>);
}

This code uses an anonymous parameter, that Box<Foo>. This is not supported in Rust 2018, so it would fail to compile there. To help update code to a new edition, we've included a new subcommand with Cargo. To start, let's run it:

> cargo fix --edition

This will check your code, and automatically fix any issues that it can. Let's look at src/lib.rs again:

trait Foo {
    fn foo(&self, _: Box<Foo>);
}

It's re-written our code to introduce a parameter name for that trait object.
If cargo fix can't fix something automatically, check the corresponding section of this guide for help, and if you have problems, please seek help at the user's forums. Keep running cargo fix --edition until you have no more warnings. Congrats! Your code is now valid in both Rust 2015 and Rust 2018!

Enabling the new edition to use new features

In order to use some new features, you must explicitly opt in to the new edition. Once you're ready to commit, change your Cargo.toml to add the new edition key/value pair. For example:

[package]
name = "foo"
version = "0.1.0"
authors = ["Your Name <you@example.com>"]
edition = "2018"

If there's no edition key, Cargo will default to Rust 2015. But in this case, we've chosen 2018, and so our code is compiling with Rust 2018!

Writing idiomatic code in a new edition

Editions are not only about new features and removing old ones. In any programming language, idioms change over time, and Rust is no exception. While old code will continue to compile, it might be written with different idioms today. Our sample code contains an outdated idiom. Here it is again:

trait Foo {
    fn foo(&self, _: Box<Foo>);
}

In Rust 2018, it's considered idiomatic to use the dyn keyword for trait objects. Eventually, we want cargo fix to fix all these idioms automatically in the same manner we did for upgrading to the 2018 edition. Currently, though, the "idiom lints" are not ready for widespread automatic fixing. The compiler isn't making cargo fix-compatible suggestions in many cases right now, and it is making incorrect suggestions in others. Enabling the idiom lints, even with cargo fix, is likely to leave your crate either broken or with many warnings still remaining. We have plans to make these idiom migrations a seamless part of the Rust 2018 experience, but we're not there yet. As a result the following instructions are recommended only for the intrepid who are willing to work through a few compiler/Cargo bugs!
With that out of the way, we can instruct Cargo to fix our code snippet with:

$ cargo fix --edition-idioms

Afterwards, src/lib.rs looks like this:

trait Foo {
    fn foo(&self, _: Box<dyn Foo>);
}

We're now more idiomatic, and we didn't have to fix our code manually! Note that cargo fix may still not be able to automatically update our code. If cargo fix can't fix something, it will print a warning to the console, and you'll have to fix it manually. As mentioned before, there are known bugs around the idiom lints which means they're not all ready for prime time yet. You may get a scary-looking warning to report a bug to Cargo, which happens whenever a fix proposed by rustc actually caused code to stop compiling by accident. If you'd like cargo fix to make as much progress as possible, even if it causes code to stop compiling, you can execute:

$ cargo fix --edition-idioms --broken-code

This will instruct cargo fix to apply automatic suggestions regardless of whether they work or not. Like usual, you'll see the compilation result after all fixes are applied. If you notice anything wrong or unusual, please feel free to report an issue to Cargo and we'll help prioritize and fix it. Enjoy the new edition!

Rust 2015

Rust 2015 has a theme of "stability". It commenced with the release of 1.0, and is the "default edition". The edition system was conceived in late 2017, but Rust 1.0 was released in May of 2015. As such, 2015 is the edition that you get when you don't specify any particular edition, for backwards compatibility reasons. "Stability" is the theme of Rust 2015 because 1.0 marked a huge change in Rust development. Previous to Rust 1.0, Rust was changing on a daily basis. This made it very difficult to write large software in Rust, and made it difficult to learn. With the release of Rust 1.0 and Rust 2015, we committed to backwards compatibility, ensuring a solid foundation for people to build projects on top of.
Since it's the default edition, there's no way to port your code to Rust 2015; it just is. You'll be transitioning away from 2015, but never really to 2015. As such, there's not much else to say about it!

Rust 2018

The edition system was created for the release of Rust 2018. The theme of Rust 2018 is productivity. Rust 2018 improves upon Rust 2015 through new features, simpler syntax in some cases, a smarter borrow-checker, and a host of other things. These are all in service of the productivity goal. Rust 2015 was a foundation; Rust 2018 smooths off rough edges, makes writing code simpler and easier, and removes some inconsistencies.

2018-Specific Changes

The following is a summary of changes that only apply to code compiled with the 2018 edition compared to the 2015 edition.

- Path changes:
  - Paths in use declarations work the same as other paths.
  - Paths starting with :: must be followed with an external crate.
  - Paths in pub(in path) visibility modifiers must start with crate, self, or super.
- Anonymous trait function parameters are not allowed.
- Trait function parameters may use any irrefutable pattern when the function has a body.
- Keyword changes:
  - dyn is a strict keyword; in 2015 it is a weak keyword.
  - async and await are strict keywords.
  - try is a reserved keyword.
- The following lints are now a hard error that you cannot silence:

Cargo

- If there is a target definition in a Cargo.toml manifest, it no longer automatically disables automatic discovery of other targets.
- Target paths of the form src/{target_name}.rs are no longer inferred for targets where the path field is not set.
- cargo install for the current directory is no longer allowed; you must specify cargo install --path . to install the current package.

Module system

In this chapter of the guide, we discuss a few changes to the module system. The most notable of these are the path clarity changes.

Raw identifiers

Rust, like many programming languages, has the concept of "keywords".
These identifiers mean something to the language, and so you cannot use them in places like variable names, function names, and other places. Raw identifiers, written with an r# prefix, let you use keywords where they would not normally be allowed. New keywords The new confirmed keywords in edition 2018 are: async and await Here, async is reserved for use in async fn as well as in async || closures and async { .. } blocks. Meanwhile, await is reserved to keep our options open with respect to await!(expr) syntax. See RFC 2394 for more details. try The do catch { .. } blocks have been renamed to try { .. } and to support that, the keyword try is reserved in edition 2018. See RFC 2388 for more details. Path clarity Here's a brief summary: - extern crate is no longer needed in 99% of circumstances. - The crate keyword refers to the current crate. - Paths may start with a crate name, even within submodules. - Paths starting with :: must reference an external crate. - A foo.rs and foo/ subdirectory may coexist; mod.rs is no longer needed when placing submodules in a subdirectory. - Paths in use declarations work the same as other paths. These may seem like arbitrary new rules when put this way, but the mental model is now significantly simplified overall. Read on for more details! The main exception to "extern crate is no longer needed" is the sysroot crates that ship with the compiler, such as std, core, alloc, and test. Usually these are only needed in very specialized situations. Starting in 1.41, rustc accepts the --extern=CRATE_NAME flag which automatically adds the given crate name in a way similar to extern crate. Build tools may use this to inject sysroot crates into the crate's prelude. Cargo does not have a general way to express this, though it uses it for proc_macro crates. Some examples of needing to explicitly import sysroot crates are: - std: Usually this is not necessary, because std is automatically imported unless the crate is marked with #![no_std]. - core: Usually this is not necessary, because core is automatically imported, unless the crate is marked with #![no_core]. For example, some of the internal crates used by the standard library itself need this.
- alloc: Items in the alloc crate are usually accessed via re-exports in the std crate. If you are working with a no_std crate that supports allocation, then you may need to explicitly import alloc. - test: This is only available on the nightly channel, and is usually only used for the unstable benchmark support. Macros One other use for extern crate was to import macros; that's no longer needed. Check the macro section for more. Renaming crates Extern crate paths Previously, using an external crate in a module without a use import required a leading :: on the path. // Rust 2015 extern crate chrono; fn foo() { // this works in the crate root let x = chrono::Utc::now(); } mod submodule { fn function() { // but in a submodule it requires a leading :: if not imported with `use` let x = ::chrono::Utc::now(); } } Now, extern crate names are in scope in the entire crate, including submodules. // Rust 2018 fn foo() { // this works in the crate root let x = chrono::Utc::now(); } mod submodule { fn function() { // crates may be referenced directly, even in submodules let x = chrono::Utc::now(); } } No more mod.rs In Rust 2015, if you have a submodule: // This `mod` declaration looks for the `foo` module in // `foo.rs` or `foo/mod.rs`. mod foo; It can live in foo.rs or foo/mod.rs. If it has submodules of its own, it must be foo/mod.rs. So a bar submodule of foo would live at foo/bar.rs. In Rust 2018 the restriction that a module with submodules must be named mod.rs is lifted: foo/mod.rs can just be foo.rs, and the submodule is still foo/bar.rs. This eliminates the special name, and if you have a bunch of files open in your editor, you can clearly see their names, instead of having a bunch of tabs named mod.rs. use paths Rust 2018 simplifies and unifies path handling compared to Rust 2015. In Rust 2015, paths work differently in use declarations than they do elsewhere.
In particular, paths in use declarations would always start from the crate root, while paths in other code implicitly started from the current scope. In Rust 2018, paths in use declarations work the same as other paths, and you can use a relative path from the current scope: // Rust 2018 use futures::Future; mod foo { pub struct Bar; } use foo::Bar; fn my_poll() -> futures::Poll { ... } enum SomeEnum { V1(usize), V2(String), } fn func() { let five = std::sync::Arc::new(5); use SomeEnum::*; match ... { V1(i) => { ... } V2(s) => { ... } } } The same code will also work completely unmodified in a submodule: // Rust 2018 ... More visibility modifiers You can use the pub keyword to make something a part of a module's public interface. But in addition, there are some new forms: pub(crate) struct Foo; pub(in a::b::c) struct Bar; The first form makes the Foo struct public to your entire crate, but not externally. The second form is similar, but makes Bar public for one other module, a::b::c in this case. Nested imports with use A new way to write use statements has been added to Rust: nested import groups. If you’ve ever written a set of imports like this: #![allow(unused)] fn main() { use std::fs::File; use std::io::Read; use std::path::{Path, PathBuf}; } You can now write this: #![allow(unused)] fn main() { mod foo { // on one line use std::{fs::File, io::Read, path::{Path, PathBuf}}; } mod bar { // with some more breathing room use std::{ fs::File, io::Read, path::{ Path, PathBuf } }; } } This can reduce some repetition, and make things a bit more clear. Error handling and Panics In this chapter of the guide, we discuss a few improvements to error handling in Rust. The most notable of these is the introduction of the ? operator. The ? operator for easier error handling (for Result<T, E> and for Option<T>) Rust has gained a new operator, ?, that makes error handling more pleasant by reducing the visual noise involved. It does this by solving one simple problem.
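Before the longer file-reading example, here is the flavor of ? in miniature (our own toy function, not a snippet from the guide): it unwraps an Ok value, and returns the Err early from the enclosing function.

```rust
use std::num::ParseIntError;

// `?` unwraps the `Ok` value, or returns the `Err` early
// from the enclosing function.
fn double(input: &str) -> Result<i32, ParseIntError> {
    let n: i32 = input.parse()?;
    Ok(n * 2)
}

fn main() {
    assert_eq!(double("21"), Ok(42));
    assert!(double("twenty-one").is_err());
}
```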
To illustrate, imagine we had some code to read some data from a file: #![allow(unused)] fn main() { use std::{io::{self, prelude::*}, fs::File}; fn read_username_from_file() -> Result<String, io::Error> { let f = File::open("username.txt"); let mut f = match f { Ok(file) => file, Err(e) => return Err(e), }; let mut s = String::new(); match f.read_to_string(&mut s) { Ok(_) => Ok(s), Err(e) => Err(e), } } } Note: this code could be made simpler with a single call to std::fs::read_to_string, but we're writing it all out manually here to have an example with multiple errors. With ?, the same function looks like this: #![allow(unused)] fn main() { use std::{io::{self, prelude::*}, fs::File}; ... } Previously, read_username_from_file could have been implemented like this: #![allow(unused)] fn main() { use std::{io::{self, prelude::*}, fs::File}; ... } You can use ? with Result<T, E>s, but also with Option<T>. In that case, ? will return a value for Some(T) and return None for None. One current restriction is that you cannot use ? for both in the same function, as the return type needs to match the type you use ? on. In the future, this restriction will be lifted. ? in main and tests Rust's error handling revolves around returning Result<T, E> and using ? to propagate errors. For those who write many small programs and, hopefully, many tests, one common paper cut has been mixing entry points such as main and #[test]s with error handling. As an example, you might have tried to write: use std::fs::File; fn main() { let f = File::open("bar.txt")?; } Since ?
works by propagating the Result with an early return to the enclosing function, the snippet above does not work, and results today in the following error: error[E0277]: the `?` operator can only be used in a function that returns `Result` or `Option` (or another type that implements `std::ops::Try`) --> src/main.rs:5:13 | 5 | let f = File::open("bar.txt")?; | ^^^^^^^^^^^^^^^^^^^^^^ cannot use the `?` operator in a function that returns `()` | = help: the trait `std::ops::Try` is not implemented for `()` = note: required by `std::ops::Try::from_error` To solve this problem in Rust 2015, you might have written something like: // Rust 2015 use std::process; use std::error::Error; fn run() -> Result<(), Box<Error>> { // real logic.. Ok(()) } fn main() { if let Err(e) = run() { println!("Application error: {}", e); process::exit(1); } } However, in this case, the run function has all the interesting logic and main is just boilerplate. The problem is even worse for #[test]s, since there tend to be a lot more of them. In Rust 2018 you can instead let your #[test]s and main functions return a Result: // Rust 2018 use std::fs::File; fn main() -> Result<(), std::io::Error> { let f = File::open("bar.txt")?; Ok(()) } In this case, if say the file doesn't exist and there is an Err(err) somewhere, then main will exit with an error code (not 0) and print out a Debug representation of err. Note that this will always print out the Debug representation. If you would like to, for example, print out the Display representation of err, you will still have to do what you would in Rust 2015. More details Getting -> Result<..> to work in the context of main and #[test]s is not magic. It is all backed up by a Termination trait which all valid return types of main and testing functions must implement. 
The trait is defined as: #![allow(unused)] fn main() { pub trait Termination { fn report(self) -> i32; } } When setting up the entry point for your application, the compiler will use this trait and call .report() on the Result of the main function you have written. Two simplified example implementations of this trait for Result and () are: #![allow(unused)] fn main() { #![feature(process_exitcode_placeholder, termination_trait_lib)] use std::process::ExitCode; use std::fmt; pub trait Termination { fn report(self) -> i32; } impl Termination for () { fn report(self) -> i32 { use std::process::Termination; ExitCode::SUCCESS.report() } } impl<E: fmt::Debug> Termination for Result<(), E> { fn report(self) -> i32 { match self { Ok(()) => ().report(), Err(err) => { eprintln!("Error: {:?}", err); use std::process::Termination; ExitCode::FAILURE.report() } } } } } As you can see in the case of (), a success code is simply returned. In the case of Result, the success case delegates to the implementation for () but prints out an error message and a failure exit code on Err(..). To learn more about the finer details, consult either the tracking issue or the RFC. Controlling panics with std::panic There is a std::panic module, which includes methods for halting the unwinding process started by a panic: #![allow(unused)] fn main() { use std::panic; let result = panic::catch_unwind(|| { println!("hello!"); }); assert!(result.is_ok()); let result = panic::catch_unwind(|| { panic!("oh no!"); }); assert!(result.is_err()); }. It's also worth noting that programs may choose to abort instead of unwind, and so catching panics may not work. If your code relies on catch_unwind, you should add this to your Cargo.toml: [profile.dev] panic = "unwind" [profile.release] panic = "unwind" If any of your users choose to abort, they'll get a compile-time failure. The catch_unwind API offers a way to introduce new isolation boundaries within a thread. 
There are a couple of key motivating examples: - Embedding Rust in other languages - Abstractions that manage threads - Test frameworks, because tests may panic and you don't want that to kill the test runner. Aborting on panic By default, Rust programs will unwind the stack when a panic! happens. If you'd prefer an immediate abort instead, you can configure this in Cargo.toml: [profile.dev] panic = "abort" [profile.release] panic = "abort" Why might you choose to do this? By removing support for unwinding, you'll get smaller binaries. You will lose the ability to catch panics. Which choice is right for you depends on exactly what you're doing. Control flow In this chapter of the guide, we discuss a few improvements to control flow. The most notable of these will be async and await. loops can break with a value loops can now break with a value: #![allow(unused)] fn main() { // ... } For now, this only applies to loop, and not things like while or for. See the rationale for this decision in RFC issue #1767. async/await for easier concurrency The initial release of Rust 2018 won't ship with async/await support, but we have reserved the keywords so that a future release will contain them. We'll update this page when it's closer to shipping! Trait system In this chapter of the guide, we discuss a few improvements to the trait system. The most notable of these is impl Trait. impl Trait for returning complex types with ease impl Trait is the new way to specify unnamed but concrete types that implement a specific trait. There are two places you can put it: argument position, and return position. trait Trait {} // argument position fn foo(arg: impl Trait) { } // return position fn foo() -> impl Trait { } Argument Position In argument position, this feature is quite simple. These two forms are almost the same: trait Trait {} fn foo<T: Trait>(arg: T) { } fn foo(arg: impl Trait) { } That is, it's a slightly shorter syntax for a generic type parameter.
It means, " arg is an argument that takes any type that implements the Trait trait." However, there's also an important technical difference between T: Trait and impl Trait here. When you write the former, you can specify the type of T at the call site with turbo-fish syntax as with foo::<usize>(1). In the case of impl Trait, if it is used anywhere in the function definition, then you can't use turbo-fish at all. Therefore, you should be mindful that changing both from and to impl Trait can constitute a breaking change for the users of your code. Return Position In return position, this feature is more interesting. It means "I am returning some type that implements the Trait trait, but I'm not going to tell you exactly what the type is." Before impl Trait, you could do this with trait objects: #![allow(unused)] fn main() { trait Trait {} impl Trait for i32 {} fn returns_a_trait_object() -> Box<dyn Trait> { Box::new(5) } } However, this has some overhead: the Box<T> means that there's a heap allocation here, and this will use dynamic dispatch. See the dyn Trait section for an explanation of this syntax. But we only ever return one possible thing here, the Box<i32>. This means that we're paying for dynamic dispatch, even though we don't use it! With impl Trait, the code above could be written like this: #![allow(unused)] fn main() { trait Trait {} impl Trait for i32 {} fn returns_a_trait_object() -> impl Trait { 5 } } Here, we have no Box<T>, no trait object, and no dynamic dispatch. But we still can obscure the i32 return type. With i32, this isn't super useful. But there's one major place in Rust where this is much more useful: closures. impl Trait and closures If you need to catch up on closures, check out their chapter in the book. In Rust, closures have a unique, un-writable type. They do implement the Fn family of traits, however. 
This means that previously, the only way to return a closure from a function was to use a trait object: #![allow(unused)] fn main() { fn returns_closure() -> Box<dyn Fn(i32) -> i32> { Box::new(|x| x + 1) } } You couldn't write the type of the closure, only use the Fn trait. That means that the trait object is necessary. However, with impl Trait: #![allow(unused)] fn main() { fn returns_closure() -> impl Fn(i32) -> i32 { |x| x + 1 } } We can now return closures by value, just like any other type! More details The above is all you need to know to get going with impl Trait, but for some more nitty-gritty details: type parameters and impl Trait work slightly differently when they're in argument position versus return position. Consider this function: fn foo<T: Trait>(x: T) { When you call it, you set the type, T. "you" being the caller here. This signature says "I accept any type that implements Trait." ("any type" == universal in the jargon) This version: fn foo<T: Trait>() -> T { is similar, but also different. You, the caller, provide the type you want, T, and then the function returns it. You can see this in Rust today with things like parse or collect: let x: i32 = "5".parse()?; let x: u64 = "5".parse()?; Here, .parse has this signature: pub fn parse<F>(&self) -> Result<F, <F as FromStr>::Err> where F: FromStr, Same general idea, though with a result type and FromStr has an associated type... anyway, you can see how F is in the return position here. So you have the ability to choose. With impl Trait, you're saying "hey, some type exists that implements this trait, but I'm not gonna tell you what it is." So now, the caller can't choose, and the function itself gets to choose. If we tried to define parse with Result<impl F,... as the return type, it wouldn't work. Using impl Trait in more places As previously mentioned, as a start, you will only be able to use impl Trait as the argument or return type of a free or inherent function. 
However, impl Trait can't be used inside implementations of traits, nor can it be used as the type of a let binding or inside a type alias. Some of these restrictions will eventually be lifted. For more information, see the tracking issue on impl Trait. dyn Trait for trait objects The dyn Trait feature is the new syntax for using trait objects. In short: - Box<Trait> becomes Box<dyn Trait> - &Trait and &mut Trait become &dyn Trait and &mut dyn Trait And so on. In code: #![allow(unused)] fn main() { trait Trait {} impl Trait for i32 {} // old fn function1() -> Box<Trait> { unimplemented!() } // new fn function2() -> Box<dyn Trait> { unimplemented!() } } That's it! More details Using just the trait name for trait objects turned out to be a bad decision. The current syntax is often ambiguous and confusing, even to veterans, and favors a feature that is not more frequently used than its alternatives, is sometimes slower, and often cannot be used at all when its alternatives can. Furthermore, with impl Trait arriving, "impl Trait vs dyn Trait" is much more symmetric, and therefore a bit nicer, than "impl Trait vs Trait". impl Trait is explained here. In the new edition, you should therefore prefer dyn Trait to just Trait where you need a trait object. More container types support trait objects In Rust 1.0, only certain, special types could be used to create trait objects. With Rust 1.2, that restriction was lifted, and more types became able to do this. For example, Rc<T>, one of Rust's reference-counted types: use std::rc::Rc; trait Foo {} impl Foo for i32 { } fn main() { let obj: Rc<dyn Foo> = Rc::new(5); } This code would not work with Rust 1.0, but now works. If you haven't seen the dyn syntax before, see the section on it. For versions that do not support it, replace Rc<dyn Foo> with Rc<Foo>.
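To see the same idea with another container and a trait of our own (the Shape trait below is a made-up example, not from the guide), trait objects behind Rc compose with ordinary collections:

```rust
use std::rc::Rc;

trait Shape {
    fn area(&self) -> f64;
}

struct Square(f64);
struct Circle(f64);

impl Shape for Square {
    fn area(&self) -> f64 { self.0 * self.0 }
}

impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.0 * self.0 }
}

fn main() {
    // A heterogeneous collection of trait objects behind `Rc`.
    let shapes: Vec<Rc<dyn Shape>> = vec![Rc::new(Square(2.0)), Rc::new(Circle(1.0))];
    let total: f64 = shapes.iter().map(|s| s.area()).sum();
    println!("total area: {:.2}", total); // 4 + pi, about 7.14
}
```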
Associated constants You can define traits, structs, and enums that have “associated functions”: struct Struct; impl Struct { fn foo() { println!("foo is an associated function of Struct"); } } fn main() { Struct::foo(); } These are called “associated functions” because they are functions that are associated with the type, that is, they’re attached to the type itself, and not any particular instance. Rust 1.20 adds the ability to define “associated constants” as well: struct Struct; impl Struct { const ID: u32 = 0; } fn main() { println!("the ID of Struct is: {}", Struct::ID); } That is, the constant ID is associated with Struct. Like functions, associated constants work with traits and enums as well. Traits have an extra ability with associated constants that gives them some extra power. With a trait, you can use an associated constant in the same way you’d use an associated type: by declaring it, but not giving it a value. The implementor of the trait then declares its value upon implementation: trait Trait { const ID: u32; } struct Struct; impl Trait for Struct { const ID: u32 = 5; } fn main() { println!("{}", Struct::ID); } Before this feature, if you wanted to make a trait that represented floating point numbers, you’d have to write this: #![allow(unused)] fn main() { trait Float { fn nan() -> Self; fn infinity() -> Self; // ... } } This is slightly unwieldy, but more importantly, because they’re functions, they cannot be used in constant expressions, even though they only return a constant. Because of this, a design for Float would also have to include constants as well: mod f32 { const NAN: f32 = 0.0f32 / 0.0f32; const INFINITY: f32 = 1.0f32 / 0.0f32; impl Float for f32 { fn nan() -> Self { f32::NAN } fn infinity() -> Self { f32::INFINITY } } } Associated constants let you do this in a much cleaner way. This trait definition: #![allow(unused)] fn main() { trait Float { const NAN: Self; const INFINITY: Self; // ... 
} } Leads to this implementation: mod f32 { impl Float for f32 { const NAN: f32 = 0.0f32 / 0.0f32; const INFINITY: f32 = 1.0f32 / 0.0f32; } } much cleaner, and more versatile. No more anonymous trait parameters In accordance with RFC #1685, parameters in trait method declarations are no longer allowed to be anonymous. For example, in the 2015 edition, this was allowed: #![allow(unused)] fn main() { trait Foo { fn foo(&self, u8); } } In the 2018 edition, all parameters must be given an argument name (even if it's just _): #![allow(unused)] fn main() { trait Foo { fn foo(&self, baz: u8); } } Slice patterns Have you ever tried to pattern match on the contents and structure of a slice? Rust 2018 will let you do just that. For example, say we want to accept a list of names and respond to that with a greeting. With slice patterns, we can do that easy as pie with: fn main() { greet(&[]); // output: Bummer, there's no one here :( greet(&["Alan"]); // output: Hey, there Alan! You seem to be alone. greet(&["Joan", "Hugh"]); // output: Hello, Joan and Hugh. Nice to see you are exactly 2! greet(&["John", "Peter", "Stewart"]); // output: Hey everyone, we seem to be 3 here today. } fn greet(people: &[&str]) { match people { [] => println!("Bummer, there's no one here :("), [only_one] => println!("Hey, there {}! You seem to be alone.", only_one), [first, second] => println!( "Hello, {} and {}. Nice to see you are exactly 2!", first, second ), _ => println!("Hey everyone, we seem to be {} here today.", people.len()), } } Now, you don't have to check the length first. We can also match on arrays like so: #![allow(unused)] fn main() { let arr = [1, 2, 3]; assert_eq!("ends with 3", match arr { [_, _, 3] => "ends with 3", [a, b, c] => "ends with something else", }); } More details Exhaustive patterns In the first example, note in particular the _ => ... pattern. Since we are matching on a slice, it could be of any length, so we need a "catch all pattern" to handle it. 
If we forgot the _ => ... or identifier => ... pattern, we would instead get an error saying: error[E0004]: non-exhaustive patterns: `&[_, _, _]` not covered If we added a case for a slice of size 3 we would instead get: error[E0004]: non-exhaustive patterns: `&[_, _, _, _]` not covered and so on... Arrays and exact lengths In the second example above, since arrays in Rust are of known lengths, we have to match on exactly three elements. If we try to match on 2 or 4 elements, we get the errors: error[E0527]: pattern requires 2 elements but array has 3 and error[E0527]: pattern requires 4 elements but array has 3 In the pipeline When it comes to slice patterns, more advanced forms are planned but have not been stabilized yet. To learn more, follow the tracking issue. Ownership and lifetimes In this chapter of the guide, we discuss a few improvements to ownership and lifetimes. One of the most notable of these is default match binding modes. Non-lexical lifetimes (for the 2018 and 2015 editions) The borrow checker has been enhanced to accept more code, via a mechanism called "non-lexical lifetimes". Better errors What if we did use y, like this? fn main() { let mut x = 5; let y = &x; let z = &mut x; println!("y: {}", y); } Here's the error: error[E0502]: cannot borrow `x` as mutable because it is also borrowed as immutable --> src/main.rs:5:18 | 4 | let y = &x; | - immutable borrow occurs here 5 | let z = &mut x; | ^ mutable borrow occurs here ... 8 | } | - immutable borrow ends here With non-lexical lifetimes, the error changes slightly: ... Default match bindings Have you ever had a borrowed Option<T> and tried to match on it?
You probably wrote this: let s: &Option<String> = &Some("hello".to_string()); match s { Some(s) => println!("s is: {}", s), _ => (), }; In Rust 2015, this would fail to compile, and you would have to write the following instead: // Rust 2015 let s: &Option<String> = &Some("hello".to_string()); match s { &Some(ref s) => println!("s is: {}", s), _ => (), }; Rust 2018, by contrast, will infer the &s and refs, and your original code will Just Work. This affects not just match, but patterns everywhere, such as in let statements, closure arguments, and for loops. More details The mental model of patterns has shifted a bit with this change, to bring it into line with other aspects of the language. For example, when writing a for loop, you can iterate over borrowed contents of a collection by borrowing the collection itself: let my_vec: Vec<i32> = vec![0, 1, 2]; for x in &my_vec { ... } The idea is that an &T can be understood as a borrowed view of T, and so when you iterate, match, or otherwise destructure a &T you get a borrowed view of its internals as well. More formally, patterns have a "binding mode," which is either by value ( x), by reference ( ref x), or by mutable reference ( ref mut x). In Rust 2015, match always started in by-value mode, and required you to explicitly write ref or ref mut in patterns to switch to a borrowing mode. In Rust 2018, the type of the value being matched informs the binding mode, so that if you match against an &Option<String> with a Some variant, you are put into ref mode automatically, giving you a borrowed view of the internal data. Similarly, &mut Option<String> would give you a ref mut view. '_, the anonymous lifetime Rust 2018 allows you to explicitly mark where a lifetime is elided, using the special lifetime '_. Say we have a simple wrapper around &str: #![allow(unused)] fn main() { struct StrWrap<'a>(&'a str); } In Rust 2015, you might have written: #![allow(unused)] fn main() { // Rust 2015 struct Foo<'a, 'b: 'a> { field: &'a &'b str, } impl<'a, 'b: 'a> Foo<'a, 'b> { // some methods... } } We can rewrite this as: #![allow(unused)] ...
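The passage above introduces StrWrap and the anonymous lifetime '_; here is a sketch of how '_ appears in signatures and impl headers (our code, not necessarily the guide's exact snippet):

```rust
struct StrWrap<'a>(&'a str);

// `'_` makes the elided lifetime visible in the return type.
fn make_wrap(string: &str) -> StrWrap<'_> {
    StrWrap(string)
}

struct Foo<'a, 'b> {
    field: &'a &'b str,
}

// The anonymous lifetime also works in `impl` headers,
// replacing the `impl<'a, 'b: 'a> Foo<'a, 'b>` spelling.
impl Foo<'_, '_> {
    fn field_len(&self) -> usize {
        self.field.len()
    }
}

fn main() {
    let wrapped = make_wrap("hello");
    let s = "world";
    let foo = Foo { field: &s };
    println!("{} {}", wrapped.0, foo.field_len());
}
```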
Lifetime elision in impl When writing impl blocks, you can now elide lifetime annotations in some situations. Consider a trait like MyIterator: trait MyIterator { type Item; fn next(&mut self) -> Option<Self::Item>; } In Rust 2015, if we wanted to implement this iterator for mutable references to Iterators, we'd need to write this: impl<'a, I: MyIterator> MyIterator for &'a mut I { type Item = I::Item; fn next(&mut self) -> Option<Self::Item> { (*self).next() } } Note all of the 'a annotations. In Rust 2018, we can write this: impl<I: MyIterator> MyIterator for &mut I { type Item = I::Item; fn next(&mut self) -> Option<Self::Item> { (*self).next() } } Similarly, lifetime annotations can appear due to a struct that contains references: struct SetOnDrop<'a, T> { borrow: &'a mut T, value: Option<T>, } In Rust 2015, to implement Drop on this struct, we'd write: impl<'a, T> Drop for SetOnDrop<'a, T> { fn drop(&mut self) { if let Some(x) = self.value.take() { *self.borrow = x; } } } But in Rust 2018, we can combine elision with the anonymous lifetime and write this instead: impl<T> Drop for SetOnDrop<'_, T> { fn drop(&mut self) { if let Some(x) = self.value.take() { *self.borrow = x; } } } T: 'a inference in structs An annotation in the form of T: 'a, where T is either a type or another lifetime, is called an "outlives" requirement. Note that "outlives" also implies 'a: 'a. One way in which edition 2018 helps you out in maintaining flow when writing programs is by removing the need to explicitly annotate these T: 'a outlives requirements in struct definitions. Instead, the requirements will be inferred from the fields present in the definitions.
Consider the following struct definitions in Rust 2015: #![allow(unused)] fn main() { // Rust 2015 struct Ref<'a, T: 'a> { field: &'a T } // or written with a `where` clause: struct WhereRef<'a, T> where T: 'a { data: &'a T } // with nested references: struct RefRef<'a, 'b: 'a, T: 'b> { field: &'a &'b T, } // using an associated type: struct ItemRef<'a, T: Iterator> where T::Item: 'a { field: &'a T::Item } } In Rust 2018, since the requirements are inferred, you can instead write: // Rust 2018 struct Ref<'a, T> { field: &'a T } struct WhereRef<'a, T> { data: &'a T } struct RefRef<'a, 'b, T> { field: &'a &'b T, } struct ItemRef<'a, T: Iterator> { field: &'a T::Item } If you prefer to be more explicit in some cases, that is still possible. More details For more details, see the tracking issue and the RFC. Simpler lifetimes in static and const In older Rust, you had to explicitly write the 'static lifetime in any static or const that needed a lifetime: #![allow(unused)] fn main() { mod foo { const NAME: &'static str = "Ferris"; } mod bar { static NAME: &'static str = "Ferris"; } } But 'static is the only possible lifetime there. So Rust now assumes the 'static lifetime, and you don't have to write it out: #![allow(unused)] fn main() { mod foo { const NAME: &str = "Ferris"; } mod bar { static NAME: &str = "Ferris"; } } In some situations, this can remove a lot of boilerplate: #![allow(unused)] fn main() { mod foo { // old const NAMES: &'static [&'static str; 2] = &["Ferris", "Bors"]; } mod bar { // new const NAMES: &[&str; 2] = &["Ferris", "Bors"]; } } Data types In this chapter of the guide, we discuss a few improvements to data types. One of these is field init shorthand.
Field init shorthand In older Rust, when initializing a struct, you must always give the full set of key: value pairs for its fields: #![allow(unused)] fn main() { struct Point { x: i32, y: i32, } let a = 5; let b = 6; let p = Point { x: a, y: b, }; } However, often these variables would have the same names as the fields. So you'd end up with code that looks like this: let p = Point { x: x, y: y, }; Now, if the variable is of the same name, you don't have to write out both, just write out the key: #![allow(unused)] fn main() { struct Point { x: i32, y: i32, } let x = 5; let y = 6; // new let p = Point { x, y, }; } ..= for inclusive ranges Since well before Rust 1.0, you’ve been able to create exclusive ranges with .. like this: #![allow(unused)] fn main() { for i in 1..3 { println!("i: {}", i); } } This will print i: 1 and then i: 2. Today, you can now create an inclusive range, like this: #![allow(unused)] fn main() { for i in 1..=3 { println!("i: {}", i); } } This will print i: 1 and then i: 2 like before, but also i: 3; the three is included in the range. Inclusive ranges are especially useful if you want to iterate over every possible value in a range. For example, this is a surprising Rust program: fn takes_u8(x: u8) { // ... } fn main() { for i in 0..256 { println!("i: {}", i); takes_u8(i); } } What does this program do? The answer: it fails to compile. The error we get when compiling has a hint: error: literal out of range for u8 --> src/main.rs:6:17 | 6 | for i in 0..256 { | ^^^ | = note: #[deny(overflowing_literals)] on by default That’s right, since i is a u8, this overflows, and the compiler produces an error. We can do this with inclusive ranges, however: fn takes_u8(x: u8) { // ... } fn main() { for i in 0..=255 { println!("i: {}", i); takes_u8(i); } } This will produce those 256 lines of output you might have been expecting. 128 bit integers A very simple feature: Rust now has 128 bit integers! 
#![allow(unused)] fn main() { let x: i128 = 0; let y: u128 = 0; } These are twice the size of u64, and so can hold more values. More specifically: - u128: 0 to 340,282,366,920,938,463,463,374,607,431,768,211,455 - i128: −170,141,183,460,469,231,731,687,303,715,884,105,728 to 170,141,183,460,469,231,731,687,303,715,884,105,727 Whew! "Operator-equals" are now implementable The various “operator equals” operators, such as += and -=, are implementable via various traits. For example, to implement += on a type of your own: use std::ops::AddAssign; #[derive(Debug)] struct Count { value: i32, } impl AddAssign for Count { fn add_assign(&mut self, other: Count) { self.value += other.value; } } fn main() { let mut c1 = Count { value: 1 }; let c2 = Count { value: 5 }; c1 += c2; println!("{:?}", c1); } This will print Count { value: 6 }. union for an unsafe form of enum Rust now supports unions: #![allow(unused)] fn main() { union MyUnion { f1: u32, f2: f32, } } Unions are kind of like enums, but they are “untagged”. Enums have a “tag” that stores which variant is the correct one at runtime; unions don't have this tag. Since we can interpret the data held in the union using the wrong variant and Rust can’t check this for us, that means reading a union’s field is unsafe: #![allow(unused)] fn main() { union MyUnion { f1: u32, f2: f32, } let mut u = MyUnion { f1: 1 }; u.f1 = 5; let value = unsafe { u.f1 }; } Pattern matching works too: #![allow(unused)] fn main() { union MyUnion { f1: u32, f2: f32, } ... } Unions also simplify Rust implementations of space-efficient or cache-efficient structures relying on value representation, such as machine-word-sized unions using the least-significant bits of aligned pointers to distinguish cases. There are still more improvements to come. For now, unions can only include Copy types and may not implement Drop. We expect to lift these restrictions in the future. Choosing alignment with the repr attribute
The #[repr] attribute has a new parameter, align, that sets the alignment of your struct: #![allow(unused)] fn main() { #[repr(align(16))] struct Aligned(i32); } Normally, you don't need to worry about a type's alignment, as the compiler will "do the right thing" and pick an appropriate alignment for general use cases. There are situations, however, where a nonstandard alignment may be desired when operating with foreign systems. For example these sorts of situations tend to necessitate or be much easier with a custom alignment: - Hardware can often have obscure requirements such as "this structure is aligned to 32 bytes" when it in fact is only composed of 4-byte values. While this can typically be manually calculated and managed, it's often also useful to express this as a property of a type to get the compiler to do a little extra work instead. - C compilers like gcc and clang offer the ability to specify a custom alignment for structures, and Rust can much more easily interoperate with these types if Rust can also mirror the request for a custom alignment (e.g. passing a structure to C correctly is much easier). - Custom alignment can often be used for various tricks here and there and is often convenient as a "let's play around with an implementation" tool. For example this can be used to statically allocate page tables in a kernel or create an at-least cache-line-sized structure easily for concurrent programming. The purpose of this feature is to provide a lightweight annotation to alter the compiler-inferred alignment of a structure to enable these situations much more easily. SIMD for faster computing The basics of SIMD are now available! SIMD stands for "single instruction, multiple data." Consider a function like this: #![allow(unused)] fn main() { pub fn foo(a: &[u8], b: &[u8], c: &mut [u8]) { for ((a, b), c) in a.iter().zip(b).zip(c) { *c = *a + *b; } } } Here, we're taking two slices, and adding the numbers together, placing the result in a third slice.
The simplest possible way to do this would be to do exactly what the code does, and loop through each set of elements, add them together, and store it in the result. However, compilers can often do better. LLVM will usually "autovectorize" code like this, which is a fancy term for "use SIMD." Imagine that a and b were both 16 elements long. Each element is a u8, and so that means that each slice would be 128 bits of data. Using SIMD, we could put both a and b into 128 bit registers, add them together in a single instruction, and then copy the resulting 128 bits into c. That'd be much faster! While stable Rust has always been able to take advantage of autovectorization, sometimes, the compiler just isn't smart enough to realize that we can do something like this. Additionally, not every CPU has these features, and so LLVM may not use them, so that your program can run on a wide variety of hardware. The std::arch module allows us to use these kinds of instructions directly, which means we don't need to rely on a smart compiler. Additionally, it includes some features that allow us to choose a particular implementation based on various criteria. For example, we can use cfg flags to choose the correct version at compile time, based on the machine we're targeting: on x86 we use the x86 version, and on x86_64 we use its version. We can also choose at runtime: fn foo() { #[cfg(any(target_arch = "x86", target_arch = "x86_64"))] { if is_x86_feature_detected!("avx2") { return unsafe { foo_avx2() }; } } foo_fallback(); } Here, we have two versions of the function: one which uses AVX2, a specific kind of SIMD feature that lets you do 256-bit operations. The is_x86_feature_detected! macro will generate code that detects if your CPU supports AVX2, and if so, calls the foo_avx2 function. If not, then we fall back to a non-AVX implementation, foo_fallback. This means that our code will run super fast on CPUs that support AVX2, but still work on ones that don't, albeit slower.
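As a concrete, testable sketch of this dispatch pattern (the function names and the summing logic here are illustrative, not from the guide; the "accelerated" body is just a stand-in for real std::arch intrinsics):

```rust
// Hypothetical AVX2 path: #[target_feature] lets the compiler emit
// AVX2 instructions inside this one function, so calling it is only
// safe after we've checked that the CPU actually supports AVX2.
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
#[target_feature(enable = "avx2")]
unsafe fn sum_avx2(a: &[u8]) -> u32 {
    // Stand-in body; a real version would use std::arch intrinsics.
    a.iter().map(|&x| x as u32).sum()
}

// Portable fallback that works on every CPU.
fn sum_fallback(a: &[u8]) -> u32 {
    a.iter().map(|&x| x as u32).sum()
}

pub fn sum(a: &[u8]) -> u32 {
    // Compile-time gate: this block only exists on x86/x86_64.
    #[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
    {
        // Runtime gate: only take the fast path if the CPU has AVX2.
        if is_x86_feature_detected!("avx2") {
            return unsafe { sum_avx2(a) };
        }
    }
    sum_fallback(a)
}

fn main() {
    assert_eq!(sum(&[1, 2, 3, 4]), 10);
}
```

Either branch computes the same result; only the generated machine code differs, which is exactly the property you want from this pattern.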
If all of this seems a bit low-level and fiddly, well, it is! std::arch is specifically primitives for building these kinds of things. We hope to eventually stabilize a std::simd module with higher-level stuff in the future. But landing the basics now lets the ecosystem experiment with higher level libraries starting today. For example, check out the faster crate. Here’s a code snippet with no SIMD: let lots_of_3s = (&[-123.456f32; 128][..]).iter() .map(|v| { 9.0 * v.abs().sqrt().sqrt().recip().ceil().sqrt() - 4.0 - 2.0 }) .collect::<Vec<f32>>(); To use SIMD with this code via faster, you’d change it to this: let lots_of_3s = (&[-123.456f32; 128][..]).simd_iter() .simd_map(f32s(0.0), |v| { f32s(9.0) * v.abs().sqrt().rsqrt().ceil().sqrt() - f32s(4.0) - f32s(2.0) }) .scalar_collect(); It looks almost the same: simd_iter instead of iter, simd_map instead of map, f32s(2.0) instead of 2.0. But you get a SIMD-ified version generated for you. Beyond that, you may never write any of this yourself, but as always, the libraries you depend on may. For example, the regex crate contains these SIMD speedups without you needing to do anything at all! Macros In this chapter of the guide, we discuss a few improvements to the macro system. A notable addition here is the introduction of custom derive macros. Custom Derive In Rust, you’ve always been able to automatically implement some traits through the derive attribute: #![allow(unused)] fn main() { #[derive(Debug)] struct Pet { name: String, } } The Debug trait is then implemented for Pet, with vastly less boilerplate. For example, without derive, you'd have to write this: #![allow(unused)] fn main() { use std::fmt; struct Pet { name: String, } impl fmt::Debug for Pet { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { match self { Pet { name } => { let mut debug_trait_builder = f.debug_struct("Pet"); let _ = debug_trait_builder.field("name", name); debug_trait_builder.finish() } } } } } Whew! 
However, this only worked for traits provided as part of the standard library; it was not customizable. But now, you can tell Rust what to do when someone wants to derive your trait. This is used heavily in popular crates like serde and Diesel. For more, including learning how to build your own custom derive, see The Rust Programming Language. Macro changes macro_rules! style macros In Rust 2018, you can import specific macros from external crates via use statements, rather than the old #[macro_use] attribute. For example, consider a bar crate that implements a baz! macro. In src/lib.rs: #![allow(unused)] fn main() { #[macro_export] macro_rules! baz { () => () } } In your crate, you would have written // Rust 2015 #[macro_use] extern crate bar; fn main() { baz!(); } Now, you write: // Rust 2018 use bar::baz; fn main() { baz!(); } This moves macro_rules macros to be a bit closer to other kinds of items. Note that you'll still need #[macro_use] to use macros you've defined in your own crate; this feature only works for importing macros from external crates. Procedural macros When using procedural macros to derive traits, you will have to name the macro that provides the custom derive. This generally matches the name of the trait, but check with the documentation of the crate providing the derives to be sure. For example, with Serde you would have written // Rust 2015 extern crate serde; #[macro_use] extern crate serde_derive; #[derive(Serialize, Deserialize)] struct Bar; Now, you write instead: // Rust 2018 use serde_derive::{Serialize, Deserialize}; #[derive(Serialize, Deserialize)] struct Bar; More details This only works for macros defined in external crates. For macros defined locally, #[macro_use] mod foo; is still required, as it was in Rust 2015. Local helper macros Sometimes it is helpful or necessary to have helper macros inside your module. This can make supporting both versions of rust more complicated. 
For example, let's make a simplified (and slightly contrived) version of the log crate in 2015 edition style: #![allow(unused)] fn main() { /// How important/severe the log message is. #[derive(Copy, Clone, Debug)] pub enum LogLevel { Warn, Error } // A helper macro, called by the public macros below. #[doc(hidden)] #[macro_export] macro_rules! __impl_log { ($level:expr, $msg:expr) => {{ println!("{:?}: {}", $level, $msg) }} } /// Warning level log message #[macro_export] macro_rules! warn { ($($args:tt)*) => { __impl_log!($crate::LogLevel::Warn, format_args!($($args)*)) } } /// Error level log message #[macro_export] macro_rules! error { ($($args:tt)*) => { __impl_log!($crate::LogLevel::Error, format_args!($($args)*)) } } } Our __impl_log! macro is an implementation detail of our crate, but it needs to be exported as it is called by other macros, and in the 2015 edition all used macros must be exported. Now, in 2018 this example will not compile: use log::error; fn main() { error!("error message"); } will give an error message about not finding the __impl_log! macro. This is because unlike in the 2015 edition, macros are namespaced and we must import them. We could do use log::{__impl_log, error}; which would make our code compile, but __impl_log is meant to be an implementation detail! Macros with the $crate:: prefix The cleanest way to handle this situation is to use the $crate:: prefix for macros, the same as you would for any other path. Versions of the compiler >= 1.30 will handle this in both editions: #![allow(unused)] fn main() { macro_rules! warn { ($($args:tt)*) => { $crate::__impl_log!($crate::LogLevel::Warn, format_args!($($args)*)) } } // ... } However, this will not work for older versions of the compiler that don't understand the $crate:: prefix for macros. Macros using local_inner_macros We also have the local_inner_macros modifier that we can add to our #[macro_export] attribute. This has the advantage of working with older rustc versions (older versions just ignore the extra modifier). The downside is that it's a bit messier: #[macro_export(local_inner_macros)] macro_rules! warn { ($($args:tt)*) => { __impl_log!($crate::LogLevel::Warn, format_args!($($args)*)) } } So the code knows to look for any macros used locally.
But wait - this won't compile, because we use the format_args! macro that isn't in our local crate (hence the convoluted example). The solution is to add a level of indirection: we create a macro that wraps format_args, but is local to our crate. That way everything works in both editions (sadly we have to pollute the global namespace a bit, but that's ok). #![allow(unused)] fn main() { // I've used the pattern `_<my crate name>__<macro name>` to name this macro, hopefully avoiding // name clashes. #[doc(hidden)] #[macro_export] macro_rules! _log__format_args { ($($inner:tt)*) => { format_args! { $($inner)* } } } } Here we're using the most general macro pattern possible, a list of token trees. We just pass whatever tokens we get to the inner macro, and rely on it to report errors. So the full 2015/2018 working example would be: #![allow(unused)] fn main() { /// Warning level log message #[macro_export(local_inner_macros)] macro_rules! warn { ($($args:tt)*) => { __impl_log!($crate::LogLevel::Warn, _log__format_args!($($args)*)) } } /// Error level log message #[macro_export(local_inner_macros)] macro_rules! error { ($($args:tt)*) => { __impl_log!($crate::LogLevel::Error, _log__format_args!($($args)*)) } } #[doc(hidden)] #[macro_export] macro_rules! _log__format_args { ($($inner:tt)*) => { format_args! { $($inner)* } } } } Once everyone is using a rustc version >= 1.30, we can all just use the $crate:: method (2015 crates are guaranteed to carry on compiling fine with later versions of the compiler). We need to wait for package managers and larger organisations to update their compilers before this happens, so in the meantime we can use the local_inner_macros method to support everybody. :) At most one repetition In Rust 2018, we have made a couple of changes to the macros-by-example syntax. - We have added a new Kleene operator ? which means "at most one" repetition.
This operator does not accept a separator token. - We have disallowed using ? as a separator, to remove ambiguity with the ? operator. For example, consider the following Rust 2015 code: #![allow(unused)] fn main() { macro_rules! foo { ($a:ident, $b:expr) => { println!("{}", $a); println!("{}", $b); }; ($a:ident) => { println!("{}", $a); } } } Macro foo can be called with 1 or 2 arguments; the second one is optional, but you need a whole other matcher to represent this possibility. This is annoying if your matchers are long. In Rust 2018, one can simply write the following: #![allow(unused)] fn main() { macro_rules! foo { ($a:ident $(, $b:expr)?) => { println!("{}", $a); $( println!("{}", $b); )? } } } The compiler In this chapter of the guide, we discuss a few improvements to the compiler. A notable addition here is our new and improved error messages. Improved error messages We're always working on error improvements, and there are little improvements in almost every Rust version, but in Rust 1.12, a significant overhaul of the error message system landed. For example, here's some code that produces an error: fn main() { let mut x = 5; let y = &x; x += 1; println!("{} {}", x, y); } Here's the error in Rust 1.11: foo.rs:4:5: 4:11 error: cannot assign to `x` because it is borrowed [E0506] foo.rs:4 x += 1; ^~~~~~ foo.rs:3:14: 3:15 note: borrow of `x` occurs here foo.rs:3 let y = &x; ^ foo.rs:4:5: 4:11 help: run `rustc --explain E0506` to see a detailed explanation error: aborting due to previous error Here's the error in Rust 1.28: error[E0506]: cannot assign to `x` because it is borrowed --> foo.rs:4:5 | 3 | let y = &x; | - borrow of `x` occurs here 4 | x += 1; | ^^^^^^ assignment to borrowed `x` occurs here error: aborting due to previous error For more information about this error, try `rustc --explain E0506`. This error isn't terribly different, but shows off how the format has changed.
It shows off your code in context, rather than just showing the text of the lines themselves. Incremental Compilation Back in September of 2016, we blogged about Incremental Compilation. While that post goes into the details, the basic idea is this: when recompiling, the compiler reuses the parts of the previous build that are still valid rather than redoing work it has already done. Incremental compilation is now turned on by default. This means that your builds should be faster! Don't forget about cargo check when trying to get the lowest possible build times. This is still not the end story for compiler performance generally, nor incremental compilation specifically. We have a lot more work planned in the future. One small note about this change: it makes builds faster, but makes the final binary a bit slower. For this reason, it's not turned on in release builds. An attribute for deprecation If you're writing a library, and you'd like to deprecate something, you can use the deprecated attribute: #![allow(unused)] fn main() { #[deprecated( since = "0.2.1", note = "Please use the bar function instead" )] pub fn foo() { // ... } } This will give your users a warning if they use the deprecated functionality: Compiling playground v0.0.1 () warning: use of deprecated item 'foo': Please use the bar function instead --> src/main.rs:10:5 | 10 | foo(); | ^^^ | = note: #[warn(deprecated)] on by default Both since and note are optional. since can be in the future; you can put whatever you'd like, and what's put in there isn't checked. Rustup for managing Rust versions Rustup has its own versioning scheme and works with all Rust versions. It has become the recommended way to install Rust, and is advertised on our website. Its powers go further than that though, allowing you to manage various versions, components, and platforms. For installing Rust To install Rust through Rustup, visit the Rustup website, which will let you know how to do so on your platform. This will install both rustup itself and the stable version of rustc and cargo.
To install a specific Rust version, you can use rustup toolchain install: $ rustup toolchain install 1.30.0 This works for a specific nightly, as well: $ rustup toolchain install nightly-2018-08-01 As well as any of our release channels: $ rustup toolchain install stable $ rustup toolchain install beta $ rustup toolchain install nightly For updating your installation To update all of the various channels you may have installed: $ rustup update This will look at everything you've installed, and if there are new releases, will update anything that has one. Managing versions To set the default toolchain to something other than stable: $ rustup default nightly To uninstall a specific Rust version, you can use rustup toolchain uninstall: $ rustup toolchain uninstall 1.30.0 To use a toolchain other than the default, use rustup run: $ rustup run nightly cargo build There's also an alias for this that's a little shorter: $ cargo +nightly build If you'd like to have a different default per-directory, that's easy too! If you run this inside of a project: $ rustup override set nightly Or, if you'd like to target a different version of Rust: $ rustup override set 1.30.0 Then when you're in that directory, any invocations of rustc or cargo will use that toolchain. To share this with others, you can create a rust-toolchain file with the contents of a toolchain, and check it into source control. Now, when someone clones your project, they'll get the right version without needing to override set themselves. Installing other targets Rust supports cross-compiling to other targets, and Rustup can help you manage them. For example, to use MUSL: $ rustup target add x86_64-unknown-linux-musl And then you can $ cargo build --target=x86_64-unknown-linux-musl To see the full list of targets you can install: $ rustup target list Installing components Components are used to install certain kinds of tools. 
While cargo-install has you covered for most tools, some tools need deep integration into the compiler. Rustup knows exactly what version of the compiler you're using, and so it's got just the information that these tools need. Components are per-toolchain, so if you want them to be available to more than one toolchain, you'll need to install them multiple times. In the following examples, add a --toolchain flag, set to the toolchain you want to install for, nightly for example. Without this flag, it will install the component for the default toolchain. To see the full list of components you can install: $ rustup component list Next, let's talk about some popular components and when you might want to install them. rust-docs, for local documentation This first component is installed by default when you install a toolchain. It contains a copy of Rust's documentation, so that you can read it offline. This component cannot be removed for now; if that's of interest, please comment on this issue. rust-src for a copy of Rust's source code The rust-src component can give you a local copy of Rust's source code. Why might you need this? Well, autocompletion tools like Racer use this information to know more about the functions you're trying to call. $ rustup component add rust-src rustfmt for automatic code formatting If you'd like to have your code automatically formatted, you can install this component: $ rustup component add rustfmt This will install two tools, rustfmt and cargo-fmt, that will reformat your code for you! For example: $ cargo fmt will reformat your entire Cargo project. rls for IDE integration Many IDE features are built off of the Language Server Protocol. To gain support for Rust with these IDEs, you'll need to install the Rust language server, aka the "RLS": $ rustup component add rls For more information about integrating this into your IDE, see the RLS documentation.
clippy for more lints For even more lints to help you write Rust code, you can install clippy: $ rustup component add clippy This will install cargo-clippy for you: $ cargo clippy For more, check out clippy's documentation. The "preview" components There are several components in a "preview" stage. These components currently have -preview in their name, and this indicates that they're not quite 100% ready for general consumption yet. Please try them out and give us feedback, but know that they do not follow Rust's stability guarantees, and are still actively changing, possibly in backwards-incompatible ways. llvm-tools-preview for using extra LLVM tools If you'd like to use the lld linker, or other tools like llvm-objdump or llvm-objcopy, you can install this component: $ rustup component add llvm-tools-preview This is the newest component, and so doesn't have good documentation at the moment. Cargo and crates.io In this chapter of the guide, we discuss a few improvements to cargo and crates.io. A notable addition here is the new cargo check command. cargo install for easy installation of tools Cargo has grown a new install command. This is intended to be used for installing new subcommands for Cargo, or tools for Rust developers. This doesn't replace the need to build real, native packages for end-users on the platforms you support. For example, this guide is created with mdbook. You can install it on your system with $ cargo install mdbook And then use it with $ mdbook --help Cargo Extensions As an example of extending Cargo, you can use the cargo-update package. To install it: $ cargo install cargo-update This will allow you to use the cargo install-update -a command, which checks everything you've cargo install'd and updates it to the latest version. cargo new defaults to a binary project; it used to default to --lib.
At the time, we made this decision because each binary (often) depends on many libraries, and so we thought the library case would be more common. In practice, though, a runnable binary is what most people want when starting a new project, so we've changed it, and it now defaults to --bin. cargo rustc for passing arbitrary flags to rustc cargo rustc is a new subcommand for Cargo that allows you to pass arbitrary rustc flags through Cargo. For example, Cargo does not have a way to pass unstable flags built-in. But if we'd like to use print-type-sizes to see what layout information our types have, we can run this: $ cargo rustc -- -Z print-type-sizes And we'll get a bunch of output describing the size of our types. Note cargo rustc only passes these flags to invocations of your crate, and not to any rustc invocations used to build dependencies. If you'd like to do that, see $RUSTFLAGS. Cargo workspaces for multi-package projects Cargo used to have two levels of organization: - A package contains one or more crates - A crate has one or more modules Cargo now has an additional level: - A workspace contains one or more packages This can be useful for larger projects. For example, the futures package is a workspace that contains many related packages: - futures - futures-util - futures-io - futures-channel and more. Workspaces allow these packages to be developed individually, but they share a single set of dependencies, and therefore have a single target directory and a single Cargo.lock. For more details about workspaces, please see the Cargo documentation. Multi-file examples Cargo has an examples feature for showing people how to use your package. By putting individual files inside of the top-level examples directory, you can create multiple examples. But what if your example is too big for a single file? Cargo supports adding sub-directories inside of examples, and looks for a main.rs inside of them to build the example.
It looks like this:

my-package
└── src
    └── lib.rs // code here
└── examples
    └── simple-example.rs // a single-file example
    └── complex-example
        └── helper.rs
        └── main.rs // a more complex example that also uses `helper` as a submodule

Replacing dependencies with patch The [patch] section of your Cargo.toml can be used when you want to override certain parts of your dependency graph. Cargo has a [replace] feature that is similar; while we don't intend to deprecate or remove [replace], you should prefer [patch] in all circumstances. So what's it look like? Let's say we have a Cargo.toml that looks like this: [dependencies] foo = "1.2.3" In addition, our foo package depends on a bar crate, and we find a bug in bar. To test this out, we'd download the source code for bar, and then update our Cargo.toml: [dependencies] foo = "1.2.3" [patch.crates-io] bar = { path = '/path/to/bar' } Now, when you cargo build, it will use the local version of bar, rather than the one from crates.io that foo depends on. You can then try out your changes, and fix that bug! For more details, see the documentation for patch. Cargo can use a local registry replacement Cargo finds its packages in a "source". The default source is crates.io. However, you can choose a different source in your .cargo/config: [source.crates-io] replace-with = 'my-awesome-registry' [source.my-awesome-registry] registry = '' This configuration means that instead of using crates.io, Cargo will query the my-awesome-registry source instead (configured to a different index here). This alternate source must be the exact same as the crates.io index. Cargo assumes that replacement sources are exact 1:1 mirrors in this respect, and the following support is designed around that assumption. When generating a lock file for a crate using a replacement registry, the original registry will be encoded into the lock file.
For example in the configuration above, all lock files will still mention crates.io as the registry that packages originated from. This semantically represents how crates.io is the source of truth for all crates, and this is upheld because all replacements have a 1:1 correspondence. Overall, this means that no matter what replacement source you're working with, you can ship your lock file to anyone else and you'll all still have verifiably reproducible builds! This has enabled tools like cargo-vendor and cargo-local-registry, which are often useful for "offline builds." They prepare the list of all Rust dependencies ahead of time, which lets you ship them to a build machine with ease. Crates.io disallows wildcard dependencies Crates.io will not allow you to upload a package with a wildcard dependency. In other words, a manifest like this will be rejected: [dependencies] regex = "*" A wildcard dependency means that you claim to work with any possible version of your dependency. This is highly unlikely to be true, and would cause unnecessary breakage in the ecosystem. Instead, depend on a version range. For example, ^ is the default, so you could use [dependencies] regex = "1.0.0" instead. >, <=, and all of the other non-* ranges work as well. Documentation In this chapter of the guide, we discuss a few improvements to documentation. A notable addition here is the second edition of "the book". New editions of "the book" We've distributed a copy of "The Rust Programming Language," affectionately nicknamed "the book", with every version of Rust since Rust 1.0. However, because it was written before Rust 1.0, it started showing its age. Many parts of the book are vague, because it was written before the true details were nailed down for the 1.0 release. It didn't do a fantastic job of teaching lifetimes. Starting with Rust 1.18, we shipped drafts of a second edition of the book. The final version was shipped with Rust 1.26.
The second edition is a complete re-write from the ground up, using the last two years of knowledge we've gained from teaching people Rust. You can purchase a printed version of the second edition from No Starch Press. Now that the print version has shipped, the second edition is frozen. As of 1.31, the book has been completely updated for the 2018 Edition release. It's still pretty close to the second edition, but contains information about newer features since the book's content was frozen. Additionally, instead of publishing separate editions of the book, only the latest version of the book is published online. You'll find brand-new explanations for a lot of Rust's core concepts, new projects to build, and all kinds of other good stuff. Please check it out and let us know what you think! The Rust Bookshelf As Rust's documentation has grown, we've gained far more than just "The book" and the reference. We now have a collection of various long-form docs, nicknamed "the Rust Bookshelf." Different resources were added at various times, and we're adding new ones as more get written. The Cargo book Historically, Cargo's docs were hosted on their own site, which didn't follow the release train model, even though Cargo itself does. This led to situations where a feature would land in Cargo nightly, the docs would be updated, and then for up to twelve weeks, users would think that it should work, but it wouldn't yet. The Cargo book is the new home of Cargo's docs, and the old site now redirects there. The rustdoc book Rustdoc, our documentation tool, now has a guide of its own, the rustdoc book. Rust By Example Rust by Example used to live on its own domain, but is now part of the Bookshelf! RBE lets you learn Rust through short code examples and exercises, as opposed to the lengthy prose of The Book. The Rustonomicon We now have a draft book, The Rustonomicon: the Dark Arts of Advanced and Unsafe Rust Programming.
From the title, I'm sure you can guess: this book discusses some advanced topics, including unsafe. It's a must-read for anyone who's working at the lowest levels with Rust. std::os has documentation for all platforms The std::os module contains operating system specific functionality. You'll now see more than just linux, the platform we build the documentation on. We've long regretted that the hosted version of the documentation has been Linux-specific; this is a first step towards rectifying that. This is specific to the standard library and not for general use; we hope to improve this further in the future. rustdoc In this chapter of the guide, we discuss a few improvements to rustdoc. A notable addition to it was that documentation tests can now compile-fail. Documentation tests can now compile-fail You can now create compile-fail tests in Rustdoc, like this: #![allow(unused)] fn main() { /// ```compile_fail /// let x = 5; /// x += 2; // shouldn't compile! /// ``` fn foo() {} } Please note that these kinds of tests can be more fragile than others, as additions to Rust may cause code to compile when it previously would not. Consider the first release with ?, for example: code using ? would fail to compile on Rust 1.21, but compile successfully on Rust 1.22, causing your test suite to start failing. Rustdoc uses CommonMark Rustdoc lets you write documentation comments in Markdown. At Rust 1.0, we were using the hoedown markdown implementation, written in C. Markdown is more of a family of implementations of an idea, and so hoedown had its own dialect, like many parsers. The CommonMark project has attempted to define a more strict version of Markdown, and so now, Rustdoc uses it by default. As of Rust 1.23, we still defaulted to hoedown, but you could enable CommonMark via a flag, --enable-commonmark. Today, we only support CommonMark.
Platform and target support In this chapter of the guide, we discuss a few improvements to platform and target support. A notable addition to it was that the libcore library now works on stable Rust. libcore for low-level Rust Rust's standard library is two-tiered: there's a small, platform-agnostic core library, libcore, and the full standard library, libstd, which builds on top of it, adding support for things like memory allocation and I/O. Applications using Rust in the embedded space, as well as those writing operating systems, often eschew libstd, using only libcore. As an additional note, while building libraries with libcore is supported today, building full applications is not yet stable. To use libcore, add this attribute to your crate root: #![no_std] This will remove the standard library, and bring the core crate into your namespace for use: #![no_std] use core::cell::Cell; You can find libcore's documentation here. WebAssembly support Rust has gained support for WebAssembly, meaning that you can run Rust code in your browser, client-side. In Rust 1.14, we gained support through emscripten. With it installed, you can write Rust code and have it produce asm.js (the precursor to wasm) and/or WebAssembly. Here's an example of using this support: $ rustup target add wasm32-unknown-emscripten $ echo 'fn main() { println!("Hello, Emscripten!"); }' > hello.rs $ rustc --target=wasm32-unknown-emscripten hello.rs $ node hello.js However, in the meantime, Rust has also grown its own support, independent from Emscripten. This is known as "the unknown target", because instead of wasm32-unknown-emscripten, it's wasm32-unknown-unknown. This will be the preferred target to use once it's ready, but for now, it's really only well-supported in nightly. MSVC toolchain support At the release of Rust 1.0, we only supported the GNU toolchain on Windows. With the release of Rust 1.2, we introduced initial support for the MSVC toolchain. After that, as support matured, we eventually made it the default choice for Windows users. The difference between the two matters for interacting with C.
If you're using a library built with one toolchain or another, you need to match that with the appropriate Rust toolchain. If you're not sure, go with MSVC; it's the default for good reason.

To use this feature, simply use Rust on Windows, and the installer will default to it. If you'd prefer to switch to the GNU toolchain, you can install it with Rustup:

$ rustup toolchain install stable-x86_64-pc-windows-gnu

MUSL support for fully static binaries

By default, Rust will statically link all Rust code. However, if you use the standard library, it will dynamically link to the system's libc implementation. If you'd like a 100% static binary, the MUSL libc can be used on Linux.

Installing MUSL support

To add support for MUSL, you need to choose the correct target. The forge has a full list of supported targets, a number of which use musl. If you're not sure what you want, it's probably x86_64-unknown-linux-musl, for 64-bit Linux. We'll be using this target in this guide, but the instructions remain the same for other targets; just change the name wherever we mention the target.

To get support for this target, you use rustup:

$ rustup target add x86_64-unknown-linux-musl

This will install support for the default toolchain; to install for other toolchains, add the --toolchain flag. For example:

$ rustup target add x86_64-unknown-linux-musl --toolchain=nightly

Building with MUSL

To use this new target, pass the --target flag to Cargo:

$ cargo build --target x86_64-unknown-linux-musl

The binary produced will now be built with MUSL!

cdylib crates for C interoperability

If you're producing a library that you intend to be used from C (or another language through a C FFI), there's no need for Rust to include Rust-specific stuff in the final object code. For libraries like that, you'll want to use the cdylib crate type in your Cargo.toml:

[lib]
crate-type = ["cdylib"]

This will produce a smaller binary, with no Rust-specific information inside of it.
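As a sketch of what goes into such a library (the function name and signature here are illustrative, not from the guide), a cdylib usually exports functions with the C ABI and unmangled names:

```rust
// src/lib.rs of a crate built with crate-type = ["cdylib"]

// #[no_mangle] keeps the symbol name `add` so a C linker can find it,
// and extern "C" selects the C calling convention.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}

// A main function so this sketch can be run standalone; a real cdylib
// crate has no main, and C code calls `add` through the shared library.
fn main() {
    assert_eq!(add(2, 3), 5);
    println!("{}", add(2, 3));
}
```

From C, this would be declared roughly as `int32_t add(int32_t, int32_t);` and linked against the produced shared library.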
The Next Edition

We have not decided if and when the next edition will ship; there is talk of a 2021 edition to keep up the three-year schedule, but that has not been formally decided. Until we do, this section keeps track of changes that landed after Rust 2018.

Next-Specific Changes

There have been no specific changes accepted for the next edition yet.

The dbg! macro

The dbg! macro provides a nicer debugging experience than println!:

fn main() {
    let x = 5;
    dbg!(x);
}

If you run this program, you'll see:

[src/main.rs:4] x = 5

You get the file and line number of where this was invoked, as well as the name and value. Additionally, println! prints to the standard output, so you really should be using eprintln! to print to standard error. dbg! does the right thing and goes to stderr.

It even works in more complex circumstances. Consider this factorial example:

fn factorial(n: u32) -> u32 {
    if n <= 1 {
        n
    } else {
        n * factorial(n - 1)
    }
}

If we wanted to debug this, we might write it like this with eprintln!:

fn factorial(n: u32) -> u32 {
    eprintln!("n: {}", n);
    if n <= 1 {
        eprintln!("n <= 1");
        n
    } else {
        let n = n * factorial(n - 1);
        eprintln!("n: {}", n);
        n
    }
}

We want to log n on each iteration, as well as have some kind of context for each of the branches. We see this output for factorial(4):

n: 4
n: 3
n: 2
n: 1
n <= 1
n: 2
n: 6
n: 24

This is serviceable, but not particularly great. Maybe we could work on how we print out the context to make it more clear, but now we're not debugging our code, we're figuring out how to make our debugging code better.

Consider this version using dbg!:

fn factorial(n: u32) -> u32 {
    if dbg!(n <= 1) {
        dbg!(1)
    } else {
        dbg!(n * factorial(n - 1))
    }
}

We simply wrap each of the various expressions we want to print with the macro.
We get this output instead: [src/main.rs:3] n <= 1 = false [src/main.rs:3] n <= 1 = false [src/main.rs:3] n <= 1 = false [src/main.rs:3] n <= 1 = true [src/main.rs:4] 1 = 1 [src/main.rs:5] n * factorial(n - 1) = 2 [src/main.rs:5] n * factorial(n - 1) = 6 [src/main.rs:5] n * factorial(n - 1) = 24 [src/main.rs:11] factorial(4) = 24 Because the dbg! macro returns the value of what it's debugging, instead of eprintln! which returns (), we need to make no changes to the structure of our code. Additionally, we have vastly more useful output. No jemalloc by default Long, long ago, Rust had a large, Erlang-like runtime. We chose to use jemalloc instead of the system allocator, because it often improved performance over the default system one. Over time, we shed more and more of this runtime, and eventually almost all of it was removed, but jemalloc was not. We didn't have a way to choose a custom allocator, and so we couldn't really remove it without causing a regression for people who do need jemalloc. Also, saying that jemalloc was always the default is a bit UNIX-centric, as it was only the default on some platforms. Notably, the MSVC target on Windows has shipped the system allocator for a long time. While jemalloc usually has great performance, that's not always the case. Additionally, it adds about 300kb to every Rust binary. We've also had a host of other issues with jemalloc in the past. It has also felt a little strange that a systems language does not default to the system's allocator. For all of these reasons, once Rust 1.28 shipped a way to choose a global allocator, we started making plans to switch the default to the system allocator, and allow you to use jemalloc via a crate. In Rust 1.32, we've finally finished this work, and by default, you will get the system allocator for your programs. If you'd like to continue to use jemalloc, use the jemallocator crate. 
In your Cargo.toml: jemallocator = "0.1.8" And in your crate root: #[global_allocator] static ALLOC: jemallocator::Jemalloc = jemallocator::Jemalloc; That's it! If you don't need jemalloc, it's not forced upon you, and if you do need it, it's a few lines of code away. Uniform Paths Rust 2018 added several improvements to the module system. We have one last tweak landing in 1.32.0. Nicknamed "uniform paths", it permits previously invalid import path statements to be resolved exactly the same way as non-import paths. For example: #![allow(unused)] fn main() { enum Color { Red, Green, Blue, } use Color::*; } This code did not previously compile, as use statements had to start with super, self, or crate. Now that the compiler supports uniform paths, this code will work, and do what you probably expect: import the variants of the Color enum defined above the use statement. literal macro matcher A new literal matcher was added for macros: macro_rules! m { ($lt:literal) => {}; } fn main() { m!("some string literal"); } literal matches against literals of any type; string literals, numeric literals, char literals. ? operator in macros macro_rules macros can use ?, like this: #![allow(unused)] fn main() { macro_rules! bar { ($(a)?) => {} } } The ? will match zero or one repetitions of the pattern, similar to the already-existing * for "zero or more" and + for "one or more." const fn Initially added: Expanded in many releases, see each aspect below for more details. A const fn allows you to execute code in a "const context." For example: #![allow(unused)] fn main() { const fn five() -> i32 { 5 } const FIVE: i32 = five(); } You cannot execute arbitrary code; the reasons why boil down to "you can destroy the type system." The details are a bit too much to put here, but the core idea is that const fn started off allowing the absolutely minimal subset of the language, and has slowly added more abilities over time. 
Therefore, while you can create a const fn in Rust 1.31, you cannot do much with it. This is why we didn't add const fn to the Rust 2018 section; it truly didn't become useful until after the release of the 2018 edition. This means that if you read this document top to bottom, the earlier versions may describe restrictions that are relaxed in later versions.

Additionally, this has allowed more and more of the standard library to be made const. We won't put all of those changes here, but you should know that it is becoming more const over time.

Arithmetic and comparison operators on integers

You can do arithmetic on integer literals:

const fn foo() -> i32 {
    5 + 6
}

Many boolean operators

You can use boolean operators other than && and ||; those two are excluded because they short-circuit evaluation:

const fn mask(val: u8) -> u8 {
    let mask = 0x0f;
    mask & val
}

Constructing arrays, structs, enums, and tuples

You can create arrays, structs, enums, and tuples:

struct Point {
    x: i32,
    y: i32,
}

enum Error {
    Incorrect,
    FileNotFound,
}

const fn foo() {
    let array = [1, 2, 3];
    let point = Point { x: 5, y: 10 };
    let error = Error::FileNotFound;
    let tuple = (1, 2, 3);
}

Calls to other const fns

You can call a const fn from a const fn:

const fn foo() -> i32 {
    5
}

const fn bar() -> i32 {
    foo()
}

Index expressions on arrays and slices

You can index into an array or slice:

const fn foo() -> i32 {
    let array = [1, 2, 3];
    array[1]
}

Field accesses on structs and tuples

You can access parts of a struct or tuple:

struct Point {
    x: i32,
    y: i32,
}

const fn foo() {
    let point = Point { x: 5, y: 10 };
    let tuple = (1, 2, 3);
    point.x;
    tuple.0;
}

Reading from constants

You can read from a constant:

const FOO: i32 = 5;

const fn foo() -> i32 {
    FOO
}

Note that this is only const, not static.
& and * of references

You can create and dereference references:

const fn foo(r: &i32) {
    *r;
    &5;
}

Casts, except for raw pointer to integer casts

You may cast things, except that raw pointers may not be cast to an integer:

const fn foo() {
    let x: usize = 5;
    x as i32;
}

Irrefutable destructuring patterns

You can use irrefutable patterns that destructure values. For example:

const fn foo((x, y): (u8, u8)) {
    // ...
}

Here, foo destructures the tuple into x and y. if let is another place that uses irrefutable patterns.

let bindings

You can use both mutable and immutable let bindings:

const fn foo() {
    let x = 5;
    let mut y = 10;
}

Assignment

You can use assignment and assignment operators:

const fn foo() {
    let mut x = 5;
    x = 10;
}

Calling unsafe fn

You can call an unsafe fn inside a const fn:

const unsafe fn foo() -> i32 { 5 }

const fn bar() -> i32 {
    unsafe { foo() }
}

Pinning

Rust 1.33 introduced a new concept, implemented as two types:

Pin<P>, a wrapper around a kind of pointer which makes that pointer "pin" its value in place, preventing the value referenced by that pointer from being moved.

Unpin, types that are safe to be moved, even if they're pinned.

Most users will not interact with pinning directly, and so we won't explain more here. For the details, see the documentation for std::pin.

What is useful to know about pinning is that it's a pre-requisite for async/await. Folks who write async libraries may need to learn about pinning, but folks using them generally shouldn't need to interact with this feature at all.
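As a tiny illustration of the two types just described (this example is not from the guide): for a type that implements Unpin, such as i32, constructing a Pin is safe and the pinned reference behaves like a normal one:

```rust
use std::pin::Pin;

fn main() {
    let mut value = 5;
    // i32 implements Unpin, so Pin::new is available and safe;
    // pinning adds no restrictions for Unpin types.
    let mut pinned: Pin<&mut i32> = Pin::new(&mut value);
    *pinned = 10;
    assert_eq!(*pinned, 10);
    println!("{}", *pinned);
}
```

For types that are not Unpin (the interesting case for async), Pin::new is not available and the unsafe Pin::new_unchecked or a pinning constructor like Box::pin is used instead.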
No more FnBox

The book used to have this code in Chapter 20, section 2:

trait FnBox {
    fn call_box(self: Box<Self>);
}

impl<F: FnOnce()> FnBox for F {
    fn call_box(self: Box<F>) {
        (*self)()
    }
}

type Job = Box<dyn FnBox + Send + 'static>;

Here, we define a new trait called FnBox, and then implement it for all FnOnce closures. All the implementation does is call the closure. These sorts of hacks were needed because a Box<dyn FnOnce> didn't implement FnOnce. This was true for all three possibilities:

Box<dyn Fn> and Fn
Box<dyn FnMut> and FnMut
Box<dyn FnOnce> and FnOnce

However, as of Rust 1.35, these traits are implemented for these types, and so the FnBox trick is no longer required. In the latest version of the book, the Job type looks like this:

type Job = Box<dyn FnOnce() + Send + 'static>;

No need for all that other code.

Alternative Cargo registries

Initially added:

For various reasons, you may not want to publish code to crates.io, but you may want to share it with others. For example, maybe your company writes Rust code that's not open source, but you'd still like to use these internal packages. Cargo supports alternative registries via settings in .cargo/config:

[registries]
my-registry = { index = "" }

When you want to depend on a package from another registry, you add that to your Cargo.toml:

[dependencies]
other-crate = { version = "1.0", registry = "my-registry" }

To learn more, check out the registries section of the Cargo book.

TryFrom and TryInto

Initially added:

The TryFrom and TryInto traits are like the From and Into traits, except that they return a Result, meaning that they may fail. For example, the from_be_bytes and related methods on integer types take arrays, but data is often read in via slices. Converting between slices and arrays is tedious to do manually.
With the new traits, it can be done inline with .try_into(): use std::convert::TryInto; fn main() -> Result<(), Box<dyn std::error::Error>> { let slice = &[1, 2, 3, 4][..]; let num = u32::from_be_bytes(slice.try_into()?); Ok(()) } The Future trait Initially added: In Rust 1.36.0 the long awaited Future trait has been stabilized! TODO: this will probably be folded into a larger async section once we're closer to the next edition. The alloc crate Initially added: Before 1.36.0, the standard library consisted of the crates std, core, and proc_macro. The core crate provided core functionality such as Iterator and Copy and could be used in #![no_std] environments since it did not impose any requirements. Meanwhile, the std crate provided types like Box<T> and OS functionality but required a global allocator and other OS capabilities in return. Starting with Rust 1.36.0, the parts of std that depend on a global allocator, e.g. Vec<T>, are now available in the alloc crate. The std crate then re-exports these parts. While #![no_std] binaries using alloc still require nightly Rust, #![no_std] library crates can use the alloc crate in stable Rust. Meanwhile, normal binaries, without #![no_std], can depend on such library crates. We hope this will facilitate the development of a #![no_std] compatible ecosystem of libraries prior to stabilizing support for #![no_std] binaries using alloc. If you are the maintainer of a library that only relies on some allocation primitives to function, consider making your library #[no_std] compatible by using the following at the top of your lib.rs file: #![no_std] extern crate alloc; use alloc::vec::Vec; MaybeUninit Initially added: In previous releases of Rust, the mem::uninitialized function has allowed you to bypass Rust's initialization checks by pretending that you've initialized a value at type T without doing anything. One of the main uses of this function has been to lazily allocate arrays. 
However, mem::uninitialized is an incredibly dangerous operation that essentially cannot be used correctly as the Rust compiler assumes that values are properly initialized. For example, calling mem::uninitialized::<bool>() causes instantaneous undefined behavior as, from Rust's point of view, the uninitialized bits are neither 0 (for false) nor 1 (for true) - the only two allowed bit patterns for bool. To remedy this situation, in Rust 1.36.0, the type MaybeUninit<T> has been stabilized. The Rust compiler will understand that it should not assume that a MaybeUninit<T> is a properly initialized T. Therefore, you can do gradual initialization more safely and eventually use .assume_init() once you are certain that maybe_t: MaybeUninit<T> contains an initialized T. As MaybeUninit<T> is the safer alternative, starting with Rust 1.39, the function mem::uninitialized will be deprecated. cargo vendor Initially added: After being available as a separate crate for years, the cargo vendor command is now integrated directly into Cargo. The command fetches all your project's dependencies unpacking them into the vendor/ directory, and shows the configuration snippet required to use the vendored code during builds. There are multiple cases where cargo vendor is already used in production: the Rust compiler rustc uses it to ship all its dependencies in release tarballs, and projects with monorepos use it to commit the dependencies' code in source control.
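The configuration snippet that cargo vendor prints looks roughly like the following. This is a sketch from memory, so treat the exact keys as an assumption and prefer the snippet the command itself emits:

```toml
# .cargo/config.toml — redirect crates.io lookups to the vendored copies
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"
```

With this in place, builds resolve dependencies from the vendor/ directory instead of the network, which is what makes the release-tarball and monorepo workflows mentioned above possible.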
https://doc.rust-lang.org/edition-guide/print.html
I want to get the current captcha that is displayed on a website. An example of this would be How would I get the image link of the captcha that is displayed... I want to get the current captcha that is displayed on a website. An example of this would be How would I get the image link of the captcha that is displayed... --- Update --- I've switched to a boxlayout but there is a random spacing between the two: JPanel panel = new JPanel(); panel.setLayout(new... There's supposed to be a panel on the EAST. package wbot.nl; import java.awt.BorderLayout; import java.awt.Color; import java.awt.GridLayout; import java.awt.event.KeyEvent; import java.awt.event.KeyListener; import... PM'ed it to you. I am wanting to add a panel to the right of the JFrame, but it isn't showing up when I do this: getFrame().getContentPane().add(new Hiscores(), "West"); What am I doing wrong? Hiscores is... I receive this in my terminal on Linux randomly: And the Game I'm hosting shuts down since the java process isn't running anymore. Why? What am I doing wrong? How... Yes I have I'm still hopelessly lost as to what I'm doing wrong. I've been searching for hours on end. Can you just tell me what to do contentPane.add(treeScroll); Makes no difference. Am I doing something wrong in my code? There is no scroll bar, theres more than 4 values in the tree but they arent visible. I can't get the scrollpane to appear here: package oldschool.runescape.com; import java.awt.EventQueue; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import... The class RS2Loader is the one that compiles and executes fine, you just have to put the classes I gave you into the right packages. There's nothing being printed out, I have the code in an Eclipse project. It's just not showing the tabbed pane. The main code I posed is using the JTabbedPane. In method loadClient(). It is missing the Calculator class and ProgressBar class. 
Calculator class: package oldschool.runescape.com; import java.awt.EventQueue; import java.awt.ev - Pastebin.com ProgressBar class:... package oldschool.runescape.com; import java.applet.Applet; import java.applet.AppletContext; import java.applet.AppletStub; import java.awt.Dimension; import java.awt.event.ActionEvent;... I created a class that loads up a JFrame, and that frame has JMenus and JMenuItems. One of those JMenuItems opens up a new JFrame, and when you close that JFrame (a customised calculator), the... How would I find out the percentage of data that has been downloaded from this code? public static void main(String[] args) throws Exception { pageAddressUrl = new... Last modified date would be best I suppose. I dont know how to start. I want to check the version of a file, so the client knows whether or not to download the new updated client. The file is located at public static boolean sentBan(String username, long time, String reason) { DatabaseConnection connection = World.getConnectionPool().nextFree(); if (connection == null) return false;... Wouldn't it just be: Light light = new Light(new Coord3D(object), 5); ??
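Several of the questions above come down to placing a panel on the EAST side of a BorderLayout. A minimal sketch of the usual approach (class and variable names are made up for illustration) — note the use of the BorderLayout constants rather than strings like "West":

```java
import java.awt.BorderLayout;
import java.awt.Dimension;
import javax.swing.JPanel;

public class EastPanelDemo {
    public static void main(String[] args) {
        // A container with BorderLayout; in the forum posts this would
        // be the JFrame's content pane.
        JPanel root = new JPanel(new BorderLayout());

        JPanel east = new JPanel();
        // Give the side panel a preferred width so it actually shows up
        // instead of collapsing to zero pixels.
        east.setPreferredSize(new Dimension(150, 0));

        root.add(east, BorderLayout.EAST);

        System.out.println(root.getComponentCount()); // 1
    }
}
```

The same idea applies to the JScrollPane questions: add the scroll pane itself (not the component it wraps) to the container, e.g. `contentPane.add(new JScrollPane(tree), BorderLayout.CENTER);`.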
http://www.javaprogrammingforums.com/search.php?s=f5ae42d4d27465ed0be43ce90cdbbc4c&searchid=784390
When it comes to Python, the first things that come to mind are simplicity and the ability to work seamlessly on complex projects. And, why not? Python is a high-level programming language created for the new generation of programmers who want to solve problems and not reinvent the wheel. Python is an excellent choice for data scientists, web developers, and even game programmers. The wide range of uses enables Python to be one of the best programming languages available. It is also used extensively in teaching beginners on “how to code”. Platforms such as Livecoding.tv are seeing a surge in developers who are interested in broadcasting new projects using Python, especially to beginners. Python is also ideal for Rapid Application Development because of the ecosystem surrounding the programming language. For example, Django, a Python Web Framework, offers rapid web development. There are also a large number of libraries you can use for Rapid Application Development. Now, when it comes to Python, there are some common mistakes made by both beginners and experienced developers. If you visit live broadcasts, you can see many developers getting stuck on some of these common problems. These mistakes can only lead to time waste and, therefore, must be avoided. Knowing common pitfalls can make you a better Python developer, and helps you move your code to the next level. Let’s get started with some common Python errors that developers should know about. Advanced: Python Scope Roles If you have ever worked with Object Oriented Programming(OOP), you should have a good understanding of how Python Scope Resolution works. Python Scope works with a LEGB framework, where L stands for Local, E for Enclosing function locals, G for Global, and B for Built-in. You can read more about it here. Python scopes are simple to understand but can be confusing when used with different data types. Let’s look at some examples to clearly understand the picture. 
var_one = 1

def var_test():
    var_one += 1
    print(var_one)

var_test()

Traceback (most recent call last):
  File "python", line 1, in <module>
  File "python", line 2, in var_test
UnboundLocalError: local variable 'var_one' referenced before assignment

You might be surprised to see UnboundLocalError, as you might think that var_one is already defined. However, because var_test() assigns to the name, Python treats it as local to the function, and it is unbound when the increment runs. One solution is to declare the variable within the function. Similar behavior can also be seen when working with lists. You can read more about them here.

Advanced: Creating modules that are already in Python libraries

This is one of the most common mistakes developers make: naming their own modules and functions after ones already available in Python's rich standard library. If your code has its own crypt module, it will conflict with the standard library module of the same name, which provides an interface to the crypt(3) routine. The conflict is especially serious here because that module is used to check Unix passwords and acts as a one-way hash function.

To avoid the problem, you need to ensure that your modules are named differently from the standard library's. This keeps imports unambiguous and app behavior as intended.

Beginner: Not doing proper indentation

Python indentation is different from other programming languages. For example, in Java, if the indentation is not maintained, the program will execute without any issues, but when it comes to Python, proper indentation is a must. Let's look at the following code.

def indentation_example():
    print "this will work \n"

indentation_example()

sh-4.3$ python main.py
this will work

But if you just miss the indentation inside the method, things will not work as intended.
def indentation_example():
print "this will work \n"

indentation_example()

sh-4.3$ python main.py
  File "main.py", line 2
    print "this will work \n"
        ^
IndentationError: expected an indented block

The indentation rules cannot be bypassed by using braces, and hence care needs to be taken at all times. The best solution is to use one of the popular text editors or IDEs for Python, such as Sublime Text or PyCharm. If you are curious about the indentation rules or want to see a style guide for Python, read PEP 8, the Style Guide for Python Code, here.

Beginner: Enumeration

Another problem common among new Python learners is confusing iteration and enumeration over a list. For beginners it won't be much of an issue, but for those who have worked with other programming languages such as Java and C++, things can get a little confusing. For example, most programming languages have a simple loop structure to go through a list or array, like the one below.

for (int i = 0; i < 5; i++) {
    System.out.println("Count is: " + i);
}

Output: 0,1,2,3,4

However, Python iteration can be different. In Python, you can go through each of the array elements without the help of indices. Let's take a look at an example.

arr = [1, 2, 3, 4, 5]
for each in arr:
    print (each)

1
2
3
4
5

Now, if you want indices as well, you need to use the built-in enumerate function. This function takes a sequence and a starting position as arguments.

arr = [5, 6, 7, 8, 9, 10]
for each in enumerate(arr):
    print (each)

(0, 5)
(1, 6)
(2, 7)
(3, 8)
(4, 9)
(5, 10)

As you can see from the code above, each element is printed as a pair, with the index as the first element and the value as the second. You can read more about the enumerate function here.

Beginner: Working with Modules

Modules can be a tricky proposition to work with in the beginning. It's easy to call methods, but when you are importing your own modules, things can become complicated. Let's look at an example. When you define a method, it looks like below.
#amodule.py
def a_module():
    print "5 \n"

a_module() #prints 5 when called

Input: a_module()
Output: 5

Now, when you try to use the method in a different file, importing amodule will result in an immediate output of 5, which is not desirable in many cases.

>>> import amodule
5

If you want to avoid this common mistake, you need to call the method under the if __name__ == '__main__': guard.

#amodule.py
def a_module():
    print "5 \n"

if __name__ == '__main__':
    a_module()

Now you can import the a_module() method without executing it. It will only execute when the script is run directly.

>>> import amodule
>>>

Conclusion

Python is undoubtedly one of the most popular programming languages out there. It comes with a huge list of features which can easily confuse Python developers. This list is in no way exhaustive of all the common errors, but it gives you an idea.

When it comes to understanding common developer habits, watching them code live can be a good way to learn. Learning from your own mistakes is one way to learn, but watching others make a mistake and rectify it can be much more valuable.

What do you think about other common errors when developing with Python? Comment below and let us know.
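To round out the scope discussion above: the UnboundLocalError from the first example can also be resolved with the global statement, which tells Python the assignment targets the module-level name (a small sketch, not from the article):

```python
var_one = 1

def var_test():
    # Without this declaration, `var_one += 1` would make the name
    # local to the function and raise UnboundLocalError.
    global var_one
    var_one += 1
    print(var_one)

var_test()  # prints 2
```

For values enclosed in an outer function rather than at module level, the analogous tool is the nonlocal statement.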
http://codeforces.com/blog/entry/48439
In today’s Programming Praxis exercise, our task is to implement two inefficient sorting algorithms. Let’s get started, shall we?

Some imports:

import Control.Arrow
import System.Random
import System.Random.Shuffle

Stoogesort is fairly bad at O(n^2.7).

stoogesort :: Ord a => [a] -> [a]
stoogesort [] = []
stoogesort xs@(h:t) = f $ if last xs < h then last xs : init t ++ [h] else xs
  where f = if length xs > 2
            then s first 2 . s second 1 . s first 2
            else id
        s p n = uncurry (++) . p stoogesort . splitAt (div (n * length xs) 3)

Bogosort is more interesting. It has the potential of sorting a list in O(n). The chance of this happening, however, is pretty low. The resulting average performance is a terrible O(n*n!).

bogosort :: Ord a => [a] -> IO [a]
bogosort [] = return []
bogosort xs = if and $ zipWith (<=) xs (tail xs)
              then return xs
              else bogosort . shuffle' xs (length xs) =<< newStdGen

Some tests to see if everything is working properly:

main :: IO ()
main = do print . (== [1..5]) =<< bogosort [3,1,5,4,2]
          print $ stoogesort [3,1,5,4,2] == [1..5]

Seems like it is. Having said that, never use either of these in practice.
https://bonsaicode.wordpress.com/2011/05/17/programming-praxis-two-bad-sorts/
If you're not interested in the way to the solution, simply skip to the section The Solution.

Windows 2000 includes a new version of NTFS dubbed "Version 5". This version of NTFS includes something called reparse points. Reparse points can, among other things, be used to implement a type of softlink, seen in other operating systems for decades.

Using the Disk Management snap-in, diskmgmt.msc, I created a directory \mnt\s and tried to mount S: on this directory. It worked! I now had access to my S: through \mnt\s. OK, can I now remove the driveletter from this drive and still have it working? Yes! Finally, you don't need all those driveletters anymore (except for the boot- and system-drive). You can simply remove e.g. S: and mount the Volume under \mnt\s.

This might not seem like a big deal to some people, but it can remove a lot of clutter. It also helps a lot when moving programs from one place to another, since just about every program in the Windows world expects to never be moved from the directory it was installed in. E.g. moving your "Program Files" directory to another drive, and linking the original "Program Files" directory to this new location.

A Volume mount point basically contains the Unicode string "\??\Volume{ GUID }\". You can list accessible Volume GUIDs by typing MountVol. For a quick look at where these are used, start RegEdit and look in the key HKLM\SYSTEM\MountedDevices. Note that a Volume isn't the Media, but rather the logical "device", since a Volume can refer to a floppy drive with no media in it.

Reading a bit more revealed that reparse points should also be able to point at another directory. Ahhh, finally, I thought. While looking for a way to create Directory junction points, I found a reference to a tool called "linkd.exe" in the Windows help. Hunting high and low for this tool, I ended up empty handed. Perhaps the most important evolution of NTFS ever, and they didn't supply the documented tool to use it!
(It's apparently supposed to appear in the Windows 2000 Resource Kit.) What's even more bothering is that the junction point API is undocumented. Starting to work up some steam over this issue, I got going on writing a tool that could create and manipulate Directory junction points (i.e. softlinks).

Now, how do you write code with completely undocumented structures? As usual in this world, by disassembling, trial-and-error, and searching old documentation and SDKs.

The Windows 2000 SDK documentation that mentions reparse points points you to the struct REPARSE_GUID_DATA_BUFFER:

typedef struct _REPARSE_GUID_DATA_BUFFER {
    DWORD ReparseTag;
    WORD  ReparseDataLength;
    WORD  Reserved;
    GUID  ReparseGuid;
    struct {
        BYTE DataBuffer[1];
    } GenericReparseBuffer;
} REPARSE_GUID_DATA_BUFFER, *PREPARSE_GUID_DATA_BUFFER;

No help here. There are three FSCTLs defined in WinIoCtl.h to manipulate reparse points using DeviceIoControl:

FSCTL_SET_REPARSE_POINT
FSCTL_GET_REPARSE_POINT
FSCTL_DELETE_REPARSE_POINT

Looking at the definition of these, you find that all three of them have a comment // REPARSE_DATA_BUFFER. This comment is still present in the Windows 2000 SDK, but the structure definition is nowhere to be found.

At this time my cursing started to approach a level not suitable for printing, and I decided that it was a long night with a lot of coffee and disassembly ahead. But I had a vague memory of this structure in the VC6 headers. A contents search in the include directory for REPARSE_DATA_BUFFER showed that it indeed existed in WinNT.h from VC6. Apparently this needed structure was removed from the Windows 2000 SDK. Go figure...

Since I've now mentioned REPARSE_DATA_BUFFER, I think it's fair to show you the definition of it.
    // slightly edited for display purposes
    struct REPARSE_DATA_BUFFER {
        DWORD ReparseTag;
        WORD  ReparseDataLength;
        WORD  Reserved;
        struct {
            WORD  SubstituteNameOffset;
            WORD  SubstituteNameLength;
            WORD  PrintNameOffset;
            WORD  PrintNameLength;
            WCHAR PathBuffer[1];
        } SymbolicLinkReparseBuffer;
    };

Armed with this struct, and a little knowledge of how Volume mount point strings looked, I went off to write some code. I had no success whatsoever in using the SET FSCTL; DeviceIoControl only returned an error. Now, I could GET a Volume mount point (those are easily created with the Disk Administrator equivalent or MountVol.exe), but I couldn't SET it back! Suddenly a thought struck me: what if the definition of the SET macro changed?! Comparing the definitions of these macros from the new SDK with the VC6 version confirmed my suspicions. The VC6 version looks like:

    #define FSCTL_SET_REPARSE_POINT    CTL_CODE(FILE_DEVICE_FILE_SYSTEM, 41, METHOD_BUFFERED, FILE_WRITE_DATA)  // REPARSE_DATA_BUFFER,
    #define FSCTL_GET_REPARSE_POINT    CTL_CODE(FILE_DEVICE_FILE_SYSTEM, 42, METHOD_BUFFERED, FILE_ANY_ACCESS)  // , REPARSE_DATA_BUFFER
    #define FSCTL_DELETE_REPARSE_POINT CTL_CODE(FILE_DEVICE_FILE_SYSTEM, 43, METHOD_BUFFERED, FILE_WRITE_DATA)  // REPARSE_DATA_BUFFER,

    // Windows 2000 SDK
    #define FSCTL_SET_REPARSE_POINT    CTL_CODE(FILE_DEVICE_FILE_SYSTEM, 41, METHOD_BUFFERED, FILE_SPECIAL_ACCESS)  // REPARSE_DATA_BUFFER,
    #define FSCTL_GET_REPARSE_POINT    CTL_CODE(FILE_DEVICE_FILE_SYSTEM, 42, METHOD_BUFFERED, FILE_ANY_ACCESS)  // REPARSE_DATA_BUFFER
    #define FSCTL_DELETE_REPARSE_POINT CTL_CODE(FILE_DEVICE_FILE_SYSTEM, 43, METHOD_BUFFERED, FILE_SPECIAL_ACCESS)  // REPARSE_DATA_BUFFER,

The new FILE_SPECIAL_ACCESS is defined as:

    #define FILE_SPECIAL_ACCESS (FILE_ANY_ACCESS)

Actually, the change in access protection makes some sense. A SET or DELETE operation on a reparse point doesn't need write access to the directory it's used on. It only needs access to the NTFS attributes for that directory.
As a side note, I might add the following snippet from WinIoCtl.h from the Windows 2000 SDK:

    // FILE_SPECIAL_ACCESS is checked
    // by the NT I/O system the same as FILE_ANY_ACCESS.
    // The file systems, however, may add additional access checks
    // for I/O and FS controls
    // that use this value.

I wonder how they are supposed to do that, since both ANY and SPECIAL access are defined to be zero.

Finally I could both GET and SET a Volume mount point using DeviceIoControl, and thereby get some info on how this struct should be filled in. A Volume mount point's PathBuffer in REPARSE_DATA_BUFFER is a Unicode string that looks like:

    "\??\Volume{9424a4a2-bbb6-11d3-a640-806d6172696f}\"

The SubstituteNameLength tells how many bytes the PathBuffer contains. By disassembling SetVolumeMountPoint from kernel32.dll, I found out that it only accepts 96 or 98 bytes as buffer length. Strange, was my first thought. But still I tried to use the Volume GUID with an appended directory name through DeviceIoControl, in the hope that it wouldn't have the same restrictions and would only get resolved during access. Right? Wrong. Why make it orthogonal when you can make it "cumbersome"?

After many hours of trial-and-error, and even more cursing, I was about ready to give in and admit defeat, when I got an idea. What if, instead of using a Volume GUID, you look back at the CreateFile documentation? According to the CreateFile documentation, you can enter a non-parsed path by prepending "\??\" to it. What if we used this approach and put in a "normal" full path like:

    "\??\C:\Program Files"

Type some code, build and test... Finally! It worked!
So, finally, to create a directory junction point, you must do the following: open the directory with CreateFile, passing FILE_FLAG_BACKUP_SEMANTICS | FILE_FLAG_OPEN_REPARSE_POINT, and then call:

    DeviceIoControl(hDir,           // HANDLE to the directory
                    FSCTL_SET_REPARSE_POINT,
                    (LPVOID)&rdb,   // REPARSE_DATA_BUFFER
                    dwBytesToSet,
                    NULL, 0, &dwBytes, 0);

Well, almost. There's still that little matter of filling in this struct:

    // quick 'n' dirty solution
    wchar_t wszDestDir[] = L"\\??\\C:\\Program Files";
    const int nDestBytes = lstrlenW(wszDestDir) * sizeof(wchar_t);
    char szBuff[1024] = { 0 };
    REPARSE_DATA_BUFFER& rdb = *(REPARSE_DATA_BUFFER*)szBuff;

    rdb.ReparseTag = IO_REPARSE_TAG_MOUNT_POINT;
    rdb.ReparseDataLength = nDestBytes + 12;
    rdb.SymbolicLinkReparseBuffer.SubstituteNameLength = nDestBytes;
    rdb.SymbolicLinkReparseBuffer.PrintNameOffset = nDestBytes + 2;
    lstrcpyW(rdb.SymbolicLinkReparseBuffer.PathBuffer, wszDestDir);

    const DWORD dwBytesToSet =  // input buffer size to give to DeviceIoControl
        rdb.ReparseDataLength + REPARSE_DATA_BUFFER_HEADER_SIZE;

Ugly or what? I especially dislike the unnamed (i.e. it doesn't have a typename) struct SymbolicLinkReparseBuffer. Both the fact that it's unnamed and the length of its name make the code quite unreadable. I copied the definition of REPARSE_DATA_BUFFER from the VC6 header file to be able to use this even with the Windows 2000 SDK. I renamed it and removed the unnamed struct. In the process, it got some member functions to make its usage a lot easier.

As I said earlier in this article, possibly one of the most sought-for features (and by that, looong overdue) in NTFS is "softlinks", and they didn't have the decency to either document it or provide any API whatsoever to use it. I mean, get real: DeviceIoControl() to create a softlink?! To make this a bit more usable, I wrote a little library that you can use in your own creations. The included program MakeLink.exe uses this library, and it's used to create, list, and delete junction points.
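The length bookkeeping in that snippet is easier to see outside of C++. The following Python sketch is purely illustrative, not part of the library described here; the constant 0xA0000003 is the value of IO_REPARSE_TAG_MOUNT_POINT, and the function name is mine. It builds the same byte layout and shows where nDestBytes + 12 comes from:

```python
import struct

def build_mount_point_reparse_data(dest):
    """Build the byte image of a mount-point REPARSE_DATA_BUFFER.

    Layout: an 8-byte header (tag, data length, reserved), then four
    WORDs (substitute name offset/length, print name offset/length),
    then the path buffer holding the NUL-terminated substitute name
    followed by an empty, NUL-terminated print name."""
    subst = dest.encode("utf-16-le") + b"\x00\x00"   # substitute name + NUL
    print_name = b"\x00\x00"                         # empty print name + NUL
    path_buffer = subst + print_name
    subst_len = len(subst) - 2       # lengths exclude the terminating NUL
    print_len = len(print_name) - 2
    mount_data = struct.pack("<HHHH",
                             0,              # SubstituteNameOffset
                             subst_len,      # SubstituteNameLength
                             subst_len + 2,  # PrintNameOffset (past the NUL)
                             print_len) + path_buffer
    header = struct.pack("<LHH", 0xA0000003, len(mount_data), 0)
    return header + mount_data
```

The data length works out to the wide-character byte length of the path, plus its two-byte terminator, plus an empty two-byte print name, plus the four WORDs of offsets and lengths: exactly the nDestBytes + 12 used above.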
Just start MakeLink without arguments to see its usage. The functions that IMO were missing from Microsoft's API, and that got implemented by this library (though in its own C++ namespace), are:

    BOOL CreateJunctionPoint(LPCTSTR szMountDir, LPCTSTR szDestDir);
    BOOL DeleteJunctionPoint(LPCTSTR szMountDir);
    DWORD GetJunctionPointDestination(LPCTSTR szMountDir, LPTSTR szDestBuff, DWORD dwBuffSize /* in TCHARs */);

These should be self-explanatory, but in the interest of completeness, here's some documentation.

CreateJunctionPoint allows you to create or overwrite an existing junction point, where szMountDir is the directory to become the junction point and szDestDir is its destination. Note that using the second form, you could create a Directory junction point that points to nowhere usable (e.g. "\??\foo:bar/baz"). If the function fails, the return value is FALSE. To get extended error information, call GetLastError. Note: strictly speaking, you can use this function as a replacement for the MountVol.exe command using the "\??\" form, but I think that Disk Admin is better suited for that purpose.

DeleteJunctionPoint allows you to remove any Volume mount points or Directory junction points from the specified directory.

GetJunctionPointDestination allows you to query any directory for its reparse point destination. Note that it will only work for reparse points of the type IO_REPARSE_TAG_MOUNT_POINT, but since this includes both Volume mount points and Directory junction points, it's fit for this library. If GetJunctionPointDestination succeeds, the return value is the length, in TCHARs, of the string copied to szDestBuff, not including the terminating null character. If the szDestBuff buffer is too small, the return value is the size of the buffer, in TCHARs, required to hold the path. If the function fails, the return value is zero. To get extended error information, call GetLastError.

The code is compilable as both ANSI and Unicode. It does not use MFC, the standard C++ library, or any CRT memory management functions.
Writing this library, I had a few criteria in mind. The application MakeLink.exe is 5 632 bytes. It does, however, depend on MSVCRT.dll (the Microsoft C Runtime Library), but I think the size criterion was met.

BTW: While browsing the new documentation for SetVolumeMountPoint, I found the following text: ... "\\?\C:\myworld\private" is seen as "C:\myworld\private". Trying out this API (which according to its name is for mounting Volumes only), I found out that they've only mentioned it; they don't implement this behaviour in SetVolumeMountPoint. Another point of interest is that the creator of this API apparently was completely unaware of the already documented approach of creating a non-parsed file system name with "\??\", and charged ahead to invent "\\?\".

Happy filesystem linking.
Mike Nordell
Hi! Another question. I need to obtain the list of images of a folder and put them in an array, so that the script could choose one of them. I can't find methods to access folder contents. Any suggestions? I'm looking also for generic methods to work with folders, subfolders, files in the folders and so on. Thanks.

Define a list of image file extensions and use e.g. the following method:

Ok, thank you, I did it. This is the solution for my case.

    import bpy, random, os, fnmatch, ntpath

    exts = ["*.png", "*.tif", "*.tiff"]
    graphics_list = []

    # getting graphics list
    for root, dirnames, filenames in os.walk(graphics_path):  # put here the folder where you want to search
        for ext in exts:  # do the search for each extension in the list
            for filename in fnmatch.filter(filenames, ext):  # here specify the wanted file extension
                graphics_list.append(os.path.join(root, filename))  # here add the file found to the list

    # it returns a list of complete paths of the files found
    # if you want just the name of the file, use this function,
    # which should work under both Windows and Linux
    def get_filename(path):
        head, tail = ntpath.split(path)
        return tail or ntpath.basename(head)

Bye and thank you again!

If there are a lot of files, and you know you won't read all their names, you should turn the matcher into a generator.
Newbie here, trying to learn about being complete in declarations and definitions. I want to make sure to declare things static (for better compiler efficiency; is that the reason?). But when I do this, I get all kinds of warnings, although the program runs properly on the board. Here's an example of a .h and .c and the warnings I get:

display.h:

    #ifndef Display
    #define Display
    // #pragma once  // what does this do? Still needed with the ifndef?
    #include "IoPins.h"
    #include <avr/io.h>

    static char ButtonState;
    static char FlashLEDState;

    extern void DisplayInitialize();
    extern void DisplayUpdate();
    static void DisplayOn();
    static void DisplayOff();
    extern void getInputs();
    #endif

display.c:

    #include "Display.h"

    extern void DisplayInitialize()
    {
        ButtonState = 0;
    }

    extern void DisplayUpdate()
    {
        if (FlashLEDState | ButtonState) {
            DisplayOn();
        } else {
            DisplayOff();
        }
    }

    void DisplayOn()
    {
        PORTB |= (1<<LED01);
    }

    static void DisplayOff()
    {
        PORTB &= ~(1<<LED01);
    }

    extern void getInputs()
    {
        ButtonState = !(PINB & (1<<Button01));
    }

warnings:

    Warning 1 'ButtonState' defined but not used [-Wunused-variable]
    Warning 2 'DisplayOn' declared 'static' but never defined [-Wunused-function]
    Warning 3 'DisplayOff' declared 'static' but never defined [-Wunused-function]
    Warning 4 'ButtonState' defined but not used [-Wunused-variable]
    Warning 5 'FlashLEDState' defined but not used [-Wunused-variable]
    Warning 6 'DisplayOn' declared 'static' but never defined [-Wunused-function]
    Warning 7 'DisplayOff' declared 'static' but never defined [-Wunused-function]

Now, I understand this is kind of a hokey little program snippet (and there are better ways to implement the functionality). I'm just trying to get my arms around 'static' and 'extern'. I don't understand the warnings, because they warn me of variables that are not used (but are used) and functions that are never defined (but I think they are, and they run on the board).
I don't know if I should just ignore these warnings, or whether I'm actually doing something incorrect. On a side note, that #pragma once: is that needed if I'm doing the ifndef? Or are those doing two different things?

Any chance that you are somehow compiling just the include file by itself?

You do know how to search, don't you? About your use of "extern" for function prototypes and function definitions, well, you need to read this. Why are you defining variables in a header file? Surely you only want extern declarations in a .h, with the actual definition in a .c, but if they are "static" you wouldn't be exporting them anyway (which is kind of the whole point of "static" in this context!). The rules about what you put in a .h are pretty simple and basically boil down to: "if you were giving a binary copy of the code to someone else and all they could 'see' was the .h, what would they need to know to be able to make full use of your code?". For sure they would not need to know about any "private" static variables you might happen to have in your implementation. They won't be interested in accessing your "ButtonState" or "FlashLEDState". In fact you almost certainly don't want them to even know they exist. You don't want those mentioned in the .h file; keep them "private" within the .c file. (Later, when you move from C to C++, you'll actually come across "public:" and "private:" which more formally dictate these things.)

Clawson, thanks! Moving those 'static' declarations down to the .c file eliminated the compile warnings...

No, that is not the primary reason, though it may be a side benefit. You can look up what 'static' does in your 'C' textbook: it limits the scope of the identifier to the current Compilation Unit.
But if you define a 'static' variable in a header, and #include that header in multiple source files, that means you'll get a separate, independent instance of the variable for each 'Compilation Unit' in which it appears! Note that this is standard 'C' stuff, nothing specifically to do with Atmel Studio. Some 'C' learning/reference resources: ...

Agree - outside a function it means "this variable is only used in this file". This does give the optimiser opportunities to discard or optimize access that it might not have had if it thought some other compilation unit that might be linked in later could also need to access the variable. But the optimization thing is more of a side benefit than the true intent of using it. As awneil says, the principal use of static is to keep things "private". It's saying "no one outside this file ever needs to know this variable exists". One use would be if you had both a timer module and an ADC module and they both wanted a (file global) variable called "count". Each could have a "count" and the linker would not be confused about there being two things of the same name when it links together timer.o and adc.o.

Of course, if you use something like:

    static const int max_volume = 37;

a further benefit of the static here is that, by saying "this is private to this file" and the const saying "this is an unchanging value", the compiler does not actually need to create a location in memory called "max_volume" and stuff 37 into it. So when you write:

    volume = max_volume;

the code here will not be going off to memory to pick up 37 from a location called max_volume. The compiler can basically treat this as if you had written:

    volume = 37;

So a "static const" is a bit like a:

    #define MAX_VOLUME 37

in this sense, but the difference from a pre-processor define is that it has a type (int in this case).

You're talking C++ here? In 'C', const just tells the compiler to disallow writes to the object; it does not mean that the value is unchanging!
Hello, I had no idea which title I should have chosen. Anyway, I'll try to explain my problem. What I'm trying to do is to prevent some code from using functions declared in a header. This problem causes vulnerabilities in my application. I made a test program to show what exactly I'm trying to do. I think it's done with macros, but I have no idea if what I'm doing is right; I read some tutorials on macros, but it didn't work. I'm using Linux and GCC 4.2.4.

main.c

    #define _MAIN_
    #include "test.h"

    int main(){
        fcn1();
        fcn2();
        return 0;
    }

test.h

    #ifdef _MAIN_
    void fcn1();
    #endif

    #ifndef _MAIN_
    void fcn2();
    #endif

code.c

    #include <stdio.h>
    #include "test.h"

    void fcn1(){
        printf("No problem, you can enter here.\n");
    }

    void fcn2(){
        printf("Who let you in here?!\n");
    }

Commands:

    gcc -c main.c
    gcc -c code.c
    gcc main.o code.o -o test

Output:

    No problem, you can enter here.
    Who let you in here?!

What I want to know:
- Are macros enough to solve this problem?
- Are there any mistakes in the way I use macros?
- What might be a solution to this problem? (if macros aren't enough to solve it)

Thanks for your help.
-Marek
NAME

tep_register_comm, tep_override_comm, tep_is_pid_registered, tep_data_comm_from_pid, tep_data_pid_from_comm, tep_cmdline_pid - Manage pid to process name mappings.

SYNOPSIS

#include <event-parse.h>

int tep_register_comm(struct tep_handle *tep, const char *comm, int pid);
int tep_override_comm(struct tep_handle *tep, const char *comm, int pid);
bool tep_is_pid_registered(struct tep_handle *tep, int pid);
const char *tep_data_comm_from_pid(struct tep_handle *pevent, int pid);
struct cmdline *tep_data_pid_from_comm(struct tep_handle *pevent, const char *comm, struct cmdline *next);
int tep_cmdline_pid(struct tep_handle *pevent, struct cmdline *cmdline);

DESCRIPTION

These functions can be used to handle the mapping between pid and process name. The library builds a cache of these mappings, which is used to display the name of the process instead of its pid. This information can be retrieved from the tracefs saved_cmdlines file.

The tep_register_comm() function registers a pid / process name mapping. If a command with the same pid is already registered, an error is returned. The pid argument is the process ID, the comm argument is the process name, and tep is the event context. The comm is duplicated internally.

The tep_override_comm() function registers a pid / process name mapping. If a process with the same pid is already registered, the process name string is updated with the new one. The pid argument is the process ID, the comm argument is the process name, and tep is the event context. The comm is duplicated internally.

The tep_is_pid_registered() function checks if a pid has a process name mapping registered. The pid argument is the process ID, and tep is the event context.

The tep_data_comm_from_pid() function returns the process name for a given pid. The pid argument is the process ID, and tep is the event context. The returned string should not be freed; it will be freed when the tep handle is closed.
The tep_data_pid_from_comm() function returns a pid for a given process name. The comm argument is the process name, and tep is the event context. The argument next is the cmdline structure to search for the next pid. As there may be more than one pid for a given process, the result of this call can be passed back into a recurring call in the next parameter to search for the next pid. If next is NULL, it will return the first pid associated with the comm. The function performs a linear search, so it may be slow.

The tep_cmdline_pid() function returns the pid associated with a given cmdline. The tep argument is the event context.

RETURN VALUE

The tep_register_comm() function returns 0 on success. In case of an error, -1 is returned and errno is set to indicate the cause of the problem: ENOMEM if there is not enough memory to duplicate the comm, or EEXIST if a mapping for this pid is already registered.

The tep_override_comm() function returns 0 on success. In case of an error, -1 is returned and errno is set to indicate the cause of the problem: ENOMEM if there is not enough memory to duplicate the comm.

The tep_is_pid_registered() function returns true if the pid has a process name mapped to it, false otherwise.

The tep_data_comm_from_pid() function returns the process name as a string, or the string "<...>" if there is no mapping for the given pid.

The tep_data_pid_from_comm() function returns a pointer to a struct cmdline that holds a pid for the given process, or NULL if none is found. This result can be passed back into a recurring call as the next parameter of the function.

The tep_cmdline_pid() function returns the pid for the given cmdline. If cmdline is NULL, then -1 is returned.

EXAMPLE

The following example registers a pid for the command "ls", in the context of event context tep, and performs various searches for pid / process name mappings:

#include <event-parse.h>
...
int ret;
int ls_pid = 1021;
struct tep_handle *tep = tep_alloc();
...
ret = tep_register_comm(tep, "ls", ls_pid);
if (ret != 0 && errno == EEXIST)
        ret = tep_override_comm(tep, "ls", ls_pid);
if (ret != 0) {
        /* Failed to register pid / command mapping */
}
...
if (tep_is_pid_registered(tep, ls_pid) == 0) {
        /* Command mapping for ls_pid is not registered */
}
...
const char *comm = tep_data_comm_from_pid(tep, ls_pid);
if (comm) {
        /* Found process name for ls_pid */
}
...
int pid;
struct cmdline *cmd = tep_data_pid_from_comm(tep, "ls", NULL);
while (cmd) {
        pid = tep_cmdline_pid(tep, cmd);
        /* Found pid for process "ls" */
        cmd = tep_data_pid_from_comm(tep, "ls", cmd);
}

FILES

event-parse.h
        Header file to include in order to have access to the library APIs.
Introduction:

In previous articles I explained how to create zip files in asp.net, delete files from an uploaded folder in asp.net, create/delete a directory in asp.net, an Ajax ConfirmButtonExtender example with a modal popup, joins in SQL Server, and many articles relating to GridView, SQL, jQuery, asp.net, C# and VB.NET. Now I will explain how to extract (unzip) files from a zip folder in asp.net using C#.

Now in the code behind add the following namespaces.

C# Code

Once you add the namespaces, write the following code in the code behind.

VB.NET Code

Demo

To test the application, first upload files, then click on the Create Zip Folder button, and after that click on the Extract Zip Files button and check your directory for the files.

Download Sample Code Attached

3 comments:

very useful tnq :)

useful..thank you
What are Lambda Functions? A Quick Guide to Lambda Functions in Python

Introduction

For loops are the antithesis of efficient programming. They're still necessary and are among the first loops taught to Python beginners, but in my opinion they leave a lot to be desired. For loops can be cumbersome and can make our Python code bulky and untidy. But wait: what's the alternative? Lambda functions in Python!

Lambda functions offer a dual boost to a data scientist. You can write tidier Python code and speed up your machine learning tasks. The trick lies in mastering lambda functions, and this is where beginners can trip up. Initially, I also found lambda functions difficult to understand. They are short in length yet can appear confusing as a newcomer. But once I understood how to use them in Python, I found them very easy and powerful. And I'm sure you will as well by the end of this tutorial. So in this article, you'll be learning about the power of lambda functions in Python and how to use them. Let's begin!

Note: New to Python? I highly recommend checking out the below free courses to get up to scratch.

What are Lambda Functions?

A lambda function is a small function containing a single expression. Lambda functions can also act as anonymous functions, meaning they don't require a name. They are very helpful when we have to perform small tasks with less code. We can also use lambda functions when we have to pass a small function to another function. Don't worry, we'll cover this in detail soon when we see how to use lambda functions in Python.

The lambda notation was introduced by Alonzo Church in the 1930s. Mr. Church is well known for lambda calculus and the Church-Turing thesis. Lambda functions are handy and available in many programming languages, but we'll be focusing on using them in Python here.
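As a quick first taste, here is a small function written both as a regular def function and as a lambda (the names cube and cube_l are mine, for illustration):

```python
# Regular function: the def keyword, a name, room for many statements.
def cube(x):
    return x * x * x

# Lambda function: a single expression, here bound to a variable.
cube_l = lambda x: x * x * x

print(cube(10), cube_l(10))  # both give 1000
```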
In Python, lambda functions have the following syntax:

    lambda arguments: expression

Lambda functions consist of three parts:
- the lambda keyword,
- the bound variables/arguments, and
- the body or expression.

The keyword is mandatory, and it must be lambda, whereas the arguments and body can change based on the requirements. You must be wondering why you should go for lambda functions when you have regular functions. Fair question; let me elaborate on this.

Comparing Lambda Functions with Regular Functions

Lambda functions are defined using the keyword lambda. They can have any number of arguments but only one expression. A lambda function cannot contain any statements, and it returns a function object which can be assigned to any variable. They are generally used for one-line expressions.

Regular functions are created using the def keyword. They can have any number of arguments and any number of expressions. They can contain any statements and are generally used for larger blocks of code.

IIFEs using lambda functions

IIFEs are Immediately Invoked Function Expressions. These are functions that are executed as soon as they are created, and they require no explicit call to be invoked. In Python, IIFEs can be created using the lambda function. Here, I have created an IIFE that returns the cube of a number:

    (lambda x: x*x*x)(10)

Awesome!

Application of Lambda Functions with Different Functions

Time to jump into Python! Fire up your Jupyter Notebook and let's get cracking. Here, I have created a small dataset that contains information about a family of 5 people, with their id, names, ages, and income per month. I will be using this dataframe to show you how to apply lambda functions using different functions on a dataframe in Python.

    import pandas as pd

    df = pd.DataFrame({
        'id': [1, 2, 3, 4, 5],
        'name': ['Jeremy', 'Frank', 'Janet', 'Ryan', 'Mary'],
        'age': [20, 25, 15, 10, 30],
        'income': [4000, 7000, 200, 0, 10000]
    })

Lambda with Apply

Let's say we have an error in the age variable: every age was recorded 3 years too low.
So, to remove this error from the Pandas dataframe, we have to add three years to every person's age. We can do this with the apply() function in Pandas. apply() calls the lambda function on every row or column of the dataframe and returns a modified copy of the dataframe:

    df['age'] = df.apply(lambda x: x['age'] + 3, axis=1)

We can use the apply() function to apply the lambda function to both rows and columns of a dataframe. If the axis argument of apply() is 0, the lambda function gets applied to each column, and if it is 1, the function gets applied to each row. apply() can also be called directly on a Pandas series:

    df['age'] = df['age'].apply(lambda x: x + 3)

Here, you can see that we got the same results using different methods.

Lambda with Filter

Now, let's see how many of these people are above the age of 18. We can do this using the filter() function. The filter() function takes a lambda function and a Pandas series, applies the lambda function to the series, and filters the data. The lambda returns a sequence of True and False values, which is used for the filtering. Therefore, the output of filter() is never larger than its input.

    list(filter(lambda x: x > 18, df['age']))

Lambda with Map

You'll be able to relate to the next statement. :) It's performance appraisal time, and the income of all the employees gets increased by 20%. This means we have to increase the salary of each person by 20% in our Pandas dataframe. We can do this using the map() function. The map() function maps the series according to an input correspondence. It is very helpful when we have to substitute a series with other values. In map(), the size of the output is equal to the size of the input.

    df['income'] = list(map(lambda x: int(x + x * 0.2), df['income']))

Lambda with Reduce

Now, let's see the total income of the family. To calculate this, we can use the reduce() function in Python.
It is used to apply a particular function to the elements of a sequence. The reduce() function is defined in the functools module, so we have to import functools first:

    import functools
    functools.reduce(lambda a, b: a + b, df['income'])

reduce() applies the lambda function to the first two elements of the series and returns the result. It then applies the same lambda function to that stored result and the next element in the series, and so on. Thus, it reduces the series to a single value.

Note: Lambda functions in reduce() cannot take more than two arguments.

Conditional Statements using Lambda Functions

Lambda functions also support conditional statements, such as if..else. This makes lambda functions very powerful. Let's say in the family dataframe we have to categorize people as 'Adult' or 'Child'. For this, we can simply apply the lambda function to our dataframe:

    df['category'] = df['age'].apply(lambda x: 'Adult' if x >= 18 else 'Child')

Here, you can see that Ryan is the only child in this family, and the rest are adults. That wasn't so difficult, was it?

What's Next?

Lambda functions are quite useful when you're working with a lot of iterative code. They do appear complex, I understand, but I'm sure you'll have grasped their importance in this tutorial. Do share this article and comment below in case you have any queries or feedback. Here, I have listed some in-depth blogs and courses related to data science and Python.

Courses:
- Python for Data Science
- Data Science Hacks, Tips and Tricks
- Introduction to Data Science
- A comprehensive learning path to become a data scientist in 2020

Blogs:
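To recap, all four patterns also work on plain Python lists, without pandas. Here they are on numbers that mirror the family data above (ages after the +3 correction, incomes before and after the 20% raise; the variable names are mine):

```python
import functools

ages = [23, 28, 18, 13, 33]
incomes = [4000, 7000, 200, 0, 10000]

# filter: keep only the people above 18
adults = list(filter(lambda x: x > 18, ages))            # [23, 28, 33]

# map: raise every income by 20%
raised = list(map(lambda x: int(x + x * 0.2), incomes))  # [4800, 8400, 240, 0, 12000]

# reduce: total family income after the raise
total = functools.reduce(lambda a, b: a + b, raised)     # 25440

# conditional lambda: categorize by age
categories = list(map(lambda x: 'Adult' if x >= 18 else 'Child', ages))
# ['Adult', 'Adult', 'Adult', 'Child', 'Adult']
```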
On Sat, Nov 29, 2003 at 11:12:18PM -0500, Anthony DeRobertis wrote:
> Perhaps Debian usernames should share some "namespace", like having
> "debian-" in front of or behind them? I realize that'd be ugly, but it
> should avoid conflicts.

I'd rather have an underscore "_" before the user name (_postfix) than something that can clash with another local user (heh, I have a debian-mail user here). The idea of having a wrapper (even a daemon) for the process of creating and erasing system users also seems like a good idea. It could keep track of the users already created on a system and avoid creating a user with an existing uid. The uids would be saved in a file so they could be reused after backup/restore procedures.

A man can convince anyone he's somebody else, but never himself.
--Narrator (The usual suspects)
Empowering Middle School Students through Mobile Applications and Social Networking

Melissa Serrano, Alain Edwards, Sifat Islam, Ravi Shankar, Iris Minor, Susanne Lapp

Abstract

During the fall 2014 semester at FAU, a group of undergraduate students created an app to empower middle school students called Cityville, which had the highest ranking among judges at the end of the semester. The pilot version of the Cityville app allows users to add community events and report locations of concern in their community. We propose to enhance the pilot version of Cityville by allowing users to create a personal profile using the Processing programming language and to integrate the app with their preferred social media account. We have included mockups of our proposed changes. We feel that the enhancements we have proposed are extremely important to obtain the amount of usage we want from the students. Without a significant amount of usage, our research data would be very sparse and likely inaccurate. Upon implementation of our proposed enhancements, we will have achieved an Android app which will empower middle school students as well as provide us with useful data for our analysis.

Background

The expansion of internet use on mobile devices and the vast accessibility of technology since the turn of the millennium has greatly empowered our society. Twenty years ago a child might have asked his dad how to fix the chain on his bicycle when it fell off; now that same child won't wait for dad to get home from work, he'll look it up on YouTube! Having an endless sea of knowledge at our fingertips is a great empowerment tool. But does everyone realize how empowering the internet and technology are? Bill Gates once stated, "As we look ahead into the next century, leaders will be those who empower others."
During the fall 2014 semester at FAU, Professor Shankar's undergraduate Android Application Development class showed that all of us are leaders, because every one of us has the ability to empower others in some way. Android development (versus other platforms) offers extensive development support and an active open-source community. Android also has the largest user community compared to other mobile platforms such as iOS, BlackBerry, and Windows. With suggested topics from community youth counselor Iris Minor, the students developed many Android applications designed to empower students (App-Cityville-Fall-2014). Among the apps created, the Cityville app was ranked the highest by a group of academic, high-tech, and movie industry professionals at FAU, and was therefore chosen for a study analyzing STEM interest among middle school students. With our contributions, the Cityville app will empower middle school students by allowing them to add community events or safety reports to Cityville, to learn basic programming skills while creating and sharing their profile, and to integrate Cityville with their preferred social media app. The middle schoolers' usage of Cityville will then be analyzed by PhD candidate Sifat Islam to study how that usage relates to STEM interest and activities.

Pilot Method

Among the Android apps developed by Professor Shankar's undergraduate class was Cityville, created by Alain Edwards, Adam Moulton, and Lance Williams. Cityville's pilot version features a dynamically loading grid view, an interactive Google Maps view for viewing events and safety advisories, a form that allows users to post a city event, and a form that allows users to report safety advisories. The Java programming language was used to develop the core functionality of the Android application, extended by the Parse.com and Google Play Services plugins. The Google Play Services API was used, specifically the Google Maps API.
The Google Maps API is used to provide an interactive map view. The interactive map view allows users to see the locations of events and safety alerts, such as police activity, fires, and traffic, posted within the Cityville app. The Parse.com plugin is used so that all event and safety advisory data can be stored in the Parse database. Parse.com is also used for reporting and tracking the usage of the Cityville app. Among the data the pilot version of the app currently tracks are the number of events posted and the number of safety reports posted within a specified time period.

User interface layout of CityVille's pilot version

Proposed Enhancements

To enhance the pilot version of Cityville developed by the undergraduate students, we propose to let users learn basic programming skills while creating and sharing their profile, and to integrate the app with their preferred social media account.

Personalization

The app will allow the user to create a personal profile using the Processing programming language. For the middle school students we will have the Android Processing app installed on the Nexus 7 tablets. Melissa, a master's student, created a tutorial series for them with skeleton code showing the basics of programming in Processing. The tutorial series is available on the YouTube channel EmpowerMe - Melissa Serrano. This channel is linked to our EmpowerMe GitHub repository, where all the code and documentation we created for this paper have been uploaded. The lesson plan that Melissa created walks students through using the Android Processing Integrated Development Environment (IDE) on an Android tablet.

Tutorial Lesson Plan Outline

Over the course of 3 weeks, in 1-hour sessions, we will show the students how to draw on a digital canvas by providing resources along with video tutorials and sample code for them to build off of. We relate the coordinate system on a computer screen to the Cartesian coordinate system they are used to in school. Then they draw shapes (rectangles, ellipses, lines, and points) and text.

Snippet of sample Processing code provided to students

They also learn how to use a color picker and programmatically set colors using the rgb(red, green, blue) format commonly used in programming. The students even get exposure to more advanced aspects of programming, such as setting permissions to allow programmatic access to web URLs, which they use to display an image on their personal profile, and interactivity programming using Processing keywords to detect finger touch locations as (x, y) coordinates and to provide defined functionality when the user touches the screen.

Snippet of sample Processing code provided to students

This will empower the students to learn how to program and to write their own profile in code. Once all the students complete their profiles and upload them to a shared Google Drive folder, Melissa will package them with the CityVille code in an updated APK file to provide to Iris; this way the students can see their own profiles integrated with the CityVille app, as well as those of their classmates. Giving the students the opportunity to see what they created with the code they wrote should get them excited about programming and interested in learning more. Sifat will use data retrieved from the students' profiles to analyze how much of the provided sample code they changed, thus measuring how empowered they are through coding. The aspects of the students' profile code that will be measured are how many lines of code the student programmed for their profile, which API functions they used, and any textual information they include in their profile. We asked the students to answer, in code as text, the question "Why are you interested in STEM?"
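The code-snippet figures did not survive transcription, but two core ideas of the lesson — the rgb(red, green, blue) color format and (x, y) touch hit-testing on the canvas — can be sketched in plain Java. This is an illustrative sketch, not the students' actual sample code; the method names are our own.

```java
// Illustrative sketch (not the students' actual sample code): models two
// lesson concepts in plain Java — the rgb(red, green, blue) color format
// and (x, y) touch hit-testing against a rectangle on the canvas.
public class LessonConcepts {

    // Pack red/green/blue components (0-255 each) into a single ARGB int,
    // the representation Processing's color() function uses internally.
    static int rgb(int r, int g, int b) {
        return 0xFF000000 | (r << 16) | (g << 8) | b;
    }

    // Return true when a touch at (x, y) lands inside a rectangle drawn
    // at (rx, ry) with the given width and height.
    static boolean touchHitsRect(float x, float y,
                                 float rx, float ry, float w, float h) {
        return x >= rx && x <= rx + w && y >= ry && y <= ry + h;
    }

    public static void main(String[] args) {
        System.out.printf("red = 0x%08X%n", rgb(255, 0, 0));
        System.out.println(touchHitsRect(15, 15, 10, 10, 100, 50));
    }
}
```

In a Processing sketch the same ideas appear as fill(rgb values) before a rect() or ellipse() call, and as a comparison against the touch coordinates inside an event handler.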
CityVille menu with the new Profile feature

Social Media Integration

Most importantly, we will enhance the app to let students integrate the Cityville app with their preferred social media account. With this change the students will continue using the social apps they already use and will more readily upload to Cityville the events they are chatting about in their social networks. This feature will let us analyze users' connections in their social networks and the relevant conversations they are having, by recording the event that a student accessed a social network and by gaining access to it. The graph below illustrates the results of a survey given to Iris's group of 34 middle school students. Each student indicated whether they have an account on each applicable social network.

Survey results of students' social media usage

The survey results show a tie between Instagram and KIK for the most-used network. Since we need to collect textual data to analyze students' interest in STEM, and Instagram would provide only a limited amount of such data, we decided, based on these results, to integrate CityVille with KIK. The goal is to enable the student to share, and add a comment to, the profile page they created with the Processing programming language, along with other communications that can be used to analyze STEM interest. KIK is a smartphone messenger based on usernames, not phone numbers. Over 200 million people use KIK, including 40% of US youth. The KIK API provides pop-up conversation functionality. Implementing this feature within CityVille will encourage the students to use CityVille more and to share events, reported areas, and profiles with their connections on KIK. Each share and its text can then be stored in the Parse.com database for later analysis.
To measure how empowered the kids are, we will track the connections made within their integrated social media account, the events and reports they upload, and the extent of their programming in the profile they create.

Results

When Melissa began to use the pilot version of CityVille, she realized that Android had made significant updates to its operating system since CityVille was developed and last used (approximately three months prior). CityVille was originally developed for Android version 4.4.2 (API 19); by the time she imported CityVille from GitHub into her Eclipse workspace, the Nexus 7 tablet we were developing with (and that the middle school students would be using) was running Android Lollipop (API 21). When CityVille ran on the tablet it would not display any maps, because of the change in versions and updates to the Google Maps API. To solve this issue, Melissa added a getMapFragment() method to the HomeMapFragment class. This method executes a different fragment manager function depending on the Android version, which resolved the issue of the app not displaying any maps.

getMapFragment() in the HomeMapFragment class of CityVille allows us to view Google Maps on Android Lollipop

When Melissa began to test CityVille and to confirm with Sifat the data he wanted to collect, they realized that all user login and registration information was being stored in another location, not in the Parse.com database. Since Sifat wants to capture how often a particular user logs in, posts events, or reports areas, Melissa had to change the pilot version of the CityVille code as well as the Parse.com database. New tables were created in Parse.com to capture all CityVille login and registration information, and a new userId field was added to the existing Parse.com tables to identify which user posted events or reported areas of concern. CityVille now captures app usage per user and stores it in Parse.com.
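The getMapFragment() figure is missing from the transcription. The version-dispatch idea it describes can be sketched in plain Java; the cutoff (API 21, Lollipop) and the strategy names below are our assumptions based on the surrounding text, not the actual CityVille source.

```java
// Plain-Java sketch of the version-dispatch pattern the paper describes:
// choose a different fragment-manager strategy depending on the Android
// SDK level the app is running on. The cutoff (API 21, Lollipop) and the
// strategy names are assumptions, not the actual CityVille code.
public class MapFragmentDispatch {

    static final int LOLLIPOP = 21; // Android 5.0

    // On Lollipop and later use the newer fragment-manager path;
    // on older releases fall back to the legacy one.
    static String chooseFragmentManager(int sdkInt) {
        if (sdkInt >= LOLLIPOP) {
            return "child-fragment-manager";
        }
        return "legacy-fragment-manager";
    }

    public static void main(String[] args) {
        System.out.println(chooseFragmentManager(19)); // KitKat device
        System.out.println(chooseFragmentManager(21)); // Lollipop device
    }
}
```

On an actual device the branch would test android.os.Build.VERSION.SDK_INT and return the appropriate FragmentManager for the SupportMapFragment.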
Adding the Profile Feature to CityVille

When Melissa integrated profiles created with the Processing programming language into CityVille, some changes had to be made to each profile file. Processing is very similar to Java, but it is not an exact match that can be dropped straight into an Android project, which requires Java classes. Processing produces a .pde file, which can be opened as a normal .txt file. The Processing profile must first be saved as a .java file. She added to the top of the file the CityVille package name and import processing.core.*;, as required of all Java files. She then encapsulated the Processing code in a Java class by surrounding it with curly braces headed by public class <profile_filename> extends PApplet. PApplet lets us run this class as a Processing applet with access to all of Processing's functionality. With the Java class created, a few more pieces of code were needed to comply with Java syntax. An access modifier must be added to every method in the class. Processing has a much more relaxed syntax than Java (which is what makes it great for kids learning to code); for example, Processing does not require type-casting between double and float, but to meet the Java syntax requirements of our Android project the type casting is required.

Processing code snippet without type-casting
Processing code converted to Java syntax with required type-casting

An additional method had to be added to this Java class so that when users are done viewing the profile they can use the back button on their device to return to the menu. Without this method the application crashes when a user presses the back button.

onBackPressed() method added to Processing code within a Java class

Once Melissa made the changes outlined here, the Processing code complied with Java syntax and worked seamlessly within the Android project. Implementation of our lesson plan then began.
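The before/after snippet figures are missing from the transcription. The type-casting change they illustrated can be sketched in plain Java; the distance computation here is our own example, not the students' actual profile code.

```java
// Illustrative example of the double-to-float casting Java requires.
// In Processing, "float d = dist(...)" compiles without a cast; once the
// same code lives in a .java file, every double-valued expression assigned
// to a float must be cast explicitly. This dist() helper mirrors
// Processing's built-in function of the same name.
public class TypeCastExample {

    // Distance between (x1, y1) and (x2, y2). Math.sqrt returns a double,
    // so an explicit (float) cast is required to satisfy the Java compiler.
    static float dist(float x1, float y1, float x2, float y2) {
        double dx = x2 - x1;
        double dy = y2 - y1;
        return (float) Math.sqrt(dx * dx + dy * dy); // cast required in Java
    }

    public static void main(String[] args) {
        System.out.println(dist(0, 0, 3, 4)); // a 3-4-5 triangle
    }
}
```

Removing the (float) cast makes this a compile error in Java ("incompatible types: possible lossy conversion from double to float"), which is exactly the kind of fix each converted .pde profile needed.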
Iris quickly realized that we should have allocated more time. Only one hour per week for five weeks is a very limited timeframe for teaching middle school students something new. Iris also noticed that with a one-week gap between sessions, students had forgotten what they did the week before. She suggested including a short refresher activity each week so the students could recall what they had learned so far.

Social Media Integration

To give CityVille users the ability to send messages or share items with others, we considered integrating CityVille with KIK via the KIK API (Application Programming Interface). In our case, we would use the KIK API to access KIK users' profile data or to send messages to other KIK users. We found, however, that the KIK API is very limited in the data-access functionality it provides, especially for an Android app like CityVille. For example, the KIK API cannot provide any data about a user's KIK contacts, which we wanted in order to track the user's interaction within their social network. The KIK API does let the user send a text message through KIK from the CityVille app: the user can type a message, but on sending it they are redirected to the KIK app to choose the recipients. This would give us only a limited amount of data for our analysis, because we could collect data only about the text message and the CityVille user who sends it, not about the KIK users who receive the message or their replies. We would therefore be unable to measure the social network's impact on empowerment. Since, at the time of this writing, the KIK API does not support the functionality we deem necessary for the data we would like to collect, we decided to evaluate another social network from the students' social media usage survey results.
When we reevaluated the social media usage survey completed by the students, we noticed that a total of twenty-one students use Facebook or Instagram. For our data analysis we were not concerned with collecting photos, as on Instagram. However, Facebook owns Instagram, and a user posting to either site has the option to share the post on the other as well. Since our focus for data analysis is text, we decided to integrate Facebook with the CityVille app. The Facebook integration allows a user to share an event or reported area in CityVille. When the user clicks the Share button on an event or reported area, the details are included and posted to the Facebook walls of the users they choose to share it with. A user can make the post public, visible only to friends, visible only to themselves, or customized to specific viewers.

A user can click the Share button and the details of the City Event will be included in their Facebook post
A user can click the Share button and the details of the Reported Area will be included in their Facebook post

To accomplish this, Melissa initially tried a new version of the Parse.com API that bundled a version of the Facebook API. Since Facebook had recently acquired Parse.com and we were already using Parse for our database, she thought this would be a good solution. However, because the integration of the two APIs was so new, it still had issues for which other developers had yet to find solutions. So she decided to keep the original Parse API used in the pilot version of CityVille and to use the standalone Facebook API alongside it. Below are snippets of the code added to the CityVille app to accomplish this.
The Share button implements the OnClickListener to show the share dialog with the Event or Reported Area details

A new column was added to the CityEvent and ReportedArea tables in the Parse.com database so that Sifat could collect data about users sharing events or areas. The shareButtonFlag boolean variable is set to true in the click listener so that, when we put the event or area into the Parse.com database, we can also record whether the user shared it. Sifat will use this in his data analysis to determine how empowered the students are.

When the shareButtonFlag boolean variable is set to true we record the share in the database

Discussion

To give the app more personalization and to let the students learn basic programming, we will introduce them to the Processing programming language. We are adding this aspect because it lets the students show empowerment and individuality through self-expression, while providing another metric for measuring student empowerment. To increase usage of the app, we think it is extremely important to let students integrate the Cityville app with their preferred social media account. If this is just another app they download, they'll probably look at it a few times and then go back to their usual social apps, whether that's Facebook, Instagram, Snapchat, Tumblr, KIK, or any other. By giving the students this option, the app will work more closely with what they are already doing. It also lets us collect more data connected to each student for later analysis.

Conclusion

Upon implementation of our proposed enhancements we will have an Android app that empowers middle school students and provides useful data for our analysis, by making the app easy to integrate with what the students are already doing and by giving it the personalized feel they are accustomed to from many other social games and apps.
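The share-button code-snippet figures did not survive transcription. A plain-Java sketch of the pattern they describe — a click listener that triggers the share and flips a shareButtonFlag that is later written into the stored record — is below. Everything except the shareButtonFlag name is our own illustration; in CityVille this would be an android.view.View.OnClickListener and a Parse.com save.

```java
// Plain-Java sketch of the share-tracking pattern described in the paper:
// clicking Share sets shareButtonFlag, and the flag is copied into the
// record that gets stored (in CityVille, a Parse.com row). Class and
// method names other than shareButtonFlag are illustrative, not the
// actual CityVille code.
public class ShareTracking {

    static class EventPoster {
        boolean shareButtonFlag = false; // true once the user taps Share

        // Stand-in for the Android OnClickListener on the Share button.
        void onShareClicked() {
            shareButtonFlag = true;
            // ...in the real app, the Facebook share dialog opens here...
        }

        // Stand-in for saving the event to the database: the flag rides
        // along so analysis can tell shared events from unshared ones.
        String toDatabaseRecord(String eventName) {
            return eventName + ",shared=" + shareButtonFlag;
        }
    }

    public static void main(String[] args) {
        EventPoster poster = new EventPoster();
        System.out.println(poster.toDatabaseRecord("Block Party"));
        poster.onShareClicked();
        System.out.println(poster.toDatabaseRecord("Block Party"));
    }
}
```

Storing the flag on the same row as the event itself is what makes the later analysis a simple query: count rows where shared is true, grouped by userId.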
In this paper we documented app and lesson plan development, implementation, and data collection. Future papers by this group will detail the analysis of the data we collected during the 5-week program we implemented.

References

EmpowerMe - Melissa Serrano YouTube Channel
Global Smartphone Market Share By Platform
Processing Programming Language Reference
KIK API Reference
KIK Native Android API Reference
Facebook API Reference
http://docplayer.net/9241075-Empowering-middle-school-students-through-mobile-applications-and-social-networking.html
CC-MAIN-2018-30
refinedweb
6,073
50.26
As we discussed in Part 2, it's not recommended to create duplicates of the same class with different target membership. You can download the project here as we left it in Part 2.

We could tackle this issue using preprocessor flags, which allow us to add conditions based on the target the app is built for. Go ahead and delete the Coffee class under RealCoffee. Then go to the Coffee class under the TestCoffee folder and check the RealCoffee target under Target Membership.

Now we need to set an identifier for each target. We'll set this identifier under Build Settings -> Active Compilation Conditions for each target. Use TESTCOFFEE and REALCOFFEE for both Debug and Release.

Now head to the Coffee class and modify coffeeDescription as follows:

static func coffeeDescription() -> String {
    #if TESTCOFFEE
    return "TestCoffee"
    #elseif REALCOFFEE
    return "RealCoffee"
    #else
    return "WRONG"
    #endif
}

It's clear from the code that the response from the coffeeDescription function will depend on the target. If the response is "WRONG" then something went wrong setting the Active Compilation Conditions.

I personally don't like using preprocessor flags for the coffeeDescription function because it won't scale. Imagine having 30 targets and 30 preprocessor flags, one for each. I just wanted you to know that you have that option! Use with caution. Happy coding!

How to create a white label iOS app (Part 1)
How to create a white label iOS app (Part 2)
How to create a white label iOS app (Part 3)
How to create a white label iOS app (Part 4)
How to create a white label iOS app (Part 5)
https://dev.to/mavris/how-to-create-a-white-label-ios-app-part-3-14bm
CC-MAIN-2020-45
refinedweb
269
61.97
This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project. With version 5, API change 16, FreeBSD has improved how the system thread libraries are handled to be in line with other POSIX platforms (i.e. they must now be provided in addition to, not in place of, -lc). John Polstra authored the new LIB_SPEC, which has been in use in the FreeBSD 5.0 system compiler since 2001/01/25. I added the logic to key off the precise API change point; tested both paths (by visual inspection of the resultant specs file and the usual more extensive bootstrap and check cycle with no regressions seen); and checked that the included header is known to exist on all versions of FreeBSD that have ever used ELF (which are the only configurations that use config/freebsd.h). Pre-approved by David O'Brien (although we didn't discuss this exact keying logic), primary gcc maintainer for freebsd. Fully bootstrapped 3.0 branch (gcc_ss_20010507 with the patch): i386-*-freebsd4.3 i386-*-freebsd5.0 alpha-*-freebsd5.0 Fully bootstrapped 3.0 branch (current with the patch): i386-*-freebsd4.2 alpha-*-freebsd4.2 Fully bootstrapped current mainline with this one patch and unrelated patches: i386-*-freebsd4.2 With that, I think my ducks are in order on this very straightforward port configuration patch which is confined to one platform's configuration file. I would like approval to apply on both the mainline and the 3.0 branch. gcc 2.95.3 with few patches is the system compiler on FreeBSD 5.0 thus it would seem to be a regression if gcc 3.0 didn't work there. Granted, the API was changed on us and that might break the contract thus I will respect the release manager's statement on the matter. But I do hope the amount of effort I put into testing various configurations will count for something and that we should not penalize a platform that is attempting to look more like other POSIX platforms. Regards, Loren 2001-05-11 Loren J. 
Rittle <ljrittle@acm.org> * config/freebsd.h (LIB_SPEC): Add logic to select the correct setting and add new setting for FreeBSD 5.0 post API change 16 from John Polstra. Index: config/freebsd.h =================================================================== RCS file: /cvs/gcc/egcs/gcc/config/freebsd.h,v retrieving revision 1.6 diff -c -r1.6 freebsd.h *** freebsd.h 2001/04/16 18:30:34 1.6 --- freebsd.h 2001/05/11 05:41:23 *************** *** 56,66 **** %{fPIC:-D__PIC__ -D__pic__} %{fpic:-D__PIC__ -D__pic__} \ %{posix:-D_POSIX_SOURCE}" ! /* Provide a LIB_SPEC appropriate for FreeBSD. Just select the appropriate ! libc, depending on whether we're doing profiling or need threads support. ! (simular to the default, except no -lg, and no -p). */ #undef LIB_SPEC #define LIB_SPEC " \ %{!shared: \ %{!pg: \ --- 56,77 ---- %{fPIC:-D__PIC__ -D__pic__} %{fpic:-D__PIC__ -D__pic__} \ %{posix:-D_POSIX_SOURCE}" ! /* Provide a LIB_SPEC appropriate for FreeBSD. Before ! __FreeBSD_version 500016, select the appropriate libc, depending on ! whether we're doing profiling or need threads support. (similar to ! the default, except no -lg, and no -p). At __FreeBSD_version ! 500016 and later, when threads support is requested include both ! -lc and -lc_r instead of only -lc_r. */ #undef LIB_SPEC + #include <osreldate.h> + #if __FreeBSD_version >= 500016 + #define LIB_SPEC " \ + %{!shared: \ + %{!pg: %{pthread:-lc_r} -lc} \ + %{pg: %{pthread:-lc_r_p} -lc_p} \ + }" + #else #define LIB_SPEC " \ %{!shared: \ %{!pg: \ *************** *** 70,75 **** --- 81,87 ---- %{!pthread:-lc_p} \ %{pthread:-lc_r_p}} \ }" + #endif /* Code generation parameters. */
https://gcc.gnu.org/legacy-ml/gcc-patches/2001-05/msg00627.html
CC-MAIN-2022-33
refinedweb
570
68.97
Hi, I am building an app in Xamarin Forms and have been using Xlab as the base class for the Main Activity on the Android side. I just switched to FormsAppCompatActivity as described in this link: By default the tabbed pages can be swiped. How do I disable swiping?

protected override void OnElementChanged(ElementChangedEventArgs<TabbedPage> e)
{
    base.OnElementChanged(e);
    // Uncomment to disable scrolling
    //var propInfo = typeof(TabbedPageRenderer).GetProperty("UseAnimations", BindingFlags.Instance | BindingFlags.NonPublic);
    //propInfo.SetValue(this, false);
}

See the commented out code. Thank you @MaxMeng! I should have thought of reflection!

Btw: if you want to keep the animation between tabs, and just disable gestures (swipe), then this (ugly) hack will work for now:

protected override void OnElementChanged(ElementChangedEventArgs<TabbedPage> e)
{
    // Disable animations only when UseAnimations is queried for enabling gestures
    var fieldInfo = typeof(TabbedPageRenderer).GetField("_useAnimations", BindingFlags.Instance | BindingFlags.NonPublic);
    fieldInfo.SetValue(this, false);
    base.OnElementChanged(e);

    // Re-enable animations for everything else
    fieldInfo.SetValue(this, true);
}

Answers

Hi @LazareenaThaveethu! Did you find a solution to this issue? Thanx!

HI @Lush ! This solution worked for me.

public class CustomTabbedRenderer : TabbedPageRenderer
{
    protected override void OnElementChanged(ElementChangedEventArgs<TabbedPage> e)
    {

@bunny-wabbit Hi, What if I want the opposite? I want to enable swipe but don't want the animation when I press a tab, because when I hit the fourth tab from the first tab it passes through every other tab and it makes the app slow. Thanks.

Hello! While trying to use this code, I'm getting System.NullReferenceException: Object reference not set to an instance of an object. Any help?

Hi All. I tried the above codes marked as answers to this question but still I am not able to disable the swipe gesture for tabbed pages in my application. I have created a tabbed page for which I have added a custom renderer in my Android project containing the above code. I have recently updated all Xamarin components. I just wanted to ask if anyone else is also facing the same issue, or do I need to add something more which is not mentioned here? Or could the Xamarin update be affecting this code?

@krishnamverma, apparently with the last update this code does not work; at least in my app that code stopped working.

@krishnamverma, to extend the previous comment: with the last Xamarin Forms update (2.3.4.231), to disable the swipe gesture in tabbed forms I use this code and it is working.

Yes, this solution works well on and above the Xamarin Forms update (2.3.4.231). This worked for me. Thanks

I work on cross-platform and I also use Syncfusion for a listview with a left/right swipe, and the latter is in my tabbed page (hence the fact that I need to lock the tabbed swipe). I tried the proposed solution; the worry is that, because I am on cross-platform, there is an error on:

public partial class MenuPrincipal : TabbedPage

Indeed, it tells me that "TabbedPage is an ambiguous reference", the ambiguity being between the TabbedPage of Android and the TabbedPage of Forms. How can I remedy this? Do I have to make a renderer? If this is the case, I can not use this function (despite the tutorial)

edit: Sorry, it's morning at home ^^'

public partial class MenuPrincipal : Xamarin.Forms.TabbedPage

quite simply

@françoisLecointe You can simply disable swiping of tabs by adding the platform specific configuration to your TabbedPage. You don't need a renderer. The code based platform-specifics approach is also shown in the post by @EnriqueRangel. If you wish to do it from code behind you can use the following:

this.On<Xamarin.Forms.PlatformConfiguration.Android>().SetIsSwipePagingEnabled(false);

Guys, any solution for this, please?
https://forums.xamarin.com/discussion/comment/214928/
CC-MAIN-2019-35
refinedweb
648
55.44
Talk:Main Page

Guidelines

Initial sketch

I've put up some content to try out things a bit. This should make the discussions about the C++ book more productive. Also, it can be a good starting point. Note: the current content is only a sketch. It does have major flaws. If you think it can be improved, just edit it, or open a discussion here if you're unsure.

I've spent some time thinking about the issues raised and the solutions proposed in the discussion that followed the initial proposal. One thing is clear: most of the problems can be solved by not having a rigid structure. We should strive for as much flexibility as possible. A good way to achieve this is to have a lot of small, independent items that we can later rearrange as we want without much effort. This requirement is covered by the following points, which could serve as a starting point for our guidelines after improvements:

- The content is divided into independent, self-contained items. The basic criterion is as follows: it must be possible to move the item to a separate page without impacting its quality much.
- Each item has a very limited scope.
- A single article may contain several items that have similar scope. In some cases this may improve navigation quite a bit.
- Since there are no guarantees that the structure of the wiki will be retained in the future, all items must provide an easy way to get a link to the item through some redirect wrapper, which can be updated in case the item is moved to another page. (This point will be made more concrete when we have such a redirect wrapper.)

If the above guidelines are met, the high-level structure of the book could be changed on a whim just by rearranging the items on the main page (or in any summary pages). This means that the process of writing tutorials is simplified a lot: one is concerned only with the scope-limited item he's writing now, not the overall structure. Also, we don't need to "design" the structure of the wiki from the start, given that better decisions can be made once the content is available.

The current sketch is divided into two sections: an introduction and the rest of the content. The purpose of the introduction is to explain C++ to readers that have little if any programming experience in C++. This will allow us to assume in the rest of the wiki that the reader at least knows the basics, so that we can explain all the peculiarities of one feature or another without worrying that, e.g., a detailed article about variable scope is read by a reader who doesn't know what a function is. It won't be possible to easily rearrange the introduction section due to its purpose, but I think that's not a big problem, since the bulk of the content and effort will go to the rest of the wiki.

I think we can start filling in the tutorials in the introduction section. This should reveal any issues with the current approach and allow us to refine the guidelines. --P12 04:22, 12 September 2013 (PDT)

The main page

Should we have a link to each article on the main page? This will improve discoverability. Since the readers can be assumed to come here to learn something, not to solve a specific issue, we don't need to keep the main page short. But maybe someone knows other reasons for limiting the size of the main page? --P12 04:42, 12 September 2013 (PDT)

- I'd imagine that a wall of links on the main page wouldn't hurt, at least while the wiki is still in semi-stealth mode. A link dump is a reasonable starting point that we can use until we figure out how (if at all) we want to do the high-level organization. --Nate (talk) 19:04, 14 September 2013 (PDT)

goto in intro section?

I'm not opposed to mentioning goto for completeness, but should it really be in the "intro" section? New programmers are never going to see it in use, and it seems a bad idea to introduce them to something so quirky, dangerous, and esoteric in an intro book. Not to mention that covering all of its quirks requires a lot more depth than would be available at that point (for example, the interaction of goto and try-catch blocks). --Indi (talk) 10:42, 16 September 2013 (PDT)

- I agree. By the way, there's no need to discuss every single bit. You can just go ahead and improve things. I think discussion is worthwhile only for wiki-wide policies, major changes, and when there's disagreement. Otherwise a better idea is to invest time in the articles themselves. --P12 13:37, 16 September 2013 (PDT)

Proposal for another structure in the beginners-section

I really like the idea of having a good and free book for people who are going to learn C++, but while the current structure looks better than most of the currently available stuff, I still believe that it starts too early with some of the low-level stuff: for instance, I wouldn't teach pointers and built-in arrays in the beginners-section. Instead I propose the following structure for the beginners-section (everything not mentioned should instead be covered in the intermediate or expert sections). Note that this uses some C++14 features. The rationale behind that is that C++11 will be the past pretty soon and that there are workarounds for almost all uses, which can be noted for people who have old compilers.

Some rationales for this approach:

- There are enough topics to pack them into some smaller subtopics, which may help with recognizing structure while learning.
- The reason for adding string and vector to the introduction is that they enable you to write small programs that already do something. The fact that the basic idea behind strings and arrays is almost trivial to understand for almost everyone is another reason to do that.
- Moving the for-each loop further to the front: this loop is another of these small features that makes many things easier and is easy to understand too.
- Since one should pass arguments to functions by const reference in later programs, references and const should be introduced directly after functions. References are again easy to understand, and const is important enough to mention here too.
- Function and operator overloading: I put them where they are mostly in order to keep related things together.
- IO may be moved before functions; I am not sure about that.
- I believe that basic templates are somewhat easier to understand than virtual functions and that new programmers should get used to them pretty early, because as the language currently evolves they will become even more important.
- Enums are fairly closely related to classes and, unlike unions, not that dangerous, so I put them there.
- I believe that a proper introduction to the STL is very important to teach people to write good code, so I would dedicate a chapter to that at this point.
- Since everything in the STL lives in the std namespace, this might be a good moment to teach the readers what namespaces are.
- Blank pointers and manual resource management are a dangerous topic and therefore shouldn't be touched in the beginners-section.
- Since std::optional will be a very elegant and fast way of returning failure and is somewhat easier to imagine than exceptions, I would introduce it before them.

Opinions? --FJW (talk) 13:54, 18 September 2013 (PDT)

- I like this proposal. It's a clear improvement over the current structure. I'm a bit unsure about std::optional, but since we can introduce it alongside simple error codes (or std::pair<bool,T>), being from C++14 land is not a problem. --P12 00:17, 19 September 2013 (PDT)
- Seems reasonable. Also note that good top-level organization might become more apparent as we fill in the individual items. --Nate (talk) 11:51, 19 September 2013 (PDT)
- Great! Since there seems to be consensus about that, I'll change the main page. Then there are some points I forgot when writing this:
  - Undefined behavior. While this is not a trivial topic, I do not see a passable way around introducing it early; early like in "really, really early". UB is way too dangerous to let people believe "yeah, it may be UB, but it will work in practice" (until they change the compiler switches).
  - A glossary with short definitions would be nice, maybe in its own wiki-namespace. The descriptions of a topic should consist of a very short (one or two sentences) summary and a short (1-3 paragraphs) description. Possible example for undefined behaviour:
  - An introduction on how to set up your working environment on different platforms might be nice. It should however not be part of the core book itself.
  - A short introduction into what C++ is, what its design goals were, and what implications this may have on programmers (somewhere around "C++ was designed with performance as an important goal, so you need to ask for the seat-belts yourself"). This might be a good place to introduce UB too. --FJW (talk) 16:18, 19 September 2013 (PDT)
- Well, turns out it isn't that trivial to make it nice, especially since the form I thought of would be incompatible with the names of the current chapters. So I just created a first version in my personal namespace. Feel free to change it until you believe it might suit the main page. Some parts are clearly just mock-ups, but I believe that it may be a good base for further work. --FJW (talk) 17:12, 19 September 2013 (PDT)
- I'd suggest an additional chapter before "hello world", introducing some very basic but fundamental C++ concepts. Stuff that will come up again and again even in introductory chapters. Like "undefined behaviour" (explaining that C++ is underspecified for efficiency and implementation flexibility, which means when you break the rules, anything could happen - it could even appear to work fine - but in "C++ speak" it basically means "bad"), the "as if rule" (explaining that a compiler is free to not do exactly what you wrote, so long as what it does behaves as if it were what you wrote), the "one definition rule" (explaining that "stuff" (functions, classes, etc.) can only be defined once in the program).
- You don't need to go into any detail - just set things up so that we can say stuff like "undefined behaviour" in topics and it won't be meaningless jargon (and so that we don't need to qualify everything we say - like we don't need to explain that creating a variable doesn't necessarily equal a variable actually being created, so long as the program behaves as if a variable was created). I don't know what to call it or I'd put it in the skeleton myself. Also, shouldn't we give a brief explanation of "compiling", "linking", etc. - i.e., the build process? --Indi (talk) 22:17, 3 November 2013 (PST)
- I agree -- this idea has already appeared several times in the talk pages here. I've created an initial sketch here: intro/undefined behavior. --P12 02:18, 4 November 2013 (PST)

Problems with locale-dependent stuff

While writing the parts about functions I encountered a few situations where I needed locale-dependent functions (toupper, isupper, islower). The problem is that the automatic links point to the C functions instead of the C++ templates which are used in the current versions. Is there a possibility to change this?

On a sidenote: I've been using the templates because I believe in "Don't show how to do things well by doing them badly" and therefore prefer the new stuff to the old. On the other hand, this either requires introducing locales very early (together with std::string) or using them without much explanation. I am interested in opinions on these questions. --FJW (talk) 16:16, 4 October 2013 (PDT)

- I have been working on improving the linking for some time already. This problem should be solved at some time in the future.
- As for the second question, I think that we should keep things simple in the introduction section. Perhaps showing imperfect but simple examples and adding a link to the advanced section would be better. However, I guess you can do as you wish for now: doing the fine-tuning in the future makes more sense. --P12 13:17, 6 October 2013 (PDT)

Style of the Chapters

We currently have several chapters that start by telling the reader that X exists, without explaining why X is needed and what it is good for. Personally I clearly prefer to start a chapter with the already known techniques and a realistic (or as realistic as is reasonably possible) example where they fail to solve the problem in a proper way. Starting from that, I introduce the new feature and show how it solves that problem. I do this because I remember my own problems when I learned C++ from web tutorials, where I often couldn't understand why a certain feature was of any use (why use const, when you can have a variable or a literal; why use classes, just pass all the parameters…) or didn't know what to use (mixing char* and std::string, blank arrays and std::vector…).

An example of a chapter that has all the technical stuff, but none of the explanations, is Access Specifiers: it fails to explain why making stuff private can be a good idea, which is actually the most important part. I didn't touch it up to now, because I will probably include public and private in the chapter about classes directly, and protected should most likely be taught once inheritance has been introduced, but there are currently other articles that have the same problem. Keep in mind that we aren't writing a reference for people who already know the language but for novices who may not have programmed before at all. --FJW (talk) 08:28, 18 October 2013 (PDT)

- I agree completely. This will go into the guidelines page when we have it. --P12 12:04, 18 October 2013 (PDT)

what has happened here

Long time no update here, what has happened here? If we need to restart this project, there are lots of things for us to do, or we should mark this project as deprecated for a long time at the head of this page.
https://en.cppreference.com/book/Talk:Main_Page
CC-MAIN-2022-27
refinedweb
2,464
58.11
Create your first Knative app

Knative is a great way to get started quickly on serverless development with Kubernetes.

First, some background

Knative uses custom resource definitions (CRDs), a network layer, and a service core. For this walkthrough, I used Ubuntu 18.04, Kubernetes 1.19.0, Knative 0.17.2, and Kourier 0.17.0 as the Knative networking layer, as well as the Knative command-line interface (CLI).

A CRD is a custom resource definition within Kubernetes. A resource is an endpoint in the Kubernetes API that stores a collection of API objects of a certain kind; for example, the built-in pods resource contains a collection of pod objects. This allows an expansion of the Kubernetes API with new definitions. One example is the Knative serving core, which is defined to have internal autoscaling and rapid deployment of pods with the correct roles and access predefined.

Kourier is an Ingress (a service to let in external network traffic) for Knative serving and a lightweight alternative to the Istio ingress. Its deployment consists only of an Envoy proxy and a control plane for it.

To understand the concepts in this tutorial, I recommend you are somewhat familiar with:

- Serverless, cloud-native applications
- Ingress with Envoy proxies, i.e., Istio
- DNS in Kubernetes
- Kubernetes patching configurations
- Custom resource definitions in Kubernetes
- Configuring YAML files for Kubernetes

Set up and installation

There are some prerequisites you must complete before you can use Knative.

Configure Minikube

Before doing anything else, you must configure Minikube to run Knative locally in your homelab.
Below are the configurations I suggest and the commands to set them:

$ minikube config set kubernetes-version v1.19.0
$ minikube config set memory 4000
$ minikube config set cpus 4

To make sure those configurations are set up correctly in your environment, run the Minikube commands to delete and start your cluster:

$ minikube delete
$ minikube start

Install the Knative CLI

You need the Knative CLI to make a deployment, and you need Go v1.14 or later to work with the CLI. I created a separate directory to make it easier to find and install these tools. Use the following commands to set up the command line:

$ mkdir knative
$ cd knative/
$ git clone
$ cd client/
$ hack/build.sh -f
$ sudo cp kn /usr/local/bin
$ kn version
Version:      v20201018-local-40a84036
Build Date:   2020-10-18 20:00:37
Git Revision: 40a84036
Supported APIs:
* Serving
  - serving.knative.dev/v1 (knative-serving v0.18.0)
* Eventing
  - sources.knative.dev/v1alpha2 (knative-eventing v0.18.0)
  - eventing.knative.dev/v1beta1 (knative-eventing v0.18.0)

Once the CLI is installed, you can configure Knative in the Minikube cluster.

Install Knative

Since Knative is composed of CRDs, much of its installation uses YAML files with kubectl commands. To make this easier, set up some environment variables in the terminal so that you can get the needed YAML files a little faster and in the same version:

$ export KNATIVE="0.17.2"

First, apply the service resource definitions:

$ kubectl apply -f

Then apply the core components to Knative:

$ kubectl apply -f

This deploys the services and deployments to the namespace knative-serving. You may have to wait a couple of moments for the deployment to finish.
To confirm the deployment finished, run the kubectl command to get the deployments from the namespace:

$ kubectl get deployments -n knative-serving
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
3scale-kourier-control   1/1     1            1           107m
activator                1/1     1            1           108m
autoscaler               1/1     1            1           108m
controller               1/1     1            1           108m
webhook                  1/1     1            1           108m

Install Kourier

Because you want to use a specific version and collect the correct YAML file, use another environment variable:

$ export KOURIER="0.17.0"

Then apply your networking layer YAML file:

$ kubectl apply -f

You will find the deployment in the kourier-system namespace. To confirm the deployment is correctly up and functioning, use the kubectl command to get the deployments:

$ kubectl get deployments -n kourier-system
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
3scale-kourier-gateway   1/1     1            1           110m

Next, configure the Knative serving to use Kourier as default. If you don't set this, the external networking traffic will not function. Set it with this kubectl patch command:

$ kubectl patch configmap/config-network \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"ingress.class":"kourier.ingress.networking.knative.dev"}}'

Configure the DNS

Before you can access the load balancer, you need to run the minikube tunnel command in a separate terminal window. This command creates a route to services deployed with type LoadBalancer and sets their Ingress to their ClusterIP. Without this command, you will never get an External-IP from the load balancer. Your output will look like this:

Status:
  machine: minikube
  pid: 57123
  route: 10.96.0.0/12 -> 192.168.39.67
  minikube: Running
  services: [kourier]
  errors:
    minikube: no errors
    router: no errors
    loadbalancer emulator: no errors

Now that the services and deployments are complete, configure the DNS for the cluster.
This enables your future deployable application to support DNS web addresses. To configure this, you need to get some information from your Kourier service by using the kubectl get command:

$ kubectl get service kourier -n kourier-system
NAME      TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)
kourier   LoadBalancer   10.103.12.15   10.103.12.15   80:32676/TCP,443:30558/TCP

Get the CLUSTER-IP address and save it for the next step. Next, configure the domain to determine your internal website on local DNS. (I ended mine in nip.io, and you can also use xip.io.) This requires another kubectl patch command:

$ kubectl patch configmap -n knative-serving config-domain -p "{\"data\": {\"10.103.12.15.nip.io\": \"\"}}"

Once it's patched, you will see this output:

configmap/config-domain patched

Use the Knative CLI

Now that your configurations are done, you can create an example application to see what happens.

Deploy a service

Earlier in this walkthrough, you installed the Knative CLI, which is used for Serving and Eventing resources in a Kubernetes cluster. This means you can deploy a sample application and manage services and routes. To bring up the command-line menu, type kn. Here is a snippet of the output:

$ kn
kn is the command line interface for managing Knative Serving and Eventing resources

Find more information about Knative at:

Serving Commands:
  service    Manage Knative services
  revision   Manage service revisions
  route      List and describe service routes

Next, use the Knative CLI to deploy a basic "hello world" application with a web frontend. Knative provides some examples you can use; this one does a basic deployment:

kn service create hello --image gcr.io/knative-samples/helloworld-go

Your output should look something like this:

$ kn service create hello --image gcr.io/knative-samples/helloworld-go
Creating service 'hello' in namespace 'default':
  0.032s The Configuration is still working to reflect the latest desired specification.
0.071s The Route is still working to reflect the latest desired specification. 0.116s Configuration "hello" is waiting for a Revision to become ready. 34.908s ... 34.961s Ingress has not yet been reconciled. 35.020s unsuccessfully observed a new generation 35.208s Ready to serve. Service 'hello' created to latest revision 'hello-dydlw-1' is available at URL: This shows that the service was deployed with a URL into the namespace default. You can deploy to another namespace by running something like the following, then look at the output: $ kn service create hello --image gcr.io/knative-samples/helloworld-go --namespace hello Creating service 'hello' in namespace 'hello': 0.015s The Configuration is still working to reflect the latest desired specification. 0.041s The Route is still working to reflect the latest desired specification. 0.070s Configuration "hello" is waiting for a Revision to become ready. 5.911s ... 5.958s Ingress has not yet been reconciled. 6.043s unsuccessfully observed a new generation 6.213s Ready to serve. Service 'hello' created to latest revision 'hello-wpbwj-1' is available at URL: Test your new deployment Check to see if the new service you deployed is up and running. There are two ways to check: - Check your web address in a browser - Run a curl command to see what it returns If you check the address in a web browser, you should see something like this: (screenshot: knative_browser-check.png) Good! It looks like your application's frontend is up! Next, test the curl command to confirm everything works from the command line. Here is an example of a curl to my application and the output: $ curl Hello World! Interact with the Knative app From here, you can use the Knative CLI to make some basic changes and test the functionality.
Describe the service and check the output: $ kn service describe hello Name: hello Namespace: default Age: 12h URL: Revisions: 100% @latest (hello-dydlw-1) [1] (12h) Image: gcr.io/knative-samples/helloworld-go (pinned to 5ea96b) Conditions: OK TYPE AGE REASON ++ Ready 12h ++ ConfigurationsReady 12h ++ RoutesReady 12h It looks like everything is up and ready as you configured it. Some other things you can do with the Knative CLI (which won't show up now due to the minimal configuration in this example) are to describe and list the routes with the app: $ kn route describe hello Name: hello Namespace: default Age: 12h URL: Service: hello Traffic Targets: 100% @latest (hello-dydlw-1) Conditions: OK TYPE AGE REASON ++ Ready 12h ++ AllTrafficAssigned 12h ++ CertificateProvisioned 12h TLSNotEnabled ++ IngressReady 12h jess@Athena:~/knative/client$ kn route list hello NAME URL READY hello True This can come in handy later when you need to troubleshoot issues with your deployments. Clean up Just as easily as you deployed your application, you can clean it up: $ kn service delete hello Service 'hello' successfully deleted in namespace 'default'. jess@Athena:~/knative/client$ kn service delete hello --namespace hello Service 'hello' successfully deleted in namespace 'hello'. Make your own app This walkthrough used an existing Knative example, but you are probably wondering about making something that you want. You are right, so I'll provide this example YAML then explain how you can apply it with kubectl and manage it with the Knative CLI. Start from an example apps.yaml, and then you can make changes to some things. For example, you can change your metadata, name, and namespace. You can also change the value of the target (which I set to This is my app) so that, rather than Hello World, you'll see a new message that says Hello ${TARGET}! when you deploy the file. To deploy a file like this, you will have to use kubectl apply -f apps.yaml.
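A minimal apps.yaml along the lines described — the helloworld name, default namespace, and a TARGET environment variable set to This is my app — could look like the sketch below. The field values here are assumptions reconstructed from the walkthrough's description and its later output, not the author's exact file:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld        # the name you later pass to `kn service describe`
  namespace: default      # change this to deploy elsewhere
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "This is my app"   # shown as "Hello This is my app!"
```

Applying a file shaped like this with kubectl apply -f apps.yaml should produce the Hello This is my app! response the walkthrough demonstrates.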
First, deploy your new service using the apply command: $ kubectl apply -f apps.yaml service.serving.knative.dev/helloworld created Next, you can describe your new deployment, which is the name provided in the YAML file: $ kn service describe helloworld Name: helloworld Namespace: default Age: 50s URL: Revisions: 100% @latest (helloworld-qfr9s) [1] (50s) Image: gcr.io/knative-samples/helloworld-go (at 5ea96b) Conditions: OK TYPE AGE REASON ++ Ready 43s ++ ConfigurationsReady 43s ++ RoutesReady 43s Run a curl command to confirm it produces the new output you defined in your YAML file: $ curl Hello This is my app! Double-check by going to the simple web frontend. (screenshot: knativebrowser2.png) This proves your application is running! Congratulations! Final thoughts Knative is a great way for developers to move quickly on serverless development with networking services that allow users to see changes in apps immediately. It is fun to play with and lets you take a deeper dive into serverless and other exploratory uses of Kubernetes!
https://opensource.com/article/20/11/knative?ref=alian.info
In this tutorial, we will check how to send an HTTP PUT request using the ESP32 and the Arduino core. The tests from this tutorial were done using DFRobot's ESP32 module integrated in an ESP32 development board. Introduction In this tutorial, we will check how to send an HTTP PUT request using the ESP32 and the Arduino core. We will be sending our request to a fake online testing API, to this endpoint. Since this API is for testing, our request won't have any effect on the back-end status, and the answer from the server will always be the same, independently of the content of our request. Figure 1 illustrates the expected result of sending a PUT request to the mentioned endpoint, using Postman (a very useful tool for testing HTTP requests). Figure 1 – Testing the API with Postman. As can be seen, the API simulates a successful update to an already existing resource (note that the URL contains a number that identifies the resource). You can check here a very interesting explanation about when to use PUT and a comparison with POST. The tests from this tutorial were done using DFRobot's ESP32 module integrated in an ESP32 development board. The code We will start the code by including the necessary libraries. The first one will be WiFi.h, which will allow us to connect the ESP32 to a WiFi network. The second one will be HTTPClient.h, which will expose to us the functionality needed to perform the PUT request. #include "WiFi.h" #include "HTTPClient.h" We will also need to store the credentials of the WiFi network, so we can later connect to it. We will make use of two global variables to store those credentials, namely the network name and the password. const char* ssid = "yourNetworkName"; const char* password = "yourNetworkPassword"; Moving on to the setup function, we will first open a serial connection, to output the results of our program. Additionally, we will connect the ESP32 to the WiFi network, using the credentials we have declared before.
void setup() { Serial.begin(115200); WiFi.begin(ssid, password); while (WiFi.status() != WL_CONNECTED) { delay(1000); Serial.println("Connecting to WiFi.."); } Serial.println("Connected to the WiFi network"); } We will perform the actual request on the Arduino main loop. So, the first thing we will do is to declare an object of class HTTPClient. This object will expose to us the methods we will need to perform the PUT request. HTTPClient http; Next, we need to call the begin method on our object, passing as input the complete URL of the server endpoint to which we want to send the request. http.begin(""); After this, we should specify the content-type of the body of our PUT request, so the server knows how to interpret it. Naturally, since we are reaching a fake testing API, the actual content we will send doesn't matter. Nonetheless, for a real application scenario, it's important to specify the content-type correctly. In our case, we are going to send a simple testing plain text message. Thus, the content-type should be "text/plain". To specify the content-type header, which contains the mentioned information, we need to call the addHeader method on our HTTPClient object. As first input this method receives the name of the header ("content-type") and as second input the value of the header ("text/plain"). http.addHeader("Content-Type", "text/plain"); To send the actual request, we need to call the PUT method of the HTTPClient object, passing as input the body content to send to the server. In case of success, the method will return as output a value greater than zero, which corresponds to the HTTP response code. We will store it for error checking. Note that success means that the request was sent to the server and no error occurred on the ESP32 side. Any HTTP status code that represents a back-end error means that the request was successfully sent to the server, but then some problem happened processing it in the back-end.
int httpResponseCode = http.PUT("PUT sent from ESP32"); After this, we will check if the value returned by the method is indeed greater than zero, so we know everything worked. if(httpResponseCode>0){ // print server response } To get the server response, we simply need to call the getString method on the HTTPClient object. This method takes no arguments and returns the response as a string. We will print the response, together with the HTTP response code from the server. String response = http.getString(); Serial.println(httpResponseCode); Serial.println(response); To finalize, we will call the end method on the HTTPClient object, to free the resources. This method takes no arguments and returns void. http.end(); The final source code can be seen below. We have added a small delay between each iteration of the loop, so we are not constantly polling the server when testing this code. We have also added a pre-check to confirm we are still connected to the WiFi network, before trying to do the request. Additionally, we have included the handling of the situation where an internal error occurs when trying to send the request to the server, where we print the error code to help debugging.

void loop() {
  if (WiFi.status() == WL_CONNECTED) {
    HTTPClient http;
    http.begin("");
    http.addHeader("Content-Type", "text/plain");
    int httpResponseCode = http.PUT("PUT sent from ESP32");
    if (httpResponseCode > 0) {
      String response = http.getString();
      Serial.println(httpResponseCode);
      Serial.println(response);
    } else {
      Serial.print("Error on sending PUT Request: ");
      Serial.println(httpResponseCode);
    }
    http.end();
  } else {
    Serial.println("Error in WiFi connection");
  }
  delay(10000);
}

Testing the code To test the code, simply compile it and upload it to your ESP32 device, using the Arduino IDE. After finishing, open the Arduino IDE serial monitor and wait for the WiFi connection to be established. After that, as shown in figure 2, you should start seeing the response to the requests being periodically sent to the server, together with the status code 200 (the HTTP code for OK).
Figure 2 – Output of the program, with the HTTP status code and server response. Related Posts - ESP32: HTTP GET Requests - ESP32 HTTP/2: GET Request - ESP32: HTTP POST requests - ESP32 HTTP/2: POST request - ESP32 HTTP/2: PUT request - ESP32: Connecting to a WiFi network
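If you want to sanity-check the request shape without flashing the board, the same PUT (a plain-text body with an explicit Content-Type header) can be reproduced on a desktop machine. The sketch below is a stand-in, not the tutorial's code: since the tutorial's test endpoint is a third-party service, it starts a local stub server with Python's standard library and sends the request to that instead.

```python
import http.server
import threading
import urllib.request

class StubHandler(http.server.BaseHTTPRequestHandler):
    def do_PUT(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        # Echo back what the client sent, like the fake testing API would
        reply = b"received: " + body
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):
        pass  # keep the demo output quiet

server = http.server.HTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/" % server.server_address[1]
req = urllib.request.Request(url, data=b"PUT sent from ESP32",
                             method="PUT",
                             headers={"Content-Type": "text/plain"})
with urllib.request.urlopen(req) as resp:
    status = resp.status          # like httpResponseCode on the ESP32
    body = resp.read().decode()   # like http.getString()
server.shutdown()

print(status)
print(body)
```

The printed status and body mirror what the ESP32 sketch writes to the serial monitor.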
https://techtutorialsx.com/2019/01/07/esp32-arduino-http-put-request/
import "github.com/vladvelici/sessions" Package gorilla/sessions provides cookie and filesystem sessions and infrastructure for custom session backends. The key features are: * Simple API: use it as an easy way to set signed (and optionally encrypted) cookies. * Built-in backends to store sessions in cookies or the filesystem. * Flash messages: session values that last until read. * Convenient way to switch session persistency (aka "remember me") and set other attributes. * Mechanism to rotate authentication and encryption keys. * Multiple sessions per request, even using different backends. * Interfaces and infrastructure for custom session backends: sessions from different stores can be retrieved and batch-saved using a common API. Let's start with an example that shows the sessions API in a nutshell: import ( "net/http" "github.com/gorilla/sessions" ) var store = sessions.NewCookieStore([]byte("something-very-secret")) func MyHandler(w http.ResponseWriter, r *http.Request) { // Get a session. We're ignoring the error resulted from decoding an // existing session: Get() always returns a session, even if empty. session, err := store.Get(r, "session-name") if err != nil { http.Error(w, err.Error(), 500) return } // Set some session values. session.Values["foo"] = "bar" session.Values[42] = 43 // Save it before we write to the response/return from the handler. session.Save(r, w) } First we initialize a session store calling NewCookieStore() and passing a secret key used to authenticate the session. Inside the handler, we call store.Get() to retrieve an existing session or a new one. Then we set some session values in session.Values, which is a map[interface{}]interface{}. And finally we call session.Save() to save the session in the response. Note that in production code, we should check for errors when calling session.Save(r, w), and either display an error message or otherwise handle it. 
Save must be called before writing to the response, otherwise the session cookie will not be sent to the client. That's all you need to know for the basic usage. Let's take a look at other options, starting with flash messages. Flash messages are session values that last until read. The term appeared with Ruby On Rails a few years back. When we request a flash message, it is removed from the session. To add a flash, call session.AddFlash(), and to get all flashes, call session.Flashes(). Here is an example: func MyHandler(w http.ResponseWriter, r *http.Request) { // Get a session. session, err := store.Get(r, "session-name") if err != nil { http.Error(w, err.Error(), 500) return } // Get the previously set flashes, if any. if flashes := session.Flashes(); len(flashes) > 0 { // Use the flash values. } else { // Set a new flash. session.AddFlash("Hello, flash messages world!") } session.Save(r, w) } Flash messages are useful to set information to be read after a redirection, like after form submissions. There may also be cases where you want to store a complex datatype within a session, such as a struct. Sessions are serialised using the encoding/gob package, so it is easy to register new datatypes for storage in sessions: import( "encoding/gob" "github.com/gorilla/sessions" ) type Person struct { FirstName string LastName string Email string Age int } type M map[string]interface{} func init() { gob.Register(&Person{}) gob.Register(&M{}) } As it's not possible to pass a raw type as a parameter to a function, gob.Register() relies on us passing it an empty pointer to the type as a parameter. In the example above we've passed it a pointer to a struct and a pointer to a custom type representing a map[string]interface{}. This will then allow us to serialise/deserialise values of those types to and from our sessions. Note that because session values are stored in a map[interface{}]interface{}, there's a need to type-assert data when retrieving it.
We'll use the Person struct we registered above: func MyHandler(w http.ResponseWriter, r *http.Request) { session, err := store.Get(r, "session-name") if err != nil { http.Error(w, err.Error(), 500) return } // Retrieve our struct and type-assert it val := session.Values["person"] person, ok := val.(*Person) if !ok { // Handle the case that it's not an expected type } // Now we can use our person object } By default, session cookies last for a month. This is probably too long for some cases, but it is easy to change this and other attributes during runtime. Sessions can be configured individually or the store can be configured and then all sessions saved using it will use that configuration. We access session.Options or store.Options to set a new configuration. The fields are basically a subset of http.Cookie fields. Let's change the maximum age of a session to one week: session.Options = &sessions.Options{ Path: "/", MaxAge: 86400 * 7, HttpOnly: true, } Sometimes we may want to change authentication and/or encryption keys without breaking existing sessions. The CookieStore supports key rotation, and to use it you just need to set multiple authentication and encryption keys, in pairs, to be tested in order: var store = sessions.NewCookieStore( []byte("new-authentication-key"), []byte("new-encryption-key"), []byte("old-authentication-key"), []byte("old-encryption-key"), ) New sessions will be saved using the first pair. Old sessions can still be read because the first pair will fail, and the second will be tested. This makes it easy to "rotate" secret keys and still be able to validate existing sessions. Note: for all pairs the encryption key is optional; set it to nil or omit it and encryption won't be used. Multiple sessions can be used in the same request, even with different session backends.
When this happens, calling Save() on each session individually would be cumbersome, so we have a way to save all sessions at once: it's sessions.Save(). Here's an example: var store = sessions.NewCookieStore([]byte("something-very-secret")) func MyHandler(w http.ResponseWriter, r *http.Request) { // Get a session and set a value. session1, _ := store.Get(r, "session-one") session1.Values["foo"] = "bar" // Get another session and set another value. session2, _ := store.Get(r, "session-two") session2.Values[42] = 43 // Save all sessions. sessions.Save(r, w) } This is possible because when we call Get() from a session store, it adds the session to a common registry. Save() uses it to save all registered sessions. doc.go sessions.go store.go NewCookie returns an http.Cookie with the options set. It also sets the Expires field calculated based on the MaxAge value, for Internet Explorer compatibility. Save saves all sessions used during the current request. type CookieStore struct { Codecs []securecookie.Codec Options *Options // default configuration } CookieStore stores sessions using secure cookies. func NewCookieStore(keyPairs ...[]byte) *CookieStore NewCookieStore returns a new CookieStore. Keys are defined in pairs to allow key rotation, but the common case is to set a single authentication key and optionally an encryption key. The first key in a pair is used for authentication and the second for encryption. The encryption key can be set to nil or omitted in the last pair, but the authentication key is required in all pairs. It is recommended to use an authentication key with 32 or 64 bytes. The encryption key, if set, must be either 16, 24, or 32 bytes to select AES-128, AES-192, or AES-256 modes. Use the convenience function securecookie.GenerateRandomKey() to create strong keys. Get returns a session for the given name after adding it to the registry. It returns a new session if the session doesn't exist.
Access IsNew on the session to check if it is an existing session or a new one. It returns a new session and an error if the session exists but could not be decoded. func (s *CookieStore) MaxAge(age int) MaxAge sets the maximum age for the store and the underlying cookie implementation. Individual sessions can be deleted by setting Options.MaxAge = -1 for that session. New returns a session for the given name without adding it to the registry. The difference between New() and Get() is that calling New() twice will decode the session data twice, while Get() registers and reuses the same decoded session after the first call. func (s *CookieStore) Save(r *http.Request, w http.ResponseWriter, session *Session) error Save adds a single session to the response. type FilesystemStore struct { Codecs []securecookie.Codec Options *Options // default configuration // contains filtered or unexported fields } FilesystemStore stores sessions in the filesystem. It also serves as a reference for custom stores. This store is still experimental and not well tested. Feedback is welcome. func NewFilesystemStore(path string, keyPairs ...[]byte) *FilesystemStore NewFilesystemStore returns a new FilesystemStore. The path argument is the directory where sessions will be saved. If empty it will use os.TempDir(). See NewCookieStore() for a description of the other parameters. Get returns a session for the given name after adding it to the registry. See CookieStore.Get(). func (s *FilesystemStore) MaxAge(age int) MaxAge sets the maximum age for the store and the underlying cookie implementation. Individual sessions can be deleted by setting Options.MaxAge = -1 for that session. func (s *FilesystemStore) MaxLength(l int) MaxLength restricts the maximum length of new sessions to l. If l is 0 there is no limit to the size of a session, use with caution. The default for a new FilesystemStore is 4096. New returns a session for the given name without adding it to the registry.
See CookieStore.New(). func (s *FilesystemStore) Save(r *http.Request, w http.ResponseWriter, session *Session) error Save adds a single session to the response. MultiError stores multiple errors. Borrowed from the App Engine SDK. func (m MultiError) Error() string type Options struct { Path string Domain string // MaxAge=0 means no 'Max-Age' attribute specified. // MaxAge<0 means delete cookie now, equivalently 'Max-Age: 0'. // MaxAge>0 means Max-Age attribute present and given in seconds. MaxAge int Secure bool HttpOnly bool } Options stores configuration for a session or session store. Fields are a subset of http.Cookie fields. Registry stores sessions used during a request. GetRegistry returns a registry instance for the current request. Get registers and returns a session for the given name and session store. It returns a new session if there are no sessions registered for the name. func (s *Registry) Save(w http.ResponseWriter) error Save saves all sessions registered for the current request. type Session struct { ID string Values map[interface{}]interface{} Options *Options IsNew bool // contains filtered or unexported fields } Session stores the values and optional configuration for a session. NewSession is called by session stores to create a new session instance. AddFlash adds a flash message to the session. A single variadic argument is accepted, and it is optional: it defines the flash key. If not defined "_flash" is used by default. Flashes returns a slice of flash messages from the session. A single variadic argument is accepted, and it is optional: it defines the flash key. If not defined "_flash" is used by default. Name returns the name used to register the session. Save is a convenience method to save this session. It is the same as calling store.Save(request, response, session). You should call Save before writing to the response or returning from the handler. Store returns the session store used to register the session. 
type Store interface { // Get should return a cached session. Get(r *http.Request, name string) (*Session, error) // New should create and return a new session. // // Note that New should never return a nil session, even in the case of // an error if using the Registry infrastructure to cache the session. New(r *http.Request, name string) (*Session, error) // Save should persist session to the underlying store implementation. Save(r *http.Request, w http.ResponseWriter, s *Session) error } Store is an interface for custom session stores. See CookieStore and FilesystemStore for examples. Package sessions imports 12 packages (graph). Updated 2018-03-21.
https://godoc.org/github.com/vladvelici/sessions
- Available API resources - SCIM - GraphQL API - Compatibility guidelines - How to use the API - Authentication - Status codes - Pagination - Path parameters - Namespaced path encoding - File path, branches, and tags name encoding - Request Payload - Encoding API parameters of array and hash types - id vs iid - Data validation and error reporting - Unknown route - Encoding + in ISO 8601 dates - Clients - Rate limits - Content type API Docs Use the GitLab REST API to automate GitLab. You can also use a partial OpenAPI definition, to test the API directly from the GitLab user interface. Contributions are welcome. Available API resources For a list of the available resources and their endpoints, see API resources. For an introduction and basic steps, see How to make GitLab API calls. SCIM GitLab provides an SCIM API that both implements the RFC7644 protocol and provides the /Users endpoint. The base URL is /api/scim/v2/groups/:group_path/Users/. GraphQL API A GraphQL API is available in GitLab. When GraphQL is fully implemented, GitLab: - Can delete controller-specific endpoints. - Will no longer maintain two different APIs. Compatibility guidelines Backward-incompatible changes (for example, endpoint and parameter removal), and removal of entire API versions are done in tandem with major GitLab releases. All deprecations and changes between versions are in the documentation. For the changes between v3 and v4, see the v3 to v4 documentation. Current status Only API version v4 is available. Version v3 was removed in GitLab 11.0. How to use the API API requests must include both api and the API version. The API version is defined in lib/api.rb. For example, the root of the v4 API is at /api/v4. Valid API request If you have a GitLab instance at gitlab.example.com: curl "" The API uses JSON to serialize data. You don't need to specify .json at the end of the API URL.
API request to expose HTTP response headers If you want to expose HTTP response headers, use the --include option: curl --include "" HTTP/2 200 ... This request can help you investigate an unexpected response. API request that includes the exit code If you want to expose the HTTP exit code, include the --fail option: curl --fail "" curl: (22) The requested URL returned error: 404 The HTTP exit code can help you diagnose the success or failure of your REST request. Authentication If authentication information is not valid or is missing, GitLab returns an error message with a status code of 401: { "message": "401 Unauthorized" } GitLab CI/CD job token When a pipeline job is about to run, GitLab generates a unique token and injects it as the CI_JOB_TOKEN predefined variable. You can use a GitLab CI/CD job token to authenticate with specific API endpoints: - Packages: - Package Registry. To push to the Package Registry, you can use deploy tokens. - Container Registry (the $CI_REGISTRY_PASSWORD is $CI_JOB_TOKEN). - Container Registry API (scoped to the job's project, when the ci_job_token_scope feature flag is enabled) - Get job artifacts. - Get job token's job. - Pipeline triggers, using the token= parameter. - Release creation. - Terraform plan. Impersonation tokens Impersonation tokens are a type of personal access token. They can be created only by an administrator, and are used to authenticate with the API as a specific user. Use impersonation tokens Introduced in GitLab 11.6. Sudo Example of a valid API request and a request using cURL with a sudo request, providing an ID: GET /projects?private_token=<your_access_token>&sudo=23 curl --header "PRIVATE-TOKEN: <your_access_token>" --header "Sudo: 23" "" Status codes The API is designed to return different status codes according to context and action. This way, if a request results in an error, you can get insight into what went wrong. The following table gives an overview of how the API functions generally behave. The following table shows the possible return codes for API requests. Pagination
The Links header is scheduled to be removed in GitLab 14.0 to be aligned with the W3C Link specification. The Link header was added in GitLab 13.1 and should be used instead. The link to the next page contains an additional filter id_after=42 that excludes already-retrieved records. The type of filter depends on the order_by option used, and we may have more than one additional filter. When the end of the collection is reached and there are no additional records to retrieve, the Link header is absent and the resulting array is empty. If using namespaced API requests, variables is a parameter of type array containing hash key/value pairs [{ 'key': 'UPLOAD_TO_S3', 'value': 'true' }]: curl --globoff --request POST --header "PRIVATE-TOKEN: <your_access_token>" \ "[][key]=VAR1&variables[][value]=hello&variables[][key]=VAR2&variables[][value]=world" curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" Encoding + in ISO 8601 dates When you need to include a + in a query parameter, such as an ISO 8601 date: 2017-10-17T23:11:13.000+05:30 The correct encoding for the query parameter would be: 2017-10-17T23:11:13.000%2B05:30 Clients There are many unofficial GitLab API Clients for most of the popular programming languages. For a complete list, visit the GitLab website. Rate limits For administrator documentation on rate limit settings, see Rate limits. To find the settings that are specifically used by GitLab.com, see GitLab.com-specific rate limits. Content type The GitLab API supports the application/json content type by default, though some API endpoints also support text/plain. In GitLab 13.10 and later, API endpoints do not support text/plain by default, unless it's explicitly documented.
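The pagination text above leans on the Link response header. As a small, self-contained sketch of how a client follows it (any language works; Python here, with a hard-coded header rather than a live request — the example URLs are invented), a helper can pull the rel="next" URL out of that header:

```python
import re

def next_page_url(link_header):
    """Return the rel="next" URL from a Link header, or None."""
    for part in link_header.split(","):
        match = re.search(r'<([^>]+)>;\s*rel="([^"]+)"', part)
        if match and match.group(2) == "next":
            return match.group(1)
    return None

# A header shaped like a keyset-pagination response, with the
# id_after filter mentioned above (values are illustrative).
header = ('<https://gitlab.example.com/api/v4/projects?id_after=42>; rel="next", '
          '<https://gitlab.example.com/api/v4/projects>; rel="first"')
print(next_page_url(header))
```

A client loops: request a page, read the header, follow the rel="next" URL, and stop when no such link is returned.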
https://docs.gitlab.com/13.12/ee/api/README.html
I’m trying to compile my code to test a function to read and print a data file, but I get a compiling error that I don’t understand – “error: expected constructor, destructor, or type conversion before ‘;’ Solution #1: int main() { GetMonth(); } Solution #2: (In addition to other replies.) In order to execute your ‘GetMonth()’ function you have to either call it from another function (‘main’ or whatever is called from ‘main’) or use it in an initializer expression of an object declared at namespace scope, as in double global_dummy = GetMonth(); However, the latter method might suffer from initialization order problems, which is why it is recommended to use the former method whenever possible. Solution #3: In C/C++, you cannot simply add executable code into the body of a header or implementation (.c, .cpp, .cxx, etc.) file. Instead you must add it to a function. If you want to have the code run on startup, make sure to add it to the main function. int main(int argc, char *argv[]) { GetMonth(); } Solution #4: C++ programs don’t execute in a global context. This means you need to put the call to GetMonth into a function for it to run. int main() { GetMonth(); } might be appropriate.
https://techstalking.com/programming/question/solved-error-expected-constructor-destructor-or-type-conversion-before-token/
In this blog, I will be describing the step-by-step procedure of creating a service model from the CE 7.1 ESR and implementing the service definition through proxy generation in the appropriate backend system (Outside-in). *Pre-requisite:* SAP NW 7.0 SP14 system. *Step 1*: Message Types: *BUPA_ReqMT* refers to Data Types: *BUPA_ReqDT*; Message Types: *BUPA_ResMT* refers to Data Types: *BUPA_ResDT*. The next step is to create the service interface from our process component model. Select the Service Interface block from the model and then right-click to create the service interface (select the option “Create Assignment” from the context menu). The system will automatically propose the service interface name, and you need to select the namespace and the service interface attributes like below: Create the assignment for the operation as explained for the service interface. Now save all your XI design objects and activate them. *Step 3*: In this step, we will be creating the inbound proxy for the service interface from the backend. Log in to your backend system, where you have configured the ESR connection setup. Launch the transaction ‘SPROXY’ or the SE80 transaction (select the ‘Enterprise Service Browser’). Examine all your generated XI design objects from the ES Browser, expand the Service Interface node and select the Service Interface “BUPASearch”. Right-click to create the proxy. A generated proxy for your service interface will look like below: Double-click on the class name to implement the code for the inbound proxy in the method “BUPASearch” like below: *Step 4*: In configuration, specify the authentication type as ‘UserID/Password’. Now save and test your service by clicking on the ‘Open Web Service navigator for selected binding’ link from the Overview tab like below: Select the Operation “BUPASearch” and provide the input parameters for the business partner search as shown below: Looks like you finally got your ESR working 🙂 Incidentally, there was a question in the forum on the same.
I have forwarded your blog to them. I am assuming you will be getting some mails / calls soon! Regards, Rathish

Hello Velu, you are almost there. Since you are using the backend NW70 SP14, you need to import the SAP ABAP 7.0 XI Content from the SAP Service Marketplace. After that you can see the SCV in the ESR (backend). In your case, the connection to the CE 7.1 ESR from the backend is successful; since you do not have any SCV matching your backend system in the CE 7.1 ESR, it is not displaying the objects. Note: if your backend system version is less than the CE 7.1 SCV, then it will not be displayed in the ESR (backend). Regards, Velu

Imported the XI content and it worked exactly as you have described. Thanks a lot! Regards, Jiannan

It's me again. I am running into a problem creating the model in ESR. Would you mind taking a look with your expertise at my open thread, "Problem creating operation assignment in ESR Model"? Thanks. Regards, Jiannan

I have successfully implemented this blog up to the creation of the proxy and its implementation, but since my ABAP stack is SAP ECC 6.0, I'm unable to use transaction SOAMANAGER. Are there other methods of creating service interface endpoints? (Maybe by using transactions WSADMIN and WSCONFIG?)

As I mentioned in my blog, you require an NW 7.0 SP14 system. Unfortunately all my systems are patched with SP14, so I cannot try and give solutions to your questions. But if you have completed up to Step 3, you can test the proxy class in the backend system by providing the input (business partner number). Thanks. Regards, Velu

Hope it is okay to ask another question. With a configuration like the one in this blog, ESR in CE71 SP4 and an NW70 SP14 backend, I have created a model with three operations in the interface, but I am having problems activating it. The error message says that multiple operations are not supported for XI 3.0 compatible interfaces. Since the SCV is NW70, "Stateless XI 3.0 Compatible" is the only option for the interface pattern. But if I set the SCV to NW71, SPROXY will not be able to see the objects in the ESR. Any idea if there is any solution or workaround? Thanks. Regards, Jiannan

This blog is very helpful! But I have a question about the modeling above. Can we export these models to any file format? (e.g. Word, Excel…) Best regards, Tomoe

It's a nice blog, and exciting to discover the features of PI 7.1. I have some questions; maybe you would be able to answer them: 1. In this blog, you have created only an inbound interface, generated the proxy, and made the service available in the ESR. So don't we need any configuration in the Directory? Does this service also work without the sender SOAP channel and the receiving XI (proxy) adapters? 2. In case I have to have a mapping between the service call and the proxy call, how should I go about it? In this case do I need Directory configuration? 3. Personal: what would be the best way to get in contact with you, so that I can keep asking you questions 🙂 and get them answered? Vijay

The blog covers the aspects of the CE 7.1 ESR. Service modeling in ESR does not require any configuration in the Directory as we do in PI. The service call can be implemented in any web service client program by creating the appropriate proxy in the backend system. Since we have already created the endpoint and transport binding for the service, in this case configuration is not required in the Directory. You can reach me through the email ID in my SDN business card. Regards, Velu

Also, when I do a check from SPROXY and run SPRIX_CHECK_IFR_Address, it says the Repository address is correct, but the log says that there is an authorization issue. But SXMB_IFR does go to the Enterprise Services Repository. So, do I have an error?

I would like to grasp the whole picture of the modeling in ESR (the theory, all the model types, the capabilities, etc.). I searched the network but I found only pieces of information (blogs, forums, etc.). Could you please suggest a comprehensive document (or book) to read? Thanks, Livio.
https://blogs.sap.com/2008/02/06/esr-service-modeling-step-by-step-guide-for-outside-in-approach/
The phrase “Email! Get connected!” is a registered trademark of This Could Be Better LLC, which is a company that doesn’t exist, and also it’s not a registered trademark because no one ever, um, as is. Which isn’t a sentence. And neither was that. Or that, or this. Offer void in Canada.

1. If you have not already done so, download and install the Java Development Kit. Details are available in a previous tutorial. Make a note of the directory to which the files “javac.exe” and “java.exe” are installed.

2. Download the Apache James mail server. As of this writing, the latest stable version is 2.3.2. Locate and download the “Binary ZIP” format of Apache James version 2.3.2.

3. Extract the downloaded ZIP archive for Apache James to any convenient directory. Open the extracted directory, then the “bin” subdirectory, and locate the file “run.bat”. Double-click run.bat to start the default mail server in a console window. This console window should remain open for the rest of the tutorial.

4. In any convenient location, create a new directory called “EmailTest”.

5. Download the JavaMail library. JavaMail is Java’s official email library.

6. Extract the downloaded ZIP archive for JavaMail to any convenient location. Open the extracted directory, and copy the precompiled library file “mail.jar” to the EmailTest directory.

7. In the EmailTest directory, create a new text file named “EmailTest.java”, containing the following text.

    import java.util.*;
    import javax.mail.*;
    import javax.mail.internet.*;

    public class EmailTest
    {
        public static void main(String[] args)
        {
            String addressee = args[0];

            EmailMessage emailMessage = new EmailMessage
            (
                "nonesuch@bogusiosity.org",
                addressee,
                "test email",
                "This is a test email."
            );

            emailMessage.send();

            System.out.println("email message sent to " + addressee);
        }
    }

    class EmailMessage
    {
        public String sender;
        public String addressee;
        public String subject;
        public String body;

        public EmailMessage(String sender, String addressee, String subject, String body)
        {
            this.sender = sender;
            this.addressee = addressee;
            this.subject = subject;
            this.body = body;
        }

        public void send()
        {
            Properties mailServerProperties = new Properties();
            mailServerProperties.put("mail.smtp.host", "localhost");
            mailServerProperties.put("mail.smtp.port", "25");

            Session session = Session.getDefaultInstance(mailServerProperties);

            MimeMessage messageToSend = new MimeMessage(session);

            try
            {
                messageToSend.setFrom(new InternetAddress(this.sender));
                messageToSend.addRecipient(Message.RecipientType.TO, new InternetAddress(this.addressee));
                messageToSend.setSubject(this.subject);
                messageToSend.setText(this.body);
                Transport.send(messageToSend);
            }
            catch (MessagingException ex)
            {
                ex.printStackTrace();
            }
        }
    }

8. Still in the EmailTest directory, create a new text file named “ProgramBuildAndRun.bat”, containing the following text. Substitute the path of the directory containing javac.exe and a test email address in the indicated places.

    set javaPath="[the path of the directory containing javac.exe]"
    @echo on
    %javaPath%\javac.exe -classpath .;mail.jar *.java
    %javaPath%\java.exe -classpath .;mail.jar EmailTest [a test email address]
    pause

9. Execute ProgramBuildAndRun.bat. The test program will be compiled and run, and an email will be sent to the email account specified.

10. Check the specified email account and verify that the test email has been received.

I tried the above code. It works, but the actual email is never sent to the intended recipient. Why? For configuration visit:

I tried this and found that the send-mail and read-mail functionality only works for sending and receiving mail between users who are on the Apache James server itself (localhost only).
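As a quick cross-check that the James server itself is accepting mail, an equivalent test message can be sent from Python's standard library (a sketch only; it assumes the James server from step 3 is still listening on localhost port 25, and the addresses are the same placeholders used above):

```python
import smtplib
from email.message import EmailMessage

# Build the same test message as the Java program above.
msg = EmailMessage()
msg["From"] = "nonesuch@bogusiosity.org"
msg["To"] = "test@localhost"
msg["Subject"] = "test email"
msg.set_content("This is a test email.")

# Uncomment once the James server from step 3 is running on localhost:25:
# with smtplib.SMTP("localhost", 25) as smtp:
#     smtp.send_message(msg)

print(msg["Subject"])  # test email
```

If the Python send succeeds but the Java program does not, the problem is in the Java classpath or code rather than the server.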
https://thiscouldbebetter.wordpress.com/2011/09/04/sending-an-email-from-java-using-jmail-and-apache-james/
Some comments:

"[...] littered with SQL that's impossible to refactor when my schema changes." Never saw an ORM (like Hibernate) help when changing the database. And on the contrary: I haven't seen schema changes in large databases, because too many systems (reporting, accounting) depend on a schema. Your domain model will change much more likely, and when the gap between your domain classes and your db is too large, your ORM will break. This often prevents refactoring of domain classes.

"If you are not designing an ActiveRecord based model, it's of paramount importance that you keep your domain model decoupled from the persistent model." As said above, ORMs do not decouple your domain classes from the database, but instead nail your domain classes to your database schema. Ever tried splitting domain classes that are in one table? Everything besides renaming classes and attributes is out of the window if you use an ORM (just my experience, YMMV).

Cheers, Stephan

You can also use refactoring-aware SQL DSLs like squill, jequel, empiredb. Regarding those _big_ domain models: in DDD terms they are broken anyway, as there are no modules or bounded contexts that address the relevant part of the domain model at once. When talking to BigDaveThomas at JAOO, he also stressed that most solutions today are just simple CRUD systems that are bloated with ORM. Just mapping the tables to a screen is often a simple case of generic SQL and you're done :)

Michael

"Have a look at how grids like Terracotta can use distributed caches like EhCache to scale out your data layer seamlessly." We use TC and it does scale out our data without Hibernate.

Cheers, Stephan

I am starting to learn that Hibernate requires more understanding and time than most people want to give it, but with that understanding, it can really work for you. I was talking to a user yesterday who asserts that QueryCaching is bad for him.
I listened, checkpointed with someone who knew query caching very well, and found out that it will indeed work for this user if used properly. I see that since Terracotta stopped fighting Hibernate and embraced it in the market, and now that we build products for Hibernate users, my understanding of the technology has grown. Our ability to serve the needs of higher performance while staying within the confines of the Hibernate world has vastly improved over the last year. Yes, Hibernate has a few problems, but I see the path forward as contributing fixes and helping, not trying to invent yet another way to do what is inevitable: marshalling data to and from an RDBMS. --Ari

Lazy initialization is not a feature/side-effect of ORMs; it can be present in your DSLs as well. ORMs seem like a hindrance at the start of a project or when there are fewer "objects". I think they are well suited for object-oriented minded teams. But now, as we are exploring different areas and paradigms, we tend to move away from ORMs, and that's natural for these kinds of projects.

ORM is nothing more than an alternate marshalling scheme. To add layers upon layers of marshalling has never made sense. That said, db calls are tied to the network, and as is the case with all technologies that rely on a slow underpinning, caching will be essential. The best way to make an application cache-resistant is to scatter the calls throughout your application. At least ORMs normalize execution paths, which makes it easier to add caching. That said, for the moment, applications are going to need to rely on something other than RDB technologies if they are going to scale.
Technologies such as memcached look very interesting in that it is a very simple technology that is highly scalable Kirk I'm not sure with Stephen means that ORMs cannot help when refactoring a database; they help a hell of a lot more than strings containing SQL all over the place would; it's trivial to write an integration test to load up all your mapped beans and try to access one. If your mappings and database are inconsistent, you will know immediately what is the problem and where. Having seen many codebases NOT using an ORM, I have to say they were all a big, huge, mess. ORM makes the code cleaner (or can help). And clean code can be refactored, maintained and optimized a lot better than a big mess of SQL statements everywhere. Just put everything in stored procs, then your java code is completely shielded from the database structure. I've used this approach on a number of projects, and it has worked well. Simple and easy to maintain. This is in stark contrast to the ORM based projects I've worked on where no one on the project truly understood what was going on with all of the complex mappings, cachings, cryptic errors, etc. I can show any who knows SQL how to do virtually anything needed in a few hours with stored procs. ORM adds much more complexity...and I often see lazy loading all over the place causing horrible performance. I can understand decoupling your domain model from the database in that the domain objects should be simple POJOs. That is where something like JdbcTemplate really shines (similar to using iBatis). In this modern era of polyglot, how come we don't recognize that SQL is a language of its own. When tuning queries, it seems easier to enlist our team's database engineer to help me out by showing him queries, rather than bringing him up to speed on XYZ-QL. Maybe the real solution is some kind of "extended ddl" where one could specify validations/constraints/other business logic" more easily. 
Like Hibernate, but without the "object mapping" part (why should one try to map relational data to objects?). Like "stored procedures", but functional instead of procedural (though i like spaghetti with cheese). (just an idea) Rails doesnt use straight SQL, so there is no need to move away from ORMs. Just wait and see what the Rails guys do. I worked on a complete rewrite of a system and the Lead Developer did not use an ORM. We basically wrote our own, and it was a total mess and a waste of time. We spent most of our time debugging our data access layer and never made meaningful progress on the true functional requirements. After a year of hell, the Lead was fired and we threw the code away and started over with an ORM. What a relief! Not having to spend a lot of time dealing with the data access layer freed us up to focus on the functional requirements which makes happy clients which makes happy managers which makes happy developers. Somehow I ended up on a team of developers that thought 3rd party tools are for wimps, and they could write everything themselves. What a bunch of arrogant fools, and what a waste of time. If we had all the time in the world, maybe we could write a better tool, but I doubt it. I freely admit that developers who create tools like Hibernate are smarter than me. Why would I waste any time trying to reinvent the wheel? It may not be perfect, but it's better than anything I could create myself. As for the arrogant fools who liked to create their own tools instead of just finding one already built? They were all fired at various times for consistently not finishing projects. @Anonymous (of the last comment) You resonate pretty much what I wanted to say. It's true that ORMs like Hibernate are not without the warts. At the same time they offer a tonne of benefits too. My suggestion will be : 1. to use what good they offer (and they really offer a lot) 2. avoid the sucky features 3. 
use your judgement to selectively apply the ones that are debatable. If you do not want to use the persistence context or automated session management, use the stateless session interface, where you use your ORM to marshal / unmarshal data out of your RDBMS and get stuff in the form of detached objects. Hibernate offers this .. check out .. Session-less Approach ---------------------- For those wanting a "session-less" approach you can also check out Ebean ORM to see if it is more of your liking. This means you don't need to worry about - LazyInitialisationException - management of session objects (Hibernate session/ JPA EntityManager). - merge/persist/flush replaced with save Sorry for the blatent plug but if you are looking for a simpler/session less approach it would be worth a look :) Cheers, Rob. I don't think ORMs are a thing of the past but I also don't think they are a one size fits all option. I was wondering if you've had a chance to check out Squeryl ? This is a LINQ style DSL for Scala. Eg: def songsInPlaylistOrder = from(playlistElements, songs)((ple, s) => where(ple.playlistId === id and ple.songId === s.id) select(s) orderBy(ple.songNumber asc) ) This is translated to SQL and executed for you. If you need to refactor (assuming someone will develop adequate refactoring tools for Scala) nothing is missed because there are no hbms to worry about. I have looked at SQueryl. Then there is ScalaQuery as well and quite a few other frameworks inspired by LINQ. All of them do a nice job of providing type safe queries on the domain objects. This way you save a lot from writing SQLs. But my main concern is that this process can quickly go out of bounds in a large project where you may have thousands of tables. Besides hiding SQLs, an ORM also does this job of virtualizing the data layer. This means you can scale up your data layer as transparently using products like Terracotta, Coherence or Gigaspaces. 
I like the elegance of LINQ inspired frameworks, but still skeptical about their usage in a typical enterprise application which needs high scalability. I view SQueryl as more of a small scale option although I'd still write a domain model that is separate from the persistent model with that tool. The .Net world offers the best of both worlds with the NHibernate guys supplying a Linq provider. The Linq generates criteria API calls rather than SQL.
http://debasishg.blogspot.com/2009/10/are-orms-really-thing-of-past.html
Most say that primitives inside objects are stored on the heap; however, I got different results from the following performance test:

public class Performance {
    long sum = 0;

    public static void main(String[] args) {
        long startTime = System.currentTimeMillis();
        long pSum = 0;
        for (int i = 0; i < Integer.MAX_VALUE; i++) {
            pSum += i;
        }
        long endTime = System.currentTimeMillis();
        System.out.println("time of using primitive:" + Long.toString(endTime - startTime));
        System.out.println(pSum);

        long startTime1 = System.currentTimeMillis();
        Long Sum = 0L;
        for (int i = 0; i < Integer.MAX_VALUE; i++) {
            Sum += i;
        }
        long endTime1 = System.currentTimeMillis();
        System.out.println("time of using object:" + Long.toString(endTime1 - startTime1));
        System.out.println(Sum);

        Performance p = new Performance();
        long startTime2 = System.currentTimeMillis();
        for (int i = 0; i < Integer.MAX_VALUE; i++) {
            p.sum += i;
        }
        long endTime2 = System.currentTimeMillis();
        System.out.println("time of using primitive in object:" + Long.toString(endTime2 - startTime2));
        System.out.println(p.sum);
    }
}

The results look like this:

time of using primitive:1454
2305843005992468481
time of using object:23870
2305843005992468481
time of using primitive in object:1529
2305843005992468481

The times for using a primitive and using a primitive in an object are almost the same. So I am confused about whether primitives in objects are stored on the heap, and why the cost of using a primitive and using a primitive in an object is almost the same.

When you go

Long sum;
...
sum += 1;

the JVM, in theory, allocates a new Long each time, because Longs are immutable. Now, a really smart compiler could do something smart here, but this explains why your time for the second loop is so much larger. It is allocating Integer.MAX_VALUE new Sum objects. Yet another reason autoboxing is tricky.
One uses a primitive int, and in the other you can increment Performance.sum without needing to allocate a new Performance each time. Accessing the primitive int on the stack or in the heap should be roughly equally fast, as shown. Your timings have very little to do with heap vs. stack speed of access, but everything to do with allocating large numbers of Objects in loops. As others have noted, micro benchmarks can be misleading. Similar Questions
http://ebanshi.cc/questions/3253769/primitive-in-object-heap-or-stack
How do I stop my JTable model from being edited? I'm using a DefaultTableModel, and the only way I desire new rows to be entered is via the addRow method.

Created May 4, 2012

Jan Borchers

Override the isCellEditable method to prevent editing:

public class MyModel extends DefaultTableModel {
    public boolean isCellEditable(int row, int col) {
        return false;
    }
}

JTable will ask its model if it's ok to edit cells by calling this method. If you want to allow some rows, columns, or particular cells to be editable, change the above method to return true or false depending on the (row, column) passed in.
http://www.jguru.com/faq/view.jsp?EID=137289
Ok, I have a pandas dataframe like this:

         lat    long   level        date   time  value
3341  29.232 -15.652    10.0  20100109.0  700.0    0.5
3342  27.887 -13.668   120.0  20100109.0  700.0    3.2
...
3899  26.345 -11.234     0.0  20100109.0  700.0    5.8

In ipython, I try to select rows by the value of the lat column, but some of these selections return rows while others come back empty:

c[c['lat'] == 26.345]
c.loc[c['lat'] == 26.345]
c[c['lat'] == 27.702]
c.loc[c['lat'] == 27.702]

This is probably because you are asking for an exact match against floating point values, which is very, very dangerous. They are approximations, often printed to less precision than actually stored. It's very easy to see 0.735471 printed, say, and think that's all there is, when in fact the value is really 0.73547122072282867; the display function has simply truncated the result. But when you try a strict equality test on the attractively short value, boom. Doesn't work.

Instead of

c[c['lat'] == 26.345]

try:

import numpy as np
c[np.isclose(c['lat'], 26.345)]

Now you'll get values that are within a certain range of the value you specified. You can set the tolerance.
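To see the failure mode end to end, here is a self-contained sketch (made-up data, not the asker's frame) where strict equality comes back empty while np.isclose finds the row:

```python
import numpy as np
import pandas as pd

# 0.1 + 0.2 is stored as 0.30000000000000004, though it may print as 0.3
c = pd.DataFrame({"lat": [0.1 + 0.2, 26.345], "value": [5.8, 3.2]})

exact = c[c["lat"] == 0.3]            # strict equality: no match
close = c[np.isclose(c["lat"], 0.3)]  # tolerance-based: one match

print(len(exact), len(close))  # 0 1
```

np.isclose also takes rtol and atol keyword arguments if the default tolerances are too loose or too tight for your data.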
https://codedump.io/share/SHtZwkJuM0wc/1/pandas-selecting-row-by-column-value-strange-behaviour
{-# LANGUAGE DeriveDataTypeable #-}

{- |
'Sink's are a more flexible alternative to lazy I/O
('unsafeInterleaveIO'). Lazy I/O conflates evaluation with execution; a
value obtained from 'unsafeInterleaveIO' can perform side-effects during
the evaluation of pure code. Like lazy I/O, a 'Sink' provides a way to
obtain the value of the result of an 'IO' action before the action has
been executed, but unlike lazy I/O, it does not enable pure code to
perform side-effects. Instead, the value is explicitly assigned by a
later 'IO' action; repeated attempts to assign the value of a 'Sink'
fail.

The catch is that this explicit assignment must occur before the value
is forced, so just like with lazy I/O, you can't get away with
completely ignoring evaluation order without introducing bugs. However,
violating this condition does not violate purity because if the value is
forced before it has been assigned, it is ⊥.

In practice, using 'Sink's instead of 'unsafeInterleaveIO' requires a
bit more 'IO' boilerplate. The main practical difference is that while
'unsafeInterleaveIO' requires you to reason about effects from the point
of view of pure code, 'Sink's require you to reason about evaluation
order of pure code from the point of view of 'IO'; the 'IO' portion of
your program will have to be aware of what data is necessary to produce
*for* your pure code in order to be able to consume the output it
expects *from* your pure code.
-}
module Data.Sink
  ( Sink (), newSinkMsg, newSink, tryWriteSink, writeSink
  , MultipleWrites (..)
  ) where

import Control.Applicative
import Control.Exception
import Control.Monad
import Data.IORef
import Data.Maybe
import Data.Typeable
import System.IO.Unsafe (unsafeInterleaveIO)

-- | A write-once reference
newtype Sink a = Sink (IORef (Maybe a))
  deriving (Eq, Typeable)

-- | Create a new 'Sink' and a pure value. If you force the value
-- before writing to the 'Sink', the value is ⊥. If you write to the
-- 'Sink' before forcing the value, the value will be whatever you
-- wrote to the 'Sink'. The 'String' argument is an error message in
-- case you force the value before writing to the 'Sink'.
newSinkMsg :: String -> IO (Sink a, a)
newSinkMsg msg = do
  ref <- newIORef Nothing
  x <- unsafeInterleaveIO $ fromMaybe (error msg) <$> readIORef ref
  return (Sink ref, x)

-- | Create a new 'Sink' with a default error message.
newSink :: IO (Sink a, a)
newSink = newSinkMsg "Evaluated an unwritten sink"

-- | Attempt to assign a value to a 'Sink'. If the 'Sink' was
-- previously unwritten, write the value and return 'True', otherwise
-- keep the old value and return 'False'. This is an atomic (thread
-- safe) operation.
tryWriteSink :: Sink a -> a -> IO Bool
tryWriteSink (Sink ref) x =
  atomicModifyIORef ref $ maybe (Just x, True) (\y -> (Just y, False))

-- | Attempt to assign a value to a 'Sink'. If the 'Sink' had already
-- been written to, throw a 'MultipleWrites' exception. This is an
-- atomic (thread safe) operation.
writeSink :: Sink a -> a -> IO ()
writeSink sink x = do
  success <- tryWriteSink sink x
  unless success $ throwIO MultipleWrites

-- | An exception that is thrown by 'writeSink' if you attempt to
-- write to a 'Sink' more than once.
data MultipleWrites = MultipleWrites
  deriving (Show, Typeable)

instance Exception MultipleWrites
http://hackage.haskell.org/package/sink-0.1.0.1/docs/src/Data-Sink.html
I'm having a problem using PyAudio on my Raspberry Pi. All the samples seem to behave the same way, and I've pinpointed the problem to this smallest snippet:

import pyaudio
p = pyaudio.PyAudio()

I get a segmentation fault whenever the code tries to initialize PyAudio. Do you know what might be causing this? Is there any way to get more information on why the segmentation fault is occurring?

I've tested recording and playing sound from the sound card with:

arecord -D plughw:0,0 -f cd test.wav
aplay test.wav

and this worked fine.

Thank you.
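One way to get at least a little more information at the moment of the crash (a suggestion I haven't verified on the Pi; faulthandler is in the standard library from Python 3.3) is to enable a crash handler before the failing import, so a Python traceback is printed even on a native segfault:

```python
import faulthandler
faulthandler.enable()  # dump a Python traceback on SIGSEGV, SIGABRT, etc.

# Then trigger the crashing code, e.g.:
# import pyaudio
# p = pyaudio.PyAudio()

print(faulthandler.is_enabled())  # True
```

Running the script under gdb (gdb --args python script.py) would additionally show the native stack trace inside the PortAudio/ALSA libraries.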
https://www.raspberrypi.org/forums/viewtopic.php?p=553565
Configuration in Azure (Year of Azure–Week 14)

October 8, 2011

Another late post, and one that isn’t nearly what I wanted to do. I’m about a quarter of the way through this year of weekly updates and frankly, I’m not certain I’ll be able to complete it. Things continue to get busy with more and more distractions lined up. Anyways…

So my “spare time” this week has been spent looking into configuration options. How do you know where to load a configuration setting from? So you’ve sat through some Windows Azure training, and they explained that you have the service configuration and should use it instead of the web.config, and they covered using RoleEnvironment.GetConfigurationSettingValue. But how do you know which location to get a setting from? This is where RoleEnvironment.IsAvailable comes into play. Using this value, we can write code that will pull from the proper source depending on the environment our application is running in, like the snippet below:

if (RoleEnvironment.IsAvailable)
    return RoleEnvironment.GetConfigurationSettingValue("mySetting");
else
    return ConfigurationManager.AppSettings["mySetting"].ToString();

Take this a step further and you can put this logic into a property so that all your code can just reference the property. Simple!

But what about CloudStorageAccount? Ok, CloudStorageAccount has methods that automatically load from the service configuration. If I’ve written code to take advantage of this, am I stuck? Well, not necessarily. Now you may have seen a code snippet like this before:

CloudStorageAccount.SetConfigurationSettingPublisher(
    (configName, configSetter) =>
        configSetter(RoleEnvironment.GetConfigurationSettingValue(configName))
);

This is the snippet that needs to be done to help avoid the “SetConfigurationSettingPublisher needs to be called before FromConfigurationSetting can be used.” error message. But what is really going on here is that we are setting a handler for retrieving configuration settings. In this case, RoleEnvironment.GetConfigurationSettingValue.
But as is illustrated by a GREAT post from Windows Azure MVP Steven Nagy, you can set your own handler, and in this handler you can roll your own provider. The body of a method that returns the appropriate setting publisher looks something like this:

{
    if (RoleEnvironment.IsAvailable)
        return (configName, configSetter) =>
            configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
    return (configName, configSetter) =>
        configSetter(ConfigurationManager.AppSettings[configName]);
}

Flexibility is good!

Where to next? Keep in mind that these two examples both focus on pulling from configuration files already available to us. There’s nothing stopping us from creating methods that pull from other sources. There’s nothing stopping us from creating methods that can take a single string configuration setting that is an XML document and hydrate it. We can pull settings from another source, be it persistent storage or perhaps even another service. The options are up to us.

Next week, I hope (time available of course) to put together a small demo of how to work with encrypted settings. So until then!

PS – yes, I was renewed as an Azure MVP for another year! #geekgasm
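Language aside, the trick here is just a swappable lookup function chosen once at startup. A minimal Python sketch of the same publisher pattern, purely for illustration (none of these names are Azure APIs):

```python
_publisher = None  # the currently registered settings handler


def set_setting_publisher(fn):
    """Register the function that knows where settings live."""
    global _publisher
    _publisher = fn


def from_configuration_setting(name):
    """Look up a setting through whichever handler was registered."""
    if _publisher is None:
        raise RuntimeError("set_setting_publisher must be called first")
    return _publisher(name)


# At startup, pick the source once: cloud config, app config, env vars...
set_setting_publisher(lambda name: {"mySetting": "hello"}.get(name))

print(from_configuration_setting("mySetting"))  # hello
```

Calling the lookup before registering a handler raises an error, which mirrors the "SetConfigurationSettingPublisher needs to be called before FromConfigurationSetting can be used" message above.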
https://brentdacodemonkey.wordpress.com/2011/10/08/configuration-in-azure-year-of-azureweek-14/
Play Twine stories in Unity.

Twine and Twine-like stories in Unity. Cradle (formerly UnityTwine) is a plugin for Unity that powers the storytelling side of a game. Based on the foundations of Twine, it imports Twine stories, plays them and makes it easy to add custom interactivity via scripting. Writers can independently design and test their stories as they would a normal Twine story; programmers and artists can develop the interaction and presentation without worrying about flow control. When imported to Unity, Cradle kicks in and brings the two worlds together.

Snoozing is a short interactive story created with Cradle. The entire source code is available here. Clockwork by Aaron Steed, included in the Examples folder of the plugin, is provided courtesy of the author.

Cradle is in active development. It is currently being used for the development of the puzzle-adventure game Clastic Morning, as well as other smaller projects. Hopefully it will be useful to anyone looking to create narrative-based games in Unity. If you use Cradle in your project or game jam and find bugs, develop extra features, or even just have general feedback, please contribute by submitting to this GitHub page. Thank you!

Logo design by Eran Hilleli.

Table of Contents
- Overview
- What is Cradle?
- What is it not?
- Installation
- Importing a story
- Exporting from Twine
- From Twine 2
- From Twine 1
- Supported story formats
- Playback
- TwineTextPlayer
- Scripting
- Interacting with the story
- Reading story content
- Links
- Named links
- Variables
- Story state
- Pause and Resume
- Cues
- Simple example
- Setting up a cue script
- Cue types
- Coroutine cues
- Extending
- Runtime macros
- Variable types
- Code generation macros
- Additional story formats
- Source code
- Change log
The code required to handle and respond to these choices can be cumbersome and easily broken, getting messier and harder to maintain as the project grows. Cradle offers a clean, straightforward system for adding story-related code to a game, keeping it separate from other game code and ensuring that changes to the narrative flow and structure can be done with minimal hassle. An editor plugin that imports Twine stories into this framework. Twine is a popular, simple yet powerful tool for writing and publishing interactive stories. Using Twine to write the story parts of a game allows leveraging its tools and its wonderful community, with the added benefit of having a lightweight text-only version of the game that can be played and tested outside of Unity. Whenever a new version of the story is ready, it's published from Twine as an HTML file and dropped into Unity. It is not a Twine emulator. Cradle is not meant to be a Unity-based version of Twine (even though it comes pretty close with the TwineTextPlayer). It is also not an embedded HTML player in Unity. Rather, it turns a Twine file into a standard Unity script which, when added to a scene, runs the story and exposes its text and links to other game scripts, which can use them creatively. It is not only for text and dialog. Twine can be an excellent interactive dialog editor, but it can do many other things as well. Cradle doesn't make any assumptions about how your story will be used or displayed in your game. You could choose to trigger a story choice when the player clicks on a certain object, or treat a specific passage as a cue to play a cutscene. There are 2 ways to install Cradle into a Unity project: The Cradle asset importer listens for any new .html or .twee files dropped into the project directory, and proceeds to import them. The asset importer treats the Twine markup as logic, translating it into a C# script with a similar structure (and file name). 
A story can be reimported over and over again as necessary; the C# script will be refreshed.

Cradle supports the following Twine story formats:

* Harlowe, the default format of Twine 2 (recommended)
* Sugarcane, the default format of Twine 1
* SugarCube, a richer version of Sugarcane that works in both Twine 1 and 2

Most features of these story formats are available in Cradle, but there are some limitations. Please see their individual readmes in the Documentation folder for information on supported macros, syntax, and more. Cradle can be extended to support additional story formats (Twine or other); see Extending.

Once a story is imported and a story script is generated, this script can be added to a game object in the scene like any normal script. All story scripts include the following editor properties:

* AutoPlay: when true, begins playing the story immediately when the game starts.
* StartPassage: indicates which passage to start playback from.
* AdditionalCues: additional game objects on which to search for cues (see the cues section for more info).
* OutputStyleTags: when checked, the story will output tags that indicate style information (see the styles section for more info).

Included in Cradle is a prefab and script that can be used to quickly display and interact with a story in a text-only fashion. The prefab is built with Unity UI components (4.6+) to handle text and layout.

Each passage in an imported story becomes a function that outputs text or links. Custom scripts can listen for generated output, displaying it as necessary and controlling which links are used to advance the story.

To understand scripting with Cradle it is first necessary to get to know the Story class, from which all imported stories derive. The Story class is at the heart of Cradle. It contains the story content and includes several methods that allow other scripts to play and interact with a running story:

* Begin() - starts the story by playing the passage defined by StartPassage.
* DoLink(string linkName) - follows the link with the specified name (see links).
* GoTo(string passageName) - jumps to the specified passage and plays the story from there. (Only recommended for special cases.)

Example:

```c#
public Story story;

void Start()
{
    story.Begin();
}

void Update()
{
    // You'd want your script to check a few things before doing this, but hey, this is an example
    if (Input.GetMouseButtonDown(0))
        story.DoLink("myLink");
}
```

After a passage has been reached, its output can be inspected on your Story script:

* Output - a list of all the output of the current passage.
* GetCurrentText() - a sub-list of Output that includes only the text of the passage.
* GetCurrentLinks() - a sub-list of Output that includes only the links of the passage.
* Tags - the tags of the current passage.
* Vars - the current values of any global story variables.
* CurrentPassageName - the name of the current passage that was just executed.
* PassageHistory - a list of all passage names visited since the story began, in chronological order. (Passages will appear twice if visited twice.)

Passage output can also be intercepted while it is executing, using cues or the OnOutput event:

```c#
public Story story;

void Start()
{
    story.OnOutput += story_OnOutput;
    story.Begin();
}

void story_OnOutput(StoryOutput output)
{
    // Do something with the output here
    Debug.Log(output.Text);
}
```

As a web-based format, Twine is built around the concept of links. Clicking on links is the primary way Twine games are played and the way a story is advanced. In Cradle, links are represented by the StoryLink class, and perform either one or both of the following functions when triggered with Story.DoLink:

* Go to a different passage
* Execute an 'action' - a fragment of a passage which wasn't shown when the passage was entered (example: setting variable values, revealing additional text)

If both an action and a passage name are specified, the action is executed first, and only when it is done does the story advance to the next passage.
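A common pattern with links is to listen for the link outputs a passage produces, present them to the player, and then advance the story by following one of them. Here is a minimal sketch of that pattern; the StoryLink `Text` property and the link name `"continue"` are illustrative assumptions, not part of the documentation above:

```c#
using UnityEngine;
using Cradle;

public class LinkMenu : MonoBehaviour
{
    public Story story;

    void Start()
    {
        story.OnOutput += OnOutput;
        story.Begin();
    }

    void OnOutput(StoryOutput output)
    {
        // Collect only the link outputs as the passage generates them
        if (output is StoryLink)
            Debug.Log(((StoryLink)output).Text);
    }

    void Update()
    {
        // Advance the story via a link when the player clicks
        // ("continue" is a hypothetical link name)
        if (Input.GetMouseButtonDown(0))
            story.DoLink("continue");
    }
}
```

In a real game you would render the collected links as UI buttons rather than logging them, and wire each button to a DoLink call.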
Consider the following Twine link: [[Visit your grandmother|grandma]]

To activate it and enter the "grandma" passage you must call Story.DoLink("Visit your grandmother") in your script. But what if the writer decides to change the text of the link to "Go to your grandmother's house"? You will have to update your script in Unity or the call to DoLink will fail. To avoid breaking links in this way, Cradle extends the standard link syntax to allow naming the link:

[[visitGrandma = Visit your grandmother|grandma]]

Now you can call Story.DoLink("visitGrandma") and it will work. As long as the writer keeps the name intact, changing the rest of the text will not affect scripting. Why not just use the target passage as the link's name, you ask? For two reasons:

Stories often use macros to store values in variables, reading them later in order to check conditions (if), display them, and so on. Variables can be accessed from a script using the getter or setter. Example:

```c#
public Story story;

void Update()
{
    if (story.Vars["ammo"] > 10)
    {
        Debug.Log("Ammo limited to 10");
        story.Vars["ammo"] = 10;
    }
}
```

Using the generated variable directly:

```c#
public JimsAdventure story; // generated class name from the file JimsAdventure.twee

void Update()
{
    if (story.Vars.ammo > 10)
    {
        Debug.Log("Ammo limited to 10");
        story.Vars.ammo = 10;
    }
}
```

Notes:

When a story is playing, it can have one of several states. The state of the story is accessible from the Story.State property:

* Idle - the story has either not started or has completed executing a passage or a passage fragment. Inspect the Output property of the story to see what was outputted, and then call DoLink() to continue.
* Playing - the story is currently executing a passage; interaction methods will not work.
* Paused - the story is currently executing a passage, but was paused in the middle; interaction methods will not work. Call Resume() to continue.
To detect when the state has changed, use the OnStateChanged event:

```c#
public Story story;

void Start()
{
    story.OnStateChanged += story_OnStateChanged;
    story.Begin();
}

void story_OnStateChanged()
{
    if (story.State == StoryState.Idle)
    {
        // Interaction methods can be called now
        story.DoLink("enterTheCastle");
    }
}
```

The story can be paused in order to do time-consuming tasks, such as waiting for animations to end or for a scene to load, before further story output is generated. Pausing is only necessary when the story is in the Playing state; if it is Idle, there is nothing to pause.

Example (using cues):

```c#
public Story story;
public Sprite blackOverlay;
const float fadeInTime = 2f;

IEnumerator castle_Enter()
{
    story.Pause();

    blackOverlay.color = new Color(0f, 0f, 0f, 1f);
    for (float t = 0; t <= fadeInTime; t += Time.deltaTime)
    {
        // Update the alpha of the sprite
        float alpha = 1f - Mathf.Clamp(t / fadeInTime, 0f, 1f);
        blackOverlay.color = new Color(0f, 0f, 0f, alpha);

        // Wait a frame
        yield return null;
    }

    story.Resume();
}
```

Cradle includes a powerful cue system that allows scripts to easily run in conjunction with the current passage.

Note: before version 2.0, cues were called 'hooks'; this was changed to avoid confusion with the term hook as it is used in the Harlowe story format.

Let's say your story includes 2 passages named "Attack" and "Defend". Here's a script with cues that change the camera background color according to the passage:

```c#
bool shieldsUp;

void Attack_Enter()
{
    Camera.main.backgroundColor = Color.blue;
}

void Defend_Enter()
{
    Camera.main.backgroundColor = Color.red;
    shieldsUp = true;
}

void Defend_Update()
{
    // Runs every frame like a normal Update method,
    // but only when the current passage is Defend
}

void Defend_Exit()
{
    shieldsUp = false;
}
```

The following cue types are supported (replace 'passage' with the name of a passage):

* passage_Enter() - called immediately when a passage is entered.
This means after Begin, DoLink or GoTo are called, and whenever a sub-passage is embedded via a macro (i.e. Twine's display macro).
* passage_Exit() - called on the current passages just before a new main passage is entered via DoLink or GoTo. (An embedded sub-passage's exit cue is called before that of the passage which embedded it, in last-in-first-out order.)
* passage_Done() - called when the passage is done executing and the story has entered the Idle state. All passage output is available.
* passage_link_Done() - called after a link's action has been completed, and before the next passage is entered (if specified). (Replace 'link' with the name of a link.)
* passage_Update() - when the story is in the Idle state, this cue is called once per frame.
* passage_Output(StoryOutput output) - whenever a passage generates output (text, links, etc.), this cue receives it.

If you want to attach a cue to a passage with a name that contains spaces or other characters not allowed in C#, you can decorate your method with an attribute:

```c#
using Cradle;

// Specifies that this method is an Enter cue for the passage named "A large empty yard"
[StoryCue("A large empty yard", "Enter")]
void enterYardCutscene()
{
    // ...
}
```

Notes:

* You can have multiple StoryCue attributes on a single method.
* The StoryCue attribute takes precedence over the method's name, so if an attribute is present the method's name is ignored, even if it looks like a valid cue name.

If a cue is an enumeration method (returns IEnumerator in C# or includes a yield statement in UnityScript) it is used to start a coroutine. Coroutine cues behave just like normal Unity coroutines.

```c#
IEnumerator spaceship_Enter()
{
    Debug.Log("Wait for it...");
    yield return new WaitForSeconds(3f);
    Debug.Log("Go!");
}
```

Notes:

* Pause() and Resume() (example)

Cradle can be extended to include macros and var types that do not exist within the original story format. Runtime macros are the simplest kind of extension to add to Cradle.
A runtime macro is simply a function that you can call from within a story passage. It can't generate any additional story output or affect the flow of passages, but it can trigger some Unity-specific functionality at precise points in your story.

Instead of MonoBehaviour, your class should inherit from Cradle.RuntimeMacros, and each macro method should be marked with the [RuntimeMacro] attribute. If you want the name of the macro as written in Twine to be different from the C# method name, simply add the name to the attribute: [RuntimeMacro("sfx")]

Here is a complete example that plays/stops an audio source:

```c#
using UnityEngine;

public class SoundEffectsMacros : Cradle.RuntimeMacros
{
    [RuntimeMacro]
    public void sfxPlay(string soundName)
    {
        GameObject.Find(soundName).GetComponent<AudioSource>().Play();
    }

    [RuntimeMacro]
    public void sfxStop(string soundName)
    {
        GameObject.Find(soundName).GetComponent<AudioSource>().Stop();
    }
}
```

Here's how to use it in Harlowe:

```
Gareth stares intently at the screen and presses the play button.
(sfx-stop: "ambient")
(sfx-play: "recording")
```

Notes:

* To access the Story component from within a macro, simply use this.Story.
* If you want to add properties that can be assigned from the editor, it is recommended to pass the call on to a regular MonoBehaviour script attached to the same GameObject as your Story component. For example, this.Story.SendMessage("PlaySound", soundName); will pass the macro on to any script attached to that GameObject, where properties can be defined/assigned and the actual work can be done.
* An instance of this class is created once per story, so any member variables will exist for the lifetime of your Story component.
* When played in the browser, the Sugarcane/Cube story formats might throw an error if an unrecognized function is encountered. The easiest way to avoid this is to create a custom dummy JavaScript function that will avoid the error.
Example (add this in your story's JavaScript):

```javascript
window.sfxPlay = function() {};
window.sfxStop = function() {};
```

(TODO)

(TODO)

(TODO)

The plugin source code is available in the .src directory on GitHub (the period prefix hides this folder from Unity). There are separate solutions for Visual Studio and MonoDevelop. To build, open the appropriate solution for your IDE and run "Build Solution" (Visual Studio) or "Build All" (MonoDevelop). The DLLs created will replace the previous DLLs in the Cradle plugin directory. If you make modifications to the source code, you might want to run the Cradle test suite.

Pre-release.
A simple real-world example

Now that we have explored the basics, it's time for a real-world example. I often refer to the logs of my instant messaging program or IRC client in order to find important links that I neglected to bookmark but end up needing later. Unfortunately, message logs only get stored on the computer where the message is received. I want to be able to preserve all of the links that I receive in messages and have that data available on all of my computers. It seems like a perfect job for CouchDB and Ubuntu One.

I made a small script called LinkCollector that runs in the background and automatically extracts links from messages. The links are stored in Desktop CouchDB and propagated to my other computers via Ubuntu One. To get access to the messages, it monitors D-Bus event signals from Pidgin and XChat. The following code is the complete script (note: the backslash escapes in the regular expression below are reconstructed, since they were stripped by the web formatting):

```python
import sys, time, re
import dbus, dbus.glib, gobject
from desktopcouch.records.server import CouchDatabase
from desktopcouch.records.record import Record as CouchRecord

COUCH_MESSAGE_TYPE = ""
URL_REGEX = re.compile(r"(?<!\w)((?:http|https|ftp|mailto):/*(?!/)"
    r"(?:[\w$+*@&=\-/]|%[a-fA-F0-9]{2}|[?.:(),;!'~](?!(?:\s|$))|"
    r"(?:(?<=[^/:]{2})#)){2,})")

XCHAT_PLUGIN_FILE = sys.argv[0]
XCHAT_PLUGIN_NAME = "LinkCollector"
XCHAT_PLUGIN_DESC = "Collects links from XChat messages"
XCHAT_PLUGIN_VER = "1.0"

database = CouchDatabase("links", create=True)

bus = dbus.SessionBus()

xchat_plugin_obj = bus.get_object("org.xchat.service", "/org/xchat/Remote")
xchat_plugin = dbus.Interface(xchat_plugin_obj, "org.xchat.connection").Connect(
    XCHAT_PLUGIN_FILE, XCHAT_PLUGIN_NAME, XCHAT_PLUGIN_DESC, XCHAT_PLUGIN_VER)
xchat_obj = bus.get_object("org.xchat.service", xchat_plugin)
xchat = dbus.Interface(xchat_obj, "org.xchat.plugin")
xchat.HookPrint("Channel Message", 0, 0)

purple_obj = bus.get_object("im.pidgin.purple.PurpleService",
    "/im/pidgin/purple/PurpleObject")
purple = dbus.Interface(purple_obj, "im.pidgin.purple.PurpleInterface")

def on_xchat_message(data, id, unknown):
    for url in URL_REGEX.findall(data[1]):
        record = CouchRecord({
            "program": "xchat",
            "message": data[1],
            "time": time.time(),
            "user": data[0],
            "url": url,
        }, COUCH_MESSAGE_TYPE)
        database.put_record(record)

bus.add_signal_receiver(on_xchat_message, "PrintSignal", "org.xchat.plugin")

def on_purple_message(account, name, message, conv, flags):
    for url in URL_REGEX.findall(message):
        record = CouchRecord({
            "program": "pidgin",
            "message": message,
            "time": time.time(),
            "user": name,
            "url": url,
        }, COUCH_MESSAGE_TYPE)
        database.put_record(record)

bus.add_signal_receiver(on_purple_message, "ReceivedImMsg",
    "im.pidgin.purple.PurpleInterface")

gobject.MainLoop().run()
```

The program is relatively straightforward. A lot of the code is D-Bus boilerplate that sets up the connections and attaches callback methods to the relevant signals. We want to connect to the ReceivedImMsg signal for Pidgin and PrintSignal for XChat. A regex is used to extract the links from the message text. The callback methods, which are performed on every message, iterate through the URLs that are matched by the regex and create a new CouchDB record for each one. At the very end of the script, a GObject main loop is initiated so that it will stay running and perform the callbacks as intended. The script is headless and can simply be left running in the background.

Working with CouchDB views

Now that we have the data in CouchDB, we need to be able to get it out so that we can actually use it. As I explained earlier, CouchDB doesn't have a conventional query mechanism. We need to create a "view" in JavaScript that CouchDB will use to filter the JSON data that is stored in the database. Each CouchDB view can have two functions: map and reduce. The map function iterates over every item and allows you to emit the data that you want as your output.
The reduce function, which is optional, works like 'fold' from functional programming and can be used to combine all of the data outputted by the map function into a single value.

When you emit a record in a map function, you return a key and a value for each item. These values can be programmatically generated with JavaScript code or can be retrieved from the content of the record. The keys do not have to be unique.

As an example, we are going to use a view to retrieve all of the links that were posted on IRC by Caesar. To do that, we need to match against the "program" and "user" fields in our record. A simple and naive solution would be to create a JavaScript function that uses an if statement to match against those values:

```javascript
function(doc) {
  // This is generally the wrong approach
  if (doc.program == "xchat" && doc.user == "Caesar")
    emit(null, doc);
}
```

Although that will work, it's problematic because you would need to create a separate view for every single variation that you want to match against. For example, if I wanted to also be able to find all IRC links from Clint, I'd have to make a whole new view with a different function.

A better way to accomplish the above example is to create a general view that exposes the fields that we want to match against as keys; then we can use startkey and endkey to get just the records that we want. This is the view that we would create:

```javascript
function(doc) {
  // This is much smarter
  emit([doc.program, doc.user], doc);
}
```

To get the values we want from this view, we tell CouchDB to give us just the records that have ["xchat", "Caesar"] as the key. If you were using CouchDB's HTTP interface directly, you would do that by specifying those keys with the startkey and endkey parameters in the URL. When we filter a CouchDB view in Python, we use Python's getitem and slice syntax to supply the desired keys. In the following code example, I'll show you how to create a view in Python and filter the results.
```python
#!/usr/bin/env python
from desktopcouch.records.server import CouchDatabase
from desktopcouch.records.record import Record as CouchRecord

database = CouchDatabase("links")

if not database.view_exists("program_and_user", "links"):
    viewfn = 'function(doc) { emit([doc.program, doc.user], doc); }'
    database.add_view("program_and_user", viewfn, None, "links")

results = database.execute_view("program_and_user", "links")
for rec in results[["xchat", "Caesar"]]:
    print rec.value["url"]
```

Before we create our view, we use the view_exists method to make sure that it hasn't already been created. For a variety of performance reasons, we want to avoid creating the view over again every time we run the program. If the view does not already exist, we define the JavaScript function in a string and then use the add_view method to add the view to the database. The first argument to the view_exists and add_view methods is the name that we want to call the view. The second argument, which is optional, is the name of the CouchDB design document in which the view should be stored. Design documents are a simple mechanism for organizing views in CouchDB.

The script uses the execute_view method to retrieve the results of the view from CouchDB. The result object is lazy, which means that the actual result records won't be instantiated until we start accessing the view. We filter the results against the ["xchat", "Caesar"] key and then iterate over the output and display the link value from each record.

For our next trick, we are going to try to get all of the links that were sent within the last 24 hours. When LinkCollector puts links into the database, it includes a "time" field in the record. For the sake of convenience, I chose to use the UNIX timestamp format to store my time values. This makes it really easy to do programmatic filtering.
To filter records by date and time, all we have to do is create a view that uses the time as the key, and then we can use the desired beginning and ending times as the startkey and endkey values.

```python
import time, datetime
from desktopcouch.records.server import CouchDatabase
from desktopcouch.records.record import Record as CouchRecord

database = CouchDatabase("links")

viewfn = 'function(doc) { emit(doc.time, doc); }'
if not database.view_exists("time", "links"):
    database.add_view("time", viewfn, None, "links")

results = database.execute_view("time", "links")
yesterday = datetime.datetime.now() - datetime.timedelta(days=1)
for rec in results[time.mktime(yesterday.timetuple()):time.time()]:
    print rec.value["url"]
```

In Python, the filter is expressed using slice notation. We use a timedelta to subtract a day from a datetime object that represents the current time. The time.mktime method is called to convert that into a UNIX timestamp which we can use for our filter. The time.time method gets us the current time in UNIX timestamp format. As you can see, the filter retrieves all of the records with a time value that is between one day ago and the current time.
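The slice-based filtering above maps directly onto ordinary Python operations: view rows are kept sorted by key, and a start/end key pair selects a contiguous range. Here is a self-contained sketch of the same 24-hour filter (no CouchDB required; the sample rows are made up):

```python
import time

# Pretend view rows: (key, value) pairs as emitted by the map function,
# kept sorted by key just as CouchDB stores them.
now = time.time()
rows = sorted([
    (now - 90000, {"url": "http://example.com/old"}),     # more than 24h ago
    (now - 3600,  {"url": "http://example.com/recent"}),  # 1 hour ago
    (now - 60,    {"url": "http://example.com/newest"}),  # 1 minute ago
], key=lambda row: row[0])

def range_filter(rows, startkey, endkey):
    # Equivalent to the startkey/endkey parameters of the HTTP API,
    # or to results[startkey:endkey] in desktopcouch
    return [value for key, value in rows if startkey <= key <= endkey]

last_day = range_filter(rows, now - 86400, now)
print([rec["url"] for rec in last_day])  # only the two recent links survive
```

The same helper also expresses the exact-key case from the previous example: using the compound key as both startkey and endkey selects only the rows whose key matches it.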
How to execute a Java program without a main() method

Let's learn how to execute a Java program without a main() method. Yes, there was a way to run a Java program without writing main() at all:

```java
public class Main {
    static {
        System.out.println("hello world");
    }
}
```

You can even try it this way:

```java
public class Main {
    public static void main(String[] args) {
    }

    static {
        System.out.println("hello world");
    }
}
```

Output:
hello world

It is possible to run a Java program without a main() method by using a static block. The reason this works is that a static initializer block is executed when the class is loaded, before the main() method is looked up.

There is a limitation to this trick, however: the first form (a class with only a static block) only runs on older JDKs (Java 6 and earlier). On later versions (Java 7 and up) the launcher checks for a main() method before initializing the class, so you will get an error telling you to add a main method to the program. (Even on the older versions, the JVM still reports a missing main method after the static block has printed, unless the block exits the JVM itself.) Therefore, on current versions there is no way to run a Java program without a main() method.

The second example above works on all JDK versions: the static block is simply executed before the (empty) main() method runs, and no errors are generated.

Hope the examples are clear and the explanation is well understood.
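The ordering claim in the second example is easy to verify on any modern JDK: class initialization always runs the static block before the body of main executes. A small self-contained sketch that records the order:

```java
class StaticOrder {
    // Records the order in which the pieces run
    static final StringBuilder log = new StringBuilder();

    static {
        log.append("static;");
    }

    public static void main(String[] args) {
        log.append("main;");
        // The static block's entry always comes first
        System.out.println(log);
    }
}
```

Running the class prints "static;main;", confirming that the static initializer fired during class loading, before any code in main.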
Seems that I cannot get this code to work correctly. I've looked over it many times, but for whatever reason I'm still unable to get this code to run all the way through and output the last part. It seems to happen with every one of my simple little programs. Can someone please assist me and tell me what I am doing wrong?

Code:
```cpp
// Simple program to figure out the size of a volume of a box...
#include <vcl.h>
#include <iostream>
#pragma hdrstop
#pragma argsused

using namespace std;

int main()
{
    double side1, side2, square;

    cout << "Enter the frist side of the square: ";
    cin >> side1;
    cout << "Now enter side 2: ";
    cin >> side2;

    square = (side1 * side2);

    cout << "Square total is: " << square;

    return 0;
}
```
Forum: Can non-users view my userpage and contributions?

From Uncyclopedia, the content-free encyclopedia

Note: This topic has been unedited for 2598 days. It is considered archived - the discussion is over. Do not add to it unless it really needs a response.

So, I've tried searching User:Mental Gear and found nothing, so I was wondering: if I told my friends on another site about my account here, would they be able to find my userpage and view my contribution history? —The preceding unsigned comment was added by Mental_Gear (talk • contribs)

- The people below this post are, with all due respect, morons. I don't think you can search for userpages like that. Your contributions can be found by going to Special:Contributions/Mental Gear; I don't think you can search for contributions, either. • <19:32, 13 Jun 2008>
- Well, you can search userspace if you tick the "User" box in the "Search in namespaces:" section of the search results screen. It's best to un-tick the mainspace box, otherwise you will get loads of links to the wrong things, especially if your name is something like Mental_Gear, but less so if your name is Cajek. Although if your name is Cajek you will get something, but again, depending on the search domain you include, results may vary... Or you could just give your friends a link to your user page (). That might be easier. MrNFromOuterSpace
- Give them this link: [1]. MrN Fork you! 18:31, Jun 13
- Yes....they can also touch you whilst you are sleeping. -- Sir Mhaille (talk to me)
- /me decides not to go to bed tonight... MrN Fork you! 19:23, Jun 13
15 March 2010 09:10 [Source: ICIS news]

SINGAPORE (ICIS news)--Mitsui Chemicals plans to resume production at its phenol-acetone plant in Chiba in the next couple of days, following an unplanned shutdown early this month, a company source said on Monday.

The plant, which can produce 190,000 tonnes/year of phenol and 114,000 tonnes/year of acetone, was taken off line on 1 March as feedstock propylene supply was affected by an outage at its upstream cracker facility, the source said.

"We will procure propylene from the market and we plan to resume operation from mid-March," the source said. The plant was also scheduled for a month-long maintenance shutdown in mid-October, the source added.

Mitsui Chemicals also has a second phenol-acetone plant. That plant was shut on 10 March for a scheduled turnaround, and would remain off line until 13 May.
After a month of active development, gathering feedback and introducing new features, we are thrilled to announce the GA release of the new JavaScript client! Wait, a new JavaScript client? Yes! We announced the RC1 a few weeks back, and now we're production-ready.

Try out the Elasticsearch JavaScript client

First, install the client and run Elasticsearch. You can install Elasticsearch locally with our docker image.

```
npm install @elastic/elasticsearch
```

Then create a JavaScript file (TypeScript is supported as well!) and paste inside the following snippet:

```javascript
'use strict'

const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: '' })

async function run () {
  // Let's start by indexing some data
  await client.index({
    index: 'game-of-thrones',
    body: {
      character: 'Ned Stark',
      quote: 'Winter is coming.'
    }
  })

  await client.index({
    index: 'game-of-thrones',
    body: {
      character: 'Daenerys Targaryen',
      quote: 'I am the mother of dragons.'
    }
  })

  await client.index({
    index: 'game-of-thrones',
    // here we are forcing an index refresh,
    // otherwise we will not get any result
    // in the consequent search
    refresh: true,
    body: {
      character: 'Tyrion Lannister',
      quote: 'A mind needs books like a sword needs a whetstone.'
    }
  })

  // Let's search!
  const { body } = await client.search({
    index: 'game-of-thrones',
    body: {
      query: {
        match: { quote: 'winter' }
      }
    }
  })

  console.log(body.hits.hits)
}

run().catch(console.log)
```

What's new?

In addition to all the features added with the new client, in the past month we have listened to your feedback and added new cool features. Recently added features include observability, support for sniffing hostnames, improved type definitions, and support for a custom HTTP agent. We also vastly improved our JS client documentation and decreased the size of the library.

Observability

Thanks to the new observability features, it will now be easier to connect the dots between events.
In every event, you will find the id of the request that generated the event, as well as the name of the client (which will be very useful if you are working with child clients). Below you can see all the data that is exposed by our observability events:

```typescript
body: any;
statusCode: number | null;
headers: anyObject | null;
warnings: string[] | null;
meta: {
  context: any;
  name: string;
  request: {
    params: TransportRequestParams;
    options: TransportRequestOptions;
    id: any;
  };
  connection: Connection;
  attempts: number;
  aborted: boolean;
  sniff?: {
    hosts: any[];
    reason: string;
  };
};
```

Want to know more? Check out our observability documentation!

Type definitions

The client offers first-class support for TypeScript, with type definitions for its entire API and the parameters of every method. Type definitions for request and response bodies are not yet supported, but we are working on that! In the meantime, we have vastly improved the developer experience by using generics for all the body definitions.

```typescript
import { RequestParams } from '@elastic/elasticsearch'

interface SearchBody {
  query: {
    match: { foo: string }
  }
}

const searchParams: RequestParams.Search<SearchBody> = {
  index: 'test',
  body: {
    query: {
      match: { foo: 'bar' }
    }
  }
}

// This is valid as well
const untypedSearchParams: RequestParams.Search = {
  index: 'test',
  body: {
    query: {
      match: { foo: 'bar' }
    }
  }
}
```

You can find a complete example in our TypeScript documentation.

Conclusion

We're excited about the new JavaScript client, and we hope you are too. Try it out today, locally or on the Elasticsearch Service, and let us know what you think. If you want to know more, you can open an issue in the client repository, post a question in Discuss, or ping @delvedor on Twitter. Happy coding!
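Since every event carries the id of the request that generated it, events from different stages of the same request can be correlated after the fact. A self-contained sketch of that idea (it simulates the event payloads rather than using the client itself, so the shapes are simplified):

```javascript
// Group observability events by their request id so that a request's
// lifecycle can be reconstructed afterwards.
function correlate(events) {
  const byRequest = new Map()
  for (const event of events) {
    const id = event.meta.request.id
    if (!byRequest.has(id)) byRequest.set(id, [])
    byRequest.get(id).push(event.phase)
  }
  return byRequest
}

// Simulated payloads: only the fields this sketch needs
const events = [
  { phase: 'request', meta: { request: { id: 1 } } },
  { phase: 'request', meta: { request: { id: 2 } } },
  { phase: 'response', meta: { request: { id: 1 } } },
  { phase: 'response', meta: { request: { id: 2 } } }
]

const byRequest = correlate(events)
console.log(byRequest.get(1)) // both phases of request 1, in order
```

In a real application you would collect the payloads from the client's observability events and feed them to a function like this when debugging.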
This post provides a sample WebTestRequestPlugin that shows you how to stop a web test when an error occurs. Often when a test has an error, the remaining requests will either fail or not do what was intended. If they fail, it might be difficult to pinpoint the exact problem with your web test because the test may have many errors. This can also cause a problem when you are running your tests within a load test. The load test will report many errors, most of which are misleading. The load test will also stop collecting errors after a certain number of errors have been hit.

A WebTestRequestPlugin is a block of code which can be executed right before or right after a request is submitted. Here is a link with more help on this kind of plug-in: WebTestRequestPlugin Help.

This example plug-in will stop a test on 2 different conditions. First, if there is no response returned. This may happen if you have custom code that is failing. For example, if you try to bind a query string parameter to a parameter in the WebTestContext, but the parameter does not exist in the context. The second condition for stopping the test is if the HTTP status code is in the 400 or 500 range.

Here is the plug-in code:

using System;
using System.Collections.Generic;
using System.Text;
using Microsoft.VisualStudio.TestTools.WebTesting;

namespace WebTestExamples
{
    public class StopTestPlugin : WebTestRequestPlugin
    {
        public override void PostRequest(object sender, PostRequestEventArgs e)
        {
            // Check 2 conditions:
            //  1) If there is no response, that usually indicates an
            //     error prior to even sending a request. For example,
            //     binding to a context parameter that does not exist.
            //
            //  2) Check the return code. If it is in the 400 or 500
            //     range, then stop the test.
            if (!e.ResponseExists || ((int)e.Response.StatusCode) >= 400)
            {
                e.WebTest.Stop();
            }
        }

        public override void PreRequest(object sender, PreRequestEventArgs e)
        {
        }
    }
}

To use this plug-in, do the following:

1) Add this class to a test project
2) Compile the class
3) Open a web test
4) Click on the web test toolbar item for Set Request Plug-in...
5) Select the new plug-in in the dialog which opens.
6) Save and run a test.

Now if a request hits an error, the test should stop. When you open the result for the web test, the request which hit the error should always be the last request in the result. You will not see the requests which were not executed. Hopefully this will make it easier to identify the exact request which is causing your test to error.

How can you stop a web test if an error occurred during preRequest plugin?
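Regarding the closing question: the same pattern should work from PreRequest, since the event args there also expose the running WebTest. A hedged sketch, not from the original post; the "SessionId" parameter name is made up for illustration:

```csharp
public override void PreRequest(object sender, PreRequestEventArgs e)
{
    // Stop before the request is even sent if a precondition fails,
    // e.g. a context parameter the request depends on is missing.
    // "SessionId" is a hypothetical parameter name.
    if (!e.WebTest.Context.ContainsKey("SessionId"))
    {
        e.WebTest.Stop();
    }
}
```

Because Stop() is called before the request executes, the failing request would not appear in the result at all.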
https://blogs.msdn.microsoft.com/slumley/2006/12/15/stopping-a-webtest-on-a-request-error/
CC-MAIN-2017-09
refinedweb
460
64
OGSF library - setting and manipulating keyframes animation.

#include <stdlib.h>
#include <grass/gis.h>
#include <grass/glocale.h>
#include <grass/ogsf.h>

OGSF library - setting and manipulating keyframes animation.

GRASS OpenGL gsurf OGSF Library

(C) 1999-2008 by the GRASS Development Team

This program is free software under the GNU General Public License (>=v2). Read the file COPYING that comes with GRASS for details.

Definition in file gk2.c.

Add keyframe.

The pos value is the relative position in the animation for this particular keyframe - used to compare relative distance to neighboring keyframes, it can be any floating point value. The fmask value can be any of the following or'd together: Other fields will be added later.

The value precis and the boolean force_replace are used to determine if a keyframe should be considered to be at the same position as a pre-existing keyframe. e.g., if anykey.pos - newkey.pos <= precis, GK_add_key() will fail unless force_replace is TRUE.

Definition at line 429 of file gk2.c.

Deletes all keyframes, resets field masks. Doesn't change number of frames requested.

Definition at line 310 of file gk2.c.

References gk_free_key(), and NULL.

Delete keyframe.

The values pos and precis are used to determine which keyframes to delete. Any keyframes with their position within precis of pos will be deleted if justone is zero. If justone is non-zero, only the first (lowest pos) keyframe in the range will be deleted.

Definition at line 367 of file gk2.c.

Move keyframe.

Precis works as in other functions - to identify keyframe to move. Only the first keyframe in the precis range will be moved.

Definition at line 336 of file gk2.c.

References key_node::next, and key_node::pos.

Print keyframe info.

Definition at line 209 of file gk2.c.

References _, key_node::fields, G_fatal_error(), KF_DIRX, KF_DIRY, KF_DIRZ, KF_FOV, KF_FROMX, KF_FROMY, KF_FROMZ, KF_TWIST, key_node::next, NULL, and key_node::pos.
Recalculate path using the current number of frames requested. Call after changing number of frames or when keyframes change.

Definition at line 243 of file gk2.c.

Referenced by GK_update_tension().

Update tension.

Definition at line 195 of file gk2.c.

References GK_update_frames().
https://grass.osgeo.org/programming7/gk2_8c.html
CC-MAIN-2020-40
refinedweb
369
69.79
How to set up a client library varies by programming language. Select the tab for the language you're using for development. If you are using a language not available here, see the complete list of available Downloads.

Java

Follow the setup instructions in the Java Quickstart.

Python

Using the Google APIs Client Library for Python requires that you download the Python source. In the future, packages will be provided. Refer to the project page for more details. Run the following commands to download and install the source:

$ hg clone google-api-python-client
$ cd google-api-python-client
$ sudo python setup.py install

You can now import the classes you will need using the following statements:

from apiclient.discovery import build
from apiclient.oauth import OAuthCredentials
import httplib2
import oauth2 as oauth

PHP

Follow the setup instructions in the PHP Quickstart.

.NET

Install the Google Tasks NuGet package.
https://developers.google.com/google-apps/tasks/setup
CC-MAIN-2015-27
refinedweb
150
68.06
- Getting Wii Controller working with Raspberry Pi
- Getting Serial Peripheral Interface (SPI) working on the Raspberry Pi
- Python over SPI channel
- Installing Debian (linux) on the Pi

My current hardware setup looks like this:

- Raspberry Pi
- Digilent Cerebot II (Atmega64 Microcontroller)
- 2x H-bridges (Digilent)
- 2x 6 Volt DC Motors (PWM controlled)
- USB to Bluetooth - this one SABRENT at Fry's is what I used
- Nintendo Wii Controller
- 8x 2AA batteries (6V for the Micro, 6V for the Pi)

This picture describes the basic setup of my remote control robot.

Getting Started (Python + Bluetooth):

So in this post I'm going to go over the different parts that build up my robot project. The first one is the Raspberry Pi and Bluetooth module. As in my previous blog post, Wii Controller + Python + Raspberry Pi = Amazing!, I used the Cwiid python module that enables me to connect to a USB Bluetooth dongle I put in my Raspberry Pi. You'll need to install Cwiid for python on the Pi.

sudo apt-get install python-cwiid

Once this is done you'll have to start your program with some python code that connects to the controller. So in your Python code make sure you include the library, and then cwiid.Wiimote() connects to the Nintendo Wii remote.

import cwiid
wm = cwiid.Wiimote()

After this you'll want to enable button data reporting.

wm.rpt_mode = cwiid.RPT_BTN

All the buttons on the wii remote have values that can be queried. The best part is they add, so if you're pressing multiple buttons such as left + gas, which are values of 2048 + 1 = 2049, you can check for something like this:

wm.state['buttons'] == 2049

(button values for CWiid)

Getting Started (Python + SPI):

The next phase of this project was getting the Serial Peripheral Interface (SPI) working with the hardware. As with many parts of my blog this one can take a lot longer, as I had to write code in C and make a module that can be called in Python.
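The additive button values described in the Bluetooth section above can be treated as bit flags. This stdlib-only sketch uses the post's example numbers (2048 for "left", 1 for "gas"); treat the exact name-to-value mapping as an assumption, since it depends on how the remote is held:

```python
# Decoding combined Wii-remote button values as bit flags.
# The numeric values follow the post's example (2048 + 1 = 2049);
# the names BTN_GAS/BTN_LEFT are hypothetical stand-ins.

BTN_GAS = 1       # button used as the throttle in the post's example
BTN_LEFT = 2048   # d-pad direction used for steering

def pressed(state, button):
    """True if `button` is held, regardless of other held buttons."""
    return (state & button) == button

# 2048 + 1 = 2049: left + gas held together
state = 2049
assert pressed(state, BTN_GAS)
assert pressed(state, BTN_LEFT)
assert pressed(state, BTN_GAS | BTN_LEFT)
assert not pressed(1, BTN_LEFT)
print("left+gas decoded")
```

Checking flags with a bitwise AND is more robust than comparing the whole state to a single sum, because it keeps working when a third button happens to be held at the same time.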
Please check out my previous post on Python Controlling SPI Bus on the Raspberry Pi and Beagleboard XM for getting Python working. You have to make sure you have SPI enabled in the hardware, and from there you can check it by going to the /dev directory. I have another blog, Getting SPI working on the Raspberry Pi, that helps with getting this set up. Once your hardware is showing up you can use the SPI.so module I created and call it in Python using something similar to this. For the record my SPI command is in this format, no real reason, just an arbitrary one I chose for hex values.

spidata = ['FFDDLLRR00']

Where DD is representing the direction of the robot, LL is the left wheel speed and RR is the right wheel speed.

spilist = []
# combine the string
combine = str('FF') + direction[0] + str(rws) + str('0') + str(lws) + str('0') + str('00')
# append the string of combined direction and wheelspeeds
spilist.append(combine)
length_data = len(spilist[0])/2 + 1
spiconnection.transfer(spilist[0], length_data)
time.sleep(.2)

There is a lot more detail here that I'm not going to go into, but all the logic to my code (directions, logic behind what buttons are pushed and such) is written in Python and located in the file named "robot_version0.x.py" where x is the version number. All my code revisions will be found in the following folder location. Feel free to check back over time for newer versions.

===> My Code versions: version 0.1 at time of post. <===

One of the many important things to know about SPI is how it works: it has a data line in and a data line out, a clock, and a slave select. All data from the master controller goes to all the slaves, but the master tells which slave should listen. SPI was originally created by Motorola many years ago and is still very popular in computer communication.

Getting Started (SPI + PWM + Atmega64):

So the last part of this project deals with setting up the Digilent Inc board that is a Cerebot II micro-controller.
I'm using AVR Studio 4.0 and first started by setting up the micro-controller as a SPI slave so it will take in the clock and Slave Select (SS) from the Raspberry Pi. Once the SPI signal was set up and tested, I configured the Pulse Width Modulation (PWM) signals that would talk to the H-bridges and help higher voltages go to the DC motors. There are registers that control the speed and the direction. So I have the SPI data read in, and then it configures what the speed for each wheel is and sets the direction. These values change the speed and feed back the data to the Python code saying what was read in, a small form of error checking.

Check the code folder for "WiiRobotProject" as this is my AVR code. Still a work in progress, so there are tons of comments and things I was trying. It will look cleaner in my final code release. There are a bunch of features I would love to still get out of this Cerebot II board, such as the motor feedback, and maybe add some servos to control a camera (phase 3)? Time will only tell if I get around to it.

FINAL NOTE (CODE):

Like posted earlier, if you would like to check out my code or get some ideas from what I'm doing please check the link below for my source code. Comments, suggestions, and questions are always welcome. Please check out previous tutorials I've written to help with the small details.

===> My Code versions: version 0.1 at time of post. <===

Until next time, peace!

Future NEXT STEPS:

So my next phases of this project will include some of the following as basic ideas:

- Battery operated
- wireless putty connection for control
- Wii Nunchuck support for camera/laser
- Servo control - camera/front wheels
- Hash checking of SPI data

Hi Brian

Thanks for this code, it works really well with my robot project. I have tried to get the script to run at boot, which I have managed, but it has a bit of a quirk.
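The "FFDDLLRR00" command layout described in the SPI section can be sketched as a pair of small helpers. The function names are hypothetical, and the byte packing is one plausible reading of the format; the post's own build code orders the fields slightly differently:

```python
# Sketch of the "FFDDLLRR00" command: a framing byte FF, a direction
# byte DD, left wheel speed LL, right wheel speed RR, trailing 00.
# Helper names are hypothetical, not from the robot code.

def build_command(direction, left, right):
    """Pack direction and wheel speeds (0-255) into the hex string."""
    return "FF%02X%02X%02X00" % (direction, left, right)

def parse_command(cmd):
    """Unpack a command string back into (direction, left, right)."""
    assert cmd.startswith("FF") and cmd.endswith("00") and len(cmd) == 10
    return int(cmd[2:4], 16), int(cmd[4:6], 16), int(cmd[6:8], 16)

cmd = build_command(0x01, 0x80, 0x7F)
assert cmd == "FF01807F00"
assert parse_command(cmd) == (0x01, 0x80, 0x7F)
print(cmd)
```

A fixed-width format like this is easy to validate on the microcontroller side: the slave can reject any frame that doesn't start with FF and end with 00 before touching the PWM registers.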
The wiimote thinks it has paired, but the script doesn't unless you wait until it gets to at least one failed attempt, and then it all works just fine. Do you have any idea on a fix for this?

Thanks
Paul

Hi Paul,

I never tried to get it to start on boot. What I ended up doing was adding a wifi usb module and setting up a peer to peer network connection using putty so I could access and see what's updating on the robot. Mine works on the first pairing attempt.

Cheers,
Brian

Thanks for sharing this. I have been curious about how to do this and ordered my rasp pi last weekend. Does the SPI connection have to be a certain type of wires?

Not really, just make sure they're well isolated so you don't get any EMI problems. Also, the shorter the better. If you continue to have problems, try checking to see if the same commands are returning over the MISO path. I just use an oscilloscope to look at the signal, but understand most probably don't have access to one.

Cheers,
Brian

At the end I noticed a future incorporation of a Nunchuck. Can the Nunchuck's C and Z buttons be used in the same or a similar way to the remote's buttons using cwiid?

I'm sure they could be used. I haven't tried them yet, but I'm hoping with more iterations of this robot project that it will eventually have the nunchuck controlling a camera on servos or something like that.

Cheers,
Brian
This comment has been removed by the author.

Hello, where can I buy DC motors that are PWM controlled? I can't seem to find them anywhere!

Very nice post and this is a great reminder that there is always room for improvement. Thanks for the great examples and inspiration.
http://www.brianhensley.net/2013/03/raspberry-pi-robot-wii-remote-phase-1.html
CC-MAIN-2017-04
refinedweb
1,507
71.75
Long time reader, first time poster. Hello, great site :). I am developing a program that adds the sum of even numbers and the sum of odd numbers. I am teaching myself C++ and have some problems with Express 2008: no window is coming up to input my numbers or the amount of numbers to enter. I believe the code I have written is pretty good. I did include the proper stuff in the header file, like <iostream>. I did add a pause at the bottom last night and got the console window to stay open, but all that did was ask me to press any key, which resulted in it closing. Thanks for any help

Surefall

Code:

#include "header2.h"

using namespace std;

int main()
{
    int limit;
    int counter;
    int number;
    int evensum;
    int oddsum;

    limit = 0;
    counter = 0;
    number = 0;
    evensum = 0;
    oddsum = 0;

    cout << "This program takes a set of numbers" << endl;
    cout << "and gives the sum of the even and odd numbers" << endl;
    cout << "enter the amount of numbers" << endl;
    cin >> limit;

    cout << "Please enter " << limit << " numbers" << endl;

    while (counter < limit)
    {
        cin >> number;
        if (number % 2 == 0)
            evensum = evensum + number;
        else
            oddsum = oddsum + number;
        counter++;
    }

    cout << "The sum of the even numbers is " << evensum << endl;
    cout << "The sum of the odd numbers is " << oddsum << endl;

    return 0;
}
http://cboard.cprogramming.com/cplusplus-programming/115104-no-input-window-cin-express-2008-a-printable-thread.html
CC-MAIN-2014-52
refinedweb
221
65.35
Before handling HTTP GET requests and HTTP POST requests, you should have some knowledge about HTML forms. HTML forms provide a simple and reliable user interface to collect data from the user and transmit the data to a servlet or other server side programs for processing. You might have seen HTML forms while using search engines, visiting online book stores, tracking stocks on the web, getting information about the availability of train tickets, and so on. In order to construct an HTML form, the following HTML tags are generally used.

• <FORM ACTION = "url" METHOD = "method"></FORM>

This tag defines the form body. It contains two attributes, action and method. The action attribute specifies the address of the server program (i.e. servlet or JSP page) that will process the form data when the form is submitted. The method attribute can be either GET or POST; it indicates the type of HTTP request sent to the server. If the method attribute is set to GET, the form data will be appended to the end of the specified URL after the question mark. If it is set to POST, then the form data will be sent after the HTTP request headers and a blank line.

• <INPUT TYPE = "type" NAME = "name">............</INPUT>

This tag creates an input field. The type attribute specifies the input type. Possible types are text for a single line text field, radio for a radio button, checkbox for a check box, textarea for a multiple line text field, password for a password field, submit for a submit button, reset for a reset button, etc. The name attribute gives a formal name for the attribute. This name attribute is used by the servlet program to retrieve its associated value. The names of radio buttons in a group must be identical.

• <SELECT NAME = "name" SIZE = "size">........</SELECT>

This tag defines a combobox or a list. The NAME attribute gives it a formal name. The SIZE attribute specifies the number of rows in the list.
• <OPTION SELECTED = "selected" VALUE = "value">

This tag defines a list of choices within the <SELECT> and </SELECT> tags. The value attribute gives the value to be transmitted with the name of the select menu if the current position is selected. The selected attribute specifies that the particular menu item shown is selected when the page is loaded.

The figure shows a user registration form.

In order to handle form data using an HTTP GET request, we first create a user registration form as shown earlier and a servlet that will handle the HTTP GET request. The servlet is invoked when the user enters the data in the form and clicks the submit button. The servlet obtains all the information entered by the user in the form and displays it.

The coding for the user-registration form (c:\Tomcat6\webApps\examples) is as follows:

<html>
<head>
<title> User Registration Form </title>
</head>
<body>
<h2 align="center"> User Registration Form </h2>
<form method="get" action="/examples/servlet/DisplayUserInfo">
<p align="center"> First Name : <input type="text" name="fname"/>
<p align="center"> Last Name : <input type="text" name="lname"/>
<p align="center"> Email Id : <input type="text" name="emailid"/>
<p align="center">
<input type="submit" value="submit" />
<input type="reset" value="Reset" />
</p>
</form>
</body>
</html>

In the given HTML code, notice that the method attribute of the FORM tag is set to GET, which indicates that an HTTP GET request is sent to the server. The action attribute of the FORM tag is set to /examples/servlet/DisplayUserInfo, which identifies the address of the servlet that will process the HTTP GET request.

The code for DisplayUserInfo.java is as follows:
import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class DisplayUserInfo extends HttpServlet
{
    public void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException
    {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        String fname = request.getParameter("fname");
        String lname = request.getParameter("lname");
        String emailid = request.getParameter("emailid");
        out.println("You have entered:<br>");
        out.println("First Name ......" + fname);
        out.println("<br> Last Name ......" + lname);
        out.println("<br> Emailid ......" + emailid);
        out.println("<h2> thanks for registration </h2> <hr>");
        out.close();
    }
}

In the above servlet code, our servlet class DisplayUserInfo extends the HttpServlet class, which is the most likely class that all the servlets will extend when you create servlets in a web application. We override the doGet() method as it has to process the HTTP GET request. This method takes an HttpServletRequest object that encapsulates the information contained in the request and an HttpServletResponse object that encapsulates the information contained in the response. Our implementation of the doGet() method performs two tasks:

• To extract the form parameters from the HTTP request.
• To generate the response.

We call the getParameter() method of the HTTP servlet request to get the value of a form parameter. If the parameter does not exist, this method returns null. It returns an empty string if an empty value is passed to the servlet. The rest of the statements in the servlet are similar to the ones we have already discussed in SimpleServlet.java.

NOTE: The parameter name specified in the getParameter() method must match the one that is specified in the HTML source code. Also, the parameter names are case sensitive.

Now compile and save the servlet program in the C:\Tomcat6\webapps\examples\WEB-INF\classes directory. After this, open your web browser and type the following URL to load user_reg.html in the web browser. As a result, the following webpage will be displayed.

Enter the first name, last name and email id and click on the Submit button. As a result, the DisplayUserInfo.java servlet will be invoked. When you click the Submit button, the values you entered in the form are passed as name-value pairs in a GET request. This form data will be appended to the end of the specified URL after the question mark, as shown:

Address: http://localhost:8080/examples/servlet/DisplayUserInfo?fname=Daljeet&lname=Singh&emailid=...

It is clear that name-value pairs are passed with the name and value separated by '='. If there is more than one name-value pair, each pair is separated by an ampersand (&). When the servlet DisplayUserInfo.java receives the request, it extracts the fname, lname and emailid parameters from the HTTP request using the getParameter() method of the HttpServletRequest interface. The statement,

String fname = request.getParameter("fname");

will extract the parameter named fname in the <FORM> tag of the user_reg.html file and store it in the request object. Similarly, we extract the lname and emailid parameters. After processing the request, the servlet displays the form data along with an appropriate message.

When the user enters the data in the form displayed in the web browser and clicks the Submit button, then, as in the case of the get() method, the servlet obtains all the information entered by the user in the form and displays it. But unlike the get() method, the values entered are not appended to the requested URL; rather, the form data will be sent after the HTTP request headers and a blank line. The output displayed will be as shown.
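For comparison, switching the same form to a POST submission only requires changing the method attribute; on the servlet side the request would then be handled by overriding doPost() instead of doGet(). A minimal sketch (abbreviated to one field):

```html
<!-- Same registration form, but submitted as an HTTP POST request.
     The form data is sent after the request headers and a blank line
     rather than being appended to the URL. -->
<form method="post" action="/examples/servlet/DisplayUserInfo">
  <p align="center"> First Name : <input type="text" name="fname"/> </p>
  <p align="center"> <input type="submit" value="submit"/> </p>
</form>
```

Because getParameter() works the same way for both request types, a servlet that supports both often has doPost() simply delegate to doGet().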
http://ecomputernotes.com/servlet/ex/understanding-html-forms
CC-MAIN-2019-39
refinedweb
1,183
65.22
Apport is the software that handles crash and bug reporting in Ubuntu. Whenever you want to report a bug in Ubuntu, apport collects data about that package, attaches it to the bug report and sends it to Launchpad. Hooks are scripts written in python that help apport to collect specific data about a package. In this post, I'll show how to write one, specifically an apport hook for the MPD package which includes the user configuration file. There are some links that you have to read to get a better understanding of what hooks are and how to write them; those links will be at the end of this post.

If you want to write a hook for a package that doesn't have one but needs it, then you can report a bug similar to this one:

First off, we need to get the source of the package:

bzr branch ubuntu:mpd ; bzr branch mpd fix-missing-hook ; cd fix-missing-hook

And we're ready to start. In this example, the MPD user configuration file is stored at ~/.mpdconf. Every time we write a hook, we *must* ensure that we keep the user data safe. That means that if the configuration file contains passwords, they *must* be kept away from the public. Fortunately for this example I found a snippet that replaces password strings in files (link at the end). I just had to modify it to fit my needs and add some little things to it. Here's the apport hook to include the MPD user configuration file:

'''
apport package hook for Music Player Daemon

Author: Ronny Cardona

This program is free software; you can redistribute it and/or modify it
under the terms of the GNU General Public License as published by the
Free Software Foundation; either version 3 of the License, or (at your
option) any later version. See for the full text of the license.
'''

from apport.hookutils import *
import os

# Reference for this function:
def _my_files(report, filename, keyname):
    if not os.path.exists(filename):
        return
    key = keyname
    report[key] = ""
    for line in read_file(filename).split('\n'):
        try:
            if 'password' in line.split('"')[0]:
                line = "%s \"@@APPORTREPLACED@@\" " % (line.split('"')[0])
            report[key] += line + '\n'
        except IndexError:
            continue

def add_info(report):
    _my_files(report, os.path.expanduser('~/.mpdconf'), 'UserMpdConf')

As you can see, the script was initially released under GPL3+, BUT! MPD uses GPL2+, which means that it's incompatible with the software license. The best solution is to always release the hooks under the same licence as the software the hook was written for, in this example GPL2+.

After the script is ready and copied to /usr/share/apport/package-hooks, it *must* be tested to check that it works as expected:

ubuntu-bug mpd

This will call apport, which will then call the hooks for the specific package. If you can see the user configuration file (or whatever file you wanted to include) in the attached files, then the script works as it should.

So, the script is ready and working; all that's left is to package it. Here comes the interesting part. Apport hooks *must* be included in the debian/ directory, named source_package.py. That means that the MPD hook must be named source_mpd.py and then put inside the debian/ directory. Now, we can edit the mpd.install file to point out where our script will be installed. That destination is /usr/share/apport/package-hooks/. So, we append a line like this one:

debian/source_mpd.py usr/share/apport/package-hooks

That is all we need to do when writing a simple apport hook. We haven't changed code outside the debian/ dir, so we're ready to write the changelog entry:

dch -i

And here we can specify our changes:

mpd (0.16.5-1ubuntu4) precise; urgency=low

  * debian/source_mpd.py
    - Added apport hook to include user configuration file. (LP: #947551)

 -- Ronny Cardona  Wed, 07 Mar 2012 18:03:12 -0600

Then, commit the change:

bzr commit

And build the package:

bzr builddeb -S
pbuilder-dist precise build ../*.dsc

We install the package and test it, and check the hook destination directory to verify that it was installed in its respective directory. We also verify (one more time) that the hook is called when apport is launched =)

Push the branch to LP:

bzr push lp:~rcart/ubuntu/precise/mpd/fix-947551
bzr lp-open

Then propose for merge. That's all.

Links:
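As a footnote, the password-scrubbing idea at the heart of the hook can be exercised in isolation. This stdlib-only sketch uses a hypothetical scrub() helper (not part of the shipped hook) applying the same replacement to a sample ~/.mpdconf-style snippet:

```python
# Stand-alone illustration of the hook's password scrubbing: the value
# on any line whose key mentions "password" is replaced before the
# file content would be attached to a bug report.  scrub() is a
# hypothetical helper, not part of the shipped hook.

def scrub(text):
    out = []
    for line in text.split('\n'):
        # The key is everything before the first double quote.
        if 'password' in line.split('"')[0]:
            line = '%s "@@APPORTREPLACED@@" ' % line.split('"')[0]
        out.append(line)
    return '\n'.join(out)

conf = 'music_directory "/home/me/music"\npassword "secret@read,add"\n'
cleaned = scrub(conf)
assert 'secret' not in cleaned
assert '@@APPORTREPLACED@@' in cleaned
assert 'music_directory "/home/me/music"' in cleaned
print(cleaned)
```

Keeping the scrubbing logic this simple also makes it easy to review, which matters since a bug here would leak user credentials into a public bug tracker.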
http://viajemotu.wordpress.com/author/rcart19/
CC-MAIN-2014-41
refinedweb
741
62.38
Add the timer and actions

So far, we've created the visual appearance of our app and created a custom timer to keep track of the traffic light countdown. To finish the app, we'll use the custom timer in QML and add the application logic that's needed to respond appropriately when the app's button is clicked.

Use the custom timer in QML

The whole purpose of creating our Timer class was so that we could access it in QML, so let's do that now. To be able to use Timer like any other Cascades control, we first need to register it as a type in QML. In the src folder of your project, there should be an applicationui.hpp file and an applicationui.cpp file. Open the applicationui.cpp file. We need to include our timer.hpp header file here, and then register the Timer class in the constructor for the ApplicationUI class by using the qmlRegisterType() template function:

#include "timer.hpp"

// ...

ApplicationUI::ApplicationUI(bb::cascades::Application *app) : QObject(app)
{
    // Register the Timer class in QML as part of version 1.0 of the
    // CustomTimer library
    qmlRegisterType<Timer>("CustomTimer", 1, 0, "Timer");

    // ...
}

The qmlRegisterType() function above registers our custom Timer class and makes it available in QML by the name "Timer". The Timer component is placed in version 1.0 of the CustomTimer library. For more information about registering custom types for use in QML, see Extending QML Functionalities using C++ on the Qt website.

If you're running your app on a device with a physical keyboard, make sure that you make the following changes in both main.qml files in your project (the one in the assets folder and the one in the assets/720x720 folder).

To access Timer in QML, we need to import the CustomTimer library. Open the main.qml file, and add the following line after the existing import bb.cascades 1.0 statement:

import CustomTimer 1.0

When you import this custom library, you'll notice that the QML preview no longer shows the preview of your UI.
Don't worry, you haven't done anything wrong. In a future version of the tools, you'll be able to import custom libraries and still use the QML preview.

Now we're ready to create the timers that our app needs. The first timer keeps track of the time remaining until the traffic light turns yellow. Because we want the countdown text that's displayed on the screen to change every second, we specify an interval of 1000 for this timer. We also need to know the current numerical value of the countdown, so we use a custom property for this purpose. Most Cascades controls include predefined properties that you can set, but you can also create any additional properties that you might need by using the property keyword. We add both the custom property and our first timer to the top of the root container of our app (right after the layout: DockLayout {} line):

property int currentCount: 0

Timer {
    id: lightTimer

    // Specify a timeout interval of 1 second
    interval: 1000

A signal handler called onTimeout is automatically available for us to use, and the JavaScript code that we place inside this signal handler is executed whenever the timer's timeout() signal is emitted. When the timer emits this signal, we want to decrement the currentCount property and update the countdown text that's displayed. Recall that we gave our TextArea an id of timerDisplay, so we can access the text property of the TextArea and update it accordingly.
When the timer reaches 0, we change the state of the traffic light (from green to yellow), stop the countdown timer, and start the timer that pauses at the yellow state of the traffic light:

    onTimeout: {
        // Decrement the counter and update the countdown text
        root.currentCount -= 1;
        timerDisplay.text = "" + root.currentCount;

        // When the counter reaches 0, change the traffic light
        // state, stop the countdown timer, and start the pause
        // timer
        if (root.currentCount == 0) {
            trafficLight.changeLight();
            lightTimer.stop();
            pauseTimer.start();
        }
    } // end of onTimeout signal handler
} // end of Timer

Next, let's create the second timer that we need. This timer is used to pause the traffic light in the yellow state, before it changes back to the red state. When this timer emits its timeout() signal, we change the state of the traffic light (from yellow to red). In our finished app, the "Change!" button becomes disabled when it's clicked and the light changes to the green state, and becomes enabled again when the light returns to the red state. In this timer's onTimeout signal handler, we enable the button again and stop the timer:

Timer {
    id: pauseTimer

    // Specify a timeout interval of 2 seconds
    interval: 2000

    // When the timeout interval elapses, change the traffic
    // light state, enable the "Change!" button, and stop
    // the pause timer
    onTimeout: {
        trafficLight.changeLight();
        changeButton.enabled = true;
        pauseTimer.stop();
    }
}
We implement the button's onClicked signal handler, which does the following:

- Changes the state of the traffic light (from red to green)
- Sets currentCount to 9, representing the start of the countdown
- Starts the countdown timer
- Updates the countdown TextArea with the initial countdown text
- Disables the button so that it can't be clicked while the countdown is in progress

Find the button that we created earlier in the tutorial (we gave it an id of changeButton), and add the onClicked signal handler to it:

onClicked: {
    // Change the traffic light state
    trafficLight.changeLight();

    // Set the initial countdown time
    root.currentCount = 9;

    // Start the countdown timer
    lightTimer.start();

    // Update the countdown text
    timerDisplay.text = "" + root.currentCount;

    // Disable the "Change!" button
    changeButton.enabled = false;
}

We're done! Build and run the app one last time to see the finished product:

Now that you're done

Here are a couple of ideas for how to extend the features of our traffic light app:

Last modified: 2015-03-31
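The state transitions driven by the onClicked and onTimeout handlers above are plain JavaScript, so they can be sketched and exercised outside QML. The function and field names here are hypothetical stand-ins for the QML ids:

```javascript
// Minimal model of the traffic-light app: a click starts a 9-second
// countdown; at 0 the light goes green -> yellow, then a pause timer
// returns it to red and re-enables the button.
function makeModel() {
  return { light: "red", count: 0, buttonEnabled: true };
}

function click(m) {           // mirrors the Button's onClicked
  m.light = "green";
  m.count = 9;
  m.buttonEnabled = false;
}

function countdownTick(m) {   // mirrors lightTimer's onTimeout
  m.count -= 1;
  if (m.count === 0) {
    m.light = "yellow";       // countdown done: green -> yellow
  }
}

function pauseTick(m) {       // mirrors pauseTimer's onTimeout
  m.light = "red";            // yellow -> red
  m.buttonEnabled = true;
}

const m = makeModel();
click(m);
for (let i = 0; i < 9; i++) countdownTick(m);
console.log(m.light);         // "yellow" after the countdown
pauseTick(m);
console.log(m.light, m.buttonEnabled);
```

Separating the state machine from the timers this way also makes the logic easy to unit test, which is hard to do through the UI alone.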
I received two of these units without any disks. (They were pulled and wiped to remove all traces of customer data due to compliance reasons.) I have placed two 1TB disks in them and realized that the software is not on-chip but rather on the drive (after reading through things on here). Correct me if I am wrong, but the procedure for getting these running is as follows:

Pulled the following from here: viewtopic.php?f=151&t=5435#p22973

[quote=jtoppic]Use Sectedit.exe under Windows 7. I used my laptop with two Serial ATA converters on USB (20 euro each). [...] If the disks are not fresh, please remove all partitions first (i.e. Partition Wizard)! Select disk. Import sectors. [...][/quote]

Restore the images? I think that is what this means. I found the images here: ... artitions/

[quote=jtoppic][...] Choose R or L_disk.bin [???] OK (should take a minute or so). Again for the second disk. Attach both hard disks to Ubuntu 10.04. I ran Ubuntu in a VMPlayer box (or a VMware Workstation) under Windows 7, so I could switch easily between Ubuntu and Win7. Open Disk Utility (see picture, 4x RAID-1 and 1x RAID-0 arrays). Run each RAID (one at a time). Select Check (if resyncing, wait and check after). During checking, the array may be repaired (see picture). Stop the array. Check all 4 RAID-1 arrays. Do not check the RAID-0 array. Ready! Build the NAS together and switch it on. In Explorer (your LaCie Big Disk address; if not sure, use MyLanViewer or look into your router). Log on: admin/admin. In System/Disks choose format. Wait a minute and press status (3 times). Log in again. Everything should work. Firmware version is v2.28. The above procedure is meant for 2x 1TB disks. If using 500GB disks (or even smaller), please contact me for advice. Greetz, John [...]
Partitions:

- sda1: extended, 980 MB
- sda2: xfs, md raid0, user data, the remaining part of the disk
- sda5: swap, md raid1, 130 MB
- sda6: Linux kernel U-Boot image, 8 MB
- sda7: ext3, md raid1, minimal root partition, 8 MB
- sda8: ext3, md raid1, original firmware partition, 130 MB
- sda9: ext3, md raid1, snapshots/upgrades partition, 696 MB
- sda10: filled with zeroes (may be used for an alternate kernel), 8 MB

[/quote]

I am trying to put all of the steps and information in one place so they are clear and concise. Am I missing anything?

Ethernet Big Disk Drives Wiped
String.ToLower Method (Silverlight)

Returns a copy of this string converted to lowercase, using the casing rules of the current culture.

Namespace: System
Assembly: mscorlib (in mscorlib.dll)

Return Value
Type: System.String
The lowercase equivalent of the current string.

Starting in Silverlight 4, the behavior of the String.ToLower() method has changed. In Silverlight 4, it converts the current string instance to lowercase using the casing rules of the current culture. This conforms to the behavior of the String.ToLower() method in the full .NET Framework. In Silverlight 2 and Silverlight 3, it uses the casing rules of the invariant culture.

The following code example converts several mixed case strings to lowercase.

    using System;

    public class Example
    {
        public static void Demo(System.Windows.Controls.TextBlock outputBlock)
        {
            string[] info = { "Name", "Title", "Age", "Location", "Gender" };

            outputBlock.Text += "The initial values in the array are:" + "\n";
            foreach (string s in info)
                outputBlock.Text += s + "\n";

            outputBlock.Text += String.Format("{0}The lowercase of these values is:", "\n") + "\n";
            foreach (string s in info)
                outputBlock.Text += s.ToLower() + "\n";

            outputBlock.Text += String.Format("{0}The uppercase of these values is:", "\n") + "\n";
            foreach (string s in info)
                outputBlock.Text += s.ToUpper() + "\n";
        }
    }

For a list of the operating systems and browsers that are supported by Silverlight, see Supported Operating Systems and Browsers.
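The culture-sensitive versus invariant distinction described above can be demonstrated outside Silverlight as well. Since the C# snippet is not runnable here, this sketch uses Java's String.toLowerCase(Locale) as an analogue: with the Turkish locale, uppercase 'I' lowercases to a dotless 'ı' (U+0131), which is exactly the kind of surprise that culture-sensitive casing can produce.

```java
import java.util.Locale;

public class LowerCaseDemo {
    public static void main(String[] args) {
        String s = "TITLE";

        // Invariant-style casing: Locale.ROOT ignores the default
        // culture, comparable to Silverlight 2/3's ToLower() behavior
        // described above.
        System.out.println(s.toLowerCase(Locale.ROOT));   // title

        // Culture-sensitive casing: in Turkish, 'I' lowercases to
        // U+0131 (dotless i), so the result is NOT "title".
        String turkish = s.toLowerCase(new Locale("tr", "TR"));
        System.out.println(turkish.equals("title"));      // false
    }
}
```

This is why APIs that depend on the current culture can behave differently from machine to machine, and why invariant casing exists at all.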
Spring Boot 2.1: Outstanding OIDC, OAuth 2.0, and Reactive API Support

Spring Boot 2.1 was recently released, eight months after the huge launch of Spring Boot 2.0. What I'm most excited about in Spring Boot 2.1 is its improved performance and OpenID Connect (OIDC) support from Spring Security 5.1. The combination of Spring Boot and Spring Security has provided excellent OAuth 2.0 support for years, and making OIDC a first-class citizen simplifies its configuration quite a bit. For those that aren't aware, OIDC is just a thin layer on top of OAuth 2.0 that provides the user's identity with an ID token. Spring Security automatically translates this token into a Java Principal so you can easily retrieve a user's information using dependency injection. In addition to an ID token, OIDC adds:

- A UserInfo endpoint for getting more user information
- A standard set of scopes
- A standardized implementation of the ID token (with JWT)

Before I dive into showing you how to add authentication to a Spring Boot app with OIDC, let's take a look at what's new and noteworthy in this release.

What's New in Spring Boot 2.1

I think Philip Riecks does a great job of summarizing the highlights in What's new in Spring Boot 2.1:

- Java 11 Support: just add <java.version>11</java.version> to your pom.xml!
- Logging Groups: group logging categories using logging.group.{groupName}={first},{second} (logging.group.web and logging.group.sql are already defined).
- Lazy JPA startup: specify spring.data.jpa.repositories.bootstrap-mode=lazy to turn it on.
- JUnit 5 improvements: no more @ExtendWith(SpringExtension.class) necessary!

There are some other useful security-related features in the official Spring Boot 2.1 release notes:

- @WebMvcTest and @WebFluxTest security configuration is now automatically included with web slice tests. @WebMvcTest looks for a WebSecurityConfigurer bean while @WebFluxTest looks for a ServerHttpSecurity bean.
- OAuth 2.0 client configuration has a single spring.security.oauth2.client.registration tree. The authorization code and client credentials keys have been removed.
- Thymeleaf Spring Security Extras has changed its auto-configuration coordinates to thymeleaf-extras-springsecurity5. Update your build files!
- OAuth 2.0 login has been added to WebFlux, along with resource server support. I like to call it OIDC login since OAuth is not for authentication.

Another new feature that looks interesting: Elasticsearch REST client support. I integrated Spring Data Jest into JHipster, so this development intrigues me. Especially its description: an alternative option to Jest, auto-configurations for RestClient and RestHighLevelClient are provided with configurable options from the spring.elasticsearch.rest.* namespace.

Create a Secure Spring Boot Application

You can create a Spring Boot application quickly with the Spring CLI. It allows you to write Groovy scripts that get rid of the boilerplate Java and build file configuration. Refer to the project's documentation for installation instructions. To install Spring CLI, I recommend using SDKMAN!:

    sdk install springboot

Or with Homebrew:

    brew tap pivotal/tap
    brew install springboot

Make sure you're using the 2.1.0 version by running spring --version.

    $ spring --version
    Spring CLI v2.1.0.RELEASE

Create a hello.groovy file that has the following code:

    @Grab('spring-boot-starter-security')
    @RestController
    class Application {

        @RequestMapping('/')
        String home() {
            'Hello World'
        }
    }

The @Grab annotation invokes Grape to download dependencies. Because Spring Security is in the classpath, its default security rules will be used. That is, protect everything, allow a user with the username user, and generate a random password on startup for that user. Run this app with the following command:

    spring run hello.groovy

Open your browser to and you'll be greeted with a login form. Enter user for the username and copy/paste the generated password from your terminal.
If you copied and pasted the password successfully, you'll see Hello World in your browser.

Add Identity and Authentication with OIDC

Using the same username and password for all your users is silly. Since friends don't let friends write authentication, I'll show you how to use Okta to add auth to your Spring Boot app with just a few lines of code.

Create an OIDC App in Okta

Log in to your Okta Developer account and navigate to Applications > Add Application. Click Web and click Next. Give the app a name you'll remember, and specify as a Login redirect URI. Click Done. The result should look something like the screenshot below.

Copy and paste the URI of your default authorization server, client ID, and the client secret into application.yml (you'll need to create this file).

    spring:
      security:
        oauth2:
          client:
            provider:
              okta:
                issuer-uri: https://{yourOktaDomain}/oauth2/default
            registration:
              okta:
                client-id: {clientId}
                client-secret: {clientSecret}

Create a helloOIDC.groovy file that uses Spring Security and its OIDC support.

    @Grab('spring-boot-starter-oauth2-client')
    @RestController
    class Application {

        @GetMapping('/')
        String home(java.security.Principal user) {
            'Hello ' + user.name
        }
    }

Run this file using spring run helloOIDC.groovy and try to access. You'll be redirected to Okta to log in, or just shown Hello {sub claim} if you're already logged in.

In the near future, you'll be able to use Okta's Spring Boot starter and make it even simpler:

    okta:
      oauth2:
        issuer: https://{yourOktaDomain}/oauth2/default
        client-id: {clientId}
        client-secret: {clientSecret}

    @Grab('com.okta.spring:okta-spring-boot-starter:1.0.0-SNAPSHOT')
    @RestController
    class Application {

        @GetMapping('/')
        String home(java.security.Principal user) {
            'Hello ' + user.name
        }
    }

Limiting Access Based on Group

Spring Security ships with a number of nifty annotations that allow you to control access to methods. You can use @Secured, @RolesAllowed, and @PreAuthorize to name a few.
To enable method-level security, you just need to add the following annotation to a configuration class.

    @Configuration
    @EnableGlobalMethodSecurity(
        prePostEnabled = true,  // (1)
        securedEnabled = true,  // (2)
        jsr250Enabled = true)   // (3)
    public class SecurityConfig {
    }

To use these annotations in your app, Spring Security will recognize your groups as authorities and allow you to lock down methods!

    @Grab('com.okta.spring:okta-spring-boot-starter:1.0.0-SNAPSHOT')
    import org.springframework.security.access.prepost.PreAuthorize

    @EnableGlobalMethodSecurity(prePostEnabled = true)
    @RestController
    class Application {

        @GetMapping('/')
        String home(java.security.Principal user) {
            'Hello ' + user.name
        }

        @GetMapping('/admin')
        @PreAuthorize("hasAuthority('Administrators')")
        String admin(java.security.Principal user) {
            'Hello, ' + user.name + '. Would you like to play a game?'
        }
    }

Angular, React, and WebFlux - Oh My!

I updated a few of my favorite tutorials on this blog to use Spring Boot 2.1 recently.

- Build a Basic CRUD App with Angular 7.0 and Spring Boot 2.1: uses implicit flow, Okta's Angular SDK, and a Spring Security resource server.
- Use React and Spring Boot to Build a Simple CRUD App: uses authorization code flow and packages everything in a single JAR.
- Full Stack Reactive with Spring WebFlux, WebSockets, and React: uses implicit flow, along with Spring Security OIDC login and resource server.

I enjoyed writing the full stack reactive tutorial so much, I turned it into a screencast! A keen eye will notice I'm using Java 11 and Node 11 in this video. 😃

JHipster and Spring Boot 2.1

Earlier I mentioned JHipster. The JHipster team is actively working on upgrading its baseline to Spring Boot 2.1. You can watch progress by following issue #8683. If you've never heard of JHipster before, you should download the free JHipster Mini-Book from InfoQ! It's a book I wrote to help you get started with hip technologies today: Angular, Bootstrap and Spring Boot.
The 5.0 version was just released. Learn More About Spring Boot and Spring Security I’ve only touched on the tip of the iceberg regarding the capabilities of Spring Boot and Spring Security. You can use them to build and secure microservices too! Below are some related posts that show the power of using OAuth 2.0 and OIDC to secure your Spring Boot APIs.
New Gtk2Hs pre-release in time for xmas
Sunday, December 24th, 2006

So we didn't quite make our deadline of a 0.9.11 release before xmas but I think we're pretty close. We are at least into the release phase. You can now grab a tarball of Gtk2Hs version 0.9.10.3:

Try to build it, test it and report problems back to us.

As for the issue of the new list/tree widget system, we've decided to include the new api in parallel with the old. This means we're not yet breaking anyone's old code while providing access to the new api. We're not promising yet that the new api is perfect but that's partly why we want to get it included in a release - so that people can try it and tell us what's missing. To get at it just:

    import qualified Graphics.UI.Gtk.ModelView as New

You really have to use import qualified because most of the names clash. You can also check out the demos in the demo/treeList and demo/profileviewer directories.

After xmas we'll follow up with some proper release candidate tarballs, lots more testing, and we'll give you the details on what minimal api changes there are in this release.

In other news, after writing us a great glade tutorial, Hans van Thiel has been hard at work on an introductory tutorial. With any luck that'll be available around the same time as the 0.9.11 release.
libtommath/bn_mp_count_bits.c, as viewed in the dropbear Mercurial repository at changeset 475:52a644e7b8e1 (pubkey-options branch: "Patch from Frédéric Moulins adding options to authorized_keys. Needs review."):

    #include <tommath.h>
    #ifdef BN_MP_COUNT_BITS

    /* returns the number of bits in an int */
    int mp_count_bits (mp_int * a)
    {
      int      r;
      mp_digit q;

      /* shortcut */
      if (a->used == 0) {
        return 0;
      }

      /* get number of digits and add that */
      r = (a->used - 1) * DIGIT_BIT;

      /* take the last digit and count the bits in it */
      q = a->dp[a->used - 1];
      while (q > ((mp_digit) 0)) {
        ++r;
        q >>= ((mp_digit) 1);
      }
      return r;
    }
    #endif

    /* $Source: /cvs/libtom/libtommath/bn_mp_count_bits.c,v $ */
    /* $Revision: 1.3 $ */
    /* $Date: 2006/03/31 14:18:44 $ */
An MVar supports these fundamental, atomic operations:

- put: fills the var if empty, or blocks (asynchronously) until the var is empty again
- tryPut: fills the var if empty; returns true if successful
- take: empties the var if full, returning the contained value, or blocks (asynchronously) otherwise until there is a value to pull
- tryTake: empties the var if full, returns None if empty
- read: reads the current value without touching it, assuming there is one, or otherwise it waits until a value is made available via put
- tryRead: returns Some(a) if full, without modifying the var, or else returns None
- isEmpty: returns true if currently empty

The Cats-Effect MVar is generic, being built to abstract over the effect type via the Cats-Effect type classes, meaning you can use it with Monix's Task just as well as with cats.effect.IO or any data types implementing Async or Concurrent. Note that MVar is already described in cats.effect.concurrent.MVar and Monix's implementation does in fact implement that interface.
MVar will remain in Monix as well because:

- it shares implementation with monix.execution.AsyncVar, the Future-enabled alternative
- we can use our Atomic implementations
- at this point Monix's MVar has some fixes that have to wait for the next version of Cats-Effect to be merged upstream

Use-case: Synchronized Mutable Variables

    import monix.execution.CancelableFuture
    import monix.catnap.MVar
    import monix.eval.Task

    def sum(state: MVar[Task, Int], list: List[Int]): Task[Int] =
      list match {
        case Nil => state.take
        case x :: xs =>
          state.take.flatMap { current =>
            state.put(current + x).flatMap(_ => sum(state, xs))
          }
      }

    val task = for {
      state <- MVar[Task].of(0)
      r     <- sum(state, (0 until 100).toList)
    } yield r

    // Evaluate
    task.runToFuture.foreach(println)
    //=> 4950

Use-case: Asynchronous Lock

    final class MLock(mvar: MVar[Task, Unit]) {
      def acquire: Task[Unit] = mvar.take
      def release: Task[Unit] = mvar.put(())

      def greenLight[A](fa: Task[A]): Task[A] =
        for {
          _ <- acquire
          a <- fa.doOnCancel(release)
          _ <- release
        } yield a
    }

    object MLock {
      /** Builder. */
      def apply(): Task[MLock] =
        MVar[Task].of(()).map(v => new MLock(v))
    }

And now we can apply synchronization to the previous example:

    val task = for {
      lock  <- MLock()
      state <- MVar[Task].of(0)
      task  = sum(state, (0 until 100).toList)
      r     <- lock.greenLight(task)
    } yield r

    // Evaluate
    task.runToFuture.foreach(println)
    //=> 4950

    val count = 100000

    val sumTask = for {
      channel      <- MVar[Task].empty[Option[Int]]()
      producerTask = producer(channel, (0 until count).toList).executeAsync
      consumerTask = consumer(channel, 0L).executeAsync
      // Ensure they run in parallel, not really necessary, just for kicks
      sum <- Task.parMap2(producerTask, consumerTask)((_, sum) => sum)
    } yield sum

    // Evaluate
    sumTask.runToFuture.foreach(println)
    //=> 4999950000

Running this will work as expected. Our producer pushes values into our MVar and our consumer will consume all of those values.

If you're looking for the older 2.x click here!
Making currency name and currency symbol helpers for Open Event Frontend

This blog article will illustrate how to make two helpers which will help us in getting the currency name and symbol from a dictionary, conveniently. The helpers will be used as a part of the currency form on Open Event Frontend. It also exemplifies the power of Ember JS, and why it is being used in this project, via a counter example in which we try to do things the non-Ember way and get the required data without using those helpers.

So what do we have to begin with?

The sample data which will be fetched from the API:

    [
      {
        currency   : 'PLN',
        serviceFee : 10.5,
        maximumFee : 100.0
      },
      {
        currency   : 'NZD',
        serviceFee : 20.0,
        maximumFee : 500.0
      }
      // The list continues
    ]

The dictionary data format:

    [
      {
        paypal : true,
        code   : 'PLN',
        symbol : 'zł',
        name   : 'Polish zloty',
        stripe : true
      },
      {
        paypal : true,
        code   : 'NZD',
        symbol : 'NZ$',
        name   : 'New Zealand dollar',
        stripe : true
      },
      {
        paypal : false,
        code   : 'INR',
        symbol : '₹',
        name   : 'Indian rupee',
        stripe : true
      }
      // The list continues
    ]

And our primary goal is to fetch the corresponding name and symbol from the dictionary for a given currency code, easily and efficiently.

One might be tempted to get things done the easy way, via

    {{get (find-by 'code' modal.name currencies) 'name'}}

and perhaps,

    {{get (find-by 'code' modal.name currencies) 'symbol'}}

where currencies is the name of the imported array from the dictionary. But this might be hard to follow for a first time reader, and also in case we ever need this functionality to work in a different context, this is clearly not the most feasible choice.
Hence helpers come into the picture; they can be called anywhere and will have a much simpler syntax. Our goal is to make helpers such that the required functionality is achieved with a simpler syntax than the one shown previously. So we will simply generate the helpers' boilerplate code via the Ember CLI:

    $ ember generate helper currency-name
    $ ember generate helper currency-symbol

Next we will import the currency format from the payment dictionary to match it against the name or symbol provided by the user. Now all that remains is finding the correct matching from the dictionary. We import the find function from lodash for that. So, this is how they would look:

    import Ember from 'ember';
    import { find } from 'lodash';
    import { paymentCurrencies } from 'open-event-frontend/utils/dictionary/payment';

    const { Helper } = Ember;

    export function currencyName(params) {
      return find(paymentCurrencies, ['code', params[0]]).name;
    }

    export default Helper.helper(currencyName);

And for the currency symbol helper:

    import Ember from 'ember';
    import { find } from 'lodash';
    import { paymentCurrencies } from 'open-event-frontend/utils/dictionary/payment';

    const { Helper } = Ember;

    export function currencySymbol(params) {
      return find(paymentCurrencies, ['code', params[0]]).symbol;
    }

    export default Helper.helper(currencySymbol);

Now all we need to do to use them is {{currency-name 'USD'}} and {{currency-symbol 'USD'}} to get the corresponding currency name and symbol. We use find from lodash here instead of the default even though it is similar in performance because it provides much better readability.

Resources

*Featured Image licensed under Creative Commons CC0 in public domain
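Outside of Ember, the lookup these helpers perform is just a linear find over the dictionary. Here is a plain JavaScript sketch of the same logic (the data below is a cut-down copy of the post's dictionary; the function shapes are mine, not Ember's Helper API):

```javascript
// A cut-down version of the payment dictionary from the post.
const paymentCurrencies = [
  { paypal: true,  code: 'PLN', symbol: 'zł',  name: 'Polish zloty',       stripe: true },
  { paypal: true,  code: 'NZD', symbol: 'NZ$', name: 'New Zealand dollar', stripe: true },
  { paypal: false, code: 'INR', symbol: '₹',   name: 'Indian rupee',       stripe: true }
];

// Equivalent of the currency-name helper: find the entry whose
// code matches and return its name.
function currencyName(code) {
  const entry = paymentCurrencies.find(c => c.code === code);
  return entry ? entry.name : undefined;
}

// Equivalent of the currency-symbol helper.
function currencySymbol(code) {
  const entry = paymentCurrencies.find(c => c.code === code);
  return entry ? entry.symbol : undefined;
}

console.log(currencyName('NZD'));   // New Zealand dollar
console.log(currencySymbol('PLN')); // zł
```

The helpers in the post delegate this find to lodash and let Ember handle re-rendering; the lookup itself is exactly this one line.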
write(2)

SYNOPSIS

    #include <unistd.h>

    ssize_t write(int filedes, const void *buffer, size_t nbytes);
    ssize_t pwrite(int filedes, const void *buffer, size_t nbytes, off_t offset);

The following version of the writev() function does not conform to current standards and is supported only for backward compatibility:

    #include <sys/types.h>
    #include <sys/uio.h>

    ssize_t writev(int filedes, struct iovec *iov, int iov_count);

Interfaces documented on this reference page conform to industry standards as follows:

    write():  XPG4, XPG4-UNIX
    pwrite(): POSIX.1c
    writev(): XPG4-UNIX

DESCRIPTION

[XPG4-UNIX] If filedes refers to a socket, write() is equivalent to send() with no flags set.

If a synchronized I/O flag of the file status is set and the filedes parameter refers to a regular file, a successful write() or pwrite() function does not return until the data is delivered to the underlying hardware (as described in the open() function).

With devices incapable of seeking, writing always takes place starting at the current position. The value of a file pointer associated with such a device is undefined.

If the O_APPEND flag of the file status is set, the file offset is set to the end of the file prior to each write and no intervening file modification operation occurs between changing the file offset and the write operation.

If a write() or pwrite() requests that more bytes be written than there is room for (for example, the ulimit() or the physical end of a medium), only as many bytes as there is room for are written. For example, suppose there is space for 20 bytes more in a file before reaching a limit. A write of 512 bytes returns 20. The next write of a nonzero number of bytes will give a failure return (except as noted below).

[Digital] The write blocks until the blocking locks are removed, or the function is terminated by a signal. If O_NDELAY or O_NONBLOCK is set, then the function returns -1 and sets errno to [EAGAIN].

Upon successful completion, the write() or pwrite() function marks the st_ctime and st_mtime fields of the file for update, and clears its set-user ID and set-group ID attributes if the file is a regular file.

The fcntl() function provides more information about record locks.

[XPG4-UNIX] If the STREAM write queue is full due to internal flow control conditions, the function blocks until data can be accepted.

[XPG4-UNIX] In addition, write(), pwrite() and writev() will fail if the STREAM head had processed an asynchronous error before the call. In this case, the value of errno does not reflect the result of write(), pwrite(), or writev() but reflects the prior error.

[Digital] The write(), pwrite(), and writev() functions, which suspend the calling process until the request is completed, are redefined so that only the calling thread is suspended.

[Digital] When debugging a module that includes the writev() function and for which _XOPEN_SOURCE_EXTENDED has been defined, use _Ewritev to refer to the writev() function.

RETURN VALUES

Upon successful completion, the write() or pwrite() function returns the number of bytes actually written to the file associated with the filedes parameter. This number is never greater than nbytes. Refer to mtio(7) for information on enabling and disabling "early warning" EOT. Note: A partial write is a request which spans the end of a partition.

ERRORS

The write(), pwrite(), and writev() functions set errno to the specified values for the following conditions:

- [EAGAIN] The O_NONBLOCK flag is set on this file and the process would be delayed in the write operation.

In addition, the pwrite() function fails and the file pointer remains unchanged if the following is true:

- The file specified by fildes is associated with a pipe or FIFO. [XPG4-UNIX]

- [Digital] For filesystems mounted with the nfsv2 option, the process attempted to write beyond the 2 gigabyte boundary.

- [Digital] The named file is a directory and write access is requested.

- [Digital] Indicates insufficient resources, such as buffers, to complete the call. Typically, a call used with sockets has failed due to a shortage of message or send/receive buffer space.

- [Digital] The named file resides on a read-only file system and write access is required.

- [Digital] A write to a pipe (FIFO) of {PIPE_BUF} bytes or less is requested, O_NONBLOCK is set, and less than nbytes bytes of free space is available.

RELATED INFORMATION

Functions: open(2), fcntl(2), lseek(2), mtio(7), getmsg(2), lockf(3), pipe(2), poll(2), select(2), ulimit(3)

Standards: standards(5)
    scala> val a1 = Array(1, 2, 3)
    a1: Array[Int] = Array(1, 2, 3)

    scala> val a2 = a1 map (_ * 3)
    a2: Array[Int] = Array(3, 6, 9)

    scala> val a3 = a2 filter (_ % 2 != 0)
    a3: Array[Int] = Array(3, 9)

    scala> a3.reverse
    res1: Array[Int] = Array(9, 3)

Given that Scala arrays are represented just like Java arrays, how can these additional features be supported in Scala? In fact, the answer to this question differs between Scala 2.8 and earlier versions. Previously, the Scala compiler somewhat "magically" wrapped and unwrapped arrays to and from Seq objects when required in a process called boxing and unboxing. The details of this were quite complicated, in particular when one created a new array of generic type Array[T]. There were some puzzling corner cases and the performance of array operations was not all that predictable.

    scala> val seq: Seq[Int] = a1
    seq: Seq[Int] = WrappedArray(1, 2, 3)

    scala> val a4: Array[Int] = seq.toArray
    a4: Array[Int] = Array(1, 2, 3)

    scala> a1 eq a4
    res2: Boolean = true

The interaction above demonstrates that arrays are compatible with sequences, because there's an implicit conversion from arrays to WrappedArrays. To go the other way, from a WrappedArray to an Array, you can use the toArray method defined in Traversable. The last REPL line above shows that wrapping and then unwrapping with toArray gives the same array you started with.

    scala> val seq: Seq[Int] = a1
    seq: Seq[Int] = WrappedArray(1, 2, 3)

    scala> seq.reverse
    res2: Seq[Int] = WrappedArray(3, 2, 1)

    scala> val ops: collection.mutable.ArrayOps[Int] = a1
    ops: scala.collection.mutable.ArrayOps[Int] = [I(1, 2, 3)

    scala> ops.reverse
    res3: Array[Int] = Array(3, 2, 1)

You see that calling reverse on seq, which is a WrappedArray, will give again a WrappedArray. That's logical, because wrapped arrays are Seqs, and calling reverse on any Seq will give again a Seq. On the other hand, calling reverse on the ops value of class ArrayOps will give an Array, not a Seq.

The ArrayOps example above was quite artificial, intended only to show the difference to WrappedArray. Normally, you'd never define a value of class ArrayOps. You'd just call a Seq method on an array:

    scala> a1.reverse
    res4: Array[Int] = Array(3, 2, 1)

The ArrayOps object gets inserted automatically by the implicit conversion. So the line above is equivalent to

    scala> intArrayOps(a1).reverse
    res5: Array[Int] = Array(3, 2, 1)

where intArrayOps is the implicit conversion that was inserted previously. This raises the question how the compiler picked intArrayOps over the other implicit conversion to WrappedArray in the line above. After all, both conversions map an array to a type that supports a reverse method, which is what the input specified. The answer to that question is that the two implicit conversions are prioritized. The ArrayOps conversion has a higher priority than the WrappedArray conversion. The first is defined in the Predef object whereas the second is defined in a class scala.LowPriorityImplicits, which is inherited from Predef. Implicits in subclasses and subobjects take precedence over implicits in base classes. So if both conversions are applicable, the one in Predef is chosen. A very similar scheme works for strings.

    // this is wrong!
    def evenElems[T](xs: Vector[T]): Array[T] = {
      val arr = new Array[T]((xs.length + 1) / 2)
      for (i <- 0 until xs.length by 2)
        arr(i / 2) = xs(i)
      arr
    }

The evenElems method returns a new array that consists of all elements of the argument vector xs which are at even positions in the vector. The first line of the body of evenElems creates the result array, which has the same element type as the argument. So depending on the actual type parameter for T, this could be an Array[Int], or an Array[Boolean], or an array of some of the other primitive types in Java, or an array of some reference type. But these types have all different runtime representations, so how is the Scala runtime going to pick the correct one?

In fact, it can't do that based on the information it is given, because the actual type that corresponds to the type parameter T is erased at runtime. That's why you will get the following error message if you compile the code above:

    error: cannot find class manifest for element type T
      val arr = new Array[T]((arr.length + 1) / 2)
                ^

What's required here is that you help the compiler out by providing some runtime hint what the actual type parameter of evenElems is. This runtime hint takes the form of a class manifest of type scala.reflect.ClassManifest. A class manifest is a type descriptor object which describes what the top-level class of a type is. Alternatively to class manifests there are also full manifests of type scala.reflect.Manifest, which describe all aspects of a type. But for array creation, only class manifests are needed. The Scala compiler will construct class manifests automatically if you instruct it to do so. "Instructing" means that you demand a class manifest as an implicit parameter, like this:

    def evenElems[T](xs: Vector[T])(implicit m: ClassManifest[T]): Array[T] = ...

Using an alternative and shorter syntax, you can also demand that the type comes with a class manifest by using a context bound. This means following the type with a colon and the class name ClassManifest, like this:

    // this works
    def evenElems[T: ClassManifest](xs: Vector[T]): Array[T] = {
      val arr = new Array[T]((xs.length + 1) / 2)
      for (i <- 0 until xs.length by 2)
        arr(i / 2) = xs(i)
      arr
    }

The two revised versions of evenElems mean exactly the same. What happens in either case is that when the Array[T] is constructed, the compiler will look for a class manifest for the type parameter T, that is, it will look for an implicit value of type ClassManifest[T]. If such a value is found, the manifest is used to construct the right kind of array. Otherwise, you'll see an error message like the one above. Here is some REPL interaction that uses the evenElems method.

    scala> evenElems(Vector(1, 2, 3, 4, 5))
    res6: Array[Int] = Array(1, 3, 5)

    scala> evenElems(Vector("this", "is", "a", "test", "run"))
    res7: Array[java.lang.String] = Array(this, a, run)

In both cases, the Scala compiler automatically constructed a class manifest for the element type (first, Int, then String) and passed it to the implicit parameter of the evenElems method. The compiler can do that for all concrete types, but not if the argument is itself another type parameter without its class manifest. For instance, the following fails:

    scala> def wrap[U](xs: Array[U]) = evenElems(xs)
    <console>:6: error: could not find implicit value for
      evidence parameter of type ClassManifest[U]
        def wrap[U](xs: Array[U]) = evenElems(xs)
                                             ^

What happened here is that evenElems demands a class manifest for the type parameter U, but none was found. The solution in this case is, of course, to demand another implicit class manifest for U. So the following works:

    scala> def wrap[U: ClassManifest](xs: Array[U]) = evenElems(xs)
    wrap: [U](xs: Array[U])(implicit evidence$1: ClassManifest[U])Array[U]

This example also shows that the context bound in the definition of U is just a shorthand for an implicit parameter named here evidence$1 of type ClassManifest[U].
System.Numerics Namespace

This namespace includes the following types:

- The BigInteger structure, which is a nonprimitive integral type that supports arbitrarily large integers. An integral primitive such as Byte or Int32 includes a MinValue and a MaxValue property, which define the lower bound and upper bound supported by that data type. In contrast, the BigInteger structure has no lower or upper bound, and can contain the value of any integer.
- The Complex structure, which represents a complex number. A complex number is a number in the form a + bi, where a is the real part and b is the imaginary part.
- The SIMD-enabled vector types, such as Vector4, Matrix3x2, Plane, and Quaternion.
Odoo Help

Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.

How to override method inside a class in Odoo 8

Hi,

You can use standard inheritance for this:

    _inherit = 'model'   # (without any _name)

You can use standard inheritance to add new behaviors or extend existing models. For example, say you want to add a new field to `custom.object` and add a new method:

```python
class CustomObject(orm.Model):
    _inherit = "custom.object"

    _columns = {
        'my_field': fields.char('My new field'),
    }

    def a_new_func(self, cr, uid, ids, x, y, context=None):
        # my stuff
        return something
```

You can also override existing methods. If you want to inherit an existing method and add a few lines of code to it, call `super`: it runs the original method and returns its value, and you can add your code after that:

```python
def existing(self, cr, uid, ids, x, y, z, context=None):
    parent_res = super(CustomObject, self).existing(cr, uid, ids, x, y, z, context=context)
    # my stuff <--- here goes your code
    return parent_res_plus_my_stuff
```

In some cases you want to modify the entire functionality of a method and have to redefine it. In that case you can simply define the function without calling `super`:

```python
def existing(self, cr, uid, ids, x, y, z, context=None):
    # my stuff <--- here goes your code
    return my_stuff
```

Hope this helps.
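The super-based pattern in the answer above is plain Python method resolution at work. Here is a minimal, Odoo-free sketch of the two override styles; the class and method names are made up for illustration:

```python
class Base:
    """Stand-in for an existing model's original method."""
    def existing(self, x, y):
        return x + y

class Extended(Base):
    def existing(self, x, y):
        # Inherit the behaviour and add to it: call the original via super(),
        # then post-process its result (the answer's "parent_res" pattern).
        parent_res = super().existing(x, y)
        return parent_res * 2

class Replaced(Base):
    def existing(self, x, y):
        # Replace the behaviour entirely: no super() call at all.
        return x - y

print(Extended().existing(2, 3))  # (2 + 3) * 2 = 10
print(Replaced().existing(2, 3))  # 2 - 3 = -1
```

The same choice applies in Odoo: call `super` when you want the original behavior plus your additions, skip it when you are deliberately replacing the method outright.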
Recently I was assigned the task of setting up a portable offline OSM tile server. So naturally I went there: And tried to follow all the instructions on a virtual machine (a VDI image with Ubuntu 19.10). And there I had soooooo much PAIN trying to install this freaking Mapnik thing! Literally I'm going to f*cking hate this name for god knows how long... but that's not the point. The point is that after I managed to install it using "pip", everything went smoothly until I tried to execute

    renderd -f -c /usr/local/etc/renderd.conf

There I got in trouble once again as the following errors popped up:

    renderd[15908]: An error occurred while loading the map layer 'ajt': Could not create datasource for type: 'postgis' (no datasource plugin directories have been successfully registered) encountered during parsing of layer 'landcover-low-zoom' in Layer of '/home/osboxes/src/openstreetmap-carto/mapnik.xml'

After some searching I found this topic: which hinted that I should `Try calling mapnik.register_default_input_plugins(); at the top of your script.` But where in the hell should I put this line, I don't understand! I have no script. I suppose I have to launch python and somehow run it there, but I haven't figured out how to do this. I could have gone and tried learning python to figure this out, sure, but this thing is taking too much time already and I don't think it should. So I thought maybe somebody could help me out with this. Although I anticipate a sh#tstorm coming my way about how incompetent I am and how I should go read the fcking manual (which one, god damn it!). I definitely could have chosen words more carefully but fck this, I am so sick of this cr@p. This freaking mapnik! Why does it have to take so much attention on itself!

P.S. I had to install postgres 11 because my package manager didn't see version 10.
And when I was compiling the "mod_tile" module I commented out the whole "gen_tile_test.cpp" because it failed to find, guess what... a FREAKING `#include <mapnik/box2d.hpp>`!

asked 21 Feb '20, 13:27 by kartman1 (edited 21 Feb '20, 13:31)

This forum is for questions which can be answered, not rants about your software installation experience. I note that you were installing on an entirely different version of the Operating System rather than 18.04 LTS: it is only to be expected that some things may have changed.

Ok, my bad, I'll go and try this with 18.04 =( although it's strange that fundamentally most things worked as expected and yet they didn't.

This is not strange at all. Your error message seems to point at Mapnik's PostGIS plugin not being found, likely because it has not been installed (sometimes, between distributions, packages are split up into several parts, and where you only needed to install one in version A you now need to install three in version B) or because it resides in a different directory than anticipated.

Your swearing is inappropriate, even more so as you already suspected, when writing your message, that you might be criticised for it!

Perhaps this paragraph overdoes the "British reserve" somewhat (disclaimer - I wrote it). Maybe it should say "depending on where you are starting from, you may be in a whole world of pain". It doesn't surprise me that you've had issues with Ubuntu 19. My experience with earlier Ubuntu interim releases hasn't always been positive - I get the feeling that Ubuntu uses them to "throw functionality at the wall and see what sticks", which you probably don't want to be dealing with if you want a server to be around for more than a few months. There's a new LTS version out this year, and I expect that we'll update the switch2osm guides once we've got something that works. Ubuntu 18.04 LTS is due to receive regular support through 2023 though, so there's no hurry.
answered 21 Feb '20, 14:14 by SomeoneElse (edited 22 Feb '20, 10:33)

Yeah, it worked on Ubuntu 18.04.3 Bionic Beaver (64-bit). Although there was an error in the "sudo apt install npm nodejs" line: npm didn't want to install because one of its dependencies was broken. Luckily though there was a topic just on that matter (). So I did

    sudo apt-get install nodejs-dev node-gyp libssl1.0-dev

and it all worked, although I had "default-libmysqlclient-dev libgdal-dev libmapnik-dev libmysqlclient-dev libssl-dev" removed in the process. I really hope that "libmapnik-dev" is not going to backfire... Thanks!

You can test whether the lack of libmapnik-dev will cause a problem by building some other software that might need it locally (but not doing a "make install" or similar). "node" changes every 5 mins - maybe we need to revisit that bit of the instructions. They're in github, so you can make a pull request there. There's one other postgres version oddity as an issue already, I think.
Timer in C#

There are three timer classes in .NET:

1. System.Windows.Forms.Timer
2. System.Timers.Timer
3. System.Threading.Timer

System.Windows.Forms.Timer

A timer that raises an event at user-defined intervals. This timer is optimized for use in Windows Forms applications and must be used in a window. This Windows timer is designed for a single-threaded environment where UI threads are used to perform processing. It requires that the user code have a UI message pump available and always operate from the same thread, or marshal the call onto another thread. When you use this timer, use the Tick event to perform a polling operation or to display a splash screen for a specified period of time. Whenever the Enabled property is set to true and the Interval property is greater than zero, the Tick event is raised at intervals based on the Interval property setting. This class provides methods to set the interval, and to start and stop the timer.

```csharp
using System;
using System.Windows.Forms;

namespace Timer
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        static System.Windows.Forms.Timer myTimer = new System.Windows.Forms.Timer();
        static int alarmCounter = 1;
        static bool exitFlag = false;

        private void btnTimer_Click(object sender, EventArgs e)
        {
            // ... (the body of this handler is elided in the source)
        }
    }
}
```

System.Timers.Timer

```csharp
using System;
using System.Timers;

namespace SystemTimersTimer
{
    class Program
    {
        private static System.Timers.Timer aTimer;

        static void Main(string[] args)
        {
            // Create a timer with a ten second interval.
            aTimer = new System.Timers.Timer(10000);

            // Hook up the Elapsed event for the timer.
            aTimer.Elapsed += new ElapsedEventHandler(OnTimedEvent);

            // Set the Interval to 2 seconds (2000 milliseconds).
            aTimer.Interval = 2000;
            aTimer.Enabled = true;

            Console.WriteLine("Press the Enter key to exit the program.");
            Console.ReadLine();

            // If the timer is declared in a long-running method, use
            // KeepAlive to prevent garbage collection from occurring
            // before the method ends.
            //GC.KeepAlive(aTimer);
        }

        private static void OnTimedEvent(object source, ElapsedEventArgs e)
        {
            Console.WriteLine("The Elapsed event was raised at " + e.SignalTime);
        }
    }
}
```

System.Threading.Timer

```csharp
using System.Threading;

namespace SystemThreadingTimer
{
    class TimerExampleState
    {
        public int counter = 0;
        public Timer tmr;
        // ... (the remainder of this example is elided in the source)
    }
}
```
I am using Dev-C++ with Allegro. Here is the code:

```c
#include <allegro.h>

int SCREEN_WIDTH;
int SCREEN_HEIGHT;
BITMAP *buffer;

int main(int argc, char *argv[]){
    allegro_init();
    install_mouse();
    SCREEN_WIDTH = 620;
    SCREEN_HEIGHT = 240;
    set_color_depth(16);
    set_gfx_mode( GFX_AUTODETECT_WINDOWED, SCREEN_WIDTH, SCREEN_HEIGHT, 0, 0);
    show_mouse(screen);
    buffer = create_bitmap(SCREEN_WIDTH, SCREEN_HEIGHT);
    putpixel(buffer,300,300,4);
    blit(buffer,screen,0,0,0,0,SCREEN_WIDTH,SCREEN_HEIGHT);
    while(!key[KEY_ESC]){
        //do stuff
        clear_keybuf();
    }
    destroy_bitmap(buffer);
}
END_OF_MAIN()
```

I want it to end the loop when the escape key is pressed, but it doesn't respond. Also, there are two screens instead of just one; the first is text only and the second is where the pixels are drawn. Lastly, I tell the computer to put the pixel on the buffer at 300,300, but when the buffer is drawn to the screen, it is at about 10,0, with multiple pixels along the x axis (about 10, in all different colors). Does anyone have any idea about what is wrong? Thanks for any help...
Most of us in data science have seen a lot of AI-generated people in recent times, whether it be in papers, blogs, or videos. We've reached a stage where it's becoming increasingly difficult to distinguish between actual human faces and faces generated by artificial intelligence. However, with the currently available machine learning toolkits, creating these images yourself is not as difficult as you might think.

Note: This article was written by Rahul Agarwal, a data scientist at WalmartLabs.

In my view, GANs will change the way we generate video games and special effects. Using this approach, we could create realistic textures or characters on demand.

So in this post, we're going to look at the generative adversarial networks behind AI-generated images, and help you to understand how to create and build your own similar application with PyTorch. We'll try to keep the post as intuitive as possible for those of you just starting out, but we'll try not to dumb it down too much. At the end of this article, you'll have a solid understanding of how Generative Adversarial Networks (GANs) work, and how to build your own.

Task Overview

In this post, we will create unique anime characters using the Anime Face Dataset, a dataset consisting of 63,632 high-quality anime faces in a number of styles. It's a good starter dataset because it's perfect for our goal. We'll be using Deep Convolutional Generative Adversarial Networks (DC-GANs) for our project. Though we'll be using them to generate the faces of new anime characters, DC-GANs can also be used to create modern fashion styles, for general content creation, and sometimes for data augmentation as well.

But before we get into the coding, let's take a quick look at how GANs work.

Brief Intro to GANs for Generating Fake Images

GANs typically employ two dueling neural networks to train a computer to learn the nature of a dataset well enough to generate convincing fakes.
One of these neural networks generates fakes (the generator), and the other tries to classify which images are fake (the discriminator). These networks improve over time by competing against each other. Perhaps imagine the generator as a robber and the discriminator as a police officer. The more the robber steals, the better he gets at stealing things. But at the same time, the police officer also gets better at catching the thief. Well, in an ideal world, anyway.

The losses in these neural networks are primarily a function of how the other network performs:

- Discriminator network loss is a function of generator network quality: loss is high for the discriminator if it gets fooled by the generator's fake images.
- Generator network loss is a function of discriminator network quality: loss is high if the generator is not able to fool the discriminator.

In the training phase, we train our discriminator and generator networks sequentially, intending to improve performance for both. The end goal is to end up with weights that help the generator to create realistic-looking images. In the end, we'll use the generator neural network to generate high-quality fake images from random noise.

The Generator Architecture

One of the main problems we face when working with GANs is that the training is not very stable. So we have to come up with a generator architecture that solves our problem and also results in stable training. The diagram below is taken from the paper Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, which explains the DC-GAN generator architecture.

Though it might look a little bit confusing, essentially you can think of a generator neural network as a black box which takes as input a normally generated 100-dimensional vector of numbers and gives us an image:

The generator as a black box.

So how do we create such an architecture?
Below, we use a dense layer of size 4x4x1024 to create a dense vector out of the 100-d vector. We then reshape the dense vector into the shape of an image of 4×4 with 1024 filters, as shown in the following figure:

The generator architecture.

Note that we don't have to worry about any weights right now, as the network itself will learn those during training. Once we have the 1024 4×4 maps, we do upsampling using a series of transposed convolutions, where each operation doubles the size of the image and halves the number of maps. In the last step, however, we don't halve the number of maps. We reduce the maps to 3 for each RGB channel, since we need three channels for the output image.

What are Transpose Convolutions?

Put simply, transposed convolutions provide us with a way to upsample images. In a convolution operation, we try to go from a 4×4 image to a 2×2 image. But when we transpose convolutions, we convolve from 2×2 to 4×4, as shown in the following figure:

Upsampling a 2×2 image to a 4×4 image.

Some of you may already know that unpooling is commonly used for upsampling input feature maps in convolutional neural networks (CNNs). So why don't we use unpooling here? The reason comes down to the fact that unpooling does not involve any learning. Transposed convolution, however, is learnable, so it's preferred. Later in the article we'll see how the parameters can be learned by the generator.

The Discriminator Architecture

Now that we've covered the generator architecture, let's look at the discriminator as a black box. In practice, it contains a series of convolutional layers with a dense layer at the end to predict if an image is fake or not. You can see an example in the figure below:

The discriminator as a black box.

Every image convolutional neural network works by taking an image as input and predicting if it is real or fake using a sequence of convolutional layers.
Data Preprocessing and Visualization

Before going any further with our training, we preprocess our images to a standard size of 64x64x3. We will also need to normalize the image pixels before we train our GAN. You can see the process in the code below, which I've commented on for clarity.

```python
# Root directory for dataset
dataroot = "anime_images/"

# We can use an image folder dataset the way we have it set up.
# Create the dataset
dataset = datasets.ImageFolder(root=dataroot,
                               transform=transforms.Compose([
                                   transforms.Resize(image_size),
                                   transforms.CenterCrop(image_size),
                                   transforms.ToTensor(),
                                   transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
                               ]))

# Create the dataloader
dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
                                         shuffle=True, num_workers=workers)

# Decide which device we want to run on
device = torch.device("cuda:0" if (torch.cuda.is_available() and ngpu > 0) else "cpu")

# Plot some training images
real_batch = next(iter(dataloader))
plt.figure(figsize=(8,8))
plt.axis("off")
plt.title("Training Images")
plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64],
                                         padding=2, normalize=True).cpu(), (1,2,0)))
```

The resultant output of the code is as follows:

So many different characters. Can our generator understand the patterns?

Implementation of DCGAN

Now we define our DCGAN. In this section we'll define our noise generator function, our generator architecture, and our discriminator architecture.

Generating Noise Vector for Generator

We use a normal distribution to generate the noise that the generator converts into images, as shown below:

```python
nz = 100
noise = torch.randn(64, nz, 1, 1, device=device)
```

The Generator Architecture

The generator is the most crucial part of the GAN. Here, we'll create a generator by adding some transposed convolution layers to upsample the noise vector to an image. You'll notice that this generator architecture is not the same as the one given in the DC-GAN paper I linked above.
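Since the generator is built from stride-2 transposed convolutions, it is worth seeing concretely what one such layer does. The following is a dependency-free sketch of both the size arithmetic and the scatter-and-accumulate mechanics; the tiny 2×2 input and identity-style kernel are toy choices for illustration, not the actual layers of the network:

```python
def tconv_out_size(size, kernel=4, stride=2, padding=1):
    # Standard transposed-convolution output size (no output_padding).
    return (size - 1) * stride - 2 * padding + kernel

# The generator's repeated doubling: 4 -> 8 -> 16 -> 32 -> 64
sizes = [4]
for _ in range(4):
    sizes.append(tconv_out_size(sizes[-1]))
print(sizes)  # [4, 8, 16, 32, 64]

def transposed_conv2d(inp, kernel, stride=2):
    # Each input value contributes a kernel-weighted patch to the output,
    # placed `stride` apart; overlapping contributions are summed.
    n, k = len(inp), len(kernel)
    out_size = (n - 1) * stride + k  # no padding here, for simplicity
    out = [[0.0] * out_size for _ in range(out_size)]
    for i in range(n):
        for j in range(n):
            for a in range(k):
                for b in range(k):
                    out[i * stride + a][j * stride + b] += inp[i][j] * kernel[a][b]
    return out

# A 2x2 input becomes a 4x4 output, as in the upsampling figure above.
result = transposed_conv2d([[1, 2], [3, 4]], [[1, 0], [0, 1]])
for row in result:
    print(row)
```

In the real layers the kernel entries are the learnable parameters, which is exactly why transposed convolution is preferred over unpooling here.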
In order to make it a better fit for our data, I had to make some architectural changes. I added a convolution layer in the middle and removed all dense layers from the generator architecture to make it fully convolutional. I also used a lot of Batchnorm layers and leaky ReLU activations. The following code block is the function I will use to create the generator (parts of the class definition were lost in the source and are marked as elided):

```python
# Size of feature maps in generator
ngf = 64

# ... (the start of the Generator class and its earlier
#      transposed-conv layers are elided in the source)

            # State size: (ngf*2) x 16 x 16
            # Transpose 2D conv layer 4.
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf),
            nn.ReLU(True),
            # Resulting state size: (ngf) x 32 x 32
            # Final transpose 2D conv layer 5 to generate the final image.
            # nc is the number of channels - 3 for a 3-channel image
            nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
            # Tanh activation to get the final normalized image
            nn.Tanh()
            # Resulting state size: (nc) x 64 x 64
        )

    def forward(self, input):
        '''This function takes as input the noise vector'''
        return self.main(input)
```

Now we can instantiate the model using the generator class. We are keeping the default weight initializer for PyTorch even though the paper says to initialize the weights using a mean of 0 and stddev of 0.2. The default weight initializer from PyTorch is more than good enough for our project.

```python
# Create the generator
netG = Generator(ngpu).to(device)

# Handle multi-gpu if desired
if (device.type == 'cuda') and (ngpu > 1):
    netG = nn.DataParallel(netG, list(range(ngpu)))

# Print the model
print(netG)
```

Now you can see the final generator model here:

The Discriminator Architecture

Here is the discriminator architecture. I use a series of convolutional layers and a dense layer at the end to predict if an image is fake or not.

```python
# Number of channels in the training images. For color images this is 3
nc = 3
# Size of feature maps in discriminator
ndf = 64

# ... (the Discriminator class definition is elided in the source)

# Print the model
print(netD)
```

Here is the architecture of the discriminator:

Training

Understanding how training works in a GAN is essential. It's interesting, too: we can see how training the generator and discriminator together improves them both at the same time. Now that we have our discriminator and generator models, we next need to initialize separate optimizers for them.

```python
# Initialize BCELoss function
criterion = nn.BCELoss()

# Create batch of latent vectors that we will use to visualize
# the progression of the generator
fixed_noise = torch.randn(64, nz, 1, 1, device=device)

# Establish convention for real and fake labels during training
real_label = 1.
fake_label = 0.

# Setup Adam optimizers for both G and D
# ... (the optimizer construction is elided in the source)
```

The Training Loop

This is the main area where we need to understand how the blocks we've created will assemble and work together.

```python
# Lists to keep track of progress/losses
img_list = []
G_losses = []
D_losses = []
iters = 0

# Number of training epochs
num_epochs = 50

# Batch size during training
batch_size = 128

print("Starting Training Loop...")
# For each epoch
for epoch in range(num_epochs):
    # For each batch in the dataloader
    for i, data in enumerate(dataloader, 0):

        ############################
        # (1) Update D network: maximize log(D(x)) + log(1 - D(G(z)))
        # Here we:
        # A. train the discriminator on real data
        # B. create some fake images from the generator using noise
        # C. train the discriminator on fake data
        ############################
        # ... (the discriminator update code is elided in the source)

        ############################
        # (2) Update G network: maximize log(D(G(z)))
        # Here we:
        # A. find the discriminator output on fake images
        # B. calculate the generator's loss based on this output;
        #    note that the label is 1 for the generator
        # C. update the generator
        ############################
        netG.zero_grad()
        label.fill_(real_label)  # fake labels are real for generator cost
        # Since we just updated D, perform another forward pass of the all-fake batch through D
        output = netD(fake).view(-1)
        # Calculate G's loss based on this output
        errG = criterion(output, label)
        # Calculate gradients for G
        errG.backward()
        D_G_z2 = output.mean().item()
        # Update G
        optimizerG.step()

        # Output training stats every 1000th iteration in an epoch
        if i % 1000 == 0:
            print('[%d/%d][%d/%d]\tLoss_D: %.4f\tLoss_G: %.4f\tD(x): %.4f\tD(G(z)): %.4f / %.4f'
                  % (epoch, num_epochs, i, len(dataloader),
                     errD.item(), errG.item(), D_x, D_G_z1, D_G_z2))

        # Save losses for plotting later
        G_losses.append(errG.item())
        D_losses.append(errD.item())

        # Check how the generator is doing by saving G's output on fixed_noise
        # ... (the image saving code is elided in the source)

        iters += 1
```

It may seem complicated, but I'll break down the code above step by step. The main steps in every training iteration are:

Step 1: Sample a batch of normalized images from the dataset.

Step 2: Train the discriminator using generator images (fake images) and real normalized images (real images) and their labels. (The code for this step is elided in the source.)

Step 3: Backpropagate the errors through the generator by computing the loss gathered from the discriminator output on fake images as the input and 1's as the target, while keeping the discriminator untrainable. This ensures that the loss is higher when the generator is not able to fool the discriminator. You can check it yourself: if the discriminator gives 0 on a fake image, the loss will be high, i.e., BCELoss(0, 1).

We repeat the steps using the for-loop to end up with a good discriminator and generator.

Results

The final output of our generator can be seen below. The GAN generates pretty good images for our content editor friends to work with. The images might be a little crude, but still, this project was a starter for our GAN journey.
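Before moving on, the BCELoss(0, 1) intuition from the training discussion above is easy to verify numerically. Here is a plain-math sketch of the per-sample binary cross-entropy, the same quantity that nn.BCELoss averages over a batch:

```python
import math

def bce(pred, target, eps=1e-7):
    # Per-sample binary cross-entropy, clamped away from log(0).
    pred = min(max(pred, eps), 1 - eps)
    return -(target * math.log(pred) + (1 - target) * math.log(1 - pred))

# The generator's target is 1 ("real") on its own fakes:
print(bce(0.01, 1.0))  # discriminator confidently says "fake": generator loss is large
print(bce(0.99, 1.0))  # discriminator fooled: generator loss is near zero
```

The asymmetry is the whole training signal: the generator's loss only gets small when the discriminator's output on fakes approaches 1.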
The field is constantly advancing with better and more complex GAN architectures, so we'll likely see further increases in image quality from these architectures. Also, keep in mind that these images are generated from a noise vector only: this means the input is some noise, and the output is an image. It's quite incredible. All these images are fake!

1. Loss over the training period

Here is the graph generated for the losses. We can see that the GAN loss is decreasing on average, and the variance is also decreasing as we do more steps. It's possible that training for even more iterations would give us even better results.

```python
plt.figure(figsize=(10,5))
plt.title("Generator and Discriminator Loss During Training")
plt.plot(G_losses, label="G")
plt.plot(D_losses, label="D")
plt.xlabel("iterations")
plt.ylabel("Loss")
plt.legend()
plt.show()
```

Generator vs. discriminator loss.

2. Image animation at every 250th iteration in Jupyter Notebook

We can choose to see the output as an animation using the code below:

```python
#%%capture
fig = plt.figure(figsize=(8,8))
plt.axis("off")
ims = [[plt.imshow(np.transpose(i, (1,2,0)), animated=True)] for i in img_list]
ani = animation.ArtistAnimation(fig, ims, interval=1000, repeat_delay=1000, blit=True)
HTML(ani.to_jshtml())
```

Here you can check out the training progression. You can also save the animation object as a GIF if you want to send it to some friends:

```python
ani.save('animation.gif', writer='imagemagick', fps=5)
Image(url='animation.gif')
```

3. Image generated at every 200th iteration

Below you'll find the code to generate images at specified training steps. It's a little difficult to see clearly in the images, but their quality improves as the number of steps increases.
```python
# create a list of 16 images to show
every_nth_image = np.ceil(len(img_list)/16)
ims = [np.transpose(img, (1,2,0)) for i, img in enumerate(img_list) if i % every_nth_image == 0]
print("Displaying generated images")
# You might need to change grid size and figure size here according to num images.
# ... (the plotting code is elided in the source)
```

Given below is the result of the GAN at different time steps. If you don't want to show the output as a GIF, you can see the output as a grid too.

Conclusion

In this post we covered the basics of GANs for creating fairly believable fake images. We hope you now have an understanding of generator and discriminator architecture for DC-GANs, and how to build a simple DC-GAN to generate anime images from scratch. Though this model is not the most perfect anime face generator, using it as a base helps us to understand the basics of generative adversarial networks, which in turn can be used as a stepping stone to more exciting and complex GANs as we move forward.

Look at it this way: as long as we have the training data at hand, we now have the ability to conjure up realistic textures or characters on demand. That is no small feat.

For a closer look at the code for this post, please visit my GitHub repository. If you're interested in more technical machine learning articles, you can check out my other articles in the related resources section below.

This article was written by Rahul Agarwal, a data scientist at WalmartLabs. Previously published.
The problem with using a stateful firewall is that if the applications that go through it have a slightly different concept of what proper TCP state should be, or if the firewall makes invalid assumptions, some services will cease to function. The following subsections explain what some of those errors are and how to fix them.

A little history is in order here. In FireWall-1 4.0 and earlier, if FireWall-1 received a TCP ACK packet that didn't match an entry in the connections tables, it would strip off the data portion of the packet (thus making it harmless), change the TCP sequence number, and forward the packet to its final destination. Because the destination host would see an unexpected sequence number, it would send a SYN packet to resynchronize the connection. The SYN packet would then go through the rulebase. If the rulebase permitted the packet, the entries in the connections tables would be recreated and the connection would continue.

In FireWall-1 4.1 and FireWall-1 4.1 SP1, FireWall-1 allows the unsolicited TCP ACK packet only if it comes from the server. If the TCP ACK packet comes from the client (i.e., the machine that originated the connection), the TCP ACK packet is dropped. Then someone figured out that this handling of ACK packets could be used to cause a DoS attack against both the firewall and the host behind it. Since FireWall-1 4.1 SP2, by default FireWall-1 drops ACK packets for which there are no entries in the state tables. However, in NG FP3 and above, you can revert back to the pre-4.1 SP2 behavior by going into the Global Properties frame, Stateful Inspection tab, and unchecking the "Drop out of state TCP Packets" box. In NG FP2 and before, use dbedit as described in FAQ 4.2 and enter the following commands:

    dbedit> modify properties firewall_properties fw_allow_out_of_state_tcp 1
    dbedit> update properties firewall_properties

NOTE!
Some application vendors use TCP connections in ways that do not follow the standards documented in RFC 793. Since FireWall-1 attempts to enforce strict adherence to the standards, applications that do not comply will have difficulties communicating through FireWall-1 or any other stateful packet filter.

NG FP2 and above provide functionality that allows TCP packets for a specific port number even if they do not conform to Check Point's idea of state. This allows out-of-state TCP packets for specific services, provided the packets would normally be passed by the rulebase. To do this, edit $FWDIR/lib/user.def on the management station and add a line of code (set in bold) within the following context:

    #ifndef __user_def__
    #define __user_def__

    //
    // User-defined INSPECT code
    //

    deffunc user_accept_non_syn() { dport = 22 };

    #endif /* __user_def__ */

The INSPECT code between the curly braces defines the service(s) you wish to allow. The preceding example is SSH (TCP port 22). To define multiple services, for example SSH (port 22), https (port 443), and ldap (port 389), replace the bold line in the preceding example with this one:

    deffunc user_accept_non_syn() { dport=22 or dport=443 or dport=389 };

To permit non-SYN packets between hosts a.b.c.d and x.y.z.w in addition to non-SYN packets on port 22, use the following:

    deffunc user_accept_non_syn() { (src=x.y.z.w, dst=a.b.c.d) or (src=a.b.c.d, dst=x.y.z.w) or dport=22 };

(See Chapter 14 for more information on INSPECT.) If the rulebase is constructed carefully enough, the firewall should be relatively safe from an ACK-type DoS attack because all packets allowed by this change must still pass the rulebase.

FireWall-1 can mark a connection in the connections table to allow traffic to pass in one direction only.
This can either be a connection that started from the inside, in which case FireWall-1 would mark the table to read that only outbound packets are allowed, or it can be a connection that originated from the outside, in which case FireWall-1 would mark the table to read that only inbound packets are allowed. This means that data can pass in only one direction (ACK packets as part of normal TCP are acceptable). When a packet violates a unidirectional connection, Check Point logs an entry into SmartView Tracker/Log Viewer. UDP services have an option to set a service to accept replies. In a sense, that is unidirectional. Unidirectional TCP connections occur with FTP. Some programs that use FTP do so in a nonstandard way that requires all the connections used by the FTP connection to be bidirectional. To allow for bidirectional FTP connections in FireWall-1 NG, perform the following steps.

Stop the FireWall-1 management station with cpstop.

Edit $FWDIR/lib/base.def on the management station. Add the following bolded lines within the context shown:

deffunc ftp_port_code() { ftp_intercept_port(CONN_ONEWAY_EITHER) or (IS_PASV_MSG,reject or 1) };
deffunc ftp_pasv_code() { ftp_intercept_pasv(CONN_ONEWAY_EITHER) or (IS_PORT_CMD,reject or 1) };
deffunc ftp_bidir_code() { ftp_intercept_port(NO_CONN_ONEWAY) or ftp_intercept_pasv(NO_CONN_ONEWAY) };
deffunc ftp_code() { ftp_intercept_port(CONN_ONEWAY_EITHER) or ftp_intercept_pasv(CONN_ONEWAY_EITHER) };

Edit $FWDIR/conf/tables.C on the management station as follows (changes are set in bold):

: (protocols
    :table-type (confobj-dynamic)
    :location (protocols)
    :read_permission (0x00000000)
    :write_permission (0x00040000)
    :queries (
        :all (*)
    )
)

Note that table-type will be changed from confobj-static to confobj-dynamic.

Start the FireWall-1 management station with cpstart.
Use dbedit to enter the following commands:

dbedit> create tcp_protocol FTP_BI
dbedit> update protocols FTP_BI
dbedit> modify protocols FTP_BI handler ftp_bidir_code
dbedit> modify protocols FTP_BI match_by_seqack true
dbedit> modify protocols FTP_BI res_type ftp
dbedit> update protocols FTP_BI
dbedit> quit

This allows you to create the bidirectional FTP service. Open up SmartDashboard/Policy Editor and create a new service of type TCP. It will be on port 21. Give it a name other than FTP_BI (e.g., ftp_bidir). Click the Advanced button and select FTP_BI as the protocol type. Use the new service in a rule. Install the security policy.

This error can be seen in SmartView Tracker/Log Viewer when FireWall-1 receives a new connection from a source to a destination over the same port/service as a connection that was recently closed with a FIN or RST. FireWall-1 hangs onto these connections until the TCP end timeout is reached, which defaults to 60 seconds. This behavior is normal and expected. The first step in alleviating this issue is to lower the TCP end timeout to see if that helps remove the connection from the connections table in time for the new connection to be received without a conflict. In FireWall-1 NG FP2 and later, the TCP end timeout can be modified via the GUI in the Stateful Inspection frame of the Global Properties section. If the problem still occurs, the solution is to use TCP Sequence Verifier in NG FP3 to enable FireWall-1 to see the connection as a new connection, not an established one. For this to work properly, you need to run NG FP3 or above. On Nokia platforms, ensure that you have disabled flows. Contact Nokia Support for assistance. Another option exists in hotfix SHF_FW1_FP3_0114, which is included in NG FP3 HFA-311 and above.
You can change the behavior by modifying the value of the kernel variable fw_reuse_established_conn in three ways: change it to the TCP port number on which you need this behavior, change it to -1 for all ports, or change it to -2 to disable the behavior. See FAQ 6.1 for instructions on how to edit FireWall-1 kernel variables. These errors show up in SmartView Tracker in FireWall-1 NG FP3 and above. SmartDefense is dropping packets with the SYN and RST flags set as malformed instead of as a normal RST packet. Check Point provides a fix for this issue in hot fix SHF_FW1_FP3_0114. This fix is included in NG FP3 HFA-311 and above. After applying the fix, you can change the behavior by modifying the value of the kernel variable fw_accept_syn_rst to the TCP port number on which you need this behavior, to -1 for all ports, or to -2 to disable the behavior. See FAQ 6.1 for instructions on how to edit FireWall-1 kernel variables. These error messages show up in SmartView Tracker on FireWall-1 NG FP3 and above when the firewall receives unexpected SYN-ACK packets. To allow these packets, change the kernel variable fw_allow_out_of_state_syn_resp to 1. FAQ 6.1 explains how to change kernel variables. Prior to FireWall-1 NG FP1, FireWall-1 did not perform any checking of TCP sequence numbers. NG FP1 introduced this functionality, which validates the TCP sequence numbers used in a connection. It provides better tracking of the state of TCP connections. Enabling this feature can eliminate certain kinds of error messages in the logs and possibly create others. To enable TCP Sequence Verifier on NG FP3 or above, in SmartDashboard, select SmartDefense from the Policy menu. The option is listed under TCP as Sequence Verifier. To enable TCP Sequence Verifier on NG FP2, check the "Drop out of sequence packets" option under TCP Sequence Verifier in the Stateful Inspection frame in the Global Properties section. 
To enable TCP Sequence Verifier on NG FP1, use dbedit to set the following property to true in the objects_5_0.C file:

dbedit> modify properties firewall_properties fw_tcp_seq_verify 1
dbedit> update properties firewall_properties

In FireWall-1 NG, you can set these timeouts in the GUI directly. For both TCP and UDP services, go into the Advanced section of the service in question. For TCP services, edit the session timeout. For UDP services, edit the virtual session timeout. Reinstall the security policy. It is usually better to use some of the other tricks discussed to permit TCP packets that are out of state, such as the method described in FAQ 6.21. I don't even want to think about the security implications of leaving idle TCP connections open forever, but my gut tells me that this is not a good idea. If you absolutely need to disable timeouts for a service because the vendor of your application refuses to implement a mechanism for periodically checking to see whether a connection is alive, this is how you would do it with dbedit:

dbedit> modify services service-name timeout 2147483647
dbedit> update services service-name

The value specified in the preceding example is used internally by the kernel to mark connections that do not time out. However, even if you set a number slightly less than 2,147,483,647, you still get connections that should last many years, assuming you do not stop your firewall for that long.
https://etutorials.org/Networking/Check+Point+FireWall/Chapter+6.+Common+Issues/Problems+with+Stateful+Inspection+of+TCP+Connections/
Is a string array with 500-1000 words read out of a file possible? Where can I get such a file? If I wanted to make a game like Scrabble in Java, how could I include all the words? Is there a way to import them or so? How do I do that? Is this even possible?

I tried your suggestion in post 4, but apparently I typed it in a bit wrong. Tried it again and it works. Thanks!

Maybe I'll rephrase. So this is my code:

String[] stringArray = {"hello","world","test","whatever"};
System.out.println("stringArray: "+ java.util.Arrays.toString(stringArray));...

Eclipse gives an error: "Array constants can only be used in initializers." in this line:

stringArray = {"banana","fish","tree","apple"};

How can I change a complete array when it's already initialized? Can I change this array:

String[] stringArray = {"hello","world","test","whatever"};

to this: ...

This:

double test = 100/3;
System.out.print(test);

gives as output 33.0. Isn't it supposed to output 33.3333...? Am I missing something?

yep, doing that now already. Apparently I forgot about arrays. Thanks anyway.

What's the easier way to get a random string? At the moment I would use:

Random number = new Random();
numberR = number.nextInt(2)+1;
if (numberR==1){
    System.out.println("Hi.");
}

Nevermind

Glad I could help. :)

You forgot the dot between System and out. Also post code inside code tags:

import java.io.*;
import java.util.Scanner;
class Main {
    public static void main (String[] args) throws...

It's only console. Thanks, I'll try to use JOptionPanes.

How could I create this program: If you start it, a loop tells something (say every 5 seconds) until you type in 'stop'. The problem is, if I type in .nextLine(); it checks every time the loop...

So I've got the following code:

import java.awt.*;
import javax.swing.*;
public class testframe extends JFrame{
    private JButton play;

maybe... May I ask what's an IDE?
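Pulling the threads above together, here is a hedged Java sketch covering the three questions: loading a word list from a file into a String array, reassigning an already-initialized array, and why 100/3 prints 33.0. The word file is created inside the example so it is self-contained; in a real Scrabble-style game you would point it at your own dictionary file.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class WordListDemo {
    public static void main(String[] args) throws IOException {
        // A word list is just a plain-text file, one word per line; a tiny
        // one is created here so the example runs anywhere. 500-1000 words
        // (or far more) load the same way.
        Path wordFile = Files.createTempFile("words", ".txt");
        Files.write(wordFile, List.of("apple", "banana", "cherry", "date"));

        String[] words = Files.readAllLines(wordFile).toArray(new String[0]);
        System.out.println("loaded " + words.length + " words");

        // An array constant {...} is only legal in an initializer; to
        // reassign an existing variable, spell out the array creation:
        String[] stringArray = {"hello", "world", "test", "whatever"};
        stringArray = new String[]{"banana", "fish", "tree", "apple"};
        System.out.println(java.util.Arrays.toString(stringArray));

        // 100/3 is integer division (both operands are ints), so the result
        // is 33 before it is ever widened to a double. Make one operand a
        // double to get 33.333...:
        double test = 100.0 / 3;
        System.out.println(test);
    }
}
```

As for where to get such a file: free word lists are easy to find, and many Unix-like systems ship one at /usr/share/dict/words.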
It would be really useful to just type in jumptoLine(12); and have it go to that line. Now is there something to do so?

Thanks for helping me, GregBannon, but I hadn't read your comment until now. And by now I already know the equals method. I btw changed the program so it auto repeats itself after you got the right...

Thanks :) Helped me!

if (x == y || x == z) tests if x is equal to y OR z. How do I do this for .equals? Like: if (x.equals("Hello" or "World"))

I didn't want to copy it, because it's so long... The program runs fine until I get the number right. Then the console outputs:

Again? Type in yes or no.
Again? Type in yes or no.
...

So I wanted to make a simple number guessing game, which worked fine. Then I wanted to add a play again function, but... I get errors. Why? Here's the code:

import java.util.*;
public class...

thanks :) you really helped me out a lot. now that's not important, but how long do you think would/will it take to learn java until I can create a small game?

Hi. So I wanna create games for Windows. No shooter or RPG or so, but Jump 'n' Run or Roguelikes (no high graphics) with maybe a Multiplayer mode. I have nearly no experience in that, so 1. Is...
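On the .equals question quoted above: Java has no x.equals("Hello" or "World") form; you call equals once per candidate value and combine the boolean results with ||. A minimal sketch:

```java
public class EqualsOrDemo {
    // equals is called once per value; || short-circuits, so the second
    // comparison only runs when the first one is false.
    static boolean isGreeting(String x) {
        return x.equals("Hello") || x.equals("World");
    }

    public static void main(String[] args) {
        System.out.println(isGreeting("Hello")); // true
        System.out.println(isGreeting("World")); // true
        System.out.println(isGreeting("Bye"));   // false
    }
}
```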
http://www.javaprogrammingforums.com/search.php?s=de21115c4e0fb6f0e550a2cb6d113ed9&searchid=1724930
“ In Object-Oriented Programming (OOP), encapsulation refers to the bundling of data with the methods that operate on that data, or the restricting of direct access to some of an object’s components.” Before OOP and encapsulation, the data in an object used to be accessed by other entities. For instance, to calculate the magnitude of the vector class in Kotlin: <I chose Kotlin for the demo as its syntax is less verbose. This anti-pattern however is equally applicable to all the languages following OOP> The main function accesses fields of the Vector class through accessor functions like the getters…

Git is a database to store the snapshots of the codebase throughout its development phase. Although developers are familiar with the basic commands, most are oblivious to the internal workings. This will be a hands-on tutorial on how git works internally. Let’s set up a local git repository.

$ mkdir git-internals-tutorial ; cd $_  # create a new directory
$ git init  # create a git repository locally

A git repository stores the snapshots in a .git folder. We can visualize the folder with a tree command.

$ tree .git
.git
├── HEAD
├── config
├── hooks
├── objects
│   ├── info
│   └── pack
└── refs
    ├──…

The args and the kwargs in python

Say we want to print the below list.

list_of_numbers = [1, 2, 4, 5, 6, 7]

list_of_numbers = [1, 2, 4, 5, 6, 7]
for number in list_of_numbers:
    print(number, end=" ")

OUTPUT: 1 2 4 5 6 7

2. A better way would be to use list comprehension

list_of_numbers = [1, 2, 4, 5, 6, 7]
[print(number, end=" ") for number in list_of_numbers]

OUTPUT: 1 2 4 5 6 7

3. The best would be to…

HTTP being a stateless protocol, web developers found it hard to keep track of the returning clients. A client would have to reenter all the data again on making a subsequent request to the server.
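The Kotlin listing referred to above is not reproduced here, but the same encapsulation idea can be sketched in Java: the magnitude calculation moves inside the class, next to the data it uses, instead of callers pulling the fields out through getters. The class and field names below are illustrative, not the original article's.

```java
public class Vector2 {
    private final double x;
    private final double y;

    public Vector2(double x, double y) {
        this.x = x;
        this.y = y;
    }

    // Behaviour bundled with the data it operates on: callers ask the
    // vector for its magnitude instead of fetching x and y via getters.
    public double magnitude() {
        return Math.sqrt(x * x + y * y);
    }

    public static void main(String[] args) {
        System.out.println(new Vector2(3, 4).magnitude()); // 5.0
    }
}
```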
Before cookies, the workaround to get the data from a returning client would be to make the browser send the data instead of having the user repeatedly enter it. It was achieved by having the data written into hidden forms in the website and having it sent to the server on the next request. As expected, this method was cumbersome, not to mention error prone. …

Talisman is an open source tool developed and maintained by ThoughtWorks Technologies. The tool is a language-independent scanner that scans the code in a git repository for potential passwords and other confidential information that may be checked in as part of the commits in git. More on talisman here. Although talisman works predominantly by running git hooks that get triggered on git commit or push actions on the developer machines, they also provide a way to run the tool on the command line with the help of the Talisman CLI. The code and the sample implementation used in the…

Awk is an interpreted scripting language used for text manipulation. It is available by default in most linux and unix distributions. Awk divides the input file into records and each record is divided into fields. The awk command is as follows:

> awk '<the script to run in quotes>' input_file

The quotes around the script enable the shell to treat the entire script as a single argument for the awk command. Awk runs the script on each record in the input_file unless specified. The input file of the name weather-data.csv used in this blog is available on github. Do download…

Here is a hands-on tutorial on docker bridge networks. Feel free to install docker and follow along.

docker network ls

This command will list all the docker networks currently running on the host machine, which is generally your computer. The list would include the driver and the scope of the docker networks.…

Represent an ordered pair of integers with a single integer. I wanted to create a lookup table for the coordinate points in the first quadrant of a graph.
Searching the lookup table would be optimal if, instead of storing the coordinates as a tuple of (x, y), they could be condensed to a single element. That's when I stumbled on the cantor pairing function, which provides a collision-free key for each coordinate point in the first quadrant of a graph. The cantor pairing function is used to uniquely identify a two-tuple of integers.

Typically in computer programs, an enum is used to represent a field having different values. For instance, while representing a set of athletes, the field stating the sport played would have the value represented by an enum. However, enums label the elements in serial order.

#include <stdio.h>

enum sport { cricket, football, basketball };

int main() {
    printf("Cricket = %d\n", cricket);
    printf("Football = %d\n", football);
    printf("Basketball = %d\n", basketball);
    return 0;
}

OUTPUT:
Cricket = 0
Football = 1
Basketball = 2

Although it may seem to be an easy encoding for the programmers, if used…

Back in HTTP 1.0, the client opens a TCP connection to make a HTTP request to the server. Once the request went through, the server used to close the TCP connection. The property of site locality states that an application that initiates a HTTP request to a server is likely to make more requests in future. So if a server receives a request, it should be ready to accept subsequent requests for a given amount of time. But in HTTP 1.0, the connection has to be re-established for each request, very frequently. The process of opening and closing of…

Developer @ThoughtWorksTechnologies
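The cantor pairing function mentioned above has a closed form, pair(x, y) = (x + y)(x + y + 1)/2 + y, defined for non-negative integers. A small sketch (in Java rather than the post's original language, purely as an illustration):

```java
public class CantorPairing {
    // Maps a pair of non-negative integers to a single unique integer,
    // so it can serve as a collision-free lookup-table key.
    static long pair(long x, long y) {
        return (x + y) * (x + y + 1) / 2 + y;
    }

    public static void main(String[] args) {
        System.out.println(pair(0, 0)); // 0
        System.out.println(pair(2, 3)); // 18
        System.out.println(pair(3, 2)); // 17, so order matters
    }
}
```

The mapping is also invertible, so the original (x, y) pair can always be recovered from the single key.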
https://praveen-alex-mathew.medium.com/?source=post_internal_links---------6----------------------------
doubts getting exponentially INCREMENTed

niraj singh Ranch Hand Joined: Feb 07, 2001 Posts: 36 posted Mar 08, 2001 07:14:00

I have been reading a lot of discussion on pre-increment and post-increment in this forum, but still I have not been able to clearly understand the program flow logic for pre and post increments. I have attached a detailed discussion describing the way the increment operator works from the sun java forum site. Could someone tell me the step-by-step logic flow for a pre and post increment operator.

Question 1 - My first question is regarding the code attached below which deals with i=i++. My understanding is that the sequence is as follows -
1. The full expression to the right of the "=" operator has to be evaluated first.
2. My earlier understanding was that post-increment means the variable is incremented after complete execution of the statement and before control moves to the next statement. But here, the value of i is inserted before start of expression evaluation. So i is set to 0.
3. i++ has to be evaluated before assignment because the ++ operator has higher precedence. Thus i gets incremented to 1.
4. This is where I get hazy. Does he mean that the incremented value of i, which is now 1, is not assigned to the i on the left hand side of the "=" operator, but instead stored in a temporary memory location as 1. The original value of i is located by Java (which is 0) and is assigned to i. So, i remains at 0, but the temporary storage has a value 1. So, what happens to the temporary storage......is it just dropped and the value lost?

Now, when dealing with pre-increment, the same 4 steps are -
1. The full expression to the right of the "=" operator has to be evaluated first.
2. The value of i will be plugged in after the increment......what does it mean? How will i be incremented when it has no value.
3.
A pre-increment too has higher precedence than a = operator, so ++i has to be evaluated before assignment because the ++ operator has higher precedence. Thus i gets incremented to 1.
4. In the previous case, I assumed the post-incremented value is stored in a temporary memory location. Here, I assume, the difference would be that no temporary memory location is used, but instead the actual memory location of i is updated to the value 1. The value of the expression is processed, which is also 1, and this value is assigned to the variable i, which is on the left side of the = operator.

public class Increment{
    public static void main(String argv[]){
        Increment inc = new Increment();
        int i =0;
        inc.fermin(i);
        System.out.println(i);
        i++;
        System.out.println(i);
    }
    void fermin(int i){ // arguments are passed by value, so changed value of this local variable "i"
        i++;            // is not returned.
    }
}

/* Explanation -
Operator Precedence Chart:
=========================
Operators are shown in decreasing order of precedence from top to bottom.

Rank  Operator  Type                  Associativity
====  ========  ====                  =============
1     ()        parentheses           left to right
2     []        array subscript       left to right
...
5     ++        unary post-increment  right to left
...
16    +         arithmetic addition   left to right
...
34    =         assignment            right to left
...
45    >>>=

So, the operator "=" has the lowest precedence in the following expression: i = i++;
The expression that has to be evaluated prior to the assignment is: i++. The post-increment means the value will be plugged in first, prior to the expression evaluation. So i = i++ can be viewed as two statements:
i = 0  -- value is plugged in prior to increment
i++    -- post-increment expression
The post-increment has higher precedence than assignment. For this reason, the expression will be evaluated prior to the assignment. So "i" will be incremented by 1. That means the value of "i" is 1. Then the assignment will be performed as i = 0. That means the value of "i" is changed from 1 to 0.
The conclusion is that the variable "i" will be assigned to 1 during the expression evaluation, but the value is a temporary one. At the end (after assignment) the variable "i" will have 0. Let us see the case with pre-increment. i = ++i; In the case of pre-increment the value will be plugged in after the increment. That means "i" will be incremented to have a value of 1, and then the value will be plugged in before the assignment. i = 1; In this the variable "i" will have a value of 1. */

Question 2 - This relates to a question put up on this forum a few days back. I am reposting the code -

public class Q18 {
    static int call(int x) {
        try {
            System.out.println(x---x/0); //line 1
            return x--;                  //line 2
        } catch(Exception e){
            System.out.println(--x-x%0); //line 3
            return x--;
        } finally {
            return x--;
        }
    }
    public static void main(String[] args) {
        System.out.println(" value = "+ call(5));
    }
}

In the line marked 1, we have x---x/0. Is this expression evaluated as -
1. (x - (--x))/0 or
2. ((x--) - x)/0
Changing this, I left out the divide by zero, and tried to evaluate the remaining expression. I have taken x=5. Hence -
1. (x - (--x))  //answer is 1
2. ((x--) - x)  //answer is 1
If the answer is 1, I tried to use the Question 1 logic of pre and post increment, but have failed to understand how both evaluate to 1. Please explain the logic as also relevant to Question No. 1. If I further modify the question (keeping x = 5) to -
(x--+--x);  //result is 8
But if I change it to -
((x--)-x); (x--+--x);  //result is 6
Could someone please explain the above questions. [This message has been edited by niraj singh (edited March 09, 2001).]

Cindy Glass "The Hood" Sheriff Joined: Sep 29, 2000 Posts: 8521 posted Mar 09, 2001 09:59:00

You are correct in that the system has a working area that holds what it knows values to be. This is called the scratchpad area or transaction area in some languages. If the variable x=5 then a working area is created for x's value when it is in scope.
When the variable goes out of scope the contents of the working area is lost. When you do x-- you are saying use the value of x for the variable and decrement the working area value for x for the next use. When you say --x you are saying decrement the working area and then assign that value back to x. Let me tackle the second one first. x=5

(x - (--x))   5 - (now 4) = 1
(x--) - x     5 - (now 4) = 1, working area holds 4
(x-- + --x);  5 + (now 3) = 8

I got lost on what you were doing for the last part, you have 2 expressions with one result?

"JavaRanch, where the deer and the Certified play" - David O'Meara

niraj singh Ranch Hand Joined: Feb 07, 2001 Posts: 36 posted Mar 10, 2001 03:20:00

Hi Cindy, Thanks a lot for clearing up my concepts. My main doubt was regarding the precedence of () and ++, that is whether the parentheses had higher precedence than the ++ operator. The answer to that is that I was confusing precedence with associativity. () is only for associativity. The operands will still have to be evaluated left to right. Niraj
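Cindy's working-area explanation can be checked directly; this small Java program prints the values discussed in the thread (operands of a binary operator are evaluated left to right, and i = i++ assigns the saved old value back):

```java
public class IncrementDemo {
    public static void main(String[] args) {
        int i = 0;
        i = i++;  // the old value (0) is saved, i briefly becomes 1,
                  // then the saved 0 is assigned back
        System.out.println(i); // 0

        int x = 5;
        System.out.println(x - --x);   // 5 - 4 = 1
        x = 5;
        System.out.println(x-- - x);   // 5 - 4 = 1 (x-- yields 5, leaves 4)
        x = 5;
        System.out.println(x-- + --x); // 5 + 3 = 8
    }
}
```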
http://www.coderanch.com/t/198136/java-programmer-SCJP/certification/doubts-exponentially-INCREMENTed
This page aims to offer a reasonably comprehensive guide to categorised Wikiversity content. The lists here are mainly generated dynamically (automatically) and provide multiple alternative navigation pathways. Wikiversity users are diverse, so no single exploration path is adequate. To ensure that a new resource is accessible here, categorise it - the project box system can help new editors easily add appropriate categories. An earlier version of this page can be found here. It is also possible to browse Wikiversity resources by name (alphabetical listing). You can browse Wikiversity by school subject. The school subject portals aim to give a friendly and useful access route for users interested in pre-tertiary (pre-school, primary and secondary) education. The school subject portals aim to offer users an alternative non-university route to resources in their subject area which is free of the extreme specialisation which occurs at tertiary level and which is friendly and comprehensible to schoolchildren. You can browse Wikiversity by educational level. There is also a guided tour to these portals. You can browse Wikiversity by user type. There are only a few of these, and the user type portals offer access routes based around the role you wish to play at Wikiversity. You can browse Wikiversity by the completion status of resources, which can help you find complete resources reading for using, or freshly started ones which you can help develop. Work also needs to be done on categorising resources by completion status, which is why some of the categories here are rather empty. You can browse Wikiversity by "faculty", where "faculty" is understood in the international (non-US American) sense of a major division of a university. Wikiversity is not a real university, but you can explore Wikiversity using the metaphor of a university. If you prefer the word "school" to "faculty", please skip to the box below. 
You can browse Wikiversity by "school", where "school" is understood in the US American sense of a major division of a university. Wikiversity is not a real university, but you can explore Wikiversity using the metaphor of a university. If you prefer the word "faculty" to "school", please skip to the box above. During the course of its history, Wikiversity was instructed to organize itself more closely around "learning projects", so a structure appeared which offers an alternative way to explore content. You can browse Wikiversity by resource type. This route is rather rough and ready for the time being, as people's consistency and willingness to categorise their resources by type has been somewhat lacking. It is hoped that by spotlighting the "by resource type" exploration path, people will slowly improve the tagging of resources by resource type. You can browse Wikiversity by department. Historically Wikiversity had something called "departments" which were put into the "Topic" namespace, and you can browse through them here.
http://en.wikiversity.org/wiki/Wikiversity:Browse
2010/5/3 Eric Blake <eblake redhat com>: > * src/util/dnsmasq.c (dnsmasqReload): Mingw lacks kill, but is not > running a dnsmasq daemon either. > --- > > I'm not as familiar with the mingw cross-build setup, but I finally > got enough pieces installed on my F-12 machine that this was the only > remaining compilation failure that I ran into. Let me know if we need > an alternate patch that solves dnsmasq more gracefully than just > crippling one function. There's no dnsmasq to kill on Windows. I think this fix is fine. > src/util/dnsmasq.c | 4 +++- > 1 files changed, 3 insertions(+), 1 deletions(-) > > diff --git a/src/util/dnsmasq.c b/src/util/dnsmasq.c > index 1cb5f21..d6cef40 100644 > --- a/src/util/dnsmasq.c > +++ b/src/util/dnsmasq.c > @@ -328,14 +328,16 @@ dnsmasqDelete(const dnsmasqContext *ctx) > * Reloads all the configurations associated to a context > */ > int > -dnsmasqReload(pid_t pid) > +dnsmasqReload(pid_t pid ATTRIBUTE_UNUSED) > { > +#ifndef __MINGW32__ Typically, WIN32 instead of __MINGW32__ is used in the libvirt code base. Cygwin doesn't define WIN32, so this won't interfere with a Cygwin build. > if (kill(pid, SIGHUP) != 0) { > virReportSystemError(errno, > _("Failed to make dnsmasq (PID: %d) reload config files.\n"), You could kill the \n here. > pid); > return -1; > } > +#endif /* __MINGW32__ */ > > return 0; > } > -- > 1.6.6.1 > ACK Matthias
https://www.redhat.com/archives/libvir-list/2010-May/msg00032.html
Custom JavaScript targeting

Custom JavaScript targeting allows you to inject JavaScript onto a page, then target your experiments based on the value that the JavaScript returns.

When to use Custom JavaScript targeting

Use custom JavaScript when you want to build targeting conditions based on webpage information that can't be retrieved from the URL, the data layer, JavaScript variables, or other targeting. Your custom JavaScript must be a single JavaScript function that returns a value using the 'return' statement. You can then target visitors based on the value that your JavaScript returns.

Note: All user-defined JavaScript must be declared above the Optimize 360 container snippet, in the <HEAD> of the page. JavaScript declared after the Optimize 360 snippet will not be available to target on page load. Learn more about the placement of the Optimize 360 snippet.

Example: Target visitors browsing your site in the morning

You want to target experiments to users visiting your site during the morning hours. To do this, write a JavaScript function that returns the current hour (with possible values 0-23). Then, create a targeting condition that looks for a returned value that is less than 12.

Step 1: Create a custom variable
- Create or edit an experiment.
- Click the TARGETING tab.
- Click AND to add a new targeting rule.
- Click Custom JavaScript.
- Click Variable, then Create new...
- Optionally, click an existing variable to edit it.
- Enter your Custom JavaScript in the open text field (see a sample below).
- Name your variable – for example, Browser time.
- Click CREATE VARIABLE.

Sample JavaScript which returns the time that the browser's clock is set to:

function() {
  return (new Date()).getHours();
}

Step 2: Build a condition with your custom variable

After creating your custom variable, Optimize will populate it in a new targeting condition which you can complete by adding a match type and value.
For this example, build a targeting condition that looks for a returned value of 11 or less and click SAVE.

This condition will evaluate true if: - the value of the Browser time variable is less than 12.
This condition will evaluate false if: - the value of the Browser time variable is 12 or greater.

Note: Be cautious with JavaScript code that will have side effects. Your code shouldn't alter/update the DOM or any variables currently stored on the page. Also, make sure that your app's logic doesn't depend on this code having been run.

Match types

The following match types are available in JavaScript variable targeting: - Equals / does not equal - Contains / does not contain - Starts with / does not start with - Ends with / does not end with - Regex matches / does not regex match - Less than - Less than or equal - Greater than - Greater than or equal

Equals / does not equal
Every character, from beginning to end, must be an exact match of the entered value for the condition to evaluate as true. A condition using does not equal will evaluate as true when the variable does not equal any of the entered values.

Contains / does not contain
The contains match type (also known as a "substring match") allows you to target any occurrence of a substring within a longer string.

Starts with / does not start with
The starts with match type matches identical characters starting from the beginning of the string up to and including the last character in the string you specify.

Ends with / doesn't end with
An exact match of the entered value with the end of the string. For example, you can target shopping cart pages that use /thankyou.html at the end of their URLs.
Operators

AND
The AND operator is useful when you wish to target a variation based on multiple rules that all need to be true. Conditions using the AND operator will only evaluate as true when all of the values are met. Example: To target users searching for nexus while browsing from a tablet, create two rules joined by the AND operator.

OR
Conditions using the OR operator will evaluate as true when any of the entered values are met. Example: To target searches on your website for either nexus or chromecast, create a site search rule with both terms in the Value field. You'll notice that OR is automatically added after you press enter.
https://support.google.com/360suite/optimize/answer/6301785?hl=en
Component: Per Face Face-Face adjacency relation

#include <component.h> Inherits T.

It encodes the adjacency of faces through edges; for 2-manifold edges it simply points to the other face, and for non-manifold edges (where more than 2 faces share the same edge) it stores a pointer to the next face of the ring of faces incident on an edge. Note that border faces point to themselves. A NULL pointer is used as a special value to indicate that the FF topology has not been computed. Definition at line 613 of file face/component.h.
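The ring structure described above can be modeled in miniature. This is an illustrative Python sketch (my own toy model, not vcglib's actual C++ API): each face stores, per edge, the next face in the ring of faces sharing that edge plus the matching edge index in that face, and a border face points back to itself.

```python
class Face:
    """Toy model of per-face FF adjacency for a triangle face:
    FFp[i] is the next face in the ring of faces incident on edge i
    (the face itself, for a border edge), and FFi[i] is the index of
    the shared edge within that next face."""
    def __init__(self, name):
        self.name = name
        self.FFp = [None, None, None]
        self.FFi = [None, None, None]

def ring_faces(start, edge):
    """Walk the ring of faces sharing the given edge of `start`.
    A 2-manifold edge yields two faces, a non-manifold edge can
    yield more, and a border edge yields only `start` itself."""
    faces = [start]
    f, e = start.FFp[edge], start.FFi[edge]
    while f is not start:
        faces.append(f)
        f, e = f.FFp[e], f.FFi[e]
    return faces

# Faces a and b sharing a 2-manifold edge (edge 0 of a, edge 1 of b);
# edge 1 of a is a border edge, so it points to a itself.
a, b = Face("a"), Face("b")
a.FFp[0], a.FFi[0] = b, 1
b.FFp[1], b.FFi[1] = a, 0
a.FFp[1], a.FFi[1] = a, 1
```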
http://vcglib.net/classvcg_1_1face_1_1FFAdj.html
Unable to schedule jobs every couple of days

I have set up a single-node Kubernetes cluster, using flannel. Most of the time everything works perfectly fine, but after a few days I've noticed that the cluster reached a stage where it wasn't able to schedule new pods and the pods were stuck in "pending" stage. I then realized this happens after every couple of days. It's a very weird problem and I have no idea what to do.

Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
2m 2m 1 {default-scheduler } Normal Scheduled Successfully assigned dex-1939802596-zt1r3 to superserver-03
1m 2s 21 {kubelet superserver-03} Warning FailedSync Error syncing pod, skipping: failed to "SetupNetwork" for "somepod-1939802596-zt1r3_somenamespace" with SetupNetworkError: "Failed to setup network for pod \"somepod-1939802596-zt1r3_somenamespace(167f8345-faeb-11e6-94f3-0cc47a9a5cf2)\" using network plugins \"cni\": no IP addresses available in network: cbr0; Skipping pod"

Technical details:
kubeadm version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.2074+a092d8e0f95f52", GitCommit:"a092d8e0f95f5200f7ae2cba45c75ab42da36537", GitTreeState:"clean", BuildDate:"2016-12-13T17:03:18Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Kubernetes Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"2017-02-15T06:34:56Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

Started the cluster with these commands:
kubeadm init --pod-network-cidr 10.244.0.0/16 --api-advertise-addresses 192.168.1.200
kubectl taint nodes --all dedicated-
kubectl -n kube-system apply -f

Some syslog logs that may be relevant (I got many of those):
Feb 23 11:07:49 server-03 kernel: [ 155.480669] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Feb 23 11:07:49 server-03 dockerd[1414]:
time="2017-02-23T11:07:49.735590817+02:00" level=warning msg="Couldn't run auplink before unmount /var/lib/docker/aufs/mnt/89bb7abdb946d858e175d80d6e1d2fdce0262af8c7afa9c6ad9d776f1f5028c4-init: exec: \"auplink\": executable file not found in $PATH"
Feb 23 11:07:49 server-03 kernel: [ 155.496599] aufs au_opts_verify:1597:dockerd[24704]: dirperm1 breaks the protection by the permission bits on the lower branch
Feb 23 11:07:49 server-03 systemd-udevd[29313]: Could not generate persistent MAC address for vethd4d85eac: No such file or directory
Feb 23 11:07:49 server-03 kubelet[1228]: E0223 11:07:49.756976 1228 cni.go:255] Error adding network: no IP addresses available in network: cbr0
Feb 23 11:07:49 server-03 kernel: [ 155.514994] IPv6: eth0: IPv6 duplicate address fe80::835:deff:fe4f:c74d detected!
Feb 23 11:07:49 server-03 kernel: [ 155.515380] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 23 11:07:49 server-03 kernel: [ 155.515588] device vethd4d85eac entered promiscuous mode
Feb 23 11:07:49 server-03 kernel: [ 155.515643] cni0: port 34(vethd4d85eac) entered forwarding state
Feb 23 11:07:49 server-03 kernel: [ 155.515663] cni0: port 34(vethd4d85eac) entered forwarding state
Feb 23 11:07:49 server-03 kubelet[1228]: E0223 11:07:49.757001 1228 cni.go:209] Error while adding to cni network: no IP addresses available in network: cbr0
Feb 23 11:07:49 server-03 kubelet[1228]: E0223 11:07:49.757056 1228 docker_manager.go:2201] Failed to setup network for pod "somepod-752955044-58g59_somenamespace(5d6c28e1-f8dd-11e6-9843-0cc47a9a5cf2)" using network plugins "cni": no IP addresses available in network: cbr0; Skipping pod

Yes, that is a very weird problem and a weird stage to be stuck at. Try setting up a cron job to run this script on reboot. There is garbage collection of the pods on docker daemon restart, which should help with your issue.
https://www.edureka.co/community/21095/unable-to-schedule-jobs-every-couple-of-days?show=21097
Transposed - medium

In this challenge we get two files: a text file output containing the encrypted (or rather transformed) flag L{NTP#AGLCSF.#OAR4A#STOL11__}PYCCTO1N#RS.S and a piece of Python code encrypt.py which was used to transform the cleartext flag.

#-*- coding:utf-8 -*-
import random

W = 7
perm = range(W)
random.shuffle(perm)

msg = open("flag.txt").read().strip()
while len(msg) % (2*W):
    msg += "."

for i in xrange(100):
    msg = msg[1:] + msg[:1]
    msg = msg[0::2] + msg[1::2]
    msg = msg[1:] + msg[:1]
    res = ""
    for j in xrange(0, len(msg), W):
        for k in xrange(W):
            res += msg[j:j+W][perm[k]]
    msg = res

print msg

In hindsight this challenge was quite easy and it took me much longer than it should have. Most of the time was spent trying to wrap my head around inverting string operations, such as msg[0::2], which may not be too usual for everyone. So let's step through the code quickly.

Analysis of encryption code

At first, a random permutation of the integers 0 through 6 (e.g. [5, 6, 1, 3, 2, 0, 4]) is created. Then the cleartext flag is read from a text file and padded with . such that the length of the padded flag is at least 14 and divisible by 14 (i.e. 14, 28, 42, ...). After padding the flag, a loop of 100 iterations is run through, carrying out the following steps:

- Move the msg string's first character to the end of the string
- Reorder the string so that the characters at even indices come first, followed by the characters at odd indices
- Again, move the msg string's first character to the end of the string
- The outer for-loop (index j) iterates over the length of the msg string in increments of 7 (i.e. first iteration j = 0, second iteration j = 7, third iteration j = 14, ...). The inner for-loop (index k) then works on chunks of 7 characters (starting at position j) by appending the character at position j + perm[k] to the res string. Here the permutation comes into play, because the character appended to the res string depends on the value at position k in the permutation array.
At the end, said res string will be the new msg string in the next iteration. That's basically it: no key or any other shenanigans involved!

The solution

In order to reconstruct the plaintext flag, the above code needs to be inverted (more or less :). As we don't know the actual permutation which was used during the encryption, we need to brute-force this part (there are 7! = 5040 possible permutations). To do this we put the actual decryption code into a loop which uses a different permutation during each iteration. Now inverting the encryption code:

- We start with the two nested for-loops (indices j and k). There we simply remove the first character of the encrypted flag and place it at the position it initially had in the cleartext string.
- After the nested loops, the last character of the string is moved to position 0.
- Then the string is split in half and both halves are merged in an interleaving manner. For example, let s = "acegbdf", left half u = "aceg" and right half l = "bdf". This would result in s = "abcdefg" after the merging.
- Then, the last character is moved to the beginning of the string.

At the end of the iteration, res is assigned the value of msg in order to get transformed in the next iteration. At the end of each of the 5040 iterations (which themselves consist of 100 transformation iterations), the output is tested to contain the string FLAG{, in order to find the flag a little easier. (Thanks very much to the challenge author who decided to use a proper flag format!) Here is my implementation of the solution. I'm sure that there are more efficient ways to solve this, but for the sake of comprehensibility for the target audience (and myself), I did not want to make it more complex (i.e. in terms of the string operations).
import random
from itertools import permutations

W = 7
perms = list(permutations(range(0, 7)))

for perm in perms:  # all permutations of 0..6
    perm = list(perm)
    msg2 = "L{NTP#AGLCSF.#OAR4A#STOL11__}PYCCTO1N#RS.S"
    res = msg2
    for i in xrange(100):
        msg = [''] * len(res)
        # invert the nested loops
        for j in xrange(0, len(msg), W):
            for k in xrange(W):
                msg[j+perm[k]] = res[0]
                res = res[1:len(res)]
        msg = ''.join(msg)
        # invert this: msg = msg[1:] + msg[:1]
        msg = msg[len(msg)-1:len(msg)] + msg
        msg = msg[:-1]
        # invert this: msg = msg[0::2] + msg[1::2]
        u = msg[0:int(len(msg)/2)]              # first half
        l = msg[int(len(msg)/2):int(len(msg))]  # second half
        r = [''] * (len(u) + len(l))
        r[::2] = u
        r[1::2] = l
        msg = ''.join(r)
        # invert this: msg = msg[1:] + msg[:1]
        msg = msg[len(msg)-1:len(msg)] + msg
        msg = msg[:-1]
        res = msg
    if "FLAG{" in msg:
        print(msg)

And here's the flag:
http://admin-admin.at/qualsjordtunis2018-transposed
Introduction -- Usage Examples

The gregorian date system provides a date programming system based on the Gregorian Calendar. The first introduction of the Gregorian calendar was in 1582 to fix an error in the Julian Calendar. However, many local jurisdictions did not adopt this change until much later. Thus there is potential confusion with historical dates. The implemented calendar is a "proleptic Gregorian calendar" which extends dates back prior to the Gregorian Calendar's first adoption in 1582. The current implementation supports dates in the range 1400-Jan-01 to 9999-Dec-31. Many references will represent dates prior to 1582 using the Julian Calendar, so caution is in order if accurate calculations are required on historic dates. See Calendrical Calculations by Reingold & Dershowitz for more details. Date information from Calendrical Calculations has been used to cross-test the correctness of the Gregorian calendar implementation. All types for the gregorian system are found in namespace boost::gregorian. The library supports a convenience header boost/date_time/gregorian/gregorian_types.hpp that will include all the classes of the library with no input/output dependency. Another header boost/date_time/gregorian/gregorian.hpp will include the types and the input/output code. The class boost::gregorian::date is the primary temporal type for users. If you are interested in learning about writing programs with these types, see the usage examples. A natural expectation when adding a number of months to a date, and then subtracting the same number of months, is to end up exactly where you started. This is most often the result the date_time library provides, but there is one significant exception: the snap-to-end-of-month behavior implemented by the months duration type.
The months duration type may provide unexpected results when the starting day is the 28th, 29th, or 30th in a 31 day month. The month_iterator is not affected by this issue and is therefore included in the examples to illustrate a possible alternative. When the starting date is in the middle of a month, adding or subtracting any number of months will result in a date that is the same day of month (e.g. if you start on the 15th, you will end on the 15th). When a date is the last day of the month, adding or subtracting any number of months will give a result that is also the last day of the month (e.g. if you start on Jan 31st, you will land on: Feb 28th, Mar 31st, etc).

// using months duration type
date d(2005, Nov, 30); // last day of November
d + months(1); // result is last day of December "2005-Dec-31"
d - months(1); // result is last day of October "2005-Oct-31"

// using month_iterator
month_iterator itr(d); // last day of November
++itr; // result is last day of December "2005-Dec-31"
--itr; // back to original starting point "2005-Nov-30"
--itr; // last day of October "2005-Oct-31"

If the start date is the 28th, 29th, or 30th in a 31 day month, the result of adding or subtracting a month may result in the snap-to-end-of-month behavior kicking in unexpectedly. This would cause the final result to be different than the starting date.

// using months duration type
date d(2005, Nov, 29);
d += months(1); // "2005-Dec-29"
d += months(1); // "2006-Jan-29"
d += months(1); // "2006-Feb-28" --> snap-to-end-of-month behavior kicks in
d += months(1); // "2006-Mar-31" --> unexpected result
d -= months(4); // "2005-Nov-30" --> unexpected result, not where we started

// using month_iterator
month_iterator itr(date(2005, Dec, 30));
++itr; // "2006-Jan-30" --> ok
++itr; // "2006-Feb-28" --> snap-to DOES NOT kick in
++itr; // "2006-Mar-30" --> ok
--itr; // "2006-Feb-28" --> ok
--itr; // "2006-Jan-30" --> ok
--itr; // "2005-Dec-30" --> ok, back where we started
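The snap-to-end-of-month rule described above can be reproduced outside of Boost. This is an illustrative Python sketch of the months-duration semantics (my own helper function, using only the standard library, not part of Boost.Date_Time):

```python
import calendar
import datetime

def add_months(d, n):
    """Add n months with boost::gregorian-style snap-to-end-of-month:
    if d is the last day of its month, land on the last day of the
    target month; otherwise keep the same day of month, clamped to
    the target month's length."""
    last = calendar.monthrange(d.year, d.month)[1]
    # Compute target year/month with wrap-around (n may be negative).
    y, m = divmod(d.month - 1 + n, 12)
    y, m = d.year + y, m + 1
    target_last = calendar.monthrange(y, m)[1]
    day = target_last if d.day == last else min(d.day, target_last)
    return datetime.date(y, m, day)

# Reproducing the surprising sequence from the documentation:
d = datetime.date(2005, 11, 29)
d = add_months(d, 1)   # 2005-12-29
d = add_months(d, 1)   # 2006-01-29
d = add_months(d, 1)   # 2006-02-28  --> snap-to-end-of-month kicks in
d = add_months(d, 1)   # 2006-03-31  --> unexpected result
d = add_months(d, -4)  # 2005-11-30  --> not where we started
```

Note how the state becomes "sticky": once a mid-month day lands on an end-of-month (Feb 28th), every later step treats it as end-of-month, which is exactly why the round trip does not return to Nov 29th.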
http://www.boost.org/doc/libs/1_51_0/doc/html/date_time/gregorian.html
/* * functions in this file provide an interface for performing * operations directly on RCS files. */ #include "cvs.h" #include <stdio.h> #include "diffrun.h" #include "quotearg.h" /* This file, rcs.h, and rcs.c, together sometimes known as the "RCS library", are intended to define our interface to RCS files. Whether there will also be a version of RCS which uses this library, or whether the library will be packaged for uses beyond CVS or RCS (many people would like such a thing) is an open question. Some considerations: 1. An RCS library for CVS must have the capabilities of the existing CVS code which accesses RCS files. In particular, simple approaches will often be slow. 2. An RCS library should not use code from the current RCS (5.7 and its ancestors). The code has many problems. Too few comments, too many layers of abstraction, too many global variables (the correct number for a library is zero), too much intricately interwoven functionality, and too many clever hacks. Paul Eggert, the current RCS maintainer, agrees. 3. More work needs to be done in terms of separating out the RCS library from the rest of CVS (for example, cvs_output should be replaced by a callback, and the declarations should be centralized into rcs.h, and probably other such cleanups). 4. To be useful for RCS and perhaps for other uses, the library may need features beyond those needed by CVS. 5. Any changes to the RCS file format *must* be compatible. Many, many tools (not just CVS and RCS) can at least import this format. RCS and CVS must preserve the current ability to import/export it (preferably improved--magic branches are currently a roadblock). See doc/RCSFILES in the CVS distribution for documentation of this file format. On a related note, see the comments at diff_exec, later in this file, for more on the diff library. 
*/ static void RCS_output_diff_options (int, char * const *, const char *, const char *, const char *); /* Stuff to deal with passing arguments the way libdiff.a wants to deal with them. This is a crufty interface; there is no good reason for it to resemble a command line rather than something closer to "struct log_data" in log.c. */ /* First call call_diff_setup to setup any initial arguments. The argument will be parsed into whitespace separated words and added to the global call_diff_argv list. Then, optionally, call call_diff_add_arg for each additional argument that you'd like to pass to the diff library. Finally, call call_diff or call_diff3 to produce the diffs. */ static char **call_diff_argv; static int call_diff_argc; static size_t call_diff_arg_allocated; static int call_diff (const char *out); static int call_diff3 (char *out); static void call_diff_write_output (const char *, size_t); static void call_diff_flush_output (void); static void call_diff_write_stdout (const char *); static void call_diff_error (const char *, const char *, const char *); /* VARARGS */ static void call_diff_add_arg (const char *s) { TRACE (TRACE_DATA, "call_diff_add_arg (%s)", s); run_add_arg_p (&call_diff_argc, &call_diff_arg_allocated, &call_diff_argv, s); } static void call_diff_setup (const char *prog, int argc, char * const *argv) { int i; /* clean out any malloc'ed values from call_diff_argv */ run_arg_free_p (call_diff_argc, call_diff_argv); call_diff_argc = 0; /* put each word into call_diff_argv, allocating it as we go */ call_diff_add_arg (prog); for (i = 0; i < argc; i++) call_diff_add_arg (argv[i]); } /* Callback function for the diff library to write data to the output file. This is used when we are producing output to stdout. */ static void call_diff_write_output (const char *text, size_t len) { if (len > 0) cvs_output (text, len); } /* Call back function for the diff library to flush the output file. This is used when we are producing output to stdout. 
*/ static void call_diff_flush_output (void) { cvs_flushout (); } /* Call back function for the diff library to write to stdout. */ static void call_diff_write_stdout (const char *text) { cvs_output (text, 0); } /* Call back function for the diff library to write to stderr. */ static void call_diff_error (const char *format, const char *a1, const char *a2) { /* FIXME: Should we somehow indicate that this error is coming from the diff library? */ error (0, 0, format, a1, a2); } /* This set of callback functions is used if we are sending the diff to stdout. */ static struct diff_callbacks call_diff_stdout_callbacks = { call_diff_write_output, call_diff_flush_output, call_diff_write_stdout, call_diff_error }; /* This set of callback functions is used if we are sending the diff to a file. */ static struct diff_callbacks call_diff_file_callbacks = { NULL, NULL, call_diff_write_stdout, call_diff_error }; static int call_diff (const char *out) { call_diff_add_arg (NULL); if (out == RUN_TTY) return diff_run( call_diff_argc, call_diff_argv, NULL, &call_diff_stdout_callbacks ); else return diff_run( call_diff_argc, call_diff_argv, out, &call_diff_file_callbacks ); } static int call_diff3 (char *out) { if (out == RUN_TTY) return diff3_run (call_diff_argc, call_diff_argv, NULL, &call_diff_stdout_callbacks); else return diff3_run (call_diff_argc, call_diff_argv, out, &call_diff_file_callbacks); } /* Merge revisions REV1 and REV2. */ int RCS_merge (RCSNode *rcs, const char *path, const char *workfile, const char *options, const char *rev1, const char *rev2) { char *xrev1, *xrev2; char *tmp1, *tmp2; char *diffout = NULL; int retval; if (options != NULL && options[0] != '\0') assert (options[0] == '-' && options[1] == 'k'); cvs_output ("RCS file: ", 0); cvs_output (rcs->print_path, 0); cvs_output ("\n", 1); /* Calculate numeric revision numbers from rev1 and rev2 (may be symbolic). FIXME - No they can't. Both calls to RCS_merge are passing in numeric revisions. 
*/ xrev1 = RCS_gettag (rcs, rev1, 0, NULL); xrev2 = RCS_gettag (rcs, rev2, 0, NULL); assert (xrev1 && xrev2); /* Check out chosen revisions. The error message when RCS_checkout fails is not very informative -- it is taken verbatim from RCS 5.7, and relies on RCS_checkout saying something intelligent upon failure. */ cvs_output ("retrieving revision ", 0); cvs_output (xrev1, 0); cvs_output ("\n", 1); tmp1 = cvs_temp_name(); if (RCS_checkout (rcs, NULL, xrev1, rev1, options, tmp1, NULL, NULL)) { cvs_outerr ("rcsmerge: co failed\n", 0); exit (EXIT_FAILURE); } cvs_output ("retrieving revision ", 0); cvs_output (xrev2, 0); cvs_output ("\n", 1); tmp2 = cvs_temp_name(); if (RCS_checkout (rcs, NULL, xrev2, rev2, options, tmp2, NULL, NULL)) { cvs_outerr ("rcsmerge: co failed\n", 0); exit (EXIT_FAILURE); } /* Merge changes. */ cvs_output ("Merging differences between ", 0); cvs_output (xrev1, 0); cvs_output (" and ", 0); cvs_output (xrev2, 0); cvs_output (" into ", 0); cvs_output (workfile, 0); cvs_output ("\n", 1); /* Remember that the first word in the `call_diff_setup' string is used now only for diagnostic messages -- CVS no longer forks to run diff3. */ diffout = cvs_temp_name(); call_diff_setup ("diff3", 0, NULL); call_diff_add_arg ("-E"); call_diff_add_arg ("-am"); call_diff_add_arg ("-L"); call_diff_add_arg (workfile); call_diff_add_arg ("-L"); call_diff_add_arg (xrev1); call_diff_add_arg ("-L"); call_diff_add_arg (xrev2); call_diff_add_arg ("--"); call_diff_add_arg (workfile); call_diff_add_arg (tmp1); call_diff_add_arg (tmp2); retval = call_diff3 (diffout); if (retval == 1) cvs_outerr ("rcsmerge: warning: conflicts during merge\n", 0); else if (retval == 2) exit (EXIT_FAILURE); if (diffout) copy_file (diffout, workfile); /* Clean up. 
*/ { int save_noexec = noexec; noexec = 0; if (unlink_file (tmp1) < 0) { if (!existence_error (errno)) error (0, errno, "cannot remove temp file %s", tmp1); } free (tmp1); if (unlink_file (tmp2) < 0) { if (!existence_error (errno)) error (0, errno, "cannot remove temp file %s", tmp2); } free (tmp2); if (diffout) { if (unlink_file (diffout) < 0) { if (!existence_error (errno)) error (0, errno, "cannot remove temp file %s", diffout); } free (diffout); } free (xrev1); free (xrev2); noexec = save_noexec; } return retval; } /* Diff revisions and/or files. OPTS controls the format of the diff (it contains options such as "-w -c", &c), or "" for the default. OPTIONS controls keyword expansion, as a string starting with "-k", or "" to use the default. REV1 is the first revision to compare against; it must be non-NULL. If REV2 is non-NULL, compare REV1 and REV2; if REV2 is NULL compare REV1 with the file in the working directory, whose name is WORKFILE. LABEL1 and LABEL2 are default file labels, and (if non-NULL) should be added as -L options to diff. Output goes to stdout. Return value is 0 for success, -1 for a failure which set errno, or positive for a failure which printed a message on stderr. This used to exec rcsdiff, but now calls RCS_checkout and diff_exec. An issue is what timezone is used for the dates which appear in the diff output. rcsdiff uses the -z flag, which is not presently processed by CVS diff, but I'm not sure exactly how hard to worry about this--any such features are undocumented in the context of CVS, and I'm not sure how important to users. 
*/ int RCS_exec_rcsdiff (RCSNode *rcsfile, int diff_argc, char * const *diff_argv, const char *options, const char *rev1, const char *rev1_cache, const char *rev2, const char *label1, const char *label2, const char *workfile) { char *tmpfile1 = NULL; char *tmpfile2 = NULL; const char *use_file1, *use_file2; int status, retval; cvs_output ("\ ===================================================================\n\ RCS file: ", 0); cvs_output (rcsfile->print_path, 0); cvs_output ("\n", 1); /* Historically, `cvs diff' has expanded the $Name keyword to the empty string when checking out revisions. This is an accident, but no one has considered the issue thoroughly enough to determine what the best behavior is. Passing NULL for the `nametag' argument preserves the existing behavior. */ cvs_output ("retrieving revision ", 0); cvs_output (rev1, 0); cvs_output ("\n", 1); if (rev1_cache != NULL) use_file1 = rev1_cache; else { tmpfile1 = cvs_temp_name(); status = RCS_checkout (rcsfile, NULL, rev1, NULL, options, tmpfile1, NULL, NULL); if (status > 0) { retval = status; goto error_return; } else if (status < 0) { error( 0, errno, "cannot check out revision %s of %s", rev1, rcsfile->path ); retval = 1; goto error_return; } use_file1 = tmpfile1; } if (rev2 == NULL) { assert (workfile != NULL); use_file2 = workfile; } else { tmpfile2 = cvs_temp_name (); cvs_output ("retrieving revision ", 0); cvs_output (rev2, 0); cvs_output ("\n", 1); status = RCS_checkout (rcsfile, NULL, rev2, NULL, options, tmpfile2, NULL, NULL); if (status > 0) { retval = status; goto error_return; } else if (status < 0) { error (0, errno, "cannot check out revision %s of %s", rev2, rcsfile->path); return 1; } use_file2 = tmpfile2; } RCS_output_diff_options (diff_argc, diff_argv, rev1, rev2, workfile); status = diff_exec (use_file1, use_file2, label1, label2, diff_argc, diff_argv, RUN_TTY); if (status >= 0) { retval = status; goto error_return; } else if (status < 0) { error (0, errno, "cannot diff %s and %s", 
use_file1, use_file2); retval = 1; goto error_return; } error_return: { /* Call CVS_UNLINK() below rather than unlink_file to avoid the check * for noexec. */ if( tmpfile1 != NULL ) { if( CVS_UNLINK( tmpfile1 ) < 0 ) { if( !existence_error( errno ) ) error( 0, errno, "cannot remove temp file %s", tmpfile1 ); } free( tmpfile1 ); } if( tmpfile2 != NULL ) { if( CVS_UNLINK( tmpfile2 ) < 0 ) { if( !existence_error( errno ) ) error( 0, errno, "cannot remove temp file %s", tmpfile2 ); } free (tmpfile2); } } return retval; } /* Show differences between two files. This is the start of a diff library. Some issues: * Should option parsing be part of the library or the caller? The former allows the library to add options without changing the callers, but it causes various problems. One is that something like --brief really wants special handling in CVS, and probably the caller should retain some flexibility in this area. Another is online help (the library could have some feature for providing help, but how does that interact with the help provided by the caller directly?). Another is that as things stand currently, there is no separate namespace for diff options versus "cvs diff" options like -l (that is, if the library adds an option which conflicts with a CVS option, it is trouble). * This isn't required for a first-cut diff library, but if there would be a way for the caller to specify the timestamps that appear in the diffs (rather than the library getting them from the files), that would clean up the kludgy utime() calls in patch.c. Show differences between FILE1 and FILE2. Either one can be DEVNULL to indicate a nonexistent file (same as an empty file currently, I suspect, but that may be an issue in and of itself). OPTIONS is a list of diff options, or "" if none. At a minimum, CVS expects that -c (update.c, patch.c) and -n (update.c) will be supported. Other options, like -u, --speed-large-files, &c, will be specified if the user specified them. 
OUT is a filename to send the diffs to, or RUN_TTY to send them to stdout. Error messages go to stderr. Return value is 0 for success, -1 for a failure which set errno, 1 for success (and some differences were found), or >1 for a failure which printed a message on stderr. */ int diff_exec (const char *file1, const char *file2, const char *label1, const char *label2, int dargc, char * const *dargv, const char *out) { TRACE (TRACE_FUNCTION, "diff_exec (%s, %s, %s, %s, %s)", file1, file2, label1, label2, out); #ifdef PRESERVE_PERMISSIONS_SUPPORT /* If either file1 or file2 are special files, pretend they are /dev/null. Reason: suppose a file that represents a block special device in one revision becomes a regular file. CVS must find the `difference' between these files, but a special file contains no data useful for calculating this metric. The safe thing to do is to treat the special file as an empty file, thus recording the regular file's full contents. Doing so will create extremely large deltas at the point of transition between device files and regular files, but this is probably very rare anyway. There may be ways around this, but I think they are fraught with danger. -twp */ if (preserve_perms && strcmp (file1, DEVNULL) != 0 && strcmp (file2, DEVNULL) != 0) { struct stat sb1, sb2; if (lstat (file1, &sb1) < 0) error (1, errno, "cannot get file information for %s", file1); if (lstat (file2, &sb2) < 0) error (1, errno, "cannot get file information for %s", file2); if (!S_ISREG (sb1.st_mode) && !S_ISDIR (sb1.st_mode)) file1 = DEVNULL; if (!S_ISREG (sb2.st_mode) && !S_ISDIR (sb2.st_mode)) file2 = DEVNULL; } #endif /* The first arg to call_diff_setup is used only for error reporting. 
*/ call_diff_setup ("diff", dargc, dargv); if (label1) call_diff_add_arg (label1); if (label2) call_diff_add_arg (label2); call_diff_add_arg ("--"); call_diff_add_arg (file1); call_diff_add_arg (file2); return call_diff (out); } /* Print the options passed to DIFF, in the format used by rcsdiff. The rcsdiff code that produces this output is extremely hairy, and it is not clear how rcsdiff decides which options to print and which not to print. The code below reproduces every rcsdiff run that I have seen. */ static void RCS_output_diff_options (int diff_argc, char * const *diff_argv, const char *rev1, const char *rev2, const char *workfile) { int i; cvs_output ("diff", 0); for (i = 0; i < diff_argc; i++) { cvs_output (" ", 1); cvs_output (quotearg_style (shell_quoting_style, diff_argv[i]), 0); } cvs_output (" -r", 3); cvs_output (rev1, 0); if (rev2) { cvs_output (" -r", 3); cvs_output (rev2, 0); } else { assert (workfile != NULL); cvs_output (" ", 1); cvs_output (workfile, 0); } cvs_output ("\n", 1); }
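The return-value convention documented above for diff_exec (0 for no differences, 1 when differences were found, negative or greater than 1 on failure) can be mirrored in a few lines. The following is an illustrative Python analogue using difflib; it is not part of CVS, and the function name merely echoes the C function it sketches:

```python
import difflib

def diff_exec(file1, file2, label1=None, label2=None, out=None):
    """Minimal analogue of CVS's diff_exec contract: return 0 when the
    files are identical and 1 when differences were found. I/O errors
    simply propagate as exceptions (CVS maps failures to -1 or >1)."""
    with open(file1) as f:
        lines1 = f.readlines()
    with open(file2) as f:
        lines2 = f.readlines()
    # Labels play the role of diff's -L options: they replace the
    # file names in the diff header.
    diff = list(difflib.unified_diff(
        lines1, lines2,
        fromfile=label1 or file1, tofile=label2 or file2))
    if out is None:
        # Equivalent of RUN_TTY: send the diff to stdout.
        print("".join(diff), end="")
    else:
        with open(out, "w") as f:
            f.writelines(diff)
    return 1 if diff else 0
```

The design choice mirrored here is that "differences found" is a success status distinct from plain success, which is why callers like RCS_exec_rcsdiff treat status 0 and 1 alike and only report statuses outside that range as errors.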
http://opensource.apple.com/source/cvs/cvs-42/cvs/src/rcscmds.c
Dual VSAN cache tier failure + dead vCenter
m13316 | Jul 28, 2019 3:26 PM

Hello all, looking for your expert advice on the best way to proceed with a really ugly failure scenario. We recently physically moved our lab to a new facility and as we were bringing things back online, found that we had lost a flash cache disk on two out of our five servers. To complicate matters further, we found that vCenter was sitting on a local disk outside of vSAN and it appears to have experienced some corruption on the disk, because it fails to start with an "Unable to enumerate all disks" error. In researching that further, five of the twelve vmdks report an I/O error when trying to read them and the flat file is nowhere to be found. I thought we might be able to re-build the vmdks, but it seems without the flat file we are out of luck (right?). Unfortunately there is no backup. With regard to vSAN, my understanding is that when the cache disk fails the entire disk group is removed from service. That is what appears to be the case here. The failed flash disks have been replaced on both servers and there is also another flash disk on each that can be used. However it appears that the disk groups are still not in service, because in the local ESXi console all of the disks are reported as not operational (see attached picture). Also many of our VMs are currently showing up in the local console as Invalid. I think the reason for that is probably because there is not enough storage on the remaining three servers to accommodate storage for all of the VMs. What is the best way to recover from this multiple-failure scenario while preserving our data? I am thinking of creating a new vCenter, putting all hosts in maintenance mode and then adding them to the new vCenter. Then I will replace the failed cache disk on each server using the new vCenter. Would that work, or is there a better/safer strategy?
Also, what is the procedure to replace the failed cache disks in vSAN to bring the disk groups back into service without losing data? All hosts and vCenter are running version 6.5. Thanks, Matt

Attachment: disks_with_issues.jpg (565.8 K)

1. Re: Dual VSAN cache tier failure + dead vCenter
TheBobkin | Jul 30, 2019 2:04 PM (in response to m13316)

Hello Matt, Welcome to Communities, but also sorry that your first time posting here is in a bad situation. Is dedupe & compression enabled on this cluster? If so then potentially the Disk-Group failed due to a Capacity-tier device failing. What exact cause was determined for the failure of the Cache-tier devices? e.g. did you see Medium errors (0x3 0x11) for these devices in vmkernel.log, or some other cause of failure of the Disk-Group when first initialising at boot? The data on a Disk-Group is not going to become accessible by replacing a failed Cache-tier device - this will merely allow creation of a new (blank) Disk-Group with the original Capacity-tier devices. If you haven't determined the cause of the failure and/or there is anything that can be done to remediate it AND you haven't deleted the old Disk-Groups, then I would advise putting the failed Cache-tier devices back in their respective servers and trying to figure out the problem(s). The invalid VMs are likely so because their namespaces are inaccessible due to the double failure. It is technically possible that some vmdk Objects from these VMs may exist on storage and be accessible but have no accessible descriptors (because they were in the now inaccessible namespace) - there are means of recreating these descriptors but it can be time consuming (these stray vmdks should now also appear as Unassociated Objects in RVC vsan.obj_status_report -t).
If neither of the Disk-Groups can be recovered then unfortunately you will be looking at restoring from back-up or rebuilding VMs, though if you have anything extremely important to the business stored here and inaccessible, then consider a data recovery specialist. As for the Local VMFS vCenter issues - if the vCenter wasn't heavily customised, didn't plug in to a dozen different products, didn't manage a huge detailed inventory and manage a large number of hosts: recreate it, even as a temporary measure. Conversely, If the vCenter holds data and configurations that the business cannot afford to lose then consider engaging GSS and/or a VMFS specialist e.g. continuum Bob
https://communities.vmware.com/message/2876885
10 October 2012 08:55 [Source: ICIS news]

MELBOURNE (ICIS)--The fire, which was caused by an explosion at the hydrogenation unit of the plant's fuel oil refinery at 4.32am (21:32 GMT) on 10 October, has been extinguished, the source said.

The phenol/acetone plant, which has a phenol capacity of 200,000 tonnes/year and an acetone capacity of 120,000 tonnes/year, did not sustain any damage from the fire, the source said. However, the company's phenol/acetone sales were temporarily halted, the source said.

Shiyou Chemical is a subsidiary of Kingboard Chemical Holdings, which operates a 200,000 tonne/year phenol/acetone plant at Huizhou in
http://www.icis.com/Articles/2012/10/10/9602547/chinas-shiyou-chemical-shuts-yangzhou-phenolacetone.html
epo
"epo" is an advanced archiving system for Windows platforms. It integrates tightly into Explorer through its shell namespace extension and offers very easy-to-use archiving features.

DIOWave Visual Storage
DIOWave Visual Storage is a Web server which displays DICOM images. The look and feel of the Web pages is similar to imaging workstations.

dcrteam
A suite of diverse free Win32 tools intended to help increase the safety and ergonomics of internet users in a simple and comfortable way.

JMultiGraph
JMultiGraph is an Open Source project for multi-dimensional graphs. It supports multi fan-in and fan-out edges as well as several cutting-edge graph algorithms based on state-of-the-art research developed at technical universities.

Open J# Compiler
Java language compiler for the CLI (.NET Framework).

Disney Mobile Services
Services geared towards the use of mobile devices by guests inside Disney parks for sharing real-time information with others in the parks. The application provides services for real-time sharing of park status updates, photos, and wait times.

Object PeepHole
ObjectPeepHole is a .NET utility to ease the inspection of classes. It is especially effective for unit tests of GUI applications.
https://sourceforge.net/directory/natlanguage%3Ajapanese/language%3Acsharp/?sort=update&page=3
On Mar 6, 12:49 pm, "Martin Unsal" <martin. > wrote:

Martin, I'm still not clear on what your problem is or why you don't like "from foo import bar". FWIW, our current project is about 330,000 lines of Python code. I do a ton of work in the interpreter: I'll often edit code and then send a few lines over to the interpreter to be executed. For simple changes, reload() works fine; for more complex cases we have a reset() function to clear out most of the namespace and re-initialize. I don't really see how reload could be expected to guess, in general, what we'd want reloaded and what we'd want kept, so I have a hard time thinking of it as a language problem.
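The edit-then-reload workflow described above can be sketched with Python 3's importlib, where the old built-in reload() now lives. The module name demo_mod and its contents are invented for illustration:

```python
import importlib
import os
import sys
import tempfile
import time

# Build a throwaway module on disk so reload() has something to re-read.
workdir = tempfile.mkdtemp()
sys.path.insert(0, workdir)
path = os.path.join(workdir, "demo_mod.py")

with open(path, "w") as f:
    f.write("VALUE = 1\n")

import demo_mod
print(demo_mod.VALUE)  # 1

# Simulate an edit in your editor, then push the change into the
# running interpreter.  Bumping the mtime ensures any cached bytecode
# is treated as stale.
with open(path, "w") as f:
    f.write("VALUE = 2  # edited\n")
os.utime(path, (time.time() + 10, time.time() + 10))

importlib.invalidate_caches()
demo_mod = importlib.reload(demo_mod)
print(demo_mod.VALUE)  # 2
```

Note that reload() re-executes the module in place, so names another module bound via `from demo_mod import VALUE` keep pointing at the old object; that gap is exactly why the poster's reset() helper exists.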
https://mail.python.org/pipermail/python-list/2007-March/421369.html
Important: Please read the Qt Code of Conduct

A little help linking with a library...

Hi All, I'm trying to figure out what is wrong. I'm using Qt 5.5.1 with Visual Studio 2013 32bit. A popular vendor of IP serial devices is Moxa (). They provide a free admin suite to control the devices; in addition they have a .h, .lib, and .dll to talk to the devices. Inside their .h they surround the function names with extern "C":

#ifdef __cplusplus
extern "C" {
#endif
// Server Control
int NSWINAPI nsio_init();
int NSWINAPI nsio_end();

NSWINAPI is defined as nothing, so the function declarations should be: int nsio_init();

Using dumpbin.exe on the library file gives the following public symbols:

61 public symbols
BE2 __IMPORT_DESCRIPTOR_IPSerial
E12 __NULL_IMPORT_DESCRIPTOR
F4A IPSerial_NULL_THUNK_DATA
1716 __imp__nsio_init@0
1716 _nsio_init@0
15D2 __imp__nsio_end@0
15D2 _nsio_end@0

Obviously I won't bore you all with all 61 symbols! :-) Anyway, as you can see, at 1716 is _nsio_init, yet the .h file declares it as int nsio_init(). I'm puzzled: is this normal? In my .pro file I link to the libraries as follows:

win32: LIBS += -L$$PWD/libs/ -lIPSerial -lftd2xx
INCLUDEPATH += $$PWD/libs
DEPENDPATH += $$PWD/libs
win32:!win32-g++: PRE_TARGETDEPS += $$PWD/libs/ftd2xx.lib $$PWD/libs/IPSerial.lib

As a side note, you can see I link with ftd2xx.lib. That linking works fine and function calls are resolved. It looks the same when dumped:

185 public symbols
28DE __IMPORT_DESCRIPTOR_FTD2XX
2B04 __NULL_IMPORT_DESCRIPTOR
2C3A FTD2XX_NULL_THUNK_DATA
3CAC _FT_Open@8
3CAC __imp__FT_Open@8
2D8A _FT_Close@4
2D8A __imp__FT_Close@4

So I'm guessing the underscores are expected.

The issue is that after running qmake and rebuilding I get:

io_serialiodevice.obj:-1: error: LNK2019: unresolved external symbol _nsio_init referenced in function "public: bool __thiscall io_serialIODevice::setSerialDevice(class QString &)" (?setSerialDevice@io_serialIODevice@@QAE_NAAVQString@@@Z)

I fiddled around a bit and added:

#ifndef NSWINAPI
#define NSWINAPI __declspec(dllimport)
#endif

since the ftd2xx.h file does so for its library, and now I get:

io_serialiodevice.obj:-1: error: LNK2019: unresolved external symbol __imp__nsio_init referenced in function "public: bool __thiscall io_serialIODevice::setSerialDevice(class QString &)" (?setSerialDevice@io_serialIODevice@@QAE_NAAVQString@@@Z)

While this seems closer, compare it with ftd2xx: if I remove that lib I see this:

io_serialiodevice.obj:-1: error: LNK2019: unresolved external symbol __imp__FT_OpenEx@12 referenced in function "public: bool __thiscall io_serialIODevice::open(class QFlags<enum QIODevice::OpenModeFlag>)" (?open@io_serialIODevice@@QAE_NV?$QFlags@W4OpenModeFlag@QIODevice@@@@@Z)

So the ftd2xx import has the @12 decoration whereas the one for IPSerial does not. I'm just wondering if I'm missing something simple. I know what the DLL function calls look like as I've used them from C#, so I might try just creating my own lib, but I'm hoping to avoid that task. Any help or insight would be appreciated.

- mrjj (Lifetime Qt Champion), last edited by mrjj:
Hi. Maybe a stupid question, but are the lib and DLL files made for 2013 32bit? The @ and numbers are called name mangling, as you probably know. With C++, the LIB and DLL must be compiled with the exact same version of Visual Studio, else it won't work (as far as I know).

Hi mrjj, thanks for the reply. The library is 32 bit, and yes on the name mangling - I get that, and hence the extern "C" stuff. I don't believe you are entirely correct on the same-version-of-Visual-Studio point, though. I could be wrong, but for example this same library can be linked and used with Delphi 32-bit programs, versions 7 through 2010. Also the DLL has sample code to import into C# and that works just fine too, but I think the C# import bypasses the .lib. I'm beginning to wonder if the .lib is somehow faulty. I have a query in with Moxa to see what they say.

- mrjj (Lifetime Qt Champion), last edited by:
Hi. Well, now you mention it, I have also used DLLs with Delphi :) In the old days, with Visual C++, one could create a .lib file from a DLL. I wonder if that is still possible in 2015?

- kshegunov (Qt Champions 2017), last edited by kshegunov:
@SysTech Hello. If I get this right you're using DLLs, correct?

win32:!win32-g++: PRE_TARGETDEPS += $$PWD/libs/ftd2xx.lib $$PWD/libs/IPSerial.lib

Then why this line in the .pro file? If memory serves me, you should do that only if you want to link statically, which would not be true in your case. As you've correctly noted, a C-linkage function should not be dependent on the compiler used, since the symbols don't have name decorations. Maybe try to comment out the PRE_TARGETDEPS in your project file, and try a full rebuild? Kind regards.

It seems if I remove that line that it cannot find the libs to link. I could be wrong; I copied that line from another project that is working with ftd2xx and some other libs. It seems this IPSerial.lib is the problem. I will give it a try! Thanks!

- kshegunov (Qt Champions 2017), last edited by:
@SysTech Hello. It's been a long time since I've actually used Windows, but try to set the paths properly and I think it should work. On Linux symbols are resolved at runtime and the information is kept in the ELF header, so there we don't have .lib files (which is not always the better way, though). I believe on Windows you only need to properly set up the .lib files' paths and names for the -l and -L switches to work. Kind regards.
https://forum.qt.io/topic/62399/a-little-help-linking-with-a-library
CRAWLING IN MY SKIN EDITION PREVIOUS THREAD AT >>51562843 WHAT ARE YOU WORKING ON /G/ >2015 >not using nvforth THESE WOUNDS THEY WILL NOT HEAL >comic sans >picture speaks to itself >>51568547 do i need to memify this?:~$ ./lolisp meme.cjrandom sample. ([][])([][])([]([[[][]]]()))([]([[[][]]]()))([]((()())))([](*()()))([]([[[][]][ ]](((<)(<)))))([]([[]]((()())))) Meme: 111 Total: 749 Total meme percent: 118.55% >>51568507 Doing my programming assigning which is basically practice with bst and recursion. That feeling when you actually understand a concept is awesome. Trying to figure out why in the fuck my book says the answer should be 406 ft/s when everything I've tried spits out 65 ft/s... >>51568555 no u > port2 = port1 + 1 >TypeError: Can't convert 'int' object to str implicitly >python It's like 1 am, Can someone explain this? How do I convert it explicitly? It's simple addition, shit. Man. >>51568730 port2 = port1 + str(1) >>51568730 int(port1) + 1 >>51568730 >python >TypeError >>51568741 Oh disregard this, I can't read... unless you want integers of course... Is there any where with good DP exercises? I desperately need more practice in Dynamic Programming. >>51568572 im gonna post it on github in a little bit so you can memify the code to the interpreter (lolisp). ill post the code to the entire standard library when im done with it so that you can memify that too~ >>51568773 thank you! I look forward to it. Does programming ever get stressful for you /dpt/? What do you do to relax? >>51568730 get familiar with str() and int() >>51568507 >relying on print statements instead of unit tests >>51568943 please don't be sarcastic. it's friday, let's all be friendly. let's hug? im not that fag op, i mean you and i. >>51568856 Entertainment. Videogames, movies, series, books, anime etc. Currently I'm wondering if I should start reading Spice and Wolf LN's from 6 or if I should start reading the Assasins Apprentice by Robin Hobb. 
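The TypeError being discussed comes from mixing str and int; Python never converts between them implicitly. A minimal sketch of both fixes suggested in the replies (the port value is invented):

```python
port1 = "8080"              # e.g. a port read from input/config, so it's a str
try:
    port2 = port1 + 1       # str + int: TypeError, no implicit conversion
except TypeError as e:
    print(e)

port2 = int(port1) + 1      # arithmetic: convert the string to int first
label = port1 + str(1)      # concatenation: convert the number to str instead
print(port2)                # 8081
print(label)                # 80801
```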
Created a grade book where the teacher can enter a name of a student, submit 4 test scores, then the program will get the average and the grade depending on those averages. >>51568943 Print statements are for debugging, unit tests are for avoiding bugs. Both have their uses. >>51568943 >relying on unit tests >not being able to think for yourself >>51569074 >unit tests are for avoiding bugs kek Creating the repo for my forth now. What should I call it (dubs decides)? >>51568781 stdlib so far:: not if -1 else 0 then ; : or if else drop -1 then ; : and if drop 0 else then ; : xor not if not else then ; : <> xor ; : sep '| ; : sep? sep = ; : print dup . ; : print* dup sep? if . print* else drop then ; : drop* sep? if drop else drop drop* then ; >>51568894 >>51568746 >>51568741 >>51568737 >>51568730 What now? >>51568507 Teaching myself C++, I'm having a lot of fun with it so far. >>51569113 Says every bug magnet ever. Unless you're programming in Haskell or something where you provide a formal proof of your program then you should use unit tests. >>51569114 What was incorrect with what I said? >>51569136 >>51569131 ans forth or/and/xor are bitwise, not logical. >>51569149 >this is what javascript users actually believe >>51569195 I don't program in javascript, in fact I hate everything that has dynamic typing. I'm not a memory management fag, so I prefer shit like C# >>51569194 hm ill add those too later on.bor band bxor i guess? or i can just switch the existing ones to have a "l" prefix. although i'm not really trying to follow standards 100% friendly reminder that the vast majority of the population is stupid as hell and you are one of those people >>51569142 can I have ur wallppr? :D I've never really given python a proper chance, but it's kind of great for doing quick work. 
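The grade-book logic described above (four scores in, average and letter grade out) can be sketched like this; the names, cutoffs, and dict layout are assumptions, not the poster's actual code:

```python
def average(scores):
    """Mean of a list of test scores."""
    return sum(scores) / len(scores)

def letter_grade(avg):
    # Assumed 90/80/70/60 cutoffs; adjust to taste.
    for cutoff, letter in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if avg >= cutoff:
            return letter
    return "F"

gradebook = {"Alice": [88, 92, 79, 95]}   # student -> four test scores
for name, scores in gradebook.items():
    avg = average(scores)
    print(name, avg, letter_grade(avg))   # Alice 88.5 B
```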
Trying to program a concept just to learn it or trying to do exercises in algorithm without having to think too much about the language limitations.string = "itwasthebestoftimes" validwords = ["i","it", "was","the","best","of","times","time","as"] storedWords = [] def dp(st): for i in range(0, len(string)+1): for j in range(i, len(string)+1): print string[i:j] if string[i:j] in validwords: storedWords.append(string[i:j]) dp(string) print(storedWords) I'm trying to learn Qt. First I tried out a Python wrapper for it and it was a total waste of time as there is absolutely no doc and good luck figuring out what maps to what. Then I reluctantly tried out C++ and it's even worse. What IDE should I be using? VS doesn't even work with the regular version of Qt, had to get a different version specifically for microshit and I can't even generate a new project without errors out the ass. I can't be assed with manually compiling and linking shit, I want something that just werks so I can start learning Qt. Anyone have any tips? Would writting something to edit metadata on an mp3 file be difficult? I basically want to run through my library, parse the year out on the directory names and use tag the mp3 files inside with that year. I don't know how to deal with directories or mp3 metadata though, so some utilities are helpful on where should I start to learn what I need to learn. I'd be using C preferably, or Python. >>51569297 oh and I'm on linux >>51569293 use qtcreator >>51569136 Regex findall returns a list of matched strings, or a list of tuples if your regex contains multiple groups. You can't convert an entire list to an integer, but you can iterate over the matches. In this case your list is ['56630'], so you have to printint(yourmom[0]) >>51569343 Tried that with the msvc version and it couldn't detect the compiler. Might go back and get a different version of Qt later but I'm pretty salty right now. 
Is it stealing if I go to the app store, see the description on what the application does, and do those features with c++ coding? Beginning coder, but I want to make applications that cost money at the application store so that I don't have to buy it. >>51569297 That's a good candidate for python. Look up eyeD3. It's a library for python that allows you to work with mp3 files very easily and intuitively. "python mp3" is a good search on Google as well (try out any stackoverflow result that may pop up) Working with files and directories in python is not painful at all. It could take you less than 30 minutes to get all you need to know about all that. Lastly I'd suggest using Pycharm cause it makes it easier to 'install' libraries, plus it's a really good IDE (and it's free) Summing up: install pycharm, use it to get eyeD3, start coding. Have fun >>51569393 thank you >>51569388 No, it's not. It'd be if you then were to sell it >>51569365 That outputs [ ? I'm sorry, anons. I have no idea what I'm doing. >>51569142 >Crawling.c >C++ >>51569279 >>51569142 Mate don't listen to the other people. The only wrong you're doing is not using vim instead of gedit or whatever text editor that is. >>51569437pattern = "\d{4,6}" meme = re.compile(pattern) yourmom = meme.findall(con) print(yourmom[0]) >>51569387 What do you have installed? I have vs2013 and I downloaded qt5.5.1 >>51569498TypeError: expected string or buffer Would it be easier if I posted the whole thing? >>51569551 yes >>51569558 Prepare yourself, anon. >>51569521 vs2015 community and qt5.5.1 msvc >>51569611 seems like there isn't an official build for 2015 (expected by 5.6.0), so that's probably why it had issues detecting I posted this in ded thread edition: so perl is neat, but i'm now learning that nobody uses it anymore. what are the "hip" alternatives, and where do they shine. python and ruby come to mind, but i know little about them. Basically, what scripting language should i really focus on learning? 
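Pulling together the findall() advice from the exchange above: findall() returns a list of matched strings (so index into it before calling int()), or a list of tuples when the pattern contains multiple groups, and it raises "expected string or buffer" if handed a non-string. The sample text and pattern here are invented:

```python
import re

text = "local port 56630, remote port 443"

# One group or none: findall returns a list of matched strings.
ports = re.findall(r"\d{2,6}", text)
print(ports)             # ['56630', '443']
print(int(ports[0]))     # 56630 -- convert one element, not the list

# Multiple groups: findall returns a list of tuples instead.
pairs = re.findall(r"(local|remote) port (\d+)", text)
print(pairs)             # [('local', '56630'), ('remote', '443')]
```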
>>51569798 Learn python >>51569802 follow up question: best books/resources for an "intermediate" programmer on python? preferably terse >>51569551 Okay, I think I found an easier way to do it. You're trying to get the port code, right? The module that you're using returns a list when you call the connections(type='udp4'), returns a list of pconn objects with the following fields:fd family type laddr raddr status So you can do something like:con = d.connections(kind='udp4') for connection in con: print connection.laddr[1] And it will print what you want without having to do regex bullshit. >>51569798 python is the only answer to your question Any little challenges people have for intermediate programmers? >>51569865 leetcode.com How do you usually name your variables, /g/? I tend to use a mix of both camelCase and underscores probably because I have autism.myNumber // good readability, look fine oldFag // esthetically terrible! senileFag // e and F look better old_fag // look okay old_fag_holy_shit_what_are_you_doing // these underscores are annoying to type butItCouldBeFarWorse // readability is horrible, doesn't look good Ask your beloved programming literate anything. asp.net MVC or jsp or my newest web app not interested in any other unemployable meme platforms I detest programming, at every fucking step, I find bugs and retarded problems in standard libs, programs I need, and so on. but I think it's the best alternative I have for my life seriously, should I just resist, or should I off myself? >>51569876 >esthetically >autism but not attention to detail. >>51569834 Thanks, man. Got it working. in terms of performances, is it better to generate 2 different bitmapfonts or use the same bitmapfont for separate glyphlayouts >WHAT ARE YOU WORKING ON /G/ going to make turn based asteroids on a text grid >>51569921 >it's the best alternative I have for my life What does this even mean? If you hate programming then don't do it. 
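The laddr field in those pconn records is just a (host, port) address tuple, which is why laddr[1] is the port. The same shape comes back from the stdlib socket module; this sketch uses it since psutil is a third-party dependency:

```python
import socket

# Bind a UDP socket to port 0 so the OS picks a free port, then read
# the (host, port) pair back -- index [1] is the port, like laddr[1].
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("127.0.0.1", 0))
laddr = s.getsockname()
print(laddr[1])          # the local port number
s.close()
```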
I'd rather make half as much money than do something I despise every day. That shit will kill your soul anon >>51569922 I knew I had spelt something wrong! >>51569879 Lisp or Sceme? Also, who said you're beloved. >>51570077 I am not your beloved programming literate. Neither. Use Racket instead. More consistent than Lisp and not as useless as Scheme. >>51570077 chicken scheme >>51569990 well, it's one of the few technical things I know how to do, and it can even be used to make money by myself... that's way better than working with/for other people but I'm starting to hate computers, and programming has always been the worst part of this all >>51570156 >that's way better than working with/for other people That's most programming task outside of tiny companies. And even in tiny companies you have to work together with other people. >>51569977 I laughed audibly A friend of mine just gave me a shitload of iBeacons however I can only program in ruby and python. Is there a way for me to use these things with my phone without having to progam in Java? C or C++ /g/? std::vectors are pretty nice and I dont want to reimplement them or other C++ features goodnight friends it's time to renew my subscriptions, /g/. what are the best linux / programming / IT magazines? >>51570310 popular science Now that both of these are finished, What do you people thing the best way to communicate between them would be? I don't want it to screenshot and timestamp EVERY time it gets 20 packets, What should I do? Right now it gets the IP from my clipboard. >>51570366 >>51570310 r/programming already 662,319 readers >>51569876 I generally follow the convention of the language that I'm programming in. camelCase in Haskell for instance. However I mostly write C and Python, which means I use lowercase and snake case. For instance, makeusr() is fine but debugobjectinitonstack is in need of some underscores. 
>>51570379 >not just music in the main music folder >>51570379 God this screenshot is so fucking disgusting can you recommend me a good set of exercises to do in ia32 assembly? >>51568767 in a book about it? I think CLRS cover a bit of it in their book >>51570500 >>51570521 I recently moved everything and put it there. That's why I've been blocking it out. I don't care any more, anon. my HDD failed. >>51570592 Book or website, anything is cool. Also could I have the full name of CLRS? What's the most employable lang that isnt >java >c# >c >c++ >javascript >php >>51570644 Go >>51570644 probably python or ruby. >>51570644 You won'¨t get proper answers asking such a question, why are you discarding these languages so quickly anyway? Ruby or Python, perhaps? >>51570644 Around here? Clojure. >>51570613 it's easily available on the usual places >>51570695 Thanks buddy, I appreciate it. >>51570691 Where is "here"? >>51570714 here on the 4chan labor market. >>51570714 Tampere, Finland. I've decided that my first project will be a creative crawler. You start with a search term, and then branch out into smaller and smaller subtopics (ex. "Mexico" will lead to "Agriculture, History, Current Events, Topography" etc.) Thus you might find bits of information too specific to have been found by one or two generic search terms. Nestl. A portal for the mildly interesting. >only one semester away from graduating >glance at my qualifications >look back at prerequisites for entry-level jerbs >no vodka in the freezer >fuck ...and that's why I'm coding neural nets with Rust at four o'clock in the morning. >>51570762 >keeping vodka in the freezer Shit like this is why nobody's gonna hire you anon >>51570789 The really cheap shit is well-placed in a fridge though... >>51570156 Become a web developer. It's programming on easy mode, and still pays well >>51570762 Hahah I chuckled. g'luck anon Question for those who've used Qt: Is qml worth using? 
Not too fond of using a ui designer (java ptsd) and qml looks like a quick way of manually coding the ui. I just don't know if there are any performance differences or maybe there are common things that qml can't handle. >>51568746 Sounds dumb but it can be done. If you pass shit to a function in the wrong order, say it's declared as (list, number, tuple) and you say (number, list, tuple). Python'll throw a fit. I fucking hate python. >>51570889 >webdv >basically scripting in PHP and sql, sometimes Javascript >"programming" holy shit I did it, I managed to get a C program that compiles and runs correctly...without a main function! The only problem is that it doesn't actually do anything. There's no errors on compilation and no abnormal exits or whatnot when it runs, it just...well...does nothing. Here's the code:const _start(){ asm( "movl $len,%edx;\n" "movl $msg,%ecx;\n" "movl $1,%ebx;\n" "movl $1,%eax;\n" "int $0x80;" "ret;\n" "msg: .ascii \"Hello, world!\\n\";\n" "len = . - msg;\n" ); } Throw that into your GCC with the `-s` and `-nostdlib` flags and it'll give you a 984 byte executable that, when run, immediately exits normally. How would I go about, you know, actually getting this to print "Hello, world!" to the stdout like it should? Any real C programmers out there or are you all just high-level pansies? >>51571746 1 is exit() >>51571746 >real C programmers >code is pretty much assembly >>51571797 According to the contents of unistd_64.h, write is 1 and exit is 60. >>51571840 1 is sys_write >>51571850 Yeah, that's what I'm going for. Strangely, changing it to 4 results in "Hello, world!" being printed and then an immediate segfault. >>51571850 4* >>51571850 >>51571862 4* >>51571877[~]% cat /usr/include/asm/unistd_64.h | grep write #define __NR_write 1 #define __NR_pwrite64 18 #define __NR_writev 20 #define __NR_pwritev 296 #define __NR_process_vm_writev 311 4 is write on 32-bit machines. 1 is write on 64-bit machines. 
>>51571840 >unistd_64.h You're using the 32 bit syscall interface. >>51571888 nice trips dude >>51571862 it's actually 4 I misread >>51571904 That must have been it. With that knowledge I fixed the exit function to actually call exit(), and now it works:const _start(){ asm( "movl $len,%edx;\n" "movl $msg,%ecx;\n" "movl $1,%ebx;\n" "movl $4,%eax;\n" "int $0x80;\n" "movl $1,%eax;\n" "int $0x80;\n" "msg: .ascii \"Hello, world!\\n\";\n" "len = . - msg;\n" ); }[/tmp/devel]% gcc -s -nostdlib tmp.c -o tmp tmp.c:1:7 warning: return type defaults to 'int' [-Wimplicit-int] const _start(){ ^ [/tmp/devel]% ./tmp Hello, world! [/tmp/devel]% The binary is 992b, which is still larger than the raw asm but under 1KB so I'm satisfied. what happens to the preprocessor after c++ introduces modules? >>51571966 looks like gcc is adding a subroutine prologue/epilogue to the program >>51572074 Interesting. Also, as usual, there's quite a bit of useless null characters "padding" the binary. I'll see how small I can actually get the binary myself. >>51572142 >>51572074 Because the string seems to be hard coded at 0000:0165, I can't get the executable to be less than 173 chars without tinkering with the actual code which is something I'm not knowledgeable enough to do. I managed to get the 992 byte binary down to only 370 bytes by cutting out the null data and useless debug information, though, which is actually really impressive all things considered. 370B for a hello world program... >>51571966 >The binary is 992b, which is still larger than the raw asm but under 1KB so I'm satisfied. You'll get a bunch of junk by default from gcc (build-id and gcc version string, which creates extra sections) and for x86_64 the abi mandates unwind tables for all functions (which aren't ever needed for a fully working program). Compile and link with: gcc -s -nostdlib -fno-asynchronous-unwind-tables -Qn -Wl,--build-id=none That's 440 bytes without manual intervention. 
>>51572200 >01001000011001010110110001101100011011110010110000100000010101110110111101110010011011000110010000100001 woah scrub, a whole 370 bytes? I just printed it out using only 13, get the fuck on my level son >>51572260 Woah, thanks! And from there when manually shortening the binary it drops down to a measly 223 bytes. There's still a lot of null characters in the binary, though. I'm sure there's a way to cut those out. >>51572282 As I suspected, there is! I found the byte that points to the beginning of the string. Moving the string to the middle of the program and manually changing that byte to the new location of the string resulted in a savings of about twenty bytes with no loss of functionality. I'm not under 200 bytes yet but I'll keep playing around. Code someone mentioned it is impossible to write swap function in Java. I beg to differimport java.lang.reflect.*; public class T{ public static void swap(int a, int b) throws Exception{ Field f = Integer.class.getDeclaredFields()[3]; f.setAccessible(true); char[] t=(char[])f.get(null); char temp = t[a]; t[a] = t[b]; t[b] = temp; } public static void main(String args[]) throws Exception{ int a = 5; int b = 3; int c = 1; System.out.println("Values = a=" + a + " b=" + b + " c=" + c); System.out.println("swap(a, b);"); swap(a, b); System.out.println("Values = a=" + a + " b=" + b + " c=" + c); System.out.println("swap(b, c);"); swap(b, c); System.out.println("Values = a=" + a + " b=" + b + " c=" + c); } } outputValues = a=5 b=3 c=1 swap(a, b); Values = a=3 b=5 c=1 swap(b, c); Values = a=3 b=1 c=5 template<int s> struct foo { void bar() { std::cout << "Bar"; } }; template<> // why the hell is this <> ? struct foo<3> // and what the FUCK is this? { void baz() { std::cout << "Baz"; } }; >>51572282 Read this >>51572389 I did, but a lot of that's over my head. That program doesn't DO anything, it just runs and immediately exits. 
I understand the assembly behind my own program but how that actually translates to binary is beyond me, so right now I'm just chasing pointers in an attempt to catch the segfaults. A fresh build of my project now takes over 10 minutes. Kill me. >>51572482 don't worry. this will teach you how to efficient handle your includes >>51572546 That's true I guess. >>51572388 generics Let's say I want to write a simple chat client but I want to encrypt the messages. What's the best way to do this? - Encrypt the messages using some sort of public and private key that the people chatting have and know? I assume they'd make the keys with pgp or something similar. - Use some popular library that already has a function for that and I only have to choose the algorithm. - Don't encrypt anything and run the software over Tor. - Forget about this because I clearly don't know shit about encryption. I really don't care about the encryption part, and obviously I wouldn't think of implementing my own shitty algorithms, I just want to be able to encrypt messages. Dead simple >>51572372 nobody says you cant write a function to swap primitives. the insistence is that you can't do it without using an object >>51572482 I assume you have 100k lines of code and you are not fucking up with your dependencies or using some /g/ tier language >>51572891 as you can see in my code, a b and c are al ints. And no, you can't also not really swap objects too >>51572886 An easy way to do it is to embed the key in the client and server. Sign all network traffic with the key so only the client and server can decrypt it. Teaching myself some modern c++, this compiles in g++, not in clang:constexpr std::initializer_list<int> il={1,2,3}; If I do more complicated things similar to this than g++ starts to fail too without being too verbose of the error. Fuck this. I'm trying to write a library with literal types but I keep bumping into obscure errors. >>51572899 >I assume you have 100k lines of code Yep. 
My tools say 200KLOC but that is probably including quite a bit of stuff which isn't compiled. >>51572911 >An easy way to do it is to embed the key in the client and server So all clients use the same key? Then there's no point in encrypting the messages. Or am I missing something? Oh and I'd like to avoid having a server >>51573010 I don't think it'll work for you. Let's say your applications are A, the client, and B, the server. B checks the hash of the network request to validate the source, which should be A. Any other client is unable to interact with the server because it doesn't have the right key. You should use something a bit more advanced like public/private keys. Client A signs the message with the public key of client B. Client B decrypts the message with its private key. Decryption is only possible with the private key, so only client B is able to read the message. Make it simple for yourself and use a library. >>51572372 >Field f = Integer.class.getDeclaredFields()[3]; why 3? what does the fourth element in the array of declared Integer fields represent? I really don't understand what's going on here >>51573136 so the server is mandatory right? Otherwise each client would have to know the IP of everyone in the chat. I was thinking about using something like torrent trackers where they basically take care of making sure the clients know who are the clients who have the file parts they need and make them talk with each other >You should use something a bit more advanced like public/private keys. As I suspected. I only studied how RSA works in Discrete Math, but I assume there are far better algorithms by now to replace them. Thanks for your help mate. >>51573330 Not really. You only need to know the IP and port of 1 client. Lets say client A connects to client B. Client A has to know the port and IP of client B. There's no way around that. Now client C connects to client B. Client B notifies client A about the new peer. 
How the notification is done is up to you. I assume torrent trackers work as well. >>51573239 >why 3? because I need the 4th element on that array Let's say I want to make an unofficial mobile version of a website in the form of another website that fetches data through curl/ajax or whatever, and I have to log in to the original website through my website and post from my website to the original website. What should I study to do this? is it even possible? what keywords should I use? only found stuff for mobile apps but mine is an actual website >>51573807 >>/wdg/ >>/sqt/ >>51568746 dynamic languages have types too. >>515685071574051 benchmarks when ? >>51574051 MODS >>51570644 VB Anyone saying otherwise is memeing >>51572388 implementation for a specific value, which means that for any s it will print "Bar" but for s=3 it will print "Baz" >>51574051 you don't have to spam that shit in every single thread if you don't make any progress or don't even supply a link to mess with it. Pretty much everyone that browses this thread already knows about your "lisp" >>51574182 sorry but can't because Schroedinger. >Trying to teach myself DX12 >black screen >Can't work out why it isn't clearing to white like I asked it to Fuck... >>51574182 I am making progress every day. Valutron embodies my emphasis on solid design and incremental improvement. You can find the experimental source tree at . Be forewarned, it's not very useful right now as I am still in the design phase of building the new bytecode VM and its associated JIT compiler. I am documenting the design and implementation of these as well. Further, Valutron is somewhat large in scope, and so the source tree is in flux as I refine my designs. Note that the new JIT should provide a 5-10x speed boost vs. straight evaluation of trees, and should also enable the implementation of call/cc in a way that matches the semantics prescribed by the Revised [6] Report on the Algorithmic Language Scheme. My C is a little rusty.
I would appreciate if you could help. I am making a data logger where writing speed and memory is essential. Should I call fprintf 10 times to write the different data or should I construct a char array and write everything at once? >>51574594 is it static typed? What are V-expressions then? >>51574652 fprintf is probably already buffered >>51574555 And the moment I moan on /g/, i realize what an idiot I am. I was never transitioning the back buffers between render_target_output and present. >>51570644 It's no single language. Python and ruby are the most in demand alongside vb, but it really depends on the field and application. Fucking Lua has become the python of video games and pops up outside of it almost as much as python. >>51574688 Thanks. That makes it a lot easier. >>51574594 What's your motivation for the used license? Hey I am shit in databases. For one client, I build a website in PHP/MySQL and I don't really spend too much time on it, it just runs on autopilot. (I am spending my time on another project that pays me 100 times more.) Well, right now, they started to tell me that some actions take a long time. When I got a bit deeper, it seems like it's MySQL - some queries, pretty randomly, take a long time (about 40 seconds). Similar queries before and after are quick, so it's not regular, it's pretty random. I installed munin to watch the load on the VM where it runs, and RAM, CPU, everything seems to be fine. I have no clue what the hell is going on and what should I do. I don't want to fine-tune the SQL queries now, because I don't want to touch the code at all if possible, especially the more low-level stuff like SQL queries. I will probably just try to make innodb_buffer_pool_size larger and hope that it fixes things magically. But it pisses me off that I have no idea what is going on >>51574857 Sound like your queries are deadlocking. >>51574594 doesn't look like objective-c >>51574857 Doesn't PHP have something like profiler with data sampling? 
>>51574655 V-expressions are an algebraic syntax available for use for code://(); They appear to be statically typed, but in fact this serves as recommendations more than constraints due to the late-binding nature of Lisp. You could overhaul the type system and implement associated optimisations in theory, though. >>51574906 That's true. I moved to C++ quite early on for various reasons. But calling it "Objective-C" elicits funny reactions from /g/ >>51574908 I don't think I need a PHP profiler, I figured out the bottleneck are the SQL queries Also it doesn't happen at development machine at all, only on production, and only in around 1/100 cases (or less), so it's hard to catch >>51574874 Thanks, I will read on that. (Again I am terrible with databases, I know some basic SQL syntax and that MySQL and Postgres exist, and that's where my knowledge ends) >>51574819 It fits with the theme of communism that Valutron is based around. >>51574983 do you accept donations ? Work on making the flowchart thing for generating random maps has been going exceeding well. The only thing I'm really missing at this point is the ability to make feedback loops. >>51575116 I don't see why not. >>51574963 >slot string world : accessor getWorld, initarg world:; Could the last colon be omited? What's it for? Should I use references or copies if I'm not modifying passed variable? >>51575299 const reference >>51575299 Try it out and you'll know what to use. >>51575251 It specifies that the named parameter 'world:' may be passed to make-instance to initialise the 'hello' slot with a value. I inherited the convention of including the colon from ISO Lisp Object System (ILOS). It doesn't seem very useful, though, so I can probably remove it and have the colon be added implicitly. >>51575299 Pass-by-value, unless you need to store that variable somewhere and perhaps modify it later. >>51575309 My reply was a bit short sighted. >>51575299 It depends. For the sake of concurrency, pass by value. 
Otherwise, pass by const reference. >>51575299 by value is faster Is node.js the future of programming? I uploaded my language in its current state at:
1. There are design issues with the typenodes (MNode in pmf.h/cpp) which make the code bloated, error-prone and leaky.
2. The primitives only work with int types. Besides, they're handled ad-hoc and bypass most of the code generation, which is also bug-prone and oogleh.
3. I haven't tested that it actually produces valid executables (but the llvm looks OK).
4. printf-and-continue-until-segfault is used instead of proper erroring.
build.sh provides the commands to run to build the entire thing. Only the last command is required if the .y (bison file) and .l (flex file) aren't touched. The code is mostly written as almost-throwaway code so it's not pretty, but it should be understandable and workable. The language's syntax is in the .y file. There is support for if-then, if-then-else, while, set!, let, lambda, type declaration (might not work, not sure) and function calls. Moreover, any symbol can be placed infix to represent an infix call without ambiguity under LALR(1): let fn = lambda (x, y) x + y; fn(3, 4) fn fn(8, 9); \\should output the llvm IR for "7 + 17" There are some simple-to-correct syntax flaws such as requiring ; after {} in if and while statements. llvm_backend.h/cpp does code generation. No passmanager is used, so it outputs unoptimized code, which is especially useful for debugging. I like the base syntax (beside some flaws that need to be addressed, as outlined above) so far, but the codebase is starting to be hard to work with. Next up is probably implementing multiple dispatch and type translation (e.g. <number> = <u>(<long>, <double>) should unify the union type, not the <number> type, with whatever argument is used in a function call). Any pull requests, patches, or suggestions would be appreciated, especially on how to clean up the code. >>51575351 Unless it gets concurrency, no.
>>51574963 What's the purpose of this code?// let's define a 'get-hello-world' generic defgeneric get-hello-world (object some-object) => string; >>51575382 WebWorkers are a thing. Memory shared multi threading is being worked on for future versions of ES. >>51575382 It has workers. >>51575396 Not him, but smalltalk-style OOP means you define methods on generics, not on objects. Typically, generics are automatically made for you the first time the compiler encounters a method definition with an as-of-yet unseen name in good languages, though. Think of it as an abstract base class's function, except it's really not in and of itself related to the object, which allows extending the message passing framework over any existing object type without modifying the library which introduces that type. >>51575396 It defines a generic function. These constitute methods that are unassociated with classes. Similar to C++ templates, whichever specialisation (as defined with defmethod) is most specific to the types specified in an invocation of the generic is the one that is called. This is achieved through the Double-Dispatch mechanism. >>51575351 yes, don't listen to >>51575382, the guy does the usual confusion between concurrency and parallelism. (And even that, nodejs has parallel computing). >>51575407 >>51575412 Interesting... 
struct Parser<'a> { source: &'a str }

enum Token<'a> { Literal(&'a str) }

impl<'a> Parser<'a> {
    fn parse_a(&mut self) -> Option<Token> { None }
    fn parse_b(&mut self) -> Option<Token> { None }
    fn parse(&mut self) -> Option<Token> {
        if let Some(x) = self.parse_a() { return Some(x); }
        if let Some(x) = self.parse_b() { return Some(x); }
        None
    }
}

error: cannot borrow `*self` as mutable more than once at a time
    if let Some(x) = self.parse_b() {
                     ^~~~
note: in expansion of if let expansion
note: expansion site
note: previous borrow of `*self` occurs here; the mutable borrow prevents subsequent moves, borrows, or modification of `*self` until the borrow ends
    if let Some(x) = self.parse_a() {
                     ^~~~
note: previous borrow ends here
fn parse(&mut self) -> Option<Token> { ... }
^
Can someone explain to me why this doesn't work? The borrowing/lifetime semantics of rust confuse me. >>51575423 >Not him, but smalltalk-style OOP means you define methods on generics, not on objects. But why? Is there any good reason not to have them generated automatically? >>51575299 It depends on how big the object is.
>tl'dr how to write unit tests for a complete system >>51569865 codeeval.com >>51575563 >For example in graphics, how do you test if something is being drawn correctly? Compare against a "reference" image. If you're talking about 3D rendering, that would be an image rendered by a path tracer. >>51575563 Graphics testing is a whole area of research. The usual "dumb" approach is to manually render and check the validity of an image, then use that image as a reference and allow up to x% per-pixel deviation in the render (different hardware render differently). For other forms of testing, it's about path coverage. thesis: rustlang is a theoretically nice language whose community is overrun by mediocre webdevs who see it as a lazy shortcut to get into systems programming without having to learn all the low level stuff. i'm disgusted. >>51575187 Where the fuck did you get that flowchart control? >>51575563 For networking you usually test all the basic functions like your protocol handler, and then assume the OS correctly pushes your bytes down the pipe. You'll have some end-to-end tests that verifies this separate from the unit tests. >>51575596 >>51575610 ok, thank you Question. What exactly is a parameter? Say I have a function func(func(a+b, a-b), func(a*b, a/b)) Are there 4 or 8 parameters? Or is it 2? Halp >>51575563 In my current company, we don't test at all. Not enough time, we are a start-up, move fast break things LOL XD I hate that. But I will not start doing tests either, the system is quite big right now and everyone is telling me (directly and indirectly) tests are useless and I should not do them So fuck it >>51575727 >So fuck it That's the right attitude. >>51575723 Each call to func() has two parameters. So (and this is a pretty useless number) there are six parameters overall: a+b, a-b, a*b, a/b, and the values of func(a+b, a-b) and func(a*b, a/b). >>51575723 Who cares, really. The function "func" has 2 parameters. 
But if you want "total parameters", it's probably 6, as this guy says >>51575736 but again it's a useless number. func has two parameters and that's what is important. >>51575736 >>51575768 thanks peeps. It's definitely a number you don't need to know but I have a computing exam coming up on Monday and they ask these kinda things. Thanks again >>51575736 >>51575768 the correct answer was "there is no way to know without function definitions" because there can always be defaulted parameters. I will give you B- for your answer though >>51575802 Is a defaulted parameter a parameter if nobody knows about it? And does it make a sound? >>51575802 >because there can always be defaulted parameters Maybe in your language, not in "CS in general". >>51575727 tests are largely useless. At least, i reckon they are massively overused. The amount of time it takes to write and maintain a suite of tests is very high, and you could easily spend the same time just making the code base more robust and leverage the type system more (types are tests). They definitely have a place in a large, well designed system where certain things cannot easily be made as robust with good code alone. In that case tests are hugely useful. Most of the time they are a waste though. Your company is probably right to be skipping them. >>51575681 thank you too mate >>51575727 >Not enough time, we are a start-up, move fast break things LOL XD > hate that. But I will not start doing tests either, the system is quite big right now well if your company had written tests while the system was growing that wouldn't have happened >So fuck it How the fuck do you even change something in the code? I'd shit myself knowing I could break it all >>51575812 yes it is, you are confusing parameters with arguments. function definitions have parameters, function calls have arguments >>51575821 Remember kids, don't be this guy: stay in school. >>51569263 ;_; >>51575840 Good counter argument.
I'm 29 and have a great job that i love. >>51575814 lots of languages support defaulted parameters. So, if it is "CS in general" you should keep that in mind while answering questions >>51575631 >>51575862 You have no idea what CS even is, so top kek to you. Decided to monkey around with my palette thinger from yesterday. Apparently the correct technique is to use HSL, rather than RGB, because the distance will produce images that are off-color (warm, or hot, depending on your input palette) but more closely match the original image in terms of shape. Not only that, but working center out is also important. Haven't taken to implementing that, though, so here's American Gothic with the Starry Night palette. WE TEXT BOARDS NOW BRING BACK PROG YOU FUCKS >>51575802 >>51575862 In all of them when the function has a parameter that can be omitted, multiple versions of that function are generated: with and without. If you call the function with omitted parameter, exactly the amount of parameters you specified are sent to function. >I will give you B- You are a colossal retard. >>51575908 How's that communism working out for ya? >>51575866 How did you uncover this chestnut? I looked around before but could never find anything. >>51575908 Dude you still alive ? Holy shit. >>51575922 >>51575943 Wait, what? Did something happen last night that I missed? >>51575936 Lots and lots of google searches. I filtered a ton of over-engineered bullshit before I discovered this gem. >>51575908 I hope you're not calculating distance between colors by converting them to HSL and just getting difference or squared difference of three parts. >>51575831 Oh. Yeah. This is the first time I am hearing it's a different thing. I thought it's a synonym. And I code for about 20 years now. >>51575960 It really is beautiful. Looks great for plugin-type applications. Thanks, m90. Always nice to discover good controls. 
>>51575957 You been here like 7-8 months I tought you may have died or something haven't seen in you while >>51575988 Mine just calculates the absolute difference between two colors in RGB. No trickery involved, right now. >>51576020 But why were you talking about HSL? >>51575825 >well if your company had written tests while the system was growing that wouldn't have happened Yeah, but I came later when it was already giant > How the fuck do you even change something in the code? I'd shit myself knowing I could break it all Yeah I was like that too. And it did break. A lot. But I started to use Flow (basically type system in Javascript, like TypeScript, made by Facebook) and it really catches a lot of errors, at least the most common ones I had to fix actual bugs in Flow to make it work on our code though. Thanks God for github, now it's on mainstream >>51576008 I'm here every day, though. You can never get off Mr. dpt's wild ride. >>51576031 Because apparently that's the 'correct' way to do it. Either that or CIELab delta e. Colors are gay, desu. using the duolingo json api as it misses out words from vocab >>51575821 I wrote a small math parsing library, kinda like what does (although that library is 100x the size of mine, and now I've abandoned mine for this) I had a small suite of tests, and it was quite helpful in building it. I could confidently change a lot of code and then ensure that the program still worked fine. Without tests, I could never make large changes like that. The final result was used in a node based thing like >>51575187. I shared it here a while back anyway, unit tests are good. keep them small and easy to maintain, and have a CI server run the suite on every change you make. I use circle-ci. So how many of you work in a sales driven technology company?? what's your experience with salesmen?? >>51576068 It's very far from correct. In practice, this will get horrible results. 
Web RGB #030202 is hsb(350,0.39,0.01) Web RGB #030402 is hsb(90, 0.50,0.02) The difference between 350 and 90 degrees is huge (that of course is assuming you scale 0-360 to 0-1), but those two colors are practically the same - black. Same applies to HSV (I just can't give you precise HSV values because photoshop only has HSB). Here's some good reading with interactive javascript comparison thing: LAB is great, RGB is good and HSV/HSB produces horrible results. >>51575918 that is wrong. function is still same. same binary. callers uses default parameters given in function definitions if it is missing some arguments >>51576142 We sell to government. No experience. I am a programmer. >>51576142 Our company builds apps for other companies. Does that count? >>51576154 There are multiple functions generated in binary. >>51576154 >>51576189 really depends which language and compiler...it's not an open and shut thing e.g. the compiler determines a default parameter can be inlined in the function instead of wasting space in the stack, so it creates a whole new function which should theorectically run faster because of less bloat if you know of a language/compiler which does create different functions for different number of params, link to the source so we can verify >>51576189 Not for me. With g++ I am getting this.void t(int c = 5){ } t(2); t(); producesmovl $2, %edi call _Z1ti movl $5, %edi call _Z1ti >>51576172 Yes, so does mine, but your company will have salesmen / marketeers to seel it. My personal opinion is techies should take the greater role, but none techinical salesmen selling tech products. What is /g/s experience with being in this enviroment? >>51576238 Turn off inlining. >>51576142 We are sales-driven, but we have no salesmen because we are targeting very niche community where we have, so far, a monopoly, and all people that need us basically know about us. 
Also the issue is that the community of our users is on one hand very niche, on the other hand all over the planet. So yeah we have no salesmen and are very VERY technology driven. Which is cool. The bad side of our company is this >>51575727 Has anyone here ever made something useful using genetic algorithms? >>51576253 >call >inline (def little-schemer {:title "The Little Schemer" :authors [friedman, felleisen]}) (defn add-author [book new-author] (let [[author name] new-author] (assoc (:authors book) author name))) (add-author little-schemer {:name "Gerald J. Sussman"})java.lang.UnsupportedOperationException: nth not supported on this type: PersistentArrayMap >>51576296 It looks like you're using lisp. >>51576253 >call _Z1ti >_Z1ti > inlining dude, never post here again. ever. >>51576296 >he fell for the clojure meme! >>51576242 I'm a dev, but I visit customers frequently. Not to shill products, but to listen to their requirements. We have a few salesmen. They know absolutely nothing about technology except what's in the tech sheet. Even worse, they talk shit about developers. >>51576340 >Someone makes a mistake reading asm >Tell him to never post here again I'm not even that guy, but fuck, I bet 50% of the people here can't even write a trivial piece of code in their assembly language of choice, lay off the guy >>51576281 >>51576349 but it combines readability of lisp with efficiency of Java! >>51576372 Hey! I made that too. >>51576389 It actually kills lisp readability and performs 10x slower than java. Literally the worse of both worlds. >>51576389 Are you implying Java code is more difficult to read than >>51576296? >>51576281 Yes, just like people have made useful things with opendylan. >>51576450 >that_is_the_joke.gif >>51576450 >10x slower than java How difficult would it be to create an app for both osx/android from scratch? 
I have no experience with xcode or swift or any of that, but I'm in my final year of engineering so I do have a decent handle on C/C++/VBA and python The app would have a purse which you can add money too and micro-transactions would occur based on that. Think sort of like uber >>51576482 Fine, 3x slower. Same shit. >>51576361 salesmen sell shit that doesn't exist, at the place i work they want to push shit out at fast paces, there is no time for proper testing. I've been there 3 months now and so much old stuff is unecure and exposed to being attacked. Everything is on one server, not joking, it wont take much for this place to fuck up. To the people running the place it feels like the only thing that counts is making more money and speed things up. Clients come back and our work is tied and still expect us to make an app in a week with loads of bespoke stuff. feels like you're making junk. >>51576519 Uber for money Why did nobody ever thought of that??? >>51576519 Impossible. >>51576450 S-expressions are awful for code. They're best suited to being used for data and as a view of the internal representation. Open Dylan proves that you can get all the Lisp benefits (and even hygienic, ultra-powerful macros) without the stupidity of S-expression code. >>51576519 >>>/hn/ also you can use c++ for both >>51576519 On a related note How difficult is Cordova, and how efficient is it on today's phones? I heard it's kinda slow and inefficient, but I never tried it. And I already know JS/CSS/HTML because I do that stuff daily, while diving "back" into objective-c/dalvik java, I will not be so effective in those >>51576543 S-expressions are literally perfect for code. There does not exist a better representation whatsoever. They're perfectly unambiguous and require only 2 syntax markers, they make refactoring almost automatic with 0 tools, and they work bloatless with any programming paradigm currently in existence. 
They even make it ridiculously easy to serialize and deserialize data as well as making syntax manipulation trivial. >>51576543 >S-expressions are awful for code. i found them to be quite good. it's easy to write tools for s-expressions where parsing is a joy. >>51576539 I'm not sure what you mean, it's for a different service than a car ride, I just don't feel like posting my idea here. Just for the sake of argument assume it functions sort of like uber. >>51576549 hn? Why are there no squiggly lines under errors in emacs flycheck? >>51576574 The only thing I really hate about S-expressions is that they don't work well with type annotations >>51576582 >Open Dylan >>51576528 Same here. We build apps for others, so the goal is to finish the project as fast as possible. Taking shortcuts is a must since the deadlines are insane. There's no budget for unit/UI tests, so we bash the screen and see if it is still working decently. It's a real shame though. I like creating a nice architecture, but that takes time. Time we don't have. Also the quality of 'app developers' isn't really top notch. I guess that's what you get for building apps. I'm still amazed at how many high profile clients we have. >>51574215 ayy >>51576372 > >>51576430 Thanks that is really neat. >>51576571 cordova is deprecated; react-native is what you should use now >>51576598 I don't agree. I think chicken- or racket-style type annotations are fantastic:
(: fn (-> int int int))
(: fn2 (int int .->. int))
(define (fn x y) (+ x y))
(define (fn2 x y) (* x y))
Even inline annotations work quite well:
(lambda ([x : int] [y : int]) #{(+ x y) : int})
But shouldn't be necessary. Top-level items and variables should be markable, but return types and intermediary types should usually be inferred. >>51575908 HSL is a meme I was in Asia for three trips (Vietnam, Hong Kong, Taiwan) and I loved it there How can one find an IT related job there? Is it good idea?
I will take China (mainland), Hong Kong, Vietnam, Singapore, Taiwan. Maybe those other countries with less Chinese influence (Malaysia, Thailand) but I have never been there so I don't know. Korea and Japan - I don't know, I heard they work like crazy there >>51576679 I just think the prefix : and -> look silly >>51576679 Is that for typed racket? I love how it's similar to Haskell >>51576693 Bump for interest. >>51576711 Yeah, it's typed-racket. Chicken's look similar. >>51576696 That's just the baby duck talking. >>51576741 In this case, why ever use regular racket instead of typed racket? >>51576693 Try huawei or baidu. They're the big guys in terms of tech over there. It's a very bad idea. You'll have to accept that your job will mostly consist of spying on people, advertising for druglords, being complicit and hiding the company's illegal activities. >>51576741 No, it's the type theorist talking Granted, -> can be written as prefix Π, but I still prefer the infix arrow Agda-style >>51576767 >Try huawei or baidu Haha. One Chinese IT guy I met in Vietnam specifically told me not to work for Huawei because they work a lot with a little pay there Baidu is cool though >>51576767 >You'll have to accept that your job will mostly consist of spying on people, advertising for druglords, being complicit and hiding the company's illegal activities. Let's be honest though the only difference everywhere else in the world is that they're somewhat more covert about it there. >>51576683 So I've heard. >>51575727 I hope you realize there are college students out there you could give internships to in order to carry out tasks like those. >>51576633 The company I work for has high profile clients as well. The funny thing is when we give them something to test you'd expect them to have teams testing it for different scenarios. For example UI testers, penetration testers.
But what really happens is some cunt that's managing it gets a friend and their family member to test it on their device, and these clients are high profile nationwide guys. It's so surprising. >>51576764 Typed racket is actually very bad. The type system is very weak: it is unable to express many kinds of types, unable to resolve transitive type equivalence (union of unbox-type of box A and unbox-type of box of B vs. unbox-type of box of union of A and B are not compatible, for instance), and unable to form some types outright (for instance opaque structs cannot be passed back to untyped code during a call to a function in that same untyped code because the opaque type is not formed correctly). You also have to choose per-file if you want types or not. There's a massive cost to going from typed to untyped racket due to having to transform type tests into runtime contracts, and another in going in the other direction (+ dev overhead) because you have to specify contracts to run when importing objects from untyped code to determine the type. Also, you often have to add pointless code to get the type engine to correctly detect the types: if you have a union of A and B and you want to do fna if it's an a and fnb if it's a b, you can't use if-else, you have to use if-a-else-if-b-else. More importantly, even when a type is trivially guaranteed and you expect the type system to detect it, you still have to insert pointless if/elseif tests to convince the typesystem of the right type, and you have to add type annotations (#{... : ...}) way too often. What am i doing wrong here?
I'm trying this exercise in a book, but i'm having trouble with passing the pointer to functions into the vector:

#include <iostream>
#include <vector>
using namespace std;

int foo(int x, int y) { return x + y; }
int add(int a, int b) { return a += b; }
int sub(int a, int b) { return a -= b; }
int multiply(int a, int b) { return a *= b; }
int divide(int a, int b) { return a /= b; }

int main() {
    typedef int(*pf)(int, int);
    vector<pf> v{ add, sub, multiply, divide };
    for (vector<pf>::iterator iter = v.begin(); iter < v.end(); ++iter) {
        cout << iter(2, 2) << endl;
    }
    return 0;
}

excuse the messy layout, it's just a small exercise. >>51576871 You mean 'overt'. >>51576887 you can't do that >>51576892 You're right. Been a long day, sorry. >>51576892 I got it to work using: for (auto f : v) { cout << f(2, 2) << endl; } I know why the iterator thing isn't working, but i wanna know why the auto works. What type is f when used with auto then? I know i could just roll with auto, but i don't like using it. Not until i'm more comfortable with c++ and i better understand how my code works at least. >>51576871 Use std::function. >>51576871 cout << (*iter)(2, 2) << endl; >>51576845 Usually the product manager 'tests' the application. I've no idea how they test but it's shit. iOS builds are usually tested on their own devices. 99% of all managers I've met have an iPhone. Android builds are usually tested on a borrowed phone. I occasionally receive mails stating the app doesn't install on their borrowed phone. No shit, it's Android 2.2 with a LDPI screen. We only support Android 4.1 and higher. What kinda applications do you build?
>>51576925 because the range for loop is defined by the standard as:

[code]
auto && __range = range_expression;
for (auto __begin = begin_expr, __end = end_expr; __begin != __end; ++__begin) {
    range_declaration = *__begin;
    loop_statement
}
[/code]

note the *__begin >>51576679 >>51576598 kawa
(define (Foo x::int y::int) ::float (->float (+ (* x y) (- x y))))
>>51576845 plebs have too high expectations of "the elite". it's dudebros all the way to the top >>51576953 >>51576932 Oh, wow i'm retarded. Thanks guys, i get it now. Christ maybe i'm just too dumb for this stuff, i keep making stupid errors like that. >>51576941 I do the web platform, but we build iOS and Android. I don't want to say exactly what the product is; there are quite a few other companies doing the same thing from what I'm aware of. The manager gets pissy when he finds other competition doing the same thing. But it's a decent idea and something a lot of companies are interested in; we have some big names in my country and abroad that want the app. I would ask you but I'm guessing you'll probs not want to give much away. and yes how they test it is ridiculous. A few weeks ago the client had a cry that my web features didn't work in IE 8 in their "amazing" IT studio. So yeah New thread: >>51577055
https://4archive.org/board/g/thread/51568507/dpt-daily-programming-thread
Preparing the environment

Before jumping into configuring and setting up the cluster network, we have to check some parameters and prepare the environment. To enable sharding for a database or collection, we have to configure some configuration servers that hold the cluster network metadata and shard information. Other parts of the cluster network use these configuration servers to get information about the shards. In production, it's recommended to have exactly three configuration servers on different machines. The reason for hosting each configuration server on a different machine is to improve the safety of the data and nodes: if one of the machines crashes, the whole cluster won't become unavailable. For a testing and development environment, you can host all the configuration servers on a single server.

Besides these, we have two more parts in our cluster network: shards and mongos, or query routers. Query routers are the interface for all clients. All read/write requests are routed to this module, and the query router, or mongos instance, uses the configuration servers to route each request to the corresponding shard. The following diagram shows the cluster network, its modules, and the relation between them:

It's important that all modules and parts have network access and are able to connect to each other. If you have any firewall, you should configure it correctly and give proper access to all cluster modules. Each configuration server has an address that routes to the target machine. We have exactly three configuration servers in our example, and the following list shows the hostnames:

- cfg1.sharding.com
- cfg2.sharding.com
- cfg3.sharding.com

In our example, because we are going to set up a demo of the sharding feature, we deploy all configuration servers on a single machine with different ports. This means all configuration server addresses point to the same server, but we use different ports to establish each configuration server.
For production use, everything will be the same, except that you need to host the configuration servers on separate machines. In the next section, we will implement all the parts and finally connect them together to start the sharding server and run the cluster network.

Implementing configuration servers

Now it's time to start the first part of our sharding. Establishing a configuration server is as easy as running a mongod instance with the --configsvr parameter. The following scheme shows the structure of the command:

mongod --configsvr --dbpath <path> --port <port>

If you don't pass the dbpath or port parameters, the configuration server uses /data/configdb as the path to store data and port 27019 to execute the instance. However, you can override these default values using the preceding command. If this is the first time that you have run the configuration server, you might face issues because the dbpath directory does not exist yet. Before running the configuration server, make sure that you have created the path; otherwise, you will see an error as shown in the following screenshot:

You can simply create the directory using the mkdir command, as shown in the following command:

mkdir /data/configdb

Also, make sure that you are executing the instance with a sufficient permission level; otherwise, you will get an error as shown in the following screenshot:

The problem is that the mongod instance can't create the lock file because of the lack of permission. To address this issue, you should simply execute the command using root or administrator permissions. After executing the command with the proper permission level, you should see a result like the following screenshot:

As you can see, we now have a configuration server for the hostname cfg1.sharding.com, with port 27019 and with dbpath set to /data/configdb. There is also a web console, running on port 28019, to watch and control the configuration server.
By pointing the web browser to that address, you can see the console. The following screenshot shows part of this web console:

Now we have the first configuration server up and running. Using the same method, you can launch the other instances, that is, using /data/configdb2 with port 27020 for the second configuration server, and /data/configdb3 with port 27021 for the third configuration server.

Configuring the mongos instance

After configuring the configuration servers, we should bind them to the core module of clustering. The mongos instance is responsible for binding all modules and parts together to make a complete sharding core. This module is simple and lightweight, and we can host it on the same machine that hosts other modules, such as the configuration servers. It doesn't need a separate directory to store data. The mongos process uses port 27017 by default, but you can change the port using the configuration parameters. To define the configuration servers, you can use a configuration file or command-line parameters. Create a new file using your text editor in the /etc/ directory and add the following configuration settings:

configdb = cfg1.sharding.com:27019,cfg2.sharding.com:27020,cfg3.sharding.com:27021

To execute and run the mongos instance, you can simply use the following command:

mongos -f /etc/mongos.conf

After executing the command, you should see an output like the following screenshot:

Please note that if you have a configuration server that has already been used in a different sharding network, you can't reuse its existing data directory. You should create a new, empty data directory for the configuration server.

Currently, we have mongos and all the configuration servers working together pretty well. In the next part, we will add shards to the mongos instance to complete the whole network.

Managing the mongos instance

Now it's time to add shards and split the whole dataset into smaller pieces.
For production use, each shard should be a replica set network, but for a development and testing environment, you can simply add single mongod instances to the cluster. To control and manage the mongos instance, we can simply use the mongo shell to connect to mongos and execute commands. To connect to mongos, you use the following command:

mongo --host <mongos hostname> --port <mongos port>

For instance, our mongos address is mongos1.sharding.com and the port is 27017. This is depicted in the following screenshot:

After connecting to the mongos instance, we have a command environment, and we can use it to add, remove, or modify shards, or even get the status of the entire sharding network. Using the following command, you can get the status of the sharding network:

sh.status()

The following screenshot illustrates the output of this command:

Because we haven't added any shards yet, you see an error that says there are no shards in the sharding network. Using the sh.help() command, you can see all the available commands, as shown in the following screenshot:

Using the sh.addShard() function, you can add shards to the network.

Adding shards to mongos

After connecting to mongos, you can add shards to the sharding network. Basically, you can add two types of endpoints to mongos as a shard: a replica set or a standalone mongod instance. MongoDB has an sh namespace and a function called addShard(), which is used to add a new shard to an existing sharding network. An example of a command to add a new shard is shown in the following screenshot:

To add a replica set to mongos, you should follow this scheme:

setname/server:port

For instance, if you have a replica set with the name rs1, hostname mongod1.replicaset.com, and port number 27017, the command will be as follows:

sh.addShard("rs1/mongod1.replicaset.com:27017")

Using the same function, we can add standalone mongod instances.
So, if we have a mongod instance with the hostname mongod1.sharding.com listening on port 27017, the command will be as follows:

sh.addShard("mongod1.sharding.com:27017")

You can use a secondary or primary hostname to add a replica set as a shard to the sharding network. MongoDB will detect the primary and use the primary node to interact with the sharding network. Now, we add the replica set network using the following command:

sh.addShard("rs1/mongod1.replicaset.com:27017")

If everything goes well, you won't see any output from the console, which means the adding process was successful. This is shown in the following screenshot:

To see the status of the sharding network, you can use the sh.status() command. This is demonstrated in the following screenshot:

Next, we will establish another standalone mongod instance and add it to the sharding network. The port number of this mongod is 27016 and the hostname is mongod1.sharding.com. The following screenshot shows the output after starting the new mongod instance:

Using the same approach, we will add the preceding node to the sharding network. This is shown in the following screenshot:

It's time to see the sharding status using the sh.status() command:

As you can see in the preceding screenshot, we now have two shards. The first one is a replica set with the name rs1, and the second is a standalone mongod instance on port 27016. If you create a new database on either shard, MongoDB syncs this new database with the mongos instance. Using the show dbs command, you can see all the databases from all the shards, as shown in the following screenshot:

The configuration database is an internal database that MongoDB uses to configure and manage the sharding network. Currently, we have all the sharding modules working together. The last and final step is to enable sharding for a database and collection.

Summary

In this article, we prepared the environment for sharding a database. We also learned about the implementation of a configuration server.
Next, after configuring the configuration servers, we saw how to bind them to the core module of clustering.
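As a quick recap of the configuration-server setup described above, the directory creation can be scripted as follows. This is only a sketch: the article's /data paths need elevated permissions, so /tmp is used here so that the script can run unprivileged, and the mongod lines are shown commented out in case mongod is not installed.

```shell
# One data directory per configuration server (demo paths under /tmp;
# the article uses /data/configdb, /data/configdb2, /data/configdb3).
mkdir -p /tmp/data/configdb /tmp/data/configdb2 /tmp/data/configdb3

# Start one config server per directory, as in the article
# (uncomment if mongod is installed):
# mongod --configsvr --dbpath /tmp/data/configdb  --port 27019
# mongod --configsvr --dbpath /tmp/data/configdb2 --port 27020
# mongod --configsvr --dbpath /tmp/data/configdb3 --port 27021

ls -d /tmp/data/configdb*
```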
https://www.packtpub.com/books/content/sharding-action
Hey all, since I am new to this forum I'd better introduce myself first. Well, I plan to take C++ in June, but seeing as it was my weak area earlier on, I wanna train while I've got the chance. So, I'm studying on my own at home since uni doesn't start until summer, and I'm attempting a few questions. I'm gonna make a few topics, so I hope you all don't mind. Here is my first problem, and I'll show you how I've worked it out.

The value e^x can be approximated by the sum 1 + x + x^2/2! + x^3/3! + ... + x^n/n!. Write a program that takes a value x as input and outputs this sum for n taken to be each of the values 1 to 100. The program should also output e^x calculated using the predefined function exp. The function exp is a predefined function such that exp(x) returns an approximation to the value e^x. The function exp is in the library with the header file cmath. Your program should repeat the calculation for new values of x until the user says she or he is through. 100 lines of output might not fit comfortably on your screen. Output the 100 output values in a format that will fit all 100 values on the screen. For example, you might output 10 lines with 10 values on each line.

Okay, so there's basically three parts to this: first get the 100 values, then make them fit on the screen properly, and finally, ask the user if he or she wants to repeat the calculation for another value of x. Thus far, I've done the first 2 things: get the 100 values and then display them in a 10 x 10 grid. I can show you guys my source code. It's as follows:

#include <iostream>
#include <cmath>
using namespace std;

int main()
{
    double x;                // value input by the user
    int i = 1, counter = 0;  /* counter from 1-100 is counter, i will be
                                used to display text in an array */
    double fact = 1;         // factorial
    double ex = 0;           // this stores the answer for e^x

    cout << "This program will take a value x from you and will use it to find the";
    cout << " approximation value of the expression e^x." << endl;
    cout << "Please enter any real number value for x" << endl;
    cin >> x;  // takes the value x from the user
    cout << "Displaying all values from x and entering into the equation" << endl;

    while (counter <= 100)
    {
        ex += pow(x, counter) / fact;  // calculation for e^x
        counter++;                     // increment counter
        fact *= counter;               // calculation for factorial (counter!)
        cout << ex << "\t";
        if (i % 10 == 0)
            cout << endl;
        i++;
    }

    cout << "The calculated value for counter = 100 is " << ex << endl;
    cout << "The predefined value is " << exp(x) << endl;
    system("pause");
    return 0;
}

My questions now:
1. Is this correct thus far? AND
2. How do I modify the loop or add a loop so that the user can repeat the calculation with new values of x? I'm kinda stuck on that one..... if someone can modify my code I'd be grateful.
https://www.daniweb.com/programming/software-development/threads/72428/how-can-i-modify-this-program
Julia Yarba, 01/05/2018 01:15 PM

h1. Geant4.10.4 and CLHEP Random Number Generators

*WORK IN PROGRESS !!!*

Released in December 2017, Geant4.10.4 assumes the use of CLHEP v2.4.0.0. Along with several bug fixes, CLHEP v2.4.0.0 has an important change in the Random Number Generator, which now uses MIXMAX as the default engine (CLHEP::MixMaxRng). The MIXMAX generator is a modern, presumably faster alternative to HepJamesRandom (RANMAR; for details, see Comp. Phys. Comm. 60 (1990) 329), which was the default engine in the CLHEP v2.3.x series.

However, the default random engine can easily be replaced by any other engine of the user's choice. If a user wishes to initialize it with a particular seed, this is also easy to do. Example usage:

<pre>
#include "Randomize.hh"

CLHEP::HepRandom::setTheEngine(new CLHEP::RanecuEngine);
long seed = 123456789; // but no larger than 900000000 !!!
CLHEP::HepRandom::setTheSeed( seed );
</pre>

NOTE: the setTheEngine(...) function does NOT delete the default engine; it just makes the generator use the alternative one. Obviously, CLHEP::RanecuEngine is just one example; there are other engines available in CLHEP/Random. A reasonably up-to-date description of the available engines can be found at the following URL:

In other words, if for any reason one is not comfortable with transitioning to the new MIXMAX engine of CLHEP v2.4.0.0, one can explicitly replace it with HepJamesRandom or any other engine available in CLHEP.

We are currently conducting a series of tests to check how (if at all) Geant4 simulated results may be affected by the use of one or another engine (MixMax, HepJamesRandom, RanecuEngine). Several preliminary results are included to illustrate the case:

* Forward (FW) production of pi+ by 5 GeV/c pi+ incident on a Carbon nucleus, as simulated by the Bertini cascade model (plots omitted)
* Forward (FW) production of pi+ by 5 GeV/c pi+ incident on a Carbon nucleus, as simulated by the FTF(P) model (plots omitted)

Last but not least. While CLHEP v2.4.0.0 is strongly recommended for use with Geant4.10.4, it appears that the use of CLHEP v2.3.4.4 is also possible, at least at a "mechanical level". This means that Geant4.10.4 builds fine against CLHEP v2.3.4.4, and results of several preliminary tests appear to make sense. We will try to explore some more about the backward compatibility of Geant4.10.4 with the CLHEP v2.3.4.x series.
https://cdcvs.fnal.gov/redmine/projects/g4/wiki/RNDM-Geant4104/20/annotate
Are you LinkedIn? You should be!

On April 10, I took my last steps out of the Computer Animation building at Full Sail, where I had been teaching Character Rigging and Scripting for the past 5 1/2 years. I had the pleasure of working with many passionate instructors dedicated to sharing knowledge and experience with their eager students. With that in mind: "So Long, and Thanks for All the Fish".

I finished another plug-in for Maya back in November that will displace a polygon's vertices in real time. The plug-in moves the vertices in their normal direction based off a grey-scale 2D texture. The node is based off a deformer, which allows a change in the set membership. During each frame, the active vertices in the set membership have their UV coordinates queried based off a given UV map. Then, using the color input of the displace Mesh node, a color is retrieved that is used to displace the vertices in their normal direction. Currently the node only works with 2D textures, which does include image sequences. In the example video, all the polygon meshes have a high density for a more accurate representation of the 2D texture; the example in the bottom right corner is 150 by 150 vertices.

A few of the presets were referencing a texture map, causing an error when trying to use them. They have been fixed and a new version is available. Also, I renamed the folders to "mia_material" and "mia_material_x". Keep the comments coming. Read the current discussion here!

Turns out that my services were needed for unloading the truck. With the help of a moving crew hopped up on energy drinks, we were finished in 2 hours. SIGGRAPH went well; I talked to many people and companies. Many great ideas for future rigging projects spawned from the many great talks, which I will be working on soon. One of the most interesting things I saw was not on the convention floor, but on the way to Seattle: a truck loaded with tomatoes caught fire.
My attempts to snap a picture of the flame-roasted tomatoes failed. The new 2008 demo reel is now live. All the menus and features were created using Python.

Download the 2008 Demo Reel MOV (32M: 720x480, low quality) here!
Download the 2008 Demo Reel MP4 (70M: 720x480, high quality) here!

If you have time, don't forget to check out my latest creation: a plug-in that displaces a polygon's vertices in real time. The Flash video can be found here!

I will be attending SIGGRAPH this year, representing the Orlando ACM SIGGRAPH, in sunny LA. I am looking for work as a Character Technical Director. Contact me if you want to set up an interview, or even just to hang out. I will be staying with my brother in LA before I help him move to Seattle, so I will be touring the city on the 20th. It's going to be a nice 18-hour drive; hopefully he does not ask me to help him unpack the truck. -Brian

I wrote this class while rigging two characters simultaneously for my new reel. The rigs are very similar and the names of the nodes are the same. I felt that I was spending too much time renaming the nodes on the second character by referencing the first character's names. I tried copy/paste, which did work but wasn't time saving. This class allows a selection of node names to be stored in a file and loaded in another instance of Maya, with a search and replace feature. Any feature requests, please let me know. If you like the script, please email me, sign the guestbook, or leave a comment.

Download the crossRename here!
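The crossRename class itself isn't listed here, but its search-and-replace idea can be sketched in plain Python, independent of Maya (all names below are hypothetical; the real tool reads the stored names from a file and calls Maya's rename command):

```python
def cross_rename(names, search, replace):
    """Return a mapping of old name -> new name after a search/replace.

    A pure-Python sketch of the crossRename idea: names that do not
    contain the search string are left out of the mapping, so only
    nodes that actually change would be renamed.
    """
    mapping = {}
    for name in names:
        new_name = name.replace(search, replace)
        if new_name != name:  # only rename nodes that actually change
            mapping[name] = new_name
    return mapping

# Example: transfer names from characterA's rig to characterB's rig.
stored = ["charA_arm_CTRL", "charA_leg_CTRL", "root_GRP"]
print(cross_rename(stored, "charA", "charB"))
# → {'charA_arm_CTRL': 'charB_arm_CTRL', 'charA_leg_CTRL': 'charB_leg_CTRL'}
```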
#Created by Brian Escribano
import maya.cmds as cmds

def curveOffset():
    selection = cmds.ls(selection=True)
    curve = selection[0].split('.')[0]
    selCV = selection[0][selection[0].find('[')+1 : selection[0].find(']')]
    numSpans = cmds.getAttr(curve + '.spans')
    degree = cmds.getAttr(curve + '.degree')
    form = cmds.getAttr(curve + '.form')
    numCVs = numSpans + degree
    if form == 2:
        numCVs -= degree
    try:
        state = int(selCV) % 2
    except:
        state = 0
        selCV = numCVs + 1
    if state == 0:
        selList = range(0, numCVs-1, 2)  # evens
    else:
        selList = range(1, numCVs, 2)  # odds
    for sel in selList:
        if sel != int(selCV):
            cmds.select(curve + '.cv[' + str(sel) + ']', toggle=True)

The parentCurve function allows for quick parenting of a nurbs curve's shape node to a given list or selection.

Keyword Arguments:
normal=(1,0,0) #Sets the normal direction of the makeNurbsCurve, defaults to X
radius=0.5 #Adjusts the radius of the curve
color=None #Changes the color of the curve, choices are [1-31]
selList=None #The list of nodes to make the adjustments on. Default will be the current selection

This short function will create a group node above the nodes in a given list. If no list is provided, then the selection is used. It has the ability for search and replace. The group node will be placed in the same position, to zero out the translation and rotation of the controls. It will also adjust the parenting as needed. For example: groupAbove('CTRL', 'GRP') will create a group node with the name changed by the search and replace keywords.

One limitation of MEL compared to Python is the inability to set up default keyword arguments in functions (procedures). For MEL to create reusability for a procedure to work with either a selection or a custom array takes multiple commands or procedures.

Removed the extra white spaces from the file names for the MIA presets for Maya and added a new download for Advance Output. A round of applause to Dagon for the great suggestions and ummm... corrections. Keep the comments coming.
Working with the help of David Hackett, I converted all the MIA presets for 3ds Max that were created by Jeff Patton to individual presets for Maya using Python. The presets that required procedural or texture files were changed to the value of black. If you have any questions about the presets and Mental Ray, I will gladly forward your inquiries to David. After all... I am a Technical Artist, not a Render Monkey!

Update (December 6, 2008): Create a folder called "attrPresets" in your presets folder and drop the folder from the zip inside. I wasn't very clear on how to get them to work.

Download MIA Presets Here: mia_presets.zip
Download Advance Output MIA Presets Here: mia_ao_presets.zip

multiConnectAttr allows for multiple attribute connections at one time, with the option for custom undo/doIt commands in either Python or MEL based off a list/array. Useful for updating a GUI that does not normally allow for undos.

Download for Python 2.4.3 (Maya 8.5) Here!
Download for Python 2.5.1 (Maya 2008) Here!

Please email me or use the Contact Form if you like the multiConnectAttr command. You can also leave a comment at this very entry or even sign the Guestbook.

All material is Copyrighted to Brian Escribano 2005-2008 unless noted otherwise.
http://brian.meljunky.com/
Edited by Rachel Harriette Busk Long ages ago, there lived in a city of Northern India a father and son. Both bore the same name, and a strangely inappropriate name it was. Though they were the poorest of men without any thing in the world to call their own, and without even possessing the knowledge of any trade or handicraft whereby to make a livelihood to support them at ease, they were yet called by the name of Shanggasba, that is “Renowned possessor of treasure1.” As I have already said, they knew no trade or handicraft; but to earn a scanty means of subsistence to keep body and soul together, they used to lead a wandering sort of life, gathering and hawking wood. One day as they were coming down the steep side of a mountain forest, worn and footsore, bending under the heavy burden of wood on their backs, Shanggasba, the father, suddenly hastened his tired, tottering steps, and, leading the way through the thickly-meeting branches to a little clear space of level ground, where the grass grew green and bright, called to his son to come after him with more of animation in his voice than he had shown for many a weary day. Shanggasba, the son, curious enough to know what stirred his father’s mind, and glad indeed at the least indication of any glimpse of a new interest in life, increased his pace too, and soon both were sitting on the green grass with their bundles of wood laid beside them. “Listen, my son!” said Shanggasba, the father, “to what I have here to impart to thee, and forget not my instructions.” “Just as this spot of sward, on which we are now seated, is bared of the rich growth of trees covering the thicket all around it, so are my fortunes now barren compared with the opulence and power our ancestor Shanggasba, ‘Renowned possessor of treasure,’ enjoyed. Know, moreover, that it was just on this very spot that he lived in the midst of his power and glory. 
Therefore now that our wanderings have brought us hither, I lay this charge upon thee that when I die thou bring hither my bones, and lay them under the ground in this place. And so doing, thou too shalt enjoy fulness of might and magnificence like to the portion of a king’s son. For it was because my father’s bones were laid to rest in a poor, mean, and shameful place, that I have been brought to this state of destitution in which we now exist. But thou, if thou keep this my word, doubt not but that thou also shalt become a renowned possessor of treasure.” Thus spoke Shanggasba, the father; and then, lifting their faggots on to their shoulder, they journeyed on again as before. Not long after the day that they had held this discourse, Shanggasba, the father, was taken grievously ill, so that the son had to go out alone to gather wood, and it so befell that when he returned home again the father was already dead. So remembering his father’s admonition, he laded his bones upon his back, and carried them out to burial in the cleared spot in the forest, as his father had said. But when he looked that the great wealth and honour of which his father had spoken should have fallen to his lot, he was disappointed to find that he remained as poor as before. Then, because he was weary of the life of a woodman, he went into the city, and bought a hand-loom and yarn, and set himself to weave linen cloths which he hawked about from place to place. Now, one day, as he was journeying back from a town where he had been selling his cloths, his way brought him through the forest where his father lay buried. So he tarried a while at the place and sat down to his weaving, and as he sat a lark came and perched on the loom. With his weaving-stick he gave the lark a blow and killed it, and then roasted and ate it. 
But as he ate it he mused, “Of a certainty the words of my father have failed, which he spoke, saying, ‘If thou bury my bones in this place thou shalt enjoy fulness of might and magnificence.’ And because this weaving brings me a more miserable profit even than hawking wood, I will arise now and go and sue for the hand of the daughter of the King of India, and become his son-in-law.” Having taken this resolution, he burnt his hand-loom, and set out on his journey. Now it so happened that just at this time the Princess, daughter of the King of India, having been absent for a long time from the capital, great festivities of thanksgiving were being celebrated in gratitude for her return in safety, as Shanggasba arrived there; and notably, on a high hill, before the image of a Garuda-bird2, the king of birds, Vishnu’s bearer, all decked with choice silk rich in colour. Shanggasba arrived, fainting from hunger, for the journey had been long, and he had nothing to eat by the way, having no money to buy food, but now he saw things were beginning to go well with him, for when he saw the festival he knew there would be an offering of baling cakes of rice-flour before the garuda-bird, and he already saw them in imagination surrounded with the yellow flames of the sacrifice. As soon as he approached the place therefore he climbed up the high hill, and satisfied his hunger with the baling; and then, as a provision for the future, he took down the costly silk stuffs with which the garuda-bird was adorned and hid them in his boots. His hunger thus appeased, he made his way to the King’s palace, where he called out lustily to the porter in a tone of authority, “Open the gate for me!” But the porter, when he saw what manner of man it was summoned him, would pay no heed to his words, but rather chid him and bid him be silent. 
Then Shanggasba, when he found the porter would pay no heed to his words, but rather bid him be silent, blew a note on the great princely trumpet, which was only sounded for promulgating the King’s decrees. This the King heard, who immediately sent for the porter, and inquired of him who had dared to sound the great princely trumpet. To whom the porter made answer,— “Behold now, O King, there stands without at the gate a vagabond calling on me to admit him because he has a communication to make to the King.” “The fellow is bold; let him be brought in,” replied the King. So they brought Shanggasba before the King’s majesty. “What seekest thou of me?” inquired the King. And Shanggasba, nothing abashed, answered plainly— “To sue for the hand of the Princess am I come, and to be the King’s son-in-law.” The ministers of state, who stood round about the King, when they heard these words, were filled with indignation, and counselled the King that he should put him to death. But the King, tickled in his fancy with the man’s daring, answered,— “Nay, let us not put him to death. He can do us no harm. A beggar may sue for a king’s daughter, and a king may choose a beggar’s daughter, out of that no harm can come,” and he ordered that he should be taken care of in the palace, and not let to go forth. Now all this was told to the Queen, who took a very different view of the thing from the King’s. And coming to him in fury and indignation, she cried out,— “It is not good for such a man to live. 
He must be already deprived of his senses; let him die the death!” But the King gave for all answer, “The thing is not of that import that he should die for it.” The Princess also heard of it; and she too came to complain to the King that he should cause such a man to be kept in the palace; but before she could open her complaint, the King, joking, said to her,— “Such and such a man is come to sue for thy hand; and I am about to give thee to him.” But she answered, “This shall never be; surely the King hath spoken this thing in jest. Shall a princess now marry a beggar?” “If thou wilt not have him, what manner of man wouldst thou marry?” asked the King. “A man who has gold and precious things enough that he should carry silk stuff3 in his boots, such a one would I marry, and not a wayfarer and a beggar,” answered the Princess. When the people heard that, they went and pulled off Shanggasba’s boots, and when they found in them the pieces of silk he had taken from the image of the garuda-bird, they all marvelled, and said never a word more. But the King thought thereupon, and said, “This one is not after the manner of common men.” And he gave orders that he should be lodged in the palace. The Queen, however, was more and more dismayed when she saw the token, and thus she reasoned, “If the man is here entertained after this manner, and if he has means thus to gain over to him the mind of the King, who shall say but that he may yet contrive to carry his point, and to marry my daughter?” And as she found she prevailed nothing with the King by argument, she said, “I must devise some means of subtlety to be rid of him.” Then she had the man called into her, and inquired of him thus,— “Upon what terms comest thou hither to sue for the hand of my daughter? 
Tell me, now, hast thou great treasures to endow her with as thy name would import, or wilt thou win thy right to pay court to her by thy valour and bravery?” And this she said, for she thought within herself, of a surety now the man is so poor he can offer no dowry, and so he needs must elect to win her by the might of his bravery, which if he do I shall know how to over-match his strength, and show he is but a mean-spirited wretch. But Shanggasba made answer, “Of a truth, though I be called ‘Renowned possessor of treasure,’ no treasure have I to endow her with; but let some task be appointed me by the King and Queen, and I will win her hand by my valour.” The Queen was glad when she heard this answer, for she said, “Now I have in my hands the means to be rid of him.” At this time, while they were yet speaking, it happened that a Prince of the Unbelievers advanced to the borders of the kingdom to make war upon the King. Therefore the Queen said to Shanggasba,— “Behold thine affair! Go out now against the enemy, and if thou canst drive back his hordes thou shalt marry our daughter, and become the King’s son-in-law. “Even so let it be!” answered Shanggasba. “Only let there be given to me a good horse and armour, and a bow and arrows.” All this the Queen gave him, and good wine to boot, and appointed an army in brave array to serve under him. With these he rode out to encounter the enemy. They had hardly got out of sight of the city, however, when the captain of the army rode up to him and said, “We are not soldiers to fight under command of a beggar: ride thou forth alone.” So they went their way, and he rode on alone. He had no sooner come to the borders of the forest, however, where the ground was rough and uneven, than he found he could in no wise govern his charger, and after pulling at the reins for a long time in vain, the beast dashed with him furiously into the thicket. 
“What can I do now?” mourned Shanggasba to himself as, encumbered by the unwonted weight of his armour, he made fruitless efforts to extricate himself from the interlacing branches; “surely death hath overtaken me!” And even as he spoke the enemy’s army appeared riding down towards him. Nevertheless, catching hold of the overhanging boughs of a tree, by which to save himself from the plungings of the horse, and as the soil was loose and the movement of the steed impetuous, as he clung to the tree the roots were set free by his struggles, and rebounding in the face of the advancing enemy, laid many of his riders low in the dust. The prince who commanded them when he saw this, exclaimed, “This one cannot be after the manner of common men. Is he not rather one of the heroes making trial of his prowess who has assumed this outward form?” And a great panic seized them all, so that they turned and fled from before him, riding each other down in the confusion, and casting away their weapons and their armour. As soon as they were well out of sight, and only the clouds of dust whirling round behind them, Shanggasba rose from the ground where he had fallen in his fear, and catching by the bridle one of the horses whose rider had been thrown, laded on to him all that he could carry of the spoil with which the way was strewn, and brought it up to the King as the proof and trophy of his victory. The King was well pleased to have so valiant a son-in-law, and commended him and promised him the hand of the Princess in marriage.
But the Queen, though her first scheme for delivering her daughter had failed, was not slow to devise another, and she said, “It is not enough that he should be valiant in the field, but a mighty hunter must he also be.” And thus she said to Shanggasba, “Wilt thou also give proof of thy might in hunting?” And Shanggasba made answer, “Wherein shall I show my might in hunting?” And the Queen said, “Behold now, there is in our mountains a great fox, nine spans in length, the fur of whose back is striped with stripes; him shalt thou kill and bring his skin hither to me, if thou wouldst have the hand of the Princess and become the King’s son-in-law.” “Even so let it be,” replied Shanggasba; “only let there be given me a bow and arrow, and provisions for many days.” All this the Queen commanded should be given to him; and he went out to seek for the great fox measuring nine spans in length, and the fur of his back striped with stripes. Many days he wandered over the mountains till his provisions were all used and his clothes torn, and, what was a worse evil, he had lost his bow by the way. “Without a bow I can do nothing,” reasoned Shanggasba to himself, “even though I fall in with the fox. It is of no use that I wait for death here. I had better return to the palace and see what fortune does for me.” But as he had wandered about up and down without knowing his way, it so happened that as he now directed his steps back to the road, he came upon the spot where he had laid down to sleep the night before, and there it was he had left the bow lying. But in the meantime the great fox nine spans long, with the fur of his back striped with stripes, had come by that way, and finding the bow lying had striven to gnaw it through. In so doing he had passed his neck through the string, and the string had strangled him. So in this way Shanggasba obtained possession of his skin, which he forthwith carried in triumph to the King and Queen. 
The King when he saw it exclaimed, “Of a truth now is Shanggasba a mighty hunter, for he has killed the great fox nine spans long, and with the fur of his back striped with stripes. Therefore shall the hand of the Princess be given to him in marriage.” But the Queen would not yet give up the cause of her daughter, and she said, “Not only in fighting and hunting must he give proof of might, but also over the spirits he must show his power.” Then Shanggasba made answer, “Wherein shall I show my power over the spirits?” And the Queen said, “In the regions of the North, among the Mongols, are seven dæmons who ride on horses: these shalt thou slay and bring hither, if thou wouldst ask for the hand of the Princess and become the King’s son-in-law.” “Even so let it be,” replied Shanggasba; “only point me out the way, and give me provisions for the journey.” So the Queen commanded that the way should be shown him, and appointed him provisions for the journey, which she prepared with her own hand, namely, seven pieces of black rye-bread that he was to eat on his way out, and seven pieces of white wheaten-bread that he was to eat on his way home. Thus provided, he went forth towards the region of the North, among the Mongols, to seek for the seven dæmons who rode on horses. Before night he reached the land of the Mongols, and finding a hillock, he halted and sat down on it, and took out his provisions: and it well-nigh befell that he had eaten the white wheaten-bread first; but he said, “Nay, I had best get through the black bread first.” So he left the white wheaten-bread lying beside him, and began to eat a piece of the black rye-bread. But as he was hungry and ate fast, the hiccups took him; and then, before he had time to put the bread up again into his wallet, suddenly the seven dæmons of the country of the Mongols came upon him, riding on their horses. So he rose and ran away in great fear, leaving the bread upon the ground.
But they, after they had chased him a good space, stopped and took counsel of each other what they should do with him, and though for a while they could not agree, finally they all exclaimed together, “Let us be satisfied with taking away his victuals.” So they turned back and took his victuals; and the black rye-bread they threw away, but the white wheaten-bread they ate, every one of them a piece. The Queen, however, had put poison in the white wheaten-bread, which was to serve Shanggasba on his homeward journey; and now that the seven dæmons ate thereof, they were all killed with the poison that was prepared for him, and they all laid them down on the hillock and died, while their horses grazed beside them4. But in the morning, Shanggasba hearing nothing more of the trampling of the dæmons chasing him, left off running, and plucked up courage to turn round and look after them; and when he saw them not, he turned stealthily back, looking warily on this side and on that, lest they should be lying in wait for him. And when he had satisfied himself the way was clear of them, he bethought him to go back and look after his provisions. When he got back to the hillock, however, he found the seven dæmons lying dead, and their horses grazing beside them. The sight gave him great joy; and having packed each one on the back of his horse, he led them all up to the King and Queen. The King was so pleased that the seven dæmons were slain, that he would not let him be put on his trial any more. So he delivered the Princess to him, and he became the King’s son-in-law. Moreover, he gave him a portion like to the portion of a King’s son, and erected a throne for him as high as his own throne, and appointed to him half his kingdom, and made all his subjects pay him homage as to himself. “This man thought that his father’s words had failed, and owned not that it was because he buried his bones in a prosperous place that good fortune happened unto him,” exclaimed the Prince. 
And as he let these words escape him, the Siddhî-kür replied, “Forgetting his health, the Well-and-wise-walking Khan hath opened his lips.” And with the cry, “To escape out of this world is good!” he sped him through the air, fleet out of sight.
http://shortstories.ucgreat.com/read/010/216.htm
In this tutorial, we are going to discuss a very interesting topic: the Defunct or Zombie Process in Linux. We will also discuss the init process, the SIGCHLD signal, the system calls fork(), exit(), and wait(), and the Linux commands ps, top, and kill.

What is a Zombie Process in Linux?

In Linux, a Zombie Process is a process that has completed its execution and terminated using the exit() system call, but still has its entry in the system's process table. A Zombie Process is also known as a Defunct Process because it is represented in the process table under that name. A Zombie process is neither alive nor dead, much like a proverbial zombie.

How is a Zombie Process created?

In the Linux environment, we create a child process using the fork() system call, and the process which calls fork() is called the parent process. The parent process has to read the exit status of the child process after its termination, by catching the SIGCHLD signal and then calling the wait() system call, so that the entry of the terminated child process is deleted from the system's process table. This deletion of the terminated child process's entry from the process table is called reaping. If the parent process does not make the wait() system call and continues to execute its other tasks, it will not be able to read the exit status of the child process on its termination, and the entry of the child process will remain in the process table even after the child has terminated. Hence it becomes a Zombie process. By default, every child process is a Zombie process until its parent process waits to read its exit status and then reaps its entry from the process table.

The SIGCHLD signal: When something interesting happens to the child process, such as stopping or terminating, a SIGCHLD signal is sent to the parent process so that it can read the exit status of the child process.
By default, the response to this SIGCHLD signal is to ignore it.

C code to create a Zombie process:

    #include <stdio.h>
    #include <stdlib.h>    // for exit()
    #include <unistd.h>    // for fork() and sleep()

    int main()
    {
        // Creating a child process
        int pid = fork();

        if (pid > 0)            // true for the parent process
            sleep(60);
        else if (pid == 0)      // true for the child process
        {
            printf("Zombie Process Created Successfully!");
            exit(0);
        }
        else                    // true when child process creation fails
            printf("Sorry! Child Process cannot be created...");

        return 0;
    }

This program creates a Zombie process because the child process terminates using the exit() system call while the parent process is sleeping rather than waiting to read its child's exit status. The Zombie process exists only for those 60 seconds; once the parent process terminates, the Zombie process is removed automatically. We can see this Zombie process listed as [output] <defunct> using the following Linux command:

ubuntu:~$ ps -ef

We can also locate the process table entry of this Zombie process using the top command:

ubuntu:~$ top

Zombie Process vs Orphan Process

A Zombie process should not be confused with an Orphan process. An Orphan process is a process that remains in an active or running state even after the termination of its parent process, while a Zombie process is no longer running at all; it just retains its entry in the system's process table. An Orphan process can be of two types:

- Intentionally Orphaned Process: An intentionally Orphaned process is generated when we have to either start an infinitely running service or finish a long-running task that does not require any user intervention. These processes run in the background and usually do not require any manual support.
- Unintentionally Orphaned Process: An unintentionally Orphaned process is generated when some parent process crashes or terminates, leaving its child process in an active or running state. Unlike the intentionally Orphaned process, these processes can be controlled or avoided by the user using the process group mechanism.

Characteristics of a Zombie Process

Following are some of the characteristics of a Zombie process:

- The exit status of a Zombie process can be read by the parent process catching the SIGCHLD signal and using the wait() system call.
- As soon as the parent process reads the exit status of a Zombie process, its entry is reaped from the process table.
- After the reaping of a Zombie process from the process table, its PID (process ID) and the process table entry can be reused by some new process in the system.
- If the parent process of a Zombie process gets terminated or finished, the presence of the Zombie process's entry in the process table generates an operating system fault.
- Usually, a Zombie process can be destroyed by sending the SIGCHLD signal to the parent process using the kill command.
- If a Zombie process cannot be destroyed even by sending the SIGCHLD signal to its parent process, we can terminate its parent process to kill the Zombie process.
- As soon as the Zombie's parent process is terminated or finished, the Zombie process is adopted by the init process, which then kills the Zombie process by catching the SIGCHLD signal and reading its exit status, as it keeps making the wait() system call.

Threats associated with Zombie Processes

Although a Zombie process does not use any system resources, it retains its entry (PID) in the system's process table, and the matter of concern is the limited size of the system's process table. Each active process has a valid entry in the system's process table.
If somehow a very large number of Zombie processes is created, each Zombie process will occupy a PID and an entry in the system's process table, and eventually there will be no space left in the process table. In this way, the presence of a large number of Zombie processes can prevent the creation of any new process, and the system will go into an inconsistent state because neither a free PID (process ID) nor any space in the process table is available. Moreover, the presence of a Zombie process generates an operating system fault when its parent process is not alive. This is not a matter of concern if there are only a few Zombie processes in the system, but it can become a serious problem when there are very many of them.

Maximum number of Zombie Processes in a system

We can find the maximum number of Zombie Processes in a system using the following C program. When executed in the Linux environment, it tries to create an infinite number of child processes using the fork() system call, which is called inside the while loop. As we know, the fork() system call returns a negative value only when it fails to create a child process, so the while loop condition is true whenever a child process is created successfully, and the flag counter inside the loop is incremented. We have also discussed that the size of the process table is limited; hence, after the creation of a large number of child processes (Zombie processes), there is no space left in the process table, the fork() system call can no longer create a child process and returns a negative value, and the while loop condition becomes false. In this way the while loop stops, and the last flag counter value represents the total number of Zombie processes.
    #include <stdio.h>
    #include <unistd.h>    // for fork()

    int main()
    {
        int flag = 0;    // counter variable

        while (fork() > 0)
        {
            flag++;      // counts the number of Zombie processes
            printf("%d\n", flag);
        }

        return 0;
    }

From the output, we can clearly see that the maximum number of Zombie Processes that can be created inside this system is 24679.

NOTE: The maximum number of Zombie Processes is the final value of the flag counter, which will differ every time you run the above C program depending upon the vacant spaces in the system's process table.

How to kill a Zombie Process?

If the parent process does not wait for the termination of its child process, it will not catch the SIGCHLD signal, and hence the exit status of the child process is not read. The child's entry then remains in the process table and it becomes a Zombie process. We then have to destroy this Zombie process by sending the SIGCHLD signal to the parent process using the kill command in Linux. As the parent process receives the SIGCHLD signal, it destroys the Zombie process by reaping its entry from the process table using the wait() system call. Following is the Linux command to kill the Zombie process manually:

ubuntu:~$ kill -s SIGCHLD <PID>

If the Zombie process cannot be destroyed even by sending the SIGCHLD signal to the parent process, we can terminate its parent process; the Zombie process will then be adopted by the init process (PID = 1). This init process becomes the new parent of the Zombie process; it regularly makes the wait() system call to catch the SIGCHLD signal, reads the exit status of the Zombie process, and reaps it from the process table.
Following is the Linux command to kill the parent process:

ubuntu:~$ kill -9 <PID>

NOTE: In both of the above Linux commands, just replace <PID> with the PID (process ID) of the Zombie's parent process.

Summing-up

In this tutorial we have learned about the Zombie or Defunct Process in Linux: how it is created inside the system, how to locate it, the difference between a Zombie Process and an Orphan Process, the characteristics of a Zombie Process, the threats associated with it, how to find the maximum number of Zombie Processes in the system, and different ways to kill a Zombie Process.
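As a closing illustration, the reaping behaviour described in this tutorial can be sketched in a few lines of C. This sketch is mine, not part of the tutorial, and the function name spawn_and_reap is made up for the example: the parent forks a child that exits immediately, then calls waitpid() to read the exit status, so the child's process table entry is removed and no zombie is left behind.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that exits with `code`, then reap it at once with
 * waitpid() so no zombie entry is left in the process table.
 * Returns the child's exit status as seen by the parent, or -1 on error. */
int spawn_and_reap(int code)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;                /* fork() failed */
    if (pid == 0)
        _exit(code);              /* child terminates immediately */

    int status;
    if (waitpid(pid, &status, 0) != pid)   /* parent reaps: entry removed */
        return -1;
    if (!WIFEXITED(status))
        return -1;
    return WEXITSTATUS(status);   /* code the child passed to _exit() */
}
```

Between the child's _exit() and the parent's waitpid(), the child is briefly a zombie; the waitpid() call is what reaps it.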
https://www.linuxfordevices.com/tutorials/linux/defunct-zombie-process
Type: Posts; User: Anddos

Now when i run the program its crashing and i cannot understand why The source code as follows #include "stdafx.h" //#include "my_global.h" // Include this file first to avoid problems...

ive got it compiled by installing the x86 package on a vmware x86 and copying the x86 files over to this pc, its clear to me the x86 installer installs x64 library even those they say x86 package..

theres no MySQL Server 5.6 in C:\Program Files (x86)\MySQL), i dont think posting in the mysql section will help as mysql.h is used for programming related and there is no issues with the server etc..

Hey was just reading through this thread, i am in the same situtation now, i downloaded the x86 mysql installer and picked developer enviroment, now the directories path is C:\Program...

No its not about spamming,its about saving time getting questions answered with programming...

How would i connect to a website or a forum with c++ inside visual studio,i am thinking of developing a program to post the same question to multiforums on the internet all at once and also knowing...

can you link me to the article on the collada format ? ive heard fbx is what professional developers are using

is it possible to scan a process for say an int with the value 5?

basically what i want todo is scan this process for all the ints with the value 5, i am close to getting it working but i think something is missing , can anyone take alook at my code , thanks ...

Basically i want a program that will scan all the regions and then dump the bytes #pragma comment(lib, "advapi32.lib") #include <windows.h> #include <stdio.h> VOID DumpBuffer(const...

BobS0327 you're sample seems to be crashing... would there be away to scan for a int , or float etc in the regions?

D3DXIntersect can do it , check the pick example in dx sdk

I am really starting to get anoyed with directx , i cant seem to place a animated mesh at the position i give , it works fine for static such as a teapot etc.. I have uploaded the source code...

i am using std::search and working well :D

i have this code but i dont know how to compare the data in the memory with strcmp() #include "stdafx.h" #include <windows.h> #include <iostream> using namespace std; how do i go about doing a full string search on all the process's in task manager? i have created the snapshot and enumed the process's , i know you use ReadProcessMemory but unsure how to make the...

hmm ok i didnt know where to asked this but i will asked here basically i need something that will hide a windows exe when its click .. so when i click the windows installer exe nothing apears on...
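Several of the posts above circle around the same step: comparing bytes read out of another process's memory. strcmp() is the wrong tool for that, since the buffer is raw bytes rather than a NUL-terminated string. A minimal sketch of the in-buffer scan itself (the function name find_bytes is my own; reading the remote region with ReadProcessMemory is left out):

```c
#include <stddef.h>
#include <string.h>

/* Scan a raw byte buffer (e.g. one filled by ReadProcessMemory) for a
 * byte pattern, without assuming the buffer is NUL-terminated text.
 * Returns the offset of the first match, or -1 if not found. */
long find_bytes(const unsigned char *buf, size_t buf_len,
                const unsigned char *pat, size_t pat_len)
{
    if (pat_len == 0 || pat_len > buf_len)
        return -1;
    for (size_t i = 0; i + pat_len <= buf_len; i++)
        if (memcmp(buf + i, pat, pat_len) == 0)   /* byte-wise compare */
            return (long)i;
    return -1;
}
```

The same function covers the "scan for an int with the value 5" question: pass the address and size of an int, e.g. `int v = 5; find_bytes(buf, len, (const unsigned char *)&v, sizeof v);`.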
http://forums.codeguru.com/search.php?s=b5bb8995a583dbf6eeb5bf7609604711&searchid=7001921
NAME | SYNOPSIS | DESCRIPTION | EXAMPLES | FILES | SEE ALSO

SYNOPSIS

    #include <sys/strlog.h>
    #include <sys/log.h>

DESCRIPTION

log is a STREAMS software device driver that provides an interface for console logging and for the STREAMS error logging and event tracing processes (see strerr(1M) and strace(1M)). log presents two separate interfaces: a function call interface in the kernel through which STREAMS drivers and modules submit log messages; and a set of ioctl(2) requests and STREAMS messages for interaction with a user level console logger, an error logger, a trace logger, or processes that need to submit their own log messages.

log messages are generated within the kernel by calls to the function strlog():

    strlog(short mid, short sid, char level, ushort_t flags,
           char *fmt, unsigned arg1 . . . );

Required definitions are contained in <sys/strlog.h>, <sys/log.h>, and <sys/syslog.h>. mid is the STREAMS module id number for the module or driver submitting the log message. sid is an internal sub-id number usually used to identify a particular minor device of a driver. level is a tracing level that allows for selective screening out of low priority messages from the tracer. flags are any combination of SL_ERROR (the message is for the error logger), SL_TRACE (the message is for the tracer), SL_CONSOLE (the message is for the console logger), SL_FATAL (advisory notification of a fatal error), and SL_NOTIFY (request that a copy of the message be mailed to the system administrator). fmt is a printf(3C) style format string, except that %s, %e, %E, %g, and %G conversion specifications are not handled. Up to NLOGARGS (in this release, three) numeric or character arguments can be provided.

log is implemented as a cloneable device; it clones itself without intervention from the system clone device. Each open of /dev/log obtains a separate stream to log.
In order to receive log messages, a process must first notify log whether it is an error logger, trace logger, or console logger using a STREAMS I_STR ioctl call (see below). For the console logger, the I_STR ioctl has an ic_cmd field of I_CONSLOG, with no accompanying data. For the error logger, the I_STR ioctl has an ic_cmd field of I_ERRLOG, with no accompanying data. For the trace logger, the ioctl has an ic_cmd field of I_TRCLOG, and must be accompanied by a data buffer containing an array of one or more struct trace_ids elements.

    struct trace_ids {
        short ti_mid;
        short ti_sid;
        char  ti_level;
    };

Each trace_ids structure specifies a mid, sid, and level from which messages will be accepted. strlog(9F) will accept messages whose mid and sid exactly match those in the trace_ids structure, and whose level is less than or equal to the level given in the trace_ids structure. A value of -1 in any of the fields of the trace_ids structure indicates that any value is accepted for that field.

Once the logger process has identified itself using the ioctl call, log will begin sending up messages subject to the restrictions noted above. These messages are obtained using the getmsg(2) function. The control part of this message contains a log_ctl structure, which specifies the mid, sid, level, flags, time in ticks since boot that the message was submitted, the corresponding time in seconds since Jan. 1, 1970, a sequence number, and a priority. The time in seconds since 1970 is provided so that the date and time of the message can be easily computed, and the time in ticks since boot is provided so that the relative timing of log messages can be determined.
    struct log_ctl {
        short mid;
        short sid;
        char  level;        /* level of message for tracing */
        short flags;        /* message disposition */
    #if defined(_LP64) || defined(_I32LPx)
        clock32_t ltime;    /* time in machine ticks since boot */
        time32_t  ttime;    /* time in seconds since 1970 */
    #else
        clock_t ltime;
        time_t  ttime;
    #endif
        int seq_no;         /* sequence number */
        int pri;            /* priority = (facility|level) */
    };

The priority consists of a priority code and a facility code, found in <sys/syslog.h>. If SL_CONSOLE is set in flags, the priority code is set as follows:

    If SL_WARN is set, the priority code is set to LOG_WARNING;
    If SL_FATAL is set, the priority code is set to LOG_CRIT;
    If SL_ERROR is set, the priority code is set to LOG_ERR;
    If SL_NOTE is set, the priority code is set to LOG_NOTICE;
    If SL_TRACE is set, the priority code is set to LOG_DEBUG;
    If only SL_CONSOLE is set, the priority code is set to LOG_INFO.

Messages originating from the kernel have the facility code set to LOG_KERN. Most messages originating from user processes will have the facility code set to LOG_USER.

Different sequence numbers are maintained for the error and trace logging streams, and are provided so that gaps in the sequence of messages can be determined (during times of high message traffic some messages may not be delivered by the logger to avoid hogging system resources). The data part of the message contains the unexpanded text of the format string (null terminated), followed by NLOGARGS words for the arguments to the format string, aligned on the first word boundary following the format string.

A process may also send a message of the same structure to log, even if it is not an error or trace logger. The only fields of the log_ctl structure in the control part of the message that are accepted are the level, flags, and pri fields; all other fields are filled in by log before being forwarded to the appropriate logger.
The data portion must contain a null terminated format string, and any arguments (up to NLOGARGS) must be packed, 32-bits each, on the next 32-bit boundary following the end of the format string. ENXIO is returned for I_TRCLOG ioctls without any trace_ids structures, or for any unrecognized ioctl calls. The driver silently ignores incorrectly formatted log messages sent to the driver by a user process (no error results).

Processes that wish to write a message to the console logger may direct their output to /dev/conslog, using either write(2) or putmsg(2).

The following driver configuration properties may be defined in the log.conf file. If msgid=1, each message will be preceded by a message ID as described in syslogd(1M). If msgid=0, message IDs will not be generated. This property is unstable and may be removed in a future release.

EXAMPLES

Registering as the error logger:

    struct strioctl ioc;

    ioc.ic_cmd = I_ERRLOG;
    ioc.ic_timout = 0;    /* default timeout (15 secs.) */
    ioc.ic_len = 0;
    ioc.ic_dp = NULL;
    ioctl(log, I_STR, &ioc);

Registering as a trace logger:

    struct trace_ids tid[2];

    tid[0].ti_mid = 2;
    tid[0].ti_sid = 0;
    tid[0].ti_level = 1;
    tid[1].ti_mid = 1002;
    tid[1].ti_sid = -1;      /* any sub-id will be allowed */
    tid[1].ti_level = -1;    /* any level will be allowed */
    ioc.ic_cmd = I_TRCLOG;
    ioc.ic_timout = 0;
    ioc.ic_len = 2 * sizeof(struct trace_ids);
    ioc.ic_dp = (char *)tid;
    ioctl(log, I_STR, &ioc);

Example of submitting a log message (no arguments):

    struct strbuf ctl, dat;
    struct log_ctl lc;
    char *message = "Don't forget to pick up some milk on the way home";

    ctl.len = ctl.maxlen = sizeof(lc);
    ctl.buf = (char *)&lc;
    dat.len = dat.maxlen = strlen(message);
    dat.buf = message;
    lc.level = 0;
    lc.flags = SL_ERROR|SL_NOTIFY;
    putmsg(log, &ctl, &dat, 0);

FILES

    /dev/log        Log driver.
    /dev/conslog    Write only instance of the log driver, for console logging.
    log.conf        Log configuration file.

SEE ALSO

strace(1M), strerr(1M), intro(3), getmsg(2), ioctl(2), putmsg(2), write(2), printf(3C), strlog(9F)

STREAMS Programming Guide
http://docs.oracle.com/cd/E19683-01/817-0669/6mgf1n1f9/index.html
    #include <zzip/lib.h>

Note that the two flag types have been split into an o_flags (for fcntl-like openflags) and o_modes where the latter shall carry the zzip_flags and possibly access modes for unix filesystems. Since this version of zziplib can not write zipfiles, it is not yet used for anything else than zzip-specific modeflags.

The zzip_open_ext_io function returns a new zzip-handle (use zzip_close to return it). On error the zzip_open_ext_io function will return null setting errno(3).

The zzip_open_shared_io function takes an extra stream argument - if a stream handle is given then ext/io can be left null and the new stream handle will pick up the ext/io. This should be used only in specific environments however, since zzip_file_real does not store any ext-sequence.

The benefit of the zzip_open_shared_io function comes in when the old file handle was opened from a file within a zip archive. When the new file is in the same zip archive then the internal zzip_dir structures will be shared. It is even quicker, as no check needs to be done anymore trying to guess the zip archive place in the filesystem; here we just check whether the zip archive's filepath is a prefix part of the filename to be opened.

Note that the zzip_open_shared_io function is also used by zzip_freopen, which will unshare the old handle, thereby possibly closing the handle.

The zzip_open_shared_io function returns a new zzip-handle (use zzip_close to return it). On error the zzip_open_shared_io function will return null setting errno(3).

Copyright (c) 1999,2000,2001,2002,2003 Guido Draheim All rights reserved, use under the restrictions of the Lesser GNU General Public License or alternatively the restrictions of the Mozilla Public License 1.1
http://www.makelinux.net/man/3/Z/zzip_open_ext_io
On Sun, 21 Dec 2008, Ingo Molnar wrote:
> * Lai Jiangshan <laijs@cn.fujitsu.com> wrote:
> > In the old codes, these lines confuse me:
> > return (addr & ~PAGE_MASK) - (PAGE_SIZE - BUF_PAGE_SIZE);
> > addr &= PAGE_MASK;
> > This patch mostly make the codes concordant.
>
> ah, okay. Steve, any strong feelings against the patch? And we might just
> go for removing BUF_PAGE_SIZE itself instead.

I'll need to rename BUF_PAGE_SIZE to BUF_SIZE since it is really the size of a buffer per page. The pages will now have some header information on it to let things like splice take the page away and still be able to translate what is on the page.

-- Steve
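For what it's worth, the expression Lai quotes is easier to see with concrete numbers. In the sketch below the constants are assumed for illustration, not taken from the kernel source: with PAGE_MASK defined the usual way as ~(PAGE_SIZE - 1), addr & ~PAGE_MASK is the offset of addr within its page, and PAGE_SIZE - BUF_PAGE_SIZE is the size of the per-page header, so the whole expression is the offset into the page's data area.

```c
/* Illustrative values only -- not the kernel's. */
#define PAGE_SIZE     4096UL
#define PAGE_MASK     (~(PAGE_SIZE - 1))   /* usual definition */
#define HEADER_SIZE   16UL                 /* assumed per-page header */
#define BUF_PAGE_SIZE (PAGE_SIZE - HEADER_SIZE)

/* Offset of addr inside the data area of its page: the quoted
 * expression, with the two steps spelled out. */
unsigned long data_offset(unsigned long addr)
{
    unsigned long in_page = addr & ~PAGE_MASK;     /* offset within page */
    return in_page - (PAGE_SIZE - BUF_PAGE_SIZE);  /* skip the header */
}
```

So for a page starting at 0x5000 with a 16-byte header, the first data byte (address 0x5010) maps to data offset 0.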
http://lkml.org/lkml/2008/12/22/137
I'm new to python, and am embarrassed to ask this python question, since it's so trivial. But I've been trawling through a mass of online tutorials, and they all seem to focus on different things. I have the following python script, called setSound.py

- Code: Select all

    #!/usr/bin/python
    import sys
    import alsaaudio

    print 'Sound volume is being set to', str(sys.argv)
    m = alsaaudio.Mixer()   # defined alsaaudio.Mixer to change volume
    m.setvolume( ARG1 )     # set volume
    vol = m.getvolume()     # get volume float value

All I need to do is pass the value ARG1 to the script as a command line argument, i.e., I want to be able to type setSound.py 50 and have it replace ARG1 by 50. If anybody could explain, I'd be most grateful.
http://www.python-forum.org/viewtopic.php?p=10911
User talk:Hindleyite/Welcome to my new talk page
From Uncyclopedia, the content-free encyclopedia
Revision as of 12:24, January 3, 2007 by Hindleyite

Well, I'm testing out a new forum thing in my namespace. It works exactly like the dump, except it's for my user talk. Each new entry creates a subpage in my talkspace. As always, I will try and reply to everyone as soon as I can - I check for messages whenever I'm online, which for now is pretty much every day/every other day. See here for more details on how I, or rather Silent Penguin, got this to work. Thanks once again, and have a nice day! -- Hindleyite Converse 12:24, 3 January 2007 (UTC)
http://uncyclopedia.wikia.com/wiki/User_talk:Hindleyite/Welcome_to_my_new_talk_page?oldid=1412976