10 May 2012 10:53 [Source: ICIS news] TOKYO (ICIS)--Tosoh had shut down its 550,000 tonne/year No 2 VCM line at Nanyo on 13 November 2011 after an explosion. Operating profit for the full year to 31 March 2012 decreased 29% to Y23.7bn from Y33.5bn the previous year, while net sales rose 0.4% to Y687.1bn from Y684.4bn. The chlor-alkali segment recorded a full-year operating loss of Y9.97bn, against an operating loss of Y3.48bn the previous year, because exports of caustic soda and VCM decreased due to the prolonged shutdown of the VCM plant, Tosoh said.
http://www.icis.com/Articles/2012/05/10/9558136/japans-tosoh-full-year-profit-declines-6.3-on-vcm.html
Samsung has been on an announcement streak with its 3D NAND-infused enterprise SSD products, including a new PCIe SSD and a 15.63 TB SSD. The company has its PM1725 SSD on display here at Dell World in Austin. The PM1725 was previously announced at the Flash Memory Summit, but this is the first time Samsung displayed its newest speedster on the track. Samsung's display highlighted the PM1725 pulling down over 6.2 GBps with a sequential workload (128K, QD32). To put this in perspective, this exceeds the top speed we recorded with the PMC Flashtec NVRAM drive, which feeds data directly out of DDR3 DRAM. The eye-watering 6.2 GBps was impressive, but static demos only highlight the most impressive performance metric, so companies usually only show the workload that provides the best results. We are never satisfied with a single workload. After commandeering the server (with permission of course, ahem) we switched the test over to 4K random read. With a single worker, we recorded over 1,000,000 IOPS (yes, 1 million) with a 4K random read workload. This is the highest performance we have ever recorded with a single SSD, including other bleeding edge PCIe models. We know that read performance of any ilk will always look great, so we switched gears and ran a 4K random write workload, and came up with an impressive 343,072 IOPS. We didn't have time to precondition the SSD, which we usually do for six hours in our product evaluations, so these random write results should be taken with a grain of salt. Disclaimers aside, we rolled on to a sequential write test. We ran a 128K sequential write workload, and the PM1725 provided 1,955 MB/s of performance. This isn't as high as some PCIe SSDs that come through our labs, but it is important to note that this is with a prototype. It's good to see the prototype PM1725 performing so well in multiple workloads, even though we couldn't actually see the SSD as it was buried inside a server chassis. 
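Since the random results are quoted in IOPS and the sequential results in GBps, it helps to be able to convert between the two: throughput is simply IOPS multiplied by the block size. A quick sanity check in Python (`iops_to_gbps` is a throwaway helper for this example, not a real benchmarking API):

```python
def iops_to_gbps(iops, block_bytes):
    """Convert an IOPS figure at a given block size into GB/s (decimal)."""
    return iops * block_bytes / 1e9

# 1,000,000 random-read IOPS at 4 KiB blocks:
print(iops_to_gbps(1_000_000, 4096))   # ~4.1 GB/s
# 343,072 random-write IOPS at 4 KiB blocks:
print(iops_to_gbps(343_072, 4096))     # ~1.4 GB/s
```

So the 1 million 4K random-read IOPS works out to roughly 4.1 GB/s of data moved, about two-thirds of the quoted 6.2 GBps sequential ceiling, which is why both numbers are impressive in their own right.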
We were able to snap a pic of the actual SSD itself earlier this year at the Flash Memory Summit, where it was listed with a top random read/write speed of 1,000,000/120,000 IOPS, and sequential read/write speeds of 6,000/2,200 MBps. This beefy SSD tops out at 3.2 TB and is powered by the EPIC PCIe 3.0 x8 controller. It also makes the important additions of dual-port functionality (x4 x4) and multiple namespace support. The PM1725 comes in both 2.5" and PCIe flavors. Paul Alcorn is a Contributing Editor for Tom's Hardware, covering Storage. Follow him on Twitter and Google+. The days of needing even one 2.5" hard drive bay in a laptop are numbered. First optical, then that. 6GB/s and 1M IOPS...this thing is a true monster. I suspect it'll be very useful for servers that do 4K/8K video storage/streaming/processing. The storage solution would no longer be a bottleneck for this application, nosiree indeed.
https://www.tomshardware.com/news/samsung-ssd-pm1725-iops-gbps,30385.html
WCSRTOMBS(3)              Linux Programmer's Manual              WCSRTOMBS(3)

NAME
       wcsrtombs - convert a wide-character string to a multibyte string

SYNOPSIS
       #include <wchar.h>

       size_t wcsrtombs(char *dest, const wchar_t **src, size_t len,
                        mbstate_t *ps);

DESCRIPTION
       The wcsrtombs() function converts the wide-character string *src into
       a multibyte string starting at dest, as if by repeatedly calling
       wcrtomb(dest, *src, ps) as long as this call succeeds, and then
       incrementing dest by the number of bytes written and *src by one. On
       success, the function returns the number of bytes converted; if a wide
       character is encountered which cannot be converted, (size_t) -1 is
       returned, and errno set to EILSEQ.

CONFORMING TO
       C99.

NOTES
       The behavior of wcsrtombs() depends on the LC_CTYPE category of the
       current locale.

       Passing NULL as ps is not multi-thread safe.

SEE ALSO
       iconv(3), wcsnrtombs(3), wcstombs(3)

COLOPHON
       This page is part of release 3.05 of the Linux man-pages project. A
       description of the project, and information about reporting bugs, can
       be found at.

GNU                               1999-07-25                     WCSRTOMBS(3)
http://modman.unixdev.net/?sektion=3&page=wcsrtombs&manpath=Debian-5.0
#include <TUnicodeTextStorage.h>

List of all members.

- Default constructor.
- [virtual] Disposes of the text storage buffer.
- [protected] Internal method to allocate or reallocate the buffer. The number of characters can be less than the current number of characters in the buffer, in which case they are truncated.
- [inline] Returns the number of characters actually in the buffer.
- Deletes.
- Inserts the given CFStringRef.
- Inserts the given text before the character at position. So to append text, position must equal the current number of characters, the value returned by Count().
- If this routine returns true, then you can be assured that there is room in the buffer for totalCount characters. The previous contents of the buffer will be preserved. If totalCount is smaller than the current number of characters in the buffer, the contents will be truncated.
- Sets the characters in the buffer. Writes over any previous buffer contents; no data is preserved.

Member data:
- Actual number of characters in the storage buffer.
- Total number of characters that have been allocated.
- The buffer where characters are stored.
http://augui.sourceforge.net/class_t_unicode_text_storage.html
React onClick event vs JS addEventListener

So adding an onClick event is easy.

class App extends React.Component {
  handleClick = () => console.log('Hi there');
  render() {
    return <button onClick={this.handleClick}>Say something</button>;
  }
}

All you need to do is define your event handler function and attach it to your HTML element inline. But what about plain ol’ JavaScript event listeners? document.addEventListener(); Would it be easier, and more beneficial, to use that instead of React’s inline onClick? Maybe. Let’s start with the basics.

What is a JS Event Listener

According to the Mozilla JS docs, an event listener is a function that gets called when a specific event occurs. Simple example: when a button gets clicked, something happens.

How to add a JS Event Listener

<button id="greetBttn">Click me!</button>
<script>
  // Find element
  const buttonEl = document.getElementById('greetBttn');
  // Define the handler (kept in a variable so it can be removed later)
  const greet = () => alert("Hi user!");
  // Add event listener
  buttonEl.addEventListener('click', greet);
</script>

Okay, this isn’t too bad. Let’s do a code breakdown. First I create a variable named buttonEl, and its value is the button element node. Then I’m adding an event listener by using buttonEl.addEventListener(). addEventListener(type, handlerFunc); The addEventListener function requires 2 arguments: what type of event you’re looking for, and the function to trigger after the event has happened. You can look for all the event references here. But like any good samaritan, you must clean up after yourself. You should always remove an event listener when you’re done using it.

How to remove a JS Event Listener

buttonEl.removeEventListener('click', greet);

Just like that! You must use removeEventListener to clean up, and you must pass it the same function reference you gave to addEventListener — passing a brand-new anonymous function would not remove anything. So far, from the look of using inline onClick versus a plain JS event listener, I think I’d rather go with onClick.
But there are pros to using addEventListener:
- It works in every browser
- You don’t need React to do this
- You don’t need to import anything to do this

What is React onClick

onclick is an inline event. Notice how I didn’t just call it React onclick. That’s because this has been a thing since way before React.

<button onclick="alert('Hi there!');">Greet me!</button>

That code actually works, and events used to be written like that. React has adopted this same style of attaching events inline, but has added its own touch to it. All of React's events are what they call Synthetic Events.

A quick overview of Synthetic Events

SyntheticEvent is a React wrapper that all regular events go through in React. React created this to keep consistency across all browsers, and to increase performance, since SyntheticEvent is pooled.

Does React remove event listeners? Yes. Let’s take a look at a plain JS event listener in a React component.

class App extends React.Component {
  handleClick = () => {
    alert("Hi there");
  };
  componentDidMount() {
    document.getElementById('foo')
      .addEventListener('click', this.handleClick)
  }
  componentWillUnmount() {
    document.getElementById('foo')
      .removeEventListener('click', this.handleClick)
  }
  render() {
    return <button id="foo">Say something</button>;
  }
}

Since I’ve added my own custom event listener, I have to clean it up myself. But if we do it the React way, React handles that for us!

class App extends React.Component {
  handleClick = () => {
    alert("Hi there");
  };
  render() {
    return <button onClick={this.handleClick}>Say something</button>;
  }
}

And it’s less code. Double win for us!

Which to pick 99% of the time? Just use the React onClick Synthetic Event.

When to use addEventListener

There are a few use cases where you need to implement your own event listener. This is usually because of a specific UI behavior that you or the designers want to achieve.
An example is when you have a modal popup, and you want to make it disappear when you click anywhere but the dialog itself. How to do that? Here’s the code answer. For the sake of simplicity, I’m going to use inline styling as well.

const styles = {
  dialog: {
    position: "fixed",
    top: 0,
    left: 0,
    bottom: 0,
    right: 0,
    margin: "auto",
    display: "flex",
    alignItems: "center",
    justifyContent: "center",
    width: 200,
    height: 200,
    backgroundColor: "#fff",
    boxShadow: "0 2px 5px 0 rgba(0,0,0,0.25)"
  }
};

class App extends React.Component {
  state = { showDialog: true };

  handleDocumentClick = e => {
    // returns an element object or null
    const isClosest = e.target.closest(`[id=foo]`);
    if (this.state.showDialog && !isClosest) {
      this.setState({ showDialog: false });
    }
  };

  componentDidMount() {
    document.addEventListener("click", this.handleDocumentClick);
  }

  componentWillUnmount() {
    document.removeEventListener('click', this.handleDocumentClick);
  }

  render() {
    if (!this.state.showDialog) {
      return null;
    }
    return (
      <div id="foo" style={styles.dialog}>
        Click outside this box
      </div>
    );
  }
}

Conclusion

Neither approach is bad, but my recommendation is to use React’s onClick Synthetic Event because:
- It handles event pooling
- It handles removing event listeners for you
- It’s better optimized
- It’s more efficient with DOM resolution and event delegation

This is not to say you should never use custom event listeners. Use them only when you can’t do things with regular React events. I like to tweet about React and post helpful code snippets. Follow me there if you would like some too!
https://linguinecode.com/post/react-onclick-event-vs-js-addeventlistener
JSON RPC handler and message router

This is part 3 of a series about the Puppet Extension for Visual Studio Code. See the bottom of this post for links to all other posts.

JSON RPC Handler

The byte stream from the network is formed into messages based on version 2 of the JSON RPC protocol, which is defined in this specification document. Microsoft have a shorter version of the specification in the language server protocol repository. If you’ve seen HTTP requests before, this format should look familiar. It is composed of a header and a content part, separated by a carriage return (\r) and linefeed (\n) character:

+--------+------+---------+
| Header | \r\n | Content |
+--------+------+---------+

The Header is composed of key-value pairs separated by a colon (:), with each pair terminated by a carriage return and linefeed. There is a mandatory Content-Length header which describes how many bytes long the Content part is. As we’re encoding bytes into strings, it is important to know what text encoding is used. The header is always ASCII, while the content defaults to UTF8; however, this can be set using the Content-Type header, for example to set UTF16: Content-Type:application/vscode-jsonrpc; charset=utf-16.

So let’s say you wanted to send the text “Hello World”; the entire JSON message would look like:

Content-Length:11\r\n\r\nHello World

There is a double \r\n, as the first one indicates the end of the header pair, and the second indicates the end of the Header part. Conversely, to read in a stream of bytes and convert it to a JSON message:

- Keep reading bytes until a double \r\n is received
- Parse the header and determine how long the Content part is from the Content-Length header
- Keep reading bytes until the content length is read

The JSON RPC 2.0 specification also allows for batch operations; however, the VS Code language client does not, so the language server does not need to implement that part of the protocol.
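The read-side steps above can be sketched in a few lines. This is a minimal Python illustration (the actual server is written in Ruby, and parse_message is a made-up name for this example; Content-Type/charset handling is ignored):

```python
def parse_message(buf: bytes):
    """Split one header+content frame off the front of `buf`.

    Returns (headers, content, rest) or None if the frame is incomplete.
    """
    # The header part ends at the first double \r\n.
    sep = buf.find(b"\r\n\r\n")
    if sep == -1:
        return None          # keep reading bytes
    headers = {}
    for line in buf[:sep].split(b"\r\n"):
        key, _, value = line.decode("ascii").partition(":")
        headers[key.strip()] = value.strip()
    # Content-Length is mandatory and counts bytes, not characters.
    length = int(headers["Content-Length"])
    body = buf[sep + 4 : sep + 4 + length]
    if len(body) < length:
        return None          # content not fully received yet
    return headers, body.decode("utf-8"), buf[sep + 4 + length :]
```

Feeding it the example frame above yields the parsed headers, the content "Hello World", and an empty remainder ready for the next frame.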
Implementation

The implementation used in the Puppet Language Server was heavily inspired by the JSON parsing in the PowerShell Language Server. The receive_data method assembles the incoming byte stream into complete messages. The parse_data method then takes the extracted message content and converts it from a JSON string into a Ruby object. It then validates that the RPC version is correct (2.0) and determines whether the message is a Request or a Notification. Then the message is sent to the Message Router for processing.

The JSON Handler also exposes a few helper methods:

reply_* These methods will send reply messages but hide all the mundane work to craft the JSON object. For example, the reply_error method takes an error code and error text parameter and will send an error message back to the Language Client.

send_show_message_notification The language server can send a JSON event to pop up a dialog window on the client. This can be handy for the Server to notify the Client of fatal errors.

Request object

When a Request message is sent, the JSON RPC Handler creates a Request object, whereas a notification will only get the raw JSON object. This is because sending a response to a Request generally requires the original Request ID. The methods on this class automatically craft the required JSON objects for responses.

The protocol

The protocol itself is well documented by Microsoft. It is well worth reading through all of the available messages before reading about the message router.

Message Router

The language protocol describes three types of messages:

A Request. This message requires a response from the other party. For example, the client sends a request to autocomplete items for a cursor location, and the server sends back the possible autocomplete items.

A Notification. This is a message which does not require/have a response from the other party. For example, the server sends a notification message to the client, to display a warning message box that the version of Puppet is too old and functionality will be limited.
A custom message. This is a message that is not described in the protocol but can still be sent over the same channel. For example, the Puppet Language Server uses a custom request for the Node Graph Preview: the client sends a puppet/compileNodeGraph request and the server responds with a CompileNodeGraphResponse object.

The message router is composed of a few modules:

Document Store.

Crash Dump module When the server crashes, a dump file is written to %TMP%\puppet_language_server_crash.txt. This dump file contains all of the relevant version information, a copy of the document store, the relevant backtrace, and the request that triggered the crash.

Request Handler

The request handler (the receive_request method) is basically a really big case statement where the request name determines the code path. The following messages are handled directly by the message router:

initialize.

shutdown The shutdown method is sent to indicate the client is about to disconnect and the server should start its shutdown process. The shutdown is actually handled in the TCP Server, where the client disconnection takes place.

puppet/getVersion This custom request drives the Loading Puppet (xx%) message you see in the bottom right corner of VS Code. If, for example, the functions haven’t loaded, then they will not be available during hover or autocomplete requests.

The following requests are handled by providers:

puppet/getResource Similar to puppet resource, this custom request will list all of the puppet resources for a type name. For example, a request with typename = user will return all of the user resources on the system. Typically the client will only issue typename requests; however, the server does support adding a resource title in the request: typename = user, title = username. This is handled by the PuppetHelper.

puppet/compileNodeGraph This custom request will compile the supplied manifest file and then generate a DOT file which shows all of the resources, and their dependencies, in the catalog. This is handled by the PuppetParserHelper.
textDocument/completion The language client will issue a completion request when it is trying to auto-complete text, either automatically or via a user command (typically Ctrl-Space). This request will return an array of short-form items. The client will then issue completionItem/resolve.

completionItem/resolve The resolution request is sent from the client when the user wishes to get more detailed information about a completion item from a previous completion request. This is also handled by the CompletionProvider.

textDocument/hover When you hover the mouse cursor over text, the client sends hover requests to the language server. The server can then interpret where the cursor is and provide useful information. This is handled by the HoverProvider.

Notification Handler

The notification handler (the receive_notification method), much like the request handler, is basically a really big case statement where the notification name determines the code path.

initialized The initialized notification is sent from the client to the server after the client receives the result of the initialize request, but before the client sends any other request or notification to the server.

exit Triggers the underlying server connection to close. For example, if the underlying transport was TCP, then the server would disconnect the client. Typically this should be sent after a shutdown request.

textDocument/didOpen and textDocument/didChange.

textDocument/didSave This is a null event for this server.

textDocument/didClose The document that is closed is removed from the document store to help save some memory.

Wrapping up…

In this post I looked at how the JSON RPC messages are interpreted, and then actioned in the message router. In the next post we start looking at one of the language providers in detail.
Blog series links

Part 1 - Introduction to the extension
Part 2 - Introduction to the Language Server
Part 3 - JSON RPC handler and message router
Part 4 - Language Providers
https://glennsarti.github.io/blog/puppet-extension-deep-dive-part3/
Everyone knows this historical story!!! The tortoise and the hare had a race, and sheer determination and dedication made the tortoise win. We, in this post, are going to recreate that story. Not exactly ;). It is just a simulation of the famous race, but it doesn’t always end with the tortoise’s win. While inching towards the winning line, both contestants face various hurdles and distractions, which are used to define their journey.

First let’s understand the presumptions about speed accelerators and decelerators for these two.

The hare may
- Hop at full speed
- Hop at a slow speed
- Slip a bit from bumps on the way
- Slip to a great extent due to hillocks
- Sleep to rest

The tortoise never sleeps on the way. He may
- Plod at full speed
- Slip
- Plod at slow speed

With these premises about our contenders, class race is created with two methods to represent their movement: moveRabbit and moveTurtle. The above mentioned motion specifiers are used as conditions to define the distance covered (forward or backward) by each of them. The movement of each contender is selected on the basis of a random number generated between 1 and 10. This generated number is checked against certain values and a value is returned.

The Tortoise and the Hare - Race Path

The interface is created with the GraphWin object from graphics.py. GIF images of a rabbit and a tortoise are used to display their motion. For this, the Image method from graphics.py is used. It accepts a point as its first argument and the name of the image file (.gif) to be displayed as its second. The Y coordinates of the tortoise and the hare are not changed. With every call to the methods of the race class (moveRabbit and moveTurtle), the X coordinates of the contestants are updated and moved to the new position. The undraw() method is used to clear a contestant at its last position before moving. The simulation starts when a viewer clicks in the window.
Simulation ends when either of the participants crosses the finish line (depicted by the word Winner). The winner is declared and its prize (a carrot or a bag of insects) is displayed.

Program Code

import random
import time
from graphics import *

class race:
    def __init__(self, turtle, rabbit):
        self.turtle = turtle
        self.rabbit = rabbit

    def moveTurtle(self):
        r = random.randrange(9) + 1
        turtleMoves = [3, 6]
        if (r >= 1 and r <= 5):
            self.turtle = turtleMoves[0]
        elif (r == 6 or r == 7):
            self.turtle = -turtleMoves[1]
        else:
            self.turtle = 1
        return self.turtle

    def moveRabbit(self):
        r = random.randrange(9) + 1
        rabbitMoves = [9, 12, 2]
        if (r == 3 or r == 4):
            self.rabbit = rabbitMoves[0]
        elif (r == 5):
            self.rabbit = -rabbitMoves[1]
        elif (r >= 6 and r <= 8):
            self.rabbit = rabbitMoves[2]
        elif (r > 8):
            self.rabbit = -2
        return self.rabbit

def main():
    timer = 0
    markLn = 0
    # create race path
    workArea = GraphWin('Simulation of Turtle and Rabbit Race', 800, 200)  # give title and dimensions
    workArea.setBackground('white')
    stx = 50
    sty = 20
    p1 = Image(Point(20, 100), 'start.gif')
    p1.draw(workArea)
    p1 = Image(Point(750, 100), 'winner.gif')
    p1.draw(workArea)
    ln = Line(Point(stx, sty), Point(stx, sty + 160))
    ln.draw(workArea)
    ln = Line(Point(700, sty), Point(700, sty + 160))
    ln.draw(workArea)
    # set players at start line
    x1 = 30
    x2 = 30
    rPos = Point(x1, 40)
    p1 = Image(rPos, 'rabbit.gif')
    p1.draw(workArea)
    tPos = Point(x2, 140)
    p2 = Image(tPos, 'tortoise.gif')
    p2.draw(workArea)
    # Ask viewer to start race by clicking
    strt = Text(Point(workArea.width / 2, workArea.height / 2), "Get Set Go!!! click to start race")
    strt.setSize(24)
    strt.setFace('times roman')
    strt.setTextColor('green')
    strt.draw(workArea)
    workArea.getMouse()
    strt.undraw()
    race1 = race(1, 1)
    while (x1 <= 720 and x2 <= 720):
        x1 = x1 + race1.moveRabbit() * 5
        if x1 < 1:
            x1 = 30
        rPos = Point(x1, 40)
        p1.undraw()
        p1 = Image(rPos, 'rabbit.gif')
        p1.draw(workArea)
        x2 = x2 + race1.moveTurtle() * 5
        if x2 < 1:
            x2 = 30
        tPos = Point(x2, 140)
        p2.undraw()
        p2 = Image(tPos, 'tortoise.gif')
        p2.draw(workArea)
        timer += 1
        time.sleep(1)
    if (x1 >= x2):
        msg = Text(Point(workArea.width / 2, workArea.height / 2), "Wonderful! Rabbit gets a")
        winner = 'rabbit.gif'
        prize = 'carrot.gif'
    else:
        msg = Text(Point(workArea.width / 2, workArea.height / 2), "Wow!!! Turtle gets")
        winner = 'tortoise.gif'
        prize = 'sack.gif'
    # display winner
    p1.undraw()
    p2.undraw()
    msg.setSize(24)
    msg.setFace('times roman')
    msg.setTextColor('orange')
    msg.draw(workArea)
    p2 = Image(Point(workArea.width / 2 - 50, workArea.height / 2 - 50), winner)
    p2.draw(workArea)
    p2 = Image(Point(workArea.width / 2 + 100, workArea.height / 2 + 50), prize)
    p2.draw(workArea)

main()

2 Comments

Is this code old, or does it just not run in PyCharm? I ended up copying it straight into PyCharm and it gave me 5 error messages (lines: 111, 45, 877, 4061, 4006).

Hello Pascal. This code is OK. Save the code as a .py file and run it from the Python command window. Make sure that you have the images with the same file names as used in the code.
https://csveda.com/the-tortoise-and-the-hare-simulation-with-python-code/
Configure and run an HTTP server.

You define an HTTP server by creating a shelf_serve.yaml file. This file lists the middleware and handlers that should be used to handle requests. For example:

handlers:
  /html:
    type: compound
    handlers:
      /web:
        type: pub
        package:
          path: test_web
      /static:
        type: static
        path: test_static
      /source:
        type: proxy
        url:
  /api:
    type: rpc
  /echo:
    type: echo
middlewares:
  log_requests:
dependencies:
  test_api:
    path: test_api
  test_echo:
    path: test_echo

In the middlewares section of the shelf_serve.yaml file, you can list the middlewares to be used and define optional configuration parameters. There are two built-in middlewares: stdout. Example:

middlewares:
  cors:
    allow:
      headers: "rpc-auth, content-type"
      methods: "POST, GET, PUT, OPTIONS, DELETE"
      origin: "*"

In the handlers section of the shelf_serve.yaml file, you can list different handlers to be used for different routes. There are five built-in handlers: path parameter. url parameter. rpc package. Note: when using the rpc handler, you should add the package that defines the api classes in the dependencies section.

You can define custom handlers and middleware by annotating a factory function that creates the handler or middleware based on some configuration parameters. For example:

import 'package:shelf_serve/shelf_serve.dart';
import 'package:shelf/shelf.dart';

@ShelfHandler("echo")
createEchoHandler(String type, String route, Map<String, dynamic> config) {
  return (Request r) async => new Response.ok(await r.readAsString());
}

You should add the package that defines those custom handlers and middlewares to the dependencies section.

First install the shelf_serve executable:

pub global activate shelf_serve

Then go to the directory where the shelf_serve.yaml file is located and run:

shelf_serve serve

By default the server will listen on port 8080. You can define another port with:

shelf_serve serve --port 5000

Create a dart file and import all the necessary dependencies.
Run the method serve:

serve('path/to/shelf_serve.yaml', port: 5000);

Alternatively, you can run the method serveInIsolate, which will automatically do all the necessary imports based on the dependencies section in the shelf_serve.yaml file.

First install the shelf_serve executable:

pub global activate shelf_serve

Then go to the directory where the shelf_serve.yaml file is located and run:

shelf_serve create-docker-project

By default the project will be created in build/project. To select another output directory run:

shelf_serve create-docker-project --output-directory some/other/path

Next, go to the built project directory. There will be a Dockerfile there. You can create a container image by running:

docker build -t my/app .

And to run this image:

docker run -d -p 8080:8080 my/app

You can install the package from the command line:

$ pub global activate shelf_serve

The package has the following executables:

$ shelf_serve

Add this to your package's pubspec.yaml file:

dependencies:
  shelf_serve: "^0.2.7"

You can install packages from the command line with pub:

$ pub get

Alternatively, your editor might support pub get. Check the docs for your editor to learn more. Now in your Dart code, you can use:

import 'package:shelf_serve/shelf_serve.dart';

Package is getting outdated. The package was released 55 weeks ago.

Package is pre-v1 release. While there is nothing inherently wrong with versions of 0.*.*, it usually means that the author is still experimenting with the general direction of the API.

Fix analysis and formatting issues. Analysis or formatting checks reported 6 hints.

Run dartfmt to format lib/shelf_serve.dart.
Run dartfmt to format lib/src/annotations.dart.

Similar analysis of the following files failed: lib/src/config.dart (hint), lib/src/handlers.dart (hint), lib/src/middlewares.dart (hint), lib/src/server_script.dart (hint)

Maintain an example. Create a short demo in the example/ directory to show how to use this package.
Common file name patterns include main.dart or example.dart, or you could also use shelf_serve.dart.
https://pub.dartlang.org/packages/shelf_serve
Christoph Hellwig wrote:
> On Mon, Apr 27, 2009 at 10:32:18AM +0200, monstr@monstr.eu wrote:
>> +# ifdef __uClinux__
>>  struct stat64 {
>>  	unsigned long long st_dev;
>>  	unsigned long __unused1;
>> @@ -69,5 +69,29 @@ struct stat64 {
>>
>>  	unsigned long __unused8;
>>  };
>> +# else /* __uClinux__ */
>> +/* FIXME */
>> +struct stat64 {
>> +	unsigned long long st_dev;	/* Device. */
>> +	unsigned long long st_ino;	/* File serial number. */
>> +	unsigned int st_mode;		/* File mode. */
>> +	unsigned int st_nlink;		/* Link count. */
>> +	unsigned int st_uid;		/* User ID of the file's owner. */
>> +	unsigned int st_gid;		/* Group ID of the file's group. */
>> +	unsigned long long st_rdev;	/* Device number, if device. */
>> +	unsigned short __pad2;
>> +	long long st_size;		/* Size of file, in bytes. */
>> +	int st_blksize;			/* Optimal block size for I/O. */
>> +	long long st_blocks;		/* No. 512-byte blocks allocated */
>> +	int st_atime;			/* Time of last access. */
>> +	unsigned int st_atime_nsec;
>> +	int st_mtime;			/* Time of last modification. */
>> +	unsigned int st_mtime_nsec;
>> +	int st_ctime;			/* Time of last status change. */
>> +	unsigned int st_ctime_nsec;
>> +	unsigned int __unused4;
>> +	unsigned int __unused5;
>> +};
>> +# endif /* __uClinux__ */
>
> Userspace ABIs must not change because of MMU vs not.

:-)

I expect that someone will beat me for the last two patches.

Short answer: You are right, I know about it.

Long answer: I wanted to find out more information about the stat64 structure. I mean, there are some variables with different types and I would like to know more about them. For example, does the long long type make sense for st_blocks? IMHO unsigned will be better. And I would like to create a new stat64 structure which is not a fault for both the noMMU/MMU versions. In the noMMU implementation st_blocks is unsigned long. Is that OK, or is unsigned long long better?

Thanks,
Michal

--
Michal Simek, Ing. (M.Eng)
w:
p: +42-0-721842854
http://lkml.org/lkml/2009/4/27/116
Opened 3 years ago
Closed 3 years ago

#28387 closed enhancement (fixed)

Implement function that returns the balanced digit representation of an integer

Description

This ticket adds the balanced_digits function, returning the digit representation of an integer where the digits are centered at 0. For example, this enables:

sage: 8.balanced_digits(3)
[-1, 0, 1]
sage: 33.balanced_digits(6)
[3, -1, 1]

Change History (13)

comment:1 Changed 3 years ago by
- Branch set to u/gh-sheareralexj/implement_function_that_returns_the_balanced_digit_representation_of_an_integer

comment:2 Changed 3 years ago by
- Commit set to 1a18b2b5dc0fe0f3eeb9621ec1f16adc0e4fb02b
- Status changed from new to needs_review

comment:3 Changed 3 years ago by
- Reviewers set to Bruno Grenet
- Status changed from needs_review to needs_work

That's a good idea to add this. I've a few comments on your implementation though.

Incorrect algorithm?

First, it is not clear to me (so the docstring should be adapted) what the exact specification is. I would say that the balanced base b uses b digits centered around zero. Thus if b is odd, there is only one possibility, namely digits between -b//2 and b//2 (both included). For instance in base 9, one uses digits from -4 to 4. If b is even, one has to choose between digits from -b//2 to b//2-1 or from -b//2+1 to b//2 (base 10 for instance: either -5 to 4 or -4 to 5), and this is defined by the value of positive_shift. Is that what you have in mind?

Note that in this case, your implementation has a problem:

sage: n = -46
sage: n.balanced_digits(10)  # correct
[4, -5]
sage: n.balanced_digits(10, positive_shift=True)  # should be [4, 5, -1]
[4, -5]

I think that a correct algorithm (if my specification is the one you have in mind) would be easier to write using a "balanced quo_rem" that on input a and b returns (q, r) such that a = b*q + r with -b//2 <= r <= b//2 (with the same subtleties for even bases).
This could go as follows: def balanced_quo_rem(self, right, positive_shift=True): q, r = self.quo_rem(right) if r > right//2: return (q+1, r-right) if right % 2 == 0 and not positive_shift and r == right//2: return (q+1, r-right) return (q, r) Then the algorithm could be implemented as (I just put the main idea): def balanced_digits(self, base, positive_shift=True): digits = [] n = self while n > 0: q, r = n.balanced_quo_rem(base, positive_shift) digits.append(r) n = q return digits Other remarks Below, I indicate a few general comments on your current implementation (and the docstring, tests, etc.), though it should be changed anyway. In the docstring - There should not be a blank line as first line; - It should be "Return" rather than "Returns" (as mentioned in the doc); - Following the doc also, I would put a blank line between the first sentence ("Return ... given base") and the rest of the specification; - I would write "Return the list of balanced digits" rather than "Return a list of balanced digit" (since there is uniqueness); - You should replace b/2by b//2in the docstring to indicate a floor division (if the base is odd); - Your TESTS::; - You should write a more precise specification more generally. In the code itself - I think that having base = 10as default would be a good idea, to be consistent with n.digits()for instance (should be then mentioned in the doc obviously); - You may raise an Exceptionif the base is not an integer (and add an EXAMPLEfor this). Something like: if isinstance(base, Integer): _base = base else: try: _base = Integer(base) except TypeError: raise ValueError('base should be an integer') - You should not use floor(b/2), but rather b//2; - You may compute self_absand negas follows: self_abs, neg = self.abs(), self.sign() comment:4 Changed 3 years ago by Actually, there is another (simpler?) 
algorithm: - First compute the classical base brepresentation of the input integer n; - Scan the digits from low to high order, and if a digit dis larger than b//2, replace dby d-band remove one to the next digit. You still have to manage the subtlety for even bases but that's not a big deal. Also, the two-step algorithm I mention works for nonnegative integers. For negative integers, do the same with abs(n) and take the opposite of all digits. For even bases, you have to do it with not positive shift. comment:5 Changed 3 years ago by Thanks for reviewing, and thanks for finding the error! Yeah, it seems the issue is that the code is handling negatives incorrectly. Today I'll work at fixing this as well as implementing your other suggestions and algorithms. Thanks! comment:6 Changed 3 years ago by - Commit changed from 1a18b2b5dc0fe0f3eeb9621ec1f16adc0e4fb02b to 714b193f9e225dd396c7f52f1791bca8df768ac2 Branch pushed to git repo; I updated commit sha1. New commits: comment:7 Changed 3 years ago by I updated the code and used the documentation you suggested. I tested the runtime on all three algorithms and went with the last one (using the standard n.digits(b) function) as it worked fastest for large bases. comment:8 Changed 3 years ago by - Status changed from needs_work to needs_review comment:9 Changed 3 years ago by - Status changed from needs_review to needs_work The new version is much better indeed! I still have a two remarks: - The case base = 2is problematic. Depending on positive_shiftit can only represent nonnegative or nonpositive integers. The simplest way to deal with it is probably to impose that base > 2. - I would add some details in the docstring for the parameter positive_shift. In the block INPUT, you may write something like "For even bases, the representation uses digits from -b//2 + 1to b//2if set to True, and from -b//2to b//2-1otherwise. This has no effect for odd bases." 
comment:10 Changed 3 years ago by - Commit changed from 714b193f9e225dd396c7f52f1791bca8df768ac2 to a69b95259043fa5b5b0faa1cb12c4bf6506fea3d Branch pushed to git repo; I updated commit sha1. New commits: comment:11 Changed 3 years ago by - Status changed from needs_work to needs_review Awesome, made those two edits. Thank you much. comment:12 Changed 3 years ago by - Status changed from needs_review to positive_review That's fine for me! A side note: It is usually not needed (nor advised) to merge new versions in a branch. As long as the merge automatically works¹, it is fine. The history is simpler to read if there aren't several merges. ¹ This is showed by the color of the link to the branch in the description: Green is OK, red is NOT OK, orange means NOT TESTED and you just need to click the link for trac to try the merge (and change the color accordingly). comment:13 Changed 3 years ago by - Branch changed from u/gh-sheareralexj/implement_function_that_returns_the_balanced_digit_representation_of_an_integer to a69b95259043fa5b5b0faa1cb12c4bf6506fea3d - Resolution set to fixed - Status changed from positive_review to closed New commits:
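To make the agreed-on algorithm concrete, here is a standalone plain-Python sketch of the balanced-digit representation discussed in the ticket (illustrative only — the real implementation lives on Sage's Integer class and differs in details; the function names mirror the reviewer's pseudocode, and the negative-number handling follows comment 4):

```python
def balanced_quo_rem(n, base, positive_shift=True):
    # Divide with a remainder centered around zero: n = base*q + r.
    q, r = divmod(n, base)            # Python guarantees 0 <= r < base
    if r > base // 2:
        q, r = q + 1, r - base        # shift the remainder into the balanced range
    elif base % 2 == 0 and not positive_shift and r == base // 2:
        q, r = q + 1, r - base        # even base: exclude +base//2 when shift is negative
    return q, r

def balanced_digits(n, base=10, positive_shift=True):
    # Low-order digit first, like Sage's n.digits(base).
    if base <= 2:
        raise ValueError("base must be > 2")   # base 2 cannot represent all integers
    if n < 0:
        # Per comment 4: negate the digits of |n|; for even bases the
        # digit range must be mirrored, hence the flipped shift.
        return [-d for d in balanced_digits(-n, base, not positive_shift)]
    digits = []
    while n != 0:
        n, r = balanced_quo_rem(n, base, positive_shift)
        digits.append(r)
    return digits
```

With this sketch, the ticket's examples come out as expected: `balanced_digits(8, 3)` gives `[-1, 0, 1]` and `balanced_digits(33, 6)` gives `[3, -1, 1]`, and the buggy `-46` case from comment 3 is handled correctly for both values of `positive_shift`.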
https://trac.sagemath.org/ticket/28387
Perl makes easy things easy and hard things possible.
    -- Official Perl Slogan, coined by Larry Wall

Haskell makes hard things easy and easy things weird.
    -- Larry at it again, this time coining an unofficial Haskell slogan :-)

We want Perl 6 to make easy things trivial, hard things easy, and impossible things merely hard.
    -- Damian Conway in A Taste of Perl 6, Linux Magazine, April 2003

After Audrey Tang's recent whirlwind Sydney tour, I felt inspired to get back into Pugs development. This time, however, I was determined to learn Haskell, so I could at least understand important parts of the Pugs code base. To get started learning Haskell, I felt I needed to write something non-trivial yet attainable. Somewhat arbitrarily, I chose to write a tiny RPN evaluator in Perl 5, Perl 6, and Haskell ... then sit back and compare and contrast the code. Despite my dubious past, I wanted to write the code in a clear and natural style for each of the languages, avoiding clever golfish tricks like the plague. And being a testing Fascist, I certainly wanted to see how the unit tests looked in all three languages. This meditation describes that endeavour.
Perl 5

I started with a straightforward Perl 5 version, in the form of a little Rpn.pm module:

    package Rpn;

    use strict;
    use warnings;

    sub evaluate {
        my ($expr) = @_;
        my @stack;
        for my $tok (split ' ', $expr) {
            if ($tok =~ /^-?\d+$/) { push @stack, $tok; next; }
            my $x = pop @stack; defined $x or die "Stack underflow\n";
            my $y = pop @stack; defined $y or die "Stack underflow\n";
            if    ($tok eq '+') { push @stack, $y + $x; }
            elsif ($tok eq '-') { push @stack, $y - $x; }
            elsif ($tok eq '*') { push @stack, $y * $x; }
            elsif ($tok eq '/') { push @stack, int($y / $x); }
            else  { die "Invalid token:\"$tok\"\n"; }
        }
        @stack == 1 or die "Invalid stack:[@stack]\n";
        return $stack[0];
    }

    1;

and an associated test driver:

    use strict;
    use warnings;
    use Test::More;

    plan tests => @normal_tests + @exception_tests;

    for my $t (@normal_tests) {
        cmp_ok(Rpn::evaluate($t->[0]), '==', $t->[1]);
    }
    for my $t (@exception_tests) {
        eval { Rpn::evaluate($t->[0]) };
        is($@, $t->[1]);
    }

I trust this test driver makes clear the purpose of the Rpn::evaluate function.
-- chromatic in Porting Test::Builder to Perl 6 It's heartening to note that a number of Perl 6 improvements are being retrofitted to Perl 5. Of the improvements mentioned above, as noted in chromatic's The Year in Perl 2005, both the "defined-or" operator and an improved Switch module are slated for inclusion in the upcoming Perl 5.10 release. The Perl 6/Pugs companion test driver for Rpn.pm is little changed from its Perl 5 cousin: The observant reader will have noticed that the old Perl 5 block eval is now (less confusingly) spelled try.The observant reader will have noticed that the old Perl 5 block eval is now (less confusingly) spelled try.#!/usr/bin/pugs use v6; use Test; @normal_tests.elems + @exception_tests.elems; for @normal_tests -> $t { cmp_ok(Rpn::evaluate($t[0]), &infix:<==>, $t[1]); } for @exception_tests -> $t { try { Rpn::evaluate($t[0]) }; is($!, $t[1]); } This little example demonstrates that converting most Perl 5 programs to Perl 6 will be straightforward. Indeed, so straightforward that Larry is working on an automated way to do it. To find out what he's been up to, keep an eye on his "Translating Perl 5 to Perl 5" talk at the upcoming OSDC::Israel::2006 in February. Haskell Using Haskell is like having The Power of Reason. -- autrijus/gaal on #perl6 IRC channel cited at hawiki quotes page : I cannot decide if your analogies are false since I cannot make heads or tails of them. You should try to make CARs and CDRs of them instead. -- Larry Wall on comp.lang.lisp, Jan 21 1993 While translating Rpn from Perl 5 to Perl 6 was both pleasing and straightforward, translating it to Haskell felt, er, ... surreal-in-the-extreme, perhaps because I'd never programmed in a functional language before. It takes a while to get used to programming without variables, you see. 
;-) Anyway, after considerable study and much help from the wonderful PhD-powered Haskell community, I finally have a Haskell version of Rpn that I'm happy with:

    {-# OPTIONS_GHC -fglasgow-exts -Wall #-}
    module Rpn (evaluate) where
    import Char

    isStrDigit :: String -> Bool
    isStrDigit = all isDigit

    -- Check that a string matches regex /^-?\d+$/.
    isSNum :: String -> Bool
    isSNum []       = False
    isSNum "-"      = False
    isSNum ('-':xs) = isStrDigit xs
    isSNum xs       = isStrDigit xs

    calc :: Int -> String -> Int -> Int
    calc x "+" y = x+y
    calc x "-" y = x-y
    calc x "*" y = x*y
    calc x "/" y = x`div`y
    calc _ tok _ = error $ "Invalid token:" ++ show tok

    evalStack :: [Int] -> String -> [Int]
    evalStack xs y
      | isSNum y       = (read y):xs
      | (a:b:cs) <- xs = (calc b y a):cs
      | otherwise      = error "Stack underflow"

    evaluate :: String -> Int
    evaluate expr
      | [e] <- el = e
      | otherwise = error $ "Invalid stack:" ++ show el
      where el = foldl evalStack [] $ words expr

Though I'm elated with this code, I urge any Haskell boffins listening to please respond away if you just saw something that made you pull a face. Believe it or not, this Haskell code uses essentially the same algorithm as the Perl version. Notice that there is little need for a Stack abstract data type in Haskell (and I couldn't find one in the GHC libraries) because a built-in list can easily be used as a stack (just as it can be in Perl, via push and pop).

Here is the rough equivalence between the Perl 5 code and the Haskell code:

Surprisingly, writing the test driver took me much longer than the Rpn module, mainly because I didn't grok monads.
Though QuickCheck (Perl equivalent: Test::LectroTest) is perhaps more Haskelly, I employed the ubiquitous xUnit port, HUnit for Haskell, as follows:

    {-# OPTIONS_GHC -fglasgow-exts -Wall #-}
    -- t1.hs: build with: ghc --make -o t1 t1.hs Rpn.hs
    module Main where

    import Test.HUnit
    import Control.Exception
    import Rpn

    type NormalExpected = (String, Int)

    makeNormalTest :: NormalExpected -> Test
    makeNormalTest e = TestCase
        ( assertEqual "" (snd e) (Rpn.evaluate (fst e)) )

    normalTests :: Test
    normalTests = TestList ( map makeNormalTest [ ( ) ])

    -- Exception wrapper for Rpn.evaluate
    -- The idea is to catch calls to the error function and verify
    -- that the expected error string was indeed written.
    evaluateWrap :: String -> IO String
    evaluateWrap x = do
        res <- tryJust errorCalls (Control.Exception.evaluate (Rpn.evaluate x))
        case res of
            Right r -> return (show r)
            Left  r -> return r

    type ExceptionExpected = (String, String)

    makeExceptionTest :: ExceptionExpected -> Test
    makeExceptionTest e = TestCase
        ( do x <- evaluateWrap (fst e)
             assertEqual "" (snd e) x )

    exceptionTests :: Test
    exceptionTests = TestList ( map makeExceptionTest
        [ ( "5 4 %",    "Invalid token:\"%\"" ),
          ( "5 +",      "Stack underflow" ),
          ( "+",        "Stack underflow" ),
          ( "5 4 + 42", "Invalid stack:[42,9]" ),
          ( "",         "Invalid stack:[]" ) ])

    main :: IO Counts
    main = do
        runTestTT normalTests
        runTestTT exceptionTests

Exception Handling

Exception handling doesn't mix particularly well with pure lazy functional programming. For example, I couldn't get my test driver to work when testing calls to the error function until I added a Control.Exception.evaluate call here:

    (Control.Exception.evaluate (Rpn.evaluate x))

to force evaluation of the function -- without it, an unevaluated thunk is (lazily) returned.

The best choice for exception handling in Haskell seems to be GHC Control.Exception.
See also Simon Peyton Jones' proposal for A semantics for imprecise exceptions.

Tracing and Debugging

    My productivity increased when Autrijus told me about Haskell's trace function. He called it a refreshing desert in the oasis of referential transparency.
        -- chromatic in Porting Test::Builder to Perl 6

Like the awkwardly cased chromatic, finding the trace function was a breakthrough moment during my first week of Haskell programming. For example, by changing:

    f (x:y:zs) "+" = y+x:zs

to:

    import Debug.Trace
    -- ...
    f (x:y:zs) "+" = trace ("+" ++ show x ++ ":" ++ show y ++ ":" ++ show zs) (y+x:zs)

I could see what shenanigans Haskell was getting up to under the covers -- which I found an invaluable aid.

First Impressions

    autrijus stares at type Eval x = forall r. ContT r (ReaderT x IO) (ReaderT x IO x) and feels very lost
    <shapr> Didn't you write that code?
    <autrijus> yeah. and it works
    <autrijus> I just don't know what it means.
        -- autrijus/shapr on #perl6 IRC channel cited at hawiki quotes page

What I enjoyed about Haskell in my first two weeks:

What I did not enjoy about Haskell in my first two weeks:

Or as TheDamian might put it:

    And that's what attracts me to Perl. The demands of the language itself don't get in the way of *using* the language.
        -- Damian Conway in Builder AU interview

References

Acknowledgements

I'd like to thank the helpful IRC #perl6 folks (especially audreyt, luqui, gaal, aufrank, nnunley) for answering my questions and Cale Gibbard of Haskell-Cafe for explaining Control.Exception.evaluate to me.

Updated 5-jan: added ^$ anchors to regex in Rpn.pm (thanks ambrus).
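As a postscript for readers following along in yet another language: the stack-based algorithm shared by all three versions is easy to restate in Python. This is my own illustrative sketch, not code from the meditation, but it uses the same token test (regex `/^-?\d+$/`), the same error strings, and the same truncating division as the Perl original:

```python
import re

def evaluate(expr):
    """Evaluate a whitespace-separated integer RPN expression."""
    stack = []
    for tok in expr.split():
        if re.fullmatch(r"-?\d+", tok):
            stack.append(int(tok))
            continue
        if len(stack) < 2:
            raise ValueError("Stack underflow")
        x = stack.pop()
        y = stack.pop()
        if tok == "+":
            stack.append(y + x)
        elif tok == "-":
            stack.append(y - x)
        elif tok == "*":
            stack.append(y * x)
        elif tok == "/":
            stack.append(int(y / x))  # truncate toward zero, like Perl's int()
        else:
            raise ValueError('Invalid token:"%s"' % tok)
    if len(stack) != 1:
        raise ValueError("Invalid stack:%r" % stack)
    return stack[0]
```

For example, `evaluate("5 4 +")` returns 9, and `evaluate("5 4 + 42")` raises the same "Invalid stack" complaint the Perl and Haskell versions die with.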
http://www.perlmonks.org/bare/?node_id=520826
White Knight Chronicles 2 gets two new gameplay trailers

9 Comments

mington
\o/

asdfasdf
wow

Gekidami
Apparently you cant actually access the game unless you have a cleared game save from the first one.

mington
yeah i sort've caught wind of that the other day, i didn't really read it properly but i think you get immediate access to 'stuff' if you have a save file of the first game. otherwise this 'stuff' becomes available to you as you play through the game. i'm glad that cleared that up

Callum
I bloody hate being forced to attack feet all the time! I want to punch it in the facceee!!!

OlderGamer
If they would get rid of that stupid battle system in favor of something like the one found in Dragon Age: Origin, these games would vault into something worth playing. I enjoyed the first one a bit, but the combat got old very fast. Still one of the better RPGs out of Japan this gen.

DaMan
disastrous in SP, average in MP.

Gekidami
@4 What i heard is that you cant play the sequel without a cleared save file from the first game. And that it comes with the first game fully on the disc so you have to play through it to access the second game.

mington
@8 no way! that got to be the worst idea ever
i wanted to play the first one until the mediocre reviews. I remember all the import reviews loved it
import review 8
Uk review 7
ok not huge, but if it got all 8′s across the board i probably would of picked it up. i still might, but if i were to play a JRPG any time soon it'll probably be resonance of fate
http://www.vg247.com/2010/06/10/white-knight-chronicles-2-gets-two-new-gameplay-trailers/
Fun with Keras

Oslo Bikes

You probably know I like to work with bike sharing trip data, but this time I wanted to look at the open data provided by the Oslo City Bike program. I downloaded some of their datasets containing 2.5 million trips taken in Oslo between April 1st, 2017 and September 20th, 2017 (Norwegians love to cycle). The number of daily trips varies quite a bit, as the plot below shows (on May 17th, e.g. the system was shut down due to it being a national holiday).

I made a dataset containing the trips taken with a resolution of one hour, looking like this:

We will use autoregressive models to predict the number of trips being taken at a given hour, using the number of trips taken in the past as inputs. Some pandas magic gives us the following dataset. The target is the number of trips taken, the lagged data are our covariates. We'll use the last week of the data as a test set and train our models on the rest.

Keras

To get started with using Keras, all you need to do is install it using

    pip install keras
    pip install tensorflow

Building our first model is pretty straightforward; after reading the excellent user guide and documentation, you'll soon enough write your own. No comparison to Tensorflow!

    from keras.models import Sequential
    from keras.layers import Dense, Dropout

    dense_model = Sequential()
    dense_model.add(Dense(32, input_shape=(1, len(features)), activation='relu'))
    dense_model.add(Dropout(0.2))
    dense_model.add(Dense(16, activation='relu'))
    dense_model.add(Dropout(0.2))
    dense_model.add(Dense(1, activation='linear'))
    dense_model.compile(loss='mean_squared_error', optimizer='adam')

    res = dense_model.fit(X_train, y_train, epochs=80, batch_size=25, verbose=2)

So how well does our basic model do? As a comparison, I've also fitted a gradient boosted tree and a more fancy neural network employing long short-term memory (LSTM) units (building one is super easy in Keras).
The good news is that the problem is a quite harmless one, and predictions are close to the actual values. The errors are decently distributed, and plotting the predictions against the actuals doesn't show any significant non-linearities.

[figure: predictions vs. actuals]

I hope you've enjoyed this post and feel inspired to give Keras a try. As always, stay tuned for the next data adventure, where we'll have even more fun with neural networks!
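The lagged dataset described above (the target is the count at hour t, the covariates are counts at earlier hours) is easy to build by hand. The post used pandas for this but doesn't show the invocation, so here is an illustrative pure-Python stand-in; the particular lag choices are my assumption, not the post's:

```python
def make_lagged(series, lags):
    """Turn a sequence of hourly counts into (features, target) rows.

    For each hour t (once all lags are available), the features are the
    counts `k` steps back for each k in `lags`, and the target is the
    count at hour t itself.
    """
    start = max(lags)            # earliest t for which every lag exists
    rows = []
    for t in range(start, len(series)):
        features = [series[t - k] for k in lags]
        rows.append((features, series[t]))
    return rows
```

For example, `make_lagged(hourly_counts, [1, 2, 24, 168])` would give each row the counts one and two hours earlier plus the same hour yesterday and a week ago — a reasonable covariate set for hourly bike-trip data.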
https://data-adventures.com/2018/03/25/fun-with-keras.html
The following functions initialize a multidimensional solver, either with or without derivatives. The solver itself depends only on the dimension of the problem and the algorithm and can be reused for different problems. For example, the following code creates an instance of a hybrid solver for a three-dimensional system of equations:

    const gsl_multiroot_fsolver_type * T = gsl_multiroot_fsolver_hybrid;
    gsl_multiroot_fsolver * s = gsl_multiroot_fsolver_alloc (T, 3);

If there is insufficient memory to create the solver then the function returns a null pointer and the error handler is invoked with an error code of GSL_ENOMEM.

Similarly, this code creates a derivative-based solver for a two-dimensional system:

    const gsl_multiroot_fdfsolver_type * T = gsl_multiroot_fdfsolver_newton;
    gsl_multiroot_fdfsolver * s = gsl_multiroot_fdfsolver_alloc (T, 2);

If there is insufficient memory to create the solver then the function returns a null pointer and the error handler is invoked with an error code of GSL_ENOMEM.

For example,

    printf ("s is a '%s' solver\n", gsl_multiroot_fdfsolver_name (s));

would print something like s is a 'newton' solver.

You must provide n functions of n variables for the root finders to operate on. In order to allow for general parameters the functions are defined by the following data types, whose fields are:

    int (* f) (const gsl_vector * x, void * params, gsl_vector * f)
    size_t n
    void * params

Here is an example using Powell's test function,

    f_1(x) = A x_0 x_1 - 1,
    f_2(x) = exp(-x_0) + exp(-x_1) - (1 + 1/A)

with A = 10^4. The following code defines a gsl_multiroot_function system F which you could pass to a solver:

    struct powell_params { double A; };

    int
    powell (gsl_vector * x, void * p, gsl_vector * f)
    {
       struct powell_params * params = (struct powell_params *) p;
       const double A = (params->A);
       const double x0 = gsl_vector_get (x, 0);
       const double x1 = gsl_vector_get (x, 1);

       gsl_vector_set (f, 0, A * x0 * x1 - 1);
       gsl_vector_set (f, 1, (exp(-x0) + exp(-x1) - (1.0 + 1.0/A)));
       return GSL_SUCCESS;
    }

    gsl_multiroot_function F;
    struct powell_params params = { 10000.0 };

    F.f = &powell;
    F.n = 2;
    F.params = &params;

For solvers with derivatives the corresponding type has the fields:

    int (* f) (const gsl_vector * x, void * params, gsl_vector * f)
    int (* df) (const gsl_vector * x, void * params, gsl_matrix * J)
    int (* fdf) (const gsl_vector * x, void * params, gsl_vector * f, gsl_matrix * J)
    size_t n
    void * params

The example of Powell's test function defined above can be extended to include analytic derivatives using the following code,

    int
    powell_df (gsl_vector * x, void * p, gsl_matrix * J)
    {
       struct powell_params * params = (struct powell_params *) p;
       const double A = (params->A);
       const double x0 = gsl_vector_get (x, 0);
       const double x1 = gsl_vector_get (x, 1);

       gsl_matrix_set (J, 0, 0, A * x1);
       gsl_matrix_set (J, 0, 1, A * x0);
       gsl_matrix_set (J, 1, 0, -exp(-x0));
       gsl_matrix_set (J, 1, 1, -exp(-x1));
       return GSL_SUCCESS;
    }

    int
    powell_fdf (gsl_vector * x, void * p, gsl_vector * f, gsl_matrix * J)
    {
       struct powell_params * params = (struct powell_params *) p;
       const double A = (params->A);
       const double x0 = gsl_vector_get (x, 0);
       const double x1 = gsl_vector_get (x, 1);

       const double u0 = exp(-x0);
       const double u1 = exp(-x1);

       gsl_vector_set (f, 0, A * x0 * x1 - 1);
       gsl_vector_set (f, 1, u0 + u1 - (1 + 1/A));

       gsl_matrix_set (J, 0, 0, A * x1);
       gsl_matrix_set (J, 0, 1, A * x0);
       gsl_matrix_set (J, 1, 0, -u0);
       gsl_matrix_set (J, 1, 1, -u1);
       return GSL_SUCCESS;
    }

Note that the function powell_fdf is able to reuse existing terms from the function when calculating the Jacobian, thus saving time.

The iteration functions return GSL_ENOPROG when the iteration is not making any progress, preventing the algorithm from continuing. The solver maintains a current best estimate of the root at all times. The residual test returns GSL_SUCCESS if |f_i| < epsabs for every component and returns GSL_CONTINUE otherwise. This criterion is suitable for situations where the precise location of the root, x, is unimportant provided a value can be found where the residual is small enough. A related error code, GSL_ENOPROGJ, indicates that re-evaluations of the Jacobian are not improving the iteration.

The hybridj algorithm is an unscaled version of hybridsj. The steps are controlled by a spherical trust region |x' - x| < \delta, instead of a generalized region. This can be useful if the generalized region estimated by hybridsj is inappropriate.

In the gnewton algorithm, if the full Newton step would increase the residual norm then a reduced step of relative size

    t = (\sqrt(1 + 6 r) - 1) / (3 r)

is proposed, with r being the ratio of norms |f(x')|^2/|f(x)|^2. This procedure is repeated until a suitable step size is found.

The algorithms described in this section do not require any derivative information to be supplied by the user. Any derivatives needed are approximated by finite differences (as in gsl_multiroots_fdjac, with a relative step size of GSL_SQRT_DBL_EPSILON).

The discrete Newton algorithm is the simplest method of solving a multidimensional system. It uses the Newton iteration

    x -> x - J^{-1} f(x)

where the Jacobian matrix J is approximated by taking finite differences of the function f. The approximation scheme used by this implementation is,

    J_{ij} = (f_i(x + \delta_j) - f_i(x)) / \delta_j

where \delta_j is a step of size \sqrt\epsilon |x_j| with \epsilon being the machine precision (\epsilon \approx 2.22 \times 10^-16). The order of convergence of Newton's algorithm is quadratic, but the finite differences require n^2 function evaluations on each iteration. The algorithm may become unstable if the finite differences are not a good approximation to the true derivatives.

The Broyden algorithm is a version of the discrete Newton algorithm which attempts to avoid the expensive update of the Jacobian matrix on each iteration. The changes to the Jacobian are also approximated, using a rank-1 update,

    J^{-1} \to J^{-1} - (J^{-1} df - dx) dx^T J^{-1} / (dx^T J^{-1} df)

where the vectors dx and df are the changes in x and f. On the first iteration the inverse Jacobian is estimated using finite differences, as in the discrete Newton algorithm. This approximation gives a fast update but is unreliable if the changes are not small, and the estimate of the inverse Jacobian becomes worse as time passes. The algorithm has a tendency to become unstable unless it starts close to the root. The Jacobian is refreshed if this instability is detected (consult the source for details). This algorithm is not recommended and is included only for demonstration purposes.

The multidimensional solvers are used in a similar way to the one-dimensional root finding algorithms. This first example demonstrates the hybrids scaled-hybrid algorithm, which does not require derivatives. The program solves the Rosenbrock system of equations,

    f_1 (x, y) = a (1 - x)
    f_2 (x, y) = b (y - x^2)

with a = 1, b = 10. The solution of this system lies at (x,y) = (1,1) in a narrow valley.

The first stage of the program is to define the system of equations,

    #include <stdlib.h>
    #include <stdio.h>
    #include <gsl/gsl_vector.h>
    #include <gsl/gsl_multiroots.h>

    struct rparams { double a; double b; };

    int
    rosenbrock_f (const gsl_vector * x, void *params, gsl_vector * f)
    {
      double a = ((struct rparams *) params)->a;
      double b = ((struct rparams *) params)->b;

      const double x0 = gsl_vector_get (x, 0);
      const double x1 = gsl_vector_get (x, 1);

      const double y0 = a * (1 - x0);
      const double y1 = b * (x1 - x0 * x0);

      gsl_vector_set (f, 0, y0);
      gsl_vector_set (f, 1, y1);

      return GSL_SUCCESS;
    }

The main program begins by creating the function object f, with the arguments (x,y) and parameters (a,b). The solver s is initialized to use this function, with the hybrids method.
    int
    main (void)
    {
      const gsl_multiroot_fsolver_type *T;
      gsl_multiroot_fsolver *s;

      int status;
      size_t i, iter = 0;

      const size_t n = 2;
      struct rparams p = {1.0, 10.0};
      gsl_multiroot_function f = {&rosenbrock_f, n, &p};

      double x_init[2] = {-10.0, -5.0};
      gsl_vector *x = gsl_vector_alloc (n);

      gsl_vector_set (x, 0, x_init[0]);
      gsl_vector_set (x, 1, x_init[1]);

      T = gsl_multiroot_fsolver_hybrids;
      s = gsl_multiroot_fsolver_alloc (T, 2);
      gsl_multiroot_fsolver_set (s, &f, x);

      print_state (iter, s);

      do
        {
          iter++;
          status = gsl_multiroot_fsolver_iterate (s);

          print_state (iter, s);

          if (status)   /* check if solver is stuck */
            break;

          status = gsl_multiroot_test_residual (s->f, 1e-7);
        }
      while (status == GSL_CONTINUE && iter < 1000);

      printf ("status = %s\n", gsl_strerror (status));

      gsl_multiroot_fsolver_free (s);
      gsl_vector_free (x);

      return 0;
    }

Note that it is important to check the return status of each solver step, in case the algorithm becomes stuck. If an error condition is detected, indicating that the algorithm cannot proceed, then the error can be reported to the user, a new starting point chosen or a different algorithm used.

The intermediate state of the solution is displayed by the following function. The solver state contains the vector s->x which is the current position, and the vector s->f with corresponding function values.

    int
    print_state (size_t iter, gsl_multiroot_fsolver * s)
    {
      printf ("iter = %3u x = % .3f % .3f "
              "f(x) = % .3e % .3e\n",
              iter,
              gsl_vector_get (s->x, 0),
              gsl_vector_get (s->x, 1),
              gsl_vector_get (s->f, 0),
              gsl_vector_get (s->f, 1));
    }

Here are the results of running the program. The algorithm is started at (-10,-5) far from the solution. Since the solution is hidden in a narrow valley the earliest steps follow the gradient of the function downhill, in an attempt to reduce the large value of the residual. Once the root has been approximately located, on iteration 8, the Newton behavior takes over and convergence is very rapid.

    iter =   0 x = -10.000  -5.000  f(x) =  1.100e+01 -1.050e+03
    iter =   1 x = -10.000  -5.000  f(x) =  1.100e+01 -1.050e+03
    iter =   2 x =  -3.976  24.827  f(x) =  4.976e+00  9.020e+01
    iter =   3 x =  -3.976  24.827  f(x) =  4.976e+00  9.020e+01
    iter =   4 x =  -3.976  24.827  f(x) =  4.976e+00  9.020e+01
    iter =   5 x =  -1.274  -5.680  f(x) =  2.274e+00 -7.302e+01
    iter =   6 x =  -1.274  -5.680  f(x) =  2.274e+00 -7.302e+01
    iter =   7 x =   0.249   0.298  f(x) =  7.511e-01  2.359e+00
    iter =   8 x =   0.249   0.298  f(x) =  7.511e-01  2.359e+00
    iter =   9 x =   1.000   0.878  f(x) =  1.268e-10 -1.218e+00
    iter =  10 x =   1.000   0.989  f(x) =  1.124e-11 -1.080e-01
    iter =  11 x =   1.000   1.000  f(x) =  0.000e+00  0.000e+00
    status = success

Note that the algorithm does not update the location on every iteration. Some iterations are used to adjust the trust-region parameter, after trying a step which was found to be divergent, or to recompute the Jacobian, when poor convergence behavior is detected.

The next example program adds derivative information, in order to accelerate the solution. There are two derivative functions rosenbrock_df and rosenbrock_fdf. The latter computes both the function and its derivative simultaneously. This allows the optimization of any common terms. For simplicity we substitute calls to the separate f and df functions at this point in the code below.

    int
    rosenbrock_df (const gsl_vector * x, void *params, gsl_matrix * J)
    {
      const double a = ((struct rparams *) params)->a;
      const double b = ((struct rparams *) params)->b;

      const double x0 = gsl_vector_get (x, 0);

      const double df00 = -a;
      const double df01 = 0;
      const double df10 = -2 * b * x0;
      const double df11 = b;

      gsl_matrix_set (J, 0, 0, df00);
      gsl_matrix_set (J, 0, 1, df01);
      gsl_matrix_set (J, 1, 0, df10);
      gsl_matrix_set (J, 1, 1, df11);

      return GSL_SUCCESS;
    }

    int
    rosenbrock_fdf (const gsl_vector * x, void *params,
                    gsl_vector * f, gsl_matrix * J)
    {
      rosenbrock_f (x, params, f);
      rosenbrock_df (x, params, J);

      return GSL_SUCCESS;
    }

The main program now makes calls to the corresponding fdfsolver versions of the functions,

    int
    main (void)
    {
      const gsl_multiroot_fdfsolver_type *T;
      gsl_multiroot_fdfsolver *s;

      int status;
      size_t i, iter = 0;

      const size_t n = 2;
      struct rparams p = {1.0, 10.0};
      gsl_multiroot_function_fdf f = {&rosenbrock_f,
                                      &rosenbrock_df,
                                      &rosenbrock_fdf,
                                      n, &p};

      double x_init[2] = {-10.0, -5.0};
      gsl_vector *x = gsl_vector_alloc (n);

      gsl_vector_set (x, 0, x_init[0]);
      gsl_vector_set (x, 1, x_init[1]);

      T = gsl_multiroot_fdfsolver_gnewton;
      s = gsl_multiroot_fdfsolver_alloc (T, n);
      gsl_multiroot_fdfsolver_set (s, &f, x);

      print_state (iter, s);

      do
        {
          iter++;
          status = gsl_multiroot_fdfsolver_iterate (s);

          print_state (iter, s);

          if (status)
            break;

          status = gsl_multiroot_test_residual (s->f, 1e-7);
        }
      while (status == GSL_CONTINUE && iter < 1000);

      printf ("status = %s\n", gsl_strerror (status));

      gsl_multiroot_fdfsolver_free (s);
      gsl_vector_free (x);

      return 0;
    }

The addition of derivative information to the hybrids solver does not make any significant difference to its behavior, since it is able to approximate the Jacobian numerically with sufficient accuracy. To illustrate the behavior of a different derivative solver we switch to gnewton. This is a traditional Newton solver with the constraint that it scales back its step if the full step would lead "uphill". Here is the output for the gnewton algorithm,

    iter = 0 x = -10.000  -5.000 f(x) =  1.100e+01 -1.050e+03
    iter = 1 x =  -4.231 -65.317 f(x) =  5.231e+00 -8.321e+02
    iter = 2 x =   1.000 -26.358 f(x) = -8.882e-16 -2.736e+02
    iter = 3 x =   1.000   1.000 f(x) = -2.220e-16 -4.441e-15
    status = success

The convergence is much more rapid, but takes a wide excursion out to the point (-4.23,-65.3). This could cause the algorithm to go astray in a realistic application. The hybrid algorithm follows the downhill path to the solution more reliably.

The original version of the Hybrid method is described in the following articles by Powell. The following papers are also relevant to the algorithms described in this section.
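To make the finite-difference scheme J_{ij} = (f_i(x + \delta_j) - f_i(x)) / \delta_j and the Newton iteration concrete outside of GSL, here is a minimal Python sketch applied to the same Rosenbrock system. It is an illustration, not GSL code: the step scaling \delta_j = \epsilon · max(|x_j|, 1) is a simplification of GSL's \sqrt\epsilon |x_j| (to avoid a zero step when x_j = 0), the 2x2 linear solve uses Cramer's rule, and no trust region or step damping is applied:

```python
def rosenbrock(x, a=1.0, b=10.0):
    # The Rosenbrock system from the manual's examples.
    return [a * (1 - x[0]), b * (x[1] - x[0] * x[0])]

def fd_jacobian(f, x, eps=1.49e-8):
    # Forward-difference approximation of the Jacobian, column by column.
    fx = f(x)
    n = len(x)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        d = eps * max(abs(x[j]), 1.0)   # simplified step scaling
        xp = list(x)
        xp[j] += d
        fxp = f(xp)
        for i in range(n):
            J[i][j] = (fxp[i] - fx[i]) / d
    return J

def newton_solve(f, x, iters=50, tol=1e-10):
    # Undamped Newton iteration x -> x - J^{-1} f(x) for a 2x2 system.
    for _ in range(iters):
        fx = f(x)
        if max(abs(v) for v in fx) < tol:
            break
        J = fd_jacobian(f, x)
        # Solve J dx = -f(x) by Cramer's rule (2x2 only).
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        dx0 = (-fx[0] * J[1][1] + fx[1] * J[0][1]) / det
        dx1 = (-fx[1] * J[0][0] + fx[0] * J[1][0]) / det
        x = [x[0] + dx0, x[1] + dx1]
    return x
```

Starting from a point inside the valley, such as (-1.2, 1.0), this converges to the root (1, 1) in a handful of iterations, just as the full Newton behavior does in the GSL runs above.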
http://linux.math.tifr.res.in/programming-doc/gsl/gsl-ref_33.html
On 07.03.2009 11:49, Marc Wernecke wrote:
> Hi,
>
> some x-vdr users reported problems with compiling mplayer and some
> plug-ins like music too. We solved this by adding
>
> DEFINES += -D__KERNEL_STRICT_NAMES

Why would this suddenly be necessary?

> to the VDR Makefile and patching the driver header files
>
> --- ./linux/include/linux/dvb/video.h.orig  2009-03-03 09:54:58.000000000 +0100
> +++ ./linux/include/linux/dvb/video.h       2009-03-03 09:45:30.000000000 +0100
> @@ -28,6 +28,7 @@
>  #ifdef __KERNEL__
>  #include <linux/compiler.h>
>  #else
> +#include <linux/compiler.h>
>  #include <stdint.h>
>  #include <time.h>
>  #endif

The driver header files have apparently been broken recently ("backport include changes on some .h files"). I posted a bug report on linux-dvb, but apparently nobody cares...

Klaus
http://www.linuxtv.org/pipermail/vdr/2009-March/019780.html
Why is it important that tests be independent? If tests are not independent, a lot of time is lost when a test fails even though there is nothing wrong with the code, and a developer spends a long time trying to debug working code.

For example, if a test uses an environment variable and a developer then runs the test on another system with a different value (or no value) for that environment variable, the test might fail because the environment variable is different, not because there is a bug in the code. The developer might spend a lot of time discovering this.

Alternatively, a test that depends on other tests might pass when it should fail, causing a bug to go undetected and creating problems in production. For example, one test might set a cell in a table to a certain value. Another test might be supposed to set the same cell to the same value, among other things. If the database is not reset between tests, there would be no way of determining whether the second test functioned as it was supposed to. I have seen both of these scenarios recently.

Another problem that I have seen recently is tests that depend on an internet connection. Here, if a test fails, we might not know whether it was due to momentarily being disconnected from the internet, or something else. The test might pass sometimes and fail at other times, creating confusion and frustration.

Other problems can be caused by tests that modify files, call the current time, or share mutable state (of course we should avoid mutable state). I have written an earlier blog post on how to test databases and how to make those tests independent of each other here. Let's look at how we can test code that uses environment variables.

Environment Variables

A lot of code uses environment variables, and we shouldn't stop using environment variables just to make testing easier. So how can we test code that uses environment variables?
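Before the concrete approaches, the underlying idea can be shown in a short, language-neutral sketch. This is illustrative Python (the post's own examples are in Scala, and every name below is made up): pass the environment lookup in as a parameter, so that a test can substitute a plain dictionary for the real environment.

```python
# Sketch of making env-var-dependent code testable by injecting the lookup.
# All names here are made up for illustration.
import os

def make_connection_url(getenv=os.environ.get):
    # Production callers use the real environment; tests pass a stub.
    url = getenv("JDBC_URL")
    if url is None:
        raise RuntimeError("JDBC_URL is not set")
    return url

# In a test, replace the lookup instead of touching the real environment:
fake_env = {"JDBC_URL": "jdbc:h2:mem:test"}.get
assert make_connection_url(getenv=fake_env) == "jdbc:h2:mem:test"
```

The test never reads or writes the machine's actual environment, so it behaves the same on every system it runs on.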
One way could be to create a class with a method to get environment variables and then mock that method to return the values that we want. Another way of testing code that uses environment variables is to use a config file. For example, in src/main/resources you can put a file called application.conf with the following contents:

conn = {
  url = ${?JDBC_URL}
}

Then to get the value of the environment variable JDBC_URL into the local variable url, use the following code:

import com.typesafe.config._

val conf = ConfigFactory.load()
val url = conf.getString("conn.url")

Then for testing purposes you can put another file called application.conf in src/test/resources with the following contents:

conn = {
  url = "jdbc:h2:mem:test"
}

Now in your tests the variable url will have the value "jdbc:h2:mem:test" instead of the value of the environment variable JDBC_URL.

Internet Connection

One example of a test which is dependent upon an internet connection is a test of code that tries to create a connection to a database. Such code could be tested using an in-memory database such as H2.

Conclusion

I hope that you have learned something useful. Feel free to leave a comment with feedback or your own experiences with testing.
https://blog.knoldus.com/writing-independent-tests/
return ( argument.getClass().getName() );

makes a mess of your neat subtyping hierarchy above. As it says in ContextSensitiveSubtyping and ConstructiveDeconstructionOfSubtyping?,

class Circle {
  double height;
  Point center;
};
class Ellipse : public Circle {
  double width;
};

This gives both Circle and Ellipse the appropriate behaviors. So what's the problem? -- AndyJewell

Mathematicians use computers too. -- BrianEwins

Actually, ellipses don't have radii. If you draw a right angle triangle with one side along the axis of an ellipse, the triangle will not be circumscribed by the ellipse (unless the ellipse happens to be a circle at the time).

Computer scientists use ISA for substitution of properties. Mathematicians use ISA for substitution in proofs and definitions. One is "internal" substitution or LiskovSubstitution, the other is "external" or contextual substitution. -- SunirShah

I disagree with you here. Ellipses do have radii, and they have two foci each. The distance from one focus to a point on the ellipse plus the distance from that point to the other focus remains constant. So a circle is an ellipse with the foci coinciding. I'll give a CircleAndEllipseExample, and even though I'm afraid I might be ArguingWithGhosts, could people try to shoot holes in it? I won't be so arrogant as to say there is no problem, or that I've solved it, but I don't see what would be wrong with my example or why people would want to make Ellipse a subclass of Circle. -- AalbertTorsius

data Ellipse = Ellipse { origin :: Point; focusA :: Int; focusB :: Int };
data Circle = Circle { origin :: Point; radius :: Int };

setFocusA e@(Ellipse o _ b) f = if b == f then Circle o f else Ellipse o f b
setFocusB e@(Ellipse o a _) f = if a == f then Circle o f else Ellipse o a f
setFocusA c@(Circle o r) f = if r == f then c else Ellipse o f r
setFocusB c@(Circle o r) f = if r == f then c else Ellipse o r f
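The Haskell-style sketch above returns a value whose type depends on the new data. For readers who want to poke at the idea, here is a hypothetical Python rendering; nothing below is from the wiki page, and it deliberately mirrors the sketch's conflation of radius and focus.

```python
# Hypothetical Python version of the wiki's idea: "setters" return a new
# immutable value, and the returned type (Circle vs. Ellipse) depends on
# the data, sidestepping the subtyping question entirely.
from dataclasses import dataclass

@dataclass(frozen=True)
class Circle:
    origin: tuple
    radius: int

@dataclass(frozen=True)
class Ellipse:
    origin: tuple
    focus_a: int
    focus_b: int

def set_focus_a(shape, f):
    if isinstance(shape, Ellipse):
        # Foci coinciding again collapses the Ellipse back into a Circle
        if shape.focus_b == f:
            return Circle(shape.origin, f)
        return Ellipse(shape.origin, f, shape.focus_b)
    # Circle: moving one "focus" off the radius yields an Ellipse
    if shape.radius == f:
        return shape
    return Ellipse(shape.origin, f, shape.radius)

e = set_focus_a(Circle((0, 0), 5), 7)   # no longer a circle
c = set_focus_a(e, 5)                   # foci coincide again -> Circle
```

Because every "mutation" returns a fresh value, a Circle can never be observed in an inconsistent half-stretched state, which is exactly the failure mode the subclassing designs struggle with.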
http://c2.com/cgi-bin/wiki?CircleAndEllipseProblem
NAME

X509_check_host, X509_check_email, X509_check_ip, X509_check_ip_asc - X.509 certificate matching

SYNOPSIS

#include <openssl/x509.h>

int X509_check_host(X509 *, const char *name, size_t namelen,
                    unsigned int flags, char **peername);
int X509_check_email(X509 *, const char *address, size_t addresslen,
                     unsigned int flags);
int X509_check_ip(X509 *, const unsigned char *address, size_t addresslen,
                  unsigned int flags);
int X509_check_ip_asc(X509 *, const char *address, unsigned int flags);

DESCRIPTION

X509_check_host() checks if the certificate matches the specified host name, which must be encoded in the preferred name syntax described in section 3.5 of RFC 1034. By default, wildcards are supported and they match only in the left-most label; but they may match part of that label with an explicit prefix or suffix. For example, by default, the host name "" would match a certificate with a SAN or CN value of "*.example.com", "w*.example.com" or "*w.example.com".

Per section 6.4.2 of RFC 6125, name values representing international domain names must be given in A-label form. The namelen argument must be the number of characters in the name string or zero, in which case the length is calculated with strlen(name). When name starts with a dot (e.g. ".example.com"), it will be matched by a certificate valid for any sub-domain of name (see also X509_CHECK_FLAG_SINGLE_LABEL_SUBDOMAINS below).

When the certificate is matched, and peername is not NULL, a pointer to a copy of the matching SAN or CN from the peer certificate is stored at the address passed in peername. The application is responsible for freeing the peername via OPENSSL_free() when it is no longer needed.

X509_check_email() checks if the certificate matches the specified email address. Only the mailbox syntax of RFC 822 is supported, comments are not allowed, and no attempt is made to normalize quoted characters. The addresslen argument must be the number of characters in the address string or zero, in which case the length is calculated with strlen(address).
X509_check_ip() checks if the certificate matches a specified IPv4 or IPv6 address. The address array is in binary format, in network byte order. The length is either 4 (IPv4) or 16 (IPv6). Only explicitly marked addresses in the certificates are considered; IP addresses stored in DNS names and Common Names are ignored.

X509_check_ip_asc() is similar, except that the address must be specified in its textual representation.

The flags argument is usually 0. It can be the bitwise OR of the flags:

- X509_CHECK_FLAG_ALWAYS_CHECK_SUBJECT,
- X509_CHECK_FLAG_NO_WILDCARDS,
- X509_CHECK_FLAG_NO_PARTIAL_WILDCARDS,
- X509_CHECK_FLAG_MULTI_LABEL_WILDCARDS,
- X509_CHECK_FLAG_SINGLE_LABEL_SUBDOMAINS.

The X509_CHECK_FLAG_ALWAYS_CHECK_SUBJECT flag causes the function to consider the subject DN even if the certificate contains at least one subject alternative name of the right type (DNS name or email address, as appropriate); the default is to ignore the subject DN when at least one corresponding subject alternative name is present.

If set, X509_CHECK_FLAG_NO_WILDCARDS disables wildcard expansion; this only applies to X509_check_host.

If set, X509_CHECK_FLAG_NO_PARTIAL_WILDCARDS suppresses support for "*" as a wildcard pattern in labels that have a prefix or suffix, such as "www*" or "*www"; this only applies to X509_check_host.

If set, X509_CHECK_FLAG_MULTI_LABEL_WILDCARDS allows a "*" that constitutes the complete label of a DNS name (e.g. "*.example.com") to match more than one label in name; this flag only applies to X509_check_host.

If set, X509_CHECK_FLAG_SINGLE_LABEL_SUBDOMAINS restricts name values which start with ".", that would otherwise match any sub-domain in the peer certificate, to only match direct child sub-domains. Thus, for instance, with this flag set a name of ".example.com" would match a peer certificate with a DNS name of "", but would not match a peer certificate with a DNS name of ""; this flag only applies to X509_check_host.

RETURN VALUES

The functions return 1 for a successful match, 0 for a failed match and -1 for an internal error: typically a memory allocation failure or an ASN.1 decoding error.
All functions can also return -2 if the input is malformed. For example, X509_check_host() returns -2 if the provided name contains embedded NULs.

NOTES

Applications are encouraged to use X509_VERIFY_PARAM_set1_host() rather than explicitly calling X509_check_host(). Host name checks are out of scope with the DANE-EE(3) certificate usage, and the internal checks will be suppressed as appropriate when DANE support is added to OpenSSL.

SEE ALSO

SSL_get_verify_result, X509_VERIFY_PARAM_set1_host, X509_VERIFY_PARAM_add1_host, X509_VERIFY_PARAM_set1_email, X509_VERIFY_PARAM_set1_ip, X509_VERIFY_PARAM_set1_ipasc

HISTORY

These functions were added in OpenSSL 1.0.2.

Licensed under the OpenSSL license (the "License"). You may not use this file except in compliance with the License. You can obtain a copy in the file LICENSE in the source distribution or at.
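As an illustration of the default wildcard rules described above (left-most label only, optional prefix or suffix, no multi-label matching), here is a deliberately simplified Python sketch. It is not OpenSSL's matcher, and it ignores case folding, IDNA, trailing dots, and all of the flags.

```python
# Illustrative sketch of the default left-most-label wildcard rule --
# NOT OpenSSL's implementation (no case folding, IDNA, flags, etc.).
def matches(pattern, hostname):
    p_labels = pattern.split(".")
    h_labels = hostname.split(".")
    # Without X509_CHECK_FLAG_MULTI_LABEL_WILDCARDS, "*" never spans labels,
    # so the label counts must agree.
    if len(p_labels) != len(h_labels):
        return False
    first_p, first_h = p_labels[0], h_labels[0]
    if "*" in first_p:
        # "*" may carry an explicit prefix and/or suffix within the label,
        # e.g. "w*" or "*w"; the host label must fit around both parts.
        prefix, _, suffix = first_p.partition("*")
        if not (first_h.startswith(prefix) and first_h.endswith(suffix)
                and len(first_h) >= len(prefix) + len(suffix)):
            return False
    elif first_p != first_h:
        return False
    # All remaining labels must match exactly.
    return p_labels[1:] == h_labels[1:]

assert matches("*.example.com", "www.example.com")
assert matches("w*.example.com", "www.example.com")
assert matches("*w.example.com", "www.example.com")
assert not matches("*.example.com", "a.b.example.com")  # no multi-label match
```

The last assertion shows the behavior that X509_CHECK_FLAG_MULTI_LABEL_WILDCARDS would relax.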
https://www.openssl.org/docs/manmaster/crypto/X509_check_host.html
With the popularity of instant messaging platforms and advancements in AI, chatbots have experienced explosive growth, with as many as 80% of businesses wanting to use chat bots by 2020. This has created great opportunity for freelance Python developers, as there is strong demand for the development of both simple and complex chat bots.

One of the more popular messaging platforms is Telegram, with a reported 200 million monthly users. Telegram provides an excellent API for chat bots, allowing the user to communicate not only with text messages, but also with multimedia content such as images and video, and rich content with HTML and JavaScript. The API can even be used to manage purchases directly within Telegram.

Python is excellent for creating Telegram bots, and the extremely popular Python Telegram Bot framework makes this much easier, allowing you by default to create bots that run asynchronously so you can easily code bots that communicate with many users at the same time.

Let's get started with making our first Telegram bot in Python. The first thing we'll need to do is download Telegram, create an account and communicate with the "Botfather". The Botfather is a chat bot created by Telegram through which we can get our Telegram Bot API token. To download Telegram head to Telegram.org, download and install the appropriate version for your OS, and create an account.

Now add the Botfather as a contact: click the menu icon, select Contacts, search for "botfather" and select the @botfather user. A conversation will pop up with a start button at the bottom; click the start button and you will receive a list of commands. The command to create a new bot is /newbot. Enter "/newbot" and answer the prompts for naming your bot, and you will receive a Telegram Bot API token. You can name the bot whatever you like, but the bot username will need to be unique on Telegram. Store the access token somewhere, as we will need it to authorize our bot.
Installing The Python Telegram Bot Framework

For the bot creation we will be using Python version 3.7. The Python Telegram Bot framework is compatible with Python 2.7 and above. Before we get to the actual coding we will need to install the Python Telegram Bot framework; the easiest way to do that is with:

$ pip install python-telegram-bot

If you prefer to use the source code, you can find the project on Github. Now with the Python Telegram Bot library installed, let's get started.

Connecting Your Bot To Telegram

The first thing you'll need to do is have the bot connect to and authenticate with the Telegram API. We'll import the Python logging library to make use of the Python Telegram Bot framework's built-in logging, so we can see in real time what is happening with the bot and whether there are any errors.

Place the following code into a Python file, and place your Telegram bot key where indicated in the Updater statement:

from telegram.ext import Updater, CommandHandler, MessageHandler, Filters, Dispatcher
import logging

logging.basicConfig(format='%(levelname)s - %(message)s', level=logging.DEBUG)
logger = logging.getLogger(__name__)

updater = None

def start_bot():
    global updater
    updater = Updater(
        '### YOUR TELEGRAM BOT AUTHENTICATION KEY HERE ###',
        use_context=True)
    updater.start_polling()
    updater.idle()

start_bot()

Looking at the top portion of the code, you can see that we imported a number of classes from the telegram.ext module, and set the logger to display any messages of DEBUG priority or higher. We've created an updater variable that will hold the updater for our Telegram bot, placed in a global variable so that we can easily access it later from the UI.
Updater provides an easy front end for working with the bot; to start our new bot with the Updater we simply need to pass in the authentication key, and we'll also pass in use_context=True to avoid any deprecation warnings, as context-based callbacks are now the default for Python Telegram Bot.

updater.start_polling() actually starts the bot; after this is called, the bot will begin polling Telegram for any chat updates. The polling runs in its own separate threads, so this will not halt your Python script. We use the updater.idle() command here to block the script until the user sends a signal to stop the Python script, such as ctrl-c on Windows.

When running this script you will be able to see messages that the bot and the updater have started, and that a number of threads are running. The bot will not do anything noticeable in Telegram as of yet.

Making The Bot Understand Commands

Let's update the script and make our start_bot function look like this:

def start_bot():
    global updater
    updater = Updater(
        '### YOUR TELEGRAM BOT AUTHENTICATION KEY HERE ###',
        use_context=True)
    dispatcher = updater.dispatcher
    dispatcher.add_handler(CommandHandler('start', start))
    updater.start_polling()
    updater.idle()

We've added a dispatcher variable for clearer access to the dispatcher for our bot; we'll use the dispatcher to add commands. With the line dispatcher.add_handler(CommandHandler('start', start)) we have added a command handler that will execute when the user enters /start and will call the callback function start. This command automatically executes when you add a bot as a new contact and press the start button within Telegram. Now add in the function for our start command:

def start(update, context):
    s = "Welcome I Am The Finxter Chat Bot! Your life has now changed forever."
    update.message.reply_text(s)

Command handlers require both an Update and a CallbackContext parameter.
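As an aside that is not part of the original tutorial: because a command handler is just a function of (update, context), you can sanity-check one without talking to Telegram at all by passing stand-in objects. Everything below is made up for illustration.

```python
# Hypothetical quick check of a handler without connecting to Telegram:
# handlers are plain functions of (update, context), so stand-ins work.
class FakeMessage:
    def __init__(self):
        self.replies = []
    def reply_text(self, text):
        # Record what the bot would have sent instead of sending it
        self.replies.append(text)

class FakeUpdate:
    def __init__(self):
        self.message = FakeMessage()

def start(update, context):
    # Same handler as in the tutorial
    s = "Welcome I Am The Finxter Chat Bot! Your life has now changed forever."
    update.message.reply_text(s)

update = FakeUpdate()
start(update, context=None)
print(update.message.replies[0])  # the welcome text
```

This kind of stand-in is handy while developing: you can exercise handler logic in a plain unit test without a bot token or a network connection.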
Through the Update we send updates to chat; here, using update.message.reply_text automatically adds the reply only to the specific chat where the /start command was sent. Now when entering the chat with the bot, or typing the /start command, you should get a reply like this:

Adding A More Advanced Command That Reads Chat

The start command executes a function within our bot whenever a user types /start, but what if we want our bot to read and respond to chat rather than simply executing a command? Well, for that we use a different type of handler called a MessageHandler. Add the following line to the start_bot function underneath the previous add_handler statement:

dispatcher.add_handler(MessageHandler(Filters.text, repeater))

This will add a MessageHandler to the bot; we also use a filter here so that this message handler filters out everything except text, because the user could be posting something other than text in their messages (such as images or video). For this handler we will create a callback function named repeater with the code:

def repeater(update, context):
    update.message.reply_text(update.message.text)

In our repeater we use the reply_text method, replying with update.message.text, which sends the message's chat text back to the user.

Turning A Bot Command Off Or On For The User

A common bot ability is to turn specific commands on or off, but a problem arises: we cannot simply remove the Handler for the functionality, as that would remove it for all users of the bot. Fortunately, Python Telegram Bot allows us to store user-specific data using the context that is passed in to our callback functions.
Let's add another handler underneath the repeater handler:

dispatcher.add_handler(CommandHandler('echo', echo))

Now, we'll first modify the repeater handler to check whether the user's text should be echoed:

def repeater(update, context):
    # .get() avoids a KeyError before the user has run /echo
    if context.user_data.get('echo'):
        update.message.reply_text(update.message.text)

Here we added a check on context.user_data.get('echo') before having the bot reply to the user. Python Telegram Bot has a user_data dictionary that can be accessed using the context. This is specific to the user, and by using it we can make sure that if there are multiple users of the bot they are unaffected by each other.

Now we'll add in another function so the user can set the echo entry using the echo command in chat:

def echo(update, context):
    command = context.args[0].lower()
    if "on" == command:
        context.user_data['echo'] = True
        update.message.reply_text("Repeater Started")
    elif "off" == command:
        context.user_data['echo'] = False
        update.message.reply_text("Repeater Stopped")

In this callback function we gather the user's extra command parameters from the context. The user's parameters are contained in context.args, which provides an array split on the spaces in the user's message. In this function we check the first parameter passed by the user, looking for on or off, and change the user_data['echo'] entry.

Posting Data From The Web And Securing Commands

Python Telegram Bot makes it easy to reply with files from the web, such as photos, videos, and documents; you simply need to give a URL for the specified content type. We'll use the Unsplash API to gather a free image based on user-provided terms and post it into chat, and we'll also use a Filter so that this command only works for your username. Add the following code to the start_bot() function:

dispatcher.add_handler(CommandHandler('get_image', get_image,
    filters=Filters.user(username="@YOUR_USERNAME")))

Replace YOUR_USERNAME with your username.
This code will execute the get_image function, but only if the username matches your own. With this filter we are only passing in one username, but you could also pass in a list of usernames. Now let's create the get_image function:

def get_image(update, context):
    terms = ",".join(context.args).lower()
    update.message.reply_text(f"Getting Image For Terms: {terms}")
    # Reply with a photo by URL; the Unsplash source endpoint returns
    # a free image matching the search terms
    update.message.reply_photo(f"https://source.unsplash.com/featured/?{terms}")

Like in the previous example, we get the terms using the args variable from the context, but in this case we join the terms together with a "," and convert to lowercase, because that is what is required by the image API. You can then get an image in the chat using /get_image and some keywords like this:

Adding A GUI

Now we have learned how to get a bot running and handling commands. Thanks to the multi-threaded nature of Python Telegram Bot, this bot can handle many users from all around the world. In many projects on freelance sites such as Upwork the client does not care what language is used to create the bot. However, they often want an interface for managing the bot, so let's create a simple interface that allows the bot owner to start and stop the bot.

To build our user interface we'll use the PySimpleGUI library. With PySimpleGUI you can create cross-platform GUIs that work on Windows, Mac and Linux, with an easily readable syntax and minimal boilerplate code.
To begin adding the GUI code, let's first remove the updater.idle() line from our start_bot function, so that it reads like this:

def start_bot():
    global updater
    updater = Updater(
        '### YOUR TELEGRAM BOT AUTHENTICATION KEY HERE ###',
        use_context=True)
    dispatcher = updater.dispatcher
    dispatcher.add_handler(CommandHandler('start', start))
    dispatcher.add_handler(MessageHandler(Filters.text, repeater))
    dispatcher.add_handler(CommandHandler('echo', echo))
    dispatcher.add_handler(CommandHandler('get_image', get_image,
        filters=Filters.user(username="@YOUR_USERNAME")))
    updater.start_polling()

By removing the updater.idle() line, the bot no longer pauses the script after starting and runs in a separate thread until we decide to stop the bot or the main thread stops.

Now we'll create a GUI. This GUI will consist of a status line to show whether the bot is currently turned on, along with a start and a stop button, and a title, like this:

To create this GUI, first import PySimpleGUI (import PySimpleGUI as sg) and add the following code:

def gui():
    layout = [[sg.Text('Bot Status: '), sg.Text('Stopped', key='status')],
              [sg.Button('Start'), sg.Button('Stop'), sg.Exit()]]
    window = sg.Window('Finxter Bot Tutorial', layout)
    while True:
        event, _ = window.read()
        if event in (None, 'Exit'):
            break
    window.close()
The window.read() function returns any GUI events along with any values passed along with the event (such as user inputted text), we won’t use any values in our loop so we pass them to the _ variable, you can pass a time to wait to the windows read function in milliseconds, passing nothing as we have makes the function wait until an event is triggered. The if event in (None, ‘Exit’): statement executes if the user hits the Exit button or the user closes the window by another means (such as the close button in the corner of the window), in this case we simply break the loop. Starting And Stopping The Bot From The GUI Now if you start the script the start and stop buttons won’t actually do anything so we’ll add in the code to start and stop the script and update the status making our gui function look like this: def gui(): layout = [[sg.Text('Bot Status: '), sg.Text('Stopped', key='status')], [sg.Button('Start'), sg.Button('Stop'), sg.Exit()]] window = sg.Window('Finxter Bot Tutorial', layout) while True: event, _ = window.read() if event == 'Start': if updater is None: start_bot() else: updater.start_polling() window.FindElement('status').Update('Running') if event == 'Stop': updater.stop() window.FindElement('status').Update('Stopped') if event in (None, 'Exit'): break if updater is not None and updater.running: updater.stop() window.close() Looking at this code you can see we added two different event conditions, the Start and Stop share the same names as our buttons, and when a button is pressed in PySimpleGUI an event is triggered based on the button name. In our start event we start the bot using start_bot if there is no updater yet, otherwise we execute the start_polling method of our updater as re-starting the updater in this way is much quicker than using start_bot to initialize the bot. We also use the find_element function of the window to access the status text using the key we created of ‘status’ and change that to show the bot is running. 
Turning GUI Buttons On And Off

Now if we start our script we get the user interface: pressing the Start button will start the bot, and pressing the Stop button will stop the bot, but we get an error when we press the buttons out of order. We'll remedy this by modifying our event loop to disable the Start and Stop buttons, so the user can only do one or the other at the appropriate times:

if event == 'Start':
    if updater is None:
        start_bot()
    else:
        updater.start_polling()
    window.FindElement('Start').Update(disabled=True)
    window.FindElement('Stop').Update(disabled=False)
    window.FindElement('status').Update('Running')
if event == 'Stop':
    updater.stop()
    window.FindElement('Start').Update(disabled=False)
    window.FindElement('Stop').Update(disabled=True)
    window.FindElement('status').Update('Stopped')

You can see that we used the FindElement method on the buttons here, and then, using the Update method, changed the disabled argument, which allows you to disable buttons. If you start up now you'll see that at first both buttons are enabled, so we'll have to make a small modification to the layout. Change the layout variable to the following:

layout = [[sg.Text('Bot Status: '), sg.Text('Stopped', key='status')],
          [sg.Button('Start'), sg.Button('Stop', disabled=True), sg.Exit()]]

Buttons should now enable and disable appropriately within the GUI as shown:

And there we have it, a working GUI for our Telegram bot.

Conclusion

We learned quite a bit in this tutorial, and now you can have a Python Telegram bot that responds to commands and even presents data fetched from the web. We also created a simple UI for starting and stopping the bot. With your new skills you're now ready to answer those potential clients on freelancing sites looking for a Telegram bot.

About the Author

Johann is a Python freelancer specializing in web bots and web scraping. He graduated with a diploma in Computer Systems Technology in 2009. Upwork Profile.
https://blog.finxter.com/python-telegram-bot/
Agenda See also: IRC log <laurent_lefort_cs> Previous: <krzysztof_j> i can do it <laurent_lefort_cs> Scribenick: kryzystof_j <krzysztof_j> laurent_lefort_cs: had anybody a look at Raul's slides <michael> <krzysztof_j> not so far, will look at them asap <myriam> could you upload the slides as pdf or ppt instead of pptx? <laurent_lefort_cs> PPt is here: <Payam> +q <Payam> colour coding is a very good idea <krzysztof_j> +q <krzysztof_j> michael: slide 3 and 4 look okay maybe a bit oo much detail <krzysztof_j> krzysztof_j: yes, especially the Situations part <michael> +q <krzysztof_j> +q <cory> sorry ... could you repost the link to presentation? <krzysztof_j> cory: it is in the last mail from Raul <krzysztof_j> the file is calles SSN-XG ontology v4.pptx <michael> +q <krzysztof_j> laurent_lefort_cs: yes maybe to complex and we can leave out some details <janowicz> michael: our observation <krzysztof_j> imho it is too complex <michael> shouldn't get caught trying to show everything on each diagram. The diagrams get too complex if we do that. Better if they are abstractions and then the figures in particular sections show all the details. <michael> Saying that, I think it's fine to have a diagram that tried to show everything, but it's probably more useful as an 'appendix' rather than as the first thing a reader sees. 
<michael> +q <krzysztof_j> Payam: helps to clarify the alignment <Payam> +q <krzysztof_j> Payam: we need a different graphical notation for the alignment, e.g., by color <krzysztof_j> +q <michael> +q <krzysztof_j> I would use different color or namespace for the classes but not introduce new signatures for realtions such as subclassing <Payam> Maybe clarifying that the alignment means "sameAs" or something similar <myriam> or different shapes <Payam> +q <Payam> -q <krzysztof_j> -q <laurent_lefort_cs> Laurent: Figure 4 - we can add dul: prefix to make it more apparent that the classes are for DUL <krzysztof_j> krzysztof_j: splitting dul and ssn may be very difficult for the documentation. why not simply highlight some aspects instead of trying to have this huge overview <krzysztof_j> laurent_lefort_cs: should fig 4 go into the dul module description <myriam> +q <krzysztof_j> michael: too complex as an intro to the ontology <krzysztof_j> michael: start with simplified views and hide other details in the introduction <Payam> agree with Michael <krzysztof_j> ack <krzysztof_j> michael: we can go into more details if we have to in later parts of the report (e.g., about about alignment details) <krzysztof_j> ) but it is too much for the intro <krzysztof_j> laurent_lefort_cs: is a figure/table about the modules missing? <krzysztof_j> +q <Payam> +q <krzysztof_j> myriam: dolce can really help us to integrate other ontologies <krzysztof_j> myriam: dolce parts are also helpful in the diagramm to get the idea and the underlying assumptions <krzysztof_j> 2010.pdf <Payam> yes <cory> yes <krzysztof_j> sorry, there is a space between SSO and 2010 this is part of the url <krzysztof_j> yes <krzysztof_j> (and it also contains the last version of the ontology and the dolce alignment) <krzysztof_j> (the last version of the alignment and sensor design pattern is available at) <krzysztof_j> laurent_lefort_cs: michael+krzysztof what is your plan for the alignment? 
a less complex version of the paper <michael> we are meeting after this to discuss <krzysztof_j> we will know in two days whether the paper will be accepted or not but we can use it for the report anyway <krzysztof_j> +q <Payam> -q <krzysztof_j> +q <krzysztof_j> laurent? <michael> no laurent? <Payam> ok <krzysztof_j> +q <krzysztof_j> laurent_lefort_cs: any other updates on the documentation and the modules <krzysztof_j> Andriy: some rework needed e.g., the observations. can we reuse the example in the documentation page? <krzysztof_j> +q <krzysztof_j> <michael> +q <krzysztof_j> -q <krzysztof_j> kryzystof_j: mix between the procedure that defines the observation and the procedure that defines how the sensor is constructed <krzysztof_j> (see e.g., page 8 of the SSO 2010 paper for details and examples) <krzysztof_j> laurent, this was not a criticism but just a pointer that we all should have a look at the modules and documentation to make sure it is consistent <krzysztof_j> with the ontology <krzysztof_j> I agree, again it was not a criticism <laurent_lefort_cs> End to End examples: <laurent_lefort_cs> Type definition <laurent_lefort_cs> Instances <krzysztof_j> laurent can you argue where you see the problem? <michael> There are many things for which reasoners (with Abox) don't exactly locate the errors - it's a hard problem. I don't think that we should restrict our modelling because of weaknesses in error reporting in tools. If we did there are probably lots of things we would have to change. 
<krzysztof_j> +q
<krzysztof_j> -q
<krzysztof_j> ack to michael
<krzysztof_j> cory: ack to krzysztof_j
<michael> +q
<krzysztof_j> laurent_lefort_cs: we just need to make sure that it is consistent (and not just a QCR somewhere while it is avoided in all other parts)
<michael> +q
<cory> +q
<krzysztof_j> (hmmm, laurent is right)
<krzysztof_j> cory: the problem is to make a uniform ontology (make sure that there is no part that is way more complex than other parts)
<laurent_lefort_cs> vote: go (let the ontology as it is - and document the restrictions) - postpone (delay the resolution for after the XG) - no go (removing the restrictions)
<laurent_lefort_cs> no go
<michael> go
<laurent_lefort_cs> Raul email: postpone
<krzysztof_j> postpone or go (imo DUL also uses QCR)
<Andriy> postpone
<Payam> go
<cory> go
<myriam> maybe postpone
<krzysztof_j> ok then ig go gor GO
<krzysztof_j> i go for go
<krzysztof_j> :)
<krzysztof_j> +q
<michael> I'm happy to document
<krzysztof_j> -q
<krzysztof_j> -q
<krzysztof_j> vote: go
<krzysztof_j> laurent_lefort_cs: we need a documentation
<krzysztof_j> DUL uses closures and hence in most cases they do not require QCR
<krzysztof_j> laurent_lefort_cs: all please check whether we really need QCR
<krzysztof_j> laurent_lefort_cs: cory can you help with the pro/con QCR decision?
<krzysztof_j> cory: yes
<krzysztof_j> yes
<myriam> +q
<janowicz_> back
<janowicz_> (i cannot join anymore the w3c system does not let me in)
<Payam> Myriam: follow up will focus on continuing the current work or will/can also focus on applications of ontology?
<cory> +q
<cory>
<myriam> ok thank you ;)
<krzysztof_j> (the w3c telco system does not let me join again - probably because of the time. just FYI: we had the linked spatiotemporal data workshop last week at GIScience 2010 and some people have been interested in doing a VoCamp at ISWC 2010 on linked sensor data, citizens as sensors, and an even smaller version of ontology pattern based on our work and dolce.
anybody interested to join? let me know)
<cory> hello?
<michael> sure
<cory> sorry Laurent, phone died
<laurent_lefort_cs> Conference is finished Thanks everyone
<michael> bye
<Payam> Bye!
<cory> Laurent ... I'll send you something soon
<cory> yes
<krzysztof_j> michael?

This is scribe.perl Revision: 1.135 of Date: 2009/03/02 03:52:20
Check for newer version at
Guessing input format: RRSAgent_Text_Format (score 1.00)
Succeeded: s/alignment details/alignment details)/
Succeeded: s/udnerlying/underlying/
Succeeded: s/after this ti discuss/after this to discuss/
Succeeded: s/reusue/reuse/
Found ScribeNick: kryzystof_j
WARNING: No scribe lines found matching ScribeNick pattern: <kryzystof_j> ...
Inferring Scribes: kryzystof_j
WARNING: 0 scribe lines found (out of 233 total lines.) Are you sure you specified a correct ScribeNick?
Default Present: Payam, Andriy, michael, laurent_lefort_cs, myriam, krzysztof_j, +1.937.775.aaaa, janowicz
Present: Payam Andriy michael laurent_lefort_cs myriam krzysztof_j +1.937.775.aaaa janowicz
Regrets: Oscar Raul Danh Kevin Kerry
Agenda:
Got date from IRC log name: 22 Sep 2010
Guessing minutes URL:
People with action items:
[End of scribe.perl diagnostic output]
http://www.w3.org/2010/09/22-ssn-minutes.html
This article describes how to configure the API Component to serve as a "proxy" for an existing SOAP service provider, forwarding/returning the request and response payloads as-is.

Use Case

In a scenario where we would like to follow a WSDL-first approach, we can use Dell Boomi SOAP services to run as a proxy service. For instance, you may only want to change the SOAP endpoint to send requests to Boomi while keeping the WSDL exactly as provided by the externally hosted SOAP service provider.

When publishing SOAP web service processes in AtomSphere, Boomi generates its own WSDL based on the Web Services Server Operation. This WSDL includes Boomi-added items including Boomi namespaces and additional XML elements for the operation object and lists. You can also check the help pages on the API SOAP tab, which show the settings that we will use later in this article.

This article assumes that you have API Management enabled on your account. Using API Management, we have the ability to easily configure and manage API settings.

Approach

In this example we use the GeoIPService from. This service's publicly accessible WSDL can be found here:. Dell Boomi will dynamically generate a new process; we just define the folder where we will house this process and the connection details to use for the SOAP client connection.

At a high level, we will follow these steps:

- Create a SOAP connection
- Create an API and import endpoints using that SOAP connection
- A process will get created automatically
- Change settings to allow the same WSDL details, using the API SOAP tab
- Finally, deploy the API component and related process on our Atom

In this example, on the Shared Web Server panel we will use the API Type set to Intermediate and the Authentication Type set to None. If you would like, you can enable authentication and further protect the service. Boomi allows the use of Basic authentication or client certificates, as mentioned on the Shared Web Server panel.
Implementation

- Create a new SOAP UI project to confirm that the WSDL is valid and generate the Request profile.
- Log in to AtomSphere and create a new Connection for the Web Service SOAP Client and provide the connection details. We will be using this during the API creation steps. This will open a new form for connection details such as the WSDL URL, the SOAP endpoint URL and authentication information, if any. Once filled in, click the Save and Close button.
- Switch to the API Management console and click the APIs button in the main menu. Then click the Create an API button to start the API creation process.
- Provide a name for the new API, i.e. GetGeoIP, provide the Base API Path, i.e. getgeoip, and press the Save button. Click the Import an Endpoint button to start the endpoint import. If you do not provide the Base API Path, the generated WSDL will be merged into the main WSDL; in this case we would like to keep each WSDL separate.
- You will see the Add Processes form; here you need to select the appropriate option. I will be using Import from external service. Import and Configure Process Settings will open; in the Import Service field, click Choose File and provide the WSDL URL:, provide the Process location folder and select the Connection that we created earlier.
- Click Next again and follow the prompts. Boomi will retrieve the operations from the given WSDL and will prompt you to select the operations that you want to import.
- You will see the final step with "Success! The process was successfully imported into your API." Click the Finish button to complete the step.
- At this point click the Save and Close button to save the API operation and processes that were created during the import steps. Boomi has automatically created a process which looks like this:
- If you would like, you can click View Deployment or use the Deploy tab to directly deploy this process and see the WSDL generated by Boomi.
The proxy service will work just fine, but the WSDL will contain Boomi-generated items; you can follow the next steps to further set up Boomi to host a custom WSDL.

From the API Management console, go to the API Catalog link, click on the GetGeoIP API and then click on the SOAP section to see the Base URL Path and WSDL URL Path. They will look like:

Base URL Path:
WSDL URL Path:

Now the API and process are created; however, the generated WSDL will refer to the Boomi WSS connector namespace and will also contain Boomi-specific parameters. We can browse the WSDL by pointing to the WSDL URL: We can test that the API works by creating a new SOAP UI project pointing to the above URL. Which is great, but we would like to keep the request and WSDL the same as the external provider's.

In order for us to use the WSDL from the external SOAP provider, follow these additional steps. Browse the API by going back to and click on the SOAP link as shown below:

Update the WSDL Namespace to point to the namespace from the original service provider. Ensure that the SOAP version is selected based on your requirements; by default it is 1.1, and in this case we want to use both 1.1 and 1.2.

Tick the checkbox Omit Operation Wrappers; with this, Boomi will send the request payload as-is to the WSDL service provider, in this case. It is important to check this, otherwise requests will fail if the WSDL Namespace is updated without checking it.

In this case we also want to use a Custom WSDL, as the Boomi-generated WSDL uses literals in the XSD validation steps and presents the request as-is to the external SOAP service. Click Choose a File in the Upload Custom WSDL section and select the original WSDL, in this case the WSDL from the external host:. You will need to save the WSDL to local disk for this step. Ensure while saving the WSDL that there are no extra lines or spaces, otherwise this step will not work.

Finally, click the Save button and redeploy your API.
Now we are ready to do another test by creating a new project in SOAP UI, or we can add a new endpoint to the project that we created in step 1. We will be using the Boomi Base URL Path: in this case. It is the same as the API Catalog link from the API console.

Now we have set up Boomi as a proxy for an externally hosted SOAP web service. The request and the WSDL are the same, but now the endpoint is hosted on Boomi!
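For reference, with Omit Operation Wrappers enabled the body you post to the Boomi endpoint is the same SOAP envelope you would post to the original service. The envelope below is only a sketch: the operation name (GetGeoIP), parameter (IPAddress) and namespace (http://www.webservicex.net/) are assumptions based on the GeoIPService described above, so verify them against the WSDL you actually imported:

```xml
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:web="http://www.webservicex.net/">
   <soapenv:Header/>
   <soapenv:Body>
      <web:GetGeoIP>
         <web:IPAddress>8.8.8.8</web:IPAddress>
      </web:GetGeoIP>
   </soapenv:Body>
</soapenv:Envelope>
```

Posting this envelope to the Boomi Base URL Path should return the same response structure as posting it directly to the external endpoint, which is a quick way to confirm the proxy is transparent.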
https://community.boomi.com/docs/DOC-3445
WordWrap word boundary, non-breaking <nobr>, special character slash "/", wrap/split issue.

- billouparis

Hello,

I would like to display a text composed of several words with wrapMode set to the Text.WordWrap enumeration value. This text contains what I consider being a word which contains a special character (a slash character) in the middle of the word itself: "90km/h". Sadly Qt is not considering this as a single word, and is splitting the word before the "h".

Code example:

import QtQuick 2.2
import QtQuick.Window 2.1

Window {
    id: idRoot
    width: 800
    height: 480
    visible: true

    Rectangle {
        id: pipoRect
        width: childrenRect.width
        height: width
        color: "blue"

        Text {
            wrapMode: Text.WordWrap
            id: pipoText
            width: 530
            height: 200
            color: "yellow"
            font.pixelSize: 24
            text: "Défaut suspension: Limitez votre vitesse à 90km/h"
        }
    }
}

The "word" 90km/h is split on two lines: the first line shows "90km/" and the second line shows "h". What would be the best solution to get "90km/h" not split, and displayed in this case on the second line? Of course this text is just one example, the text can be random, and I need a generic solution to prevent Qt from considering / as a word boundary. Thank you for any hint!

- billouparis

Well I thought I found a kind of solution by using this line:

text: "Défaut suspension: Limitez votre vitesse à <nobr>90km/h</nobr>"

But this just does not work and still provides the same result, with "90km/" on the first line and the "h" on the second line. And I'm expecting "90km/h" on the second line instead :(

- billouparis

It seems ok with this!

text: "Défaut suspension: Limitez votre vitesse à 90km\u2060/\u2060h"

This is the unicode character for word joiner! Good night!
Bill

This works for me as well. I have the same issue, but in QTextBlock, when text symbols immediately break after the slash ('/') symbol. Thanks a lot for your research.
https://forum.qt.io/topic/58764/wordwrap-word-boundary-non-breaking-nobr-special-character-slash-wrap-split-issue
+++ Jon Dowland [2012-11-16 19:41 +0000]:
> and please use usertags
> in future to collect together bugs under a single MBF.

Usertags are very flexible but rather undiscoverable. I discovered this yesterday which lets you find out what users exist and thus what tags:? and, arguably more useful, the 'by users' list:

Just thought it worth mentioning to help others who find users/usertags mysterious (took me ages to understand how it works).

I do think it would be useful to have more tags in the default namespace, as nearly all user namespaces have exactly one tag, but that's a different discussion.

Wookey
--
Principal hats: Linaro, Emdebian, Wookware, Balloonboard, ARM
https://lists.debian.org/debian-devel/2012/11/msg00446.html
- Help/Advice with XSL / XPath needed - simple xpath question - [ANN] Build Java UI using XML - xsl:variable question - XML as a stream protocol. - Schema question regarding some mandatory/some optional values - DTD design advice: evaluated markup in CDATA sections? - Child Elements/Nesting - Newbie : is it possible to write file after parse several files withxerces ? - creating xml within java app - XSLT problem - NunniMCAX: new minimal xml parser Open Source - Castor and collections: mapping file needed for unmarshalling? - Converting XSD into DTD? - How to center a <fo:table> element ? (XSLFO) - libxml2/libxslt crash - [ANN] Ibex XSL-FO Formatter 2.0 Released - Use of following to get unique nodes??? - TUG2004: Registration is open now - Searching random XML documents - Help in asp report using xml - Generate XML on fly in Java dataobject. - XSL-FO - Formatting single line with different text-alignments - What does invalid character mean? - [ANN] Exchanger XML Editor 1.3 Released - DTD Logic - File not found with Xerces C++ - Newbie: Help with xml and xsl stylesheet - Advice on ways to extend a type using XSD - generate xml with subnode - C++ XML Data Binding - Condition coding in XML Schema? - Help: getting data from an external xml file - Streaming xml questions - External DTD loading - XML + XSL (Parser) + MathML - New version of ExamXML - Namespaces - XSLT reduce or count number of matching output elements - Where's msxml 2.5 - Problem with CDATAs - Schema maxInclusive puzzle - How to merge XML (with XSLT?) - XDR approval - XForms : Tree edition - Modularization using modularized xhtml - namespace problem with XSLT - must specify "encoding" attribute in DTD. Why? - CVXML - Curriculum Vitae schema - [Q] A simple C++ parser ? - xml html codes - trying to get started with simple xml stuff - use xalan ? - When fractionDigits and totalDigits won't do the job (?) 
- XPath : comparing a value that starts with a prefix - Schema parser - external dtd or schema with xerces ? - Newbie: XSLT substring or date format - Problems porting a program from one computer to another. MSXML version? - No way to use entities in a document using schema? - XML XSL javascript, sorting child node of data island - NT Bug , I dunno a moment of your time please. - parsing string data through castor - Wrapping XML in CDATA - [newbie] XML for scientific data - NEWBIE: XPATH question - Generate valid XHTML - NEWBIE: Very simple XPATH query - How to restrict elements by reference to another element using XML schema - Is it possible with xerces ? - xml schema - Using XSL to delete fixed text in arbitrary elements - XML Europe 2004 Program Now Available..register today with early bird rates - EAD/Etext/XML Courses at Rare Book School - Req: Info on XML Document Management Systems - Using XSL where source docuement has DOCTYPE/DTD declaration - ANN: Optimal XML Conference. 31 March 2004. Cambridge, UK. - XML Validation - XML Validation Problem - [VoiceXMl] Tutorial or Web site ? - Compare & Merge XML documents - When to use #FIXED? - Help with Carriage Return in XSL - xml schema viewer? - parameter as xml filename - XML Schema Question - Parsing issue with Apache - serverxmlhttp through ISA server - VML - Help - how to manage an indented file with dom. - XML Validation - xpath with regex - Content processors, XML accelerators (hardware) - Anything faster than Saxon with similiar features? - command line tool for comparing XML files - Help with XPATH "position" of element - What is the good XML editor? - XSLT for webpages - Cattell, Florescu, Gray, Melton to discuss marriage of XML, SQL, web services, grid computing - Color-coding verbs only - XML Newbie:Can this Be Done - The Ebusiness Bible - <key><keyref> does not work in XMLSpy - Simple mark-up for CMS - How can i put all tag contents into CDATA? - How can i put all tag contents into CDATA? 
- Creating links with xsl from xml - Problem converting XML with XSLT to another XML - ANNOUNCE: new XML reader and writer - How do I filter records from XML doc? - lists in attributes - Validating XML document using Xerces-J - get value to standard out when writing to file. - Help: getting data from an external xml file - Good SGML DTD viewer *or* tool for translating SGML DTDs to XML DTDs - Q: XSLT transforming result of one template with another template- how? - XSLT Problem - XML schema validation question restricting"attributes" count - W3C Issues RDF and OWL Recommendations - Widespread support for XDR-based WSDL? - Validating XML data - XSL: choosing another node if one is not available - xsl-fo starting page-number - February meeting of the Washington Area XML Users Group - XML Wiki schemas (like WikiPedia)? - GML validation and XPath - Where to ask newbie questions about RSS? - String patterns in schema - trying to apply CSS to DOM/OPML - XSLT remplate matching - Embedding SVG within XHTML - Schema documentation tool? - .Re: XPath: how to find out name of preceding siblings - Q: XPath: how to find out name of preceding siblings - XML schema - key/keyref and inheritance - parser problem - [JDOM] problems including a DTD - unable to build cocoon 2.1.3 sources file - Misaligning text columns in fop - xml schema -> rdb schema? - Dynamic content ala HTML - XML only database - cocoon cinclude problem in xsp - xsl for each for every 5 element - Converting Simple BNF grammar to DTD or schema - XSD Validation of XML - FA: The XML Handbook Third Edition - Validating using external DTD - Schema questions - cross document identity constraints. - XML extension to Trac/java - xmlspy: how to modify behavior - WSDL- Mapping Application Defined Names to XML Names - XML "templates" and Java (?) - Undeclared Tag Error in SQL - xsl:variable increment - element farms (containers for repeated elements) needed? - element farms (containers for repeated elements) needed? 
- element farms (containers for repeated elements) needed? - XSLT benchmarking and performance advice ? - XML file for a database - XML with SVG - XML Newsgroup software - question about attributes - read xml files with java - Undeclared Tag Error - Schema validation using Xerces-C++ version 1.6.0? - XLST: is there a formal definition? - Looking for a soft... - Serch The Resutl Knot For An Textretrieval - Client/Server SOAP Library written in C - get namaspaces for element - Schema validation using Xerces-C++ version 1.6.0? - XPath: //B[2] doesn't select anything - XSL: textarea with xsl code??? - ANNOUNCE: XMLSPY CERTIFICATION BETA OPEN! - classes acting like instances - NEWBIE: Enforcing uniqueness in XML schema - Consuming a webservice from an XSD - XML schema regular expressions question and recommended XML Schema book - XSLT Select nodes without text-node children whose names starts with specifix text - Help: targetNamespace value on my machine... - java DOM: hide prefix from app - C++ XML parsers? - XSLT adding your own functions - Extreme Markup Languages 2004 Call for Participation - XSLT to produce subtree of a tree - xsl transform... - XPATH generation from Schema - NEWBIE: XML restriction query - Mobile Application Markup Language(MAML) - xsf fo - DTD entity reference resolve - [SVG] Animations - help!!! ado + xml - xml and purpose of ccs xsl - Calling a transform XML document from HTML - new small xml parser Open Source - How do you pronounce xsl-fo? - XSD Schema import - Including inline XHTML elements in a Relax NG schema - how to select parent? - MathMl in DocBook - DTD Question surrounding duplicate child elements - regex xpath expressions - XML transform via XLS - create expandable tree from XML - refresh xml - Converting XML between charsets - Tree view - Traversion order cf. output order in XSL - Strict order of elements being a pain - xerces - problem Validating sequnece of Tag values using schema - XML/XSLT Combining queries into XML ? 
- XeMaiL feedback wanted - String interpreted correct? - Create XML file using to Pocket PC? - XML Schema newbie question - MSXML says a string doesn't fit the pattern defined in the schema - XSLT: Converting a number into a character? - XSL show/hide - XPath performance w/ Namespaces - XPath to nodes with containing single quotes - dbXML for VB6 - Call For Papers: 18 CS & CE Conferences, Las Vegas, June 21-24, 2004 - circular references in schemas - What's wrong with my XSL ? - How to convert XML subscription file to OPML - conditional restriction in schema? - Tool for creating XSL from 2 XML files? - Error when using entity in XSLT - container elements for repeating elements ('element farms') needed? - Status of XUpdate; representing multiple edits - Can't seen to use insertBefore - xsl:fo several items in one table cell - XML, DBI::AnyData and mySQL - unordered elements in DTD example...bogus? - Transformation between XML instance data and HTML form - urn's and wsdl file - Just two quick questions - WSDL definitions - container elements for repeating elements ('element farms') needed? - XSL:FO Question - [xerces-c] How to access a particular node? - XSL Transform Tree into Table - XSL : unwanted carriage returns/white space - type-specific elements vs type attribute - Why isn't my DOM search code working? - image and binary data - DTD generation from XML Schema - CSS is seriously broken
http://www.velocityreviews.com/forums/archive/f-32-p-29.html
cv::Mat makes VS 2017 freeze when hovered

hey, I'm new to OpenCV, and i encountered a weird bug... somehow, when I hover the mouse over cv::Mat, my Visual Studio just freezes. the only way I can fix it is to close the program. is there any fix regarding this issue?

let me elaborate the problem more:

#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

int main()
{
    std::vector<uchar> array;

    Mat getImage = imread("image.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    if (!getImage.data)
    {
        cout << "Could not open or find the image" << std::endl;
    }
    else
    {
        //imshow("Open an Image", getImage);
        if (getImage.isContinuous())
        {
            array.assign(getImage.datastart, getImage.dataend);
        }
        else
        {
            for (int i = 0; i < getImage.rows; ++i)
            {
                array.insert(array.end(), getImage.ptr<uchar>(i), getImage.ptr<uchar>(i) + getImage.cols);
            }
        }
    }

    cvWaitKey(0);
    destroyAllWindows();
}

the problem occurs when I hover my mouse over "Mat" in "Mat getImage". i'm using visual studio 2017

It is not an opencv bug but a problem with your install : check.... I use VS2017 (communauty or enterprise) and there is no problem

@LBerger thanks for responding to my question, but so far i didn't have this problem until I installed OpenCV. I worked on .net in the same VS2017 before, and it worked fine. Is there any way other than reinstalling VS2017?

How did you install opencv? If you use git clone try to disable git inside VS2017

@LBerger by downloading from, then adding the installation path to the windows environment, and adding the include folder, linker, and so on. I can run the code just fine, and opencv shows fine, but only when i hover over Mat the VS hangs. I searched it online, kinda like this problem... and... thanks

Have you tried Andrew T's answer?

@LBerger yep i tried, it works, but the downside is i can't see the tooltip anymore >.< just want to share if it was a bug here or on the VS2017 side. (i just found out about the 1st link after posting it here though)

try to wait 5 minutes, the database needs to update
https://answers.opencv.org/question/192391/cvmat-make-vs-2017-freeze-when-hovered/
Hi guys,

Here is my sample code,

Code Java:

public class EqualsCheck {

    public static void main(String[] args) {
        EqualsCheck eq = new EqualsCheck();
        eq.checkBytes();
    }

    public void checkBytes() {
        Byte myByte = new Byte("111");

        if (myByte.toString() == myByte.toString()) {
            System.out.println("== is True");
        } else {
            System.out.println("== is False");
        }

        if (myByte.toString().equals(myByte.toString())) {
            System.out.println("equals is True");
        } else {
            System.out.println("equals is False");
        }
    }
}

When I run it, I get the following output,

Code Java:

== is False
equals is True

I am confused, because myByte.toString() always gives back the String value "111". Then how come == returns false and equals returns true? I know that == checks the references and equals checks the actual objects, but where are those references that == checks which are not the same? Can someone please clarify this for me?

Thanks in advance!

Goldest
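A minimal sketch illustrating where the two different references come from (the class and method names here are mine, not from the thread): each call to toString() allocates a fresh String object, so == compares two distinct objects even though their contents are equal.

```java
public class EqualsDemo {

    // true only if two successive toString() calls return the very same object
    static boolean sameReference(Byte b) {
        return b.toString() == b.toString();
    }

    // true if the two calls return equal character sequences
    static boolean sameValue(Byte b) {
        return b.toString().equals(b.toString());
    }

    public static void main(String[] args) {
        Byte myByte = Byte.valueOf("111");
        // Byte.toString() builds a new String on every call (it is not interned),
        // so the two references differ while the character data matches
        System.out.println(sameReference(myByte)); // false
        System.out.println(sameValue(myByte));     // true
    }
}
```

Note the contrast with compile-time literals: "111" == "111" can be true because the compiler interns string constants into a shared pool, but strings built at runtime, like the ones toString() returns, are separate objects.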
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/6485-difference-between-%3D%3D-equals-printingthethread.html
t0mm0 Wrote:
cool. the current megavideo code could perhaps be expanded to add login info? the code is from here so we should contribute it back if we add anything (though sadly there is no git or svn available it seems). i think also that if it is a 'd=' link, you can just change the domain to megaupload and get the original file even if you are not a premium member which might also be useful (though i haven't seen too many megavideo d links in the wild and i haven't seen anywhere a way of converting from v to d links...)
t0mm0

slyi Wrote:
Could you add icefilms resolver?

Eldorado Wrote:
I may be wrong, but I don't think this is the idea of the resolver.. what you wrote there should be the addon side of things

slyi Wrote:
I updated code above to enable both, I don't see why we would put these at the plugin level as you still resolving these semi complex urls and it cleaner to have in th script library.

slyi Wrote:
I think these resolvers should be included.

rogerthis Wrote:
Would including the icefilms resolver impact the chances of the module getting into the xbmc official repo? Is that something that could be still on the cards, or have anybody been in touch about it either way?

rogerthis Wrote:
Also shouldn't the tubeplus resolver be out too, it's the same as icefilms, an aggregator as opposed to a hoster.

Eldorado Wrote:
Just seeing how things are going, anything we can do to help get it finished and finally released? I think for myself the only thing I am in need of is Context Menu support

def create_contextmenu(self, menuname, scriptargs, newlist=False, contextmenuobj=''):
    ...
    ...
    if not contextmenuobj:
        contextmenuobj = []
    if newlist:
        contextmenuobj.append((menuname, u'XBMC.Container.Update(%s?%s)' % (self.url, scriptargs)))
    else:
        contextmenuobj.append((menuname, u'XBMC.RunPlugin(%s?%s)' % (self.url, scriptargs)))
    return contextmenuobj

listitem.addContextMenuItems(contextmenuobj, replaceItems=True)
contextMenuItems.append(('Show Information', 'XBMC.Action(Info)'))

def add_directory(self, queries, title, img='', fanart='', total_items=0, is_folder=True, contextmenuobj='', contextreplace=False):
    ...
    ...
    if contextmenuobj:
        listitem.addContextMenuItems(contextmenuobj, replaceItems=contextreplace)
http://forum.kodi.tv/printthread.php?tid=105707&page=18
In order to have a socket server for the ESP32 to reach, we will implement our own using Python. You can check in detail how to set up a socket server in Python on this previous post.

The tests were performed using a DFRobot's ESP32 module integrated in a ESP32 development board.

The Python code

To get started, we will import Python's socket module, which will make available the functions we need to set up the socket server.

import socket

After that, we create an object of class socket, which we will use to configure the server and to listen to incoming connections.

s = socket.socket()

Now that we have our socket object, we need to bind it to an IP and port. These will be the two parameters that our socket client needs to know in order to reach the server. To perform this binding, we need to call the bind method on our socket object, passing as first input the IP, as a string, and as second input the port, as a number.

Since we are going to expose the server to be reached in the local network, we will use the '0.0.0.0' IP, which will bind the socket to all the IP addresses on the machine. Later, we will need to find out what is the actual local IP of the machine, so we can reach it from the client.

We will use port 8090 in this code, but you can try with another port. Just make sure the port you are going to use is available and not already in use by another application.

s.bind(('0.0.0.0', 8090))

After the binding, we need to make our server start listening to connections. We do it by calling the listen method on the socket object. Note that this method receives as input the number of unaccepted connections that are allowed before refusing new connections [1]. Since our simple use case doesn't involve multiple clients, we can pass the value 0.

s.listen(0)

To start receiving the actual connections, we need to call the accept method on our socket object. This is a blocking method, which means the execution of the program will stop until a new client connects to the server.
Once a client connects to the server, this method returns a pair with a new socket object and the address of the client. We can then use this new socket object to establish the communication with the connected client. Note that since sockets are bi-directional mechanisms, we can both send and receive data from it.

Assuming that we want our server to run indefinitely, we do this accept method call inside an infinite loop.

while True:
    client, addr = s.accept()
    # client handling code

Now we will start receiving data from the client. We will also assume that the client will be the one closing the connection after sending all the data, so we also read the data inside a nested loop, which will only break when the client disconnects.

To receive data sent by the client, we simply need to call the recv method on the socket object returned by the previously called accept method. Note that in Python 3.x (the one I'm using for this tutorial) the recv method returns the data as a bytes object. On the other hand, in Python 2.x, the data is returned as a string, so if you are using an older Python version you should adapt your code accordingly.

As input, recv receives the maximum number of bytes to receive at once. If more than the specified number of bytes are sent by the client, they can be retrieved with other calls to the recv method.

One important thing to keep in mind is that the recv method is also blocking. So, after calling this method, the execution will block until either the client sends some data or disconnects. In case of disconnection, the recv method will return an empty bytes object, which we can leverage as a stopping condition for the data reading loop.

while True:
    client, addr = s.accept()
    while True:
        content = client.recv(32)
        if len(content) == 0:
            break
        else:
            print(content)

Finally, when the client disconnects, we call the close method on the client socket object to free the resources and go back to listening for a new connection.
The final Python code can be seen below and already includes this call.

import socket

s = socket.socket()
s.bind(('0.0.0.0', 8090))
s.listen(0)

while True:
    client, addr = s.accept()
    while True:
        content = client.recv(32)
        if len(content) == 0:
            break
        else:
            print(content)
    print("Closing connection")
    client.close()

The Arduino code

In the Arduino code, we will start by including the WiFi.h library, so we can connect to a WiFi network and then establish the socket connection.

#include <WiFi.h>

As global variables, we will declare the credentials of the WiFi network to which we are going to connect. We will need the network name (SSID) and password.

const char* ssid = "yourNetworkName";
const char* password = "yourNetworkPass";

We will also declare as global variables the IP address and port of the socket server we have implemented in Python. As we have seen in the Python code, we did not bind the server to a particular IP address, but rather used the '0.0.0.0' IP, which means the server should be available on all the IPs of that machine. Thus, we need to figure out the local IP of the machine on the local network, so we can use that IP on the Arduino code.

The easiest way to obtain the IP is by typing the ipconfig command on the command line if you are on Windows, or the ifconfig command if you are on Linux. These commands should be run on the machine that will be running the Python code.

Note that for the ESP32 to be able to reach the Python server hosted on a computer, both devices need to be on the same network, since we are not doing any port forwarding and we are simply using local IPs. Alternatively, you can obtain your machine's local IP from this website.

Note that in the code below I'm using the local IP of my computer on my local network. Yours will most likely be different. The port was 8090, as we have defined in the Python code.
const uint16_t port = 8090;
const char * host = "192.168.1.83";

Moving on to the Arduino setup function, we simply open a serial connection to output the results of our program and take care of connecting to the WiFi network, using the previously declared credentials.

void setup() {
  Serial.begin(115200);
  WiFi.begin(ssid, password);

  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.println("...");
  }

  Serial.print("WiFi connected with IP: ");
  Serial.println(WiFi.localIP());
}

Next, in the Arduino loop, we will periodically establish the connection to the server and send some data to it. First, we declare an object of class WiFiClient, which we will use to establish the connection and send the data.

WiFiClient client;

Then, to establish the actual connection, we call the connect method on our WiFiClient object, passing as first input the IP of the server and as second the port. This method returns 1 if the connection was successful and 0 otherwise, so we can use it for error checking. If the connection fails, we print a message to the serial port and delay for a second before trying again.

if (!client.connect(host, port)) {
    Serial.println("Connection to host failed");
    delay(1000);
    return;
}

In case of success, we move on and send the actual data to the server, which is done by calling the print method on the WiFiClient object, passing as input the string to send. Note that there are other methods we could use to send data to the server, such as the write method.

client.print("Hello from ESP32!");

Since this is a simple introductory tutorial, we will not be expecting data back from the server, so we can simply finish the connection by calling the stop method on our WiFiClient object, thus freeing the resources.

client.stop();

The final source code is shown below. It contains some additional prints and a 10 second delay between each iteration of the Arduino loop.
#include <WiFi.h>

const char* ssid = "yourNetworkName";
const char* password = "yourNetworkPass";

const uint16_t port = 8090;
const char * host = "192.168.1.83";

void setup() {
  Serial.begin(115200);
  WiFi.begin(ssid, password);

  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.println("...");
  }

  Serial.print("WiFi connected with IP: ");
  Serial.println(WiFi.localIP());
}

void loop() {
  WiFiClient client;

  if (!client.connect(host, port)) {
    Serial.println("Connection to host failed");
    delay(1000);
    return;
  }

  Serial.println("Connected to server successful!");
  client.print("Hello from ESP32!");
  Serial.println("Disconnecting...");
  client.stop();

  delay(10000);
}

Testing the code

To test the whole system, we start by compiling and uploading the Arduino code to the ESP32. Once the procedure finishes, open the Arduino IDE serial monitor. You should get an output similar to figure 1. Since the Python server is not yet running, the connection attempts should fail.

Figure 1 – Output of the program when the Python socket server is not connected.

Next, run the Python code in the tool of your choice. In my case, I'm running it on IDLE, the Python IDE that comes with the language installation. Once the server is up and running, you should start receiving the messages from the ESP32, as illustrated in figure 2.

Figure 2 – Output of the Python program when receiving data from the ESP32.

If you go back to the Arduino IDE serial monitor, you should now see the connection and disconnection messages we included in the code, as shown in figure 3.

Figure 3 – Successful connection to the Python socket server on the ESP32.

7 Replies to "ESP32 Arduino: Sending data with socket client"

How to send text from Python Code to ESP32?

Hi! Please check this tutorial:

Hope it helps 🙂

Best regards,
Nuno Santos
Opened 7 years ago
Closed 3 years ago

#15804 closed Bug (fixed)

Query lookup types should be scoped to the last joined field's model

Description

This is kind of obscure and best described by example. I have two models:

from django.db import models
from django.contrib.gis.db import models as geo_models

# Note this is a geo-aware model.
class Place(geo_models.Model):
    geom = geo_models.GeometryField()

# Note this is NOT a geo-aware model.
class Person(models.Model):
    name = models.CharField(max_length=50)
    hometown = models.ForeignKey(Place)

I'd like to be able to do this:

Person.objects.filter(hometown__geom__intersects=some_geometry)

...but that doesn't work. It gives me this error:

django.core.exceptions.FieldError: Join on field 'geom' not permitted. Did you misspell 'intersects' for the lookup type?

This happens because Person isn't a geo-aware model, so it doesn't know that __intersects is a valid lookup type. I can fix it by changing Person to extend django.contrib.gis.db.models and adding the GeoManager, but that feels hacky because the Person model itself doesn't have any geo fields.

The solution could be to change Django's join code so that it looks at the model of the last field in the chain when determining whether a lookup is valid. I haven't looked into how difficult this would be, though. I think it's just enough of an edge case that it wouldn't be worth fixing if it required a ton of refactoring and hoop-jumping. But if it's easy, let's do it.

Change History (3)

comment:1 Changed 7 years ago by
comment:2 Changed 7 years ago by
comment:3 Changed 3 years ago by

Not sure when this was fixed, but it seems to work at least in 1.8+.

This is pretty much related to the fact that all things linking to GIS models require a GeoManager for just this reason. In my head, it's not amazingly hard to do this and has been a "should get around to that" item for quite a while.
We would effectively promote the manager to the correct subclass (which drags in the right QuerySet subclass) as we proceed down the chain. Particularly in the GIS case, this makes things a *lot* easier. In practice, it may be a little fiddly, but I don't think it requires enormous amounts of internal refactoring. Roughly speaking, an attribute on the manager to say if it needs to be "promoted to" should we enter that class and awareness of that attribute by various clone methods.
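The promotion idea can be illustrated with a toy model. This is plain Python of my own, not Django internals, and every class and attribute name here (`queryset_class`, `relations`, `lookup_allowed`) is invented for illustration: the point is simply that the validity of the final lookup is decided by the queryset class of the last model reached while walking the relation chain.

```python
# Toy illustration (not Django source): scope lookup validity to the
# *last joined field's model*, as the ticket proposes.

class QuerySet:
    lookups = {"exact", "lt", "gt", "in"}

class GeoQuerySet(QuerySet):
    lookups = QuerySet.lookups | {"intersects", "contains", "within"}

class Model:
    queryset_class = QuerySet
    relations = {}                     # field name -> related model class

class Place(Model):
    queryset_class = GeoQuerySet       # geo-aware model

class Person(Model):
    relations = {"hometown": Place}    # plain model with an FK to Place

def lookup_allowed(model, path):
    """Walk 'a__b__lookup', hopping models on FK joins; the model reached
    last decides which lookup vocabulary applies."""
    current = model
    for part in path.split("__"):
        if part in current.relations:            # join: promote to related model
            current = current.relations[part]
        elif part in current.queryset_class.lookups:
            return True                          # valid lookup for that model
        # otherwise: a plain field such as 'geom'; stay on the current model
    return False

print(lookup_allowed(Person, "hometown__geom__intersects"))  # True
print(lookup_allowed(Person, "hometown__geom__bogus"))       # False
```

Without the promotion step, the check would run against Person's own (non-geo) queryset and reject `intersects`, which is exactly the behavior the ticket describes.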
import "golang.org/x/exp/io/spi"

Package spi allows users to read from and write to an SPI device.

Example illustrates a program that drives an APA-102 LED strip.

Code:

dev, err := spi.Open(&spi.Devfs{
    Dev:      "/dev/spidev0.1",
    Mode:     spi.Mode3,
    MaxSpeed: 500000,
})
if err != nil {
    panic(err)
}
defer dev.Close()

if err := dev.Tx([]byte{
    0, 0, 0, 0,
    0xff, 200, 0, 200,
    0xff, 200, 0, 200,
    0xe0, 200, 0, 200,
    0xff, 200, 0, 200,
    0xff, 8, 50, 0,
    0xff, 200, 0, 0,
    0xff, 0, 0, 0,
    0xff, 200, 0, 200,
    0xff, 0xff, 0xff, 0xff,
    0xff, 0xff, 0xff, 0xff,
    0xff, 0xff, 0xff, 0xff,
    0xff, 0xff, 0xff, 0xff,
}, nil); err != nil {
    panic(err)
}

type Devfs struct {
    // Dev is the device to be opened.
    // Device name is usually in the /dev/spidev<bus>.<chip> format.
    // Required.
    Dev string

    // Mode is the SPI mode. SPI mode is a combination of polarity and phases.
    // CPOL is the high order bit, CPHA is the low order. Pre-computed mode
    // values are Mode0, Mode1, Mode2 and Mode3. The value of the mode argument
    // can be overridden by the device's driver.
    // Required.
    Mode Mode

    // MaxSpeed is the max clock speed (Hz) and can be overridden by the device's driver.
    // Required.
    MaxSpeed int64
}

Devfs is an SPI driver that works against the devfs. You need to have loaded the "spidev" Linux module to use this driver.

Open opens the provided device with the specified options and returns a connection.

Close closes the SPI device and releases the related resources.

SetBitOrder sets the bit justification used to transfer SPI words. Valid values are MSBFirst and LSBFirst.

SetBitsPerWord sets how many bits it takes to represent a word, e.g. 8 represents 8-bit words. The default is 8 bits per word.

SetCSChange sets whether to leave the chipselect enabled after a Tx.

SetDelay sets the amount of pause that will be added after each frame write.

SetMaxSpeed sets the maximum clock speed in Hz. The value can be overridden by the SPI device's driver.

SetMode sets the SPI mode. SPI mode is a combination of polarity and phases.
CPOL is the high order bit, CPHA is the low order. Pre-computed mode values are Mode0, Mode1, Mode2 and Mode3. The value can be changed by the SPI device's driver.

Tx performs a duplex transmission to write w to the SPI device and read len(r) bytes into r. The user should not mutate w and r until this call returns.

Mode represents the SPI mode number where clock parity (CPOL) is the high order and clock edge (CPHA) is the low order bit.

Order is the bit justification to be used while transferring words to the SPI device. MSB-first encoding is more popular than LSB-first.
Created attachment 203197 [details] Untested patch for rejecting "/." and "/.." with EACCES After loading the mqueuefs module, calling mq_open() with "/.." or "/." as name in a C program run by root crashes the system. I assume it's a panic but it reboots too quickly to read the text. Doing this as non-root does nothing and EACCES is produced. mq_unlink("/.") as root successfully removes . from mqueuefs, and mq_unlink("/..") as root removes both .. and . Trying to unlink either as non-root just produces EACCES. After this a non-root user can create queues with these names and use them as any other queue. Listing the directory where mqueuefs is mounted while . or .. exists as queues also crashes the system. I have not tested whether programs running inside jails can cause this crash or also get EACCES. I've created a patch which I think should fix this, but I haven't tested it at all. I wasn't sure whether to pick 12.0-RELEASE or 12.0-STABLE; uname -a says: FreeBSD freebsd 12.0-RELEASE FreeBSD 12.0-RELEASE r341666 GENERIC amd64 Yeah, I see the bug. I think kern_kmq_open basically allows you to open any arbitrary single-component name you want in the mqfs, including "." and "..", which are special and reserved. I didn't try to replicate the exact panic, but the bug seems present in CURRENT as well. I should add, your patch looks basically correct to me. The concept seems good but I disagree with the chosen error numbers. Quotes below are from POSIX.1-2016. For mq_open(), [EINVAL] "The mq_open() function is not supported for the given name." seems to fit best. For mq_unlink(), [ENOENT] "The named message queue does not exist." seems plausible, or also [EINVAL]. (In reply to Jilles Tjoelker from comment #3) > The concept seems good but I disagree with the chosen error numbers. +1 My reasoning for choosing EACCES is that it's what unprivileged processes already get, and Linux also returns this error. 
I agree that EINVAL is more accurate though, and compatibility shouldn't matter in this case. I'm a bit surprised that the approach in my patch is the proper solution. Wouldn't it be better to check the type of node before opening or unlinking it? Created attachment 203274 [details] Reject /. and /.. with EINVAL (In reply to Torbjørn Birch Moltu from comment #5) > I'm a bit surprised that the approach in my patch is the proper solution. > Wouldn't it be better to check the type of node before opening or unlinking > it? Nodes don't exist in the mqfs namespace for '.' and '..'. cem@ or jilles@ will you commit this? Torbjørn Birch Moltu would you care to add a test case? Sure, I am happy to commit it. A commit references this bug: Author: cem Date: Tue May 21 21:26:14 UTC 2019 New revision: 348067 URL: Log: mqueuefs: Do not allow manipulation of the pseudo-dirents "." and ".." "." and ".." names are not maintained in the mqueuefs dirent datastructure and cannot be opened as mqueues. Creating or removing them is invalid; return EINVAL instead of crashing. PR: 236836 Submitted by: Torbj?rn Birch Moltu <t.b.moltu AT lyse.net> Discussed with: jilles (earlier version) Changes: head/sys/kern/uipc_mqueue.c
Appendix A
Suppliers of Mice

A.1 Major suppliers of common inbred and outbred strains
A.2 Other commercial sources of mice

A.1 Major suppliers of common inbred and outbred strains a
a Listed in alphabetical order

A.2 Other commercial sources of mice a
a Listed in alphabetical order

Appendix B
Computational Tools and Electronic Databases

B.1 On-line electronic databases
B.1.1 Mouse genetics
B.1.2 Humans and other organisms
B.2 Mouse Genetics Community Bulletin Board
B.3 Computer Programs
B.3.1 Linkage analysis
B.3.2 Mouse colony management and genetic databases

B.1 On-line electronic databases

An ever-increasing amount of genetic information can be easily accessed and selectively retrieved over the Internet by using a software interface protocol known as Gopher, developed at the University of Minnesota. Any researcher with an Internet link can take advantage of this protocol by loading onto their computer a platform-specific version of a "Gopher client program" available without charge from the Gopher development group. Their E-mail address is [Gopher@boombox.micro.umn.edu]. Versions of the Gopher client program are available for all common operating systems that run on IBM-compatible PCs, Macintoshes, workstations, minicomputers, and mainframes.
For more information on this program, send an E-mail message to Fetch@Dartmouth.edu.

As this book is being written, the Gopher software and its associated formats are still less than three years old and expanding rapidly. It is a near-certainty that the data sources described below will be enriched and supplemented with new sources of information and modes for their retrieval. In addition, other more sophisticated electronic highways are under development. The most advanced of these, known as the "World Wide Web (WWW)", can be accessed with the use of NCSA Mosaic client software. You should contact your local computer advisor for the most up-to-date information on the most advanced system available at the time of this reading.

B.1.1 Mouse genetics

The central database server for mouse geneticists is located at the Jackson Laboratory. It can be reached in a number of different ways: (1) by using Gopher to trace a path from the "Home Gopher Hole" at the University of Minnesota through the U.S. to Maine to "The Jackson Laboratory"; (2) by pointing Gopher directly at GOPHER.INFORMATICS.JAX.ORG; or (3) by using NCSA Mosaic to connect to the Jackson Laboratory through the World Wide Web.

This Gopher Hole contains a number of useful databases including the Mouse Locus Catalog (or MLC) and the Genomic Database of the Mouse (or GBASE). MLC originated as an electronic version of the chapter entitled "Catalog of mutant genes and polymorphic loci" (Green, 1989) in the Lyon and Searle compendium Genetic Variants and Strains of the Laboratory Mouse. It is updated regularly by staff members of the Jackson Laboratory led by Dr. M. T. Davisson and Dr. D. P. Doolittle. Each locus is represented by a separate record within the database that contains a detailed description of phenotype and other characteristics. The entire database can be searched to retrieve a list of loci (with associated descriptions and citations) that contain a particular search word or combination of words anywhere within the locus record.
The entire database can be searched to retrieve a list of loci (with associated descriptions and citations) that contain a particular search word or combination of words anywhere within the locus record. GBASE is a comprehensive database at the Jackson Laboratory containing mouse mapping information. Investigators must login to GBASE (as a guest) to access the data contained within it. GBASE can be reached either through Gopher or via a direct internet linkage with the NCSA Telnet protocol. Once on-line, an investigator can retrieve the accumulated linkage data published on any set of loci, linkage maps, as well as allele distribution patterns for common inbred strains and all associated citations. For further assistance, send an E-mail message to mgi-help@Jax.Org. The Jackson Laboratory database server also has catalog files that can be downloaded with information on available mouse strains and mutant stocks (both live and in the form of DNA samples). Many other sources of genetic information, both at the Jackson Laboratory and elsewhere are also accessible through this Gopher Hole. One can jump from this Gopher Hole directly to the "Computational Biology" gopher hole with human genetic data at Johns Hopkins University (described below), and one can also search through the Genbank sequence database directly from here. It is also possible to download updated data files for use by the Encyclopedia of the Mouse Genome which is described below. The Genome Center at the Whitehead Institute, MIT provides a highly specialized but very useful source of information through either an automated E-mail query and answer protocol or anonymous FTP (File Transfer Protocol). This Genome Center has characterized and mapped over 3,000 polymorphic mouse microsatellite markers (as of Januray, 1994) that are each defined by a unique pair of oligonucleotide primers. 
By filling out and transmitting a special E-mail query form, an investigator can receive an automatic response containing information about sets of microsatellite markers defined by various criteria, including chromosomal location and polymorphisms between particular strains. It is possible to retrieve primer sequences as well as a graphical representation of a chromosome map. For further information, contact Lincoln Stein at the Whitehead Institute [E-mail address: lstein@genome.wi.mit.edu].

B.1.2 Humans and other organisms

A central Gopher Hole for human genetic information is maintained at Johns Hopkins University. It can be reached by tracing a path through Maryland to Johns Hopkins University to "Computational Biology", or by means of the Jackson Laboratory Gopher Hole described above. Once there, you should work your way down through the "Genome Project" to the "Genome Data Base (GDB) folder." Once you have reached this location, it becomes possible to search various databases of human genetic information including the Genome Database (GDB) itself, which is a compendium of human mapping data, other genetic information, and citations. At this Gopher Hole, it is also possible to search the electronic version of Victor McKusick's Mendelian Inheritance in Man (called OMIM), which contains comprehensive records on all defined human loci. For further information, the GDB staff can be reached at the E-mail address [Help@gdb.org] or the mailing address [GDB User Support, Genome Data Base, Johns Hopkins University, 2024 E. Monument Street, Baltimore, MD 21205-2100].

Other Gopher Holes dedicated to the human genome project are maintained at the Genethon Center in France (point to [Gopher.genethon.fr 70]) and the UK Mapping Project (point to [Menu.crc.ac.uk 70]). A database of Drosophila mapping information is maintained at Indiana University (point to [fly.bio.indiana.edu]) and the yeast database is maintained at Stanford University (point to [genome.stanford.edu]). Mapping information on C.
elegans is maintained at the MRC in Cambridge, England; contact Richard Durbin for further information (E-mail address is [rd@mac-lib.cam.ac.uk]).

B.2 Mouse Genetics Community Bulletin Board

... and data sets for the Encyclopedia of the Mouse Genome described below.

B.3 Computer Programs

B.3.1 Linkage analysis

..., Whitehead Institute, 9 Cambridge Center, Cambridge, MA 02142.

B.3.2 Mouse colony management and genetic databases

MacMice and Mendel's Lab are Macintosh-based programs that can be used for keeping track of a breeding mouse colony, with records for animals, litters, and samples derived from them. The programs are described more fully in chapter 3 (Silver, 1986; Silver, 1993b). They were written by the author of this book and can be licensed for use from Mendel's Software. Click on the following site: for further information and ordering details, or send a FAX to Mendel's Software at 609-924-4382.

Appendix C
Source Materials for Further Reading

C.1 Selected monographs and books
C.1.1 General
C.1.2 Genetic information
C.1.3 Experimental embryology
C.1.4 Other topics
C.2 Selected Journals and Serials of Particular Interest
C.2.1 Original articles
C.2.2 Reviews

C.1 Selected monographs and books

C.1.1 General

Green, M. C. & Witham, B. A. (1991). Handbook on Genetically Standardized JAX Mice, Fourth edition, The Jackson Laboratory, Bar Harbor. [Provided free upon request.]

C.1.2 Genetic information

...

C.1.3 Experimental embryology

...

C.1.4 Other topics

Simmons, M. L. & Brick, J. O. (1970). The Laboratory Mouse: Selection and Management, Prentice-Hall, Englewood Cliffs, NJ.

Cook, M. J. (1965). The Anatomy of the Laboratory Mouse, Academic Press, New York. [An atlas of anatomical drawings].
C.2 Selected Journals and Serials of Particular Interest

C.2.1 Original articles

Cell
Cytogenetics and Cell Genetics
Genes and Development
Genetical Research
Genetics
Genomics
Immunogenetics
Journal of Heredity
Mammalian Genome
Mouse Genome (formerly Mouse Newsletter)
Nature
Nature Genetics
Proceedings of the National Academy of Science USA
Science

C.2.2 Reviews

Annual Review of Genetics
Current Opinion in Genetics and Development
Trends in Genetics

Appendix D
Statistics

D.1 Confidence limits and median estimates of linkage distance
D.1.1 The special case of complete concordance
D.1.2 The general case of one or more recombinants
D.1.3 A C program for the calculation of linkage distance estimates and confidence intervals
D.2 Quantitative differences in expression between two strains

D.1 Confidence Limits and Median Estimates of Linkage Distance

D.1.1 The special case of complete concordance

To illustrate the statistical approach used to estimate confidence limits on experimentally-determined values for linkage distances, it is useful to first consider the special case where two linked loci show complete concordance, or no recombination (symbolized as R = 0), in their allelic segregations among a set of N samples derived either from recombinant inbred strains or from the offspring of a backcross.

Let us define the true recombination fraction Q as the experimental fraction of samples expected to be discordant (or recombinant) when N approaches infinity. Then the probability of recombination in any one sample is simply Q, and the probability of non-recombination, or concordance, is simply (1 − Q). As long as multiple events are completely independent of each other, one can calculate the probability that all of them will occur by multiplying together the individual probabilities associated with each event. Thus, if the probability of concordance in one sample is (1 − Q), then the probability of concordance in all N samples is:

$$(1 - Q)^N$$
In most experimental situations, the known and unknown variables are reversed, in that one begins by determining the number of discordant (or recombinant) samples i that occur within a total set of N as a means to estimate the unknown true recombination fraction Q. When no discordant samples are observed, the probability term just derived can be used, with the substitution of the random variable q in place of Q, to provide a continuous probability density function indicative of the relative likelihoods for different values of Q between 0.0 (complete linkage) and 0.5 (no linkage):

$$p(Q = q) = (1 - q)^N \qquad \text{(D.1)}$$

This equation reads "the probability that the true recombination fraction Q is equal to a particular value q is the function of q given as the last term in the equation".

For both RI data and backcross data, Q is related directly to linkage distance in centimorgans, d. In the case of backcross data, and for values of Q less than 0.20 (see section 7.2.2.3), recombination fractions are converted into centimorgan estimates through simple multiplication:

$$d = 100\,Q \qquad \text{(D.2)}$$

In the case of RI data, this conversion is combined with the Haldane-Waddington equation (9.8) to yield:

$$d = \frac{100\,Q}{4 - 6Q} \qquad \text{(D.3)}$$

An example of the probability density function associated with the experimental observation of complete concordance among 50 backcross samples is shown in figure D.1. Each value of N will define a different function, but in all cases the curve will look the same, with only the steepness of the fall-off increasing as N increases. In all cases, the "maximum likelihood estimate" for the true recombination fraction (defined as the value of Q associated with the highest probability) will be zero. But since this maximum likelihood value is located at one end of the probability curve, it does not provide a useful estimate for the likely linkage distance.
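The two conversions between recombination fraction and map distance can be written out directly. This is a sketch of mine, not part of the book; the symbols follow the text, with Q (here `q`) the observed discordance fraction.

```python
def backcross_cm(q):
    # Equation D.2: for backcross data (q < 0.20), distance in cM is 100*q.
    return 100.0 * q

def ri_cm(q):
    # Equation D.3 (from the Haldane-Waddington relation R = 4r/(1 + 6r),
    # solved for r and scaled to centimorgans): d = 100*q / (4 - 6*q).
    return 100.0 * q / (4.0 - 6.0 * q)

print(backcross_cm(0.05))        # 5.0
print(round(ri_cm(0.05), 2))     # 1.35
```

The comparison makes the point of equation D.3 concrete: a 5% discordance among RI strains corresponds to a much shorter map distance (about 1.35 cM) than the same recombination fraction in a backcross, because each RI strain accumulates recombination events over many generations.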
A better estimate would be the value of Q which defines the midpoint below which and above which the true recombination fraction value is likely to lie with equal probability; this is the definition of the median recombination fraction estimate. In mathematical terms, the median estimate is defined at the line which equally divides the area of the complete probability density given by equation D.1 (see figure D.1).

Confidence limits are also defined by circumscribed portions of the entire probability density; the portion that lies outside a confidence interval is called α. For example, in the case of a 95% confidence interval, α = (1 − 0.95) = 0.05. It is standard practice to assign equal portions of α to the two "tails" of the probability density located before and after the central confidence interval. Thus, the lower confidence limit is defined as the value of q bordering the initial α/2 fraction of the area under the entire probability curve. The upper confidence limit is defined as the value of q that borders the ultimate α/2 fraction of the area under the entire probability curve; this is equivalent to saying that a (1 − α/2) fraction of the area lies ahead of the upper confidence limit.

In mathematical terms, the area beneath the entire probability density curve is equal to the definite integral of equation D.1 over the range of legitimate values for q between 0.0 and 0.5. To determine the fraction of the probability density that lies in the region between Q = 0 and any arbitrary Q = x, it is necessary to integrate over the probability density function (equation D.1) between these two values and divide the result by the total area covered by the probability density. This provides the probability that the true recombination fraction is less than or equal to x:

$$p(Q \le x) = \frac{\int_0^x (1-q)^N \, dq}{\int_0^{0.5} (1-q)^N \, dq} \qquad \text{(D.4)}$$

By standard methods of calculus, equation D.4 can be reduced analytically to the form:

$$p(Q \le x) = \frac{1 - (1-x)^{N+1}}{1 - (0.5)^{N+1}} \qquad \text{(D.5)}$$

And this equation can be reformulated to yield x as a function of p:

$$x = 1 - \left[ 1 - p\left( 1 - (0.5)^{N+1} \right) \right]^{1/(N+1)} \qquad \text{(D.6)}$$

By solving equation D.6 for different values of p, one can obtain critical values of x that define the median estimate of the recombination fraction from p = 0.5, lower confidence limits from p = α/2, and upper confidence limits from p = (1 − α/2). Once a solution for x has been obtained, it can be converted into a linkage distance value with either equation D.2 for backcross data or equation D.3 for RI strain data. Solutions to equation D.6 over a range of N RI strains and backcross animals are shown in figures 9.8, 9.16, and 9.17.
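Equation D.6, as reconstructed here, is simple to evaluate directly. The sketch below is mine (not the book's program); it returns the median estimate and a confidence interval for the recombination fraction when zero recombinants are observed among N samples.

```python
def q_at(p, n):
    # Equation D.6: the value x for which p(Q <= x) = p, in the special
    # case of i = 0 recombinants among n samples.
    return 1.0 - (1.0 - p * (1.0 - 0.5 ** (n + 1))) ** (1.0 / (n + 1))

def zero_recombinant_summary(n, alpha=0.05):
    # Median from p = 0.5; limits from p = alpha/2 and p = 1 - alpha/2.
    median = q_at(0.5, n)
    lower = q_at(alpha / 2.0, n)
    upper = q_at(1.0 - alpha / 2.0, n)
    return median, lower, upper

m, lo, hi = zero_recombinant_summary(50)
# For backcross data, multiply by 100 to convert to centimorgans (eq. D.2).
print(round(100 * m, 2), round(100 * lo, 3), round(100 * hi, 2))
# roughly 1.35, 0.05 and 6.98 cM
```

So complete concordance among 50 backcross samples gives a median distance estimate of about 1.35 cM, with a 95% confidence interval of roughly 0.05 to 7 cM, matching the shape of the probability density discussed above.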
By solving equation D.6 for different values of , one can obtain critical values of x that define the median estimate of the recombination fraction from , lower confidence limits from , and upper confidence limits from . Once a solution for x has been obtained, it can be converted into a linkage distance value with either equation D.2 for backcross data or equation D.3 for RI strain data. Solutions to equation D.6 over a range of N RI strains and backcross animals are shown in figures 9.8, 9.16, and 9.17. D.1.2 The general case of one or more recombinants Q, and (N i) events of concordance, each with an individual probability of (1 Q). These terms are multiplied together along with a "binomial coefficient" that counts the the permutations in which the two types of events can appear to produce the "binomial formula": ? ?(D.7) Q is unknown. In this case, one can substitute the random variable q in place of Q in equation D.7 to generate a probability density function that provides relative likelihoods for different values of Q between 0.0 (complete linkage) and 0.5 (no linkage). In this use of the binomial formula, the factorial fraction (known as the binomial coefficient) remains constant for all values of q and can be eliminated since the purpose of the function is to provide relative probabilities only: ? ?(D.8) An example of the probability density function associated with the experimental observation of one discordant RI strain among a total of 26 samples is shown in figure D.2..2.8 in place of the two occurrences of equation D.1 within equation D.4: ? ?(D.9) Tables D.1, D.2, D.3, D.4, D.5, and D.6 for 68% and 95% confidence intervals, but it is possible to generate confidence limits for any other integer percentile confidence interval. The program will also calculate maximum likelihood and median estimates of linkage distance. It is listed below as a self-contained unit that should be ready for compiling with any standard C compiler on any computer. 
D.1.3 A C program for the calculation of linkage distance estimates and confidence intervals

/*** A C program for the calculation of linkage distance estimates and confidence intervals ***/
#include <stdio.h>
...
/***********************/

D.2 Quantitative differences in expression between two strains

How does one determine whether two populations of animals defined by different inbred strains are showing a significant difference in the expression of a trait? The answer is with a test statistic known as the "t-test" or "Student's t test". Its calculation requires several quantities from each of the two sample sets. First is the size of each set, $N_1$ and $N_2$. Second is the mean expression level of each set, $\bar{x}_1$ and $\bar{x}_2$, calculated as:

$$\bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i \qquad \text{(D.10)}$$

where $x_i$ refers to the expression value obtained for the ith sample in the set. Third is the variance of each set of animals, $s_1^2$ and $s_2^2$, calculated as:

$$s^2 = \frac{\sum_{i=1}^{N} (x_i - \bar{x})^2}{N - 1} \qquad \text{(D.11)}$$

With values for the variance of each sample set and the size of each set, one can calculate a combined parameter referred to as the "pooled variance":

$$s_p^2 = \frac{(N_1 - 1)\,s_1^2 + (N_2 - 1)\,s_2^2}{N_1 + N_2 - 2} \qquad \text{(D.12)}$$

Finally, one can use the value obtained for the pooled variance together with the sample sizes and sample means to obtain a "t value":

$$t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{s_p^2 \left( \frac{1}{N_1} + \frac{1}{N_2} \right)}} \qquad \text{(D.13)}$$

One final combined parameter is required to convert the t value into a level of significance: the number of degrees of freedom, df.

$$df = N_1 + N_2 - 2 \qquad \text{(D.14)}$$

With values for t and df, one can obtain a P value from a table of critical values for the t distribution found in Table D.7.

Appendix E
Glossary of Terms

References

Abbott, C. (1992). Characterization of mouse-hamster somatic cell hybrids by PCR: A panel of mouse-specific primers for each chromosome. Mammalian Genome 2: 106-109. Adolph, S., and Klein, J. (1981). Robertsonian variation in Mus musculus from Central Europe, Spain, and Scotland. J. Hered. 72: 139-142. Agulnik, A. I., Agulnik, S. I., and Ruvinsky, A. O. (1991). Two doses of the paternal Tme gene do not compensate the lethality of the Thp deletion. J. Hered. 82: 351-353. Aitman, T. J., Hearne, C. M., McAleer, M. A., and Todd, J. A. (1991). Mononucleotide repeats are an abundant source of length variants in mouse genomic DNA. Mammal. Genome 1: 206-210. Altman, P.
L., and Katz, D., eds. (1979). Inbred and Genetically Defined Strains of Laboratory Animals: Part I, Mouse and Rat. (Fed. Amer. Soc. Exp. Biol., Bethesda). Alvarez, W., and Asaro, F. (1990, October, 1990). An extraterrestrial impact. Scientific American, p. 78-84. Ammerman, A. J., and Cavalli-Sforza, L. L. (1984). The Neolithic Transition and the Genetics of Populations in Europe. (Princeton University Press, Princeton, NJ USA). Armour, J. A. L., and Jeffreys, A. J. (1992). Biology and application of human minisatellite loci. Curr. Opin. Genet. Devel. 2: 850-856. Arnheim, N. (1983). Concerted evolution of multigene families. In Evolution of Genes and Proteins, Nei, M. and Koehn, R. K., eds. (Sinauer, pp. 38-61. Arnheim, N., Li, H., and Cui, X. (1991). Review: PCR analysis of DNA sequences in single cells: Single sperm gene mapping and genetic disease diagnosis. Genomics 8: 415-419. Artzt, K., Calo, C., Pinheiro, E. N., DiMeo, T. A., and Tyson, F. L. (1987). Ovarian teratocarcinomas in LT/Sv mice carrying t-mutations. Dev Genet 8: 1-9. Asada, Y., Varnum, D. S., Frankel, W. N., and Nadeau, J. H. (1994). A mutation in the Ter gene causing increased susceptibility to testicular teratomas maps to mouse chromosome 18. Nature Genetics 6: 363-368. Atchley, W. R., and Fitch, W. M. (1991). Gene trees and the origins of inbred strains of mice. Science 254: 554-558. Auffray, J.-C., Marshall, J. T., Thaler, L., and Bonhomme, F. (1991). Focus on the nomenclature of european species of mus. Mouse Genome 88: 7-8. Auffray, J.-C., Vanlerberghe, F., and Britton-Davidian, J. (1990). The house mouse progression in Eurasia: a palaeontological and archaeozoological approach. Biol. J. Linnean Soc. 41: 13-25. Avner, P. (1991). Genetics: Sweet mice, sugar daddies. Nature 351: 519-520. Avner, P., Amar, L., Dandolo, L., and Guénet, J. L. (1988). Genetic analysis of the mouse using interspecific crosses. Trends Genet. 4: 18-23. Bahary, N., Pachter, J. E., Felman, R., Leibel, R. 
L., Albright, K., Cram, S., and Friedman, J. M. (1992). Molecular mapping of mouse chromosomes 4 and 6: use of a flow-sorted Robertsonian chromosome. Genomics 13: 761-769.
Bailey, D. W. (1971). Recombinant-inbred strains. An aid to finding identity, linkage and function of histocompatibility and other genes. Transplantation 11: 325-327.
Bailey, D. W. (1978). Sources of subline divergence and their relative importance for sublines of six major inbred strains of mice. In Origins of Inbred Mice, Morse, H. C., eds. (Academic Press, New York), pp. 423-438.
Bailey, D. W. (1981). Recombinant inbred strains and bilineal congenic strains. In The Mouse in Biomedical Research, Vol. 1, Foster, H. L., Small, J. D., and Fox, J. G., eds. (Academic Press, New York), pp. 223-239.
Barany, F. (1991). Genetic disease detection and DNA amplification using cloned thermostable ligase. Proc. Nat. Acad. Sci. USA 88: 189-193.
Barker, D., Schafer, M., and White, R. (1984). Restriction sites containing CpG show a higher frequency of polymorphism in human DNA. Cell 36: 131-138.
Barlow, D., and Lehrach, H. (1990). Partial Not I digests generated by low enzyme concentration or by the presence of ethidium bromide can be used to extend the range of physical mapping. Technique 2: 79-87.
Barlow, D. P., and Lehrach, H. (1987). Genetics by gel electrophoresis: the impact of pulsed field gel electrophoresis on mammalian genetics. Trend. Genet. 3: 167-171.
Barlow, D. P., Stoger, R., Herrmann, B. G., Saito, K., and Schweifer, N. (1991). The mouse insulin-like growth factor type-2 receptor is imprinted and closely linked to the Tme locus. Nature 349: 84-87.
Bartolomei, M. S., Zemel, S., and Tilghman, S. (1991). Parental imprinting of the mouse H19 gene. Nature 351: 153-155.
Barton, N. H., and Hewitt, G. M. (1989). Adaptation, speciation and hybrid zones. Nature 341: 497-502.
Bateson, W., Saunders, E. R., and Punnett, R. C. (1905). Experimental studies in the physiology of heredity.
Reports to the Evolution Committee of the Royal Society 2: 1-55 and 80-99.
Beier, D. R. (1993). Single-strand conformation polymorphism (SSCP) analysis as a tool for genetic mapping. Mammal. Genome 4: 627-631.
Beier, D. R., Dushkin, H., and Sussman, D. J. (1992). Mapping genes in the mouse using single-strand conformation polymorphism analysis of recombinant inbred strains and interspecific crosses. Proc. Nat. Acad. Sci. USA 89: 9102-9106.
Berry, R. J. (1981). Population dynamics of the house mouse. Symp. zool. Soc. London 47: 395-425.
Berry, R. J., and Corti, M., eds. (1990). Biological Journal of the Linnean Society (Academic Press, London).
Berry, R. J., and Jakobson, M. E. (1974). Vagility and death in an island population of the house mouse. J. Zool. 173: 341-354.
Berry, R. J., Jakobson, M. E., and Peters, J. (1987). Inherited differences within an island population of the House mouse (Mus domesticus). J. Zool. London 211: 605-618.
Berry, R. J., and Peters, J. (1975). Macquarie Island house mice: a genetical isolate on a sub-Antarctic island. J. Zool. 176: 375-389.
Bickmore, W. A., and Sumner, A. T. (1989). Mammalian chromosome banding - an expression of genome organization. Trend Genet. 5: 144-148.
Bird, A., Lavia, P., MacLeod, D., Lindsay, S., Taggart, M., and Brown, W. (1987). Mammalian genes and islands of non-methylated CpG-rich DNA. In Human Genetics, Vogel, F. and Sperling, K., eds. (Springer-Verlag, Berlin), pp. 182-186.
Bird, A. P. (1986). CpG-rich islands and the function of DNA methylation. Nature 321: 209-213.
Bird, A. P. (1987). CpG islands as gene markers in the vertebrate nucleus. Trend. Genet. 3: 342-347.
Birkenmeier, E., Rowe, L., and Nadeau, J. (1994). The Jackson Laboratory backcross. Mammal. Genome 5: in press.
Bishop, C. E. (1992). Mouse Y chromosome. Mammal. Genome 3: S289-S293.
Bishop, C. E., Boursot, P., Baron, B., Bonhomme, F., and Hatat, D. (1985).
Most classical Mus musculus domesticus laboratory mouse strains carry a Mus musculus musculus Y chromosome. Nature 315: 70-72.
Blanchetot, A., Price, M., and Jeffreys, A. J. (1986). The mouse myoglobin gene. Eur. J. Biochem. 159: 469-474.
Bode, V. C. (1984). Ethylnitrosourea mutagenesis and the isolation of mutant alleles for specific genes located in the t-region of mouse chromosome 17. Genetics 108: 457-470.
Boer, P. H., Adra, C. N., Lau, Y.-F., and McBurney, M. W. (1987). The testis-specific phosphoglycerate kinase gene pgk-2 is a recruited retroposon. Mol. Cell. Biol. 7: 3107-3112.
Bohlander, S. K., Espinosa, R., LeBeau, M. M., Rowley, J. D., and Diaz, M. O. (1992). A method for the rapid sequence-independent amplification of microdissected chromosomal material. Genomics 13: 1322-1324.
Bonhomme, F. (1986). Evolutionary relationships in the genus Mus. Curr. Topics Micro. Immunol. 127: 19-34.
Bonhomme, F., Benmehdi, F., Britton-Davidian, J., and Martin, S. (1979). Analyse génétique de croisements interspécifiques Mus musculus L. x Mus spretus Lataste: liaison de Adh-1 avec Amy-1 sur le chromosome 3 et de Es-14 avec Mod-1 sur le chromosome 9. C. R. Acad. Sci. Paris 289: 545-548.
Bonhomme, F., Britton-Davidian, J., Thaler, L., and Triantaphyllidis, C. (1978). Sur l'existence en Europe de quatre groupes de souris (genre Mus L.) du rang espèce et semi-espèce, démontrée par la génétique biochimique. C.R. Acad. Sci. Paris 287: 631-633.
Bonhomme, F., Catalan, J., Britton-Davidian, J., Chapman, V. M., Moriwaki, K., Nevo, E., and Thaler, L. (1984). Biochemical diversity and evolution in the genus Mus. Biochem. Genet. 22: 275-303.
Bonhomme, F., Catalan, J., Gerasimov, S., Orsini, P., and Thaler, L. (1983). Le complexe d'espèces du genre Mus en Europe centrale et orientale. I. Génétique. Z. Säugetierkunde 48: 78-85.
Bonhomme, F., and Guénet, J.-L. (1989). The wild house mouse and its relatives. In Genetic Variants and Strains of the Laboratory Mouse., Lyon, M. F. and Searle, A.
G., eds. (Oxford University Press, Oxford), pp. 649-662.
Bonhomme, F., Guénet, J.-L., and Catalan, J. (1982). Présence d'un facteur de stérilité mâle, Hst-2, segrégant dans les croisements interspécifiques M. musculus L. x M. spretus Lataste et lié à Mod-1 et Mpi-1 sur le chromosome 9. C.R. Acad. Sc. Paris 294 Ser. III: 691-693.
Bonhomme, F., Guénet, J.-L., Dod, B., Moriwaki, K., and Bulfield, G. (1987). The polyphyletic origin of laboratory inbred mice and their rate of evolution. J. Linnean Soc. 30: 51-58.
Bonhomme, F., Martin, S., and Thaler, L. (1978). Hybridation en laboratoire de Mus musculus L. et Mus spretus Lataste. Experientia 34: 1140-1141.
Botstein, D., White, R. L., Skolnick, M., and Davis, R. W. (1980). Construction of a genetic linkage map in man using restriction fragment length polymorphisms. Am. J. Hum. Genet. 32: 314-331.
Boursot, P., Auffray, J.-C., Britton-Davidian, J., and Bonhomme, F. (1993). The evolution of house mice. Ann. Rev. Ecol. System.: in press.
Boursot, P., Bonhomme, F., Britton-Davidian, J., Catalan, J., Yonekawa, H., Orsini, P., Guerasimov, S., and Thaler, L. (1984). Introgression différentielle des génomes nucléaires et mitochondriaux chez deux semi-espèces européennes de Souris. C.R. Acad. Sci. Paris, Série III 299: 365-370.
Boyle, A. L., Ballard, S. G., and Ward, D. C. (1990). Differential distribution of long and short interspersed element sequences in the mouse genome: Chromosome karyotyping by fluorescence in situ hybridization. Proc. Nat. Acad. Sci. 87: 7757-7761.
Boyse, E. A., Miyazawa, M., Aoki, T., and Old, L. J. (1968). Two systems of lymphocyte isoantigens in the mouse. Proc. Royal Soc. (London) B 170: 175-193.
Britton, J., and Thaler, L. (1978). Evidence for the presence of two sympatric species of mice (genus Mus) in southern France based on biochemical genetics. Biochem. Genet. 16: 213-225.
Broccoli, D., Miller, O. J., and Miller, D. A. (1992). Isolation and characterization of a mouse subtelomeric-sequence.
Chromosoma 101: 442-447.
Bronson, F. (1984). The adaptability of the house mouse. Sci. Amer. 250: (3) 90-97.
Bronson, F. H., Dagg, C. P., and Snell, G. D. (1966). Reproduction. In Biology of the Laboratory Mouse, Green, E. L., eds. (McGraw-Hill, New York), pp. 187-204.
Brown, S. (1993). The European Collaborative Interspecific Backcross (EUCIB). Mouse Genome 91: v-xii.
Brown, S. D. M., Avner, P., and Herman, G. (1992). Mouse X chromosome. Mammal. Genome 3: S274-S288.
Bruce, H. M. (1959). An exteroreceptive block to pregnancy in the mouse. Nature 184: 105.
Bruce, H. M. (1968). Absence of pregnancy block in mice when stud and test males belong to an inbred strain. J. Reprod. Fert. 17: 407-408.
Bryant, S. P. (1991). Software for genetic linkage analysis. In Protocols in Human Molecular Genetics, Mathew, C. G., eds. (Humana Press, Clifton, NJ).
Bryda, E. C., DePari, J. A., Sant'Angelo, D. B., Murphy, D. B., and Passmore, H. C. (1992). Multiple sites of crossing over within the Eb recombinational hotspot in the mouse. Mamm Genome 2: 123-9.
Buckler, A. J., Chang, D. D., Graw, S. L., Brook, J. D., Haber, D. A., Sharp, P. A., and Housman, D. E. (1991). Exon amplification: A strategy to isolate mammalian genes based on RNA splicing. Proc Natl Acad Sci USA 88: 4005-4009.
Burke, D. T., Carle, G. F., and Olson, M. V. (1987). Cloning of large segments of exogenous DNA into yeast by means of artificial chromosome vectors. Science 236: 806-812.
Burke, D. T., Rossi, J. M., Koos, D. S., and Tilghman, S. M. (1991). A mouse genomic library of yeast artificial chromosome clones. Mammal. Genome 1: 65.
Cannizzarro, L. A., and Emanuel, B. E. (1984). An improved method for G-banding chromosomes after in situ hybridization. Cytogenet Cell Genet 38: 308-309.
Capecchi, M. (1989). The new mouse genetics: Altering the genome by gene targeting. Trends Genet. 5: 70-76.
Carter, A. T., Norton, J. D., Gibson, Y., and Avery, R. J. (1986).
Expression and transmission of a rodent retrovirus-like VL30 gene family. J. Mol. Biol. 188: 105-108.
Carter, T. C. (1954). The estimation of total genetical map lengths from linkage test data. J. Genet. 53: 21-28.
Carter, T. C., and Falconer, D. S. (1951). Stocks for detecting linkage in the mouse and the theory of their design. J. Genet. 50: 307-323.
Castle, W. E. (1903). The laws of Galton and Mendel and some laws governing race improvement by selection. Proc. Amer. Acad. Arts Sci. 35: 233-242.
Cattanach, B. M., and Kirk, M. (1985). Differential activity of maternally and paternally derived chromosome regions in mice. Nature 315: 496-498.
Ceci, J. D., Matsuda, Y., Grubber, J. M., Jenkins, N. A., Copeland, N. G., and Chapman, V. M. (1994). Interspecific backcrosses provide an important tool for centromere mapping of mouse chromosomes. Genomics 19: 515-524.
Chartier, F. L., Keer, J. T., Sutcliffe, M. J., Henriques, D. A., Mileham, P., and Brown, S. D. M. (1992). Construction of a mouse yeast artificial chromosome library in a recombination-deficient strain of yeast. Nature Genetics 1: 132-136.
Chevret, P., Denys, C., Jaeger, J.-J., Michaux, J., and Catzeflis, F. M. (1993). Molecular evidence that the spiny mouse (Acomys) is more closely related to gerbils (Gerbillinae) than to true mice (Murinae). Proc Natl Acad Sci USA 90: 3433-3436.
Chisaka, O., and Capecchi, M. R. (1991). Regionally restricted developmental defects resulting from targeted disruption of the mouse homeobox gene hox-1.5. Nature 350: 473-479.
Chromosome committee chairs (1993). Encyclopedia of the Mouse Genome III. Mammal. Genome 4: special issue: S1-S284.
Chumakov, I., Rigault, P., Guillou, S., Ougen, P., Billaut, A., Guasconi, G., Gervy, P., LeGall, I., Soularue, P., Grinas, L., Bougueleret, L., Bellanne-Chantelot, C., Lacroix, B., Barillot, E., Gesnouin, P., Pook, S., Vaysseix, G., Frelat, G., Schmitz, A., Sambucy, J.-L., Bosch, A., Estivill, X., Weissenbach, J., Vignal, A., Riethman, H., Cox, D., Patterson, D., Gardiner, K., Hattori, M., Sakaki, Y., Ichikawa, H., Ohki, M., Le Paslier, D., Heilig, R., Antonarakis, S., and Cohen, D. (1992). Continuum of overlapping clones spanning the entire human chromosome 21q. Nature 359: 380-387.
Cochran, W. G. (1954). Some methods for strengthening the common χ2 tests. Biometrics 10: 417-451.
Collins, F., and Galas, D. (1993). A new five-year plan for the U.S. human genome project. Science 262: 43-46.
Committee on standardized genetic nomenclature for mice (1989). Rules and guidelines for gene nomenclature. In Genetic Variants and Strains of the Laboratory Mouse., Lyon, M. F. and Searle, A. G., eds. (Oxford University Press, Oxford), pp. 1-12.
Cook, M. J. (1983). Anatomy. In The Mouse in Biomedical Research, Vol. 3, Foster, H. L., Small, J. D., and Fox, J. G., eds. (Academic Press, NY), pp. 101-120.
Copeland, N. G., and Jenkins, N. A. (1991). Development and applications of a molecular genetic linkage map of the mouse genome. Trends Genet 7: 113-8.
Corbet, G. B., and Hill, J. E. (1991). A World List of Mammalian Species. 3rd edition (Oxford University Press, New York).
Cornall, R. J., Aitman, T. J., Hearne, C. M., and Todd, J. A. (1991). The generation of a library of PCR-analyzed microsatellite variants for genetic mapping of the mouse genome. Genomics 10: 874-881.
Costantini, F., and Lacy, E. (1981). Introduction of a rabbit beta-globin gene into the mouse germ line. Nature 294: 92-94.
Courtney, M. G., Elder, P. K., Steffen, D. L., and Getz, M. J. (1982). Evidence for an early evolutionary origin and locus polymorphism of mouse VL30 DNA sequences. J. Virol. 43: 511-518.
Cox, D. R., Burmeister, M., Price, E. R., Kim, S., and Myers, R. M. (1990). Radiation hybrid mapping: a somatic cell genetic method for constructing high-resolution maps of mammalian chromosomes. Science 250: 245-250.
Cox, R. D., Copeland, N. G., Jenkins, N. A., and Lehrach, H. (1991). Interspersed repetitive element polymerase chain reaction product mapping using a mouse interspecific backcross. Genomics 10: 375-384.
Cox, R. D., Meier-Ewert, S., Ross, M., Larin, Z., Monaco, A. P., and Lehrach, H. (1993). Genome mapping and cloning of mutations using yeast artificial chromosomes. In Guides to Techniques in Mouse Development, Methods in Enzymology 225, Wassarman, P. M. and DePamphilis, M. L., eds. (Academic Press, San Diego), pp. 623-637.
Craig, J. M., and Bickmore, W. A. (1993). Chromosome bands - flavours to savour. Bioessays 15: 349-354.
Crow, J. F. (1990). Mapping functions. Genetics 125: 669-671.
Cuénot, L. (1902). La loi de Mendel et l'hérédité de la pigmentation chez les souris. Arch. Zool. exp. gén., 3e sér. 3: 27-30.
Cuénot, L. (1903). L'hérédité de la pigmentation chez les souris, 2me note. Arch. zool. exp. gén. 4: 33-38.
Cuénot, L. (1905). Les races pures et les combinaisons chez les souris. Arch. zool. exp. gén. 4: 123-132.
Cui, X., Gerwin, J., Navidi, W., Li, H., Kuehn, M., and Arnheim, N. (1992). Gene-centromere linkage mapping by PCR analysis of individual oocytes. Genomics 13: 713-717.
D'Eustachio, P., and Clarke, V. (1993). Localization of the twitcher (twi) mutation on mouse chromosome 12. Mammal. Genome 4: 684-686.
Daniels, D. L., Plunkett, G., Burland, V., and Blattner, F. R. (1992). Analysis of the Escherichia coli genome: DNA sequence of the region from 84.5 to 86.5 minutes. Science 257: 771-777.
Datta, S. K., Owen, J. E., Womack, J. E., and Riblet, R. J. (1982).
Analysis of recombinant inbred lines derived from autoimmune (NZB) and high leukemia (C58) strains: independent multigenic systems control B cell hyperactivity, retrovirus expression, and autoimmunity. J. Immunol. 129: 1539-1544.
Davisson, M. T. (1990). The Jackson Laboratory Mouse Mutant Resource. Lab Animal 19: 23.
Davisson, M. T. (1993). Personal communication.
Davisson, M. T., and Akeson, E. C. (1993). Recombination suppression by heterozygous Robertsonian chromosomes in the mouse. Genetics 133: 649-667.
Davisson, M. T., and Roderick, T. H. (1989). Linkage map. In Genetic Variants and Strains of the Laboratory Mouse., Lyon, M. F. and Searle, A. G., eds. (Oxford University Press, Oxford), pp. 416-427.
Davisson, M. T., Roderick, T. H., and Doolittle, D. P. (1989). Recombination percentages and chromosomal assignments. In Genetic Variants and Strains of the Laboratory Mouse., Lyon, M. F. and Searle, A. G., eds. (Oxford University Press, Oxford), pp. 432-505.
Dayhoff, M. O. (1978). Survey of new data and computer methods of analysis. In Atlas of Protein Sequence and Structure, 5, supp. 3, Dayhoff, M. O., eds. (National Biomedical Research Foundation, Silver Springs, MD), pp. 2-8.
de Boer, P., and Groen, A. (1974). Fertility and meiotic behavior of male T70H tertiary trisomics of the mouse (Mus musculus). A case of preferential telomeric meiotic pairing in a mammal. Cytogenet. Cell Genet. 13: 489-510.
DeChiara, T. M., Robertson, E. J., and Efstratiadis, A. (1991). Parental imprinting of the mouse Insulin-like growth factor II gene. Cell 64: 849-859.
Demant, P., and Hart, A. A. M. (1986). Recombinant congenic strains - a new tool for analyzing genetic traits determined by more than one gene. Immunogenetics 24: 416-422.
den Dunnen, J. T., and van Ommen, G.-J. B. (1991). Pulsed-field gel electrophoresis. In Protocols in Human Molecular Genetics, Methods in Molecular Biology 9, Mathew, C. G., eds. (Humana Press, Clifton, NJ), pp. 169-182.
Dietrich, W., Katz, H., Lincoln, S., Shin, H.-S., Friedman, J., Dracopoli, N. C., and Lander, E. (1992). A genetic map of the mouse suitable for typing intraspecific crosses. Genetics 131: 423-447.
Dobrovolskaia-Zavadskaia, N. (1927). Sur la mortification spontanée de la queue chez la souris nouveau-née et sur l'existence d'un caractère (facteur) héréditaire «non viable». C. r. Séanc. Soc. Biol. 97: 114-116.
Donehower, L. A., Harvey, M., Slagle, B. L., McArthur, M. J., Montgomery Jr., C. A., Butel, J. S., and Bradley, A. (1992). Mice deficient for p53 are developmentally normal but susceptible to spontaneous tumours. Nature 356: 215-221.
Donis-Keller, H., Green, P., Helms, C., Cartinhour, S., Weiffenbach, B., Stephens, K., Keith, T., Bowden, D., Smith, D., Lander, E., Botstein, D., Akots, G., Rediker, K., Gravius, T., Brown, V., Rising, M., Parker, C., Powers, J., Watt, D., Kauffman, E., Bricker, A., Phipps, P., Muller-Kahle, H., Fulton, T., Ng, S., Schumm, J., Braman, J., Knowlton, R., Barker, D., Crooks, S., Lincoln, S., Daly, M., and Abrahamson, J. (1987). A genetic linkage map of the human genome. Cell 51: 319-337.
Dover, G. (1982). Molecular drive: a cohesive mode of species evolution. Nature 299: 111-117.
Drouet, B., and Simon-Chazottes, D. (1993). The microsatellite found in the DNA sequence with the code name MMMYOGG1 (GenBank) does not correspond to the myogenin gene (Myog) but to myoglobin (Mb) and maps to mouse Chromosome 15. Mammal. Genome 4: 348.
Dunn, L. C. (1965). A Short History of Genetics. (McGraw-Hill, New York).
Eddy, E. M., O'Brien, D. A., and Welch, J. E. (1991). Mammalian sperm development in vivo and in vitro. In Elements of Mammalian Fertilization, Vol. 1, Wassarman, P. M., eds. (CRC Press, Boston), pp. 1-28.
Eicher, E. M. (1971). The identification of the chromosome bearing linkage group XII in the mouse. Genetics 69: 267-271.
Eicher, E. M. (1978). Murine ovarian teratomas and parthenotes as cytogenetic tools. Cytogenet. Cell Genet.
20: 232-239.
Eicher, E. M., and Shown, E. P. (1993). Molecular markers that define the distal ends of mouse autosomes 4, 13, and 19 and the sex chromosomes. Mammal. Genome 4: 226-229.
Elliott, R. (1979). Mouse variants studied by two-dimensional electrophoresis. Mouse News Lett 61: 59.
Elliott, R. W., and Yen, C.-H. (1991). DNA variants with telomere probe enable genetic mapping of ends of mouse chromosomes. Mammal. Genome 1: 118-122.
Elston, R. C., and Stewart, J. (1971). A general model for the genetic analysis of pedigree data. Hum. Hered. 21: 523-542.
Ephrussi, B., and Weiss, M. C. (1969). Hybrid somatic cells. Scient. Amer. 220 (April): 26-34.
Eppig, J. T., and Eicher, E. M. (1983). Application of the ovarian teratoma mapping method in the mouse. Cytogenet. Cell Genet. 20: 232-239.
Eppig, J. T., and Eicher, E. M. (1988). Analysis of recombination in the centromere region of mouse chromosome 7 using ovarian teratoma and backcross methods. J. Hered. 79: 425-429.
Erickson, R. P. (1989). Why isn't a mouse more like a man? Trend. Genet. 5: 1-3.
Erlich, H. A., eds. (1989). PCR Technology (Stockton Press, New York).
Evans, E. P. (1989). Standard Normal Chromosomes. In Genetic Variants and Strains of the laboratory mouse., Lyon, M. F. and Searle, A. G., eds. (Oxford University Press, Oxford), pp. 576-581.
Farr, C. J. (1991). The analysis of point mutations using synthetic oligonucleotide probes. In Protocols in Human Molecular Genetics, Methods in Molecular Biology 9, Mathew, C., eds. (Humana Press, Clifton, NJ), pp. 69-84.
Farr, C. J., and Goodfellow, P. N. (1992). Hidden messages in genetic maps. Science 258: 49.
Feingold, M. (1980). Preface. In Josephine: The Mouse Singer (A Play), McClure, M., eds. (New Directions Books, New York).
Ferguson-Smith, A. C., Reik, W., and Surani, M. A. (1990). Genomic imprinting and cancer. Cancer Surv. 9: 487-503.
Ferris, S. D., Sage, R. D., Huang, C.-M., Nielson, J. T., Ritte, U., and Wilson, A. C. (1983a).
Flow of mitochondrial DNA across a species boundary. Proc. Nat. Acad. Sci. USA 80: 2290-2294.
Ferris, S. D., Sage, R. D., Prager, E. M., Ritte, U., and Wilson, A. C. (1983b). Mitochondrial DNA evolution in mice. Genetics 105: 681-721.
Ferris, S. D., Sage, R. D., and Wilson, A. C. (1982). Evidence from mtDNA sequences that common laboratory strains of inbred mice are descended from a single female. Nature 295: 163-165.
Field, K. G., Olsen, G. J., Lane, D. J., Giovannoni, S. J., Ghiselin, M. T., Raff, E. C., Pace, N. R., and Raff, R. A. (1988). Molecular phylogeny of the animal kingdom. Science 239: 748-753.
Fiering, S., Kim, C. G., Epner, E. M., and Groudine, M. (1993). An "in-out" strategy using gene targeting and FLP recombinase for the functional dissection of complex DNA regulatory elements: analysis of the β-globin locus control region. Proc Natl Acad Sci USA 90: 8469-8473.
Fischer, S. G., and Lerman, L. S. (1983). DNA fragments differing by single-base pair substitutions are separated in denaturing gradient gels: Correspondence with melting theory. Proc. Nat. Acad. Sci. USA 80: 1579-1583.
Fisher, R. A. (1936). Has Mendel's work been rediscovered? Annals of Science 1: 115-137.
Fitzgerald, J., Wilcox, S. A., Graves, J. A. M., and Dahl, H.-H. M. (1993). A eutherian X-linked gene, PDHA1, is autosomal in marsupials: A model for the evolution of a second, testis-specific variant in eutherian mammals. Genomics 18: 636-642.
Flaherty, L. (1981). Congenic strains. In The Mouse in Biomedical Research, Vol. 1, Foster, H. L., Small, J. D., and Fox, J. G., eds. (Academic Press, N.Y.), pp. 215-222.
Foote, S., Vollrath, D., Hilton, A., and Page, D. C. (1992). The human Y chromosome: overlapping DNA clones spanning the euchromatic region. Science 258: 60-66.
Forrest, S. (1993). Genetic algorithms: principles of natural selection applied to computation. Science 261: 872-878.
Frankel, W. N., Lee, B. K., Stoye, J. P., Coffin, J. M., and Eicher, E. M. (1992).
Characterization of the endogenous nonecotropic murine leukemia viruses of NZB/BINJ and SM/J inbred strains. Mammal. Genome 2: 110-122.
Frankel, W. N., Stoye, J. P., Taylor, B. A., and Coffin, J. M. (1990). A genetic linkage map of endogenous murine leukemia proviruses. Genetics 124: 221-236.
Friedrich, G., and Soriano, P. (1991). Promoter traps in embryonic stem cells: a genetic screen to identify and mutate developmental genes in mice. Genes Dev 5: 1513-23.
Gall, J. G., and Pardue, M. L. (1969). Formation and detection of RNA-DNA hybrid molecules in cytological preparations. Proc. Nat. Acad. Sci. 63: 378-382.
Gardner, M. B., Kozak, C. A., and O'Brien, S. J. (1991). The Lake Casitas wild mouse: evolving genetic resistance to retroviral disease. Trend Genet. 7: 22-27.
Garrels, J. I. (1983). Quantitative two-dimensional gel electrophoresis of proteins. Methods Enzymol 100: 411-423.
Gasser, D. L., Sternberg, N. L., Pierce, J. C., Goldner, S. A., Feng, H., Haq, A. K., Spies, T., Hunt, C., Buetow, K. H., and Chaplin, D. D. (1994). P1 and cosmid clones define the organization of 280 kb of the mouse H-2 complex containing the Cps-1 and Hsp70 loci. Immunogenetics 39: 48-55.
Geliebter, J., and Nathenson, S. G. (1987). Recombination and the concerted evolution of the murine MHC. Trends Genet 3: 107-112.
Gendron-Maguire, M., and Gridley, T. (1993). Identification of transgenic mice. In Guides to Techniques in Mouse Development, Methods in Enzymology 225, Wassarman, P. M. and DePamphilis, M. L., eds. (Academic Press, San Diego), pp. 794-799.
Giacalone, J., Friedes, J., and Francke, U. (1992). A novel GC-rich human macrosatellite VNTR in Xq24 is differentially methylated on active and inactive X chromosomes. Nature Genetics 1: 137-143.
Goradia, T. M., Stanton, V. P., Cui, X., Aburatani, H., Li, H., Lange, K., Housman, D. E., and Arnheim, N. (1991). Ordering three DNA polymorphisms on human chromosome 3 by sperm typing. Genomics 10: 748-755.
Gordon, J. W., and Ruddle, F. H.
(1981). Integration and stable germ line transmission of genes injected into mouse pronuclei. Science 214: 1244-1246.
Gossler, A., Joyner, A. L., Rossant, J., and Skarnes, W. C. (1989). Mouse embryonic stem cells and reporter constructs to detect developmentally regulated genes. Science 244: 463-5.
Graff, R. J. (1978). Minor histocompatibility genes and their antigens. In Origins of Inbred Mice, Morse, H. C., eds. (Academic Press, New York), pp. 371-389.
Graur, D., Hide, W. A., and Li, W.-H. (1991). Is the guinea-pig a rodent? Nature 351: 649-652.
Gray, J. W., Dean, P. N., Fuscoe, J. C., Peters, D. C., and Trask, B. J. (1990). High-speed chromosome sorting. Science 238: 323-329.
Green, E. D., and Olson, M. V. (1990). Systematic screening of yeast artificial-chromosome libraries by use of the polymerase chain reaction. Proc. Nat. Acad. Sci. USA 87: 1213-1217.
Green, E. L. (1981). Genetics and Probability in Animal Breeding Experiments. (Oxford University Press, New York).
Green, E. L., and Roderick, T. H. (1966). Radiation Genetics. In Biology of the Laboratory Mouse, Green, E. L., eds. (McGraw-Hill, New York), pp. 165-185.
Green, M. C. (1966). Mutant genes and linkages. In Biology of the Laboratory Mouse, Green, E. L., eds. (McGraw-Hill, New York), pp. 87-150.
Green, M. C. (1989). Catalog of mutant genes and polymorphic loci. In Genetic Variants and Strains of the Laboratory Mouse, Lyon, M. F. and Searle, A. G., eds. (Oxford University Press, Oxford), pp. 12-403.
Green, M. C., and Witham, B. A., eds. (1991). Handbook on Genetically Standardized JAX Mice, fourth edition (The Jackson Laboratory, Bar Harbor).
Gropp, A., Winking, H., Zech, L., and Muller, H. (1972). Robertsonian chromosomal variation and identification of metacentric chromosomes in feral mice. Chromosoma 39: 265-288.
Grosveld, F., Blom van Assendelft, G., Greaves, D. R., and Kollias, G. (1987). Position-independent, high-level expression of the human β-globin gene in transgenic mice. Cell 51: 975-985.
Grüneberg, H. (1943). The Genetics of the Mouse. (Cambridge University Press, Cambridge).
Guénet, J.-L., Nagamine, C., Simon-Chazottes, D., Montagutelli, X., and Bonhomme, F. (1990). Hst3: an X-linked hybrid sterility gene. Genet Res Camb 56: 163.
Gyllensten, U., and Wilson, A. C. (1987). Interspecific mitochondrial DNA transfer and the colonization of Scandinavia by mice. Genet. Res. Camb. 49: 25-29.
Hagag, N. G., and Viola, M. V. (1993). Chromosome microdissection and cloning. (Academic Press, San Diego).
Haig, D., and Graham, C. (1991). Genomic imprinting and the strange case of the insulin-like growth factor II receptor. Cell 64: 1045-1046.
Haldane, J. B. S. (1919). The mapping function. J. Genet. 8: 299-309.
Haldane, J. B. S. (1922). Sex ratio and unisexual sterility in hybrid animals. J. Genetics 12: 101-109.
Haldane, J. B. S., Sprunt, A. D., and Haldane, N. M. (1915). Reduplication in mice. J. Genet. 5: 133-135.
Haldane, J. B. S., and Waddington, C. H. (1931). Inbreeding and linkage. Genetics 16: 357-374.
Hamada, H., and Kakunaga, T. (1982). Potential Z-DNA forming sequences are highly dispersed in the human genome. Nature 298: 396-398.
Hamada, H., Petrino, M. G., and Kakunaga, T. (1982). A novel repeated element with Z-DNA-forming potential is widely found in evolutionarily diverse eukaryotic genomes. Proc Natl Acad Sci USA 79: 6465-6469.
Hammer, M. F., Bliss, S., and Silver, L. M. (1991). Genetic exchange across a paracentric inversion of the mouse t complex. Genetics 128: 799-812.
Hammer, M. F., Schimenti, J., and Silver, L. M. (1989). Evolution of mouse chromosome 17 and the origin of inversions associated with t haplotypes. Proc Natl Acad Sci U S A 86: 3261-5.
Hammer, M. F., and Silver, L. M. (1993). Phylogenetic analysis of the alpha-globin pseudogene-4 (Hba-ps4) locus in the house mouse species complex reveals a stepwise evolution of t haplotypes. Mol. Biol. Evol. 10: 971-1001.
Hanscombe, O., Whyatt, D., Fraser, P., Yannoutsos, N., Greaves, D., Dillon, N., and Grosveld, F. (1991). Importance of globin gene order for correct developmental expression. Genes Develop 5: 1387-1394.
Harbers, K., Jahner, D., and Jaenisch, R. (1981). Microinjection of cloned retroviral genomes into mouse zygotes: Integration and expression in the animal. Nature 293: 540-542.
Harding, R. M., Boyce, A. J., and Clegg, J. B. (1992). The evolution of tandemly repetitive DNA: recombination rules. Genetics 132: 847-859.
Harper, M. E., Ullrich, A., and Saunders, G. F. (1981). Localization of the human insulin gene to the distal end of the short arm of chromosome 11. Proc Natl Acad Sci USA 78: 4458-4460.
Hastie, N. D. (1989). Highly repeated DNA families in the genome of Mus musculus. In Genetic Variants and Strains of the Laboratory Mouse., Lyon, M. F. and Searle, A. G., eds. (Oxford University Press, Oxford), pp. 559-573.
Hastie, N. D., and Bishop, J. O. (1976). The expression of three abundance classes of messenger RNA in mouse tissues. Cell 9: 761-774.
Hasty, P., Ramirez-Solis, R., Krumlauf, R., and Bradley, A. (1991). Introduction of a subtle mutation into the Hox 2.6 locus in embryonic stem cells. Nature 351: 234-246.
Hatada, I., Hayashizaki, Y., Hirotsune, S., Komatsubara, H., and Mukai, T. (1991). A genomic scanning method for higher organisms using restriction sites as landmarks. Proc Natl Acad Sci U S A 88: 9523-7.
Hearne, C. M., Ghosh, S., and Todd, J. A. (1992). Microsatellites for linkage analysis of genetic traits. Trends Genet. 8: 288-294.
Helmuth, R. (1990). Nonisotopic detection of PCR products. In PCR protocols, Innis, M. A., Gelfand, D. H., Sninsky, J. J., and White, T. J., eds. (Academic Press, San Diego), pp. 119-128.
Henson, V., Palmer, L., Banks, S., Nadeau, J. H., and Carlson, G. A. (1991). Loss of heterozygosity and mitotic linkage maps in the mouse. Proc. Nat. Acad. Sci. 88: 6486-6490.
Herman, G. E., Berry, M., Munro, E., Craig, I.
W., and Levy, E. R. (1991). The construction of human somatic cell hybrids containing portions of the mouse X chromosome and their use to generate DNA probes via interspersed repetitive sequence polymerase chain reaction. Genomics 10: 961-970.
Herman, G. E., Nadeau, J. H., and Hardies, S. C. (1992). Dispersed repetitive elements in mouse genome analysis. Mammal. Genome 2: 207-214.
Herrmann, B. G., Barlow, D. P., and Lehrach, H. (1987). A large inverted duplication allows homologous recombination between chromosomes heterozygous for the proximal t complex inversion. Cell 48: 813-825.
Hilgers, J., and Arends, J. W. A. (1985). A series of recombinant inbred strains between the BALB/cHeA and STS/A strains. Curr. Top. Microbiol. Immunol. 122: 31-37.
Hilgers, J., and Poort-Keeson, R. (1986). Strain distribution pattern of genetic polymorphisms between BALB/cHeA and STS/A strains. Mouse News Lett. 76: 14-20.
Hillyard, A. L., Doolittle, D. P., Davisson, M. T., and Roderick, T. H. (1992). Locus map of the mouse. Mouse Genome 90: 8-21.
Himmelbauer, H., and Silver, L. M. (1993). High resolution comparative mapping of mouse chromosome 17. Genomics 17: 110-120.
Hochgeschwender, U. (1992). Toward a transcriptional map of the human genome. Trends Genet 8: 41-44.
Hogan, B., Beddington, R., Costantini, F., and Lacy, E. (1994). Manipulating the Mouse Embryo: A Laboratory Manual, Second Edition. (Cold Spring Harbor Laboratory Press, Cold Spring Harbor).
Hood, L. (1992). Personal communication.
Hood, L., Kronenberg, M., and Hunkapiller, T. (1985). T cell antigen receptors and the immunoglobulin supergene family. Cell 40: 225-229.
Horiuchi, Y., Agulnik, A., Figueroa, F., Tichy, H., and Klein, J. (1992). Polymorphisms distinguishing different mouse species and t haplotypes. Genet. Res. 60: 43-52.
Hörz, W., and Altenburger, W. (1981). Nucleotide sequence of mouse satellite DNA. Nucl. Acids Res. 9: 683-696.
Huntington's Disease Collaborative Research Group (1993).
A novel gene containing a trinucleotide repeat that is expanded and unstable on Huntington's disease chromosomes. Cell 72: 971-983. Huxley, A. (1932). Brave New World. (Doubleday, Doran & Co., Garden City, NY). Innis, M. A., Gelfand, D. H., Sninsky, J. J., and White, T. J., eds. (1990). PCR protocols (Academic Press, San Diego). Jaeger, J.-J., Tong, H., and Denys, C. (1986). The age of Mus-Rattus divergence: paleontological data compared with the molecular clock. C.R. Acad. Sci. Paris 302 (ser. II): 917-922. Jaenisch, R. (1976). Germ line integration and Mendelian transmission of the exogenous Moloney leukemia virus. Proc. Nat. Acad. Sci. 73: 1260-1264. Jahn, C. L., Hutchison, C. A., Phillips, S. J., Weaver, S., Haigwood, N. L., Voliva, C. F., and Edgell, M. H. (1980). DNA sequence organization of the beta-globin complex in the BALB/c mouse. Cell 21: 159-168. Jakobovits, A., Moore, A. L., Green, L. L., Vergara, G. J., Maynard-Currie, C. E., Austin, H. A., and Klapholz, S. (1993). Germ-line transmission and expression of a human-derived yeast artificial chromosome. Nature 362: 255-258. Jan, Y. N., and Jan, L. Y. (1993). Functional gene cassettes in development. Proc Natl Acad Sci USA 90: 8305-8307. Jarman, A. P., and Wells, R. A. (1989). Hypervariable minisatellites: recombinations or innocent bystanders? Trends Genet 5: 367-371. Jeang, K. T., and Hayward, G. S. (1983). A cytomegalovirus DNA sequence containing tracts of tandemly repeated CA dinucleotides hybridizes to highly repetitive dispersed elements in mammalian cell genomes. Mol. Cell. Biol. 3: 1389. Jeffreys, A. J., Royle, N. J., Wilson, V., and Wong, Z. (1988). Spontaneous mutation rates to new length alleles at tandem-repetitive hypervariable loci in human DNA. Nature 332: 278-281. Jeffreys, A. J., Wilson, V., Kelly, R., Taylor, B. A., and Bulfield, G. (1987). Mouse DNA "fingerprints": Analysis of chromosome localization and germ-line stability of hypervariable loci in recombinant inbred strains.
Nucl Acids Res 15: 2823-2836. Jeffreys, A. J., Wilson, V., and Thein, S. L. (1985). Individual-specific fingerprints of human DNA. Nature 316: 76-79. Jenkins, N. A., and Copeland, N. G. (1985). High frequency germline acquisition of ecotropic MuLV proviruses in SWR/J-RF/J hybrid mice. Cell 43: 811-819. Jenkins, N. A., Copeland, N. G., Taylor, B. A., and Lee, B. K. (1982). Organization, distribution, and stability of endogenous ecotropic murine leukemia virus DNA sequences in chromosomes of Mus musculus. J. Virol. 43: 26-36. John, H., Birnstiel, M. L., and Jones, K. N. (1969). RNA-DNA hybrids at the cytological level. Nature 223: 582-585. Johnson, D. R. (1974). Hairpin-Tail: a case of post-reductional gene action in the mouse egg? Genetics 76: 795-805. Joyner, A., eds. (1993). Gene targeting: A practical approach (Oxford University Press, New York). Joyner, A. L., Auerbach, A., and Skarnes, W. C. (1992). The gene trap approach in embryonic stem cells: the potential for genetic screens in mice. Ciba Found Symp 165: 277-88. Joyner, A. L., Skarnes, W. C., and Rossant, J. (1989). Production of a mutation in the mouse En-2 gene by homologous recombination in embryonic stem cells. Nature 338: 153-156. Julier, C., DeGouyon, B., Georges, M., Guenet, J.-L., Nakamura, Y., and Avner, P. (1990). Minisatellite linkage maps in the mouse by cross-hybridization with human probes containing tandem repeats. Proc. Nat. Acad. Sci. USA 87: 4585-4589. Kasahara, M., Figueroa, F., and Klein, J. (1987). Random cloning of genes from mouse chromosome 17. Proc. Natl. Acad. Sci. U.S.A. 84: 3325-3328. Keeler, C. E. (1931). The Laboratory Mouse. Its Origin, Heredity, and Culture. (Harvard University Press, Cambridge). Kerr, R. (1991). Extinction potpourri: Killers and victims. Science 254: 942-943. Kerr, R. A. (1992). Extinction by a one-two comet punch? Science 255: 160-161. Kerr, R. A. (1993). Second Crater Points to Killer Comets. Science 259: 1543. Keshet, E., and Itin, A. (1982). 
Patterns of genomic distribution and sequence heterogeneity of a murine "retrovirus-like" multigene family. J. Virol. 43: 50-58. Kevles, D. J., and Hood, L., eds. (1992). The Code of Codes: Scientific and Social Issues in the Human Genome Project (Harvard University Press, Boston). Kidd, S., Lockett, T. J., and Young, M. W. (1983). The notch locus of Drosophila melanogaster. Cell 34: 421-433. King, T. R., Dove, W. F., Herrmann, B., Moser, A. R., and Shedlovsky, A. (1989). Mapping to molecular resolution in the T to H-2 region of the mouse genome with a nested set of meiotic recombinants. Proc. Natl. Acad. Sci. USA 86: 222-226. Kingsley, D. M., Bland, A. E., Grubber, J. M., Marker, P. C., Russell, L. B., Copeland, N. G., and Jenkins, N. A. (1992). The mouse short ear skeletal morphogenesis locus is associated with defects in a bone morphogenetic member of the TGF beta superfamily. Cell 71: 399-410. Kit, S. (1961). Equilibrium sedimentation in density gradients of DNA preparations from animal tissues. J. Mol. Biol. 3: 711-716. Klein, J. (1986). Natural History of the Major Histocompatibility Complex. (John Wiley & Sons, New York). Knight, A. M., and Dyson, P. J. (1990). Detection of DNA polymorphisms between two inbred mouse strains--limitations of restriction fragment length polymorphisms (RFLPs). Mol Cell Probes 4: 497-504. Korenberg, J. R., and Rykowski, M. C. (1988). Human genome organization: Alu, Lines, and the molecular structure of metaphase chromosome bands. Cell 53: 391-400. Kosambi, D. D. (1944). The estimation of map distances from recombination values. Ann. Eugenics 12: 172-175. Kozak, C., Peters, G., Pauley, R., Morris, V., Michalides, R., Dudley, J., and others (1987). A standardized nomenclature for endogenous mouse mammary tumor viruses. J. Virol. 61: 1651-1654. Kramer, J. M., and Erickson, R. P. (1981). Developmental program of PGK-1 and PGK-2 isozymes in spermatogenic cells of the mouse: Specific activities and rates of synthesis. Dev. Biol. 87: 37-45. Kunkel, L. M., Monaco, A. P., Middlesworth, W., Ochs, H. D., and Latt, S. A. (1985). Specific cloning of DNA fragments absent from the DNA of a male patient with an X-chromosome deletion. Proc. Nat. Acad. Sci. USA 82: 4778-4782.
Kusumi, K., Smith, J. S., Segre, J. A., Koos, D. S., and Lander, E. S. (1993). Construction of a large-insert yeast artificial chromosome library of the mouse genome. Mammal. Genome 4: 391-392. Kwiatkowski, D. J., Dib, C., Slaugenhaupt, S. A., Povey, S., Gusella, J. F., and Haines, J. L. (1993). An index marker map of chromosome 9 provides strong evidence for positive interference. Am. J. Hum. Genet. 53: 1279-1288. Laird, C. D. (1971). Chromatid structure: relationship between DNA content and nucleotide sequence diversity. Chromosoma 32: 378-406. Landegren, U., Kaiser, R., and Hood, L. (1990). Oligonucleotide ligation assay. In PCR Protocols, Innis, M. A., Gelfand, D. H., Sninsky, J. J., and White, T. J., eds. (Academic Press, San Diego), pp. 92-98. Landegren, U., Kaiser, R., Sanders, J., and Hood, L. (1988). A ligase-mediated gene detection technique. Science 241: 1077-1080. Lander, E. S., Green, P., Abrahamson, J., Barlow, A., Daly, M. J., Lincoln, S. E., and Newburg, L. (1987). MAPMAKER: an interactive computer package for constructing primary genetic linkage maps of experimental and natural populations. Genomics 1: 174-81. Larin, Z., Monaco, A. P., and Lehrach, H. (1991). Yeast artificial chromosome libraries containing large inserts from mouse and human DNA. Proc Natl Acad Sci U S A 88: 4123-7. Larin, Z., Monaco, A. P., Meier-Ewert, S., and Lehrach, H. (1993). Construction and characterization of yeast artificial chromosome libraries from the mouse genome. In Guides to Techniques in Mouse Development, Methods in Enzymology 225, Wassarman, P. M. and DePamphilis, M. L., eds. (Academic Press, San Diego), pp. 623-637. Laurie, D. A., and Hulten, M. A. (1985). Further studies on chiasma distribution and interference in the human male. Ann. Hum. Genet. 49: 203-214. Lawrence, J. B. (1990). A fluorescence in situ hybridization approach for gene mapping and the study of nuclear organization. In Genetic and Physical Mapping, Genome Analysis 1, Davies, K. E.
and Tilghman, S., eds. (Cold Spring Harbor Laboratory Press, Cold Spring Harbor). Leder, A., Swan, D., Ruddle, F., D'Eustachio, P., and Leder, P. (1981). Dispersion of alpha-like globin genes of the mouse to three different chromosomes. Nature 293: 196-200. LeRoy, H., Simon-Chazottes, D., Montagutelli, X., and Guénet, J.-L. (1992). A set of anonymous DNA clones as markers for mouse gene mapping. Mammal. Genome 3: 244-246. Levenson, C., and Chang, C.-a. (1990). Nonisotopically labelled probes and primers. In PCR protocols, Innis, M. A., Gelfand, D. H., Sninsky, J. J., and White, T. J., eds. (Academic Press, San Diego), pp. 99-112. Li, H., Gyllensten, U. B., Cui, X., Saiki, R. K., Erlich, H. A., and Arnheim, N. (1988). Amplification and analysis of DNA sequences in single human sperm and diploid cells. Nature 335: 414-417. Lindsay, S., and Bird, A. P. (1987). Use of restriction enzymes to detect potential gene sequences in mammalian DNA. Nature 327: 336-338. Lisitsyn, N., Lisitsyn, N., and Wigler, M. (1993). Cloning the differences between two complex genomes. Science 259: 946-951. Lister, C., and Dean, C. (1993). Recombinant inbred lines for mapping RFLP and phenotypic markers in Arabidopsis thaliana. The Plant Journal 4: 745-750. Little, C. C., and Bagg, H. J. (1924). The occurrence of four inheritable morphological variations in mice and their possible relation to treatment with X-rays. J. Exp. Zool. 41: 45-92. Little, P. (1993). The end of the beginning. Nature 362: 408-409. Love, J. M., Knight, A. M., McAleer, M. A., and Todd, J. A. (1990). Towards construction of a high resolution map of the mouse genome using PCR-analyzed microsatellites. Nuc. Acids Res. 18: 4123-4130. Lovett, M., Kere, J., and Hinton, L. M. (1991). Direct selection: A method for the isolation of cDNAs encoded by large genomic regions. Proc. Nat. Acad. Sci. 88: 9628-9632. Lowe, T., Sharefkin, J., Yang, S.-Q., and Dieffenbach, C. W. (1990).
A computer program for selection of oligonucleotide primers for polymerase chain reactions. Nucleic Acids Res 18: 1757-1761. Ludecke, H.-J., Senger, G., Claussen, U., and Horsthemke, B. (1989). Cloning defined regions of the human genome by microdissection of banded chromosomes and enzymatic amplification. Nature 338: 348-350. Lueders, K. K., and Kuff, E. L. (1977). Sequences associated with intracisternal A-particles are reiterated in the mouse genome. Cell 12: 963-972. Lyon, M. F., and Kirby, M. C. (1992). Mouse chromosome atlas. Mouse Genome 90: 22-44. Lyon, M. F., and Searle, A. G., eds. (1989). Genetic Variants and Strains of the Laboratory Mouse. 2nd (Oxford University Press, Oxford). Maniatis, T., Fritsch, E. F., and Sambrook, J. (1982). Molecular Cloning: A Laboratory Manual. (Cold Spring Harbor Laboratory, Cold Spring Harbor, NY). Manly, K. F. (1993). A Macintosh program for storage and analysis of experimental genetic mapping data. Mammal. Genome 4: 303-313. Marchuk, D. A., and Collins, F. S. (1994). The use of YACs to identify expressed sequences: cDNA screening using total YAC insert. In YAC Libraries, A User's Guide, Nelson, D. L. and Brownstein, B. H., eds. (W.H. Freeman and Company, New York). Mariat, D., and Vergnaud, G. (1992). Detection of polymorphic loci in complex genomes with synthetic tandem repeats. Genomics 12: 454-458. Marshall, C. J. (1991). Tumor suppressor genes. Cell 64: 313. Marshall, J. D., Mu, J.-L., Nesbitt, M. N., Frankel, W. N., and Paigen, B. (1992). The AXB and BXA set of recombinant inbred mouse strains. Mammal. Genome 3: 669-680. Marshall, J. T. (1981). Taxonomy. In The Mouse in Biomedical Research, Vol. 1, Foster, H. L., Small, J. D., and Fox, J. G., eds. (Academic Press, N.Y.), pp. 17-26. Martin, S. L. (1991). LINEs. Curr. Opin. Genet. Develop. 1: 505-508. Matsuda, Y., and Chapman, V. M. (1991). In situ analysis of centromeric satellite DNA segregating in Mus species crosses. Mammal. Genome 1: 71-77.
Matsuda, Y., Manly, K. F., and Chapman, V. M. (1993). In situ analysis of centromere segregation in C57BL/6 X Mus spretus interspecific backcrosses. Mammal. Genome 4: 475-480. McClelland, M., and Ivarie, R. (1982). Asymmetrical distribution of CpG in an average mammalian gene. Nucl. Acids Res. 10: 7865-7877. McDonald, J. D., Shedlovsky, A., and Dove, W. F. (1990). Investigating inborn errors of phenylalanine metabolism by efficient mutagenesis of the mouse germ line. In Biology of Mammalian Germ Cell Mutagenesis, Allen, J. W., Bridges, B. A., Lyon, M. F., Moses, M. J., and Russell, L. B., eds. (Cold Spring Harbor Press, Cold Spring Harbor), pp. 259-270. McGinnis, W., and Krumlauf, R. (1992). Homeobox genes and axial patterning. Cell 68: 283-302. McGrath, J., and Solter, D. (1983). Nuclear Transplantation in the mouse embryo by microsurgery and cell fusion. Science 220: 1300-1302. McKusick, V. (1988). Mendelian Inheritance in Man: Catalogs of Autosomal Dominant, Autosomal Recessive, and X-Linked Phenotypes. 8th (The Johns Hopkins University Press, Baltimore). Meisler, M. H. (1992). Insertional mutation of classical and novel genes in transgenic mice. Trend Genet. 8: 341-344. Meitz, J. A., and Kuff, E. L. (1992). Intracisternal A-particle-specific oligonucleotides provide multilocus probes for genetic linkage studies in the mouse. Mammal. Genome 3: 447-451. Michaud, J., Brody, L. C., Steel, G., Fontaine, G., Martin, L. S., Valle, D., and Mitchell, G. (1992). Strand-separating conformational polymorphism analysis: efficacy of detection of point mutations in the human ornithine δ-aminotransferase gene. Genomics 13: 389-394. Michiels, F., Burmeister, M., and Lehrach, H. (1987). Derivation of clones close to met by preparative field inversion gel electrophoresis. Science 236: 1305-1308. Miesfeld, R., Krystal, M., and Arnheim, N. (1981).
A member of a new repeated sequence family which is conserved throughout eucaryotic evolution is found between the human delta- and beta-globin genes. Nuc. Acids Res. 9: 5931. Miller, O. J., and Miller, D. A. (1975). Cytogenetics of the mouse. Ann. Rev. Genet. 9: 285-303. Milner, C. M., and Campbell, R. D. (1992). Genes, genes and more genes in the human major histocompatibility complex. BioEssays 14: 565-571. Montagutelli, X. (1990). GENE-LINK: a program in PASCAL for backcross genetic analysis. J Hered 81: 490-1. Moore, D. S., and McCabe, G. P. (1989). Introduction to the Practice of Statistics. (W.H. Freeman & Co., New York). Moore, T., and Haig, D. (1991). Genomic imprinting in mammalian development: A parental tug-of-war. Trends Genet 7: 45-49. Morgan, T. H., and Cattell, E. (1912). Data for the study of sex-linked inheritance in Drosophila. J. Exp. Zool. 13: 79-101. Morse, H. C. (1978). Introduction. In Origins of Inbred Mice, Morse, H. C., eds. (Academic Press, New York), pp. 1-31. Morse, H. C. (1981). The laboratory mouse - A historical perspective. In The Mouse in Biomedical Research, Vol. 1, Foster, H. L., Small, J. D., and Fox, J. G., eds. (Academic Press, New York), pp. 1-16. Morse, H. C. (1985). The Bussey Institute and the early days of mammalian genetics. Immunogenet. 21: 109-116. Morton, N. (1955). Sequential tests for the detection of linkage. Am. J. Hum. Genet. 7: 277-318. Moulia, C., Aussel, J. P., Bonhomme, F., Boursot, P., Nielsen, J. T., and Renaud, F. (1991). Wormy mice in a hybrid zone: a genetic control of susceptibility to parasite infection. J. Evol. Biol. 4: 679-687. Moyzis, R. K., Buckingham, J. M., Cram, L. S., Dani, M., Deaven, L. L., Jones, M. D., Meyne, J., Ratliffe, R. L., and Wu, J.-R. (1988). A highly conserved repetitive DNA sequence (TTAGGG) present at the telomeres of human chromosomes. Proc. Nat. Acad. Sci. USA 85: 6622-6626. Mu, J. L., Naggert, J. K., Nishina, P. M., Cheach, Y. C., and Paigen, B. (1993).
Strain distribution pattern in AXB and BXA recombinant inbred strains for loci on murine chromosomes 10, 13, 17, and 18. Mammal. Genome 4: 148-152. Muller, H. J. (1916). The mechanism of crossing-over. Am. Nat. 50: 193-221. Muller, H. J. (1927). Artificial transmutation of the gene. Science 66: 84-87. Nadeau, J. H. (1984). Lengths of chromosomal segments conserved since divergence of man and mouse. Proc. Nat. Acad. Sci. 81: 814-818. Nadeau, J. H., Bedigian, H. G., Bouchard, G., Denial, T., Kosowksy, M., Norberg, R., Pugh, S., Sargeant, E., Turner, R., and Paigen, B. (1992). Multilocus markers for mouse genome analysis: PCR amplification based on single primers of arbitrary nucleotide sequence. Mammal. Genome 3: 55-64. Nadeau, J. H., Herrmann, B., Bucan, M., Burkart, D., Crosby, J. L., Erhart, M. A., Kosowsky, M., Kraus, J. P., Michiels, F., Schnattinger, A., Tchetgen, M.-B., Varnum, D., Willison, K., Lehrach, H., and Barlow, D. (1991). Genetic maps of mouse chromosome 17 including 12 new anonymous DNA loci and 25 anchor loci. Genomics 9: 78-89. Nebel, B. R., Amarose, A. P., and Hackett, E. M. (1961). Calendar of gametogenic development in the prepuberal male mouse. Science 134: 832-833. Nei, M. (1987). Molecular Evolutionary Genetics. (Columbia University Press, New York). Nelson, D. L., and Brownstein, B. H., eds. (1994). YAC Libraries, A User's Guide (W.H. Freeman and Company, New York). Nelson, D. L., Ledbetter, S. A., Corbo, L., Victoria, M. F., Ramirez-Solis, R., Webster, T. D., Ledbetter, D. H., and Caskey, T. (1989). Alu polymerase chain reaction: A method for rapid isolation of human-specific sequences from complex DNA sources. Proc. Nat. Acad. Sci. USA 86: 6686-6690. Neumann, P. E. (1990). Two-locus linkage analysis using recombinant inbred strains and Bayes Theorem. Genetics 126: 277-284. Neumann, P. E. (1991). Three-locus linkage analysis using recombinant inbred strains and Bayes theorem. Genetics 128: 631-638. Nichols, R. D., Knoll, J. H.
M., Butler, M. G., Karam, S., and Lalande, M. (1989). Genetic imprinting suggested by maternal heterodisomy in non-deletion Prader-Willi syndrome. Nature 342: 281-285. North, M. A., Sanseau, P., and Buckler, A. J. (1993). Efficiency and specificity of gene isolation by exon amplification. Mammal. Genome 4: 466-474. O'Brien, S., Womack, J. E., Lyons, L. A., Moore, K. J., Jenkins, N. A., and Copeland, N. G. (1993). Anchored reference loci for comparative genome mapping in mammals. Nature Genetics 3: 103-112. O'Farrell, P. H. (1975). High resolution two-dimensional electrophoresis of proteins. J. Biol. Chem. 250: 4007-4021. Oakberg, E. F. (1956a). A description of spermiogenesis in the mouse and its use in the analysis of the cycle of the seminiferous epithelium and germ cell renewal. Amer. J. Anat. 99: 391-413. Oakberg, E. F. (1956b). Spermatogenesis in the mouse and timing of stages of the cycle of the seminiferous epithelium. Am. J. Anat. 99: 507-516. Ohno, S. (1967). Sex Chromosomes and Sex-Linked Genes. (Springer Verlag, Berlin). Okada, N. (1991). SINEs. Curr. Opin. Genet. Develop. 1: 498-504. Olds-Clarke, P., and Peitz, B. (1985). Fertility of sperm from t/+ mice: evidence that +-bearing sperm are dysfunctional. Genet. Res. 47: 49-52. Ollmann, M. M., Winkes, B. M., and Barsh, G. S. (1992). Construction, analysis, and application of a radiation hybrid mapping panel surrounding the mouse agouti locus. Genomics 13: 731-740. Orita, M., Iwahana, H., Kanazawa, H., Hayashi, K., and Sekiya, T. (1989a). Detection of polymorphisms of human DNA by gel electrophoresis as single-strand conformation polymorphisms. Proc. Nat. Acad. Sci. USA 86: 2766-2770. Orita, M., Suzuki, Y., Sekiya, T., and Hayashi, K. (1989b). Rapid and sensitive detection of point mutations and DNA polymorphisms using the polymerase chain reaction. Genomics 5: 874-879. Painter (1928). Genetics 13: 180-189. Palmiter, R. D., and Brinster, R. L. (1986). Germ-line transformation of mice. Ann. Rev. Genet.
20: 465-499. Papaioannou, V. E., and Festing, M. F. W. (1980). Genetic drift in a stock of laboratory mice. Laboratory Animals 14: 11-13. Pardue, M. L., and Gall, J. G. (1970). Chromosomal localization of mouse satellite DNA. Science 168: 1356-1358. Parimoo, S., Patanjali, S. R., Shukla, H., Chaplin, D. D., and Weissman, S. M. (1991). cDNA selection: Efficient PCR approach for the selection of cDNAs encoded in large chromosomal DNA fragments. Proc. Nat. Acad. Sci. 88: 9623-9627. Parrish, J. E., and Nelson, D. L. (1993). Methods for finding genes: a major rate-limiting step in positional cloning. Gen. Anal. Tech. Appl. 10: 29-41. Patanjali, S. R., Parimoo, S., and Weissman, S. M. (1991). Construction of a uniform abundance (normalized) cDNA library. Proc. Nat. Acad. Sci. USA 88: 1943-1947. Pedersen, R. A., Papaioannou, V., Joyner, A., and Rossant, J. (1993). Targeted Mutagenesis in Mice. Audiovisual material from Cold Spring Harbor Laboratory Press, Cold Spring Harbor. Pickford, I. (1989). Ph.D. thesis, Imperial Cancer Research Foundation. Pierce, J. C., Sternberg, N., and Sauer, B. (1992). A mouse genomic library in the bacteriophage P1 cloning system: organization and characterization. Mamm Genome 3: 550-8. Pierce, J. C., and Sternberg, N. L. (1992). Using bacteriophage P1 system to clone high molecular weight genomic DNA. Methods Enzymol 216: 549-74. Popp, R. A., Bailiff, E. G., Skow, L. C., Johnson, F. M., and Lewis, S. E. (1983). Analysis of a mouse alpha-globin gene mutation induced by ethylnitrosourea. Genetics 105: 157-167. Potter, M., Nadeau, J. H., and Cancro, M. P., eds. (1986). The Wild Mouse in Immunology (Springer-Verlag, New York). Povey, S., Smith, M., Haines, J., Kwiatkowski, D., Fountain, J., Bale, A., Abbott, C., Jackson, I., Lawrie, M., and Hultén, M. (1992). Report on the first international workshop on chromosome 9. Ann. Hum. Genet. 56: 167-221. Punnett, R. C. (1911). Mendelism. (Macmillan, New York). Rajan, T. V., Halay, E. D., Potter, T.
A., Evans, G. A., Seidman, J. G., and Margulies, D. H. (1983). H-2 hemizygous mutants from a heterozygous cell line: Role of mitotic recombination. EMBO J 2: 1537-1542. Rattner, J. B. (1991). The structure of the mammalian centromere. Bioessays 13: 51-56. Reeves, R. H., Crowley, M. R., Moseley, W. S., and Seldin, M. F. (1991). Comparison of interspecific to intersubspecific backcrosses demonstrates species and sex differences in recombination frequency on mouse chromosome 16. Mammal. Genome 1: 158-164. Ridley, R. M., Frith, C. D., Farrer, L. A., and Conneally, P. M. (1991). Patterns of inheritance of the symptoms of Huntington disease suggestive of an effect of genomic imprinting. J. Med. Genet. 28: 224-231. Rikke, B. A., Garvin, L. D., and Hardies, S. C. (1991). Systematic identification of LINE-1 repetitive DNA sequence differences having species specificity between Mus spretus and Mus domesticus. J Mol Biol 219: 635-643. Rikke, B. A., and Hardies, S. C. (1991). LINE-1 repetitive DNA probes for species-specific cloning from Mus spretus and Mus domesticus genomes. Genomics 11: 895-904. Rinchik, E. M. (1991). Chemical mutagenesis and fine-structure functional analysis of the mouse genome. Trend. Genet. 7: 15-21. Rinchik, E. M., Bangham, J. W., Hunsicker, P. R., Cacheiro, N. L. A., Kwon, B. S., Jackson, I. J., and Russell, L. B. (1990a). Genetic and molecular analysis of chlorambucil-induced germ-line mutations in the mouse. Proc. Nat. Acad. Sci. USA 87: 1416-1420. Rinchik, E. M., Carpenter, D. A., and Selby, P. B. (1990b). A strategy for fine-structure functional analysis of a 6- to 11-centimorgan region of mouse chromosome 7 by high-efficiency mutagenesis. Proc. Nat. Acad. Sci. USA 87: 896-900. Rinchik, E. M., and Russell, L. B. (1990). Germ-line deletion mutations in the mouse: tools for intensive functional and physical mapping of regions of the mammalian genome. In Genetic and Physical Mapping, Genome analysis 1, Davies, K. E.
and Tilghman, S. M., eds. (Cold Spring Harbor Laboratory Press, Cold Spring Harbor), pp. 121-158. Robertson, E. J. (1991). Using embryonic stem cells to introduce mutations into the mouse germ line. Biol Reprod 44: 238-45. Röhme, D., Fox, H., Herrmann, B., Frischauf, A.-M., Edström, J.-E., Mains, P., Silver, L. M., and Lehrach, H. (1984). Molecular clones of the mouse t complex derived from microdissected metaphase chromosomes. Cell 36: 783-788. Rossant, J., Vijh, M., Siracusa, L. D., and Chapman, V. M. (1983). Identification of embryonic cell lineages in histological sections of M. musculus ↔ M. caroli chimaeras. J. Embryol. Exp. Morph. 73: 179-191. Rossi, J. M., Burke, D. T., Leung, J. C., Koos, D. S., Chen, H., and Tilghman, S. M. (1992). Genomic analysis using a yeast artificial chromosome library with mouse DNA inserts. Proc Natl Acad Sci U S A 89: 2456-60. Rowe, F. P. (1981). Wild house mouse biology and control. Symp. zool. Soc. London 47: 575-589. Rugh, R. (1968). The Mouse: Its Reproduction and Development. (Oxford University Press, New York). Russell, E. S. (1978). Origins and history of mouse inbred strains: contributions of Clarence Cook Little. In Origins of Inbred Mice, Morse, H. C., eds. (Academic Press, New York), pp. 33-43. Russell, E. S. (1985). A history of mouse genetics. Ann. Rev. Genet. 19: 1-28. Russell, L. B. (1990). Patterns of mutational sensitivity to chemicals in poststem-cell stages of mouse spermatogenesis. Prog Clin Biol Res 340: 101. Russell, L. B., Hunsicker, P. R., Cacheiro, N. L. A., Bangham, J. W., Russell, W. L., and Shelby, M. D. (1989). Chlorambucil effectively induces deletion mutations in mouse germ cells. Proc. Natl. Acad. Sci. U.S.A. 86: 3704-3708. Russell, L. B., Russell, W. L., Rinchik, E. M., and Hunsicker, P. R. (1990). Factors affecting the nature of induced mutations. In Biology of Mammalian Germ Cell Mutagenesis, Allen, J. W., Bridges, B. A., Lyon, M. F., Moses, M. J., and Russell, L. B., eds.
(Cold Spring Harbor Press, Cold Spring Harbor), pp. 271-289. Russell, W. L., Hunsicker, P. R., Raymer, G. D., Steele, M. H., Stelzner, K. F., and Thompson, H. M. (1982). Dose response curve for ethylnitrosourea specific-locus mutations in mouse spermatogonia. Proc. Nat. Acad. Sci. USA 79: 3589-3591. Russell, W. L., and Hurst, J. G. (1945). Pure strain mice born to hybrid mothers following ovarian transplantation. Proc. Nat. Acad. Sci. USA 31: 267-273. Russell, W. L., Kelly, P. R., Hunsicker, P. R., Bangham, J. W., Maddux, S. C., and Phipps, E. L. (1979). Specific-locus test shows ethylnitrosourea to be the most potent mutagen in the mouse. Proc. Nat. Acad. Sci. USA 76: 5918-5922. Ruvinsky, A., Agulnik, A., Agulnik, S., and Rogachova, M. (1991). Functional analysis of mutations of murine chromosome 17 with the use of tertiary trisomy. Genetics 127: 781-788. Sage, R. D. (1981). Wild mice. In The Mouse in Biomedical Research, Vol. 1, Foster, H. L., Small, J. D., and Fox, J. G., eds. (Academic Press, N.Y.), pp. 40-90. Sage, R. D. (1986). Wormy mice in a hybrid zone. Nature 324: 60-63. Sage, R. D., Whitney, J. B., and Wilson, A. C. (1986). Genetic analysis of a hybrid zone between domesticus and musculus mice (Mus musculus complex): Hemoglobin polymorphisms. Curr. Topics Micro. Immunol. 127: 75-85. Sambrook, J., Fritsch, E. F., and Maniatis, T. (1989). Molecular Cloning: A Laboratory Manual. 2nd (Cold Spring Harbor Laboratory, Cold Spring Harbor). Sapienza, C. (1989). Genome imprinting and dominance modification. Ann NY Acad Sci 534: 24-38. Sapienza, C. (1991). Genome imprinting and carcinogenesis. Biochem. Biophys. Acta 1072: 51-61. Scalenghe, F., Turco, E., Edstrom, J. E., Pirotta, V., and Melli, M. (1981). Microdissection and cloning of DNA from a specific region of Drosophila melanogaster polytene chromosomes. Chromosoma 82: 205-216. Schedl, A., Montoliu, L., Kelsey, G., and Schütz, G. (1993).
A yeast artificial chromosome covering the tyrosinase gene confers copy number-dependent expression in transgenic mice. Nature 362: 258-261. Schwartz, D. C., and Cantor, C. R. (1984). Separation of yeast chromosome-sized DNAs by pulsed field gel electrophoresis. Cell 37: 67-75. Searle, A. G. (1982). The genetics of sterility in the mouse. In Genetic Control of Gamete Production and Function, Crosignani, P. G. and Rubin, B. L., eds. (Academic Press, London), pp. 93-114. Searle, A. G. (1989). Chromosomal variants: numerical variants and structural rearrangements. In Genetic Variants and Strains of the Laboratory Mouse, Lyon, M. F. and Searle, A. G., eds. (Oxford University Press, Oxford), pp. 582-616. Sedivy, J. M., and Joyner, A. L. (1992). Gene Targeting. (W.H. Freeman, New York). Seldin, M. F., Howard, T. A., and D'Eustachio, P. (1989). Comparison of linkage maps of mouse chromosome 12 derived from laboratory strain intraspecific and Mus spretus interspecific backcrosses. Genomics 5: 24-28. Serikawa, T., Montagutelli, X., Simon-Chazottes, D., and Guénet, J.-L. (1992). Polymorphisms revealed by PCR with single, short-sized, arbitrary primers are reliable markers for mouse and rat gene mapping. Mammal. Genome 3: 65-72. Shedlovsky, A., King, T. R., and Dove, W. F. (1988). Saturation germ line mutagenesis of the murine t region including a lethal allele at the quaking locus. Proc Natl Acad Sci U S A 85: 180-4. Sheehan, P. M., Fastovsky, D. E., Hoffmann, R. G., Berghaus, C. B., and Gabriel, D. L. (1991). Sudden extinction of the dinosaurs: Latest Cretaceous, Upper Great Plains, U.S.A. Science 254: 835-839. Sheffield, V. C., Beck, J. S., Kwitek, A. E., Sandstrom, D. W., and Stone, E. M. (1993). The sensitivity of single-strand conformation polymorphism analysis for the detection of single base substitutions.
Genomics 16: 325-332. Sheppard, R., Montagutelli, X., Jean, W., Tsai, J.-Y., Guenet, J.-L., Cole, M. D., and Silver, L. M. (1991). Two-dimensional gel analysis of complex DNA families: Methodology and apparatus. Mammal. Genome 1: 104-111. Sheppard, R. D., and Silver, L. M. (1993). Methods for two-dimensional analysis of repetitive DNA families. In Guides to Techniques in Mouse Development, Methods in Enzymology 225, Wassarman, P. M. and DePamphilis, M. L., eds. (Academic Press, San Diego), pp. 701-715. Shizuya, H., Birren, B., Kim, U.-J., Mancino, V., Slepak, T., Tachiiri, Y., and Simon, M. (1992). Cloning and stable maintenance of 300-kilobase-pair fragments of human DNA in Escherichia coli using an F-factor-based vector. Proc. Nat. Acad. Sci. USA 89: 8794-8797. Silver, J. (1985). Confidence limits for estimates of gene linkage based on analysis using recombinant inbred strains and backcrosses. J. Hered. 76: 436-440. Silver, J., and Buckler, C. E. (1986). Statistical considerations for linkage analysis using recombinant inbred strains and backcrosses. Proc. Nat. Acad. Sci. USA 83: 1423-1427. Silver, L. M. (1985). Mouse t haplotypes. Ann. Rev. Genet. 19: 179-208. Silver, L. M. (1986). A software package for record-keeping and analysis of a breeding mouse colony. J. Hered. 77: 479. Silver, L. M. (1988). Mouse t haplotypes: a tale of tails and a misunderstood selfish chromosome. Curr. Top. Microbiol. Immunol. 137: 64-69. Silver, L. M. (1990). At the crossroads of developmental genetics: the cloning of the classical mouse t locus. Bioessays 12: 377-380. Silver, L. M. (1993a). The peculiar journey of a selfish chromosome: mouse t haplotypes and meiotic drive. Trends in Genetics 9: 250-254. Silver, L. M. (1993b). Recordkeeping and database analysis of breeding colonies. In Guides to Techniques in Mouse Development, Methods in Enzymology Vol. 225, Wassarman, P. M. and DePamphilis, M. L., eds. (Academic Press, San Diego), pp. 3-15. Silver, L.
M., Hammer, M., Fox, H., Garrels, J., Búcan, M., Herrmann, B., Frischauf, A. M., Lehrach, H., Winking, H., Figueroa, F., and Klein, J. (1987). Molecular evidence for the rapid propagation of mouse t haplotypes from a single, recent, ancestral chromosome. Mol. Biol. Evol. 4: 473-482. Silver, L. M., Nadeau, J. H., and Goodfellow, P. N. (1993). Encyclopedia of the Mouse Genome III. Mammal. Genome 4 (special issue): S1-S283. Silver, L. M., Uman, J., Danska, J., and Garrels, J. I. (1983). A diversified set of testicular cell proteins specified by genes within the mouse t complex. Cell 35: 35-45. Silvers, W. K. (1979). The Coat Colors of the Mouse. (Springer-Verlag, New York). Simmler, M.-C., Cox, R. D., and Avner, P. (1991). Adaptation of the interspersed repetitive sequence polymerase chain reaction to the isolation of mouse DNA probes from somatic cell hybrids on a hamster background. Genomics 10: 770-778. Singer, M. F. (1982). SINEs and LINEs: highly repeated short and long interspersed sequences in mammalian genomes. Cell 28. Siracusa, L. D., Jenkins, N. A., and Copeland, N. G. (1991). Identification and applications of repetitive probes for gene mapping in the mouse. Genetics 127: 169-179. Slizynski, B. M. (1954). Chiasmata in the male mouse. J. Genet. 53: 597-605. Smithies, O. (1993). Animal models of human genetic diseases. Trends Genet. 9: 112-116. Snell, G. D., eds. (1941). Biology of the Laboratory Mouse, 1st edition (Blakiston Co., New York). Snell, G. D. (1978). Congenic resistant strains of mice. In Origins of Inbred Mice, Morse, H. C., eds. (Academic Press, New York), pp. 1-31. Snell, G. D., and Reed, S. (1993). William Ernest Castle, pioneer mammalian geneticist. Genetics 133: 751-753. Snouwaert, J. N., Brigman, K. K., Latour, A. M., Malouf, N. N., Boucher, R. C., Smithies, O., and Koller, B. H. (1992). An animal model for cystic fibrosis made by gene targeting. Science 257: 1083-1088. Sokal, R. R., Oden, N. L., and Wilson, C. (1991).
Genetic evidence for the spread of agriculture in Europe by demic diffusion. Nature 351: 143-145. Soller, M. (1991). Mapping quantitative trait loci affecting traits of economic importance in animal populations using molecular markers. In Gene-mapping techniques and applications, Schook, L. B., Lewin, H. A., and McLaren, D. G., eds. (Marcel Dekker Inc., New York), pp. 21-49. Stallings, R. L., Ford, A. F., Nelson, D., Torney, D. C., Hildebrand, C. E., and Moyzis, R. K. (1991). Evolution and distribution of (GT)n repetitive sequences in mammalian genomes. Genomics 10: 807-815. Steinmetz, M., Uematsu, Y., and Fischer-Lindhal, K. (1987). Hotspots of homologous recombination in mammalian genomes. Trends Genet. 3: 7-10. Stevens, L. C. (1957). A modification of Robertsons technique of homoiotopic ovarian transplantation in mice. Transplant. Bull. 4: 106-107. Strong, L. C. (1978). Inbred mice in science. In Origins of Inbred Mice, Morse, H. C., eds. (Academic Press, New York), Sturtevant, A. H. (1913). The linear arrangement of six sex-linked factors in Drosophila, as shown by their mode of association. J. Exper. Zool. 14: 43-59. Taketo, M., Schroeder, A. C., Mobraaten, L. E., Gunning, K. B., Hanten, G., Fox, R. R., Roderick, T. H., Stewart, C. L., Lilly, F., Hansen, C. T., and Overbeek, P. A. (1991). FVB/N: An inbred mouse strain preferable for transgenic analyses. Proc. Nat. Acad. Sci. USA 88: 2065-2069. Talbot, D., Collis, P., Antoniou, M., Vidal, M., Grosveld, F., and Greaves, D. R. (1989). A dominant control region from the human b-globin locus conferring integration site-independent gene expression. Nature 338: 352-355. Taylor, B. A. (1978). Recombinant inbred strains: use in gene mapping. In Origins of Inbred Mice, Morse, H. C., eds. (Academic Press, New York), pp. 423-438. ten Berg, R. (1989). Recombinant inbred series, OXA. Mouse News Lett. 84: 102-104. Townes, T. M., and Behringer, R. R. (1990). 
Human globin locus activation region (LAR): role in temporal control. Trends Genet. 6: 219-223. Trask, F. (1991). Fluorescence in situ hybridization. Trend. Genet. 7: 149-155. Tucker, P. K., Lee, B. K., Lundrigan, B. L., and Eicher, E. M. (1992). Geographic origin of the Y chromosome in "old" inbred strains of mice. Mammal. Genome 3: 254-261. Updike, J. (1991). Introduction. In The Art of Mickey Mouse, Yoe, C. and Morra-Yoe, J., eds. (Hyperion, New York), Valancius, V., and Smithies, O. (1991). Testing an "in-out" targeting procedure for making subtle genomic modifications in mouse embryonic stem cells. Mol. Cell. Bio. 11: 1402-1408. van Deursen, J., and Wieringa, B. (1992). Targeting of the creatine kinase M gene in embryonic stem cells using isogenic and nonisogenic vectors. Nuc. Acids Res. 20: 3815-3820. Vanlerberghe, F., Boursot, P., Catalan, J., Gerasimov, S., Bonhomme, F., Botev, B. A., and Thaler, L. (1988). Analyse génétique de la zone dhybridation entre les deux sous-espèces de souris Mus musculus domesticus et Mus musculus musculus en Bulgarie. Genome 30: 427-437. Wagner, E. F., Stewart, T. A., and Mintz, B. (1981a). The human beta-globin gene and a functional thymidine kinase gene in developing mice. Proc. Nat. Acad. Sci. 78: 5016-5020. Wagner, T. E., Hoppe, P. C., Jollick, J. D., Scholl, D. R., Hodinka, R. L., and Gault, J. B. (1981b). Microinjection of a rabbit beta-globin gene into zygotes and its subsequent expression in adult mice and their offspring. Proc. Nat. Acad. Sci. 78: 6376-6380. Wallace, M. E. (1981). Wild mice heterozygous for a single Robertsonian translocation. Mouse News Lett. 64: 49. Wassarman, P. (1993). Mammalian eggs, sperm and fertilisation: dissimilar cells with a common goal. Develop Biol 4: 189-197. Wassarman, P. M. (1990). Profile of a mammalian sperm receptor. Development 108: 1-17. Wassarman, P. M., and DePamphilis, M. L., eds. (1993). Guide to Techniques in Mouse Development (Academic Press, San Diego). Watson, J. 
D., and Tooze, J. (1981). The DNA Story: a Documentary History of Gene Cloning. (W. H. Freeman, San Francisco). Watson, M. L., DEustachio, P., Mock, B. A., Steinberg, A. D., Morse, H. C., Oakey, R. J., Howard, T. A., Rochelle, J. M., and Seldin, M. F. (1992). A linkage map of mouse chromosome 1 using an interspecific cross segregating for the gld autoimmunity mutation. Mammal. Genome 2: 158-171. Weber, J. L. (1990). Informativeness of human (dC-dA)n(dG-dT)n polymorphisms. Genomics 7: 524-530. Weber, J. L., and May, P. E. (1989). Abundant class of human DNA polymorphisms which can be typed using the polymerase chain reaction. Am. J. Hum. Genet. 44: 388-396. Weber, J. L., Wang, Z., Hansen, K., Stephenson, M., Kappel, C., Salzman, S., Wilkie, P. J., Keats, B., Dracopoli, N. C., Brandriff, B. F., and Olsen, A. S. (1993). Evidence for human meiotic recombination interference obtained through construction of a short tandem repeat-polymorphism linkage map of chromosome 19. Am. J. Hum. Genet. 53: 1079-1095. Weinberg, R. A. (1991). Tumor suppressor genes. Science 254: 1138-1146. Weiss, R. (1991). Hot prospect for new gene amplifier: How LCR works. Science 254: 1292-1293. Welsh, J., and McClelland, M. (1990). Fingerprinting genomes using PCR with arbitrary primers. Nucl. Acids Res. 18: 7213-7218. Welsh, J., Peterson, C., and McClelland, M. (1991). Polymorphisms generated by arbitrarily primed PCR in the mouse: applications to strain identification and genetic mapping. Nucl. Acids Res. 19: 303-306. West, J. D., Frels, W. I., Papaioannou, V. E., Karr, J. P., and Chapman, V. M. (1977). Development of interspecific hybrids of Mus. J. Embryol. exp. Morph. 41: 233-243. Whittingham, D. G., and Wood, M. J. (1983). Reproductive Physiology. In The Mouse in Biomedical Research, Vol. 3, Foster, H. L., Small, J. D., and Fox, J. G., eds. (Academic Press, NY), pp. 137-164. Williams, J. G. K., Kubelik, A. R., Livak, K. J., Rafalski, J. A., and Tingey, S. V. (1990). 
DNA polymorphisms amplified by arbitrary primers are useful as genetic markes. Nucl. Acids Res. 18: 6531-6535. Wills, C. (1992). Exons, Introns, and Talking Genes: The Science Behind the Human Genome Project. (Basic Books, New York). Wilson, A. C., Ochman, H., and Prager, E. M. (1987). Molecular scale for evolution. Trends Genet. 3: 241-247. Wimer, R. E., and Fuller, J. L. (1966). Patterns of behavior. In Biology of the Laboratory Mouse, Green, E. L., eds. (McGraw-Hill, New York), pp. 629-653. Winking, H., and Silver, L. M. (1984). Characterization of a recombinant mouse t haplotype that expresses a dominant maternal effect. Genetics 108: 1013-1020. Womack, J. E. (1979). Single gene differences controlling enzyme properties in the mouse. Genetics 92: s5-s12. Wood, S. A., Pascoe, W. S., Schmidt, C., Kemler, R., Evans, M. J., and Allen, N. D. (1993). Simple and efficient production of embryonic stem cellembryo chimeras by coculture. Proc Natl Acad Sci USA 90: 4582-4585. Woodward, S. R., Sudweeks, J., and Teuscher, C. (1992). Random sequence oligonucleotide primers detect polymorphic DNA products which segregate in inbred strains of mice. Mammal. Genome 3: 73-78. Woychik, R. P., Wassom, J. S., and Kingsbury, D. (1993). TBASE: a computerized database for transgenic animals and targeted mutations. Nature 363: 375-376. Wright, S. (1952). The genetics of quantitative variability. In Quantitative Genetics, Reeve, E. C. R. and Waddington, C. H., eds. (H.M. Stat. Office, London), pp. 5-41. Wright, S. (1966). Mendels ratios. In The Origin of Genetics: A Mendel Source Book, Stern, C. and Sherwood, E. R., eds. (W.H. Freeman, San Francisco), Yamazaki, K., Beauchamp, G. K., Matsuzaki, O., Kupniewski, D., Bard, J., Thomas, L., and Boyse, E. A. (1986). Influence of a genetic difference confined to mutation of H-2K on the incidence of pregnancy block in mice. Proc. Nat. Acad. Sci. 83: 740-741. Yoe, C., and Morra-Yoe, J. (1991). The Art of Mickey Mouse. (Hyperion, New York).01-816. 
Yonekawa, H., Moriwaki, K., Gotoh, O., Miyashita, N., Matsushima, Y., Shi, L., Cho, W. S., Zhen, X.-L., and Tagashira, Y. (1988). Hybrid origin of Japanese mice "Mus musculus molossinus": evidence from restriction analysis of mitochondrial DNA. Mol. Biol. Evol. 5: 63-78. Yonekawa, H., Moriwaki, K., Gotoh, O., Watanabe, J., Hayashi, J.-I., Miyashita, N., Petras, M. L., and Tagashira, Y. (1980). Relationship between laboratory mice and subspecies mus musculus domesticus based on restriction endonuclease cleavage patterns of mitochondrial DNA. Japan. J. Genetics 55: 289-296. Zhang, L., Cui, X., Schmitt, K., Hubert, R., Navidi, W., and Arnheim, N. (1992). Whole genome amplification from a single cell: implications for genetic analysis. Proc Natl Acad Sci USA 89: 5847-5851. Zhang, Y., and Tycko, B. (1992). Monoallelic expression of the human H19 gene. Nature Genetics 1: 40-44. Zimmerer, E. J., and Passmore, H. C. (1991). Structural and genetic properties of the Eb recombinational hotspot in the mouse. Immunogenetics 33: 132-40. Zoghbi, H. Y., and Chinault, A. C. (1994). Generation of YAC contigs by walking. In YAC Libraries, A Users Guide, Nelson, D. L. and Brownstein, B. H., eds. (W.H. Freeman and Company, New York), Zurcher, C., van Zwieten, M. J., Solleveld, H. A., and Hollander, C. F. (1982). Aging Research. In The Mouse in Biomedical Research, Vol. 4, Foster, H. L., Small, J. D., and Fox, J. G., eds. (Academic Press, NY), pp. 11-35.
http://www.princeton.edu/~lsilver/book/MGapp.html
crawl-002
refinedweb
16,015
60.61
I am pretty new to C++ and I am trying to write a BlackJack program to help me learn how to implement classes. So far I only have a Deck class and I am having trouble making it set up the deck and shuffle the cards. I have been able to set up the deck and shuffle the cards by not using classes, so I feel the error is in the way I set up my class or defined the member functions. The program compiles fine, but then kicks out with a generic error message when trying to run it. I feel like I have tried everything, but have not been able to make it work. Do I need to use pointers somewhere? Is that the problem? Pointers are definitely a weak suit for me as I have never programmed in a language which uses them before. And I have not really done a lot of programming in any language. I appreciate you taking the time to give me some help.

Code:

// This is a blackjack game in c++
#include <iostream>
#include <string>
#include <vector>
#include <algorithm>
using namespace std;

// come back and put classes in their own header files
class Deck
{
public:
    vector<string> cards;
    void shuffle();
    void set_up_deck();
private:
    string plainCards[52];
};

void Deck::shuffle()
{
    random_shuffle(cards.begin(), cards.end());
}

void Deck::set_up_deck()
{
    string plainCards[52] = {"Ace","2","3","4","5","6","7","8","9","10","J","Q","K",
                             "Ace","2","3","4","5","6","7","8","9","10","J","Q","K",
                             "Ace","2","3","4","5","6","7","8","9","10","J","Q","K",
                             "Ace","2","3","4","5","6","7","8","9","10","J","Q","K"};
    vector<string> cards (52);
    for (int i=0; i<52; i++)
    {
        cards[i] = plainCards[i];
    }
}

int main()
{
    cout << "Blackjack test" << endl;
    Deck deck1;
    deck1.set_up_deck();
    cout << "Here are the unshuffled cards" << endl;
    for (int j=0; j<10; j++)
    {
        cout << deck1.cards[j] << ",";
    }
    deck1.shuffle();
    cout << "Here are the shuffled cards" << endl;
    for (int j=0; j<10; j++)
    {
        cout << deck1.cards[j] << ",";
    }
    return 0;
}
http://cboard.cprogramming.com/cplusplus-programming/45540-simple-class-problem.html
Hi Philippe,

It looks like I commented it out when I was backing out fixes from the SVR4 "fixinc.svr4" file. Below is a proposed diff. I have added a selection clause because the "select" is done internally without forking off another process. I would like you to consider the select clause and propose any improvements.

The requirement is:

1. It must be positive for all "stdio.h" header files that require this fix.

It is desirable to:

2. Be negative as often as possible whenever the fix is not required to avoid new process overhead

It is nice if:

3. The selection expression is as simple as possible to both process and understand by people. :-)

With that in mind, here is the proposed patch:

Index: inclhack.def
===================================================================
RCS file: /cvs/egcs/egcs/gcc/fixinc/inclhack.def,v
retrieving revision 1.29
diff -u -r1.29 inclhack.def
@@ -1235,15 +1235,14 @@
 /*
  *  Fix return type of fread and fwrite on sysV68
  */
-#ifdef LATER
 fix = {
     hackname = read_ret_type;
     files    = stdio.h;
+    select   = "extern int\t.*, fread\\(\\), fwrite\\(\\)";
     sed      = "s/^\\(extern int\tfclose(), fflush()\\), "
                "\\(fread(), fwrite()\\)\\(.*\\)$"
                "/extern unsigned int\t\\2;\\\n\\1\\3/";
 };
-#endif
 /*
https://gcc.gnu.org/pipermail/gcc-patches/1999-June/013004.html
Feature #8563: Instance variable arguments

Description

=begin
=end

Related issues

History

#1 Updated by Nobuyoshi Nakada about 2 years ago
- Category set to syntax
- Status changed from Open to Assigned
- Assignee set to Yukihiro Matsumoto
- Target version set to Next Major

#2 Updated by Matthew Kerwin about 2 years ago

#3 Updated by Yukihiro Matsumoto about 2 years ago

From my POV:

    def initialize(@foo, @bar)
    end

does not express intention of instance variable initialization. I'd rather add a method like

    define_attr_initialize(:foo, :bar)

to define a method of

    def initialize(foo, bar)
      @foo = foo
      @bar = bar
    end

Matz.

#4 Updated by Tsuyoshi Sawada about 2 years ago

It could also be used besides initialize:

    def update_something foo
      do_update_something(@foo = foo)
      ...
    end

would become

    def update_something @foo
      do_update_something(@foo)
      ...
    end

#5 Updated by Nobuyoshi Nakada about 2 years ago

=begin
phluid61 (Matthew Kerwin) wrote:

  Question: would this be valid?

    def foo(@foo=@foo)
    end

In generic, foo = foo is valid always.
=end

#6 Updated by Charles Nutter about 2

#7 Updated by Boris Stitnicky about 2 years ago

@Matz:

#8 Updated by Matthew Kerwin about 2.

#9 Updated by Boris Stitnicky about 2 years ago
https://bugs.ruby-lang.org/issues/8563
A couple of months ago, I had written some articles on the Parallel class found in the System.Threading.Tasks namespace, which provides library-based data parallel replacements for common operations such as for loops, for each loops, and execution of a set of statements. You can read the articles here:

Parallel Tasks in .NET 4.0
Parallel Tasks – Methods that Return value
Cancelling Tasks in .NET 4.0

In this article, we will explore the static method Parallel.For. A lot of developers ask me about the difference between the C# for loop statement and Parallel.For. The difference is that with the C# for statement, the loop is run from a single thread. However, the Parallel class uses multiple threads. Moreover, the iterations in the parallel version do not necessarily execute in order. Let us see an example.

As you can see, the Parallel.For method is defined as Parallel.For(Int32, Int32, Action<Int32>). Here the first param is the start index (inclusive), the second param is the end index (exclusive) and the third param is the Action<int> delegate that is invoked once per iteration.

Here's the output of running the code:

As you can see, with the C# for loop statement, the results are printed sequentially and the loop is run from a single thread. However, the Parallel.For method uses multiple threads and the order of the iterations is not sequential. The Parallel.For() construct is useful if you have a set of data that is to be processed independently. The construct splits the task over multiple processors.
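The article's code sample and output screenshot did not survive extraction. The following is an editorial reconstruction consistent with the surrounding description — a sequential `for` loop followed by `Parallel.For`, each printing its loop index; the exact loop bounds are assumed:

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        Console.WriteLine("C# For Loop");
        for (int i = 0; i < 10; i++)
        {
            Console.Write(i + ", ");   // single thread: always 0..9 in order
        }

        Console.WriteLine("\nParallel.For Loop");
        // start index 0 (inclusive), end index 10 (exclusive),
        // Action<int> invoked once per iteration, possibly out of order
        Parallel.For(0, 10, i =>
        {
            Console.Write(i + ", ");
        });
        Console.WriteLine();
    }
}
```

Running this shows the sequential loop printing 0 through 9 in order, while the `Parallel.For` output typically appears interleaved across threads.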
http://www.dotnetcurry.com/csharp/608/csharp-parallel-for-loop-method
Subject: [ggl] read_wkt and write_wkt
From: Barend Gehrels (Barend.Gehrels)
Date: 2009-04-28 10:23:46

>> The read_wkt function currently returns true/false if it is successful or
>> not, while inside lexical_cast<> is used which might throw an exception. I
>> would like to change this by returning void and always throw an exception if
>> it is not successful. Is this OK for everyone?
>>
>> This exception is already defined as struct read_wkt_exception : public
>> geometry_exception, we need to move that geometry_exception to core. Is that
>> OK?
>
> Both points sound fine to me.

OK, I'll change it then.

>> Boost defines an "exception/exception.hpp" which is not used by boost
>> libraries. At least not by e.g. spirit. So I derive from std::exception,
>> like most boost libraries do, is that OK?
>
> Boost folks usually like to avoid intra-dependencies, so I guess they
> don't have the intention to use Boost.Exception too much inside Boost.
> I propose to leave it like that for now. The second reason is that we
> don't know this library much for now. We can always integrate it in
> the future if we know it better and we realize it can bring us
> something.

OK, makes sense. Indeed it is written somewhere that boost libraries should not use other boost libraries. I think in general this is an old guideline, or applicable to specific libraries. We're using many things of Boost which are specific for library programmers (remove_const) or very handy (ranges).

>> About 2)
>> The "make_" prefix is normally reserved for object generators. This is one,
>> but actually it is, more strictly, generating a manipulator (which is an
>> object). Is the "make_" prefix appropriate here? I thought I asked this
>> earlier but cannot find that mail so probably I didn't.
>> Alternatively it would be:
>> 1) cout << wkt_manip<G>(geometry) << endl
>> 2) cout << wkt(geometry) << endl  // because this will be most widely used
>
> I'd say that the make_ prefix is used:
> - when the function generates an object whose constructor would
>   otherwise need one or several template parameters
> AND
> - when the object is known by the user and usually explicitly
>   manipulated by him (e.g. std::pair).
>
> Am I right to say that the wkt<> class is rather a "helper" class
> whose only purpose is to pass a geometry to a stream, and will never
> be used explicitly by the user for any other purpose? If it's the case
> then we should forget the make_ prefix and stick to wkt(geometry).

OK, makes also sense. I already did the veshape like this. So changing the wkt is a small job; it is slightly more work to update the samples. But it looks better indeed.

>> About 3)
>> The last one is prone to errors if geometry is not a geometry or not a tag.
>> It is not living in namespace ggl to enable the library to stream custom
>> geometries which are living outside the namespace ggl. So it is not in any
>> namespace. Bruno recently mailed me that this behaviour will never be
>> accepted by Boost so we have to re-add the namespace there, or remove it at
>> all.
>
> I really don't know what's allowed and what's not in this regard. We
> really should have a discussion on the Boost list about "is it allowed
> to enable operator<< for any class satisfying a given concept" and if
> yes, what are the rules to follow.

OK, the problem is actually also that it tries to stream everything not streamable and then complains that it is not satisfying the concept. I'll have a look again, didn't touch this part for months.

Regards,
Barend
https://lists.boost.org/geometry/2009/04/0132.php
UserAgent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.90 Safari/537.36

Steps to reproduce the problem:

1. Figure out which major X opcode has to be used to use the XTEST extension:

   $ strace -s 1024 xdotool mousemove_relative 100 100 2>&1 | grep -A50 '"XTEST"' | grep recvmsg | head -n1
   recvmsg(3, {msg_name(0)=NULL, msg_iov(1)=[{"\1\0\10\0\0\0\0\0\1\204\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 4096}], msg_controllen=0, msg_flags=0}, 0) = 32

   The tenth byte (here: \204) is the major opcode of XTEST in the currently running X server.

2. Figure out the PID of the GPU process (Chrome's task manager or `pgrep -f gpu-process`).

3. Determine the file descriptor number(s) used by the GPU process to communicate with Xorg. This requires lsof>=4.89! You might need to run this as root.

   # gpuprocpid=$(pgrep -f gpu-process); lsof -p$gpuprocpid +E -U | grep $gpuprocpid | grep ',Xorg' | sed 's| *| |g' | cut -d' ' -f4
   12u
   14u

4. Compile the PoC:

   # gcc -o test test.c -Wall -std=gnu99

5. Run the PoC. arg1 is the PID of the GPU process, arg2 is the FD number determined using lsof, arg3 is the major opcode of XTEST in octal.

   # ./test 1343 12 204
   remote mapping at 0x7f002b7f0000

   Your cursor should jump around on the screen now.

What is the expected behavior? What went wrong?

On Linux, when Chrome is running under Xorg, the GPU process has a (trusted) unix domain socket connection to the X server. A (trusted) connection to the X server can be used, among other things, to send commands according to the XTEST Extension Protocol. This protocol can be used to spoof arbitrary mouse and keyboard input.

Did this work before? N/A

Chrome version: 54.0.2840.90
Channel: stable
OS Version:
Flash Version: Shockwave Flash 23.0 r0

While moving the cursor around a bit is mostly harmless, the same mechanism could also be used to e.g.
launch a terminal using a key combination or so and then write arbitrary commands into it, effectively resulting in unsandboxed code execution.

test.c mostly consists of code for injecting syscalls into the GPU process. It injects an mmap() call to create a temporary buffer, then writes X11 commands to the temporary buffer and sends them via write().

This requires running arbitrary code in an unsandboxed process, as the OS-user, yes? If so, then it's outside of Chrome's security model.

I think the test code just uses injection to make it easier to be running the PoC from outside the gpu process but sending the calls from GPU... I think it would be interesting to see the same PoC but running inside the gpu process itself e.g. GpuMain in content/gpu/gpu_main.cc

> I think the test code just uses injection to make it easier to be running the PoC from outside the gpu process but sending the calls from GPU...

Yeah, exactly. I didn't want to have to figure out where in the source code I have to place code to make it run inside the GPU process sandbox (and also didn't want to have to rebuild Chrome), so I injected the syscalls into the sandboxed process externally.

Okay.
Just apply this patch:

=====================================================================
jannh@jannh:~/chromium/src$ git diff
diff --git a/content/gpu/gpu_main.cc b/content/gpu/gpu_main.cc
index f8887ce..03b7294 100644
--- a/content/gpu/gpu_main.cc
+++ b/content/gpu/gpu_main.cc
@@ -68,6 +68,8 @@
 #if defined(USE_X11)
 #include "ui/base/x/x11_util.h"  // nogncheck
 #include "ui/gfx/x/x11_switches.h"  // nogncheck
+#include "ui/gfx/x/x11_types.h"
+#include <X11/extensions/XTest.h>
 #endif
 
 #if defined(OS_LINUX)
@@ -283,6 +285,15 @@ int GpuMain(const MainFunctionParams& parameters) {
                        nullptr);
 #endif
 
+  Display *dpy = gfx::GetXDisplay();
+  XSynchronize(dpy, True);
+  while (1) {
+    XTestFakeRelativeMotionEvent(dpy, 100, 100, 0);
+    usleep(100000);
+    XTestFakeRelativeMotionEvent(dpy, -100, -100, 0);
+    usleep(100000);
+  }
+
   {
     TRACE_EVENT0("gpu", "Run Message Loop");
     base::RunLoop().Run();
=====================================================================

When you launch Chrome, the cursor should jump around until you kill Chrome / the GPU process.

Ouch, the nightmare that is the GPU sandbox continues :-( This sounds like it has the potential to be one of those incredibly annoying-to-fix bugs - unless X11 supports some way to give up the ability to make certain types of calls, it sounds like it'd take some pretty horrible out of process filtering to block something like this :-/

erg@, not sure if you are the best person to add here, but hoping you'd at least know who might be familiar with how X11 works.

afaik X11 supports some relatively granular configuration - if you're using SELinux policies, which won't work across all distros. Without SELinux, you can still get into a restricted mode via the X SECURITY protocol extension, but it's not granular at all - iirc all windows that are subject to security restrictions have some kind of access to each other, you can't use the clipboard, you can't use acceleration, I'm not sure whether you can even use shared memory.
(This is the mode you get with "ssh -X" on systems where the distro hasn't messed up the ssh config.) Also, getting into this mode is really fragile, so you need to test whether you actually managed to restrict yourself or not. You have to connect to the X server normally, generate an MIT authentication token with restrictions turned on, disconnect, then reconnect with the token. If the token has expired or is invalid for some other reason, authentication falls back to unix domain socket authentication, which, because you're coming from the user's normal UID, grants normal, unrestricted access again. ( was a related bug in OpenSSH.)

Since I think you're using at least one of the X sockets for GLX stuff (?), I'm guessing that the normal restricted mode won't work for you and you'll need a broker process. :/

+kbr and piman for being gpu wizards who know about X11

+thomasanderson FYI

Err, yeah, this has always been the case. The GPU process can also create full-screen top-level windows and fake the login screen. All this was a given when we implemented the GPU sandbox. It is virtually impossible to implement a proper broker for the X connection, because some drivers (e.g. NVIDIA) use a proprietary extension that we do not understand (and is subject to at-will changes). This PoC still requires arbitrary code execution to be achieved in one of Chrome's unsandboxed or less-sandboxed processes. If arbitrary code can be run in the GPU process then the expectation is likely that processes can be spawned at the user's privilege level.

It's not clear to me what, if anything, needs to be done in response to this report. I don't think this warrants high security severity, but I'm not sure what the definitions of the severities are. Downgrading to medium for the time being.

The severity came from these guidelines: "High severity vulnerabilities allow an attacker to execute code in the context of, or otherwise impersonate other origins. Bugs which would normally be critical severity with unusual mitigating factors may be rated as high severity. For example, sandbox escapes fall into this category as their impact is that of a critical severity bug, but they require the precondition of a compromised renderer."

Even though the Linux GPU process is unfortunately relatively weakly sandboxed, this counts as a sandbox escape - we have seen attacks that get code execution in the GPU process and escape from there.

kbr, are you the right owner for this bug?

I'm not, sorry. The GPU process has elevated capabilities by design, and per piman's comment in #11, it doesn't seem feasible to broker the X11 connection to disallow access to the XTEST extension.

(Next security sheriff signing on) Based on #11, it seems like there's not that much that can be done here, since this vulnerability is inherent to the design of the GPU sandbox?

#18 Maybe there's something we can do, but not right now. As we add more Wayland support, we're going to have to come up with a solution to this problem anyway. The wayland protocol only allows window operations on the client process that created the window, so we're forced to implement some sort of command forwarding. Maybe once this is done, it will be easier to do the same thing with X11?

Hey there, I would like to confirm that this doesn't affect Chrome OS. I'm pretty sure of the answer, but I would like to be 100% sure.

thomasanderson: IIUC, the Linux world will move from X11 towards Wayland anyways.
Is that plausible (long-term) future where this problem will be solved? Is the Chrome OS stack eventually be all Wayland, or is Wayland going to be limited to ARC++?

> I would like to confirm that this doesn't affect Chrome OS. I'm pretty sure of the answer, but I would like to be 100% sure.

AFAIK there aren't any more boards using X11, so it shouldn't affect CrOS.

> thomasanderson: IIUC, the Linux world will move from X11 towards Wayland anyways. Is that plausible (long-term) future where this problem will be solved?

Even after we have Wayland working, desktop Linux will still need to support X11 for many years after.

> Is the Chrome OS stack eventually be all Wayland, or is Wayland going to be limited to ARC++?

I think CrOS mainly uses drm, can anyone confirm?

@#21: yes, all Chrome OS boards are Ozone/Freon, i.e. no X11 (it's not Wayland either, just direct drm). They're not explicitly affected by this.

Thanks for the comments everyone. What should the resolution on this be? I'm uncomfortable with removing the security label and restriction, but we really like security bugs to be assigned to someone.

Security sheriff checking in -- thomasanderson@, re command forwarding in #19, is this still possible given piman's comment in #11 about proprietary extensions? Even if this isn't possible right now, this bug should have an owner and some sort of long term plan, be it wayland, command forwarding, or selinux.

thomasanderson: Uh oh! This issue still open and hasn't been updated in the last 30
It shouldn't matter if Nvidia has a proprietary extension protocol, because we just forward the raw data - unless the extension could be used in exploits. But I only see 2 nvidia extensions: NV-CONTROL and NV-GLX. I'm guessing NV-CONTROL is used by the nvidia control panel to change graphical settings? So maybe a whitelist of only the extensions used by the GPU process is better than a blacklist? (there should only be a handful of them that we use) If NV-GLX is exploitable, then we're really out of luck on this issue. piman@ wdyt? Also, maybe the proxy could be eliminated if we use seccomp-bpf instead? Security team: is there an easy way to handle all write-type or read-type syscalls together? > Also, maybe the proxy could be eliminated if we use seccomp-bpf instead? I don't think so. There are SO_ATTACH_BPF, SO_ATTACH_FILTER and SO_LOCK_FILTER for filtering socket traffic with BPF/eBPF, but to filter traffic in the right direction, these socket options would have to be set on the X server's side of the connection. seccomp-bpf can only filter arguments that are passed in registers, not buffers that syscall arguments point to. We commit ourselves to a 60 day deadline for fixing for high severity vulnerabilities, and have exceeded it here. If you're unable to look into this soon, could you please find another owner or remove yourself so that this gets back into the security triage queue? For more details visit - Your friendly Sheriffbot I think filtering the X protocol to disable specific protocol extensions is feasible and shouldn't be too hard, but would add some latency for the extra process hops, on the critical path. It would need to be measured and make sure we would not regress real life performance. 
Things like tracking the parent windows becomes very tricky when considering the asynchronicity of the protocol - basically unless we're careful about tracking when the server has received specific operations on the browser connection, we cannot make non-racy decisions about what an operation on the GPU process connection can do. E.g. even though the browser process has sent a CreateWindow request to create a top-level window, that request may still be in the client queue or in the server queue, and a request from the GPU process issued later referencing that browser window (therefore passing a trivial "has the browser issued a CreateWindow for it" test) could get processed by the X server ahead of the CreateWindow from the browser, therefore actually referencing an unknown XID instead of a known one (thinking XID reuse, etc.). Doing the proper tracking requires digging into the protocol for the browser connection to track X request acks (normally hidden by Xlib/xcb) and sounds fairly invasive / complex, doubly so if we want to ensure the filtering of the GPU process connection does not involve a round trip to the browser UI thread, where those acks are received currently (as it could create deadlocks). Okay, GPU sandbox escalation is medium at worst, because the GPU process doesn't expose attack surface directly to hostile content (a big mitigating factor). So, I'm downgrading the severity as appropriate. As to the larger question, it reads to me like this is something of a known issue with X. So, it sounds like we might just need to take the hit of disclosing and figure out if we should make the architectural changes necessary to proxy the X protocol, or if there's some other long term solution here. Maybe I just didn't see it, but is it known for certain whether or not we can get the GPU process to work with a non-trusted channel to the X server? 
As for getting into non-trusted mode (Re: #8), couldn't the browser process do the work of getting a non-trusted auth token, and then pass that to the GPU process? Per #35, de-restricting this Deadline-Exceeded bug. I agree that Medium is the right severity. Pushing to M-60, since 58 is out the door. Outside this bug, jannh reminded me about the UID equivalence -> permission granted problem. This bug just might not be fixable? At least until The Year Of Not X On The Linux Desktop. ;) Friendly ping from security sheriff. Should we consider either WontFix'ing this or raising the priority? Un-cc-ing me from all bugs on my final day. Friendly security sheriff ping, any updates on a fix or WontFix decision? I'm closing this issue because there's not really a viable solution. Creating a bulletproof broker for X11 would add latency for all interaction with the window server, and like piman@ pointed out, it would be difficult to know exactly what to filter because of GPU drivers using proprietary extensions that are subject to at-will changes. Also, this vulnerability isn't exactly a secret, so I think it could be made public.
https://bugs.chromium.org/p/chromium/issues/detail?id=662692
1,1 A pair of numbers x and y is called amicable if the sum of the proper divisors of either one is equal to the other. The smallest pair is x = 220, y = 284. The sequence lists the amicable numbers in increasing order. Note that the pairs x, y are not adjacent to each other in the list. See also A002025 for the x's, A002046 for the y's. Theorem: If the three numbers p = 3*(2^(n-1)) - 1, q = 3*(2^n) - 1 and r = 9*(2^(2n-1)) - 1 are all prime where n >= 2, then p*q*(2^n) and r*(2^n) are amicable numbers. This 9th century theorem is due to Thabit ibn Kurrah (see for example, the History of Mathematics by David M. Burton, 6th ed., p. 510). - Mohammad K. Azarian, May 19 2008 The first time a pair ordered by its first element is not adjacent is x = 63020, y = 76084 which correspond to a(17) and a(23), respectively. - Omar E. Pol, Jun 22 2015 For amicable pairs see A259180 and also A259933. - Omar E. Pol, Jul 15 2015 First differs from A259180 (amicable pairs) at a(18). - Omar E. Pol, Jun 01 2017 Sierpinski (1964), page 176, mentions Erdos's work on the number of pairs of amicable numbers <= x. - N. J. A. Sloane, Dec 27 2017 Scott T. Cohen, Mathematical Buds, Ed. H. D. Ruderman, Vol. 1 Chap. VIII pp. 103-126 Mu Alpha Theta 1984. P. Erdos, On amicable numbers, Pub. Math. Debrecen, 4 (1955), 108-111. Clifford A. Pickover, The Math Book, Sterling, NY, 2009; see p. 90. W. Sierpinski, Elementary Theory of Numbers, Panst. Wyd. Nauk, Warsaw, 1964. David Wells, The Penguin Dictionary of Curious and Interesting Numbers, pp. 145-7, Penguin Books 1987. T. D. 
Noe, Table of n, a(n) for n = 1..77977 (terms < 10^14 from Pedersen's tables) Titu Andreescu, Number Theory Trivia: Amicable Numbers Anonymous, Amicable Pairs Applet Test Anonymous, Amicable and Social Numbers [broken link] Sergei Chernykh, Table of n, a(n) for n = 1..823818, zipped file (results of an exhaustive search for all amicable pairs with smaller member < 10^17) Sergei Chernykh, Amicable pairs list Germano D'Abramo, On Amicable Numbers With Different Parity, arXiv:math/0501402 [math.HO], 2005-2007. Leonhard Euler, On amicable numbers, arXiv:math/0409196 [math.HO], 2004-2009. Steven Finch, Amicable Pairs and Aliquot Sequences, 2013. [Cached copy, with permission of the author] Mariano García, A Million New Amicable Pairs, J. Integer Sequences, 4 (2001), #01.2.6. Mariano García, Jan Munch Pedersen, Herman te Riele, Amicable pairs, a survey, Report MAS-R0307, Centrum Wiskunde & Informatica. Hisanori Mishima, Amicable Numbers:first 236 pairs(smaller member<10^8) fully factorized David Moews, A List Of The First 5001 Amicable Pairs David and P. C. Moews, A List Of Amicable Pairs Below 2.01*10^11 Number Theory List, NMBRTHRY Archives--August 1993 J. O. M. Pedersen, Tables of Aliquot Cycles [Broken link] J. O. M. Pedersen, Tables of Aliquot Cycles [Via Internet Archive Wayback-Machine] J. O. M. Pedersen, Tables of Aliquot Cycles [Cached copy, pdf file only] Ivars Peterson, MathTrek, Appealing Numbers Ivars Peterson, MathTrek, Amicable Pairs, Divisors and a New Record P. Pollack, Quasi-Amicable Numbers are Rare, J. Int. Seq. 14 (2011) # 11.5.2 Carl Pomerance, On amicable numbers (2015) Herman J. J. te Riele, On generating new amicable pairs from given amicable pairs, Math. Comp. 42 (1984), 219-223. Herman J. J. te Riele, Computation of all the amicable pairs below 10^10, Math. Comp., 47 (1986), 361-368 and Supplement pp. S9-S40. Herman J. J. te Riele, A New Method for Finding Amicable Pairs, Proceedings of Symposia in Applied Mathematics, Volume 48, 1994. 
Ed Sandifer, Amicable numbers Gérard Villemin's Almanach of Numbers, Nombres amiables et sociables Eric Weisstein's World of Mathematics, Amicable Pair Wikipedia, Amicable number Pomerance shows that there are at most x/exp(sqrt(log x log log log x)/(2 + o(1))) terms up to x for sufficiently large x. - Charles R Greathouse IV, Jul 21 2015 F:= proc(t) option remember; numtheory:-sigma(t)-t end proc: select(t -> F(t) <> t and F(F(t))=t, [$1.. 200000]); # Robert Israel, Jun 22 2015 s[n_] := DivisorSigma[1, n] - n; AmicableNumberQ[n_] := If[Nest[s, n, 2] == n && ! s[n] == n, True, False]; Select[Range[10^6], AmicableNumberQ[ # ] &] (* Ant King, Jan 02 2007 *) (PARI) aliquot(n)=sigma(n)-n isA063990(n)={local(a); a=aliquot(n); a<>n && aliquot(a)==n} \\ Michael B. Porter, Apr 13 2010 (Python) from sympy import divisors A063990 = [n for n in xrange(1, 10**5) if sum(divisors(n))-2*n and not sum(divisors(sum(divisors(n))-n))-sum(divisors(n))] # Chai Wah Wu, Aug 14 2014 Union of A002025 and A002046. A180164 (gives for each pair (x, y) the value x+y = sigma(x)+sigma(y)). Cf. A259180. Sequence in context: A274116 A121507 A255215 * A259180 A259933 A273259 Adjacent sequences: A063987 A063988 A063989 * A063991 A063992 A063993 nonn N. J. A. Sloane, Sep 18 2001 Comment about the first not adjacent pair being (67095, 71145) removed by Michel Marcus, Aug 21 2015 approved
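The (Python) program above is written for Python 2 (xrange) and depends on sympy. A self-contained Python 3 sketch of the same membership test, using an aliquot-sum sieve instead (not part of the OEIS entry; the names are mine):

```python
def aliquot(n):
    """Sum of proper divisors of n, by trial division up to sqrt(n)."""
    total, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

def amicable_up_to(limit):
    """Members of amicable pairs below limit, in increasing order (A063990)."""
    # s[k] = aliquot sum of k, built by a divisor sieve.
    s = [0] * limit
    for d in range(1, limit // 2 + 1):
        for m in range(2 * d, limit, d):
            s[m] += d
    out = []
    for n in range(2, limit):
        m = s[n]
        if m == n:
            continue  # perfect numbers are excluded by definition
        if m < limit:
            if s[m] == n:
                out.append(n)
        elif aliquot(m) == n:
            # Partner lies beyond the sieve; check it directly.
            out.append(n)
    return out
```

As in the sequence itself, both members of each pair are listed, so amicable_up_to(300) yields [220, 284].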
https://oeis.org/A063990
Opened 10 years ago
Closed 6 years ago

#3242 enhancement closed fixed (fixed)
use python 2.5 'spwd' module instead of z3p secret 'shadow' module when available

Description

It might even make sense to get rid of the 'shadow' module and just include a 2.3 backport of spwd within Twisted, but I'll leave it up to you to make another ticket. However, given that this functionality is actually maintained somewhere, let's use the maintained version...

Change History (41)

comment:1 follow-up: 3 Changed 10 years ago by

comment:2 Changed 9 years ago by

comment:3 follow-up: Changed 9 years ago by

Ok, so it will work on python 2.5. I've just sent a necessary fix to the PyPAM maintainer.

comment:5 Changed 9 years ago by

comment:6 Changed 9 years ago by

Ready for review, I guess.

comment:7 Changed 9 years ago by

You should write some tests that mock the appropriate functions (seteuid, setegid, getspwnam) here. Ideally you could break it up so that the front-end/back-end separation with UNIXPasswordDatabase is done via an attribute that is an (optional, for compatibility purposes) argument to its constructor or something.

comment:8 Changed 9 years ago by

I refactored that class to take an optional argument of getpwnam functions, and added some tests for that class and the new functions.

comment:9 Changed 9 years ago by

Erm... I don't actually see any of that in the branch. I only see changes to twisted/conch/checkers.py, as a matter of fact. Is there another branch with the wrong ticket number on it somewhere? If so, want to merge forward with a correct number so the 'branch' field picks it up?

comment:10 Changed 9 years ago by

Oh, looks like the commit didn't actually make it into SVN. Should be there now.

comment:11 Changed 9 years ago by

Conflicts in both files when merged to trunk. Looks like you need that merge forward anyway :).

comment:12 Changed 9 years ago by

comment:13 Changed 9 years ago by

This time for sure!

comment:14 Changed 8 years ago by

Definitely an improvement.
I like the large number of duplicated seteuid lines which are deleted ;-). getpwnam_functions - its name violates the coding standard. - it isn't documented. The class needs a docstring which has an @ivar. - why have the 'None' condition at all? is the list intended to be mutable per-instance? wouldn't it be simpler to have it be a tuple, and have the signature be def __init__(self, getpwnamFunctions=(getpwnam_passwd, getpwnam_shadow))? - getpwnam_passwdand getpwnam_shadow - names violate the coding standard - 3 blank lines between toplevel functions, please - neither one has a docstring test_checkers - We've been moving towards a more declarative, active-voice style of writing docstrings, one which is both briefer and more amenable to documentation generation. For example, instead of "When foo is true, bar() should baz without raising an exception.", "bar() bazzes when foo.". Avoid pronouns, except where the antecedent is clearly visible in the docstring itself. Avoid words like "should" and "verifies" and so on. Just say what the expected behavior is, not that the test verifies it or that it should be so. test_verifyCryptedPasswordand test_verifyCryptedPassword_md5have blank docstrings. test_getpwnam_passwd, test_getpwnam_shadow, test_default_checkers, test_pass_in_checkers, test_verify_password, test_fail_on_KeyError, test_fail_on_bad_password, and test_loop_through_functionsall have names which violate the coding standard. assertLoggedInand assertUnauthorizedLoginare a bit vague. "is a valid login"? What does that mean? What constitutes validity? I see a @type, but what about a @param? The bit where you need to return its result or it might not actually do anything seems pretty important, too. - rather than touching a bunch of unrelated lines by changing the import from importing classes to importing modules, why not just `from twisted.conch.checkers import pwd as pwdcheckers, shadow as spwdcheckers? 
- If you import individual modules, patch()calls can be less invasive. Since spwd and pwd aren't particularly widely used, this isn't of terribly much practical concern in this case, but it's a good habit to get into: patch as narrowly as possible. In this particular case, getpwnamand getspwnamare the only functions that are used in checkers, so you could import them directly. - there are a couple of >80 character lines. - please put 2 blank lines between methods and 3 between classes. test_loop_through_functions' docstring is a bit vague. Phrase it a bit more positively, i.e. " UNIXPasswordDatabase.requestAvatarIdloops through each getpwnamfunction associated with it and returns a Deferredwhich fires with the result of the first one which returns a value rather than raising exception Foo." comment:15 Changed 8 years ago by Other than 3.5 and 3.6, the other comments are addressed in the branch. 3.5: I switched to importing the module because otherwise I'm importing a ton of individual things and that seemed more cluttered. If it's really a problem I can do it that way, though. 3.6: I'm not sure what this means; I'm only patching individual functions; I'm not sure how to patch more narrowly than that. Perhaps an example would help. I'm going to put this back up for review, but if those comments need to be addressed, feel free to bounce it back. comment:16 Changed 8 years ago by * import Crypto.Cipher.DES3 + Crypto except ImportError: Please don't do that. I know that pyflakes complain, but I'd rather see it fixed in pyflakes. * + credentials = SSHPrivateKey('test', 'ssh-rsa', + keydata.publicRSA_openssh, 'foo', + keys.Key.fromString( + keydata.privateRSA_openssh).sign('foo')) Indentation is broken. Maybe something like that: credentials = SSHPrivateKey( 'test', 'ssh-rsa', keydata.publicRSA_openssh, 'foo', keys.Key.fromString(keydata.privateRSA_openssh).sign('foo')) It's visible in a couple of place: the second indentation should always be 4 spaces, not 8. 
+ skip = 'cannot run without crypt module' Do you mean pwd here? + elif not getattr(os, 'O_NOCTTY', None): + skip = 'cannot run without os.O_NOCTTY' Please check for is None explicitely. + @param username: the username of the user to return the passwd database + information for. The second line should be indented (and the following as well). * + A checker which validates users out of the UNIX password databases, or a + databases a compatible format. "a database in a compatible format" ? This is a pretty nice enhancement. I suppose it depends on #3822 before being merged? Thanks! comment:17 Changed 8 years ago by comment:18 Changed 8 years ago by comment:19 Changed 8 years ago by Regarding my point 3.5, never mind. An example regarding point 3.6 is forthcoming. comment:20 Changed 8 years ago by Okay. So, regarding patching more narrowly - reading my comment right now, I can see it doesn't make much sense, so here's a much-expanded version. The tests are currently passing os, checkers.shadow (i.e. "spwd"), and pwd as arguments to "patch". This means that any module that does import os (or import spwd or import pwd) will be using the replaced functions for the duration of the test. As I said, the functions being patched in this case are unlikely to be used by trial, but, "unlikely" is the not same as "certainly not". Certainly other functions in the os module are. This is a pre-existing problem in this test module, so I don't think it's necessarily the responsibility of this branch to fix. However, as a matter of habit, patching should always be done so that it affects as few modules as possible, to minimize the effect of future features that trial might have. (Imagine a tool which let you run every test as a different effective user-ID. I don't know why you'd want such a thing... but you could have it.) 
Taking the specific example of test_getpwnamShadow, instead of this: self.patch(checkers.shadow, 'getspnam', lambda u: (0, 1, 2, 3)) self.patch(os, "geteuid", self.mockos.geteuid) self.patch(os, "getegid", self.mockos.getegid) # ... et cetera ... you could do this: self.patch(checkers, "os", self.mockos) class MockShadow(object): def getspnam(self, u): return (0, 1, 2, 3) self.patch(checkers, "shadow", MockShadow()) (of course you could put MockShadow at module level) The second example there only patches the names actually used in the checkers module, so other modules which import the same names are not affected. comment:21 Changed 8 years ago by I'm not sure what the status of this branch is. Was glyph's comment intended as a review? comment:22 Changed 8 years ago by comment:23 Changed 8 years ago by My comment was an explanation of a point in an earlier review, which z3p said he didn't understand in comment 15. comment:24 Changed 8 years ago by Hopefully z3p can respond to glyph's point 3.6 in comment 15 now. comment:25 Changed 7 years ago by comment:26 Changed 7 years ago by comment:27 Changed 7 years ago by comment:28 Changed 7 years ago by comment:29 Changed 6 years ago by comment:30 Changed 6 years ago by comment:31 Changed 6 years ago by comment:32 Changed 6 years ago by comment:33 Changed 6 years ago by Ready for review. Buildbox results will be at comment:34 Changed 6 years ago by Reviewing. comment:35 Changed 6 years ago by WELCOME BACK TO THE STAGE OF HISTORY - Looks like the API documentation builder found some problems; you should fix those. assertUnauthorizedLoginincorrectly describes its @rtypeas Deferred; its return type is actually NoneType. (Which is good, because new tests should not spin the reactor.) - This code still uses patchan awful lot. 
I can see you responded to my previous review, and the patches are all much less aggressive, but in the future you may want to consider attributes looked up on instances in more places than temporarily-replaced attributes of modules or classes. (Nothing to be done for this ticket, just be aware of that in the future.) Otherwise this looks pretty good, thanks: if you can fix points 1 and 2, I think it's ready to land! comment:36 Changed 6 years ago by ShadowDatabase in t.p.fakepwd also needs an @since marker. comment:37 Changed 6 years ago by Looks like the API documentation builder found some problems; you should fix those. Dang. I'd really like to teach pydoctor about the standard library. :/ comment:38 Changed 6 years ago by I've addressed point 2, and added the @since marker. I'm not sure what the right answer is for point 1. Should I untag the module in the docstring, or is there some other fix to make? comment:39 Changed 6 years ago by They're green, including the API builder (thanks to exarkun for the pointer). comment:40 Changed 6 years ago by The API changes look good, the rtype change looks good, @since looks good, and I think this is ready to merge! comment:41 Changed 6 years ago by (In [33231]) Merge branch spwd-3242-7: use python 2.5 'spwd' module instead of z3p secret 'shadow' module when available Author: exarkun, z3p Reviewer: glyph, therve, thijs, jesstess Fixes: #3242 Python 2.5 includes a standard library module to interact with the /etc/shadow password database. This updates Twisted to that module, rather than the module z3p hacked together many, many years ago. I would expect the PAM functionality to almost entirely obsolete any direct support of /etc/passwdor /etc/shadow. Is this something we actually need to continue to maintain?
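The design that landed, a UNIXPasswordDatabase that tries a sequence of getpwnam-style functions in turn, is easy to sketch in isolation. This is simplified and not Twisted's actual code (the real checker compares crypt()ed hashes, and the class and helper names below are mine):

```python
class FallbackPasswordDatabase:
    """Try each getpwnam-style function in turn; the first that returns a
    record (instead of raising KeyError) supplies the stored password.

    Mirrors the reviewed design: the function list is a constructor
    argument, so tests inject fakes instead of patching pwd/spwd globally.
    """
    def __init__(self, getpwnamFunctions):
        self.getpwnamFunctions = list(getpwnamFunctions)

    def checkPassword(self, username, password):
        for getpwnam in self.getpwnamFunctions:
            try:
                stored = getpwnam(username)
            except KeyError:
                continue
            # Twisted's real checker verifies crypt(password, salt) against
            # the stored hash; a plain comparison keeps the sketch short.
            return stored == password
        raise KeyError(username)
```

In production the default function list would wrap pwd.getpwnam and spwd.getspnam (the latter only works with sufficient privileges), which is exactly why injectable fakes make the class testable.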
http://twistedmatrix.com/trac/ticket/3242
We’ll start having a look at data binding in WPF. Briefly, data binding is the ability to bind some property of a control (such as the text in a TextBox) to the value of a data field in an object. (More generally, you can bind controls to other data sources such as databases, but that’s more advanced.) The idea, and the need for it, are best illustrated with an example. Suppose we want a little program that allows you to step through the integers and test each one to see if it’s a prime number or a perfect number. In case you’ve forgotten, a prime number is one whose only factors are 1 and the number itself, so 2, 3, 5, 7, 11, 13, 17 and so on are the first few prime numbers. A perfect number is one where the sum of all its factors (including 1 but excluding the number itself) equals the number itself. The smallest perfect number is 6, since its factors are 1, 2 and 3, and 1+2+3=6. The next perfect number is 28, and after that they get quite large, and very rare. The interface for the program might look like this: The number being considered is shown at the top. The Factors box shows the number of factors (excluding 1 and the number itself) that number has; thus a number with zero factors is prime. The ‘Sum of factors’ box shows the sum of the number’s factors (including 1, but excluding the number). A number whose ‘Sum of factors’ equals the number itself is therefore perfect. The program should paint the background of the ‘Factors’ box LightSalmon if the number is prime, and the background of the ‘Sum of factors’ box LightSeaGreen if the number is perfect. The Reset button resets Number to 2. The Dec button decrements Number by 1; Inc increments Number by 1, and Exit quits the program. Note that the Dec button also becomes disabled when Number is 2, since we don’t want to consider any numbers less than 2. 
If the only programming technique you knew about was event handling, you can see that you would need to add handlers for each button’s Click event, and these handlers would need to update Number and the values displayed in the various TextBoxes. You would also need to handle the TextChanged event in the Number TextBox so if the user typed in a new number it would update the other boxes. Quite a bit of back-and-forth coding would be needed to keep everything synchronized. Data binding allows the value displayed by a TextBox, or in fact pretty well any property of any control, to be bound to a data value, so that changing the data value automatically updates the property, and changing the property can also update the data value. Let’s start with the simplest case: binding the value displayed by a TextBox to a data field in an object. To work on this project, we’ll use Expression Blend (EB) for the design and binding work, and Visual Studio (VS) for writing the classes we need to support the data binding. I’ve built the user interface using EB and won’t go into the details here; you can look here for an introduction to using EB to build interfaces. If you want the XAML code, you can download the project files; see the end of this post for the link. We’ll need a class for storing the details of the number being considered, and for calculating the factors of that number. We’ll call that class PrimePerfect, and create it in VS. It’s a pretty standard class, except for one thing: it must be able to notify the controls when any of the relevant data fields are changed so they can be updated. In data binding, the way this is done is by writing a class that implements the INotifyPropertyChanged interface. The beginning of PrimePerfect therefore looks like this: using System.ComponentModel; ...... 
class PrimePerfect : INotifyPropertyChanged // in System.ComponentModel { public event PropertyChangedEventHandler PropertyChanged; protected void Notify(string propName) { if (this.PropertyChanged != null) { PropertyChanged(this, new PropertyChangedEventArgs(propName)); } } ...... Note the using statement at the top; it is required for notification code. Implementing INotifyPropertyChanged requires that we declare an event handler for PropertyChanged, as we’ve done on line 5. The Notify() method on line 6 takes a string parameter which is used to identify which property has changed, and the event handler is then called, passing along this parameter. The rest of the class consists of standard code, defining the properties and calculating the factors of the number. A typical property looks like this: int numFactors; public int NumFactors { get { return numFactors; } set { numFactors = value; Notify("NumFactors"); } } The property contains the usual ‘get’ and ‘set’ clauses, but note that in the ‘set’, a call to Notify() is made, passing along a string identifying which property has been set. This property string is used to bind the text field of a TextBox to the value of NumFactors. To see how this is done, we return to EB (make sure you save and build the project at this point, so the updates to the PrimePerfect class are available to EB). We’ll add the binding between the factorsTextBox (that displays the current value of NumFactors) and the NumFactors data field in PrimePerfect. In EB, select factorsTextBox. In ‘Common Properties’ in the right panel, find the Text entry. Click on the little square to the right of the entry, and select ‘Data binding’ from the context menu. In the dialog box that appears, click “Data field” at the top, then “+CLR Object”. This brings up another dialog that shows the available data sources for the binding. 
Find the namespace containing the PrimePerfect class (in the project here, I’ve called it PrimePerfectBinding), and select the PrimePerfect class, then OK. You should now see PrimePerfectDataSource appear in the Data sources column. Click on it, and PrimePerfect should appear in the Fields list on the right. Open it and select NumFactors from the list (if you can’t see it, try setting ‘Show’ to ‘All properties’ in the combo box at the bottom). Click OK and the binding is complete. If you now look at the XAML code for factorsTextBox, you should see Text=”{Binding NumFactors}” as one of the attributes. If you’ve been following faithfully, you’ll probably have noticed that although we’ve defined the PrimePerfect class and added notification code to it, and also added a binding to factorsTextBox, we haven’t actually created an instance of PrimePerfect yet. To remedy this, go to the C# code-behind file for MainWindow, and add a couple of lines to it as follows. PrimePerfect primePerfect; public MainWindow() { InitializeComponent(); primePerfect = new PrimePerfect(); baseGrid.DataContext = primePerfect; } You’ll see we create an instance of PrimePerfect (its default constructor sets Number = 2). However, we’ve also added a cryptic line in which a DataContext is set to this object. What’s a DataContext? The data context is an object that serves as the source of the data in a binding. When a control, such as factorsTextBox, was assigned a binding, all we did was specify the name of the data field (“NumFactors”) to which it is bound. We didn’t give the TextBox any information about where it should look for this data. The WPF data binding mechanism takes care of this by having a control with a data binding look for a data context. It will look first in the control itself. If a data context isn’t found there, it will look in the parent container of that control, and so on until it reaches the root container (usually the Window). 
What we’ve done here is attach a DataContext to the Grid that contains all the controls in the interface (‘baseGrid’ is the name of that Grid object). This means that all controls in that Grid that have data bindings will look in the same PrimePerfect object for their data. This completes the data binding process for factorsTextBox. To get the program to run at this point, we need to complete the code in PrimePerfect so that it calculates NumFactors and SumFactors for the current Number. This code doesn’t have any bearing on the data binding so we won’t list it here, but if you want to see how it’s done, just download the source code and have a look at it. If you run the program at this point, you should see the Factors TextBox display the current value of NumFactors, which is 0 (since 2 is the starting Number, and it’s prime). The Number and Sum of factors TextBoxes won’t show anything unless you go through the same process and bind them to the corresponding properties in PrimePerfect. There’s nothing new involved in doing this, so you can either try it yourself or just look at the source code to see how it’s done. So far, we haven’t added any event handlers to the buttons, so let’s finish with that. We can add handlers for the buttons by double clicking on the Click event for each button in EB. This generates event handlers in MainWindow.xaml.cs, and we can enter some code in them as follows. private void incButton_Click(object sender, RoutedEventArgs e) { primePerfect.Number++; } private void decButton_Click(object sender, RoutedEventArgs e) { primePerfect.Number--; } private void exitButton_Click(object sender, RoutedEventArgs e) { Application.Current.Shutdown(); } private void resetButton_Click(object sender, RoutedEventArgs e) { primePerfect.Number = 2; } There’s nothing surprising here. The three buttons that change the value of Number do so with a single line each. 
Now if you run the program, you’ll see that clicking on each button does change the value of Number, and also updates Factors and Sum of factors. Why? Because whenever we change the value of Number, we are calling the ‘set’ method of the Number property in PrimePerfect, and this calls code that recalculates NumFactors and SumFactors. All of this fires a Notify event for each property, so any control that is bound to that property gets updated automatically. At this stage, we have a program that allows us to step through the numbers and display NumFactors and SumFactors for each number. However, we still haven’t added the colour-coding of the textboxes when we find a prime or perfect number, and the Dec button will allow us to set Number lower than 2, which may have disastrous consequences if the number goes negative. We can fix all these problems using data binding as well, but that’s a topic for the next post. Code for this post is here.
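As an aside, the factor bookkeeping that the post defers to the downloadable source is plain arithmetic. A sketch of the logic, shown in Python for brevity even though the project itself is C# (in the real class, the Number setter would run this and then raise Notify for each property):

```python
def factor_stats(number):
    """Return (num_factors, sum_factors) as the post defines them:
    num_factors counts factors excluding 1 and the number itself;
    sum_factors includes 1 but excludes the number itself."""
    num_factors = 0
    sum_factors = 1 if number > 1 else 0
    # No proper factor of n other than n itself can exceed n // 2.
    for d in range(2, number // 2 + 1):
        if number % d == 0:
            num_factors += 1
            sum_factors += d
    return num_factors, sum_factors
```

A number is prime when num_factors is 0, and perfect when sum_factors equals the number, which is what drives the LightSalmon and LightSeaGreen highlighting.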
https://programming-pages.com/tag/property-change-event/
Traveling Salesman Problem Solution using Branch and Bound. By Vamshi Jandhyala in mathematics optimization November 8, 2021 Branch and bound (BB) Branch and bound is an algorithm design paradigm for discrete and combinatorial optimization problems, as well as mathematical optimization. A branch-and-bound algorithm consists of a systematic enumeration of candidate solutions by means of state space search: the set of candidate solutions is thought of as forming a rooted tree with the full set at the root. The algorithm explores branches of this tree, which represent subsets of the solution set. Before enumerating the candidate solutions of a branch, the branch is checked against upper and lower estimated bounds on the optimal solution, and is discarded if it cannot produce a better solution than the best one found so far by the algorithm. The algorithm depends on efficient estimation of the lower and upper bounds of regions/branches of the search space. If no bounds are available, the algorithm degenerates to an exhaustive search. Generic version of the BB algorithm The following is the skeleton of a generic branch and bound algorithm for minimizing an arbitrary objective function $f$. To obtain an actual algorithm from this, one requires a bounding function $\textbf{bound}$, that computes lower bounds of $f$ on nodes of the search tree, as well as a problem-specific branching rule. Using a heuristic, find a solution $x_h$ to the optimization problem. Store its value, $B = f(x_h)$. (If no heuristic is available, set $B$ to infinity.) $B$ will denote the best solution found so far, and will be used as an upper bound on candidate solutions. Initialize a queue to hold a partial solution with none of the variables of the problem assigned. Loop until the queue is empty: Take a node $N$ off the queue. If $N$ represents a single candidate solution $x$ and $f(x) < B$, then $x$ is the best solution so far. Record it and set $B ← f(x)$. 
Else, branch on $N$ to produce new nodes $N_i$. For each of these: If $bound(N_i) > B$, do nothing; since the lower bound on this node is greater than the upper bound of the problem, it will never lead to the optimal solution, and can be discarded. Else, store $N_i$ on the queue. Several different queue data structures can be used. This FIFO queue-based implementation yields a breadth-first search. A stack (LIFO queue) will yield a depth-first algorithm. A best-first branch and bound algorithm can be obtained by using a priority queue that sorts nodes on their lower bound. Examples of best-first search algorithms with this premise are Dijkstra’s algorithm and its descendant A* search. The depth-first variant is recommended when no good heuristic is available for producing an initial solution, because it quickly produces full solutions, and therefore upper bounds. Traveling Salesman Problem (TSP) The traveling salesman problem (also called the travelling salesperson problem or TSP) asks the following question: “Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?” It is an NP-hard problem in combinatorial optimization, important in theoretical computer science and operations research. Solution to TSP using BB For the TSP example below we will use a branch and bound best-first search algorithm to find the shortest tour. The cities involved are ${c_0,c_1,c_2,c_3,c_4,c_5}$ and the distance matrix is: $$ \begin{bmatrix} \infty & 1 & 4 & 5 & 4 & 5 \\ 1 & \infty & 6 & 3 & 1 & 6 \\ 4 & 6 & \infty & 1 & 1 & 5 \\ 5 & 3 & 1 & \infty & 2 & 1 \\ 4 & 1 & 1 & 2 & \infty & 5 \\ 5 & 6 & 5 & 1 & 5 & \infty \end{bmatrix} $$ One of the minimum length tours given the above distance matrix is $[c_0, c_1, c_4, c_2, c_3, c_5, c_0]$ which has a length of $10$. Bounding function We use the following lower bounding method to evaluate any partial tour. 
If the TSP instance has $n$ cities and $t = (c_1, c_2, \ldots, c_m)$ is a partial tour, then a lower bound for the length of a tour containing the given partial tour $t$ is given by

$$
LB(t) = d(c_1, c_2) + d(c_2, c_3) + \ldots + d(c_{m-1}, c_m) + n_{left} \times d_{min},
$$

where $n_{left}$ is the number of edges still needed to complete the tour and $d_{min}$ is the length of the smallest edge between any two cities in the instance.

Python code

Here is the Python code, which is a straightforward adaptation of the generic version of the BB algorithm to the TSP, using a priority queue and the bound function described above:

from dataclasses import dataclass, field
from math import inf
from queue import PriorityQueue


def length(path, dist_mat):
    # total length of the edges along a (partial) path
    l = 0
    for i in range(len(path) - 1):
        l += dist_mat[path[i]][path[i + 1]]
    return l


def bound(node, dist_mat):
    # LB(t): length of the partial tour, plus n_left edges of minimal length
    d_min = min(d for row in dist_mat for d in row)
    n_left = len(dist_mat) - node.level
    return length(node.path, dist_mat) + n_left * d_min


@dataclass(order=True)
class Node:
    bound: float
    level: int = field(compare=False)   # number of edges in the partial tour
    path: list = field(compare=False)   # cities visited so far


def tsp_bb(dist_mat):
    opt_tour = None
    n = len(dist_mat)
    pq = PriorityQueue()                # best-first: pops the lowest bound
    u = Node(0, 0, [0])
    u.bound = bound(u, dist_mat)
    pq.put(u)
    minlength = inf
    while not pq.empty():
        u = pq.get()
        if u.level == n - 1:
            # all cities visited: close the tour and check its length
            u.path.append(0)
            len_u = length(u.path, dist_mat)
            if len_u < minlength:
                minlength = len_u
                opt_tour = u.path
        else:
            for i in set(range(n)) - set(u.path):
                u_new = Node(0, u.level + 1, u.path + [i])
                u_new.bound = bound(u_new, dist_mat)
                if u_new.bound < minlength:   # prune: can't beat the best tour
                    pq.put(u_new)
    return opt_tour, length(opt_tour, dist_mat)


dist_mat = [
    [inf, 1, 4, 5, 4, 5],
    [1, inf, 6, 3, 1, 6],
    [4, 6, inf, 1, 1, 5],
    [5, 3, 1, inf, 2, 1],
    [4, 1, 1, 2, inf, 5],
    [5, 6, 5, 1, 5, inf],
]
print(tsp_bb(dist_mat))
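Abstracting away the TSP-specific parts, the same best-first search can be written as a reusable skeleton. This sketch is added here for illustration (it is not from the original post); the problem-specific callables and the toy subset-sum instance at the bottom are placeholders standing in for a real problem definition:

```python
import heapq
from math import inf

def branch_and_bound(root, branch, bound, is_complete, value):
    """Generic best-first branch and bound for minimization.

    root        -- the initial partial solution
    branch      -- node -> iterable of child nodes
    bound       -- node -> lower bound on any completion of node
    is_complete -- node -> True if node is a full candidate solution
    value       -- node -> objective value of a complete node
    """
    best, best_node = inf, None
    counter = 0                      # tie-breaker so heapq never compares nodes
    heap = [(bound(root), counter, root)]
    while heap:
        lb, _, node = heapq.heappop(heap)
        if lb >= best:               # bound can't beat the incumbent: prune
            continue
        if is_complete(node):
            v = value(node)
            if v < best:
                best, best_node = v, node
        else:
            for child in branch(node):
                if bound(child) < best:
                    counter += 1
                    heapq.heappush(heap, (bound(child), counter, child))
    return best_node, best

# Toy instance: choose a subset of [3, 1, 4] with the smallest sum >= 5.
# A node is (next item index, sum chosen so far); each item is skipped or taken.
items = [3, 1, 4]
root = (0, 0)
branch = lambda n: [(n[0] + 1, n[1]), (n[0] + 1, n[1] + items[n[0]])]
bound = lambda n: n[1]                        # the sum only grows: a lower bound
is_complete = lambda n: n[0] == len(items)
value = lambda n: n[1] if n[1] >= 5 else inf  # infeasible completions cost inf

print(branch_and_bound(root, branch, bound, is_complete, value))  # → ((3, 5), 5)
```

The `tsp_bb` function above is exactly this skeleton with `bound` set to the partial-tour lower bound and `branch` extending the path by each unvisited city.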
https://vamshij.com/blog/linear-optimization/branch-and-bound/
elf@florence.buici.com wrote:
> On Mon, Nov 19, 2001 at 08:41:41PM +0100, Wichert Akkerman wrote:
> > Previously elf@florence.buici.com wrote:
> > > In the 2.95.4 headers, the cstdio is a simple wrapper. What is the motive for using it?
> > It puts things properly in the std namespace.
> I'm not following you.
>
> // The -*- C++ -*- standard I/O header.
> // This file is part of the GNU ANSI C++ Library.
>
> #ifndef __CSTDIO__
> #define __CSTDIO__
> #include <stdio.h>
> #endif

The 2.95.4 headers are broken. Look at the v3 headers.

-Dave

--
"Some little people have music in them, but Fats, he was all music, and you know how big he was." -- James P. Johnson
https://lists.debian.org/debian-devel/2001/11/msg01295.html
In today's modern computing world, there is an ever-shrinking emphasis on efficient memory utilization. Because it's now possible to store huge amounts of data on tiny flash cards, most modern programming languages have long since abandoned micromanagement of memory in favor of class wrappers and pre-configured object models that make application development much more rapid. A great many of the programmers out there have never experienced a time when memory and processor power were at a premium. Anyone familiar with the C programming language understands the complexities of memory management and the need for processor utilization. But development practices often come full circle. As devices and systems get smaller and smaller, consumers demand more from less. A common cellular phone these days includes a wireless transmitter and receiver, a camera, an LCD screen, some amount of solid state memory, an operating system, and a full complement of applications. More complex phones include Bluetooth™ connectivity, touch screen displays, full internet connectivity, and an even wider suite of applications. These days, people want a desktop computer in the palm of their hand. And, they are able to get it, thanks to efficient code. With this in mind, I present Part 1 of my series relating to the "ELMO" principle. ELMO is an acronym that I came up with that stands for Extremely Low Memory Optimization, and aims to be a principle that everyday programmers can employ effectively and efficiently in all applications in order to trim the proverbial fat from their applications while not sacrificing readability or reuse. If you're interested, please read on. For starters, I think it's important to outline just how much memory each common data type will take up when created.
Many of today's programmers don't understand the relationship between, say, Booleans and bytes, simply because Microsoft® has such a brilliantly designed IDE that much of the old guesswork has been taken out of the equation. That doesn't make it any less important, though. So, here's a quick overview regarding types pertaining to the .NET Framework 2.0:

- 1 byte: Boolean, byte
- 2 bytes: UShort, Short, Int16, Char
- 4 bytes: Int32, int, float
- 8 bytes: Long, ULong, Int64, double
- 16 bytes: Decimal

It's pretty easy to test this out in the .NET environment using unmanaged code, like so:

static unsafe void Main(string[] args)
{
    Console.WriteLine("The size of a System.Boolean is: " + sizeof(Boolean) + " byte(s).");
    Console.WriteLine("The size of a System.Int16 is: " + sizeof(Int16) + " byte(s).");
    Console.WriteLine("The size of a System.Int32 is: " + sizeof(Int32) + " byte(s).");
    Console.WriteLine("The size of a System.Int64 is: " + sizeof(Int64) + " byte(s).");
    Console.WriteLine("The size of a char is: " + sizeof(char) + " byte(s).");
    Console.WriteLine("The size of a double is: " + sizeof(double) + " byte(s).");
    Console.WriteLine("The size of a decimal is: " + sizeof(decimal) + " byte(s).");
    Console.ReadLine();
}

The above code outputs the following:

The size of a System.Boolean is: 1 byte(s).
The size of a System.Int16 is: 2 byte(s).
The size of a System.Int32 is: 4 byte(s).
The size of a System.Int64 is: 8 byte(s).
The size of a char is: 2 byte(s).
The size of a double is: 8 byte(s).
The size of a decimal is: 16 byte(s).

So, what does this mean to us? Well, it can dictate quite a bit about how we use types within our code. For the first case scenario, I'm going to pretend that we have the need to create an object capable of holding several Boolean values for us. For illustration, I'm going to say we need something that can hold 8 true or false (Boolean) values.
Most application developers would start off with something that looks like the following implementation:

public class CBoolHolder
{
    private bool _bool1, _bool2, _bool3, _bool4,
                 _bool5, _bool6, _bool7, _bool8;

    public CBoolHolder() { }

    public bool Bool1
    {
        get { return _bool1; }
        set { _bool1 = value; }
    }

    // [ ..Additional property implementation ]
}

While there's nothing technically wrong with this approach, it uses way more code and memory than is needed! First, let's start off with something that can be slightly scary to some developers, and archaic to others: the structure! Yes, I said structure. It consumes very little memory, and can do all of the things we need it to do. Our first implementation might look like this:

public struct BoolHolder
{
    public bool bool1, bool2, bool3, bool4,
                bool5, bool6, bool7, bool8;
}

Now, we have all of the same functionality, but it lives inside a struct on the stack instead of in an object on the heap. Better already! If we run our sizeof console application again, we get the following information:

The size of BoolHolder is: 8 byte(s).

OK, we're down to only 8 bytes, or 64 bits. Trimming down the fat already! So, what else can we do to squish down the memory usage and save space? Well, instead of using boolean values within the structure, how about we just use one single Int32? How, you might ask? Well, we're going to use bitwise operations. I'm not going to get into defining what they are here (for more information, visit the link I just provided), and I'm going to assume you know how bitwise operations work.
If not, go and read up, then come back here to see how we can implement them into our structure like so:

public struct BitHolder
{
    int bits;

    public BitHolder(int holder)
    {
        bits = holder;
    }

    public bool this[int index]
    {
        get
        {
            // Check the bit at index and return true or false
            return (bits & (1 << index)) != 0;
        }
        set
        {
            if (value)
            {
                // Sets the bit at the given index to 1 (true)
                bits |= (1 << index);
            }
            else
            {
                // Sets the bit at the given index to 0 (false)
                bits &= ~(1 << index);
            }
        }
    }
}

What this has done for us is the following: it gives the struct an indexer, this[int index], so each bit can be read and written just like an array element — this[0] is the first flag, and so on.

In order to utilize our shiny new ELMO structure, we just do this:

BitHolder holder = new BitHolder(0);
holder[0] = false;
holder[1] = true;
holder[2] = false;
holder[3] = false;
holder[4] = true;
holder[5] = false;
holder[6] = false;
holder[7] = true;

Now, a single structure holds 8 boolean values (the int would look like 01001001, reading from bit 0 on the left), but we can easily use this to pull up to 8 true or false bits from a single data type. So, how does it hold up to our memory test? Let's take a look:

The size of BitHolder is: 4 byte(s).

We halved it! We're only using 32 measly bits because we only needed to use one single puny Int32 for storage of all of our values. Not bad for the amount of data we can actually store. We could, if we so desired, halve the size again by using a short or Int16 type — 16 bits is still more than enough to hold our 8 boolean values. What's more is that it's not so hard to get and set those values. We can do it iteratively, like so:

for (int x = 0; x < 8; x++)
    Console.WriteLine(holder[x].ToString());

Or, value by value like we set them above (holder[x] = false). So, now you have an ultra-light, low memory structure instead of a fat and bloated class, that uses even less code. And, of course, structures can be nullable, and can be passed as arguments to constructors and methods for use and re-use.
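The bit-packing trick itself is language-neutral. For comparison, here is the same idea sketched in Python — not from the article, and Python ints aren't fixed at 32 bits, so this is purely to illustrate the bitwise logic:

```python
class BitHolder:
    """Packs up to 8 boolean flags into a single integer."""

    def __init__(self, bits=0):
        self.bits = bits

    def __getitem__(self, index):
        # Check the bit at index and return True or False
        return (self.bits & (1 << index)) != 0

    def __setitem__(self, index, value):
        if value:
            self.bits |= (1 << index)    # set the bit at index to 1 (True)
        else:
            self.bits &= ~(1 << index)   # clear the bit at index to 0 (False)

holder = BitHolder()
for i in (1, 4, 7):
    holder[i] = True

print([holder[i] for i in range(8)])
# → [False, True, False, False, True, False, False, True]
print(bin(holder.bits))  # → 0b10010010
```

The `__getitem__`/`__setitem__` pair plays the role of the C# indexer: reads mask with `&`, sets OR in a single bit with `|`, and clears AND with the complement `~`.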
I'd like to thank Chandra Hundigam, Charles Wright, and Kris Jamsa for their articles that helped me illustrate my ideas behind ELMO.
http://www.codeproject.com/Articles/22324/The-ELMO-Principle-Part-1-Stack-and-Heap-Usage?fid=948578&df=90&mpp=10&noise=1&prof=True&sort=Position&view=Quick&spc=Relaxed
Allen Holub (threading@holub.com), Freelance author

10 Oct 2000

This article proposes extending Java with a $task keyword for objects whose methods execute asynchronously. Asynchronous methods are nothing new — Toolkit.getImage(), for example, returns immediately and does its real work on a background thread. The problem is what you have to go through to get that behavior yourself today: wrap each operation in a Runnable and hand it to a dispatcher — an "active object" — or to something like SwingUtilities.invokeLater(). For example:

interface Exception_handler
{
    void handle_exception( Throwable e );
}

class File_io_task
{
    Active_object dispatcher = new Active_object();
    final OutputStream file;
    final Exception_handler handler;

    File_io_task( String file_name, Exception_handler handler )
                                            throws IOException
    {
        file = new FileOutputStream( file_name );
        this.handler = handler;
    }

    public void write( final byte[] bytes )
    {
        // The following call asks the active-object dispatcher
        // to enqueue the Runnable object on its request
        // queue. A thread associated with the active object
        // dequeues the runnable objects and executes them
        // one at a time.

        dispatcher.dispatch
        (   new Runnable()
            {   public void run()
                {   try
                    {
                        byte[] copy = new byte[ bytes.length ];
                        System.arraycopy( bytes, 0, copy, 0, bytes.length );
                        file.write( copy );
                    }
                    catch( Throwable problem )
                    {   handler.handle_exception( problem );
                    }
                }
            }
        );
    }
}

All write requests are queued up on the active-object's input queue with a dispatch() call. Any exceptions that occur while processing the asynchronous message in the background are handled by the Exception_handler object that's passed into the File_io_task's constructor. You would write to the file like this:

File_io_task io = new File_io_task
(   "foo.txt",
    new Exception_handler()
    {   public void handle_exception( Throwable e )
        {   e.printStackTrace();
        }
    }
);
//...
io.write( some_bytes );

The main problem with this class-based approach is that it's way too complicated -- adding way too much clutter to the code to do a simple thing.
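The active-object pattern itself is language-neutral. Here is a minimal Python sketch of the same queue-and-worker idea — an illustration of the pattern only, not a translation of the Java API above:

```python
import queue
import threading

class ActiveObject:
    """Serializes submitted callables onto one background thread."""

    def __init__(self):
        self._requests = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def _run(self):
        while True:
            request = self._requests.get()
            if request is None:           # sentinel: shut down
                return
            try:
                request()
            except Exception as problem:  # report, don't kill the worker
                print("async error:", problem)

    def dispatch(self, request):
        # Enqueue a callable; it runs later, on the worker thread.
        self._requests.put(request)

    def close(self):
        # Drain everything already queued, then stop the worker.
        self._requests.put(None)
        self._worker.join()

results = []
dispatcher = ActiveObject()
for i in range(3):
    dispatcher.dispatch(lambda i=i: results.append(i))
dispatcher.close()
print(results)  # requests ran one at a time, in order: [0, 1, 2]
```

Because a single worker drains the queue, the requests never overlap — which is exactly why the Java File_io_task above needs no synchronization around the file.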
Introducing the $task and $asynchronous keywords to the language lets you rewrite the previous code as follows:

$task File_io $error{ $.printStackTrace(); }
{
    OutputStream file;

    File_io( String file_name ) throws IOException
    {
        file = new FileOutputStream( file_name );
    }

    $asynchronous public write( byte[] bytes )
    {
        file.write( bytes );
    }
}

A $task looks much like a class, but its $asynchronous methods execute in the background, one at a time, in the order received. The $error clause supplies a default handler for any exception thrown from an asynchronous method ($ stands for the thrown exception). A task could be shut down with some_task.close() — requests sent after that would throw a TaskClosedException — and some_task.join() would block until the task finishes the requests it has already accepted.

A task declared with a public $pooled(n) modifier would run its requests on a pool of n pre-created threads rather than serializing them onto one. For example:

$pooled(10) $task Client_handler
{
    PrintWriter log = new PrintWriter( System.out );

    public $asynchronous void handle( Socket connection_to_the_client )
    {
        log.println("writing");
        // client-handling code goes here. Every call to
        // handle() is executed on its own thread, but 10
        // threads are pre-created for this purpose. Additional
        // threads are created on an as-needed basis, but are
        // discarded when handle() returns.
    }
}

$task Socket_server
{
    ServerSocket server;

    Client_handler client_handlers = new Client_handler();

    public Socket_server( int port_number ) throws IOException
    {
        server = new ServerSocket( port_number );
    }

    public $asynchronous listen()
    {
        // This method is executed on its own thread.
        while( true )
        {
            client_handlers.handle( server.accept() );
        }
    }
}

//...
Socket_server server = new Socket_server( the_port_number );
server.listen();

The Socket_server's listen() method runs forever on its own thread; each accepted connection is passed to the Client_handler's handle() method, which executes on one of the pooled threads.

Improvements to synchronized

Though a $task eliminates the need for synchronization in many situations, not all multithreaded systems can be implemented solely in terms of tasks. Consequently, the existing threading model needs to be updated as well. The synchronized keyword has several flaws: you can acquire only one lock at a time (so acquiring several means nesting, which invites deadlock), and there is no way to put a time limit on the wait for a lock.
Here's the syntax that I'd like:

synchronized( x && y && z )     // acquire the locks on x, y, and z — all three, as one atomic operation
synchronized( x || y || z )     // acquire the lock on x, y, or z — whichever becomes available first
synchronized( (x && y) || z )   // acquire the locks on both x and y, or the lock on z
synchronized( ... )[1000]       // give up after 1,000 milliseconds, throwing a TimeoutException
synchronized[1000] f(){...}     // the same timeout, applied to the synchronized method f()

A TimeoutException is a RuntimeException derivative that is thrown if the wait times out, so you aren't forced to wrap every synchronized block in a try/catch. A timeout also gives you a fighting chance against deadlock at runtime — today your only (unreliable) recourse is interrupt(). Consider the classic scenario:

class Broken
{
    Object lock1 = new Object();
    Object lock2 = new Object();

    void a()
    {
        synchronized( lock1 )
        {
            synchronized( lock2 )
            {
                // do something
            }
        }
    }

    void b()
    {
        synchronized( lock2 )
        {
            synchronized( lock1 )
            {
                // do something
            }
        }
    }
}

If one thread calls a() and grabs lock1 while a second thread calls b() and grabs lock2, each then blocks forever waiting for the lock the other holds. The multiple-argument form takes the acquisition order out of the programmer's hands, so the deadlock simply can't happen:

//...
void a()
{
    synchronized( lock1 && lock2 )
    {
    }
}

void b()
{
    synchronized( lock2 && lock1 )
    {
    }
}

With timeouts, you could also break a deadlock by hand:

while( true )
{
    try
    {
        synchronized( some_lock )[10]
        {
            // do the work here.
            break;
        }
    }
    catch( TimeoutException e )
    {
        continue;
    }
}

If everybody who's waiting for the lock uses a slightly different timeout value, the deadlock will be broken and one of the threads will get to run. I propose the following syntax to represent the preceding code:

synchronized( some_lock )[]
{
    // do the work here.
}

The synchronized statement waits forever, but gives up the lock occasionally to break a potential deadlock. Ideally, the timeout used for each iteration would differ from the previous iteration by some random amount.

Improvements to wait() and notify()

The wait()/notify() system also has problems: a wait() that times out looks exactly like one that was notified, and you can wait on only one object at a time. The timeout-detection problem is easily solved by redefining wait() to return a boolean (rather than void). A true return value would indicate a normal return, false would indicate a timeout.

The single-object limitation is nastier. Consider this Stack, which guards its internal LinkedList with a nested synchronized(list) block:

class Stack
{
    LinkedList list = new LinkedList();

    public synchronized void push(Object x)
    {
        synchronized(list)
        {
            list.addLast( x );
            notify();
        }
    }

    public synchronized Object pop()
    {
        synchronized(list)
        {
            if( list.size() <= 0 )
                wait();
            return list.removeLast();
        }
    }
}

A pop() on an empty Stack waits on the Stack's own monitor — releasing that lock — but it keeps holding the lock on list. A thread that then calls push() gets past the synchronized method qualifier but blocks at synchronized(list), so the notify() that would wake the waiting thread can never execute: a nested-monitor lockout. The cure is to be able to wait on several objects at once. I'd like to be able to say:

(a && (b || c)).wait();

where $a$, $b$, and $c$ are any Object.
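Interestingly, later threading libraries adopted both of these proposals. Python's threading module, for example, gives Lock.acquire a timeout and has Condition.wait return a boolean that is False on timeout — shown here purely to illustrate the semantics being argued for:

```python
import threading

lock = threading.Lock()
lock.acquire()                       # hold the lock so the next attempt must wait

# The synchronized(x)[timeout] idea: a bounded wait for a lock.
# threading.Lock is not reentrant, so this second acquire cannot succeed.
got_it = lock.acquire(timeout=0.1)
print(got_it)                        # → False: timed out instead of deadlocking
lock.release()

# The "wait() returns boolean" idea: Condition.wait() reports a timeout
cond = threading.Condition()
with cond:
    normal_return = cond.wait(timeout=0.1)  # nobody ever notifies us
print(normal_return)                 # → False: the wait timed out
```

The only difference from the proposal is the failure signal: Python returns False where the article asks for a TimeoutException, but either way the caller can tell a timeout apart from a normal return.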
The Thread class could use work too: it conflates the thing that runs (the Runnable) with the thread that runs it. For example, the current syntax of:

class My_thread implements Runnable
{   public void run(){ /*...*/ }
}

new Thread( new My_thread() );

could be extended so that several Runnable objects share one thread:

new Thread( new My_runnable_object(), new My_other_runnable_object() );

The existing idiom of overriding Thread and implementing run() should still work, but it should map to a single green thread bound to a lightweight process. (The default run() method in the Thread class would effectively create a second Runnable object internally.)

Threads also need a better way to talk to each other than today's PipedInputStream/PipedOutputStream pairs. I'd like a wait_for_start() to block until a newly created thread is actually running, and built-in inter-thread messaging — $send(Object o) and Object = $receive() — with a timeout variant, $send(Object o, long timeout).

Finally, I'd like language-level support for reader/writer locks:

static Object global_resource;
//...
public void a()
{
    $reading( global_resource )
    {
        // While in this block, other threads requesting read
        // access to global_resource will get it, but threads
        // requesting write access will block.
    }
}

public void b()
{
    $writing( global_resource )
    {
        // Blocks until all ongoing read or write operations on
        // global_resource are complete. No read or write
        // operation on global_resource can be initiated while
        // within this block.
    }
}

public $reading void c()
{
    // just like $reading(this)...
}

public $writing void d()
{
    // just like $writing(this)...
}

If both reading and writing threads are waiting for access, the reading threads will get access first by default, but this default can be changed by modifying the class definition with the $writer_priority attribute. For example:

$writer_priority class IO
{
    $writing write( byte[] data )
    {
        //...
    }

    $reading byte[] read( )
    {
        //...
    }
}

Another thorny problem is a thread started from within a constructor, which can access the object before the constructor has finished initializing it:

class Broken
{
    private long x;

    Broken()
    {
        new Thread()
        {   public void run()
            {   x = -1;
            }
        }.start();

        x = 0;
    }
}

One solution is to forbid access to an object created with new until its constructor returns. That is, the start() request must be deferred until the constructor returns. Alternatively, the Java programming language could permit the synchronization of constructors.
In other words, the following code (which is currently illegal) would work as expected:

class Illegal
{
    private long x;

    synchronized Illegal()
    {
        new Thread()
        {   public void run()
            {   synchronized( Illegal.this )
                {   x = -1;
                }
            }
        }.start();

        x = 0;
    }
}

Access privileges could use some attention as well. The default package access should require an explicit package keyword rather than being the silent default, and it should be possible to grant access to a specific class. A declaration like:

public(Fred) void some_method()
{
}

would make some_method() accessible only to the class Fred, and the grant could name several classes:

public(Fred, Wilma) void some_method()
{
}

Require all field definitions to be private unless they reference truly immutable objects or define static final primitive types. Directly accessing the fields of a class violates two basic principles of OO design: abstraction and encapsulation. From the threading perspective, allowing direct access to fields just makes it easier to inadvertently have unsynchronized access to a field.

Add the $property keyword. Objects tagged in this way are accessible to a "bean box" application that is using the introspection APIs defined in the Class class, but otherwise work identically to private. The $property attribute should be applicable to both fields and methods so that existing JavaBean getter/setter methods could be easily defined as properties.

Immutability deserves real support too, beyond what final can give you — say, a class-level $immutable attribute:

$immutable public class Fred
{
    // all fields in this class must be final, and if the
    // field is a reference, all fields in the referenced
    // class must be final as well (recursively).

    static int x = 0;   // use of `final` is optional when $immutable
                        // is present.
}

Given the $immutable tag, the use of final in the field definitions could be optional.

Finally, a bug in the Java compiler makes it impossible to reliably create immutable objects when inner classes are on the scene. When a class has nontrivial inner classes (as most of mine do), the compiler often incorrectly prints the error message: "Blank final variable 'name' may not have been initialized.
It must be assigned a value in an initializer, or in every constructor."

A related inconsistency: a synchronized static method and a synchronized instance method do not acquire the same lock — the static method locks the Class object, the instance method locks the instance — so the following class is not thread safe even though every method that touches x is synchronized:

class Broken
{
    static long x;

    synchronized static void f()
    {
        x = 0;
    }

    synchronized void g()
    {
        x = -1;
    }
}

The compiler should either flag the static access in g() as an error, forcing it through a synchronized static accessor:

class Broken
{
    static long x;

    synchronized private static void accessor( long value )
    {
        x = value;
    }

    synchronized static void f()
    {
        x = 0;
    }

    synchronized void g()
    {
        accessor( -1 );
    }
}

Or the compiler should require the use of a reader/writer lock:

class Broken
{
    static long x;

    synchronized static void f()
    {
        $writing(x){ x = 0 };
    }

    synchronized void g()
    {
        $writing(x){ x = -1 };
    }
}

Blocking calls in general need attention as well — stop() (perhaps applied to a whole ThreadGroup), yield(), the isStopped()/stopped() and isInterrupted()/interrupted() pairs, suspend()/resume(), and sleep() all interact badly with held locks and with each other. Moreover, your program must be able to time out of a blocking I/O operation. All objects on which blocking operations can occur (such as InputStream objects) should support a method like this:

InputStream s = ...;
s.set_timeout( 1000 );

This is equivalent to the Socket class's setSoTimeout(time) method. Similarly, you should be able to pass a timeout into the blocking call as an argument.

The ThreadGroup class

ThreadGroup should implement all the methods of Thread that can change a thread's state. I particularly want it to implement join() so that I can wait for all threads in the group to terminate.

About the author

Allen Holub has been working in the computer industry since 1979. He is widely published in magazines (Dr. Dobb's Journal, Programmers Journal, Byte, MSJ, among others), and he writes the "Java Toolbox" column for the online magazine JavaWorld. Allen has eight books to his credit, the latest of which covers the traps and pitfalls of Java threading (Taming Java Threads). He's been designing and building object-oriented software for longer than he cares to remember. After eight years as a C++ programmer, Allen abandoned C++ for Java in early 1996. He now looks at C++ as a bad dream, the memory of which is mercifully fading.
He's been teaching programming (first C, then C++ and MFC, now OO-Design and Java) both on his own and for the University of California Berkeley Extension since 1982. Allen offers both public classes and in-house training in Java and object-oriented design topics. He also does object-oriented design consulting and contract Java programming. Get information, and contact Allen, via his Web site.
http://www.ibm.com/developerworks/library/j-king.html
Complete Roguelike Tutorial, using python+libtcod, part 13

Adventure gear

Simplifying

Now that you can explore a large dungeon, I'm sure you can't help but notice a few things missing. Where are all the swords, armor, enchanted boots and other assorted junk? Sure, we have some cool items, but they can only be used once. We can't really handle weapons and armor in the current system. How do we solve this? First, we can add a new component to take care of the new functionality. An item with the Equipment component can be equipped or taken off, and while equipped will give the player some bonuses (more power, defense, etc). Sounds good!

Now we must plan ahead how this data will be stored in our game. It's time for a small detour into game architecture! You see, the way you store your data can have a big impact on how easy it will be to handle and debug. There are two types. A brittle data structure can be easily put in an inconsistent state. A strong data structure cannot; it always makes sense, no matter how you change it.

For example, you can keep a list of equipped items. To equip, you move an item to the "equipped" list. There are several inconsistent states: what does it mean for an item to be on both lists? What does it mean to be on no list? (Python will just delete an object if it's not referenced anywhere.) This is brittle. A stronger data structure is to have a "is_equipped" property on each item, and if it's True the item is equipped. This is much harder to break, because in any case the item is either equipped or unequipped; there are no weird states. We will use the same idea for bonuses, which you'll see later on.

In a nutshell, try to store data in a way that allows a minimum of inconsistent states. Duplicated data or data that requires perfect coordination to make sense is usually a bad sign. This is more of an art than a science, though, and there is no absolute answer. So after the tutorial it will all be up to you!
Basic equipment Ok, it's time to code that up. We'll have an Equipment component that knows whether it's equipped. It will also have an associated slot, like 'hand' for weapons or 'head' for helmets. The slot will simply be indicated by a string. class Equipment: #an object that can be equipped, yielding bonuses. automatically adds the Item component. def __init__(self, slot): self.slot = slot self.is_equipped = False def toggle_equip(self): #toggle equip/dequip status if self.is_equipped: self.dequip() else: self.equip() def equip(self): #equip object and show a message about it self.is_equipped = True message('Equipped ' + self.owner.name + ' on ' + self.slot + '.', libtcod.light_green) def dequip(self): #dequip object and show a message about it if not self.is_equipped: return self.is_equipped = False message('Dequipped ' + self.owner.name + ' from ' + self.slot + '.', libtcod.light_yellow) Nothing fancy there! To allow objects to have this component, add equipment=None to the parameters of the Object class's __init__ function, and the usual component initialization code: self.equipment = equipment if self.equipment: #let the Equipment component know who owns it self.equipment.owner = self At that point, we can also create an Item component automatically, because a piece of Equipment is always an Item (can be picked up and used). #there must be an Item component for the Equipment component to work properly self.item = Item() self.item.owner = self When the player goes to the inventory screen and tries to use a piece of equipment, it will be equipped or dequipped. So, in the use function of the Item class, add to the beginning: #special case: if the object has the Equipment component, the "use" action is to equip/dequip if self.owner.equipment: self.owner.equipment.toggle_equip() return That's the basic functionality! 
To test it quickly, we can let a sword appear in the dungeon, by adding a new item choice in place_objects: elif choice == 'sword': #create a sword equipment_component = Equipment(slot='right hand') item = Object(x, y, '/', 'sword', libtcod.sky, equipment=equipment_component) And item_chances['sword'] = 25 after the other item's chances, at the top of that function. Ready to test! Equipping the sword doesn't do much though. You'll also notice you can equip 2 swords at once (how cool is that?). But 3 swords or more is a bit unrealistic, so we'll take care of that. Equipment polish We don't want to let the player equip more than one item in the same slot. Fair enough! Let's make a function to check if any item occupies a slot, and return it while we're at it: def get_equipped_in_slot(slot): #returns the equipment in a slot, or None if it's empty for obj in inventory: if obj.equipment and obj.equipment.slot == slot and obj.equipment.is_equipped: return obj.equipment return None We can use it to prevent a second item in the same slot, or better yet: dequip the old item to make room for the new one. In the equip function: #if the slot is already being used, dequip whatever is there first old_equipment = get_equipped_in_slot(self.slot) if old_equipment is not None: old_equipment.dequip() Another nice behavior is to automatically equip picked up items, if their slots are available. In the pick_up function, at the end: #special case: automatically equip, if the corresponding equipment slot is unused equipment = self.owner.equipment if equipment and get_equipped_in_slot(equipment.slot) is None: equipment.equip() It is necessary, though, that dropped items be dequipped; simply add to the drop function: #special case: if the object has the Equipment component, dequip it before dropping if self.owner.equipment: self.owner.equipment.dequip() Finally, another bit of polishing. We'd like to see in the inventory which items are equipped! 
So in inventory_menu, this information should be shown next to the item names. Replace the line options = [item.name for item in inventory] with: options = [] for item in inventory: text = item.name #show additional information, in case it's equipped if item.equipment and item.equipment.is_equipped: text = text + ' (on ' + item.equipment.slot + ')' options.append(text) That's it. You can check the equipment's state in the inventory screen, and it changes correctly as you pick up, drop, equip and dequip various items! Bonus round The last bit is to make equipment useful, by letting it change the player's stats when equipped. We can do this in different ways, but as I mentioned in the beginning, it's better to avoid brittle data structures. For example, you could simply add the bonus value to a stat (say, attack power) when the item is equipped, and subtract it when dequipped. This is brittle because any tiny mistake will permanently change the player's stats! A more reliable approach is to calculate on-the-fly the player's stats when they are needed, based on the original stat and any bonuses. This way there's no room for inconsistencies -- the stat is truly based on whatever bonuses apply at the moment. But how can we change a stored variable to a dynamic value? Won't this mean we have to change all of the code that uses those stats? Not really, because of a neat Python feature! You can define a read-only property that is calculated on-the-fly very easily: @property def power(self): return self.base_power + bonus The bonus will be defined later. So now accessing player.power will call this function instead of getting the value of a power variable. We still need a variable to hold the player's power not counting any bonuses, though, and that's called base_power. 
This means that, in the Fighter class's __init__ function, we don't initialize power directly, but rather base_power: self.base_power = power More generally, you can get the value of power normally, but you only change it through base_power. So, you must also make this change in check_level_up. All that's left is to calculate the bonus! An Equipment component will remember what's its power bonus, by passing it as a new argument at initialization. I will also go ahead and define the bonuses for all the other stats: def __init__(self, slot, power_bonus=0, defense_bonus=0, max_hp_bonus=0): self.power_bonus = power_bonus self.defense_bonus = defense_bonus self.max_hp_bonus = max_hp_bonus The power property can now just iterate through all equipped items, and sum up their bonuses to get the needed total: bonus = sum(equipment.power_bonus for equipment in get_all_equipped(self.owner)) Finally, we need a helper function that returns the list of equipped items. For the player, we just go through the inventory. For monsters, we just return an empty list since they don't really have any. Feel free to change this if you want to let monsters equip items as well! def get_all_equipped(obj): #returns a list of equipped items if obj == player: equipped_list = [] for item in inventory: if item.equipment and item.equipment.is_equipped: equipped_list.append(item.equipment) return equipped_list else: return [] #other objects have no equipment That's it! Attack power is now a dynamic property. That wasn't so hard! Remember that you can change the bonus calculation easily, so if there are other modifiers, permanent spells and other conditions, it's only a small change away. 
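Stripped of the game's globals, the store-the-base-and-compute-the-total pattern boils down to a few lines. This self-contained sketch (names shortened from the tutorial's, with the equipment list passed in directly instead of read from the global inventory) shows why the stat can never drift out of sync:

```python
class Equipment:
    def __init__(self, slot, power_bonus=0):
        self.slot = slot
        self.power_bonus = power_bonus
        self.is_equipped = False

class Fighter:
    def __init__(self, power, equipped=None):
        self.base_power = power          # only the base stat is ever stored...
        self.equipped = equipped or []

    @property
    def power(self):
        # ...the total is recomputed from current bonuses on every access,
        # so there is no cached value to forget to update
        bonus = sum(e.power_bonus for e in self.equipped if e.is_equipped)
        return self.base_power + bonus

sword = Equipment('right hand', power_bonus=3)
hero = Fighter(power=2, equipped=[sword])
print(hero.power)        # → 2: the sword isn't equipped yet
sword.is_equipped = True
print(hero.power)        # → 5: the bonus appears with no extra bookkeeping
```

Toggling `is_equipped` is the only state change, and the property picks it up automatically — exactly the "strong data structure" idea from the start of this part.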
For the sake of completeness, here are the properties for the other stats: @property def defense(self): #return actual defense, by summing up the bonuses from all equipped items bonus = sum(equipment.defense_bonus for equipment in get_all_equipped(self.owner)) return self.base_defense + bonus @property def max_hp(self): #return actual max_hp, by summing up the bonuses from all equipped items bonus = sum(equipment.max_hp_bonus for equipment in get_all_equipped(self.owner)) return self.base_max_hp + bonus Don't forget to make the appropriate changes in check_level_up as well. Now we can define some items with hefty bonuses! In place_objects, I changed the sword to have power_bonus=3, and added a shield for good measure: elif choice == 'shield': #create a shield equipment_component = Equipment(slot='left hand', defense_bonus=1) item = Object(x, y, '[', 'shield', libtcod.darker_orange, equipment=equipment_component) You can get really creative with equipment, of course. I'll just modify the chances to make them appear at level 4 and level 8, respectively: item_chances['sword'] = from_dungeon_level([[5, 4]]) item_chances['shield'] = from_dungeon_level([[15, 8]]) Now, since we don't want the player to enter the dungeon unprepared, you can give him or her some starting equipment at the end of new_game: #initial equipment: a dagger equipment_component = Equipment(slot='right hand', power_bonus=2) obj = Object(0, 0, '-', 'dagger', libtcod.sky, equipment=equipment_component) inventory.append(obj) equipment_component.equip() obj.always_visible = True Not bad! I also decreased the player's starting power to 2; we don't want to be too generous. It's a dungeon of doom after all! I showed you read-only properties, which are a breeze to use. If you're wondering about writable properties, check out the Python docs on the subject. We managed to create a neat bonus system, and it's generic enough that you can add new stats and ways to change them very easily. 
There's also equipment and slots that you can choose at will. Now you can create all sorts of useful plunder for the player to discover! The whole code is available here.
http://roguebasin.com/index.php/Complete_Roguelike_Tutorial,_using_python%2Blibtcod,_part_13
A Cute Kids Toy That Speaks With Arduino and Unity :)

Introduction

Please watch the video for a demonstration. This project is purely out of boredom; I was experimenting with a flex sensor when the idea came to me. Originally it was meant to use a flex sensor, but on second thought the same result can be achieved easily and more efficiently using an LDR. The main idea here is that we will constantly monitor the LDR value and check if it exceeds a certain limit, which indicates the toy has closed its mouth, in which case a random sound is played through Unity. I think this can provide a couple of laughs and giggles from your kids or little siblings. Have fun :)

Step 1: Let's Collect Our Components

1- a hand puppet
2- LDR
3- 10k ohm resistor
4- a little something to separate the legs of the LDR inside the puppet to prevent a short circuit; I used Lego
5- a bunch of audio sounds of your choice that you want your puppet to say. A great place to look is freesound.org

*I don't understand Japanese, but I chose it as it does sound as if it would come out of a toy :)

Step 2: Let's Build Our Toy

1- insert the two legs of the LDR inside the puppet's mouth; make sure they are reachable from the inside, as we need to connect them to wires
2- extend the wires that will be connecting your Arduino and LDR (4 male-female wires)
3- place your separator on the LDR legs from the inside
4- attach your wires to the legs and tape them so they won't fall

Step 3: Let's Build Our Circuit

Please see the picture included.

Step 4: Arduino Code

In Arduino we will constantly be monitoring the LDR value, with a loop delay of 100 ms, and check if it exceeds a certain limit, which indicates the toy has closed its mouth; if so, we send the value "1" to Unity.

void setup() {
  // put your setup code here, to run once:
  Serial.begin(9600);
}

void loop() {
  // put your main code here, to run repeatedly:
  int data = analogRead(A0);
  if (data > 950) {
    Serial.write(1);
  }
  delay(100);
}

Step 5: Unity Code

Please make sure that your Unity project allows serial communication: Edit > Project Settings > Player, scroll down to Optimization, and change API Compatibility Level to .NET 2.0.

1- import your sound files into Unity (drag and drop)
2- create an AudioSource in your scene
3- create an empty GameObject (you can call it Manager) and attach the script to it
4- add the sound files to our publicly defined array named clips (drag and drop)

The main idea is to have an array of sound clips to hold our sound files, and to constantly check readings from Arduino; if at any given time the value is "1", we randomly pick a sound file using Random.Range and play it.

using UnityEngine;
using System.Collections;
using System.IO.Ports;

public class Audio : MonoBehaviour {

    public AudioClip[] clips;
    public AudioSource player;
    private SerialPort port = new SerialPort(@"\\.\" + "COM11", 9600);

    // Use this for initialization
    void Start() {
        port.Open();
        port.ReadTimeout = 25;
    }

    // Update is called once per frame
    void Update() {
        if (port.IsOpen) {
            try {
                int value = port.ReadByte();
                Debug.Log(value);
                if (value == 1) {
                    int random = Random.Range(0, clips.Length);
                    if (!player.isPlaying) {
                        player.clip = clips[random];
                        player.Play();
                    }
                }
            } catch (System.Exception) {
            }
        }
    }
}
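The whole trigger amounts to one threshold comparison plus a random pick, so you can sanity-check it off-device. This Python sketch mirrors that logic; the 950 limit comes from the Arduino code above, while the function names and clip names are made up for illustration:

```python
import random

THRESHOLD = 950  # same limit as in the Arduino sketch; tune it for your LDR

def mouth_closed(ldr_value):
    # True when the reading indicates the puppet's mouth has closed
    return ldr_value > THRESHOLD

def pick_clip(clips, ldr_value):
    # On a trigger, return a random clip name; otherwise return None
    if mouth_closed(ldr_value) and clips:
        return random.choice(clips)
    return None

print(mouth_closed(400))                            # False - mouth open
print(pick_clip(["clip1.wav", "clip2.wav"], 980))   # one of the two clips
```

If the toy ever chatters, this is the one number to tune: raise or lower THRESHOLD until open-mouth readings sit clearly below it.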
http://www.instructables.com/id/A-Cute-Kids-Toy-That-Speaks-With-Arduino-and-Unity/
Creating a Generator

One of the most common types of components to create in Cocoon is a Generator. Whether you realize it or not, every time you write an XSP page, you are creating a Generator. XSP pages do a number of things for you, but there is a considerable amount of overhead involved with compiling and debugging. After all, when your XSP page isn't rendering like you expect and the XML is well-formed, where do you turn? You can examine the Java code that is generated from the XSP, but that can have its own set of challenges. I had a perfectly valid Java source file generated for Java 5's javac program, but it wouldn't compile in Cocoon. Why? The default compiler included with Cocoon doesn't support Java 5.

Sometimes our needs are so simple and so narrowly defined that it would be much easier for us to create our Generator right in our own IDE, using all of the creature features that are included. Eclipse and IDEA are both wonderfully rich environments in which to develop Java code. Generators are much simpler beasts than your transformers and your serializers, so that makes creating them directly even more enticing. Cocoon does have some wonderful generators like the JXTemplateGenerator and others, but we are going to delve into the world of creating our own.

In the eyes of the Sitemap, all XML pipelines start with the Generator. By definition, a Generator is the first XMLProducer in the pipeline. It is the source of all SAX events that the pipeline handles. It is also a SitemapModelComponent, so it must follow those contracts as well. Lastly, it can be a CacheableProcessingComponent, satisfying those contracts as well. As usual, the order of contracts honored starts with the SitemapModelComponent, then the CacheableProcessingComponent contracts, and lastly the XMLProducer contracts. If the results of the Generator can be cached, Cocoon will attempt to use the cache and bypass the Generator altogether if possible.
The caching mechanism can take the place of the Generator because it can recreate a SAX stream on demand as an XMLProducer. In the big scheme of things, the Sitemap will call setup() on the Generator and then assemble the XML pipeline. After Cocoon assembles the pipeline, chaining all XMLProducers to XMLConsumers (remember that an XMLPipe is both), Cocoon will call the generate() method on the Generator. That is the signal to start producing results, so send out the SAX events and have fun.

We are going to keep things easy for our generator. As usual, we will make the results cacheable, because that is just good policy. In the spirit of trying to be useful, as well as trying to keep things manageably simple, let's create a BeanGenerator. The XML generated will be very simple, utilizing the JavaBean contracts and creating an element for each property, embedding the value of the property within that element. There won't be any attributes, and the namespace will match the JavaBean's fully qualified class name. If a property has something other than a primitive type, a String, or an equivalent (like Integer and Boolean) as its value, that object will be treated as a bean.

To realize these requirements we have to find a bean, and then "render it". In this case the XML rendering of the bean will be recursive. The general approach will use Java's reflection mechanisms and only worry about properties. There will be a certain amount of risk involved with a complex bean that includes references to other beans, in that if you have two beans referring to each other you will have an infinite loop. Detecting these is outside the scope of what we are trying to do, and that is generally bad design anyway, so we won't worry too much about it. Yes, there is overhead with the beans Introspector, but we are writing for the general case.
To set up our generator, we need to use a serializer that shows us what the results are, so we will set up our sitemap to use our generator like this:

<map:match ...>
  <map:generate .../>
  <map:serialize .../>
</map:match>

Even though it is generally bad design to have a static anything in a Cocoon application, we are going to use a helper class called "BeanPool" with the get() and put() methods that are familiar from HashMap. So that it is easier for you to change the behavior of the BeanGenerator, we will provide a nice protected method called findBean(), which is meant to be overridden with something more robust.

The Skeleton

Our skeleton code will look like this (a few imports were lost in the original listing; the Cocoon ones for AbstractGenerator, SourceResolver, the exceptions, and CacheableProcessingComponent are restored here):

import org.apache.avalon.framework.parameters.Parameters;
import org.apache.cocoon.ProcessingException;
import org.apache.cocoon.ResourceNotFoundException;
import org.apache.cocoon.caching.CacheableProcessingComponent;
import org.apache.cocoon.environment.SourceResolver;
import org.apache.cocoon.generation.AbstractGenerator;
import org.apache.excalibur.source.SourceValidity;
import org.apache.excalibur.source.NOPValidity;
import org.xml.sax.Attributes;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.AttributesImpl;

import java.beans.Introspector;
import java.beans.BeanInfo;
import java.beans.PropertyDescriptor;
import java.lang.reflect.Method;
import java.io.IOException;
import java.io.Serializable;
import java.util.Map;

public class BeanGenerator extends AbstractGenerator
        implements CacheableProcessingComponent {

    private static final Attributes ATTR = new AttributesImpl();
    private Object m_bean;

    protected Object findBean(String key) {
        // replace this with something more robust.
        return BeanPool.get(key);
    }

    public void setup(SourceResolver sourceResolver, Map model,
                      String src, Parameters params)
            throws IOException, ProcessingException, SAXException {
        // ... skip setup code for now
    }

    public void generate()
            throws IOException, SAXException, ProcessingException {
        // ... skip generate code for now
    }

    // ... skip other methods for later.
}

As you can see, we have our simplified findBean() method, which can be replaced with something more robust later.
All you need to do to populate the BeanPool is to call the BeanPool.put(String key, Object bean) method from somewhere else.

Setting up to Generate

Before we can generate anything, let's start with the setup() code:

super.setup( sourceResolver, model, src, params );
m_bean = findBean(src);

if ( null == m_bean ) {
    throw new ResourceNotFoundException(String.format("Could not find bean: %s", source));
}

What we did is call the setup method from AbstractGenerator, which populates some class fields for us (like the source field), then we tried to find the bean using the key provided. If the bean is null, then we follow the principle of least surprise and throw a ResourceNotFoundException, so the Sitemap knows that we simply don't have the bean available instead of generating some lame 500 server error. That's all we have to do to set up this particular Generator.

Oh, and since we do have the m_bean field populated, we do want to clean up after ourselves properly. Let's add the recycle() method from Recyclable so that we don't give an old result when a bean can't be found:

@Override
public void recycle() {
    super.recycle();
    m_bean = null;
}

The Caching Clues

We are going to make the caching for the BeanGenerator really simple. Ideally we would have something that listens for changes and invalidates the SourceValidity if there is a change to the bean we are rendering. Unfortunately that is outside our scope, so we will set up the key so that it never expires unless it is done manually. Since we are using the source property from the Sitemap as our key, let's just use that as our cache key:

public Serializable getKey() {
    return source;
}

And lastly, our brain-dead validity implementation:

public SourceValidity getValidity() {
    return NOPValidity.SHARED_INSTANCE;
}

Using this approach is a bit naive, in the sense that it is very possible that the beans will have changed.
We could use an ExpiresValidity instead to make things a bit more resilient to change, but that is an exercise for you, dear reader.

Generating Output

Now that we have our bean, we are ready to generate our output. The AbstractXMLProducer base class (AbstractGenerator inherits from it) stores the target in a class field named contentHandler. Simple enough. We'll start by implementing the generate() method, but we already know we need to handle beans differently than the standard String and primitive types, so let's stub out the method we will use for recursive serialization. Here we go:

public void generate()
        throws IOException, SAXException, ProcessingException {
    contentHandler.startDocument();
    renderBean(source, m_bean);
    contentHandler.endDocument();
}

All we did was call the start and end document for the whole XML document. That is enough for a basic XML document with no content. The renderBean() method is where the magic happens:

public void renderBean(String root, Object bean) throws SAXException {
    String namespace = String.format("java:%s", bean.getClass().getName());
    String qName = String.format("%s:%s", root, root);

    contentHandler.startPrefixMapping(root, namespace);
    contentHandler.startElement(namespace, root, qName, ATTR);

    BeanInfo info;
    try {
        info = Introspector.getBeanInfo(bean.getClass());
    } catch (java.beans.IntrospectionException e) {
        throw new SAXException(e);
    }

    for (PropertyDescriptor property : info.getPropertyDescriptors()) {
        renderProperty(namespace, root, bean, property);
    }

    contentHandler.endElement(namespace, root, qName);
    contentHandler.endPrefixMapping(root);
}

So far we created the root element and started iterating over the properties. Our root element consists of a namespace, a name, and a qName. Our implementation uses the source for the initial root element, so as long as we never have any special characters like a colon (':') we should be OK. Without going through the individual properties, a java.awt.Dimension object with a source of "dim" will be rendered like this:

<dim:dim xmlns:dim="java:java.awt.Dimension"/>

Now for the properties (note that we pass the current bean through, so that nested beans read their own properties, and that we use endPrefixMapping and the usual reflection exceptions from the SAX and reflection APIs):

private void renderProperty(String namespace, String root,
                            Object bean, PropertyDescriptor property)
        throws SAXException {
    Method reader = property.getReadMethod();
    Class<?> type = property.getPropertyType();

    // only output if there is something to read, and it is not an indexed type
    if (null != reader && null != type) {
        String name = property.getName();
        String qName = String.format("%s:%s", root, name);

        Object value;
        try {
            value = reader.invoke(bean);
        } catch (Exception e) {
            throw new SAXException(e);
        }

        contentHandler.startElement(namespace, name, qName, ATTR);

        if (isBean(type)) {
            renderBean(name, value);
        } else if (null != value) {
            char[] chars = String.valueOf(value).toCharArray();
            contentHandler.characters(chars, 0, chars.length);
        }

        contentHandler.endElement(namespace, name, qName);
    }
}

This method is a little more complex, in that we have to figure out whether the property is readable and is a type we can handle. In this case we don't read indexed properties (if you want to support that, you'll have to extend this code to do so), and we don't read any properties where there is no read method. We use the property name for the elements surrounding the property values. We get the value, and then we call the start and end elements for the property. Inside of those calls, we determine whether the item is a bean, and if so we render it using the renderBean method (the recursive aspect); otherwise we render the content as text, as long as it is not null.
Once the isBean() method is implemented, our Dimension example above will produce the following result:

<dim:dim xmlns:
  <dim:width>32</dim:width>
  <dim:height>32</dim:height>
</dim:dim>

OK, now for the last method, which determines whether a value is a bean or not:

private boolean isBean(Class<?> klass) {
    if ( Boolean.TYPE.equals( klass ) ) return false;
    if ( Byte.TYPE.equals( klass ) ) return false;
    if ( Character.TYPE.equals( klass ) ) return false;
    if ( Double.TYPE.equals( klass ) ) return false;
    if ( Float.TYPE.equals( klass ) ) return false;
    if ( Integer.TYPE.equals( klass ) ) return false;
    if ( Long.TYPE.equals( klass ) ) return false;
    if ( Short.TYPE.equals( klass ) ) return false;
    if ( java.util.Date.class.equals( klass ) ) return false; // treat dates as value objects
    if ( klass.getName().startsWith( "java.lang" ) ) return false;
    return true;
}

The isBean() method will treat all primitives, Strings, Dates, and anything in "java.lang" as value objects. This captures the boxed versions of primitives as well as the unboxed versions. Everything else is treated as a bean.

Generators aren't too difficult to write; the tricky parts are there due to namespaces. As long as you are familiar with the SAX API you should not have any problems. The complexity in our generator really comes from the reflection logic used to discover how to render an object. You might ask why we didn't use the XMLEncoder in the java.beans package. The answer has to do with the fact that that facility is based on IO streams and can't be easily adapted to XML streams. At any rate, we have something that can work with a wide range of classes. Our XML is easy to understand.
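To see the recursive value-or-bean decision apart from the SAX plumbing, here it is condensed into a short Python sketch; Python rather than Java purely for brevity, with made-up function names and none of the namespace handling:

```python
def is_value(obj):
    # counterpart of isBean(): primitives and strings are values, not beans
    return obj is None or isinstance(obj, (bool, int, float, str))

def render(name, obj):
    # a value becomes text content; anything else is recursed into as a bean
    if is_value(obj):
        return f"<{name}>{obj}</{name}>"
    inner = "".join(render(attr, value) for attr, value in vars(obj).items())
    return f"<{name}>{inner}</{name}>"

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

print(render("dim", Point(32, 32)))  # <dim><x>32</x><y>32</y></dim>
```

Note that, just as in the Java version, two objects referring to each other would recurse forever here; the base case only exists for value types.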
Here is a snippet from a more complex example:

<line:line xmlns:
  <line:name>This line has a name</line:name>
  <line:topLeft>
    <topLeft:topLeft xmlns:
      <topLeft:x>1</topLeft:x>
      <topLeft:y>1</topLeft:y>
    </topLeft:topLeft>
  </line:topLeft>
  <line:bottomRight>
    <bottomRight:bottomRight xmlns:
      <bottomRight:x>20</bottomRight:x>
      <bottomRight:y>20</bottomRight:y>
    </bottomRight:bottomRight>
  </line:bottomRight>
</line:line>

Our theoretical line object contained a name and two java.awt.Point objects, which in turn had an x and a y property. It is easier to understand when you have domain-specific beans that are backed by a database. Nevertheless, we have a generator that satisfies a general purpose and can be extended later on to support our needs as they change.
http://cocoon.apache.org/2.2/core-modules/core/2.2/688_1_1.html
The following code sequence showed up in a testcase (isolated from SPEC2017) for if-conversion and vectorization when searching for the maximum in an array:

addi a2, zero, 1
blt a1, a2, .LBB0_5

which can be expressed as `bge zero, a1, .LBB0_5` / `blez a1, .LBB0_5`. More generally, we want to express (a < 1) as (a <= 0). This adds the required isel pattern and updates the testcases.

We already have this in this exact spot in the file. Is your repo out of date?

// Match X > -1, the canonical form of X >= 0, to the bgez pattern.
def : Pat<(brcond (XLenVT (setgt GPR:$rs1, -1)), bb:$imm12),
          (BGE GPR:$rs1, X0, bb:$imm12)>;

Added a testcase and rebased onto

commit 49942c6d4a0aee740381f754a6a0e6f7e1bfed43
Author: Quentin Colombet <qcolombet@apple.com>
Date: Wed Mar 10 13:28:53 2021 -0800

This feels like it should be a target-independent DAGCombine?

In D98449#2622341, @craig.topper wrote: [...]

Ah, OK, then this looks good to me. Would be good to have the test pre-committed with the worse codegen, as usual, though.

Indentation looks a little off here. Everything else seems to have aligned the "to" and the "from".

Isn't this the blez pattern? The "into" doesn't read right after the "as". Maybe: "Lower (a < 1) as (X0 >= a), the blez pattern."

This code is pretty trivially unreachable. The block above jumps over it. I'm a little surprised CodeGenPrepare or UnreachableBlockElim didn't notice that. I'd prefer we make it reachable so it can't break in the future.

FYI, I put up a related patch to fix this same issue for select.

I applied this patch and this test case fails because the unreachable block does get deleted. This patch also appears to affect llvm/test/CodeGen/RISCV/hoist-global-addr-base.ll

This updates the test cases, and I have rerun 'ninja check' to ensure no further test issues. The commit message has also been updated to reflect that this is a blez instruction. Updated whitespace to all spaces instead of tabs-and-spaces. Updated the comment to reflect that this will turn into a blez instruction.
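For the record, the rewrite rests on the integer identity (a < 1) <=> (a <= 0) <=> (0 >= a). A quick brute-force check of that identity, in plain Python just as a sanity check, with nothing to do with the actual TableGen pattern:

```python
# blt a1, 1 can become blez a1 precisely because these agree for every integer
for a in range(-1000, 1001):
    assert (a < 1) == (a <= 0) == (0 >= a)
print("identity holds on [-1000, 1000]")
```

The same reasoning is why X > -1 (the canonical form of X >= 0) maps to the bgez pattern already in the file.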
LGTM. Do you need me to commit it? If so, can you let me know your email and how you want the Author line to appear in the commit log?

Please go ahead and commit this for me. Thanks, Philipp.
https://reviews.llvm.org/D98449
So I started my first Java class last week, and I am having problems. The question I have in my assignment is:

"Create an application named TestMethods whose main() method holds two integer variables. Assign values to the variables. In turn, pass each value to methods named displayIt(), displayItTimesTwo(), and displayItPlusOneHundred(). Create each method to perform the task its name implies. Save the application as TestMethods.java."

So this is what I have so far...

public class TestMethods {

    public static void main(String[] args) {
        int myVariable1 = 10;
        int myVariable2 = 27;
        System.out.println(displayIt(myVariable1, myVariable2));
    }

    public static void displayIt(int myVariable1, int myVariable2) {
        System.out.println("My two variables are " + myVariable1 + " and " + myVariable2);
    }
}

I keep getting "void type not allowed here" on line 16, the System.out.println(displayIt(myVariable1, myVariable2)); call. What am I doing wrong here? This is my first assignment, and it has been frustrating. I know it's an easy question, but I cannot get it into code for the very first part. Any help is greatly appreciated.
http://www.javaprogrammingforums.com/whats-wrong-my-code/9070-1st-java-class-assignment-having-some-difficulties-little-help-please.html
Details

- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: JRuby 1.6.7, JRuby 1.7.0.pre1
- Fix Version/s: JRuby 1.7.0.RC1
- Component/s: None
- Labels: None
- Number of attachments:

Description

Trying my code with -X+C, I got this weird error. It does not seem to be due to String#rpartition, but rather because of a bad scope.

Steps to reproduce:

$ git clone git://github.com/jruby/perfer.git
$ bundle
$ JRUBY_OPTS="-X+C -Xbacktrace.style=raw" bin/perfer run examples/file_stat.rb

The output I got is in this gist:

Activity

The issue was that rpartition and partition were not specified as needing access to the caller's backref (reads = BACKREF, writes = BACKREF in @JRubyMethod), and as a result no scope was being set up for the caller. In this particular case, a dummy scope was being set up for the caller (for one of the other calls or constructs in the method), and DummyDynamicScope always returns nil for backref.

Trivial reproduction:

def foo
  Object
  "/Users/headius/projects/jruby/tmp/perfer/examples/file_stat.rb:4:in `(root)'".rpartition(/:\d+(?:$|:in )/).first
end
foo

(Here, the access of Object triggers a dummy scope.)

The fix is trivial. I'm trying to set up a test case, and then I'll commit.

commit ff42564963ac0d5b8b6471f23fd3441fb2db6fba
Author: Charles Oliver Nutter <headius@headius.com>
Date: Wed Aug 22 18:42:34 2012 -0500

    Fix JRUBY-6827 ClassCastException with DummyDynamicScope in String#rpartition? with -X+C

    String#partition and rpartition need read/write of backref.

:100644 100644 278c7ff... cf60086... M spec/compiler/general_spec.rb
:100644 100644 48e0a15... 93d6813... M src/org/jruby/RubyString.java

Awesome, thanks for the explanation. I'll hopefully be able to patch this myself next time I encounter something like that!

Just to add some notes: the ClassCast is on this last line, but it should not class-cast since pos >= 0. Benoit is correct that this must be some bad scope setup. Plus, it works in mixed and interpreted modes.
http://jira.codehaus.org/browse/JRUBY-6827
I have a question regarding a school lab assignment, and I was hoping someone could clarify this a little for me. I'm not looking for an answer, just an approach. I've been unable to fully understand the book's explanations.

Question: In a program, write a function that accepts three arguments: an array, the size of the array, and a number n. Assume that the array contains integers. The function should display all of the numbers in the array that are greater than the number n.

/* Programmer: Reilly Parker
   Program Name: Lab14_LargerThanN.cpp
   Date: 10/28/2016
   Description: Displays values of a static array that are greater than a user-inputted value.
   Version: 1.0 */

#include <iostream>
#include <iomanip>
#include <cmath>
using namespace std;

void arrayFunction(int[], int, int); // Prototype for arrayFunction. int[] = array, int = size, int = n

int main()
{
    int n; // User-inputted value "n"
    cout << "Enter Value:" << endl;
    cin >> n;

    const int size = 20; // Constant array size of 20 integers.
    int arrayNumbers[size] = {5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24}; // 20 assigned values for the array

    arrayFunction(arrayNumbers, size, n); // Call function
    return 0;
}

/* Description of the code below: The for statement scans each element; if an array value is
   greater than the variable "n" inputted by the user, only those values greater than "n" are output. */
void arrayFunction(int arrayN[], int arrayS, int number) // Function definition
{
    for (int i = 0; i < arrayS; i++)
    {
        if (arrayN[i] > number)
        {
            cout << arrayN[i] << " ";
            cout << endl;
        }
    }
}

For my whole answer I assume that this:

Question: In a program, write a function that accepts three arguments: an array, the size of the array, and a number n. Assume that the array contains integers. The function should display all of the numbers in the array that are greater than the number n.

is the whole assignment. void arrayFunction(int[], int, int); is probably the only thing you could write. Note, however, that int[] is in fact int*. As others pointed out, don't bother with receiving input. Use something along this line: int numbers[] = {2,4,8,5,7,45,8,26,5,94,6,5,8};. It will create a static array for you. You have the parameter int n, but you never use it. You are trying to send a variable to the function arrayFunction, but I can't see the definition of this variable! Use something called rubber duck debugging (google it :) ). It will really help you. If you have more precise questions, ask them. As a side note: there are better ways of sending an array to a function, but your assignment forces you to use this old and not-so-good solution.

Would you use an if-else statement? I've edited my original post with the updated code.

You have updated the question, so I update my answer. First and foremost of all: do indent your code properly!!! If you do that, your code will be much cleaner, much more readable, and much easier to understand, not only for us, but primarily for you. Next: do not omit braces, even if they are not required in some context. Even experienced programmers only rarely omit them, so as a beginner you should never do so (as, for example, with your for loop). Regarding the if-else statement, the short answer is: it depends. Sometimes I would use if (note: in your case, else is useless). But other times I would use the ternary operator: condition ? value_if_true : value_if_false; or even a lambda expression. In this case you should probably settle for an if, as it will be easier and more intuitive for you.
CC-MAIN-2017-22
refinedweb
619
64.41
: Do you have some error in your error log? Pydev should spawn a shell to analyze the completions or PyQt4... you could try adding 'PyQt4' to the forced builtins. See Cheers, Fabio The following forum message was posted by dbbh at: Im having a similar problem with PyDev PyQt4. When i try to run my script (from PyQt4 import QtGui) i get the following error: Traceback (most recent call File "/Users/henriques/EclipseProjects/python/HelloWorld/src/main.py", line 8, in <module> from PyQt4 import QtGui ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.7/lib/python 2.7/site-packages/PyQt4/QtGui.so, 2): Symbol not found: _PyCapsule_Import Referenced from: /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Py Qt4/QtGui.so Expected in: flat namespace in /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-package s/PyQt4/QtGui.so The import is ok but any functions from the QtGui namespace gives an error (symbol not found). I have tried the following things: run directly from python shell (it works); add /lib and /lib/site-packages (where pyqt is), add only /lib/site-packages, add /lib/site-packages and /lib/site-packages/PyQt4/, add only /lib/site-packages/PyQt4/. Im currently using /lib /lib/site-packages. Other imports like scipy works fine. I have also tried to enable/disable code analysis. Im currently using python 2.7 with PyQt4 (latest version) with Eclipse Hellios and up to date PyDev. (Snow Leopard x86_64). I have build my libraries with the i386 arch and they work fine with python shell (terminal). Thanks in advance. The following forum message was posted by fabioz at: In this case, the problem doesn't seem to be the PYTHONPATH, but some other variable (not sure what variables pyqt needs, but I'd go for PATH and LIBPATH). 
Try seeing the differences from what you have inside eclipse and from your shell (note that /lib/site-packages/PyQt4 should NOT be added to your PYTHONPATH, only /lib/site-packages/, but things may be different for the other variables). Usually if you launch eclipse from the same shell where you have it working with the python shell, those should be correct. Cheers, Fabio The following forum message was posted by dbbh at: Dude, sorry for taking this long to reply but i`ve been very busy these last few days. You were correct. I feel so stupid right now, the system comes with an ApplePythonish version whilst i had an "official" python installed. Terminal was using the official python while pydev was using Apple`s python. I made a simbolic link from my python to the /usr/bin [backing up the other one b4] and it worked perfectly. Thanks! Just so that other ppl using Snow Leopard and python.org python can get it right with PyQt4 here is what i did. 1-Make sure eclipse is using the correct python terminal (if you click auto-configure it will go to the apple`s pyton, the default location for the user installed python is at /Library/Frameworks/Python.Frameworks/Version/X.X/bin) 2-Set eclipse PYTHONPATH to equal terminal`s pythonpath (i go-> import sys then print sys.path within the python shell to view pythonpath idk if there is another way) 3-Make sure the installed libraries are all working within python shell 4-(dont know if it works w/o this but i changed it anyways) Inside the environment tab set the value of path to terminal`s path (echo $PATH inside terminal) This is it thanks again! I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details
https://sourceforge.net/p/pydev/mailman/message/26522684/
Kubernetes StatefulSet — Examples & Best Practices

by Bharathiraja Shanmugam

Are you planning to deploy a database in the Kubernetes cluster? If so, then you've come to the right place. Kubernetes is a container orchestration tool that uses many controllers to run applications as containers (Pods). One of these controllers is called StatefulSet, which is used to run stateful applications. Deploying stateful applications in the Kubernetes cluster can be a tedious task. This is because a stateful application expects a primary-replica architecture and a fixed Pod name. The StatefulSet controller addresses this problem while deploying the stateful application in the Kubernetes cluster.

In this article, you will learn more about what StatefulSets in the Kubernetes cluster are and when to use them, as well as how to deploy a stateful application using the StatefulSet controller in a step-by-step example that includes all the manifest (YAML) files. Furthermore, the StatefulSet concept will be demonstrated using the MySQL database, covering how to create a fixed Pod name for each MySQL replica and how to access the replicated Pods using the Services object.

What Are Stateful Applications?

Stateful applications are applications that store data and keep tracking it. All databases, such as MySQL, Oracle, and PostgreSQL, are examples of stateful applications. Stateless applications, on the other hand, do not keep the data. Node.js and Nginx are examples of stateless applications. For each request, the stateless application will receive new data and process it.

In a modern web application, stateless applications connect with stateful applications to serve the user's request. A Node.js application is a stateless application that receives new data on each request from the user. This application is then connected with a stateful application, such as a MySQL database, to process the data. MySQL stores the data and keeps updating it based on the user's request.
Read on to learn more about StatefulSets in the Kubernetes cluster: what they are, when to use them, how to create them, and what the best practices are.

What Are StatefulSets?

A StatefulSet is the Kubernetes controller used to run stateful applications as containers (Pods) in the Kubernetes cluster. StatefulSets assign a sticky identity (an ordinal number starting from zero) to each Pod instead of assigning random IDs to each replica Pod. A new Pod is created by cloning the previous Pod's data. If the previous Pod is in the pending state, then the new Pod will not be created. If you delete Pods, they are deleted in reverse order, not in random order. For example, if you had four replicas and you scaled down to three, the Pod numbered 3 would be deleted first.

The diagram below shows how the Pods are numbered from zero and how the persistent volume is attached to each Pod in the StatefulSet.

When to Use StatefulSets

There are several reasons to consider using StatefulSets. Here are two examples:

- Assume you deployed a MySQL database in the Kubernetes cluster and scaled it to three replicas, and a frontend application wants to access the MySQL cluster to read and write data. The read requests will be forwarded to all three Pods. However, the write requests will only be forwarded to the first (primary) Pod, and the data will be synced with the other Pods. You can achieve this by using StatefulSets.
- Deleting or scaling down a StatefulSet will not delete the volumes associated with the stateful application. This gives you data safety. If you delete the MySQL Pod or if the MySQL Pod restarts, you still have access to the data in the same volume.

Deployment vs. StatefulSets

You can also create Pods (containers) using the Deployment object in the Kubernetes cluster. This allows you to easily replicate Pods and attach a storage volume to the Pods. The same thing can be done by using StatefulSets. What then is the advantage of using StatefulSets?
Well, the Pods created using the Deployment object are assigned random IDs. For example, say you are creating a Pod named "my-app" and scaling it to three replicas. The names of the Pods are created like this:

my-app-123ab
my-app-098bd
my-app-890yt

After the name "my-app", random IDs are added. If a Pod restarts or you scale the Deployment down and back up, the Kubernetes Deployment object will assign different random IDs to each Pod. After restarting, the names of the Pods might look like this:

my-app-jk879
my-app-kl097
my-app-76hf7

All these Pods are associated with one load balancer service. So in a stateless application, changes in the Pod name are easily handled: the Service object copes with the random IDs of Pods and distributes the load. This type of deployment is very suitable for stateless applications.

However, stateful applications cannot be deployed like this. A stateful application needs a sticky identity for each Pod, because replica Pods are not identical. Take a look at a MySQL database deployment. Assume you create Pods for the MySQL database using the Kubernetes Deployment object and scale them. If you write data to one MySQL Pod, that data is not automatically replicated to the other MySQL Pods, and after a restart there is no guarantee that the same Pod, with the same data, comes back. This is the first problem with the Kubernetes Deployment object for stateful applications.

Stateful applications always need a sticky identity. While the Kubernetes Deployment object assigns random IDs to each Pod, the Kubernetes StatefulSets controller assigns an ordinal number to each Pod, starting from zero, such as mysql-0, mysql-1, mysql-2, and so forth. For stateful applications with a StatefulSet controller, it is possible to set the first Pod as primary and the other Pods as replicas: the first Pod will handle both read and write requests from the user, and the other Pods always sync with the first Pod for data replication. If a Pod dies, a new Pod is created with the same name.
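To make the naming contrast concrete, here is a small Python sketch (not from the original article, purely illustrative) that mimics how the two controllers name Pods and how a StatefulSet scales down in reverse ordinal order:

```python
import random
import string

def deployment_pod_names(app: str, replicas: int) -> list:
    # Deployment-style: every replica gets a fresh random suffix,
    # and a restarted Pod would get a brand-new one.
    rand = lambda: "".join(random.choices(string.ascii_lowercase + string.digits, k=5))
    return [f"{app}-{rand()}" for _ in range(replicas)]

def statefulset_pod_names(app: str, replicas: int) -> list:
    # StatefulSet-style: sticky ordinal identity, stable across restarts.
    return [f"{app}-{i}" for i in range(replicas)]

def scale_down(pods: list, new_count: int) -> list:
    # A StatefulSet removes the highest ordinal first (reverse order).
    return pods[:new_count]

pods = statefulset_pod_names("mysql-set", 4)
print(pods)                 # ['mysql-set-0', 'mysql-set-1', 'mysql-set-2', 'mysql-set-3']
print(scale_down(pods, 3))  # Pod 3 is removed first; the survivors keep their names
```

The point of the sketch is only the contract: ordinal names are deterministic, so a primary can always be addressed as the Pod with ordinal 0.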
The diagram below shows a MySQL primary-replica architecture with persistent volumes and data replication.

Now, add another Pod to that. The fourth Pod will only be created if the third Pod is up and running, and it will clone the data from the previous Pod.

In summary, StatefulSets provide the following advantages when compared to Deployment objects:

- Ordered numbers for each Pod
- The first Pod can be a primary, which makes it a good choice when creating a replicated database setup that handles both reading and writing
- Other Pods act as replicas
- New Pods will only be created if the previous Pod is in the running state, and they will clone the previous Pod's data
- Deletion of Pods occurs in reverse order

How to Create a StatefulSet in Kubernetes

In this section, you will learn how to create Pods for a MySQL database using the StatefulSets controller.

Create a Secret

To start, you will need to create a Secret for the MySQL application that will store sensitive information, such as usernames and passwords. Here, I am creating a simple Secret. However, in a production environment, using HashiCorp Vault is recommended.
Use the following code to create a Secret for MySQL:

apiVersion: v1
kind: Secret
metadata:
  name: mysql-password
type: Opaque
stringData:
  MYSQL_ROOT_PASSWORD: password

Save the code using the file name mysql-secret.yaml and execute it with the following command on your Kubernetes cluster:

kubectl apply -f mysql-secret.yaml

Get the list of Secrets:

kubectl get secrets

Create a MySQL StatefulSet Application

Before creating a StatefulSet application, check your volumes by getting the persistent volume list:

kubectl get pv

NAME        CAPACITY   ACCESS MODES   RECLAIM   STATUS
pvc-e0567   10Gi       RWO            Retain    Bound

Next, get the persistent volume claim list:

kubectl get pvc

NAME                      STATUS   VOLUME                 CAPACITY   ACCESS
mysql-store-mysql-set-0   Bound    pvc-e0567d43ffc6405b   10Gi       RWO

Last, get the storage class list:

kubectl get storageclass

NAME                                    PROVISIONER               RECLAIMPOLICY
linode-block-storage                    linodebs.csi.linode.com
linode-block-storage-retain (default)   linodebs.csi.linode.com   Retain

Then use the following code to create a MySQL StatefulSet application in the Kubernetes cluster:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-set
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: "mysql"
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mysql
        image: mysql:5.7
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-store
          mountPath: /var/lib/mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-password
              key: MYSQL_ROOT_PASSWORD
  volumeClaimTemplates:
  - metadata:
      name: mysql-store
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: "linode-block-storage-retain"
      resources:
        requests:
          storage: 5Gi

Here are a few things to note:

- The kind is StatefulSet. This tells Kubernetes to create the MySQL application with the stateful feature.
- The password is taken from the Secret object using secretKeyRef.
- Linode block storage is used in the volumeClaimTemplates.
- If you do not specify a storage class name here, the default storage class in your cluster will be used.
- The replica count here is 3 (set via the replicas field), so three Pods named mysql-set-0, mysql-set-1, and mysql-set-2 will be created.

Next, save the code using the file name mysql.yaml and execute it with:

kubectl apply -f mysql.yaml

Now that the MySQL Pods are created, get the Pods list:

kubectl get pods

NAME          READY   STATUS    RESTARTS   AGE
mysql-set-0   1/1     Running   0          142s
mysql-set-1   1/1     Running   0          132s
mysql-set-2   1/1     Running   0          120s

Create a Service for the StatefulSet Application

Now, create the service for the MySQL Pods. Do not use a load balancer service for a stateful application; instead, create a headless service for the MySQL application using the following code:

apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
  clusterIP: None
  selector:
    app: mysql

Save the code using the file name mysql-service.yaml and execute it with:

kubectl apply -f mysql-service.yaml

Get the list of running services:

kubectl get svc

Create a Client for MySQL

If you want to access MySQL, you will need a MySQL client tool. Deploy a MySQL client using the following manifest:

apiVersion: v1
kind: Pod
metadata:
  name: mysql-client
spec:
  containers:
  - name: mysql-container
    image: alpine
    command: ['sh','-c', "sleep 1800m"]
    imagePullPolicy: IfNotPresent

Save the code using the file name mysql-client.yaml and execute it with:

kubectl apply -f mysql-client.yaml

Then open a shell in the client Pod:

kubectl exec --stdin --tty mysql-client -- sh

Finally, install the MySQL client tool inside the Pod:

apk add mysql-client

Access the MySQL Application Using the MySQL Client

Next, access the MySQL application using the MySQL client and create databases on the Pods.
If you are not already in the MySQL client Pod, enter it now:

kubectl exec -it mysql-client /bin/sh

To access MySQL, you can use the standard MySQL command to connect with the MySQL server:

mysql -u root -p -h host-server-name

For access, you will need a MySQL server name. The syntax of the MySQL server name in the Kubernetes cluster is:

stateful_name-ordinal_number.mysql.default.svc.cluster.local

# Example
mysql-set-0.mysql.default.svc.cluster.local

Connect with the MySQL primary Pod using the following command. When asked for a password, enter the one you created in the "Create a Secret" section above.

mysql -u root -p -h mysql-set-0.mysql.default.svc.cluster.local

Next, create a database on the MySQL primary, then exit:

create database erp;
exit;

Now connect to the other Pods and create databases as above:

mysql -u root -p -h mysql-set-1.mysql.default.svc.cluster.local
mysql -u root -p -h mysql-set-2.mysql.default.svc.cluster.local

Remember that while Kubernetes helps you set up a stateful application, you will need to set up the data cloning and data sync yourself. StatefulSets do not do this for you.

Best Practices

If you are planning to deploy stateful applications, such as Oracle, MySQL, Elasticsearch, and MongoDB, then using StatefulSets is a great option. The following points need to be considered while creating stateful applications:

- Create a separate namespace for databases.
- Place all the needed components for stateful applications, such as ConfigMaps, Secrets, and Services, in that namespace.
- Put your custom scripts in ConfigMaps.
- Use a headless service instead of a load balancer service when creating Service objects.
- Use HashiCorp Vault for storing your Secrets.
- Use persistent volume storage for storing the data. Then your data won't be deleted even if the Pod dies or crashes.

Deployment objects are the most used controller for creating Pods in Kubernetes.
You can easily scale these Pods by mentioning the replica count in the manifest file. For stateless applications, using Deployment objects is most suitable. For example, assume you are planning to deploy your Node.js application and you want to scale it to five replicas. In this case, the Deployment object is well suited. The diagram below shows how Deployment and StatefulSets assign names to the Pods.

StatefulSets create ordinal service endpoints for each Pod created using the replicas option. The diagram below shows how the stateful Pod endpoints are created with ordinal numbering and how they communicate with each other.

Conclusion

In this article, you learned about Kubernetes's two main controllers for creating Pods: Deployments and StatefulSets. A Deployment object is well suited for stateless applications, and the StatefulSets controller is well suited for stateful applications. If you are planning to deploy stateful applications, such as MySQL and Oracle, then you should use the StatefulSets controller instead of the Deployment object.

The StatefulSets controller offers an ordinal number for each Pod, starting from zero. This helps stateful applications easily set up a primary-replica architecture, and if a Pod dies, a new Pod is re-created with the same name. This is a very useful feature and does not break the chain of the stateful application cluster. When scaling down, Pods are deleted in reverse order.

Use the StatefulSets controller in the Kubernetes cluster for deploying stateful applications, such as Oracle, MySQL, Elasticsearch, and MongoDB. While cloning and syncing data must still be completed manually, StatefulSets go a long way in easing the complexity involved in deploying stateful applications.

Originally published at https://loft-sh.medium.com/kubernetes-statefulset-examples-best-practices-902cd50f7fff.
For one of my Lucky apps, I'm using Stimulus, which is light-weight. I wanted to avoid doing a ton on the client side so that I can keep all the nice benefits of Crystal and Lucky. In a recent instance, I had to load some comments on to a page asynchronously.

Normally in this case, you might write some javascript to make an api call, then gather all of your records in a giant JSON object. Then you'd probably iterate over the result and create some sort of component, maybe in Vue or in React, and have those render each comment.

With Stimulus, you don't have built-in templating. You're left with writing a lot of document.createElement and element.setAttribute type stuff, or you just use lots of innerHTML = '' type calls.

What I wanted was to write my markup in Lucky, then make an API call that returns the markup, and I can just shove that into an element and be done with it. By default, the actions in Lucky want you to render an HTML Page, but I didn't want to make a blank layout just for these. So here's what I did:

# src/components/comments_list.cr
class CommentsList < BaseComponent
  needs save_comment : SaveComment
  needs comments : CommentQuery

  # Renders the comments form, and each comment
  def render
    # This could also move to its own component!
    form_for Comments::Create do
      textarea save_comment.body, class: "field"
      submit "Post Comment"
    end

    div class: "comments" do
      @comments.each do |comment|
        mount Comment.new(comment)
      end
    end
  end
end

Now I just need a handy helper macro in my ApiAction.

# src/actions/api_action.cr
abstract class ApiAction < Lucky::Action
  accepted_formats [:json], default: :json

  macro render_component(component_class, **assigns)
    send_text_response({{ component_class }}.new(
      {% for key, value in assigns %}
        {{ key }}: {{ value }},
      {% end %}
    ).render_to_string, "text/html")
  end
end

Then I can use this new macro in my api actions!
# src/actions/api/comments/index.cr
class Api::Comments::Index < ApiAction
  get "/api/comments" do
    save_comment = SaveComment.new
    comments = CommentQuery.all
    render_component CommentsList, save_comment: save_comment, comments: comments
  end
end

Lastly, my javascript is pretty simple now.

import { Controller } from 'stimulus'

export default class extends Controller {
  connect() {
    fetch("/api/comments")
      .then(res => res.text())
      .then(html => this.element.innerHTML = html)
  }
}

Thoughts

I know I left out several bits, like setting up Stimulus or what the main action / page looks like, but all of those should be fairly straightforward. I'll probably do a writeup later on integrating Stimulus with Lucky. Hopefully these examples should get you close should you need to use this!

Posted by Jeremy Woertink
A small point to point out a difference.

A lot of optimisation is done with gradient systems. In this blogpost I'd just like to point out a very simple example to demonstrate that you need to be careful with calling this "optimisation", especially when you have a system with a constraint. I'll pick an example from wikipedia. Note that you can play with the notebook I used for this here.

\[ \begin{align} \text{max } f(x,y) &= x^2 y \\ \text{subject to } g(x,y) & = x^2 + y^2 - 3 = 0 \end{align} \]

This is a system with a constraint, which makes it somewhat hard to optimise. If we were to draw a picture of the general problem, we might notice that only a certain set of points is of interest to us. We might be able to use a little bit of mathematics to help us out. We can rewrite our original problem into another one;

\[ L(x, y, \lambda) = f(x,y) - \lambda g(x,y) \]

One interpretation of this new function is that the parameter \(\lambda\) can be seen as a punishment for not having a feasible allocation. Note that even if \(\lambda\) is big, if \(g(x,y) = 0\) then it will not cause any form of punishment. This might remind you of a regulariser. Let's go and differentiate \(L\) with regards to \(x, y, \lambda\).

\[ \begin{align} \frac{\delta L}{\delta x} &= \Delta_x f(x,y) - \Delta_x \lambda g(x,y) = 2xy - \lambda 2x \\ \frac{\delta L}{\delta y} &= \Delta_y f(x,y) - \Delta_y \lambda g(x,y) = x^2 - \lambda 2 y\\ \frac{\delta L}{\delta \lambda} &= -g(x,y) = -(x^2 + y^2 - 3) \end{align} \]

All three of these expressions need to be equal to zero. In the case of \(\frac{\delta L}{\delta \lambda}\) that's great because this will allow us to guarantee that our problem is indeed feasible! So what one might consider doing is to rewrite this into an expression such that a gradient method will minimise this.
\[q(x, y, \lambda) = \sqrt{\Big( \frac{\delta L}{\delta x}\Big)^2 + \Big( \frac{\delta L}{\delta y}\Big)^2 + \Big( \frac{\delta L}{\delta \lambda}\Big)^2}\]

It'd be great if we didn't need to do all of this via maths. As luck would have it, Python has great support for autodifferentiation, so we'll use that to look for a solution. The code is shown below.

from autograd import grad
from autograd import elementwise_grad as egrad
import autograd.numpy as np
import matplotlib.pylab as plt

def f(weights):
    x, y, l = weights[0], weights[1], weights[2]
    return y * x**2

def g(weights):
    x, y, l = weights[0], weights[1], weights[2]
    return l*(x**2 + y**2 - 3)

def q(weights):
    dx = grad(f)(weights)[0] + grad(g)(weights)[0]
    dy = grad(f)(weights)[1] + grad(g)(weights)[1]
    dl = grad(f)(weights)[2] + grad(g)(weights)[2]
    return np.sqrt(dx**2 + dy**2 + dl**2)

n = 100
wts = np.array(np.random.normal(0, 1, (3, )))
for i in range(n):
    wts -= egrad(q)(wts) * 0.01

This script was run and logged, and produced the following plot:

When we ignore the small numerical inaccuracy, we can confirm that our solution seems feasible enough since \(q(x, y, \lambda) \approx 0\). That said, this solution feels a bit strange. Taking the found values \(f(x^*,y^*) \approx f(0.000, 1.739)\) suggests that the best value found is \(\approx 0\). Are we sure we're in the right spot? We've used our tool AutoGrad the right way, but there's another issue: the gradient might get stuck in a place that is not optimal. There is more than one point that satisfies the three derivatives shown earlier. To demonstrate this, let us use sympy instead.
import sympy as sp

x, y, l = sp.symbols("x, y, l")
f = y*x**2
g = x**2 + y**2 - 3
lagrange = f - l * g
sp.solve([sp.diff(lagrange, x), sp.diff(lagrange, y), sp.diff(lagrange, l)], [x, y, l])

This yields the following set of solutions:

\[\left[\left(0, -\sqrt{3}, 0\right), \left(0, \sqrt{3}, 0\right), \left(-\sqrt{2}, -1, -1\right), \left(-\sqrt{2}, 1, 1\right), \left(\sqrt{2}, -1, -1\right), \left(\sqrt{2}, 1, 1\right)\right]\]

Note that one of these solutions found with sympy yields \((0, \sqrt{3}, 0)\), which corresponds to \(\approx [-0.0001, 1.739, -0.0001]\). We can confirm that our gradient solver found a solution that was feasible, but it did not find one that is optimal. A solution out of our gradient solver can be a saddle point, local minimum, or local maximum, but the gradient solver has no way of figuring out which one of these is the global optimum (which in this problem is \((x, y) = (\pm\sqrt{2}, 1)\)). Whether or not we get close to the correct optima depends on the starting values of the variables too. To demonstrate this, I've attempted to run the above script multiple times with random starting values to see if a pattern emerges.

This example helps point out some downsides of the gradient approach. In short: gradient descent is a general tactic, but when we add a constraint we're in trouble. It also seems like variants of gradient descent like Adam will suffer from these downsides.

You might wonder why this matters. Many machine learning algorithms don't have a constraint. Imagine we have a function \(f(x|w) \approx y\) where \(f\) is a machine learning algorithm. A lot of these algorithms involve minimising a system like the one below;

\[ \text{min } q(w) = \sum_i \text{loss}(f(x_i|w) - y_i) \]

This common system does not have a constraint. But would it not be much more interesting to optimise another type of system?
\[ \begin{align} \text{min } q(w) & = \sum_i \text{loss}(f(x_i|w) - y_i) \\ \text{subject to } h(w) & = \text{max}\{\text{loss}(f(x_i|w) - y_i) \} \leq \sigma \end{align} \]

This would be fundamentally different from regularising the model to prevent a form of overfitting. Instead of re-valuing an allocation of \(w\), we'd like to restrict the algorithm from ever doing something we don't want it to do. The goal is not to optimise for two things at the same time; instead we would impose a hard constraint on the ML system. Another idea for an interesting constraint:

\[ \begin{align} \text{min } q(w) & = \sum_i \text{loss}(f(x_i|w) - y_i) \\ \text{subject to } & \\ h_1(w) & = \text{loss}(f(x_1|w) - y_1) \leq \epsilon_1 \\ & \vdots \\ h_k(w) & = \text{loss}(f(x_k|w) - y_k) \leq \epsilon_k \end{align} \]

The idea here is that one determines some constraints \(1 ... k\) on the loss of some subset of points in the original dataset. You might say "these points must be predicted within a certain accuracy while the other points matter less".

\[ \begin{align} \text{min } q(w) & = \sum_i \text{loss}(f(x_i|w) - y_i) \\ \text{subject to } & \\ h_1(w) & =\frac{\delta f(x|w)}{\delta x_{k}} \geq 0 \\ h_2(w) & =\frac{\delta f(x|w)}{\delta x_{m}} \leq 0 \end{align} \]

Here we're trying to tell the model that for some feature \(k\) we demand a monotonically increasing relationship with the output of the model, and for some feature \(m\) we demand a monotonically decreasing relationship with the output. For some features a modeller might be able to declare upfront that they should have a certain relationship with the final model, and being able to constrain a model to keep this in mind might make the model a lot more robust to certain types of overfitting. This flexibility of modelling with constraints might do wonders for the interpretability and feasibility of models in production.
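As a toy illustration of this last idea (not from the original post, and far simpler than a real constrained trainer): projected gradient descent can enforce a sign constraint by clipping the weight back into the feasible set after every step. Here a 1D linear model is fit to data whose true slope is negative, while the constraint demands a non-negative weight, so the method settles on the boundary w = 0 instead of the unconstrained optimum w = -2:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = -2.0 * x  # the data genuinely wants a negative slope

w = 1.0    # feasible starting point
lr = 0.01
for _ in range(500):
    grad = 2 * np.mean((w * x - y) * x)  # d/dw of the mean squared error
    w = max(w - lr * grad, 0.0)          # projection step: keep w >= 0

print(w)  # 0.0: the constraint is active, so we sit on the boundary
```

The projection step is the whole trick: the gradient keeps pushing toward the infeasible optimum, and the clip keeps pulling the weight back to the nearest feasible point.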
The big downer is that currently we do not have general tools to guarantee optimality under constraints in general machine learning algorithms. That is, this is my current understanding. If I'm missing out on something: hit me up on twitter.

As a note, I figured it might be nice to mention a hack I tried to improve the performance of the gradient algorithm. The second derivatives of \(L\) with regards to \(x,y\) also need to be negative if we want \(L\) to be a maximum. Keeping this in mind, we might add more information to our function \(q\).

\[q(x, y, \lambda) = \sqrt{\Big( \frac{\delta L}{\delta x}\Big)^2 + \Big( \frac{\delta L}{\delta y}\Big)^2 + \Big( \frac{\delta L}{\delta \lambda}\Big)^2 + \text{relu} \Big(\frac{\delta^2 L}{\delta x^2}\Big)^2 + \text{relu} \Big(\frac{\delta^2 L}{\delta y^2}\Big)^2}\]

The reason why I'm adding a \(\text{relu}()\) function here is that if the second derivative is indeed negative, the error out of \(\text{relu}()\) is zero. I'm squaring the \(\text{relu}()\) effect such that the error made is in proportion to the rest. This approach also has a big downside though: the relu has large regions where the gradient is zero. So we might approximate with a softplus instead.

\[q(x, y, \lambda) = \sqrt{\Big( \frac{\delta L}{\delta x}\Big)^2 + \Big( \frac{\delta L}{\delta y}\Big)^2 + \Big( \frac{\delta L}{\delta \lambda}\Big)^2 + \text{softplus} \Big(\frac{\delta^2 L}{\delta x^2}\Big)^2 + \text{softplus} \Big(\frac{\delta^2 L}{\delta y^2}\Big)^2}\]

I was wondering if by adding this, one might improve the odds of finding the optimal value. The results speak for themselves: luck might be a better tactic.

Note that you can play with the notebook I used for this here.
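A closing aside that is not in the original post: for this specific constraint you can sidestep the saddle-point trouble entirely by eliminating the constraint with a parametrization. Substituting \(x = \sqrt{3}\cos\theta\), \(y = \sqrt{3}\sin\theta\) satisfies \(x^2 + y^2 = 3\) by construction, so plain 1D gradient ascent on \(\theta\) finds the true optimum \((\sqrt{2}, 1)\) with \(f = 2\):

```python
import numpy as np

def f(x, y):
    return x**2 * y

def from_theta(theta):
    # any theta lands exactly on the constraint circle x^2 + y^2 = 3
    return np.sqrt(3) * np.cos(theta), np.sqrt(3) * np.sin(theta)

def df_dtheta(theta):
    # analytic derivative of f(theta) = 3*sqrt(3) * cos(theta)^2 * sin(theta)
    c, s = np.cos(theta), np.sin(theta)
    return 3 * np.sqrt(3) * (c**3 - 2 * c * s**2)

theta = 0.3  # arbitrary feasible starting angle
for _ in range(2000):
    theta += 0.01 * df_dtheta(theta)  # ascent, because we maximise f

x, y = from_theta(theta)
print(x, y, f(x, y))  # roughly 1.41421, 1.0, 2.0
```

This only works because the feasible set here is a nice one-dimensional manifold; it does not generalise to arbitrary constraints, which is exactly the gap the post complains about.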
Revel, GORP, and MySQL

Building a classic 3-tier web application controller in Golang. This setup should reflect a backend that you would use with a Single Page Application framework like AngularJS.

Synopsis: I'm building a simple auction management system for the wife. One of the model items I have to deal with is a BidItem, which represents something you can purchase during the bidding process. I'll demonstrate how to build the full controller and database bindings for this one model item. We will implement the controller with the following steps:

1. Define REST Routes.
2. Define Model.
3. Implement Validation Logic.
4. Set up the GorpController.
5. Register Configuration.
6. Set up the Database.
7. Extend the GorpController.
8. Handle JSON Request Body (for Creation and Updates).
9. Implement Controller CRUD functionality.

What I will not go into is security and proper response handling. Also, I'm going to be as succinct as possible so I don't waste your time.

1. Define REST Routes.

In <app-root>/conf/routes, add the following lines:

GET     /item/:id   BidItemCtrl.Get
POST    /item       BidItemCtrl.Add
PUT     /item/:id   BidItemCtrl.Update
DELETE  /item/:id   BidItemCtrl.Delete
GET     /items      BidItemCtrl.List

The first two columns are obvious (HTTP Verb and Route). The :id is an example of placeholder syntax. Revel provides three ways to get this value, including using the value as a function argument. The BidItemCtrl.<Func> refers to the controller and method that will handle the route.

2. Define Model.

I've created a folder called models in <app-root>/app. This is where I define my models.
This is my model in file <app-root>/app/models/bid-item.go:

package models

type BidItem struct {
    Id             int64   `db:"id" json:"id"`
    Name           string  `db:"name" json:"name"`
    Category       string  `db:"category" json:"category"`
    EstimatedValue float32 `db:"est_value" json:"est_value"`
    StartBid       float32 `db:"start_bid" json:"start_bid"`
    BidIncrement   float32 `db:"bid_incr" json:"bid_incr"`
    InstantBuy     float32 `db:"inst_buy" json:"inst_buy"`
}

Note the meta information after each property on the structure. The db:"<name>" is the desired column name in the database and the json:"<name>" is the JSON property name used in serialization/deserialization. These values are not mandatory, but if you do not specify them, both GORP and the encoding/json package will use the property name. So you will be stuck with columns and JSON properties in Pascal casing.

3. Implement Validation Logic.

For model validation, we could implement our own custom logic, but Revel already has a pretty good validation facility (so we'll just use that). Of course, this means we are going to trade off having a clean model (unaware of other frameworks) for development expedience.
If you go spelunking in the Revel sample projects, you will find this little gem in the Bookings project. This is the GorpController, which is a simple extension to the revel.Controller that defines some boiler plate around wrapping controller methods in database transactions. I've moved out the database creation code into a separate file (which we will come back to in step 6), so the file should now look like this: package controllers import ( "github.com/coopernurse/gorp" "database/sql" "github.com/revel/revel" ) var ( Dbm *gorp.DbMap ) type GorpController struct { *revel.Controller Txn *gorp.Transaction } func (c *GorpController) Begin() revel.Result { txn, err := Dbm.Begin() if err != nil { panic(err) } c.Txn = txn return nil } func (c *GorpController) Commit() revel.Result { if c.Txn == nil { return nil } if err := c.Txn.Commit(); err != nil && err != sql.ErrTxDone { panic(err) } c.Txn = nil return nil } func (c *GorpController) Rollback() revel.Result { if c.Txn == nil { return nil } if err := c.Txn.Rollback(); err != nil && err != sql.ErrTxDone { panic(err) } c.Txn = nil return nil } Copy this code into a new file under <app-root>/app/controllers. I call the file gorp.go. Two important variables to note are the var ( Dbm *gorp.DbMap ) and the GorpController's property Txn *gorp.Transaction. We will use the Dbm variable to perform database creation and the Txn variable to execute queries and commands against MySQL in the controller. Now in order for us to get the magically wrapped transactional functionality with our controller actions, we need to register these functions with Revel's AOP mechanism. Create a file called init.go in <app-root>/app/controllers. 
In this file, we will define an init() function which will register the handlers: package controllers import "github.com/revel/revel" func init(){ revel.InterceptMethod((*GorpController).Begin, revel.BEFORE) revel.InterceptMethod((*GorpController).Commit, revel.AFTER) revel.InterceptMethod((*GorpController).Rollback, revel.FINALLY) } Now we have transaction support for controller actions. 5. Register Configuration. We are about ready to start setting up the database. Instead of throwing in some hard coded values for a connection string, we would prefer to pull this information from configuration. This is how you do that. Open <app-root>/conf/app.conf and add the following lines. Keep in mind that there are sections for multiple deployment environments. For now, we're going to throw our configuration values in the [dev] section: [dev] db.user = auctioneer db.password = password db.host = 192.168.24.42 db.port = 3306 db.name = auction Now these values will be made available to use at runtime via the Revel API: revel.Config.String(paramName). 6. Setup the Database. Next, we need to set up the database. This involves instantiating a connection to the database and creating the tables for our model if the table does not already exist. We are going to go back to our <app-root>/app/controllers/init.go file and add the logic to create a database connection, as well as, the table for our BidItem model if it does not exist. First, I'm going to add two helper functions. The first is a more generic way for us to pull out configuration values from Revel (including providing a default value). The second is a helper function for building the MySQL connection string. 
func getParamString(param string, defaultValue string) string { p, found := revel.Config.String(param) if !found { if defaultValue == "" { revel.ERROR.Fatal("Cound not find parameter: " + param) } else { return defaultValue } } return p } func getConnectionString() string { host := getParamString("db.host", "") port := getParamString("db.port", "3306") user := getParamString("db.user", "") pass := getParamString("db.password", "") dbname := getParamString("db.name", "auction") protocol := getParamString("db.protocol", "tcp") dbargs := getParamString("dbargs", " ") if strings.Trim(dbargs, " ") != "" { dbargs = "?" + dbargs } else { dbargs = "" } return fmt.Sprintf("%s:%s@%s([%s]:%s)/%s%s", user, pass, protocol, host, port, dbname, dbargs) } With these two functions, we can construct a connection to MySQL. Let's create a function that will encapsulate initializing the database connection and creating the initial databases: var InitDb func() = func(){ connectionString := getConnectionString() if db, err := sql.Open("mysql", connectionString); err != nil { revel.ERROR.Fatal(err) } else { Dbm = &gorp.DbMap{ Db: db, Dialect: gorp.MySQLDialect{"InnoDB", "UTF8"}} } // Defines the table for use by GORP // This is a function we will create soon. defineBidItemTable(Dbm) if err := Dbm.CreateTablesIfNotExists(); err != nil { revel.ERROR.Fatal(err) } } One thing to note in the code above is that we are setting the Dbm variable we defined in the GorpController file gorp.go. Using the connection, Dbm, we will define our BidItem schema and create the table if it does not exist. We have not yet written the defineBidItemTable(Dbm) function; I will show you that soon. Before we move on, we need to talk about imports. 
All of the helper functions and code in the init.go file will require the following libraries: import ( "github.com/revel/revel" "github.com/coopernurse/gorp" "database/sql" _ "github.com/go-sql-driver/mysql" "fmt" "strings" ) Of special note is the import and non-use of the mysql library: _ "github.com/go-sql-driver/mysql". If you do not include this import statement, your project will break. The reason is that GORP relies on the database/sql package, which is only a set of interfaces. The mysql package implements those interfaces, but you will not see any direct reference to the library in the code. Now it's time to implement the defineBidItemTable(Dbm) function. func defineBidItemTable(dbm *gorp.DbMap){ // set "id" as primary key and autoincrement t := dbm.AddTable(models.BidItem{}).SetKeys(true, "id") // e.g. VARCHAR(25) t.ColMap("name").SetMaxSize(25) } Notice how I use the term define and not create. This is because the database is not actually created until the call to Dbm.CreateTablesIfNotExists(). Finally, the InitDb() function needs to be executed when the application starts. We register the function similar to the way we did the AOP functions for the GorpController in the init() function: func init(){ revel.OnAppStart(InitDb) revel.InterceptMethod((*GorpController).Begin, revel.BEFORE) revel.InterceptMethod((*GorpController).Commit, revel.AFTER) revel.InterceptMethod((*GorpController).Rollback, revel.FINALLY) } When we are done with these steps, you will be able to start the application and verify the table creation in MySQL. If you are curious as to how GORP defines the table, here's a screen capture of mysql > describe BidItem;: 7. Extend the GorpController. We are ready to start working on the controller. First we need to create a new file for the controller: <app-root>/app/controllers/bid-item.go. 
Using the GorpController as the prototype, let's define our new BidItemCtrl: package controllers import ( "auction/app/models" "github.com/revel/revel" "encoding/json" ) type BidItemCtrl struct { GorpController } We will use those imports soon. 8. Handle JSON Request Body. This is one of those things that is not well documented in Revel, in part because Revel doesn't consider it a part of the framework. If you want to be able to transform JSON requests into your own model representations, you will need to define that functionality. We are going to add a simple method to the controller that will parse the request body and return a BidItem and Error tuple: func (c BidItemCtrl) parseBidItem() (models.BidItem, error) { biditem := models.BidItem{} err := json.NewDecoder(c.Request.Body).Decode(&biditem) return biditem, err } We now have a way of getting a BidItem from the Request.Body. You will have to know the context in which to use this method, but you do that anyway in most web frameworks (albeit it's a little less elegant in Revel). 9. Implement Controller CRUD functionality. Let's finish up the rest of the controller! Add BidItem. Inserting the record is pretty easy with GORP: c.Txn.Insert(<pointer to struct>). func (c BidItemCtrl) Add() revel.Result { if biditem, err := c.parseBidItem(); err != nil { return c.RenderText("Unable to parse the BidItem from JSON.") } else { // Validate the model biditem.Validate(c.Validation) if c.Validation.HasErrors() { // Do something better here! return c.RenderText("You have errors in your BidItem.") } else { if err := c.Txn.Insert(&biditem); err != nil { return c.RenderText( "Error inserting record into database!") } else { return c.RenderJson(biditem) } } } } Get BidItem. func (c BidItemCtrl) Get(id int64) revel.Result { biditem := new(models.BidItem) err := c.Txn.SelectOne(biditem, `SELECT * FROM BidItem WHERE id = ?`, id) if err != nil { return c.RenderText("Error.
Item probably doesn't exist.") } return c.RenderJson(biditem) } List BidItems (with paging). For listing BidItems, we're going to need to implement some simple paging. Since we have an auto-incremented, indexed bigint(20) for an identifier, we'll keep it simple and use the last id as the start field and a limit to indicate the number of records we want. For this, we will accept two query parameters: lid (last id) and limit (number of rows to return). Since we don't want a SQL injection attack, we're going to parse these query options as integers. Here are a couple of functions to help the parsing (keeping it in <app-root>/app/controllers/common.go): package controllers import ( "strconv" ) func parseUintOrDefault(intStr string, _default uint64) uint64 { if value, err := strconv.ParseUint(intStr, 0, 64); err != nil { return _default } else { return value } } func parseIntOrDefault(intStr string, _default int64) int64 { if value, err := strconv.ParseInt(intStr, 0, 64); err != nil { return _default } else { return value } } The controller function is pretty simple: func (c BidItemCtrl) List() revel.Result { lastId := parseIntOrDefault(c.Params.Get("lid"), -1) limit := parseUintOrDefault(c.Params.Get("limit"), uint64(25)) biditems, err := c.Txn.Select(models.BidItem{}, `SELECT * FROM BidItem WHERE Id > ? LIMIT ?`, lastId, limit) if err != nil { return c.RenderText( "Error trying to get records from DB.") } return c.RenderJson(biditems) } Update BidItem. func (c BidItemCtrl) Update(id int64) revel.Result { biditem, err := c.parseBidItem() if err != nil { return c.RenderText("Unable to parse the BidItem from JSON.") } // Ensure the Id is set. biditem.Id = id success, err := c.Txn.Update(&biditem) if err != nil || success == 0 { return c.RenderText("Unable to update bid item.") } return c.RenderText("Updated %v", id) } GORP takes the convention of returning a tuple of int64 and Error on Updates and Deletes.
The int64 is either a 0 or 1 value signifying whether the action was successful or not. Delete BidItem. And the last action for the controller. func (c BidItemCtrl) Delete(id int64) revel.Result { success, err := c.Txn.Delete(&models.BidItem{Id: id}) if err != nil || success == 0 { return c.RenderText("Failed to remove BidItem") } return c.RenderText("Deleted %v", id) } Conclusion. While a little long, I think that was probably the most complete walkthrough/example you'll find on implementing a Controller in Revel with GORP and MySQL. I hope this helps you in your efforts to learn Go, GORP, or Revel.
http://rclayton.silvrback.com/revel-gorp-and-mysql
Have you ever tried to take a snapshot of a video and yet you get an image where everyone is looking glum? Wouldn't it be great if somehow you could find the "smiliest" frame from a video and extract that for the snapshot? Well, that is what I try to build in this video! A 1080p version of the video is available on Cinnamon This is a video taken from my weekly show "ML for Everyone" that broadcasts live on Twitch every Tuesday at 2pm UK time. To detect the faces and the smiles we are using the open source computer vision library OpenCV. Objective The purpose of this code is to be used with Choirless, a project I'm working on for Call for Code. We need to extract thumbnail images of the rendered choir in order to display in the UI. At the moment we are taking the first frame of the video, but that is generally not the best frame to use, and later frames with people smiling as they sing are better. Process Through the course of the coding session I developed a solution incrementally: first starting with something that just detected faces, then developing that further to detect smiles, then optimizing it to only process key frames so as to run quicker. The main parts of the code are: The Face detector This code uses the detectMultiScale method of the face classifier to first find faces in the frame. Once those are found, within each "region of interest", we look for smiles using the smile detector. def detect(gray, frame): # detect faces within the greyscale version of the frame faces = face_cascade.detectMultiScale(gray, 1.1, 3) num_smiles = 0 # For each face we find...
for (x, y, w, h) in faces: if args.verbose: # draw rectangle if in verbose mode cv2.rectangle(frame, (x, y), ((x + w), (y + h)), (255, 0, 0), 2) # Calculate the "region of interest", i.e. the area of the frame # containing the face roi_gray = gray[y:y+h, x:x+w] roi_color = frame[y:y+h, x:x+w] # Within the grayscale ROI, look for smiles smiles = smile_cascade.detectMultiScale(roi_gray, 1.05, 6) # If we find smiles then increment our counter if len(smiles): num_smiles += 1 # If verbose, draw a rectangle on the image indicating where the smile was found if args.verbose: for (sx, sy, sw, sh) in smiles: cv2.rectangle(roi_color, (sx, sy), ((sx + sw), (sy + sh)), (0, 0, 255), 2) return num_smiles, frame In the main body of the code we open the video file and loop through each frame of the video and pass it to the detect method above to count the number of smiles in that frame. We keep track of the "best" frame, the one with the most smiles, in: # Keep track of best frame and the high water mark of # smiles found in each frame best_image = prev_frame max_smiles = -1 while 1: # Read each frame ret, frame = cap.read() # End of file, so break loop if not ret: break # Calculate the difference between this frame and the previous one diff = cv2.absdiff(frame, prev_frame) non_zero_count = np.count_nonzero(diff) # If not "different enough" then short circuit this loop if non_zero_count < thresh: prev_frame = frame continue # Convert the frame to grayscale gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) # Call the detector function num_smiles, image = detect(gray, frame.copy()) # Check if we have more smiles in this frame # than our "best" frame if num_smiles > max_smiles: max_smiles = num_smiles best_image = image # If verbose then show the image to console if args.verbose: print(max_smiles) cv2.imshow('Video', best_image) cv2.waitKey(1) prev_frame = frame There is also an optimization step in which we run a preliminary loop to see how different each frame is from its predecessor.
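The full pre-analysis pass isn't shown here; below is a minimal sketch of how such a cutoff could be derived from the per-frame difference counts. The function name and the 5% default are my own, not from the original script:

```python
def smile_threshold(diff_counts, keep_percent=5):
    """Pick a cutoff so that only the most-different `keep_percent`
    of frames survive a `count < thresh -> skip` check."""
    if not diff_counts:
        return 0
    ranked = sorted(diff_counts)
    # Integer index of the (100 - keep_percent) percentile, clamped
    # so a tiny video still returns a valid element.
    idx = min(len(ranked) * (100 - keep_percent) // 100, len(ranked) - 1)
    return ranked[idx]

# 100 frames with difference scores 0..99: the cutoff is 95, so only
# the frames scoring 95..99 (5 of them) get the expensive detection.
thresh = smile_threshold(list(range(100)))
```

With a cutoff like this, the main loop's `continue` statement skips roughly 95% of the frames, which is why the smile detection stage runs so much faster.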
We can then calculate a threshold that will result in us only processing the 5% of the frames that have the most difference. The full code is available on Github at: Results It works pretty well. If you run the code with the --verbose flag then it will display each new "best" frame on the screen and the final output image will have rectangles drawn on so you can see what it detected: % python smiler.py --verbose test.mp4 thumbnail-rectangles.jpg Pre analysis stage Threshold: 483384.6 Smile detection stage 5 6 7 8 9 Number of smiles found: 9 As you can see though, it wasn't perfect and it detected what it thought were smiles in some of the images of instruments. Next Week I'll be trying a slightly different approach and training a Convolutional Neural Network (CNN) to detect happy faces. Find out more on the event page: I hope you enjoyed the video; if you want to catch them live, I stream each week at 2pm UK time on the IBM Developer Twitch channel:
https://practicaldev-herokuapp-com.global.ssl.fastly.net/hammertoe/smile-detector-using-opencv-to-detect-smiling-faces-in-a-video-4l80
On Jan 14, 2005, at 17:14, Bob Ippolito wrote: > On Jan 14, 2005, at 16:56, Kevin Dangoor wrote: > >. >> >> Wow. That was quick! >> >> I didn't realize that there was a gotcha with the imports. That was >> just a premature optimization, so I can easily avoid that :) >> >> Thanks for your help... that's certainly not the kind of thing I >> would have just guessed... > > I don't think there is typically a gotcha with imports, I've certainly > never seen this happen before, and I have done imports from > applicationDidFinishLaunching: (pygame, in particular) before. I have > no idea if I should be blaming Cheetah, PyObjC or Python 2.3.0 > (haven't tested with 2.4 or CVS), but I will try and remember to dig > in later. I have traced the problem. It is a two-parter: (1) There is a module namespace conflict: WebWare has a package named WebKit PyObjC has a package named WebKit (2) Cheetah.Servlet checks to see if WebWare's WebKit is available, and ends up importing PyObjC's WebKit. (3) For whatever reason, it is not safe to import the WebKit wrapper from inside of an action (unless it's already imported, of course). Now this issue I will have to look further into. --- So apparently it's more or less undefined behavior if you have both WebWare and PyObjC installed! Fun :) -bob
http://sourceforge.net/p/pyobjc/mailman/pyobjc-dev/?viewmonth=200501&viewday=14
I am trying to delete an Impedance tool in python but I can not manage to do it. In hoc it is very easy:

objref zz create soma zz = new Impedance() Impedance[0]

When I do

objref zz Impedance[0] nrniv: Object ID doesn't exist: Impedance[0]

As expected! -------------------------------------------------------------------------- But when I try the same thing in python

from neuron import h soma = h.Section() zz = h.Impedance() h.Impedance[0] <hoc.HocObject at 0x1218c90>

When I try to delete the reference

zz = None h.Impedance[0] <hoc.HocObject at 0x1218f60>

I also tried the StringFunctions()

str = h.StringFunctions() str.references(Impedance[0]) Impedance[0] has 4 references found 0 of them

So how can I delete the Impedance tool? (The same thing happens with IClamp, VClamp, etc.) Thank You
https://www.neuron.yale.edu/phpBB/viewtopic.php?p=13098
This blog post is dedicated to my full analysis of the custom sample given to us in the course. It’s a simple piece of “malware”, but I ended up having a lot of fun reverse engineering it. This sample contains a combination of techniques that are taught in the course, including string obfuscation, injections, packing, and others! Let’s dive right in! 2. First Stage I. Triage The sample comes with this README asking us to analyze a piece of malware on an infected machine. Hi there, During an ongoing investigation, one of our IR team members managed to locate an unknown sample on an infected machine belonging to one of our clients. We cannot pass that sample onto you currently as we are still analyzing it to determine what data was exfiltrated. However, one of our backend analysts developed a YARA rule based on the malware packer, and we were able to locate a similar binary that seemed to be an earlier version of the sample we're dealing with. Would you be able to take a look at it? We're all hands on deck here, dealing with this situation, and so we are unable to take a look at it ourselves. We're not too sure how much the binary has changed, though developing some automation tools might be a good idea, in case the threat actors behind it start utilizing something like Cutwail to push their samples. I have uploaded the sample alongside this email. Thanks, and Good Luck! Let’s look at the sample in PeStudio to see what we are dealing with. Here, we can see that it’s a normal Windows executable with the magic bytes MZ. Also, the entropy is 7.434, so it is likely that this first executable is packed. Entropy is a computer science concept that measures the degree of randomness in a system, so high entropy means that the data inside is really random and disordered, which suggests that the author has possibly obfuscated some part of the executable. We can check for this in the sections.
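PeStudio reports that number for you, but per-byte Shannon entropy is simple to compute yourself. A small sketch (my code, not from the write-up) that scores 0.0 for constant data and 8.0 for uniformly distributed bytes:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Per-byte Shannon entropy in bits (0.0 = constant, 8.0 = uniform)."""
    if not data:
        return 0.0
    n = len(data)
    # Sum of -p * log2(p) over the observed byte frequencies.
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

low = shannon_entropy(b"\x00" * 1024)      # constant data: entropy == 0.0
high = shannon_entropy(bytes(range(256)))  # every byte once: entropy == 8.0
```

Running the same function over each PE section (e.g. the raw bytes of .rsrc) gives the per-section figures the triage tools display.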
We can see that the resource section (.rsrc) has really high entropy, so we can safely assume that the next stage executable is stored in there. There are a lot of valid imports from kernel32.dll, so this is not obfuscated like I thought. Let's see if we can unpack it dynamically! II. Simple Dynamic Unpacking My favorite way of unpacking that I learned from this course and OALabs is running the sample, attaching a debugger, and catching the next stage being written into memory! First, we need to set up the breakpoints in x32dbg. We need breakpoints on VirtualAlloc, VirtualProtect, CreateProcessInternalW, WriteProcessMemory, and ResumeThread to make sure that we catch when the malware writes the unpacked executable into memory and launches a process to execute it! Also, we should have a breakpoint on IsDebuggerPresent just to be safe! After setting them, we can run and hit our first VirtualAlloc. This gives me a return value of 0x001F0000. After following it in dump, I hit run again. When we stop at a CreateProcessInternalW call, we see that the buffer returned earlier has been written with an executable! When we dump it out, we will see that it is the second stage of this sample! III. Static Analysis That was a bit anticlimactic… Since this is a full analysis, let's not just move on to the second stage. Instead, we can reverse engineer this sample to see how it generates this next stage as well as how it injects the new executable into a new process! The first thing we see in the main function is this. There are weird strings being used as the parameter for the function sub_401300. This is almost 99% a function to decrypt these strings. Also, we can see that these strings, after being decrypted in sub_401300, will be pushed to LoadLibraryA and GetProcAddress, so they are strings containing dll/api names. Let's analyze the decrypting function. First, the length of the encrypted string is checked to make sure it's not zero.
Then, a decrypting key is constructed as we can see. Because of the way IDA sets up the stack for local variables, this is a bit messy because it's basically a stack string. But essentially, v12, v13, v14, v15, v16, and v17 are one single stack string with the starting point at v12 and the final null byte at v17. Let's look to see what exactly this string is. Again, the way IDA displays strings is backward because of endianness, but when we build it up, the string is abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890./= Then, there is a loop through each character in the encrypted string. Each of these characters is checked for the index where it appears in the decrypting key. This index is then added with 0xD or 13, and if this result is greater than the length of the decrypting key, the result is basically modded with the length to wrap back around to the front. This result is used as an index in the decrypting key to produce the decrypted character. Overall, this is just a ROT13 algorithm. We can quickly build a small python function to help us decrypt strings from now on. def decrypt(string): decrypt_key = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890./=' result_str = '' for character in string: index = decrypt_key.find(character) result_str += decrypt_key[(index + 0xD) % len(decrypt_key)] print(result_str) After decrypting everything, we can go ahead and document that in IDA and move on. Back in main, it's decrypting kernel32.dll, FindResourceA, LoadResource, SizeofResource, and LockResource, and finds the address of those functions in kernel32.dll. Then, it calls those functions to finally get a pointer to the resource using LockResource. Here, it extracts the length at index 0x8 of the resource and multiplies it by 10. We are safe to assume that this total length is the length of the next stage executable from the VirtualAlloc and memmove calls.
Also, since it’s copying from the resource at index 0x1C, we can also assume the encrypted executable starts at 0x1C. Next, we can see the obvious RC4 decryption algorithm being implemented here. In the second stage of RC4 KSA, we can see the key being used to encrypt/decrypt. In IDA, it is *(v21 % 0xF + resource_ptr2 + 0xC) through every loop. Since v21 is incremented every time and modded with 0xF, v21 will be in the range of 0 to 15. From here, we can assume that the key is 15-byte long in length, and it’s stored at index 0xC in the resource section. If we view the resource in Resource Hacker, the RC4 key is shown to be kkd5YdPM24VBXmi, and everything after 0x1B is the encrypted executable like we discuss above. At this point, it is clear how the executable is unpacked, we can just dump the this resource out and write a small python script to decrypt it. import arc4 def RC4_decrypt(encrypted_data): return arc4.ARC4('kkd5YdPM24VBXmi').decrypt(encrypted_data) Finally, we see this call sub_401000(new_executable); before returning. This is probably the function that does process injection to launch the next state. When we go inside this function, it’s easy to recognize that the classic Process Hollowing or RunPE is being used here for process injection. I’m kinda lazy to analyze this part fully because it’s 95% the same as my Process Hollowing implementation I wrote a while back, so you can read up my post here if you want to see my explanation. I’ll just include the decompiled code from IDA here with a few comments of explanation here instead! 3. Second Stage I. Triage When analyzing this second stage, let’s try and use the same triage technique we used for stage one and see if it works. First, let’s put it in PeStudio This looks good cause the entropy is around 6.182, which is decently low!! We can assume that it probably does not store any packed executable inside. The import table for this looks decent too, so there is no unpacking stuff to worry about here. 
When getting to this stage, I decided that I should just take a snapshot of the VM and try running it while monitoring using Process Hacker, seeing if I can spot any process injection or not. Surprisingly, I noticed that it spawns a new svchost.exe. I did not know what this does to my machine, but it is something to keep in mind when I move on to doing dynamic and static analysis. I also tried the dynamic unpacking technique from the first stage, but the process just quit when being run in x32dbg, so I probably had to throw it in IDA to start static analysis. II. Static Analysis First thing we see in main is this. This just writes the entire file path into &Filename and puts it in an infinite loop calling strtok to update the pointer until there is no \ character in the string. Basically, it just strips off the path, leaving only the file name. Then, it pushes the file name and its length to sub_401660 and compares the return value with some hex. Maybe this function is some hashing algorithm, so let's analyze it. Here, we are seeing the array dword_416690 being initialized with values generated from a bunch of shift and xor. This xor key 0xEDB88320 turns out to be the polynomial representation of CRC-32. Basically, this polynomial is used to generate the CRC32 checksum table, so dword_416690 is the checksum table! Next, this piece of code just loops through the string from the parameter and generates a hash for it using the checksum table before returning it. Basically this entire function is just CRC32! The file name is hashed using CRC32, and the result is compared to 0xB925C42D. We can't really guess which string this hash corresponds to, and it would be painful to bruteforce… However, we know that it spawns a process with the name svchost.exe, so maybe we can try and hash it to see if it matches! I used an online CRC32 calculator for this cause I'm lazy. And it does match!
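The initialization loop seen in IDA (eight shift/xor rounds per table entry with the reflected polynomial 0xEDB88320) is exactly the standard CRC-32 table setup, so it can be reproduced and checked locally instead of using an online calculator. A sketch, not the sample's code:

```python
import binascii

def make_crc32_table(poly: int = 0xEDB88320) -> list:
    """Build the 256-entry table the sample initialises at startup."""
    table = []
    for i in range(256):
        c = i
        for _ in range(8):
            # Shift right; xor in the polynomial when the low bit is set.
            c = (c >> 1) ^ poly if c & 1 else c >> 1
        table.append(c)
    return table

CRC_TABLE = make_crc32_table()

def crc32(data: bytes) -> int:
    """Table-driven CRC-32 with the usual init/final xor of 0xFFFFFFFF."""
    crc = 0xFFFFFFFF
    for b in data:
        crc = (crc >> 8) ^ CRC_TABLE[(crc ^ b) & 0xFF]
    return crc ^ 0xFFFFFFFF

# Sanity checks: the classic CRC-32 check value, and agreement with
# Python's built-in implementation on the file name of interest.
assert crc32(b"123456789") == 0xCBF43926
assert crc32(b"svchost.exe") == binascii.crc32(b"svchost.exe")
```

Since it matches `binascii.crc32`, any hash pulled from the binary can be compared against candidate names directly in Python.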
So basically, the first check is checking if this executable is running under the file name of svchost.exe! If the file name is svchost.exe, it calls the function sub_401DC0. We notice in this function that sub_401210 is called multiple times with a different hex value every time, so it's highly likely that it's resolving an api from the hash in the second parameter. That assumption seems to be correct. The first parameter is an index into an array of dll names where index 0 is kernel32.dll, index 1 is ntdll.dll, and index 2 is wininet.dll. Then, it will loop through all of the exported function names of the dll specified by the index, generate the CRC32 hash for each, and compare against the second parameter. If it matches, the address of that api in the dll will be returned. To automate this, I just parsed all the function names into 3 text files called kernel32.txt, ntdll.txt, and wininet.txt. After that, I just wrote a small python script for this function to bruteforce the hash of each function until we find the correct one. I used a dictionary in Python for that sweet O(1) look up time!!
from binascii import crc32 kernel32_dict = {} ntdll_dict = {} wininet_dict = {} dll_list = [kernel32_dict, ntdll_dict, wininet_dict] file_list = [open('kernel32.txt', 'r'), open( 'ntdll.txt', 'r'), open('wininet.txt', 'r')] def crc_checksum(string): return crc32(bytes(string, 'utf-8')) % (1 << 32) for i in range(0, 3): dll_file = file_list[i] for line in dll_file: line = line[:-1] dll_list[i][crc_checksum(line)] = line for each in file_list: each.close() def find_api_from_hash(dll_index, api_hash): if dll_index in range(0, 3): if api_hash in dll_list[dll_index]: print("Hash {} = {}".format( hex(api_hash), dll_list[dll_index][api_hash])) return print("Look up fails...") while True: user_input = input("Please enter index and hash: ") if user_input == 'end': break user_input = user_input.split(' ') api_hash = int(user_input[1][2:], base=16) find_api_from_hash(int(user_input[0]), api_hash) After resolving api addresses, we see that it resolves InternetOpenA, InternetOpenUrlA, InternetReadFile, and InternetCloseHandle. Next, seems like it gets an encrypted stack “string”. The stack string looks like this in hex. In the loop processing this string, it extract one byte at a time, rotate left by 4 (basically just swapping the position of the hex character e.g DA -> AD), and xor it with 0xC5. This can be automated in a python script too, which will generate the string which seems to be a URL to a file on pastebin. def decrypt_string2(): result_str = '' string = EA'.replace( ' ', '') for i in range(0, len(string), 2): temp = int(string[i:i + 2][::-1], base=16, ) ^ 0xc5 result_str += chr(temp) print(result_str) # Then, this URL string is passed into sub_401290, so let’s see what is in there! Here, there are a bunch of Http functions, but basically it just sends a GET request to the URL and read the file from pastebin to a virtual buffer and returns it! So we know that the pastebin URL has the file to the next stage, and the return value should be the next stage! 
Let’s rename this function to get_remote_file and move on. Next, this buffer is passed into sub_4013A0. Surprisingly, the first call in this function is to get_remote_file again. So this mean that the call previously retrieves a link and write it into the buffer which is the parameter for sub_4013A0, and now that link is push to get_remote_file to get, possibly, the file of the next step. Next, we have a similar decrypting loop with the left rotation by 4 and xor. Again, we can automate this to get the decrypted string which is \output.. def decrypt_string3(): result_str = '' string = '34 07 A6 B6 F6 A6 B6 13'.replace(' ', '') for i in range(0, len(string), 2): temp = int(string[i:i + 2][::-1], base=16, ) ^ 0x1f result_str += chr(temp) print(result_str) This looks like a file name, so let’s rename that and move on. This part looks kind of weird, but ultimately, it just gets the path to the Temp directory and then append the file name that starts with \output. to the path. Then, it just creates the file and write to it. If we want to dump this out in a debugger, we just need to have a breakpoint on WriteFile and dump the parameter for the entire buffer! Again, another decrypting loop! This time with the xor bytes of 0x9A. This will decrypt into the string cruloader!! def decrypt_string4(): result_str = '' string = '9F 8E FE 6F 5F BF EF FF 8E'.replace(' ', '') for i in range(0, len(string), 2): temp = int(string[i:i + 2][::-1], base=16) ^ 0x9A result_str += chr(temp) print(result_str) Here, we can see there is a loop looping until it reaches the end of the newly created file or until it finds the string “cruloader” in that file. After that, the file pointer points to the section after that string! Now, we finally see the xor decryption method! It seems like everything after cruloader is encrypted with the xor key 0x61! If we can dump this file out, decrypting this file should be straightforward! 
Here is my full script to download the file and extract it for this stage (it needs import urllib.request and from requests import get at the top)! This is the image downloaded from the second URL! first_url = '' url_handle = urllib.request.urlopen(first_url) second_url = url_handle.read().decode() image_file = get(second_url).content result = bytearray() final_executable = image_file[image_file.index(b'cruloader'[::-1]) + 9:] for each in final_executable: result.append(each ^ 0x61) last_stage = open('.\last_stage.exe', 'wb') last_stage.write(result) last_stage.close() Next, we enter a function that decrypts the string C:\Windows\System32\svchost.exe and creates a suspended process with this name! We know that the next stage executable will be injected into this process and launched! In the next function, here is a chain of function calls. It's a bit too long, but this list will give us a sense of how process hollowing is taking place in this function! 1. GetThreadContext -> Get the context of the spawned thread 2. ReadProcessMemory -> read thread context 3. NtUnmapViewOfSection -> hollowing memory 4. VirtualAllocEx -> allocate remote buffer in the spawned process 5. WriteProcessMemory -> Write the executable buffer into the process remote buffer 6. VirtualProtectEx 7. SetThreadContext -> setup the context (entry point,...) 8. ResumeThread -> Resume suspended thread Now we have finished that, let's go back in main to see what happens if the original file name is not svchost.exe! IsDebuggerPresent is called with the function sub_401000. This function resolves CreateToolhelp32Snapshot, Process32FirstW, Process32NextW and proceeds to call them. Seems like it's just looping over all of the processes and hashing the name of each process using CRC32. This hash will be compared with a list of hard-coded hashes, and if they match, the process exits immediately. These are the processes it's checking for. This method of evasion is kind of easy to avoid if we just change our tools' names to something else!
659B537E: x64dbg.exe D2F05B7D: x32dbg.exe 47742A22: wireshark.exe 7C6FFE70: processhacker.exe If the checks are good and it detects that it's not being monitored or run in a debugger, it will decrypt C:\Windows\System32\svchost.exe and create a suspended process with this name. Then, it will just copy its own executable code into this new process and call CreateRemoteThread to start the thread. This new process will then download and execute the last stage! 4. Final Stage At this point, things seem really straightforward! The entropy is extremely low, so I guess there won't be any more obfuscation left! And main is extremely short! It's just creating a message box and displaying a message! Seems like we have finished analyzing the entire sample! 5. Remark This has been a fun activity for me to practice reverse engineering and analyzing malware! If you guys want to learn about these topics, make sure to check out Zero2Automated!!!
https://chuongdong.com/reverse%20engineering/2020/11/11/zero2auto/
Writing a Custom Membership Provider for the Login Control in ASP.NET 2.0 By Dina Fleet Berry In ASP.NET 2.0 with Visual Studio (VS) 2005, you can program custom authenticated pages quickly with the Membership Login controls provided. These controls can be found in VS 2005 in the toolbox under the Login section and include: Pointer, Login, LoginView, PasswordRecovery, LoginStatus, LoginName, CreateUserWizard, and ChangePassword. These controls can be found in the framework library as a part of the System.Web.Security namespace. This article will focus on the Login control. Note: This article was written based on the November 2004 Community Release version of Visual Studio 2005. Due to the pre-release status of the information in this article, any URLs, class names, control names, etc., may change before release. The Login Control and the Membership Provider A membership provider is the glue between the Login control and the membership database. The login control doesn't care if the membership provider is a custom provider or a Microsoft provider. The login control knows which provider to instantiate based on entries in the web.config file. The custom provider acts just like the Microsoft-supplied providers because it inherits from and overrides the MembershipProvider class. Steps Involved when Using a Custom Provider with the Login Control There are three main steps required to use a custom provider with the Login control. - The Login control needs to be placed on an aspx page (login.aspx). - The custom provider class needs to inherit from MembershipProvider and override certain methods. - The web.config file needs to be modified to use a custom provider.
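The article's actual web.config changes are not included in this excerpt; a typical ASP.NET 2.0 membership-provider registration looks like the following sketch (the type and assembly names here are placeholders, not the article's values):

```xml
<configuration>
  <system.web>
    <membership defaultProvider="MyCustomMembershipProvider">
      <providers>
        <clear/>
        <add name="MyCustomMembershipProvider"
             type="MyAssembly.MyCustomMembershipProvider, MyAssembly"/>
      </providers>
    </membership>
  </system.web>
</configuration>
```

The name attribute is what a Login control can reference through its MembershipProvider property, and defaultProvider makes this provider the one used when no name is given.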
- The Login control needs to be placed on an .aspx page (login.aspx).
- The custom provider class needs to inherit from MembershipProvider and override certain methods.
- The web.config file needs to be modified to use the custom provider.

How the Login Control Works

The Login control is found in the Framework as a part of the System.Web.UI.WebControls namespace as the Login class. This class contains the functionality for the Login control. The majority of the functionality deals with visual style and event handling.

<asp:Login ID="Login1" runat="server" DestinationPageUrl="~/hello.aspx"></asp:Login>

You don't need to have any code in the Login.aspx.cs code page. The control knows how to call the custom provider, which does all the work, because the provider is listed in the web.config. If you wanted to change the look and feel of the control on the first visit to the page or on post back, you could manipulate the properties, methods, and events in the code behind. But again, that is optional. The login.aspx.cs code-behind page is a shell page provided by Visual Studio.

For this article, the Context.User.Identity.IsAuthenticated property is set to true upon successful validation. The Login control's DestinationPageURL property tells the web site where to direct the user if the validation is successful.

Classes You Need to Provide

- A custom class inheriting from System.Web.Security.MembershipProvider. In this article, it will be called MyCustomMembershipProvider. This is the custom membership provider.
- A class or classes that glue the above custom class to your database. In this article, these will be called MyCustomUser and MyCustomUserProvider. These two classes could have easily been combined into a single class. This is a choice you can make as you write your own provider implementation.

Note: If you were implementing the standard providers in the framework provided for Active Directory or SQL Server, you would use the MembershipUser class from the framework for this.

The custom provider class declaration looks like this:

public class MyCustomMembershipProvider : MembershipProvider
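As a rough, non-compiling sketch of the class shape (the member names come from the real MembershipProvider base class, but the bodies are placeholders invented for this illustration, not the article's actual implementation):

```csharp
using System;
using System.Web.Security;

// Hypothetical skeleton of the custom provider described above.
// Only ValidateUser does real work; members the Login control never
// calls can simply throw, as the article notes.
public class MyCustomMembershipProvider : MembershipProvider
{
    public override bool ValidateUser(string username, string password)
    {
        // Look the user up in your own membership database here.
        return false; // placeholder
    }

    public override string ApplicationName
    {
        get { return "/"; }
        set { }
    }

    public override MembershipUser GetUser(string username, bool userIsOnline)
    {
        throw new NotImplementedException();
    }

    // ...the remaining abstract overrides are omitted here; each one
    // the Login control does not need can throw NotImplementedException.
}
```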
The methods of MembershipProvider that the Login control does not need are still provided in MyCustomMembershipProvider, but they simply throw "not implemented" exceptions. The two methods of MyCustomMembershipProvider that matter for the custom provider are Initialize and ValidateUser. Initialize is another place, besides the web.config file, to set properties for your custom provider. ValidateUser is the main function the Login control calls to validate the user and password.

public override bool ValidateUser(string strName, string strPassword)
{
   // This is the custom function you need to write. It can do anything you want.
   // The code below is just one example.

   // strName is really an email address in this implementation
   bool boolReturn = false;

   // Here is your custom user object connecting to your custom membership database
   MyUserProvider oUserProvider = new MyUserProvider();
   MyUser oUser = oUserProvider.FindUser(strName);
   if (oUser == null)
      return boolReturn;

   // Here is your custom validation of the user and password
   boolReturn = oUser.ValidateUsersPasswordForLogon(strPassword);
   return boolReturn;
}

ValidateUser takes two parameters, the Name and Password of the user. For many web sites, the Name will be the user's email address. The method returns true or false depending on the results of this validation. All the code inside the method is up to you to provide. The code in the example above is just one possibility.

Successful Validation with the Login Control

Upon successful validation, the Login control will redirect to the page referenced in the DestinationPageURL property; let's call this page hello.aspx. This valid user is now in a context variable and can be retrieved with the Context.User.Identity property.

Failed Validation with the Login Control

The Login control has many properties, methods, and events to manage the look and feel of the control, both on the first instance of the page as well as on post back.
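To illustrate the Context.User.Identity remark, here is a minimal, hypothetical code-behind sketch for the destination page (the Hello class name and the greeting are invented for this example; the Context.User.Identity members are the real ASP.NET API):

```csharp
using System;
using System.Web.UI;

// Hypothetical code-behind for hello.aspx, the DestinationPageURL target.
public partial class Hello : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (Context.User.Identity.IsAuthenticated)
        {
            // The name the user validated with (an email address in this article)
            Response.Write("Hello, " + Context.User.Identity.Name);
        }
    }
}
```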
A default failure message is provided and will appear on the Login control if validation is unsuccessful.

Web.Config

The web.config file will need several new pieces. In order to glue the Login control to your custom membership provider, you will need a section called <membership>. You can set the properties of the custom provider in this section. You can also control these properties from the custom membership provider class.

The web.config used for this article assumes some .aspx files should be accessible only after login is validated, and some files should always be available. The two types of files are located in the 'support' and 'support_unrestricted' directories used in the <location> tags.

<?xml version="1.0"?>
<configuration>
  <appSettings>
    <add key="ConnectionString" value="server=XXX;database=XXX;uid=XXX;password=XXX;"/>
  </appSettings>
  <system.web>
    <compilation debug="true"/>
    <authorization>
      <allow users="*" />
    </authorization>
    <authentication mode="Forms">
      <forms name=".ASPXAUTH" loginUrl="~/support/Login.aspx"
             protection="Validation" timeout="999999" />
    </authentication>
    <membership defaultProvider="MyCustomMembershipProvider" userIsOnlineTimeWindow="15">
      <providers>
        <add name="MyCustomMembershipProvider"
             type="PostPointSoftware.MyCustomMembershipProvider"
             enablePasswordRetrieval="true"
             enablePasswordReset="true"
             requiresQuestionAndAnswer="false"
             applicationName="/"
             requiresUniqueEmail="true"
             passwordFormat="Clear"
             description="Stores and retrieves membership data from SQL Server" />
      </providers>
    </membership>
  </system.web>
  <location path="images">
    <system.web>
      <compilation debug="true"/>
      <authorization>
        <allow users="*" />
      </authorization>
    </system.web>
  </location>
  <location path="support">
    <system.web>
      <compilation debug="true"/>
      <authorization>
        <deny users="?" />
      </authorization>
    </system.web>
  </location>
  <location path="support_unrestricted">
    <system.web>
      <compilation debug="true"/>
      <authorization>
        <allow users="*" />
      </authorization>
    </system.web>
  </location>
</configuration>

The Required Web.Config Sections

- Authorization: This section specifies who can access that location (directory).
- Authentication: This section specifies how the location is accessed. In the above example, the <authentication> section is specified for the entire web site and uses the ~/support/login.aspx file as the authentication file. This is the file where the Login control will be used.
- Membership: This is the section that ties the Login control to the custom membership provider.

Why Use the New ASP.NET 2.0 Membership Controls at All?

Since there is some programming work to get the new controls to talk to your old database structure, you may be asking yourself: is it worth the trouble? The answer to this seems to be an active debate on several discussion boards.

The benefits of this method are:

- You can program the custom provider to provide as little or as much as you need. If you just want the Login control to work with your custom membership database, you don't necessarily have to write a lot of code. You will have to be more thoughtful in your upfront design to make sure you are covering just what you need.
- The new web-based Membership Administration functionality in ASP.NET 2.0 will consume your custom provider.
  So you get both the ability to use the new controls and the ability to use the new web-based administration features.
- Assuming Microsoft will grow this area of functionality over time, you can continue to make use of your original work.

The disadvantages of this method are:

- The new login controls may be more, less, or different than what you need for your web site. Most web sites already have this membership authentication functionality, so rewriting just to get it into ASP.NET 2.0 is probably a poor decision.
- If you have your own administration web site or program for your custom membership, writing the additional code to make use of the new web-based Membership Administration functionality is not necessary -- just don't write those pieces. It's easy to tell which pieces to skip, because all the required functions for the web-based administration deal with collections of users, whereas the Login control functions deal with a single user.
- Microsoft may abandon these controls. It's not likely, but it is possible.

Summary

The new Login control provided with ASP.NET 2.0 is a simple way to implement validation for your web site. Developing a custom provider to interact with your own membership database is easy.

References

The current MSDN location for the documentation covers System.Web.Security and System.Web.UI.WebControls.
http://www.codeguru.com/csharp/.net/net_security/article.php/c19415/Writing-a-Custom-Membership-Provider-for-the-Login-Control-in-ASPNET-20.htm
NAME

perlfaq7 - General Perl Language Issues ($Revision: 10100 $)

DESCRIPTION

This section deals with general Perl language issues that don't clearly fit into any of the other sections.

Can I get a BNF/yacc/RE for the Perl language?

There is no BNF, but you can paw your way through the yacc grammar in perly.y in the source distribution if you're particularly brave. The grammar relies on very smart tokenizing code, so be prepared to venture into toke.c as well.

In the words of Chaim Frenkel: "Perl's grammar can not be reduced to BNF. The work of parsing perl is distributed between yacc, the lexer, smoke and mirrors."

What are all these $@%&* punctuation signs, and how do I know when to use them?

They are type specifiers, as detailed in perldata: $ for scalar values, @ for arrays, % for hashes, & for subroutines, and * for all types of that particular symbol name. A couple of others that you're likely to encounter that aren't really type specifiers are <> (used for inputting a record from a filehandle) and \ (which takes a reference to something). Note that <FILE> is neither the type specifier for files nor the name of the handle. It is the <> operator applied to the handle FILE. It reads one line (well, record--see $/) from the handle FILE in scalar context, or all lines in list context.

Do I always/never have to quote my strings or use semicolons and commas?

Normally, a bareword doesn't need to be quoted, but in most cases probably should be (and must be under use strict). But a hash key consisting of a simple word (that starts with a letter or underscore and is composed only of letters, digits, and underscores) and the left-hand operand to the => operator both count as though they were quoted:

    This            is like this
    ------------    ---------------
    $foo{line}      $foo{'line'}
    bar => stuff    'bar' => stuff

The final semicolon in a block is optional, as is the final comma in a list.

How do I temporarily block warnings?

If you are running Perl 5.6.0 or better, the use warnings pragma allows fine control of what warnings are produced. See perllexwarn for more details.

    {
        no warnings;      # temporarily turn off warnings
        $x = $y + $z;     # I know these might be undef
    }

If you have an older version of Perl, the $^W variable (documented in perlvar) controls runtime warnings for a block:

    {
        local $^W = 0;    # temporarily turn off warnings
        $x = $y + $z;     # I know these might be undef
    }

What's an extension?

An extension is a way of calling compiled C code from Perl. Reading perlxstut is a good place to learn more about extensions.

Why do Perl operators have different precedence than C operators?

Actually, they don't. All C operators that Perl copies have the same precedence in Perl as they do in C. The problem is with operators that C doesn't have, especially functions that give a list context to everything on their right, e.g., print, chmod, and exec. Such functions are called "list operators" and appear as such in the precedence table in perlop.

One note: the ** operator binds more tightly than unary minus, so -2**2 produces a negative, not a positive, four. It is also right-associating, meaning that 2**3**2 is two raised to the ninth power, not eight squared.

Although it has the same precedence as in C, Perl's ?: operator produces an lvalue. This assigns $x to either $a or $b, depending on the trueness of $maybe:

    ($maybe ? $a : $b) = $x;

How do I declare/create a structure?

In general, you don't declare a structure. Just use a (probably anonymous) hash reference. See perlref and perldsc for details. Here's an example:

    $person = {};                  # new anonymous hash
    $person->{AGE}  = 24;          # set field AGE to 24
    $person->{NAME} = "Nat";       # set field NAME to "Nat"

If you're looking for something a bit more rigorous, try perltoot.

How do I create a module?

(contributed by brian d foy)

perlmod, perlmodlib, and perlmodstyle explain modules in all the gory details.
perlnewmod gives a brief overview of the process along with a couple of suggestions about style.

If you need to include C code or C library interfaces in your module, you'll need h2xs. h2xs will create the module distribution structure and the initial interface files you'll need. perlxs and perlxstut explain the details.

If you don't need to use C code, other tools such as ExtUtils::ModuleMaker and Module::Starter can help you create a skeleton module distribution.

You may also want to see Sam Tregar's "Writing Perl Modules for CPAN", which is the best hands-on guide to creating module distributions.

How do I adopt or take over a module already on CPAN?

(contributed by brian d foy)

The easiest way to take over a module is to have the current module maintainer either make you a co-maintainer or transfer the module to you. If you can't reach the author for some reason (e.g. email bounces), the PAUSE admins at modules@perl.org can help. The PAUSE admins treat each case individually.

What's a closure?

Closure is a computer science term with a precise but hard-to-explain meaning. Usually, closures are implemented in Perl as anonymous subroutines with lasting references to lexical variables outside their own scopes. These lexicals magically refer to the variables that were around when the subroutine was defined (deep binding).

Closures make sense in any programming language where you can have the return value of a function be itself a function, as you can in Perl. Here's a classic non-closure function-generating function:

    sub add_function_generator {
        return sub { shift() + shift() };
    }

    $add_sub = add_function_generator();
    $sum = $add_sub->(4,5);                # $sum is 9 now.

The anonymous subroutine returned by add_function_generator() isn't technically a closure because it refers to no lexicals outside its own scope. Using a closure gives you a function template with some customization slots left out to be filled later:

    sub make_adder {
        my $addpiece = shift;
        return sub { shift() + $addpiece };
    }

    $f1 = make_adder(20);
    $f2 = make_adder(555);

Now &$f1($n) is always 20 plus whatever $n you pass in, whereas &$f2($n) is always 555 plus whatever $n you pass in. The $addpiece in the closure sticks around.

Closures are also often used to pass some code into a function:

    my $line;
    timeout( 30, sub { $line = <STDIN> } );

The advantage of the closure here is that it can still access $line back in its caller's scope.

Another use for a closure is to make a variable private to a named subroutine, e.g.
a counter that gets initialized at creation time of the sub and can only be modified from within the sub. This is sometimes used with a BEGIN block in package files to make sure a variable doesn't get meddled with during the lifetime of the package:

    BEGIN {
        my $id = 0;
        sub next_id { ++$id }
    }

This is discussed in more detail in perlsub; see the entry on "Persistent Private Variables".

What is variable suicide and how can I prevent it?

Variable suicide is when you (temporarily or permanently) lose the value of a variable. It is caused by scoping through my() and local() interacting with either closures or aliased foreach() iterator variables and subroutine arguments. It used to be easy to inadvertently lose a variable's value this way, but now it's much harder. Take this code:

    my $f = 'foo';
    sub T {
        while ($i++ < 3) { my $f = $f; $f .= "bar"; print $f, "\n" }
    }
    T;
    print "Finally $f\n";

If you are experiencing variable suicide, that my $f in the subroutine doesn't pick up a fresh copy of the $f whose value is 'foo'. The output shows that inside the subroutine the value of $f leaks through when it shouldn't, as in this output:

    foobar
    foobarbar
    foobarbarbar
    Finally foo

The $f that has "bar" added to it three times should be a new $f (my $f should create a new lexical variable each time through the loop). The expected output is:

    foobar
    foobar
    foobar
    Finally foo

How do I create a static variable?

(contributed by brian d foy)

Perl doesn't have static variables, which can only be accessed from the function in which they are declared. You can get the same effect with lexical variables, though.

You can fake a static variable by using a lexical variable which goes out of scope. In this example, you define the subroutine counter, and it uses the lexical variable $count. Since you wrap this in a BEGIN block, $count is defined at compile-time, but also goes out of scope at the end of the BEGIN block. The BEGIN block also ensures that the subroutine and the value it uses are defined at compile-time, so the subroutine is ready to use just like any other subroutine, and you can put this code in the same place as other subroutines in the program text (i.e. at the end of the code, typically).
The subroutine counter still has a reference to the data, and is the only way you can access the value (and each time you do, you increment the value). The data in the chunk of memory defined by $count is private to counter.

    BEGIN {
        my $count = 1;
        sub counter { $count++ }
    }

    my $start = counter();

    .... # code that calls counter();

    my $end = counter();

In the previous example, you created a function-private variable because only one function remembered its reference. You could define multiple functions while the variable is in scope, and each function can share the "private" variable. It's not really static because you can access it outside the function while the lexical variable is in scope, and even create references to it. In this example, increment_count and return_count share the variable. One function adds to the value and the other simply returns the value. They can both access $count, and since it has gone out of scope, there is no other way to access it.

    BEGIN {
        my $count = 1;
        sub increment_count { $count++ }
        sub return_count    { $count }
    }

To declare a file-private variable, you still use a lexical variable. A file is also a scope, so a lexical variable defined in the file cannot be seen from any other file.

See "Persistent Private Variables" in perlsub for more information. The discussion of closures in perlref may help you, even though we did not use anonymous subroutines in this answer.

What's the difference between dynamic and lexical (static) scoping? Between local() and my()?

local($x) saves away the old value of the global variable $x and assigns a new value for the duration of the subroutine which is visible in other functions called from that subroutine. This is done at run-time, so is called dynamic scoping. local() always affects global variables, also called package variables or dynamic variables.

my($x) creates a new variable that is only visible in the current subroutine.
This is done at compile-time, so is called static or lexical scoping. my() always affects private variables, also called lexical variables or (improperly) static(ly scoped) variables.

For instance:

    sub visible {
        print "var has value $var\n";
    }

    sub dynamic {
        local $var = 'local';    # new temporary value for the still-global
        visible();               #   variable called $var
    }

    sub lexical {
        my $var = 'private';     # new private variable, $var
        visible();               # (invisible outside of sub scope)
    }

    $var = 'global';

    visible();                   # prints global
    dynamic();                   # prints local
    lexical();                   # prints global

Notice how at no point does the value "private" get printed. That's because $var only has that value within the block of the lexical() function, and it is hidden from the called subroutine.

In summary, local() doesn't make what you think of as private, local variables. It gives a global variable a temporary value. my() is what you're looking for if you want private variables.

See "Private Variables via my()" in perlsub and "Temporary Values via local()" in perlsub for excruciating details.

How can I access a dynamic variable while a similarly named lexical is in scope?

If you know your package, you can just mention it explicitly, as in $Some_Pack::var. Note that the notation $::var is not the dynamic $var in the current package, but rather the one in the main package, as though you had written $main::var.

    use vars '$var';
    local $var = "global";
    my $var    = "lexical";

    print "lexical is $var\n";
    print "global  is $main::var\n";

Alternatively you can use the compiler directive our() to bring a dynamic variable into the current lexical scope.

    require 5.006;   # our() did not exist before 5.6
    use vars '$var';

    local $var = "global";
    my $var    = "lexical";

    print "lexical is $var\n";

    {
        our $var;
        print "global  is $var\n";
    }

Why doesn't "my($foo) = <FILE>;" work right?

my() and local() give list context to the right hand side of =. The <FH> read operation, like so many of Perl's functions and operators, can tell which context it was called in, and behaves appropriately. In general, the scalar() function can help. This function does nothing to the data itself (contrary to popular myth) but rather tells its argument to behave in whatever its scalar fashion is. If that function doesn't have a defined scalar behavior, this of course doesn't help you (such as with sort()).

How do I redefine a builtin function, operator, or method?

Why do you want to do that? :-)

If you want to override a predefined function, such as open(), then you'll have to import the new definition from a different module. See "Overriding Built-in Functions" in perlsub. There's also an example in "Class::Template" in perltoot.

If you want to overload a Perl operator, such as + or **, then you'll want to use the use overload pragma, documented in overload.

If you're talking about obscuring method calls in parent classes, see "Overridden Methods" in perltoot.
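As a small, self-contained sketch of the use overload pragma mentioned above (the Vec2 class and its fields are invented for this illustration, not taken from the overload documentation):

```perl
package Vec2;
use strict;
use warnings;
use overload
    '+'  => \&add,          # called for $v + $w
    '""' => \&to_string;    # called when the object is stringified

sub new {
    my ($class, $x, $y) = @_;
    return bless { x => $x, y => $y }, $class;
}

sub add {
    my ($p, $q) = @_;       # a third swap-flag argument exists, but + is symmetric
    return Vec2->new($p->{x} + $q->{x}, $p->{y} + $q->{y});
}

sub to_string {
    my $self = shift;
    return "($self->{x}, $self->{y})";
}

package main;
my $v = Vec2->new(1, 2) + Vec2->new(3, 4);
print "$v\n";    # prints (4, 6)
```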
What's the difference between calling a function as &foo and foo()?

When you call a function as &foo, you allow that function access to your current @_ values, and you bypass prototypes. The function doesn't get an empty @_--it gets yours! While not strictly speaking a bug (it's documented that way in perlsub), it would be hard to consider this a feature in most cases.

When you call your function as &foo(), you do get a new @_, but prototyping is still circumvented.

Normally, you want to call a function using foo(). You may omit the parentheses only if the compiler has already seen the definition of the function (e.g. via use, but not require), or via a forward reference or use subs declaration. Even in this case, you get a clean @_ without any of the old values leaking through where they don't belong.

How do I create a switch or case statement?

If one wants to use pure Perl and to be compatible with Perl versions prior to 5.10, the general answer is to write a construct like this:

    for ($variable_to_test) {
        if    (/pat1/) { }   # do something
        elsif (/pat2/) { }   # do something else
        elsif (/pat3/) { }   # do something else
        else           { }   # default
    }

Here's a simple example of a switch based on pattern matching, lined up in a way to make it look more like a switch statement. We'll do a multiway conditional based on the type of reference stored in $whatchamacallit:

    SWITCH: for (ref $whatchamacallit) {

        /^$/      && die "not a reference";

        /SCALAR/  && do {
                        print_scalar($$ref);
                        last SWITCH;
                     };

        /ARRAY/   && do {
                        print_array(@$ref);
                        last SWITCH;
                     };

        /HASH/    && do {
                        print_hash(%$ref);
                        last SWITCH;
                     };

        /CODE/    && do {
                        warn "can't print function ref";
                        last SWITCH;
                     };

        # DEFAULT

        warn "User defined type skipped";

    }

See perlsyn for other examples in this style.

Sometimes you should change the positions of the constant and the variable, for example, when you want to test which of many answers you were given, in a case-insensitive way that also allows abbreviations.

Note that starting from version 5.10, Perl has a native switch statement. See perlsyn.

Starting from Perl 5.8, a source filter module, Switch, can also be used to get switch and case. Its use is now discouraged, because it's not fully compatible with the native switch of Perl 5.10, and because, as it's implemented as a source filter, it doesn't always work as intended when complex syntax is involved.

How can I catch accesses to undefined variables, functions, or methods?

The AUTOLOAD method, discussed in "Autoloading" in perlsub and "AUTOLOAD: Proxy Methods" in perltoot, lets you capture calls to undefined functions and methods.

When it comes to undefined variables that would trigger a warning under use warnings, you can promote the warning to an error.
    use warnings FATAL => qw(uninitialized);

Why can't a method included in this same file be found?

Some possible reasons: your inheritance is getting confused, you've misspelled the method name, or the object is of the wrong type. Check out perltoot for details about any of the above cases. You may also use print ref($object) to find out the class $object was blessed into.

Another possible reason for problems is that you've used the indirect object syntax (e.g., find Guru "Samy") on a class name before Perl has seen that such a package exists. It's wisest to make sure your packages are all defined before you start using them, which will be taken care of if you use the use statement instead of require. If not, make sure to use arrow notation (e.g., Guru->find("Samy")) instead. Object notation is explained in perlobj.

Make sure to read about creating modules in perlmod and the perils of indirect objects in "Method Invocation" in perlobj.

How can I find out my current package?

If you're just a random program, you can do this to find out what the currently compiled package is:

    my $packname = __PACKAGE__;

But, if you're a method and you want to print an error message that includes the kind of object you were called on (which is not necessarily the same as the one in which you were compiled):

    sub amethod {
        my $self  = shift;
        my $class = ref($self) || $self;
        warn "called me from a $class object";
    }

How can I comment out a large block of perl code?

You can use embedded POD to discard it. Enclose the blocks you want to comment out in POD markers. The =begin directive marks a section for a specific formatter. Use the comment format, which no formatter should claim to understand (by policy). Mark the end of the block with =end.

    # program is here

    =begin comment

    all of this stuff
    here will be ignored
    by everyone

    =end comment

    # program continues

How can I use a variable as a variable name?

Should the variable in such an example be a lexical created with my(), the code wouldn't work at all: you'd accidentally access the global and skip right over the private lexical altogether. Global variables are bad because they can easily collide accidentally and in general make for non-scalable and confusing code.
Symbolic references are forbidden under the use strict pragma. A symbolic reference manipulates the package's symbol-table hash (like %main::) instead of a user-defined hash. The solution is to use your own hash or a real reference instead.

    $USER_VARS{"fred"} = 23;
    $varname = "fred";
    $USER_VARS{$varname}++;  # not $$varname++

There we're using the %USER_VARS hash instead of symbolic references. Sometimes this comes up in reading strings from the user with variable references and wanting to expand them to the values of your perl program's variables. This is also a bad idea because it conflates the program-addressable namespace and the user-addressable one. Instead of reading a string and expanding it to the actual contents of your program's own variables:

    $str = 'this has a $fred and $barney in it';
    $str =~ s/(\$\w+)/$1/eeg;  # need double eval

it would be better to keep a hash around like %USER_VARS and have variable references actually refer to entries in that hash:

    $str =~ s/\$(\w+)/$USER_VARS{$1}/g;  # no /e here at all

That's faster, cleaner, and safer than the previous approach. Of course, you don't need any of this if you build proper data structures using hashes. For example, let's say someone wanted two hashes in their program, %fred and %barney, and a variable to choose between them; the right answer is a single hash of hashes, not symbolic references.

About the only times you must use symbolic references are when you really must refer to the symbol table, or when you need to refer to something that can't take a real reference, such as a format name. Doing so may also be important for method calls, since these always go through the symbol table for resolution. In those cases, you would turn off strict refs temporarily. This doesn't matter for formats, handles, and subroutines, because they are always global--you can't use my() on them. For scalars, arrays, and hashes, though--and usually for subroutines--you probably only want to use hard references.

What does "bad interpreter" mean?

(contributed by brian d foy) The "bad interpreter" message comes from the shell when it can't run the interpreter named on your script's #! line, usually because that file doesn't exist on the target machine. It may also indicate that the source machine has CRLF line terminators and the destination machine has LF only: the shell tries to find /usr/bin/perl<CR>, but can't.

REVISION

Revision: $Revision: 10100 $
Date: $Date: 2007-10-21 20:59:30 +0200 (Sun, 21 Oct 2007) $
http://linux.fm4dd.com/en/man1/perlfaq7.htm
About Django’s setup method for generic view classes Django 2.2 (released in April 2019) introduced a setup method to the django.views.View class, which is available to all generic views. This is intended to be overridden to assign instance attributes that other methods will need. For example, you would use this to load model instances identified in the URL arguments: from django.views import generic from django import shortcuts from myapp import models class Frob(generic.TemplateView): def setup(self, request, *args, **kwargs): super().setup(request, *args, **kwargs) self.frob = shortcuts.get_object_or_404(models.Frob, pk=kwargs["pk"]) To date, I’ve been overriding dispatch to do this - didn’t realise there was a dedicated method. More in the Django view docs
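Django itself wires these hooks together inside `as_view()`: it instantiates the view, calls `setup()`, then calls `dispatch()`. As a framework-free sketch of that flow (the `View`, `setup`, `dispatch`, and `as_view` names mirror Django's real API; `FrobView`, the dict standing in for a model lookup, and the fake request are invented for illustration):

```python
# Minimal mimic of Django's class-based view flow, no Django required.

class View:
    def setup(self, request, *args, **kwargs):
        # Django 2.2+: instance attributes are assigned before dispatch runs
        self.request = request
        self.args = args
        self.kwargs = kwargs

    def dispatch(self, request, *args, **kwargs):
        # real Django routes on request.method; this sketch always calls get()
        return self.get(request, *args, **kwargs)

    @classmethod
    def as_view(cls):
        def view(request, *args, **kwargs):
            self = cls()
            self.setup(request, *args, **kwargs)            # hook runs first...
            return self.dispatch(request, *args, **kwargs)  # ...then dispatch
        return view


class FrobView(View):
    def setup(self, request, *args, **kwargs):
        super().setup(request, *args, **kwargs)
        # stand-in for get_object_or_404(models.Frob, pk=kwargs["pk"])
        self.frob = {"pk": kwargs["pk"]}

    def get(self, request, *args, **kwargs):
        # every handler method can rely on self.frob already being loaded
        return self.frob


frob_view = FrobView.as_view()
```

Because `setup()` always runs before `dispatch()`, the loaded object is available to `get`, `post`, and any other handler without repeating the lookup in each one.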
https://til.codeinthehole.com/posts/about-djangos-setup-method-for-generic-view-classes/
FAFSA DRT changes

The IRS Data Retrieval Tool (DRT) lets students and parents transfer tax return information, needed for the FAFSA, directly from the IRS website. This gives students an easy way to provide accurate tax information on their FAFSA, eliminating the need to provide a copy of parents' tax returns.

Security measures have been updated to address concerns raised by the IRS earlier this year. The updates will not let users see the tax data transferred from the IRS to FAFSA. After the DRT is finished, students will get a confirmation message telling them the information was successfully transferred. Transferred tax information will also not be visible on the Student Aid Report (SAR). Students will instead see the words "Transferred from the IRS" in the data entry fields on the FAFSA and SAR.

With those changes in place, students and parents are likely to have questions about how and whether they should continue to use the DRT. Here are a few important points to keep in mind when speaking with applicants who have questions:

• While using the tool is optional, it's still a more streamlined way to provide data needed for the FAFSA application. It pulls information directly from the IRS, so manual entry errors are eliminated.

• Families may be wondering how to verify that their tax information is correct, especially since they cannot see the data. Assure them the information comes directly from what they filed with the IRS. If they are satisfied they filed their tax return correctly, they should be comfortable with what is now on the FAFSA.

• Tell families that using the DRT can save time on the back end if they are selected for verification. If they have to provide documentation of tax information to the school, it will be less time-consuming with the DRT. Without it, manually reviewing that information can slow the process.
https://issuu.com/thecentermagazines/docs/winter_advisor_2018/12
Event::File::tail - a 'tail -f' implementation using Event

    use Event::File;
    Event::File->tail( file => '/var/log/messages',
                       cb   => \&my_read_callback_function );
    Event::loop;

Event::File::tail is an attempt to reproduce the behaviour of 'tail -f' using Event as the backend. The main difference between this module and other modules that try to implement it is that it leaves room for parallel processing using Event.

file gives the file name to watch, with either an absolute or relative path. The file has to exist during initialization. After that, it can be unlinked and recreated.

position gives where to start reading from the file, in bytes. As the file is read, it will be updated with the last position read.

footprint, if defined, is an array reference of n elements. Each element corresponds to a line from the beginning of the file. If any line does not match in the file, position will be ignored and reading will start at the beginning.

cb is the standard callback: a code reference that will be called after every line read from the file. The newline from the line will be chomped before it is passed. The Event::File::tail object will be passed as the first argument; the read line will be passed as the second argument.

A timeout starts counting after the file read gets to the end. If a new line is added to the file, the timer count is reset. Its main use is to catch the situation where the file was rotated and the rotation was not caught. The file will be closed and reopened. If the file is still the same, it will continue from the position it was at before closing. If the file has really changed, it will be read from the beginning. If not specified, it defaults to 60s.

There is also a callback to call when a timeout occurs; it will only be called if the reopened file turns out to be the same file. Another callback will be called every time the file read gets to the end, so use it if you need to do something after reading the file (instead of during each read line).
desc gives the description of the watcher.

These are the methods available for use with the tail watcher:

desc: returns the description of the watcher.

id: returns the internal watcher id number.

position: returns the current file position.

footprint: returns an array reference of the file's footprint. This is handy if you need to quit your application: after restarting it, the file can be checked to see whether it is the same or not.

loop: a wrapper to Event::loop, for when no other Event watcher is in use. You have to call it somewhere to let Event watch the file for you. $result will hold the value passed by an unloop call (see below). Please refer to the loop function in the Event pod page for more info.

unloop: a wrapper to Event::unloop. This will cancel an active Event::loop, e.g. when called from a callback. $result will be passed to the loop caller. Please refer to Event::unloop for more info.

sweep: a wrapper around Event::sweep. sweep will call any pending event and return. Please refer to Event::sweep for more info.

stop: stops the watcher until an again or start method is called.

again: restarts the watcher.

start: the same as again.

There is also a method to destroy the watcher. Note that if there is a reference to the watcher outside this package, the memory won't be freed.

When do you have to use loop or sweep? Well, that depends. If you are not familiar with Event, the quick and dirty answer is that loop will BLOCK and sweep will not. loop will keep calling the callback functions whenever they are ready and will only return when a callback calls unloop or a timeout happens. On the other hand, if you are not using Event for anything else in your program, this might not be a desired situation. sweep can be called to check whether some event has happened or not. If one has, it will execute all the pending callbacks and then return (as opposed to loop). So, long loops might be a good place to use it.

Event::File::tail is a fake watcher from the Event point of view.
On the other hand, it does use two helper watchers for each Event::File::tail: a read io watcher and a timer watcher. In case you are debugging and need to find out about them, every tail watcher has a unique id during the program execution (use $watcher->id to retrieve it). Each helper watcher has the id number in its description (desc).

Event(3), Tutorial.pdf, cmc

Raul Dias <raul@dias.com.br>
http://search.cpan.org/~rsd/Event-File-0.1.1/lib/Event/File/tail.pm
(2012-03-25 17:10)Gavinny Wrote: Is there any way to automatically launch a plugin when XBMC is started? I mainly only use XBMC for one plugin and it would be great if it just launched the plugin right away - I could then exit out of the plugin as normal if I wanted to use something else. Any ideas? (2012-03-25 17:57)mad-max Wrote: You can have an onload command in your home.xml that launches the Addon at startup... You need <onload> and RunAddon(Addon.id) Sorry for not giving you a ready solution...currently online with tapatalk (2012-03-26 19:09)mad-max Wrote: Glad I could help... So I assume it's working? (2012-03-26 19:17)Bstrdsmkr Wrote: You could also create a small service addon which would give you slightly more control at the expense of a little more complexity import xbmc xbmc.executebuiltin( "ActivateWindow(Videos,plugin://plugin.video.ted.talks)" )
http://forum.xbmc.org/showthread.php?tid=126520&pid=1470523
Firstly, sorry I posted this twice but I had connection problems and thought it hadn't posted the first time.

Here is my code for a box function:

Code:
#include <iostream>

using namespace std;

void box(int length, int width, int height);

int main()
{
    box(7, 20, 4);
    box(50, 3, 2);
    box(8, 6, 9);
    return 0;
}

void box(int length, int width, int height)
{
    cout << "The volume of the box is " << length * width * height << "\n";
}

and here is the output:

The volume of the box is 560

In the book I'm using the output is this:

volume of box is 560
volume of box is 300
volume of box is 432

Why is my code only outputting the first line? I've checked the code and I can't spot any mistakes. I'm having the same problem with other function codes. Please help. Thanks
https://cboard.cprogramming.com/cplusplus-programming/119851-box-function-problem-printable-thread.html
Contourf and log color scale¶

Demonstrate use of a log color scale in contourf.

import matplotlib.pyplot as plt
import numpy as np
from numpy import ma
from matplotlib import ticker, cm

N = 100
x = np.linspace(-3.0, 3.0, N)
y = np.linspace(-2.0, 2.0, N)

X, Y = np.meshgrid(x, y)

# A low hump with a spike coming out.
# Needs to have z/colour axis on a log scale, so we see both hump and spike.
# A linear scale only shows the spike.
Z1 = np.exp(-X**2 - Y**2)
Z2 = np.exp(-(X * 10)**2 - (Y * 10)**2)
z = Z1 + 50 * Z2

# Put in some negative values (lower left corner) to cause trouble with logs:
z[:5, :5] = -1

# The following is not strictly essential, but it will eliminate
# a sharp jump in the color bar with a zero value:
z = ma.masked_where(z <= 0, z)

fig, ax = plt.subplots()
cs = ax.contourf(X, Y, z, locator=ticker.LogLocator(), cmap=cm.PuBu_r)

# Alternatively, you can manually set the levels
# and the norm:
# lev_exp = np.arange(np.floor(np.log10(z.min())-1),
#                     np.ceil(np.log10(z.max())+1))
# levs = np.power(10, lev_exp)
# cs = ax.contourf(X, Y, z, levs, norm=colors.LogNorm())

cbar = fig.colorbar(cs)

plt.show()

References

The use of the following functions, methods, classes and modules is shown in this example:

matplotlib.axes.Axes.contourf
matplotlib.ticker.LogLocator
matplotlib.figure.Figure.colorbar
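The commented-out alternative in the example builds the contour levels by hand. That computation can be factored into a small helper that needs only NumPy (the function name `log_levels` is mine, not matplotlib's):

```python
import numpy as np

def log_levels(z):
    """Return powers of ten spanning z's range, one decade apart.

    Suitable for passing as explicit levels to contourf together
    with norm=colors.LogNorm(). Assumes z contains only positive
    values (mask or clip non-positive entries first).
    """
    lev_exp = np.arange(np.floor(np.log10(z.min()) - 1),
                        np.ceil(np.log10(z.max()) + 1))
    return np.power(10.0, lev_exp)
```

The returned levels always bracket the data range, which is what makes both the low hump and the tall spike visible on the same color scale.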
https://matplotlib.org/3.4.3/gallery/images_contours_and_fields/contourf_log.html
Hi, I'm using st.columns in an st.form. On my desktop the widgets align, but in the mobile version they don't. I did some tests here and discovered that in the mobile version the app doesn't create the columns. Here are the app link and the app repo.

Desktop version:

Mobile version:

The code:

import streamlit as st

with st.form("Test"):
    col1, col2, col3 = st.columns(3)
    with col1:
        st.write("col1")
        st.form_submit_button("Button 1")
    with col2:
        st.write("col2")
        st.form_submit_button("Button 2")
    with col3:
        st.write("col3")
        st.form_submit_button("Button 3")

Suggestions to solve this bug are welcome, maybe using CSS or markdown.
https://discuss.streamlit.io/t/st-columns-not-working-on-mobile-version/27665
I had a lot of time to think about Elliotte Harold's call for XML predictions on the way home from Redmond Wednesday night. We got several inches of snow, which is rare here, and the highway folks just can't deal with it. There were massive traffic tieups, and lots of time spent staring off into the snowflakes. Most of us commuters were caught off-guard by the snow, since the forecast was something like "cloudy with a chance of snow showers". That's not inaccurate, but not very helpful ... like most of those "successful" beginning-of-2006 predictions pundits are bragging about this month. Our unexpectedly intense snow yesterday was created by a Puget Sound Convergence Zone. That seems like a good metaphor for why technology prediction is hard. The interesting stuff happens in the "convergence zones" where different streams of ideas and technologies collide and create upward convection currents (driven partly by hot air and vapor, I suppose). The World Wide Web arose out of a convergence of the internet and SGML-ish markup; the Internet Bubble arose out of a convergence of this enabling technology and the vast numbers of PCs in every home and office; XML arose out of a convergence of the capability for universal interoperability spawned by SGML and the Web, and the demand for a Web-like experience in a whole range of document and data products that was accelerated by the Bubble. But not every wind / technology convergence creates a convergence zone. See the "Meteorological Mumbo-Jumbo" section in this article about our weather yesterday for an explanation of how this weather pattern was different than the other 20 recent occasions when the winds were about the same. In technology, lots of things -- technologies, personalities, zeitgeist -- have to come together at just the right time. Why was "Dynamic HTML" a bit of a yawner in the late '90s but "AJAX" a big hit in 2006? Why did the Newton flop and then the Palm become a hit a few years later?
Why is YAML a backwater but JSON the Next Big Thing this year? I don't pretend to know. I do think, however, that we know enough about the XML technology landscape and the forces that interact with it and each other to at least talk about some likely convergences and divergences. First, as the weather truism goes, the best prediction of tomorrow's weather is today's weather. 2007 will look a lot like 2006. Duh. Overall, XML has become a fairly mature and slow-changing technology because there is a 10-year legacy holding back any radical change, e.g. a mass migration to RELAX NG, JSON, LINQ, or whatever. We will see a gradual migration to XSLT 2.0, growing interest and support for XQuery (as a query language!), continued improvements in XML Schema 1.0 interoperability and growing interest in 1.1 as it solidifies, and so on. Second, there's a strong prevailing wind that puts XML almost everywhere, even places where it might not belong (e.g., multi-gigabyte log files!). But the pervasiveness of XML means that the "XML community" has less and less in common. The streams that collided with Mt. NonInteroperable and reconverged to create XML 10 years ago have shifted because Mt. NonInteroperable has been greatly eroded. People care more about interoperable documents OR data OR messages than interoperable documents AND data AND messages, so the fact that XML underlies all of their interop stories is not particularly interesting anymore. What predictions does this pattern imply? Third, like the wind XML is mostly invisible -- it's hidden behind the firewall, embedded in ZIP files, or just called something else. What's more, most people really don't want to see the XML that surrounds them. They want to see readable documents, updated feeds, processable objects, and delivered services.? What about some of the XML technologies that have been swirling around out there ... What landfalls may we expect in 2007? 
Again, the whole point of the argument here is that only incremental change can be foreseen unless XML technologies get tangled up in some larger crosscurrents. So, the Semantic Web will remain highly domain-specific (e.g., the biomedical arena where well-established taxonomies and ontologies really can be leveraged by OWL inferences, SPARQL query processors, etc.). I can't think of any larger social forces that might collide to provide upward convection for the semantic web technologies other than some low-probability event such as a major terrorist attack being thwarted as a direct result of Homeland Security's semantic technology investment. How about SVG, or XML 1.1, or some other major XML technology that hasn't gotten mainstream support (ahem, partly because of decisions in Redmond)? I don't foresee any major shifts in the winds here ... but then again I didn't know about the Puget Sound Convergence Zone the other day until the streets near home were iced over. What great technological, economic, or political forces can you envision changing the XML climate in the next year?
XML brings together ISO SGML, URLs and Unicode and the infrastructure of page-based Web servers to serve documents; AJAX brings ECMA JavaScript with the infrastructure of document-based Web servers to serve data; JSON brings together ECMA JavaScript and the infrastructure of data-based Web servers. People naturally move from pages to small documents to data. So for Linq, the best approach would be to release it open source, let variants emerge, standardize it, wait for 7 years until the necessary technical & business environment (and the meme!) establishes itself, and see whether it gets re-invigorated/re-branded/re-established. Linq should have a 10 year strategy if it wants to get that kind of grassroots uptake. Ideas take time to get established. There is little other way around it. What's next? Well, I would say ODF/OOXML are looking good for the next big things: servers progressing from serving small documents/data to serving mid-sized documents & data. They are both being standardized, have a decent history and backing infrastructure etc. A standard is a kind of open source API where the creators commit themselves to scrutiny, modest change in response to external review, transparency of process, to not nobbling the total market by hogging their share, and to technology that works by the spec rather than specs that idealize or lie about the fixed technology. They are in the users interest in many cases. ISO SGML & W3C XML, ISO/IEC/ECMA EcmaScript & AJAX, ISO/IEC/ECMA EcmaScript & JSON; they aren't successful because they are standards, they are successful because their standardization by pioneers created agreed and review and public and stable specifications that were ready on the shelf to be bought by Joe Public when the infrastructure and memes were right. Standards are a library of stable technological possibilities. Since Michael is comparing Linq with XML/AJAX/JSON, what is its story with open standardization? 
I think you are misquoting Jon more than a little, Mike, in your bullet point "Finally, the breadth of XML's adoption comes at the expense of depth." You make it sound that Jon is saying that business users won't adopt Schematron/XSLT validation and that vendors are wisely aware of their customers and just following the market. However, Jon is saying the reverse. It is the vendors who are calling the shots regardless of the actual needs of users, such as him and UBL. It is the vendors who have given up the depth, vendors who claim that problems that don't fit into their technologies are "somewhat peripheral". Not the users. Jon's quote is not "business users will never adopt a solution that depends on an additional XSLT pass" but "Furthermore (I was told), business users will never adopt a solution that depends on an additional XSLT pass." In other words, he is quoting vendors rather than giving an opinion he himself endorses. You left out the "(I was told)" and so turned his chastisement of vendors into a chastisement of users. I suppose, in a sense, Jon is actually complaining about people in your role, so it is natural and interesting to have your response or deflection. Jon's speech has a major section on how UBL solved a long standing problem simply with established and straightforward technology, only to have vendors say "XSLT is too complex for our users." Jon is chastizing vendors for not providing solutions to users requirements. And what do we find on this MicroSoft blog here? Jon being misquoted to say that his "big shock" is a user phenomenon rather than because of vendor non-agility. On Schematron, just because Jon is employed by Sun doesn't mean MicroSoft should reject his words; he has been right before you know! (I should clarify: I don't think Jon is being remotely personal in his speech. I haven't communicated with Jon about it though.) 
On the standards comment, especially "Standards are a library of stable technological possibilities", I guess I have a different perspective: Standards are technological REALITIES that one can use with some confidence that they are supported by at least a critical mass of some audience. Given the need to support products we release for many years, and given the training / documentation / translation / etc. burden of releasing them to a worldwide audience, we simply can't afford to implement every XML spec that comes along and put it on the shelf in case somebody finds it useful. It would be nice to have community implementations of a wider range of specs on the shelf, but that's not happening, for a number of good and bad reasons. As for LINQ and open standardization, there is no story I'm aware of one way or the other. As far as I know, the basic ideas have been around since at least Haskell, so there is plenty of room for competitors and open source projects to innovate in similar ways. They can also implement LINQ providers for other data sources or underlying technologies, e.g. something akin to "LINQ to JSON", "LINQ to Oracle", "LINQ to XQuery", etc. But the bottom line for me is that until this stuff proves itself in the real world, talk of standardization is premature. On the Bosak quote, I cut and pasted from the linked article, so I don't think I misquoted. I thought I made it clear that he "decried" this attitude, not approved it. Maybe stripping off the "I was told" and trying to quickly set the context was confusing, sorry. I'll look it over and consider revising to make it clearer what Bosak's own position is. I'm not sure who the vendors he's chastising are, and I don't think it's a Sun vs MS thing. We fully support XSLT and advocate it for appropriate situations; if we had any interest in UBL I suspect we would appreciate the approach Bosak favors, which is completely appropriate from a technical point of view.
My *prediction*, not my preference, is that this fact will not carry much weight in the wider world. Those unnamed vendors aren't necessarily clueless (as Bosak seems to imply), they may calculate that a good enough but ugly solution with one XML technology is more practical than a better, cleaner solution with two. Not many people bet against "worse is better" and come out ahead. Anyway, that's one prediction I'd be very happy to be wrong about. So, I don't think I misrepresented Bosak's position but I do disagree if the moral of his anecdote is about vendor non-agility rather than user resistance. I see it as part of a much larger phenomenon of user resistance to the more sophisticated bits of the XML corpus. (I'm talking about mainstream corporate developers just trying to get their jobs done, most of whom do not currently use the tools either of us develop!) If the moral of your story is that the world would be better off if people used XSLT / Schematron rather than XSD for problems such as the one Bosak describes, I personally agree. I found his talk very persuasive as a case study of the limitations of the grammar-based approach that you have been educating people about for years. I suspect, however, that the typical paying customer will be happier with the Schematron-like features in XSD 1.1 than they would with having yet another XML spec to think about, but we're keeping an open mind on what to do about both specs.

Mike, On LINQ - several things will make a major difference here - until you get LINQ into IE, it will be an also-ran technology, no matter how technically superior it is to anything else (re: E4X) out there.
I'd recommend opening up a VERY high level summit between yourself, John Schneider at Agile Delta, and Brendan Eich to see whether it would be possible to push LINQ-like technology into E4X - the latter is still relatively immature, and is fairly malleable even now, but with both Flash and Mozilla adopting it (and others, like Opera, poised to) I see E4X gaining far more market share than LINQ in that space, and I think that space in general will tend to dominate developer mindshare for years to come. I ironically see JSON (or more properly, Javascript object entities) and E4X as being VERY complementary technologies, largely because the notation provides a lightweight mechanism for discretized packaging of XML fragments without having to get into all of the headaches associated with namespaces ... something that may appall longtime XML developers but which is pretty attractive to web developers looking at keeping transport content lightweight. I WOULD like to see some of the Haskell-like features of LINQ migrate their way into E4X (Monads, anyone?), but I think the trend-lines point to E4X gaining dominance, unless you get one of those particularly Puget Sound micro-climate effects that change the rules completely. -- Kurt

Hi Mike, I think I was stuck in the same traffic with you in Redmond Wednesday night. What a fantastic storm -- reminds me of my days back east! Given that your post conjured up Efficient XML (aka binary XML), E4X and a mention of me personally (thanks Kurt!), I could hardly resist jumping in. :-) First, I should clarify that the W3C Efficient XML Interchange group is focused on far more than XML bloat and limited bandwidth scenarios. They are focused on a very wide range of use cases that need speed, compactness and a wide range of other characteristics.
They just completed a comprehensive review of binary XML proposals looking at size, encode speed, decode speed, and other requirements across a wide range of XML applications and data (messages, data, documents, web services, SVG, financial, scientific, military, etc.). The results show that you can get excellent size and speed improvements simultaneously across the full range of use cases with only one format. They selected Efficient XML as the basis for the standard because it was one of the fastest formats AND was consistently smaller across the full range of tests. BTW, the speed tests were conducted in-memory (simulating the fastest possible network) rather than over low bandwidth networks. On E4X, Kurt is right on target. There are some kinds of data where JSON works great and others where XML works great. E4X allows you to blend the two together to get the best of both worlds. In E4X, XML objects *are* JSON objects. As such, they can be arbitrarily nested one inside the other. E4X has been adopted by Mozilla and Flash, and Opera, Apple, Adobe and others are also pursuing it. Once Internet Explorer catches up, everyone will have it. ;-> The power of E4X is that it builds on top of the widely understood and deployed JavaScript base, adding a minimal layer to support XML as a native Javascript concept rather than introducing an unnecessarily complex array of new paradigms. And of course, it's been an approved Javascript standard for well over 2 years. Tongue-in-cheek comments aside, I have a great respect for Microsoft and believe you and your customers could greatly benefit from both E4X and Efficient XML in 2007. I'd be happy to participate in the kind of summit Kurt recommends. All the best! John
http://blogs.msdn.com/mikechampion/archive/2007/01/12/convergence-zones.aspx
how to understand thread clearly?

See my comments. I ran this code and it returned these results:

<NSThread: 0x2828f0780>{number = 7, name = (null)}
<NSThread: 0x28285b100>{number = 1, name = main}
<NSThread: 0x2828f0780>{number = 7, name = (null)}

import threading
from objc_util import *
import ui

td = ObjCClass('NSThread')
print(td.currentThread())  # why isn't this on the main thread? it shows a background thread

# it shows a background thread, that's okay
@ui.in_background
def test1():
    print(td.currentThread())

# it shows the main thread, that's okay
@on_main_thread
def main():
    global td
    print(td.currentThread())
    test1()

if __name__=='__main__':
    main()

Basically in iOS, the only things on the main thread are user interface code and callbacks. Everything else is in the background queue. By default, there are just these two threads, and in_background gets queued onto the background thread -- meaning your main code must exit before anything from in_background will run. That leads to strange scenarios -- you just never have code in the main script that blocks waiting for something from an in_background function, since the in_background code won't execute until the main script is done. Also, never have blocking dependencies between two in_background items. It is often better to create your own thread if you plan on having code wait in the background for other conditions to be true. Callbacks -- button presses, etc -- are called on the main thread.

Thank you very much for the detailed reply. I will run more tests and try to understand it in depth. I may ask again when I meet a new question. Thank you again.
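The NSThread checks above are Pythonista/iOS-specific, but the same main-vs-worker distinction can be observed with the portable standard-library threading module. This sketch is mine, not from the thread above; the "worker" name and the helper functions are invented for illustration:

```python
import threading

def whoami():
    """Report the current thread's name and whether it is the main thread."""
    t = threading.current_thread()
    return t.name, t is threading.main_thread()

def run_in_worker(fn):
    """Run fn on a fresh background thread and hand back its return value."""
    box = []
    t = threading.Thread(target=lambda: box.append(fn()), name="worker")
    t.start()
    t.join()  # unlike in_background, we explicitly wait for completion here
    return box[0]
```

Called at module level, `whoami()` reports the main thread; inside `run_in_worker` it reports the named worker thread, which is the same split the three NSThread lines show (number = 1 vs number = 7).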
https://forum.omz-software.com/topic/6258/how-to-understand-thread-clearly
I'm not sure about "epochs". I think I'd prefer more granular feature opt-ins/opt-outs. In a corporate world you end up with legacy parts of code that will never be upgraded. It happens for various sad reasons, but often when a feature has lost its business value, there's no budget to even remove it, as it costs developer time, testing time and creates a risk of breaking other things. So keeping unmaintained old code around forever is the default. I can imagine myself being in a situation where some module of the company's "in-house framework" relies on a deprecated/removed feature, and I won't be allowed to fix any warnings in that file, because another team is responsible for (not) maintaining it. I wouldn't want that code to prevent me from using newer Rust features in other parts of the same project.

So @aturon did a good job of saying the big points. I just wanted to walk through in a bit more detail what the options are with regard to upgrading. I was going to write this post just about match, but in writing it I realized that a lot of the same things apply to many cases, so let me just go over a few examples. They start with the easiest and get progressively harder.

First example: introducing catch as a keyword. Leaving aside whether you like this keyword, one can assume we will sometimes want to introduce new keywords. Ultimately, this is the most straightforward case, though there are some subtle bits. In its simplest form, we can deprecate using catch as an identifier. At the change of epoch, we are then free to re-use that identifier as a keyword. By and large, this transition can be automated. Some form of rustfix can rename local variables etc. to catch_ or whatever we choose.

Where things get a bit tricky are bits of public API. In that case, there could be Epoch 1 crates that (e.g.) have a public type named catch that is not typeable in Epoch 2.
This could be circumvented by introducing some form of "escape" that allows identifiers with more creative names (e.g., I think that Scala uses backticks for this purpose). So the overall flow here is: deprecate catch as an identifier in Epoch 1 (with an automated rename, plus an escape syntax for cases like public APIs), then reclaim catch as a keyword in Epoch 2.

Next example: repurposing "bare trait". Let's suppose for a second that we want to repurpose "bare trait", so that fn foo(Iterator) means fn foo(impl Iterator). Let's leave aside for a second if this is a good idea (I think it's unclear, though I lean yes), and just think about how we might achieve it without breaking compatibility. To make this transition, we would deprecate bare Iterator (as a trait object type) in Epoch 1, introducing the explicit form dyn Iterator; then, in Epoch 2, we could repurpose bare Iterator to mean impl Iterator.

This all makes sense, but it does raise a question: what do we do with impl Iterator? If we stabilize that syntax in Epoch 1, then perhaps we will deprecate it in Epoch 2 and suggest people remove the (no longer needed) impl keyword. An advantage of this is that (a) people can use impl Trait sooner, which we obviously want, and (b) some of those uses of Iterator as an object may well be better expressed with impl Iterator, and we can enable that. The key components here are the same as before: a deprecation period, an explicit alternative form (dyn Trait), and the eventual repurposing.

Final example: match ergonomics. The idea of the match ergonomics RFC is basically to make it unnecessary to say ref x -- instead, when you have a binding x in a match, we look at how x is used to decide if it is a reference or a move (much as we do with closure upvars). Again, leaving aside the desirability of this change, can we make this transition?

Changes to execution order. This change can have a subtle effect on execution order in some cases. Consider this example:

    {
        let f = Some(format!("something"));
        match f {
            Some(v) => println!("f={:?}", v),
            None => { }
        }
        println!("hello");
    }

Today, the string stored in f will be dropped as we exit the match (i.e., before we print hello). If we adopted the Match Ergonomics RFC, then the string would be dropped at the end of the block (i.e., after we print hello).
The reason is that binding to v today always triggers a move, but under that RFC v would be a move only if it had to be, based on how it was used, and in this case there is no need to move (a ref suffices). (This is much like how closure upvar inference works.)

So clearly there is some change to semantics here. That is, the same code compiles in both versions, but it does something different. In this example, I would argue, the change is irrelevant and unlikely to be something you would even notice (my intuition here is that it is rare that dropping one variable has side effects relative to the rest of execution, and rarer still that someone was using a match to trigger an otherwise unnecessary drop). But you can craft examples where the change is significant (e.g., the value being dropped has a custom drop with observable side-effects, and it is important that those side-effects occur before hello is printed).

What makes this change tricky? A couple of things: the deprecation warnings are hard to target narrowly, and there is no clear canonical form to migrate to. Let's review those.

Clearly, we can issue warnings for code whose semantics may change. But it's hard to target those deprecations narrowly. Ideally, we'd only issue a warning if all three of these conditions hold: the binding would actually change from a move to a borrow, the value's type has a Drop impl, and the resulting change in drop timing is observable. That last part cannot be fully detected. We can probably use a variety of heuristics to remove a bunch of false positives. But, if we are correct that this change in order will almost never matter, almost everything we do report will be wrong, which is annoying. And it's a subtle problem to explain to the user in the first place.

The other problem is that there is no clear canonical form that we can encourage users to migrate to. In other words, suppose I get a deprecation warning, and I understand what it means. How will I modify my code to silence the warning? Ideally, we'd have a way that is better than just adding an #[allow] directive.

There are really two cases.
Either I want to preserve the existing execution order, or I don't care. We believe the first one will be rare, but unfortunately it's the easy case to fix. One can force an early drop by adding a call to mem::drop(). As a bonus, your code becomes clearer:

    {
        let f = Some(format!("something"));
        match f {
            Some(v) => {
                println!("f={:?}", v);
                mem::drop(v); // <-- added to silence warning
            }
            None => { }
        }
        println!("hello");
    }

But if we don't care about the drop, what should we do? Probably the best choice is to encourage people to change v into a ref binding -- but of course that's precisely the form that we aim to remove in Epoch 2! (This has some interesting parallels with impl Trait, I think, where we might be encouraging people to use impl Trait, even though we aim to deprecate it.)

The other option, of course, would be to have some form of "opt-in" to the new semantics in Epoch 1 (e.g., something like the stable feature gates proposed here). That has the same set of previously discussed advantages/disadvantages (e.g., it muddies the water about what code means, and if this option is used frequently it raises the specter of there being many Rust dialects, rather than just Rust code from distinct eras).

Sorry this is long. I wanted to really work through all these issues, both for myself and for the record. I guess that the TL;DR is roughly this. First, we assume that we're trying to repurpose some syntax in some way (as in both of these cases). This will generally be true, because if that is not happening, then there is no need to use an Epoch; we can just deprecate the old pattern and encourage the new pattern (e.g., try! into ?). In that case, the transition has the usual form: deprecate the pattern in the old epoch, offer an explicit alternative (even one, like mem::drop(x), that we may later discourage), and repurpose the syntax in the new epoch.

I think the key questions to ask of any such transition are the ones discussed above: how narrowly can the deprecation be targeted, is there a canonical form to migrate to, and can the migration be automated?

UPDATE: I realized that we can fully, but conservatively, automate the transition for match ergonomics. This may want to be a hard rule.
I think you are correct, there isn't a big technical difference, but the idea of moving things forward in a set feels important to me. First, because I think that it's helpful to think of changes together, but also because I don't want to wind up with people 'picking and choosing' which changes to apply (i.e., I don't want you to have to think "ah, this code has ergonomic match, but it doesn't have bare trait"). It just seems like it'll make everything more fragmented and confusing.

This is a different kind of split. The split you are talking about can be resolved by upgrading Rust to its newest version (and this continues to be true under the epoch proposal) -- moreover, it's unavoidable. The split that we are trying to avoid is that you cannot upgrade because you have code that relies on some outdated bit of syntax (e.g., how in some cases you might not be able to use Python 3 because libraries that you rely on are still targeting Python 2).

Note that a key part of epochs is that individual crates can be upgraded at will. So while, in the corporate world, you may be stuck with some old crate that cannot (for whatever reason) use newer features, that should not harm other crates. In any case, I think it's pretty unlikely that having granular opt-in helps with this problem. If you have code that can't be upgraded, that will be because it is being used in some path where an older toolchain is in use -- either for fear of change or for some other reason. That older toolchain will also not be able to cope with new feature gates.

I think the idea would be that we would guarantee that: if your old code compiles with no deprecation warnings, it will behave the same in the new Epoch (though it may get warnings). So, just following compilation warnings -- without upgrading the epoch -- should actually be sufficient.

That's part of the beauty of the epoch being defined as a set of flags: you can always stay on the first epoch and only add in the flags you want. Done.
I think that the Epoch idea is indeed very similar to what C/C++/Java does. I agree that people do sometimes avoid upgrading there, but I'm not sure if there is anything that could be done to alleviate that. Put another way, even if we didn't adopt any kind of epoch or stable feature proposal, those same people will still avoid upgrading Rust, I promise you, because every update -- even those that are supposedly compatible -- introduces a certain measure of risk. I do think it's worth paying attention to this, but I feel like it's best addressed with a different suite of tools. For example, we've talked a lot about opening up our "crater-like" infrastructure, so that people can use their CI to test with the latest versions of Rust, or to allow us to test PRs in advance on their code. Things like that might go a long way toward avoiding a fear of surprise interactions. I think the previous paragraph also applies to the idea of ecosystem splits caused by the use of new features -- that is, it IS an important problem, but probably one that we have to address in other ways.

OK, my blizzard of comments stops with this one, but I just wanted to add: it might be worth linking this proposal to others intended to target "fear of upgrading" in all forms. How best to do that is unclear (e.g., I think there is an interaction with the cargo schema RFC).

I feel like these two statements are at odds with each other. The first part, never dreading to upgrade the compiler, should be true for projects that have lain dormant for months. Imagine building a binary that fulfills its purpose, doesn't have any big crashes, and so you just put the binary in production or wherever and forget about it for a year. Then, you decide you need to add some feature. Should you worry that upgrading the compiler after so long will mean features have been removed? Hopefully not!
Coming from a server development position, I can say that in many places I've worked at (even at Mozilla, as forward-reaching with technology as we are), it's very common not to keep up with the latest version of a language. We actually specifically do not want to run the latest release in production. We want all horrible bugs and vulnerabilities to be patched out of new features before we ship that stuff. So, it's actually quite common for us to upgrade huge versions at a time, often from one LTS to the next. It'd be sad if someone in our position felt we couldn't upgrade because then we'd need to do all this busywork fixing up our code for the newest compiler.

Calling these Epochs doesn't do much for me. You've just changed the name of the thing we call "versions". Instead of Rust v2.0, we just have Rust v2017. If Rust v2018 has features removed that existed in 2017, it's essentially 2.0 (or 3.0 or whatever) and not backwards-compatible. JavaScript has seemingly moved away from single-digit version numbers to years as well. We had ES5, and ES6, but now we're talking about ES2017 and ES2018. It doesn't really matter. What matters is that JavaScript I write now, in 2017, will still do exactly what I told it to with a browser that implements ES2025. That's backwards compatibility.

I care a lot about backwards compatibility of the libraries I write, perhaps more than many. While I appreciate the desire to get people to use the newer syntax, I'd dislike the compiler actively nagging me to use something that would make hyper no longer compile for users on older compiler versions. In this case, fixing the nagging requires me to completely remove support for the versions of the compiler that don't understand dyn Trait. I think that to make such a change, I'd need a way to tell the compiler "OK sure, on #[cfg(has_dyn_trait)], I'll use dyn Trait, but if not, please just use the older syntax and keep working."
If it were possible for the compiler to process cfg attributes before trying to parse the syntax gated by them, then this scheme could work. I can make use of the rustc_version crate to conditionally make use of new libstd APIs, but not to make use of new syntax features.

There is some talk of how Python handled incompatible changes via from __future__ import ... to allow opting into new features. That feature has been used for more than just the 2->3 transition (by my count, 3 of the 7 __future__ imports are for things that became default before 3.x). One of the nice things about having granular control (and why a bunch of people were unhappy with the 2->3 transition) is that you can make the fixes for each change independently, and then test and deploy your code with just those changes. One of the big troubles with the 2->3 transition is that you had to fix your code for all of the changes at once.

I think that epochs make sense as well; having an ever-growing feature list doesn't make sense, so every so often collecting all the stable features and defaulting them to on in a new epoch would be a good thing.

One thing to note about Haskell is that GHC (the de facto standard Haskell dialect's compiler) has, in addition to adding language-level features behind {-# LANGUAGE #-} pragmas, also made major non-standard breaking changes in Haskell's standard library in the past; e.g. implementing the Applicative Monad Proposal and the Burning Bridges Proposal two years ago (see also GHC 7.10's release notes). For the former proposal, the compiler issued warnings for code that would be broken by it starting with GHC 7.8 (which was about a year of lead time). For the latter proposal, this wasn't done, as it landed relatively late in their release cycle. This was not uncontroversial (see some advocacy against immediately including it here).
In the end, they decided against introducing a {-# LANGUAGE #-} pragma for it. I haven't used Haskell in some time and don't know in which ways this affected their ecosystem, but I think it's something worth investigating.

It seems to me that this is solved by having the ability to #![allow()] the deprecation warning. One shortcoming in our current system (something @wycats has pointed out again and again...) is that you want a more targeted way -- i.e., the ability to allow certain deprecations but not all.

Just to be clear, we are proposing to maintain the same standard of compatibility. From the rest of your message, it sounds as if you might think otherwise, which probably suggests that we need to work on how we explain the proposal! This is indeed precisely why we pursued the "Epoch" naming instead of saying Rust 2.0. Calling something Rust 2.0 suggests that upgrading to it is a "major version bump" and hence means you may face incompatibility. But the idea is that when you upgrade to the latest Rust release, you are not required to upgrade your code to the latest epoch. The only reason you would ever have to upgrade is to take advantage of new features (e.g., the catch keyword). In the same way, when you upgrade your browser, you are not limited to nice JavaScript that uses let, modules, and all the latest goodies. You can still run the old stuff.

(Also, just to be clear, even the JS committee makes breaking changes from time to time, but only if they are very confident they can get away with it -- e.g., because different browsers implement different behavior, and hence people are not relying on it. And yes, they test that this is true.)

This is an interesting point. I do hope though that we can make the upgrade fully automated.
Indeed, in each of the examples that I gave, it would be possible to do a fully automatic -- and 100% semantically faithful -- transition, although it may do things you might not have done had you transitioned by hand. This is fairly clear for the first few examples. But it's also true for the match ergonomics example. In retrospect I didn't fully appreciate this while writing the post and hence I didn't emphasize it. But naturally, if the compiler is unsure whether the destructor should run earlier or later, it could just conservatively insert a mem::drop into the match (i.e., to select the current semantics):

    match opt_v {
        Some(v) => {
            println!("{:?}", v);
            mem::drop(v); // forces `v` to be moved, even under the newer proposals
        }
        _ => ...
    }

You could imagine the transition tool leaving behind markers when it makes conservative choices of this kind, that you could go and remove at your leisure. For example, maybe it would generate:

    match opt_v {
        Some(v) => {
            println!("{:?}", v);
            // rustfix: The following line could be removed,
            // if it is not important for the destructor of `v`
            // to execute early.
            mem::drop(v);
        }
        _ => ...
    }

Being able to put an attribute on a specific expression is probably enough. I suppose some expressions could be exercising multiple deprecated things at once, but if you really wanted to, you could probably separate the expression into multiple.

If that is the case, that sounds good. I feel that part of the error of understanding was on me, as part of my understanding came from noticing other people's pull quotes, and some of those quotes in isolation made me feel that way. If I'm not the only one who felt this way, then maybe the fact that using a new epoch or feature set is opt-in should be made to stand out more, with bold and fireworks.

After reading this discussion I'm still left confused. Here's how I understand the epochs proposal, so please correct me if I'm wrong. There are two kinds of language upgrades:

1. New features are added.
The existing parts of the language are unchanged. This is what I expect from upgrades to C++ and Java. If I have code that works with -std=c++11, I expect upgrading to -std=c++14 will be effortless. The only thing that might go wrong is name clashing, that's it. This is just like our regular minor upgrades, e.g. from Rust 1.14 to Rust 1.15.

2. Existing language semantics are changed. Proposed changes to match and traits fall into this category. This is like the Python 2 to Python 3 upgrade. Just like we would subtly change the semantics of match, Python subtly changed the semantics of the / operator, for example. This kind of upgrade is like going from Rust 1.x to Rust 2.0.

Epochs clearly fall into category #2. They change the existing semantics, and I do not expect that from C++/Java upgrades (except perhaps a few small and reasonable fixes nobody will ever notice anyway). Upgrading an epoch sounds like going from Python 2 to Python 3, except it is less frustrating because the most recent compiler version understands all previous epochs and can intermix Rust code written in multiple epochs.

Another concern I have is: how do we teach this to new users? How do I explain to a friend how match works? Do I say "hey, if you're using epoch 2017, you need this ref to borrow the object rather than take ownership, but if you're using epoch 2018 then ref is implicit"? What if they copy some code from StackOverflow and it doesn't compile?

Epochs don't just add stuff. They change existing stuff. C++ and Java don't change existing stuff. Python changed existing stuff once and made a big mistake. We're about to change existing stuff, but much more gracefully than Python did. Is this the right way to look at it?

This is a common sentiment, but I don't agree that it's true. First, C++ does make breaking changes & they're not just name clashes.
Here's a list of breaking changes in C++11, and it's not just the introduction of keywords but many subtle semantic changes. The issue with Python 3 is, in my perception, not a subtle semantic change, but a blatant one -- the core string type just completely changes its semantics in ways that are pervasive and difficult to upgrade across. This is different from the match changes, where the breaking change is when exactly destructors run, and we can support a trivial 'explication' which gets you back to the form you had. (You refer to C++ making "a few small and reasonable fixes nobody will ever notice anyway", but that's exactly what the breakages in the match proposal are, IMO.)

Whatever we call them, I think we can't support making changes as deep as the changes to the string type. But all of the changes we've seriously considered are, in my opinion, in line with the kinds of changes that are made in upgrades to the C standard. However, our current compatibility guarantee is much firmer than what C++ version upgrades provide; we can't even introduce new non-contextual reserved words. This makes sense -- C++ upgrades every 3 years & we upgrade every 6 weeks. But it's quite difficult when there's this misalignment in perception, where our 1.X upgrades are treated as equivalent to C++ upgrades, and what we've been talking about as "breaking changes" are treated as equivalent to Python 3, when they're really equivalent to C++11.

I think we need an approach which allows us to make changes of the sort C++ does on the same time frame that C++ does. And, like C++, we need to support compiling code from before this switch. I think this epoch proposal is exactly the right approach to solving this problem. Hopefully, we'll be able to have fewer epochs than C++ has standards, because our rolling release model allows us to make non-breaking updates any time, so there isn't this pressure to 'release the new epoch' the way there is in C++.
I also think doing this granularly has significant downsides. Most notably, for every 'epochal' feature we support, if they can be mixed and matched, we get an exponential growth in 'Rust versions' that exist, and everyone will be in a different one. While making a granular transition is valuable, I think we can have a more targeted approach to that. It's important that, during an epoch transition, the first version of the new epoch is a subset of the last version of the old epoch. That is, there is a subset of Rust that will compile in both epochs (and not a hobbled subset, like "if you never use Strings or integer division"). As we build toward a new epoch, we should release granular lints to help you identify what won't compile in the next epoch, so that you can perform that granular transition. But these should just be lints, and you should still be 'in the previous epoch' until you make a switch, which turns on all of the hard errors across the board.

One way to (somewhat) address the combinatorial explosion is to mix granular features and epochs, and only allow features to be turned on if you are using the epoch where they were introduced. For example, consider a sequence of changes A, B, C, D, E introduced over successive epochs. My suggestion is that only certain feature combinations would be legal. And probably features like C and E that don't get made default in the next epoch shouldn't exist either.

This won't work. You want a crate and all its dependencies to compile with Rust 1.x, but the epochs may differ between all the crates involved, as long as the API stays compatible. A function fn foo(x: Trait) in Epoch 2 will be fn foo<T: Trait>(x: T) in Epoch 1. These are 100% compatible, but multiple compilers won't produce compatible binary artifacts. Making the binary artifacts stable is not an option, because we'll want to add new features to MIR or other parts that are exposed.
The standard library can remove a function in a new epoch, but it will still need to be available in older epochs. So the binary standard library needs to include everything, even things that have name collisions (resolved by the chosen epoch).
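The equivalence invoked above -- fn foo(x: Trait) as sugar for a generic fn foo<T: Trait>(x: T) -- and its contrast with trait objects can be sketched in today's syntax (a hypothetical illustration, not from the thread):

```rust
use std::fmt::Display;

// Epoch-1 style: an explicit generic parameter, statically dispatched.
fn show_generic<T: Display>(x: T) -> String {
    format!("{}", x)
}

// The explicit form the proposed bare-trait argument would desugar to.
fn show_impl(x: impl Display) -> String {
    format!("{}", x)
}

// For comparison: dynamic dispatch through a `dyn Trait` object.
fn show_dyn(x: &dyn Display) -> String {
    format!("{}", x)
}

fn main() {
    // The generic and `impl Trait` forms are interchangeable for callers...
    assert_eq!(show_generic(42), show_impl(42));
    // ...while `dyn Trait` erases the concrete type behind a vtable.
    assert_eq!(show_dyn(&42), "42");
    println!("all forms agree");
}
```

The caller-visible API is identical across the first two forms, which is exactly why the two epochs can interoperate at the source level even though the compiled artifacts differ.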
https://internals.rust-lang.org/t/pre-rfc-stable-features-for-breaking-changes/5002?page=2
In today’s Programming Praxis exercise, our goal is to implement an algorithm to square a number using only addition and subtraction. Let’s get started, shall we?

import Data.Bits
import Data.List

First, the trivial O(n) algorithm: n^2 = n * n = sum (n times n).

square :: Integer -> Integer
square n = sum $ genericReplicate n n

Next, an O(log n) algorithm. Create a sequence in which each element consists of the following values: an incrementing integer i, 2^i and n*2^i. Filter out all elements for which the ith bit of n is 0. Sum the n*2^i terms.

square2 :: Integer -> Integer
square2 n = sum [a | (i,a) <- unfoldr (\(i,p,a) -> if p <= n
                  then Just ((i,a), (i+1,p+p,a+a)) else Nothing) (0,1,n)
                , testBit n i]

Some tests to see if everything is working properly:

main :: IO ()
main = do print $ map square [0..10] == map (^2) [0..10]
          print $ map square2 [0..10] == map (^2) [0..10]
          print $ square (2^20) == 2^40
          print $ square2 (2^1000) == 2^2000

Tags: addition, bonsai, code, Haskell, kata, programming praxis, square
https://bonsaicode.wordpress.com/2013/02/26/programming-praxis-an-odd-way-to-square/
#include <stdlib.h>
#include <fcntl.h>

int posix_openpt(int flags);

The posix_openpt() function opens an unused pseudoterminal master device, returning a file descriptor that can be used to refer to that device.

The flags argument is a bit mask that ORs together zero or more of the following flags:

O_RDWR    Open the device for both reading and writing. It is usual to specify this flag.

O_NOCTTY  Do not make this device the controlling terminal for the process.

On success, posix_openpt() returns a nonnegative file descriptor which is the lowest-numbered unused file descriptor. On failure, -1 is returned, and errno is set to indicate the error.

SEE ALSO
open(2), getpt(3), grantpt(3), ptsname(3), unlockpt(3), pts(4), pty(7)
http://manpages.courier-mta.org/htmlman3/posix_openpt.3.html
wow! that's good to hear :-)

On 10/7/05, Lawrence Bruhmuller <lbruhmuller@...> wrote:

Gr8 idea!

--- Sapumal Jayaratne <sapumal.jayaratne@...> wrote:
> Hi Activegriders,
>
> I found ActiveGridding very interesting. But we need more
> documentation, How-Tos, and tutorials. Don't we have a wiki or
> something like it? If not, why don't we start one?
>
> --
> Thanks,
> Sapumal Jayaratne.

Hi Brian,

You're not being thick -- there is no way to add the body tag from the tool. I've filed that. In the meantime, you can just insert the tag directly into the xml file. If you can find the place for it. :)

Your modified file should look something like this:

<xforms:group ag:name="LeftGroup" appearance="lyt:vertical" ag:cellStyle="">
  <xforms:label xsi:nil="true"/>
</xforms:group>
<ag:body cellStyle=""/>
<xforms:group ag:name="RightGroup" appearance="lyt:vertical" ag:cellStyle="">
  <xforms:label xsi:nil="true"/>
</xforms:group>

where the important bit is the <ag:body tag. Let me know if that doesn't sort you out.
Thanks,
Matt Fryer

On 7/25/05, Brian Young <brian.young@...> wrote:

Hi Active Griders! (Gridlers?)

In any event, the world is watching,

Good Luck,

BY

> From: Paulo Sérgio Medeiros <pasemes@...>
> Date: Jun 20, 2005 8:09 AM
> Subject: [activegrid-users] Web Services
> To: activegrid-users@...
>
> I have some questions about web services in AG:
>
> Where are the services deployed? Are there WSDLs for them? (where?)

Exposing ActiveGrid application components as web services isn't there yet. When it is, users will be able to provide a file specification for the generated WSDL. The WSDL will also be available via a URL from the running application (e.g. as a "?WSDL" query parameter).

> How can I utilize services that exist on other servers?

This will be in ActiveGrid 1.0. Developers will point at an external service's WSDL, creating a "service reference", then create application actions that invoke web service operations. We're hoping to support consumption of both SOAP and REST services in 1.0.

> I think we really need more documentation to develop with AG, is this
> planned only in the 1.0 release?

Most likely. Although hopefully this forum can help until then.

Alan Mullendore
ActiveGrid Engineering

I have some questions about web services in AG:

Where are the services deployed? Are there WSDLs for them? (where?)

How can I utilize services that exist on other servers?
I think we really need more documentation to develop with AG, is this planned only in the 1.0 release?

Cheers,
Paulo Sérgio.

Hello,

I'd like to announce the availability of the 0.7 Early Access version of ActiveGrid technology, freely available for download at. This is an incremental release to keep our user base updated with bug fixes and some limited features as we work towards our 1.0 release, to be released in the July timeframe. The release notes are online, so check them out to get the latest information. Getting ActiveGrid up and running on Linux is now much easier than before; Linux installers are available for download which include all the required dependencies.

We look forward to hearing lots of feedback! Please send questions and issues to support@... If you are a contributing developer for ActiveGrid or would like to be, join activegrid-developers@..., and email develop@... to get more information.

Lastly, we at ActiveGrid have not been monitoring these sourceforge.net mailing lists closely enough; we have been focusing on the requests sent to develop@... and support@... We will now be monitoring these lists regularly to assist the members in using and enhancing ActiveGrid software.

Regards,
Lawrence

*********************************
Lawrence Bruhmuller
Director, Quality and Release Engineering
ActiveGrid, Inc.
lbruhmuller@...

Hi,

Using vanilla Ubuntu Hoary 5.04 Release. I've installed the prerequired packages per the setup instructions, mainly from deb packages and source, although I did have a dependency issue with the libsqlite library (ubuntu-desktop depends on the version it installs).

I installed the ActiveGrid rpms by first converting them to deb packages (using alien) and installing via dpkg -i. No problems reported there. However, when I try to test the installation via the command:

python2.3 ActiveGridAppBuilder.py

I get the following error:

Traceback (most recent call last):
  File "ActiveGridAppBuilder.py", line 12, in ?
import wx.lib.pydocview
ImportError: No module named wx.lib.pydocview

I seem to have pydocview scripts in my python2.3 installation but not sure what else I would be missing. Any ideas on what to verify or action to take?

Thanks in advance,
jchavezb@...

root@...:/usr/lib/python2.3/site-packages # ls -la
total 180
drwxr-xr-x   6 root root  4096 2005-04-13 04:27 .
drwxr-xr-x  18 root root 12288 2005-04-13 04:17 ..
drwxr-xr-x   2 root root  4096 2005-04-13 04:18 dhm
drwxr-xr-x   2 root root  4096 2005-04-13 04:19 pychecker
-rw-r--r--   1 root root   119 2005-03-29 12:03 README
drwxr-xr-x   2 root root  4096 2005-04-13 04:18 sqlite
-rwxr-xr-x   1 root root 88857 2005-04-13 04:19 _sqlite.so
drwxr-xr-x   3 root root  4096 2005-04-13 04:21 wx-2.5.4-gtk2-unicode
-rw-r--r--   1 root root    21 2005-03-16 17:11 wx.pth
-rw-r--r--   1 root root 14397 2004-10-28 15:34 wxversion.py
-rw-r--r--   1 root root 14406 2005-03-16 17:11 wxversion.pyc
-rw-r--r--   1 root root 14344 2005-04-13 04:18 wxversion.pyo
root@...:/usr/lib/python2.3/site-packages # cd ../../activegrid/python/
root@...:/usr/lib/activegrid/python # ls
activegrid ActiveGridIDE.py docview.py multisash.py
ActiveGridAppBuilder.bpel ActiveGridIDE.pyc docview.pyc multisash.pyc
ActiveGridAppBuilder.dpl agwebserver.py __init__.py pydocview.py
ActiveGridAppBuilder.py agwebserver.pyc __init__.pyc pydocview.pyc
ActiveGridAppBuilder.pyc datamodel.xsd mphandler.py wxPythonDemos
ActiveGridAppBuilder.xsd demos mphandler.pyc
root@...:/usr/lib/activegrid/python # python2.3 ActiveGridAppBuilder.py
Traceback (most recent call last):
File "ActiveGridAppBuilder.py", line 12, in ?
import wx.lib.pydocview
ImportError: No module named wx.lib.pydocview
root@...:/usr/lib/activegrid/python #
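For anyone hitting the same ImportError, a first sanity check is whether a pydocview.py actually exists under the wx package directory that Python selects via the wx.pth file. The snippet below is only a sketch with a mocked-up site-packages tree (the paths imitate, but are not, the poster's real layout):

```shell
# Mock up the layout from the listing above; on a real system you would
# point find(1) at /usr/lib/python2.3/site-packages instead.
mkdir -p site-packages/wx-2.5.4-gtk2-unicode/wx/lib
touch site-packages/wx-2.5.4-gtk2-unicode/wx/lib/pydocview.py

# wx.pth must name the directory containing the wx package; if it points
# elsewhere, "import wx.lib.pydocview" fails even though the file exists.
echo "wx-2.5.4-gtk2-unicode" > site-packages/wx.pth

found=$(find site-packages -name pydocview.py)
echo "$found"
```

If find turns up nothing under the wx tree, the installed wxPython build simply doesn't ship wx.lib.pydocview and a newer wxPython is needed; if the file is there, the .pth entry is the next suspect.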
Despite the kind help of the Rojo people I still can't get the service to import my updated feed lists ('An error has occurred...Failed to import: null...We apologize for the inconvenience.'), so I'm still reading my Web feeds on Liferea for now. One nice bonus with Liferea is the ability to add feeds from the command line (or really, any program) courtesy GNOME's DBUS. Thanks to Aristotle for the tip, pointing me to 'a key message on liferea-help'. I've never used DBUS before, so I may be sketchy on some details, but I got it to work for me pretty easily. I start with a simple script to report on added feed entries. It automatically handles feed lists in OPML or XBEL (I use the latter for managing my feed lists, and Liferea uses the former to manage its feed list). import amara import sets old_feedlist = '/home/uogbuji/.liferea/feedlist.opml' new_feedlist = '/home/uogbuji/devel/uogbuji/webfeeds.xbel' def get_feeds(feedlist): doc = amara.parse(feedlist) #try OPML first feeds = [ unicode(f) for f in doc.xml_xpath(u'//outline/@xmlUrl') ] if not feeds: #then try XBEL feeds = [ unicode(f) for f in doc.xml_xpath(u'//bookmark/@href') ] return feeds old_feeds = sets.Set(get_feeds(old_feedlist)) new_feeds = sets.Set(get_feeds(new_feedlist)) added = new_feeds.difference(old_feeds) for a in added: print a I then send a subscription request for each new item as follows: $ dbus-send --dest=org.gnome.feed.Reader /org/gnome/feed/Reader \ org.gnome.feed.Reader.Subscribe \ "string:" The first time I got an error "Failed to open connection to session message bus: Unable to determine the address of the message bus". I did an apropos dbus and found dbus-launch. 
I added the suggested stanza to my .bash_profile:

if test -z "$DBUS_SESSION_BUS_ADDRESS" ; then
    ## if not found, launch a new one
    eval `dbus-launch --sh-syntax --exit-with-session`
    echo "D-BUS per-session daemon address is: $DBUS_SESSION_BUS_ADDRESS"
fi

After running dbus-launch the dbus-send worked and Liferea immediately popped up a properties dialog box for the added feed, and stuck it into the feeds tree at the point I happened to last be browsing in Liferea (not sure I like that choice of location). Simple drag&drop to put it where I want. Neat.
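For shell purists, the same "which feeds are new?" computation can be done without Python at all, using sed(1) and comm(1). A sketch with mocked-up feed lists (file names and URLs here are invented for the example):

```shell
# Two tiny stand-in OPML fragments; real files would be the ones named
# in the Python script above.
cat > old.opml <<'EOF'
<outline xmlUrl="http://example.org/a.xml"/>
<outline xmlUrl="http://example.org/b.xml"/>
EOF
cat > new.opml <<'EOF'
<outline xmlUrl="http://example.org/a.xml"/>
<outline xmlUrl="http://example.org/b.xml"/>
<outline xmlUrl="http://example.org/new.xml"/>
EOF

# Pull out xmlUrl attribute values, sorted so comm(1) can compare them
feeds() { sed -n 's/.*xmlUrl="\([^"]*\)".*/\1/p' "$1" | sort; }
feeds old.opml > old.txt
feeds new.opml > new.txt

# Lines unique to the new list are the feeds to subscribe to
added=$(comm -13 old.txt new.txt)
echo "$added"    # http://example.org/new.xml

# Each of these could then be handed to Liferea:
#   dbus-send --dest=org.gnome.feed.Reader /org/gnome/feed/Reader \
#       org.gnome.feed.Reader.Subscribe "string:$added"
```

The sed pattern only handles one xmlUrl per line, which matches how most aggregators write OPML; the Python/amara version above is the more robust route.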
The pkgsrc guide
Documentation on the NetBSD packages system

Alistair Crooks <agc@NetBSD.org>
Hubert Feyrer <hubertf@NetBSD.org>
The pkgsrc Developers

Copyright 1994-2007 The NetBSD Foundation, Inc

$NetBSD: pkgsrc.xml,v 1.26 2007/09/18 08:17:21 rillig Exp $

Abstract

pkgsrc is a centralized package management system for Unix-like operating systems. This guide provides information for users and developers of pkgsrc. It covers installation of binary and source packages, creation of binary and source packages, and a high-level overview of the infrastructure.

-------------------------------------------------------------------------------

Table of Contents

1. What is pkgsrc?
1.1. Introduction
1.1.1. Why pkgsrc?
1.1.2. Supported platforms
1.2. Overview
1.3. Terminology
1.3.1. Roles involved in pkgsrc
1.4. Typography
I. The pkgsrc user's guide
II. The pkgsrc developer's guide
III. The pkgsrc infrastructure internals
C. Directory layout of the pkgsrc FTP server
C.1. distfiles: The distributed source files
C.2. misc: Miscellaneous things
C.3. packages: Binary packages
C.4. reports: Bulk build reports
C.5. current, pkgsrc-20xxQy: source packages
D. Editing guidelines for the pkgsrc guide
D.1. Make targets
D.2. Procedure

List of Tables
1.1. Platforms supported by pkgsrc
11.1. Patching examples
23.1. PLIST handling for GNOME packages

Chapter 1. What is pkgsrc?

Table of Contents
1.1. Introduction
1.1.1. Why pkgsrc?
1.1.2. Supported platforms
1.2. Overview
1.3. Terminology
1.3.1. Roles involved in pkgsrc
1.4. Typography

1.1. Introduction

There is a lot of software freely available for Unix-based systems, which is usually available in the form of source code. Before such software can be used, it needs to be configured to the local system, compiled and installed, and this is exactly what The NetBSD Packages Collection (pkgsrc) does.
pkgsrc also has some basic commands to handle binary packages, so that not every user has to build the packages for himself, which is a time-costly task.

pkgsrc currently contains several thousand packages, including:

* www/apache - The Apache web server
* www/firefox - The Firefox web browser

1.1.1. Why pkgsrc?

pkgsrc provides the following key features:

* Easy building of software from source as well as the creation and installation of binary packages. The source and latest patches are retrieved from a master or mirror download site, checksum verified, then built on your system. Support for binary-only distributions is available for both native platforms and NetBSD emulated platforms.

* All packages are installed in a consistent directory tree, including binaries, libraries, man pages and other documentation.

* Package dependencies, including when performing package updates, are handled automatically. The configuration files of various packages are handled automatically during updates, so local changes are preserved.

* Like NetBSD, pkgsrc is designed with portability in mind and consists of highly portable code. This allows the greatest speed of development when porting to a new platform. This portability also ensures that pkgsrc is consistent across all platforms.

* The installation prefix, acceptable software licenses, international encryption requirements and build-time options for a large number of packages are all set in a simple, central configuration file.

* The entire source (not including the distribution files) is freely available under a BSD license, so you may extend and adapt pkgsrc to your needs. Support for local packages and patches is available right out of the box, so you can configure it specifically for your environment.

The following principles are basic to pkgsrc:

* Quality is checked at many levels: static analysis of the package files (pkgtools/pkglint), build-time checks (portability of shell scripts), and post-installation checks (installed files, references to shared libraries, script interpreters).
* "If it works, it should work everywhere" - Like NetBSD has been ported to many hardware architectures, pkgsrc has been ported to many operating systems. Care is taken that packages behave the same on all platforms.

1.1.2. Supported platforms

pkgsrc consists of both a source distribution and a binary distribution for these operating systems. After retrieving the required source or binaries, you can be up and running with pkgsrc in just minutes!

pkgsrc was derived from FreeBSD's ports system, and initially developed for NetBSD only. Since then, pkgsrc has grown a lot, and now supports the following platforms:

Table 1.1. Platforms supported by pkgsrc

+----------------------------------------------------------------+
|                  Platform                   |Date Support Added|
|---------------------------------------------+------------------|
|NetBSD                                       | Aug 1997         |
|---------------------------------------------+------------------|
|Solaris                                      | Mar 1999         |
|---------------------------------------------+------------------|
|Linux                                        | Jun 1999         |
|---------------------------------------------+------------------|
|Darwin (Mac OS X)                            | Oct 2001         |
|---------------------------------------------+------------------|
|FreeBSD                                      | Nov 2002         |
|---------------------------------------------+------------------|
|OpenBSD                                      | Nov 2002         |
|---------------------------------------------+------------------|
|IRIX                                         | Dec 2002         |
|---------------------------------------------+------------------|
|BSD/OS                                       | Dec 2003         |
|---------------------------------------------+------------------|
|AIX                                          | Dec 2003         |
|---------------------------------------------+------------------|
|Interix (Microsoft Windows Services for Unix)| Mar 2004         |
|---------------------------------------------+------------------|
|DragonFlyBSD                                 | Oct 2004         |
|---------------------------------------------+------------------|
|OSF/1                                        | Nov 2004         |
|---------------------------------------------+------------------|
|HP-UX                                        | Apr 2007         |
|---------------------------------------------+------------------|
|Haiku                                        | Sep 2010         |
+----------------------------------------------------------------+

1.2. Overview

This document is divided into three parts. The first, The pkgsrc user's guide, describes how to use packages from pkgsrc on your system. The second part, The pkgsrc developer's guide, explains how to prepare packages. The third part, The pkgsrc infrastructure internals, is intended for those who want to understand how pkgsrc is implemented.

1.3. Terminology

1.3.1. Roles involved in pkgsrc

pkgsrc users
    The pkgsrc users are people who use the packages provided by pkgsrc. Typically they are system administrators. The people using the software that is inside the packages (maybe called "end users") are not covered by the pkgsrc guide. There are two kinds of pkgsrc users: Some only want to install pre-built binary packages. Others build the pkgsrc packages from source, either for installing them directly or for building binary packages themselves. For pkgsrc users Part I, "The pkgsrc user's guide" should provide all necessary documentation.

package maintainers
    A package maintainer creates packages as described in Part II, "The pkgsrc developer's guide".

infrastructure developers
    These people are involved in all those files that live in the mk/ directory and below. Only these people should need to read through Part III, "The pkgsrc infrastructure internals", though others might be curious, too.

Chapter 2. Where to get pkgsrc and how to keep it up-to-date

Table of Contents
2.1. Getting pkgsrc for the first time
2.1.1. As tar file
2.1.2. Via anonymous CVS
2.2. Keeping pkgsrc up-to-date
2.2.1. Via tar files
2.2.2. Via CVS

2.1. Getting pkgsrc for the first time

Before you download any pkgsrc files, you should decide whether you want the current branch or the stable branch, which is forked on a quarterly basis from the current branch and only gets modified for security updates. The stable branches are named after the year and quarter in which they were forked, for example 2009Q1.

The second step is to decide how you want to download pkgsrc. You can get it as a tar file or via CVS. Both ways are described here.

2.1.1. As tar file

The primary download location for all pkgsrc files is pkgsrc/. There are a number of subdirectories for different purposes, which are described in detail in Appendix C, Directory layout of the pkgsrc FTP server.

The tar file for the current branch is in the directory current and is called pkgsrc.tar.gz. It is autogenerated daily.
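Since stable branches are named for the year and quarter in which they were cut, the branch name for any given month can be derived mechanically. A small sketch (the year and month are hard-coded stand-ins; on a live system they would come from date(1), e.g. year=$(date +%Y)):

```shell
# Derive a pkgsrc-20xxQy branch name from a year and a month number.
year=2009
month=5                          # May falls in the second quarter
quarter=$(( (month + 2) / 3 ))   # months 1-3 -> 1, 4-6 -> 2, ...
branch="pkgsrc-${year}Q${quarter}"
echo "$branch"                   # pkgsrc-2009Q2
```

The same name is used both for the FTP directory (pkgsrc-2009Q2) and for the CVS branch tag shown in the checkout commands below.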
The tar file for the stable branch 2009Q1 is in the directory pkgsrc-2009Q1 and is also called pkgsrc-2009Q1.tar.gz.

To download a pkgsrc stable tarball, run:

$ ftp

Where pkgsrc-20xxQy is the stable branch to be downloaded, for example, "pkgsrc-2009Q1".

Then, extract it with:

$ tar -xzf pkgsrc-20xxQy.tar.gz -C /usr

This will create the directory pkgsrc/ in /usr/ and all the package source will be stored under /usr/pkgsrc/.

To download pkgsrc-current, run:

$ ftp

2.1.2. Via anonymous CVS

To fetch a specific pkgsrc stable branch, run:

$ cd /usr && cvs -q -z3 -d anoncvs@anoncvs.NetBSD.org:/cvsroot checkout -r pkgsrc-20xxQy -P pkgsrc

Where pkgsrc-20xxQy is the stable branch to be checked out, for example, "pkgsrc-2009Q1".

This will create the directory pkgsrc/ in your /usr/ directory and all the package source will be stored under /usr/pkgsrc/.

To fetch the pkgsrc current branch, run:

$ cd /usr && cvs -q -z3 -d anoncvs@anoncvs.NetBSD.org:/cvsroot checkout -P pkgsrc

Refer to the list of available CVS mirrors to choose a faster one.

If you get error messages from rsh, you need to set the CVS_RSH variable. E.g.:

$ cd /usr && env CVS_RSH=ssh cvs -q -z3 checkout -P pkgsrc

By default, CVS doesn't do things like most people would expect it to do. But there is a way to convince CVS, by creating a file called .cvsrc in your home directory and saving the following lines to it:

cvs -q -z3
checkout -P
update -dP
diff -upN
rdiff -u
release -d

2.2. Keeping pkgsrc up-to-date

The preferred way to keep pkgsrc up-to-date is via CVS (which also works if you have first installed it via a tar file). It saves bandwidth and hard disk activity, compared to downloading the tar file again.

2.2.1. Via tar files

Warning: When updating via tar files, remember to rescue your distfiles and binary packages first if they are stored inside the pkgsrc tree; you can use other than the default directories by setting the DISTDIR and PACKAGES variables. See Chapter 5, Configuring pkgsrc.

2.2.2. Via CVS

To update pkgsrc via CVS, change to the pkgsrc directory and run cvs:

$ cd /usr/pkgsrc && cvs update -dP

If you get error messages from rsh, you need to set the CVS_RSH variable as described above. E.g.:

$ cd /usr/pkgsrc && env CVS_RSH=ssh cvs up -dP

2.2.2.1. Switching between different pkgsrc branches

When updating pkgsrc, the CVS program keeps track of the branch you selected. If you want to switch from the current branch back to a stable branch, add the "-rpkgsrc-2009Q3" option.

2.2.2.2. What happens to my changes when updating?

Chapter 3. Using pkgsrc on systems other than NetBSD

Table of Contents

3.1.
Binary distribution

See Section 4.1, "Using binary packages".

3.2. Bootstrapping pkgsrc

Installing the bootstrap kit from source.

Note: The bootstrap installs a bmake tool. Use this bmake when building via pkgsrc. For examples in this guide, use bmake instead of "make".

3.3. Platform-specific notes

Here are some platform-specific notes you should be aware of.

3.3.1. Darwin (Mac OS X)

Darwin 5.x and up are supported. Before you start, you will need to download and install the Mac OS X Developer Tools from Apple's Developer Connection. See for details. Also, make sure you install X11 (an optional package included with the Developer Tools) if you intend to build packages that use the X11 Window System.

Interix

Interix 3.5 has been tested. 3.0 or 3.1 may work, but are not officially supported. (The main difference in 3.0/3.1 is lack of pthreads, but other parts of libc may also be lacking.)

Services for Unix Applications (aka SUA) is an integrated component of Windows Server 2003 R2 (5.2), Windows Vista and Windows Server 2008 (6.0), Windows 7 and Windows Server 2008 R2 (6.1). As of this writing, the SUA's Interix 6.0 (32bit) and 6.1 (64bit) subsystems have been tested. Other versions may work as well. The Interix 5.x subsystem has not yet been tested with pkgsrc.

During installation you may be asked whether to enable setuid behavior for Interix programs, and whether to make pathnames default to case-sensitive. Setuid should be enabled, and case-sensitivity MUST be enabled. (Without case-sensitivity, a large number of packages, including perl, will not build.)

Debian Interix Port has made most Interix hotfixes available for personal use from http://. In addition to the hotfix noted above, it may be necessary to disable Data Execution Prevention entirely to make Interix functional. This may happen only with certain types of CPUs; the cause is not fully understood at this time.
If gcc or other applications still segfault repeatedly after installing one of the hotfixes noted above, the following option can be added to the appropriate "boot.ini" line on the Windows boot drive:

/NoExecute=AlwaysOff

(WARNING: this will disable DEP completely, which may be a security risk if applications are often run as a user in the Administrators group!)

IRIX

If MIPSpro is used, please set your PATH to not include the location of gcc (often /usr/freeware/bin), and (important) pass the '--preserve-path' flag.

Linux

To use the Intel C++ Compiler, set the following in your mk.conf:

PKGSRC_COMPILER= icc

The default installation directory for icc is /opt/intel_cc_80, which is also the pkgsrc default. If you have installed it into a different directory, set ICCBASE in your mk.conf.

Solaris

Note that the use of GNU binutils on Solaris is not supported, as of June 2006. Whichever compiler you use, please ensure the compiler tools and your $prefix are in your PATH. This includes /usr/ccs/{bin,lib} and e.g. /usr/pkg/{bin,sbin}.

Building with Sun's SunPro compiler

To build with the SunPro compiler, set the following variables in your mk.conf file:

CC= cc
CXX= CC
CPP= cc -E
CXXCPP= CC -E

Note: The CPP setting might break some packages that use the C preprocessor for processing things other than C source code.

3.3.7.3. Building 64-bit binaries with SunPro

To build 64-bit packages, you just need to have the following lines in your mk.conf file:

PKGSRC_COMPILER= sunpro
ABI= 64

Note: This setting has been tested for the SPARC architecture. Intel and AMD machines need some more work.

Chapter 4. Using pkgsrc

4.1. Using binary packages

On the server and its mirrors, there are collections of binary packages, ready to be installed. These binary packages have been built using the default settings for the directories, that is:

* /usr/pkg for LOCALBASE, where most of the files are installed,
* /usr/pkg/etc for configuration files,
* /var for VARBASE, where those files are installed that may change after installation.
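The default locations just listed map directly onto pkgsrc configuration variables. As an illustration only, a mk.conf that moved everything under /opt/pkg might contain the fragment below (the /opt/pkg paths are invented for the example, not pkgsrc defaults):

```mk
# mk.conf -- hypothetical overrides of the default installation directories
LOCALBASE=   /opt/pkg
VARBASE=     /opt/pkg/var
PKG_DBDIR=   /opt/pkg/var/db/pkg
```

As the next paragraph notes, though, prebuilt binary packages from the FTP server assume the defaults, so a site using custom directories has to build its packages itself.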
If you cannot use these directories for whatever reasons (maybe because you're not root), you cannot use these binary packages, but have to build the packages yourself, which is explained in Section 3.2, "Bootstrapping pkgsrc".

4.1.1. Finding binary packages

To install binary packages, you first need to know from where to get them. The first place where you should look is on the main pkgsrc FTP server in the directory /pub/pkgsrc/packages.

This directory contains binary packages for multiple platforms. First, select your operating system. (Ignore the directories with version numbers attached to them, they just exist for legacy reasons.) Then, select your hardware architecture, and in the third step, the OS version and the "version" of pkgsrc.

In this directory, you often find a file called bootstrap.tar.gz which contains the package management tools. If the file is missing, it is likely that your operating system already provides those tools. Download the file and extract it in the / directory. It will create the directories /usr/pkg (containing the tools for managing binary packages) and /var/db/pkg (the database of installed packages).

4.1.2. Installing binary packages

In the directory from the last section, there is a subdirectory called All, which contains all the binary packages that are available for the platform, excluding those that may not be distributed via FTP or CDROM (depending on which medium you are using).

To install packages directly from an FTP or HTTP server, run the following commands in a Bourne-compatible shell (be sure to su to root first):

#

Chapter 8. Directory layout of the installed files

Table of Contents
8.1. File system layout in ${LOCALBASE}
8.2. File system layout in ${VARBASE}

In the default (privileged) installation, the locations are:

LOCALBASE= /usr/pkg
PKG_SYSCONFBASE= /usr/pkg/etc
VARBASE= /var
PKG_DBDIR= /var/db/pkg

In unprivileged mode (when pkgsrc has been installed as any other user), the default locations are:

LOCALBASE= ${HOME}/pkg
PKG_SYSCONFBASE= ${HOME}/pkg/etc
VARBASE= ${HOME}/pkg/var
PKG_DBDIR= ${HOME}/pkg/var/db/pkg

8.1. File system layout in ${LOCALBASE}

8.2.
File system layout in ${VARBASE}

db/pkg (the usual location of ${PKG_DBDIR})
    Contains information about the currently installed packages.
games
    Contains highscore files.
log
    Contains log files.
run
    Contains informational files about daemons that are currently running.

Chapter 9. Frequently Asked Questions

Table of Contents

This section contains hints, tips & tricks on special things in pkgsrc that we didn't find a better place for in the previous chapters, and it contains items for both pkgsrc users and developers.

9.1. Are there any mailing lists for pkg-related discussion?

The following mailing lists may be of interest to pkgsrc users:

* pkgsrc-users: This is a general purpose list for most issues regarding pkgsrc.

To subscribe, do:

% echo subscribe listname | mail majordomo@NetBSD.org

Archives for all these mailing lists are available from http://mail-index.NetBSD.org/.

9.2. Where's the pkgviews documentation?

Pkgviews is tightly integrated with buildlink. You can find a pkgviews User's guide in pkgsrc/mk/buildlink3/PKGVIEWS_UG.

9.3. Utilities for package management (pkgtools)

The directory pkgsrc/pkgtools contains a number of useful utilities for users and developers of pkgsrc, for example:

* pkgtools/lintpkgsrc: The lintpkgsrc(1) program.

9.5. How to resume transfers when fetching distfiles?

By default, resuming transfers in pkgsrc is disabled, but you can enable this feature by adding the option PKG_RESUME_TRANSFERS=YES into mk.conf. If, during a fetch step, an incomplete distfile is found, pkgsrc will try to resume it.

You can also use a different program than the default ftp(1) by changing the FETCH_USING variable. You can specify one of ftp, fetch, wget or curl. Alternatively, fetching can be disabled by using the value manual. A value of custom disables the system defaults and dependency tracking for the fetch program. In that case you have to provide FETCH_CMD, FETCH_BEFORE_ARGS, FETCH_RESUME_ARGS, FETCH_OUTPUT_ARGS, FETCH_AFTER_ARGS.
For example, if you want to use wget to download, you'll have to use something like:

FETCH_USING= wget

9.6. How can I install/use modular X.org from pkgsrc?

If you want to use modular X.org from pkgsrc instead of your system's own X11 (/usr/X11R6, /usr/openwin, ...) you will have to add the following line into mk.conf:

X11_TYPE=modular

Note: The DragonFly operating system defaults to using modular X.org from pkgsrc.

9.7. How do I tell make fetch to do passive FTP?

Add the following line to your mk.conf file:

PASSIVE_FETCH=1

Having that option present will prevent /usr/bin/ftp from falling back to active transfers.

9.14. How do I check for security vulnerabilities in installed packages?

To check for vulnerabilities, refer to the following two tools (installed as part of the pkgtools/pkg_install package):

1. pkg_admin fetch-pkg-vulnerabilities, an easy way to download a list of the security vulnerabilities information. This list is kept up to date by the pkgsrc security team, and is distributed from the NetBSD ftp server:

2. pkg_admin audit, an easy way to audit the current machine, checking each known vulnerability. If a vulnerable package is installed, it will be shown by output to stdout, including a description of the type of vulnerability, and a URL containing more information.

Use of these tools is strongly recommended! After "pkg_install" is installed, please read the package's message, which you can get by running pkg_info -D pkg_install.

If this package is installed, pkgsrc builds will use it to perform a security check before building any package. See Section 5.2, "Variables affecting the build process" for ways to control this check.

9.15. Why do some packages ignore my CFLAGS?

When you add your own preferences to the CFLAGS variable in your mk.conf, these flags are passed in environment variables to the ./configure scripts and to make(1). Some package authors ignore the CFLAGS from the environment variable by overriding them in the Makefiles of their package. Currently there is no solution to this problem.
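One way to see whether a given package hard-codes its own CFLAGS is to grep its extracted Makefiles. A sketch with a mocked-up source tree (the wrksrc directory and its Makefile.in below are invented stand-ins for a real ${WRKSRC}):

```shell
# Fake a package's extracted source with a hard-coded CFLAGS line
mkdir -p wrksrc
printf 'CFLAGS = -O2\nall:\n\ttrue\n' > wrksrc/Makefile.in

# Any file listed here overrides CFLAGS from the environment
overrides=$(grep -l '^CFLAGS' wrksrc/Makefile.in wrksrc/Makefile 2>/dev/null)
echo "$overrides"   # wrksrc/Makefile.in
```

Files that show up in the output are the ones to inspect (and, usually, the lines to remove), as described next.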
If you really need the package to use your CFLAGS you should run make patch in the package directory and then inspect any Makefile and Makefile.in for whether they define CFLAGS explicitly. Usually you can remove these lines. But be aware that some "smart" programmers write such bad code that it only works for the specific combination of CFLAGS they have chosen.

9.16. A package does not build. What shall I do?

1. Make sure that your copy of pkgsrc is consistent. A case that occurs often is that people only update pkgsrc in parts, because of performance reasons. Since pkgsrc is one large system, not a collection of many small systems, there are sometimes changes that only work when the whole pkgsrc tree is updated.

2. Make sure that you don't have any CVS conflicts. Search for "<<<<<<" or ">>>>>>" in all your pkgsrc files.

3. Make sure that you don't have old copies of the packages extracted. Run make clean clean-depends to verify this.

4. If the problem still exists, write a mail to the pkgsrc-users mailing list.

9.17. What does "Makefile appears to contain unresolved cvs/rcs/??? merge conflicts" mean?

You have modified a file from pkgsrc, and someone else has modified that same file afterwards in the CVS repository. Both changes are in the same region of the file, so when you updated pkgsrc, the cvs command marked the conflicting changes in the file. Because of these markers, the file is no longer a valid Makefile.

Have a look at that file, and if you don't need your local changes anymore, you can remove that file and run cvs -q update -dP in that directory to download the current version.

Part II. The pkgsrc developer's guide

This part of the book deals with creating and modifying packages. It starts with a "HOWTO"-like guide on creating a new package. The remaining chapters are more like a reference manual for pkgsrc.

Table of Contents

Chapter 10. Creating a new pkgsrc package from scratch

Table of Contents
10.1. Common types of packages
10.1.1.
Perl modules 10.1.2. KDE applications 10.1.3. Python modules and programs 10.2. Examples 10.2.1. How the www/nvu package came into pkgsrc When you find a package that is not yet in pkgsrc, you most likely have a URL from where you can download the source code. Starting with this URL, creating a package involves only a few steps. 1. First, install the packages pkgtools/url2pkg and pkgtools/pkglint. 2. Then, choose one of the top-level directories as the category in which you want to place your package. You can also create a directory of your own (maybe called local). In that category directory, create another directory for your package and change into it. 3. Run the program url2pkg, which will ask you for a URL. Enter the URL of the distribution file (in most cases a .tar.gz file) and watch how the basic ingredients of your package are created automatically. The distribution file is extracted automatically to fill in some details in the Makefile that would otherwise have to be done manually. 4. Examine the extracted files to determine the dependencies of your package. Ideally, this is mentioned in some README file, but things may differ. For each of these dependencies, look where it exists in pkgsrc, and if there is a file called buildlink3.mk in that directory, add a line to your package Makefile which includes that file just before the last line. If the buildlink3.mk file does not exist, it must be created first. The buildlink3.mk file makes sure that the package's include files and libraries are provided. If you just need binaries from a package, add a DEPENDS line to the Makefile, which specifies the version of the dependency and where it can be found in pkgsrc. This line should be placed in the third paragraph. If the dependency is only needed for building the package, but not when using it, use BUILD_DEPENDS instead of DEPENDS. Your package may then look like this: [...] 
BUILD_DEPENDS+= lua>=5.0:../../lang/lua DEPENDS+= screen-[0-9]*:../../misc/screen DEPENDS+= screen>=4.0:../../misc/screen [...] .include "../../category/package/buildlink3.mk" .include "../../devel/glib2/buildlink3.mk" .include "../../mk/bsd.pkg.mk" 5. Run pkglint to see what things still need to be done to make your package a "good" one. If you don't know what pkglint's warnings want to tell you, try pkglint --explain or pkglint -e, which outputs additional explanations. 6. In many cases the package is not yet ready to build. You can find instructions for the most common cases in the next section, Section 10.1, "Common types of packages". After you have followed the instructions over there, you can hopefully continue here. 7. Run bmake clean to clean the working directory from the extracted files. Besides these files, a lot of cache files and other system information has been saved in the working directory, which may become wrong after you edited the Makefile. 8. Now, run bmake to build the package. For the various things that can go wrong in this phase, consult Chapter 19, Making your package work. 9. When the package builds fine, the next step is to install the package. Run bmake install and hope that everything works. 10. Up to now, the file PLIST, which contains a list of the files that are installed by the package, is nearly empty. Run bmake print-PLIST >PLIST to generate a probably correct list. Check the file using your preferred text editor to see if the list of files looks plausible. 11. Run pkglint again to see if the generated PLIST contains garbage or not. 12. When you ran bmake install, the package has been registered in the database of installed files, but with an empty list of files. To fix this, run bmake deinstall and bmake install again. Now the package is registered with the list of files from PLIST. 13. Run bmake package to create a binary package from the set of installed files. 10.1. Common types of packages 10.1.1. 
Perl modules

Simple Perl modules are handled automatically by url2pkg, including dependencies.

10.1.2. KDE applications

KDE applications should always include meta-pkgs/kde3/kde3.mk, which contains numerous settings that are typical of KDE packages.

10.1.3. Python modules and programs

Python modules and programs packages are easily created using a set of predefined variables.

Most Python packages use either "distutils" or easy-setup ("eggs"). If the software uses "distutils", set the PYDISTUTILSPKG variable to "yes" so pkgsrc will make use of this framework. "distutils" uses a script called setup.py; if the "distutils" driver is not called setup.py, set the PYSETUP variable to the name of the script.

If the default Python versions are not supported by the software, set the PYTHON_VERSIONS_ACCEPTED variable to the Python versions the software is known to work with, from the most recent to the oldest one, e.g. PYTHON_VERSIONS_ACCEPTED= 25 24.

If the packaged software, whether it is an application or a module, is egg-aware, you only need to include "../../lang/python/egg.mk".

In order to correctly set the path to the Python interpreter, use the REPLACE_PYTHON variable and set it to the list of files that must be corrected. For example:

REPLACE_PYTHON= ${WRKSRC}/*.py

10.2. Examples

10.2.1. How the www/nvu package came into pkgsrc

10.2.1.1. The initial package

Looking at the file pkgsrc/doc/TODO, I saw that the "nvu" package had not yet been imported into pkgsrc. As the description says it has to do with the web, the obvious choice for the category is "www".

$ mkdir www/nvu
$ cd www/nvu

The web site says that the sources are available as a tar file, so I fed that URL to the url2pkg program:

$ url2pkg

My editor popped up, and I added a PKGNAME line below the DISTNAME line, as the package name should not have the word "sources" in it. I also filled in the MAINTAINER, HOMEPAGE and COMMENT fields.
Then the package Makefile looked like that: # $NetBSD$ # DISTNAME= nvu-1.0-sources PKGNAME= nvu-1.0 CATEGORIES= www MASTER_SITES= EXTRACT_SUFX= .tar.bz2 MAINTAINER= rillig@NetBSD.org HOMEPAGE= COMMENT= Web Authoring System # url2pkg-marker (please do not remove this line.) .include "../../mk/bsd.pkg.mk" Then, I quit the editor and watched pkgsrc downloading a large source archive: url2pkg> Running "make makesum" ... => Required installed package digest>=20010302: digest-20060826 found => Fetching nvu-1.0-sources.tar.bz2 Requesting 100% |*************************************| 28992 KB 150.77 KB/s00:00 ETA 29687976 bytes retrieved in 03:12 (150.77 KB/s) url2pkg> Running "make extract" ... => Required installed package digest>=20010302: digest-20060826 found => Checksum SHA1 OK for nvu-1.0-sources.tar.bz2 => Checksum RMD160 OK for nvu-1.0-sources.tar.bz2 work.bacc -> /tmp/roland/pkgsrc/www/nvu/work.bacc ===> Installing dependencies for nvu-1.0 ===> Overriding tools for nvu-1.0 ===> Extracting for nvu-1.0 url2pkg> Adjusting the Makefile. Remember to correct CATEGORIES, HOMEPAGE, COMMENT, and DESCR when you're done! Good luck! (See pkgsrc/doc/pkgsrc.txt for some more help :-) 10.2.1.2. Fixing all kinds of problems to make the package work Now that the package has been extracted, let's see what's inside it. The package has a README.txt, but that only says something about mozilla, so it's probably useless for seeing what dependencies this package has. But since there is a GNU configure script in the package, let's hope that it will complain about everything it needs. $ bmake => Required installed package digest>=20010302: digest-20060826 found => Checksum SHA1 OK for nvu-1.0-sources.tar.bz2 => Checksum RMD160 OK for nvu-1.0-sources.tar.bz2 ===> Patching for nvu-1.0 ===> Creating toolchain wrappers for nvu-1.0 ===> Configuring for nvu-1.0 [...] configure: error: Perl 5.004 or higher is required. [...] WARNING: Please add USE_TOOLS+=perl to the package Makefile. [...] 
That worked quite well. So I opened the package Makefile in my editor, and since it already has a USE_TOOLS line, I just appended "perl" to it. Since the dependencies of the package have changed now, and since a perl wrapper is automatically installed in the "tools" phase, I need to build the package from scratch.

$ bmake clean
===> Cleaning for nvu-1.0
$ bmake
[...]
*** /tmp/roland/pkgsrc/www/nvu/work.bacc/.tools/bin/make is not \
GNU Make.  You will not be able to build Mozilla without GNU Make.
[...]

So I added "gmake" to the USE_TOOLS line and tried again (from scratch).

[...]
checking for GTK - version >= 1.2.0... no
*** Could not run GTK test program, checking why...
[...]

Now to the other dependencies. The first question is: Where is the GTK package hidden in pkgsrc?

$ echo ../../*/gtk*
[many packages ...]
$ echo ../../*/gtk
../../x11/gtk
$ echo ../../*/gtk2
../../x11/gtk2
$ echo ../../*/gtk2/bui*
../../x11/gtk2/buildlink3.mk

The first try was definitely too broad. The second one had exactly one result, which is very good. But there is one pitfall with GNOME packages. Before GNOME 2 had been released, there were already many GNOME 1 packages in pkgsrc. To be able to continue to use these packages, the GNOME 2 packages were imported as separate packages, and their names usually have a "2" appended. So I checked whether this was the case here, and indeed it was. Since the GTK2 package has a buildlink3.mk file, adding the dependency is very easy. I just inserted an .include line before the last line of the package Makefile, so that it now looks like this:

[...]
.include "../../x11/gtk2/buildlink3.mk"
.include "../../mk/bsd.pkg.mk"

After another bmake clean && bmake, the answer was:

[...]
checking for gtk-config... /home/roland/pkg/bin/gtk-config
configure: error: Test for GTK failed.
[...]

In this particular case, the assumption that "every package prefers GNOME 2" had been wrong.
The first of the lines above told me that this package really wanted to have the GNOME 1 version of GTK. If the package had looked for GTK2, it would have looked for pkg-config instead of gtk-config. So I changed the x11/gtk2 to x11/gtk in the package Makefile, and tried again.

[...]
cc -o xpidl.o -c -DOSTYPE=\"NetBSD3\" -DOSARCH=\"NetBSD\" -I../../../dist/include/xpcom -I../../../dist/include -I/tmp/roland/pkgsrc/www/nvu/work.bacc/mozilla/dist/include/nspr -I/usr/X11R6/include -fPIC -DPIC -I/home/roland/pkg/include -I/usr/include -I/usr/X11R6/include -Wall -W -Wno-unused -Wpointer-arith -Wcast-align -Wno-long-long -pedantic -O2 -I/home/roland/pkg/include -I/usr/include -Dunix -pthread -pipe -DDEBUG -D_DEBUG -DDEBUG_roland -DTRACING -g -I/home/roland/pkg/include/glib/glib-1.2 -I/home/roland/pkg/lib/glib/include -I/usr/pkg/include/orbit-1.0 -I/home/roland/pkg/include -I/usr/include -I/usr/X11R6/include -include ../../../mozilla-config.h -DMOZILLA_CLIENT -Wp,-MD,.deps/xpidl.pp xpidl.c
In file included from xpidl.c:42:
xpidl.h:53:24: libIDL/IDL.h: No such file or directory
In file included from xpidl.c:42:
xpidl.h:132: error: parse error before "IDL_ns"
[...]

The package still does not find all of its dependencies. Now the question is: Which package provides the libIDL/IDL.h header file?

$ echo ../../*/*idl*
../../devel/py-idle ../../wip/idled ../../x11/acidlaunch
$ echo ../../*/*IDL*
../../net/libIDL

Let's take the one from the second try. So I included the ../../net/libIDL/buildlink3.mk file and tried again. But the error didn't change. After digging through some of the code, I concluded that the build process of the package was broken and couldn't have ever worked, but since the Mozilla source tree is quite large, I didn't want to fix it.
So I added the following to the package Makefile and tried again:

CPPFLAGS+=		-I${BUILDLINK_PREFIX.libIDL}/include/libIDL-2.0
BUILDLINK_TRANSFORM+=	-l:IDL:IDL-2

The latter line is needed because the package expects the library libIDL.so, but only libIDL-2.so is available. So I told the compiler wrapper to rewrite that on the fly. The next problem was related to a recent change of the FreeType interface. I looked up in www/seamonkey which patch files were relevant for this issue and copied them to the patches directory. Then I retried, fixed the patches so that they applied cleanly and retried again. This time, everything worked. 10.2.1.3. Installing the package

$ bmake CHECK_FILES=no install
[...]
$ bmake print-PLIST >PLIST
$ bmake deinstall
$ bmake install

Chapter 11. Package components - files, directories and contents Whenever you're preparing a package, there are a number of files involved which are described in the following sections. 11.1. only need to provide it if DISTNAME (which is the default) is not a good name for the package in pkgsrc.. * SVR4_PKGNAME is the name of the package file to create if the PKGNAME isn't unique on a SVR4 system. The default is PKGNAME, which may be shortened when you use pkgtools/gensolpkg. Only add SVR4_PKGNAME if PKGNAME does not produce a unique package name on a SVR4 system. The length of SVR4_PKGNAME is limited to 5 characters. * Section 17.5, "The fetch phase". below) if not found locally. The third section contains the following variables. * MAINTAINER is the email address of the person who feels responsible for this package, and who is most likely to look at problems or questions regarding this package which have been reported with Section 19.6.7, "Packages installing info files". 11.2. distinfo The distinfo file contains the message digest, or checksum, of each distfile needed for the package.
This ensures that the distfiles retrieved from the Internet have not been corrupted during transfer or altered by a malign force to introduce a security hole. Due to concerns about weaknesses of digest algorithms, all distfiles are protected using both SHA1 and RMD160 message digests, as well as the file size. The distinfo file also contains the checksums for all the patches found in the patches directory (see Section 11.3, "patches/*"). To regenerate the distinfo file, use the make makedistinfo or make mdi command. Some packages have different sets of distfiles depending on the platform, for example lang/openjdk7. These are kept in the same distinfo file, and care should be taken when upgrading such a package to ensure distfile information is not lost. 11.3. patches/* Many packages still don't work out of the box on the various platforms that are supported by pkgsrc. Therefore, a number of custom patch files are needed to make the package work. These patch files are found in the patches/ directory. In the patch phase, these patches are applied to the files in the WRKSRC directory after extracting them, in alphabetic order. 11.3.1.. In all, the patch should be commented so that any developer who knows the code of the application can make some use of the patch. Special care should be taken for the upstream developers, since we generally want them to accept our patches, so we have less work in the future. 11.3.2. Creating patch files One important thing to mention is to pay attention that no RCS IDs get stored in the patch files, as these will cause problems when later checked into the NetBSD Section 11.2, "distinfo".. 11.3.3. Sources where the patch files come from If you want to share patches between multiple packages in pkgsrc, e.g. because they use the same distfiles, set PATCHDIR to the path where the patch files can be found, e.g.:

PATCHDIR= ${.CURDIR}/...

11.3.4.: Table 11.1.
Patching examples
+-------------------------------------------------------------------------------------------+
| Where   | Incorrect                | Correct                                              |
|---------+--------------------------+------------------------------------------------------|
|         |case ${target_os} in      |                                                      |
|configure|netbsd*) have_kvm=yes ;;  |AC_CHECK_LIB(kvm, kvm_open, have_kvm=yes, have_kvm=no)|
|script   |*) have_kvm=no ;;         |                                                      |
|         |esac                      |                                                      |
|---------+--------------------------+------------------------------------------------------|
|C source |#if defined(__NetBSD__)   |#if defined(HAVE_SYS_EVENT_H)                         |
|file     |# include <sys/event.h>   |# include <sys/event.h>                               |
|         |#endif                    |#endif                                                |
|---------+--------------------------+------------------------------------------------------|
|         |int                       |int                                                   |
|         |monitor_file(...)         |monitor_file(...)                                     |
|         |{                         |{                                                     |
|         |#if defined(__NetBSD__)   |#if defined(HAVE_KQUEUE)                              |
|C source |    int fd = kqueue();    |    int fd = kqueue();                                |
|file     |    ...                   |    ...                                               |
|         |#else                     |#else                                                 |
|         |    ...                   |    ...                                               |
|         |#endif                    |#endif                                                |
|         |}                         |}                                                     |
+-------------------------------------------------------------------------------------------+
For more information, please read the Making packager-friendly software article (part 1, part 2). It summarizes multiple details on how to make software easier to package; all the suggestions in it were collected from our experience in pkgsrc work, so they are possibly helpful when creating patches too. 11.3.5.), filling! 11.4. Chapter 13, PLIST issues for more information. 11.5. Optional files 11.5.1. Files affecting the binary package. See also Section 15.1, "Files and directories outside the installation prefix"..PREFIX, X11BASE, PKG_SYSCONFDIR, ROOT_GROUP, and ROOT_USER. You can display different or additional files by setting the MESSAGE_SRC variable. Its default is MESSAGE, if the file exists. ALTERNATIVES FIXME: There is no documentation on the alternatives framework. 11.5.2. Chapter 14, Buildlink methodology). hacks.mk This file contains workarounds for compiler bugs and similar things.
It is included automatically by the pkgsrc infrastructure, so you don't need an extra .include line for it. options.mk This file contains the code for the package-specific options (see Chapter 16, Options handling) that can be selected by the user. If a package has only one or two options, it is equally acceptable to put the code directly into the Makefile. 11.5.3. Files affecting nothing at all README* These files play no part in the creation of a package and are thus purely informative to the package developer. TODO This file contains things that need to be done to make the package even better. 11.6.. 11. If you want to share files in this way with other packages, set the FILESDIR variable to point to the other package's files directory, e.g.:

FILESDIR=${.CURDIR}/../xemacs/files

Chapter 12.. 12.1. Caveats * When you are creating a file as a target of a rule, always write the data to a temporary file first and finally rename that file. Otherwise an error might occur in the middle of generating the file, and when the user runs make(1) for the second time, the file exists and will not be regenerated properly. Example:

wrong:
        @echo "line 1" > ${.TARGET}
        @echo "line 2" >> ${.TARGET}
        @false

correct:
        @echo "line 1" > ${.TARGET}.tmp
        @echo "line 2" >> ${.TARGET}.tmp
        @false
        @mv ${.TARGET}.tmp ${.TARGET}

When you run make wrong twice, the file wrong will exist, although there was an error message in the first run. On the other hand, running make correct gives an error message twice, as expected. You might remember that make(1) sometimes removes ${.TARGET} in case of error, but this only happens when it is interrupted, for example by pressing ^C. This does not happen when one of the commands fails (like false(1) above). 12slash. 12. 12.3. Code snippets This section presents you with some code snippets you should use in your own code. If you don't find anything appropriate here, you should test your code and add it here. 12.3. 12.3. 12.3.3.
Passing variables to a shell command Sometimes you may want to print an arbitrary string. There are many ways to get it wrong and only a few that can handle every nastiness.

STRING=         foo bar < > * `date` $$HOME ' "
EXT_LIST=       string=${STRING:Q} x=second\ item

all:
        echo ${STRING}                          # 1
        echo "${STRING}"                        # 2
        echo "${STRING:Q}"                      # 3
        echo ${STRING:Q}                        # 4
        echo x${STRING:Q} | sed 1s,.,,          # 5
        printf "%s\\n" ${STRING:Q}""            # 6

Example 6 also works with every string and is the light-weight solution, since it does not involve a pipe, which has its own problems. The EXT_LIST does not need to be quoted because the quoting has already been done when adding elements to the list. As internal lists shall not be passed to the shell, there is no example for it. 12.3}"". 12.3 13.!). 13.1. RCS ID Be sure to add an RCS ID line as the first thing in any PLIST file you write:

@comment $NetBSD$

13.2. Semi-automatic PLIST generation You can use the make print-PLIST command to output a PLIST that matches any new files since the package was extracted. See Section 17.17, "Other helpful targets" for more information on this target. 13.3. Tweaking output of make print-PLIST If you have used any of the *-dirs packages, as explained in Section 13; } 13 11.5, "Optional files"):

PLIST_SUBST+= SOMEVAR="somevalue"

This replaces all occurrences of "${SOMEVAR}" in the PLIST with "somevalue". The PLIST_VARS variable can be used to simplify the common case of conditionally including some PLIST entries. It can be done by adding PLIST_VARS+=foo and setting the corresponding PLIST.foo variable to yes if the entry should be included. This will substitute "${PLIST.foo}" in the PLIST with either "" or "@comment ". For example, in Makefile:

PLIST_VARS+=    foo
.if condition
PLIST.foo=      yes
.endif

And then in PLIST:

@comment $NetBSD$
bin/bar
man/man1/bar.1
${PLIST.foo}bin/foo
${PLIST.foo}man/man1/foo.1
${PLIST.foo}share/bar/foo.data
${PLIST.foo}@dirrm share/bar

13. 13 the order of things is important.
The default for PLIST_SRC is ${PKGDIR}/PLIST. 13 13.8. Sharing directories between packages A "shared directory" is a directory where multiple (and unrelated) packages install files. These directories were problematic because you had to add special tricks in the PLIST to conditionally remove them, or have some centralized package handle them. In pkgsrc, it is now easy: Each package should create directories and install files as needed; pkg_delete will remove any directories left empty after uninstalling a package. If a package needs an empty directory to work, create the directory during installation as usual, and also add an entry to the PLIST:

@pkgdir path/to/empty/directory

Chapter._API. The user can set MOTIF_TYPE to "dt", "lesstif", or "openmotif" to choose which Motif version will be used. * oss.buildlink3.mk defines several variables that may be used by packages that use the Open Sound System (OSS) API. * pgsql.buildlink3.mk will accept either Postgres 8.0, 8.1, or 8.2. 14.2.1. Anatomy of a buildlink3.mk file The following real-life example buildlink3.mk is taken from pkgsrc/graphics/tiff:

# $NetBSD: buildlink3.mk,v 1.16 2009/03/20 19:24:45 joerg Exp $

BUILDLINK_TREE+=        tiff

.if !defined(TIFF_BUILDLINK3_MK)
TIFF_BUILDLINK3_MK:=

BUILDLINK_API_DEPENDS.tiff+=    tiff>=3.6.1
BUILDLINK_ABI_DEPENDS.tiff+=    tiff>=3.7.2nb1
BUILDLINK_PKGSRCDIR.tiff?=      ../../graphics/tiff

.include "../../devel/zlib/buildlink3.mk"
.include "../../graphics/jpeg/buildlink3.mk"
.endif  # TIFF_BUILDLINK3_MK

BUILDLINK_TREE+=        -tiff

The header and footer manipulate BUILDLINK_TREE, which is common across all buildlink3.mk files and is used to track the dependency tree. The main section is protected from multiple inclusion and controls how the dependency on pkg is added.
Several important variables are set in the section: * BUILDLINK_API_DEPENDS.pkg is the actual dependency recorded in the installed package; this should always be set using += to ensure that we're appending to any pre-existing list of values. This variable should be set to the first version of the package that had a backwards-incompatible API change. * BUILDLINK_FNAME_TRANSFORM.pkg (not shown above) is a list of sed arguments used to transform the name of the source filename into a destination filename, e.g. -e "s|/curses.h|/ncurses.h|g". This section can additionally include any buildlink3.mk needed for pkg's library dependencies. Including these buildlink3.mk files means that the headers and libraries for these dependencies are also symlinked into ${BUILDLINK_DIR} whenever the pkg buildlink3.mk file is included. Dependencies are only added for directly included buildlink3.mk files. 14.2.2. Updating BUILDLINK_API_DEPENDS.pkg and BUILDLINK_ABI_DEPENDS.pkg in buildlink3.mk files These two variables differ in that one describes source compatibility (API) and the other binary compatibility (ABI). The difference is that a change in the API breaks compilation of programs while changes in the ABI stop compiled programs from running. Changes to the BUILDLINK_API_DEPENDS.pkg variable in a buildlink3.mk file happen very rarely. One possible reason is that all packages depending on this already need a newer version. In case it is bumped, see the description below. The most common example of an ABI change is that the major version of a shared library is increased. In this case, BUILDLINK_ABI_DEPENDS.pkg should be adjusted to require at least the new package version. Then the packages that depend on this package need their PKGREVISIONs increased and, if they have buildlink3.mk files, their BUILDLINK_ABI_DEPENDS.pkg adjusted, too. This is needed so pkgsrc will require the correct package dependency and not settle for an older one when building the source.
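To make the bump procedure concrete, here is a small sketch (the version numbers are hypothetical, chosen only for illustration) of what such an ABI bump would look like in a buildlink3.mk file after a shared library's major version has increased:

```make
# Before the update: the package shipped libtiff.so.4, so the
# recorded binary-compatibility requirement was:
#BUILDLINK_ABI_DEPENDS.tiff+=  tiff>=3.7.2nb1

# After an update that ships libtiff.so.5: require at least the
# first package version that installs the new major version.
BUILDLINK_ABI_DEPENDS.tiff+=   tiff>=3.8.0
```

Every package depending on tiff would then get its PKGREVISION increased in its own Makefile, so that its binary packages are rebuilt against the new library.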
See Section 19.1.6, "Handling dependencies" for more information about dependencies on other packages, including the BUILDLINK_ABI_DEPENDS and ABI_DEPENDS definitions. Please take careful consideration before adjusting BUILDLINK_API_DEPENDS.pkg or BUILDLINK_ABI_DEPENDS.pkg as we don't want to cause unneeded package deletions and rebuilds. In many cases, new versions of packages work just fine with older dependencies. Also, it is not necessary to set BUILDLINK_ABI_DEPENDS.pkg when it is identical to BUILDLINK_API_DEPENDS.pkg.._API_DEPENDS.foo} . if !empty(USE_BUILTIN.foo:M[yY][eE][sS]) USE_BUILTIN.foo!= \ ${PKG_ADMIN} pmatch '${_depend_}' ${BUILTIN_PKG.foo} \ && ${ECHO} "yes" || ${ECHO} "no" ._API_DEPENDS.pkg. This is typically done by comparing BUILTIN_PKG.pkg against each of the dependencies in BUILDLINK_API). 15.. 15 installation scripts. The generic installation scripts are shell scripts that can contain arbitrary code. The list of scripts to execute is taken from the INSTALL_FILE variable, which defaults to INSTALL. A similar variable exists for package removal (DEINSTALL_FILE, whose default is DEINSTALL). These scripts can run arbitrary commands, so they have the potential to create and manage files anywhere in the file system. Using these general installation files is not recommended, but may be needed in some special cases. One reason for avoiding them is that the user has to trust the packager that there is no unwanted or simply erroneous code included in the installation script. Also, previously there were many similar scripts for the same functionality, and fixing a common error involved finding and changing all of them. The pkginstall framework offers another, standardized way. It provides generic scripts to abstract the manipulation of such files and directories based on variables set in the package's Makefile. The rest of this section describes these variables.
For example: MAKE_DIRS_PERMS+= ${VARBASE}/foo/private ${ROOT_USER} ${ROOT_GROUP} 0700 The difference between the two is exactly the same as their non-PERMS counterparts. 15. 15. 15}.. 15.2.2. Telling the software where configuration files are Given that pkgsrc (and users!) expect configuration files to be in a- generated files: CONFIGURE_ARGS+= --sysconfdir=${PKG_SYSCONFDIR} Note that this specifies where the package has to look for its configuration files, not where they will be originally installed (although the difference is never explicit, unfortunately). 15 15. 15.2.4. Disabling handling of configuration files The automatic copying of config files can be toggled by setting the environment variable PKG_CONFIG prior to package installation. 15.3. System startup scripts System startup scripts are special files because they must be installed in a place known by the underlying OS, usually outside the installation prefix. Therefore, the same rules described in Section 15. 15.3.1. Disabling handling of system startup scripts. 15.4. System users and groups If a package needs to create special users and/or groups during installation, it can do so by using the pkginstall framework. Users can be created by adding entries to the PKG_USERS variable. Each entry has the following syntax: user:group Further specification of user details may be done by setting per-user variables. PKG_UID.user is the numeric UID for the user. PKG_GECOS.user is the user's description or comment. PKG_HOME.user is the user's home directory, and defaults to /nonexistent if not specified. PKG_SHELL.user is the user's shell, and defaults to /sbin/nologin if not specified. Similarly, groups can be created by adding entries to the PKG_GROUPS variable, whose syntax is: group The numeric GID of the group may be set by defining PKG_GID.group. 
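Putting the user and group variables together, a package that needs a dedicated daemon account might contain lines like the following (the "food" account name and the numeric IDs are invented for illustration):

```make
# Create group "food" and user "food", a member of group "food"
PKG_GROUPS=     food
PKG_USERS=      food:food

# Optional per-user/per-group details; the home directory defaults
# to /nonexistent and the shell to /sbin/nologin when unset
PKG_GID.food=   437
PKG_UID.food=   437
PKG_GECOS.food= Foo daemon user
PKG_HOME.food=  ${VARBASE}/foo
```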
If a package needs to create the users and groups at an earlier stage, then it can set USERGROUP_PHASE to either configure or build to indicate the phase before which the users and groups are created. In this case, the numeric UIDs and GIDs of the created users and groups are automatically hardcoded into the final installation scripts. 15 15.5.1. Disabling shell registration The automatic registration of shell interpreters can be disabled by the administrator by setting the PKG_REGISTER_SHELLS environment variable to NO. 15. For example, in fonts/dbz-ttf:

FONTS_DIRS.ttf= ${PREFIX}/lib/X11/fonts/TTF

15.6.1. Disabling automatic update of the fonts databases The automatic update of fonts databases can be disabled by the administrator by setting the PKG_UPDATE_FONTS_DB environment variable to NO. Chapter 16. Options handling Table of Contents 16.1. Global default options 16.2. Converting packages to use bsd.options.mk 16.3. Option Names 16.4. Determining the options of dependencies. There are two broad classes of behaviors that one might want to control via options. One is whether some particular feature is enabled in a program that will be built anyway, often by including or not including a dependency on some other package. The other is whether or not an additional program will be built as part of the package. Generally, it is better to make a split package for such additional programs instead of using options, because it enables binary packages to be built which can then be added separately. For example, the foo package might have minimal dependencies (those packages without which foo doesn't make sense), and then the foo-gfoo package might include the GTK frontend program gfoo. This is better than including a gtk option to foo that adds gfoo, because either that option is default, in which case binary users can't get foo without gfoo, or not default, in which case they can't get gfoo.
With split packages, they can install foo without having GTK, and later decide to install gfoo (pulling in GTK at that time). This is an advantage to source users too, avoiding the need for rebuilds. Plugins with widely varying dependencies should usually be split instead of options. It is often more work to maintain split packages, especially if the upstream package does not support this. The decision of split vs. option should be made based on the likelihood that users will want or object to the various pieces, the size of the dependencies that are included, and the amount of work. A further consideration is licensing. Non-free parts, or parts that depend on non-free dependencies (especially plugins) should almost always be split if feasible. 16.1. Global default options Global default options are listed in PKG_DEFAULT_OPTIONS, which is a list of the options that should be built into every package if that option is supported. This variable should be set in mk.conf.

} instead."
.endif

.include "../../mk/bsd.options.mk"

# Package-specific option-handling

###
### FOO support
###
.if !empty(PKG_OPTIONS:Mwibble-foo)
CONFIGURE_ARGS+=        --enable-foo
.endif

###
### LDAP support
###
.if !empty(PKG_OPTIONS:Mldap)
.  include "../../databases/openldap-client/buildlink3.mk"
CONFIGURE_ARGS+=        --enable-ldap=${BUILDLINK_PREFIX.openldap-client}
.endif

not defined at the point where the options are processed. 2. PKG_SUPPORTED_OPTIONS is a list of build options supported by the package. 3. PKG_OPTIONS_OPTIONAL_GROUPS is a list of names of groups of mutually exclusive options. The options in each group are listed in PKG_OPTIONS_GROUP.groupname. The most specific setting of any option from the group takes precedence over all other options in the group. Options from the groups will be automatically added to PKG_SUPPORTED_OPTIONS.). 16.4.
Determining the options of dependencies When writing buildlink3.mk files, it is often necessary to list different dependencies based on the options with which the package was built. For querying these options, the file pkgsrc/mk/pkg-build-options.mk should be used. A typical example looks like this:

pkgbase := libpurple
.include "../../mk/pkg-build-options.mk"

.if !empty(PKG_BUILD_OPTIONS.libpurple:Mdbus)
...
.endif

Including pkg-build-options.mk here will set the variable PKG_BUILD_OPTIONS.libpurple to the build options of the libpurple package, which can then be queried like PKG_OPTIONS in the options.mk file. See the file pkg-build-options.mk for more details. Chapter 17. The build process Table of Contents 17.1. Introduction This chapter gives a detailed description on how a package is built. Building a package is separated into different phases (for example fetch, build, install), all of which are described in the following sections. Each phase is split into so-called stages, which take the name of the containing phase, prefixed by one of pre-, do- or post-. (Examples are pre-configure, post-build.) Most of the actual work is done in the do-* stages. Never override the regular targets (like fetch); if you have to, override the do-* ones instead. To get more details about what is happening at each step, you can set the PKG_VERBOSE variable, or the PATCH_DEBUG variable if you are just interested in more details about the patch step. 17 11.3, "patches/*" and Section 19}". The name LOCALBASE stems from FreeBSD, which installed all packages in /usr/local. As pkgsrc leaves /usr/local for the system administrator, this variable is a misnomer. * X11BASE is where the actual X11 distribution (from xsrc, etc.) is installed. When looking for standard X11 includes (not those installed by a package),. 17.3. Directories used during the build process When building a package, various directories are used: PKGDIR This is an absolute pathname that points to the current package.
PKGPATH This is a pathname relative to PKGSRCDIR that points to the current package. WRKDIR This is an absolute pathname pointing to the directory where all work takes place. The distfiles are extracted to this directory. The CREATE_WRKDIR_SYMLINK definition takes either the value yes or no and defaults to no. It indicates whether a symbolic link to the WRKDIR is to be created in the pkgsrc entry's directory. If users would like to have their pkgsrc trees behave in a read-only manner, then the value of CREATE_WRKDIR_SYMLINK should be set to no. 17. 17.5. The fetch phase The first step in building a package is to fetch the distribution files (distfiles) from the sites that are providing them. This is the task of the fetch phase. 17.5.1. What to fetch and where to get it from In simple cases, MASTER_SITES defines all URLs from where the distfile, whose name is derived from the DISTNAME variable, is fetched. The more complicated cases are described below. The variable DISTFILES specifies the list of distfiles that have to be fetched. Its value defaults to ${DISTNAME}${EXTRACT_SUFX}, so that most packages don't need to define it at all. EXTRACT_SUFX is .tar.gz by default, but can be changed freely. Note that if your package requires additional distfiles to the default one, you cannot just append the additional filenames using the += operator; instead you have to write, for example:

DISTFILES= ${DISTNAME}${EXTRACT_SUFX} additional-files.tar.gz

Each distfile is fetched from a list of sites, usually MASTER_SITES. If the package has multiple DISTFILES or multiple PATCHFILES from different sites, you can set SITES.distfile to the list of URLs where the file distfile (including the suffix) can be found.

DISTFILES= ${DISTNAME}${EXTRACT_SUFX}
DISTFILES+= foo-file.tar.gz
SITES.foo-file.tar.gz= \
 \

When actually fetching the distfiles, each item from MASTER_SITES or:

MASTER_SITES=

The exception to this rule is URLs starting with a dash.
In that case the URL is taken as is, fetched and the result stored under the name of the distfile. There are some predefined values for MASTER_SITES, which can be used in packages. The names of the variables should speak for themselves.

${MASTER_SITE_APACHE}
${MASTER_SITE_BACKUP}
${MASTER_SITE_CYGWIN}
${MASTER_SITE_DEBIAN}
${MASTER_SITE_FREEBSD}
${MASTER_SITE_FREEBSD_LOCAL}
${MASTER_SITE_GENTOO}
${MASTER_SITE_GNOME}
${MASTER_SITE_GNU}
${MASTER_SITE_GNUSTEP}
${MASTER_SITE_IFARCHIVE}
${MASTER_SITE_KDE}
${MASTER_SITE_MOZILLA}
${MASTER_SITE_MYSQL}
${MASTER_SITE_OPENOFFICE}
${MASTER_SITE_PERL_CPAN}
${MASTER_SITE_PGSQL}
${MASTER_SITE_R_CRAN}
${MASTER_SITE_SOURCEFORGE}
${MASTER_SITE_SOURCEFORGE_JP}
${MASTER_SITE_SUNSITE}
${MASTER_SITE_SUSE}
${MASTER_SITE_TEX_CTAN}
${MASTER_SITE_XCONTRIB}
${MASTER_SITE_XEMACS}

Some explanations for the less self-explanatory ones: MASTER_SITE_BACKUP contains backup sites for packages that are maintained in pub/pkgsrc/distfiles/${DIST_SUBDIR}. MASTER_SITE_LOCAL contains local package source distributions that are maintained in distfiles/LOCAL_PORTS/. If you choose one of these predefined sites, you may want to specify a subdirectory of that site. Since these macros may expand to more than one actual site, you must use the following construct to specify a subdirectory:

MASTER_SITES= ${MASTER_SITE_GNU:=subdirectory/name/}
MASTER_SITES= ${MASTER_SITE_SOURCEFORGE:=project_name/}

Note the trailing slash after the subdirectory name.
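As a small sketch, the fetch-related part of a hypothetical package that downloads its distfile from a SourceForge project subdirectory would combine these variables like this (the "foo" project name and version are invented for illustration):

```make
DISTNAME=       foo-1.2.3
CATEGORIES=     devel
MASTER_SITES=   ${MASTER_SITE_SOURCEFORGE:=foo/}
EXTRACT_SUFX=   .tar.bz2        # the distfile is foo-1.2.3.tar.bz2
```

With DISTFILES left at its default of ${DISTNAME}${EXTRACT_SUFX}, the fetch phase tries each expanded SourceForge mirror in turn until the distfile is retrieved.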
The distfiles mirror run by the NetBSD Foundation uses the mirror-distfiles target to mirror the distfiles, if they are freely distributable. Packages setting NO_SRC_ON_FTP (usually to "${RESTRICTED}") will not have their distfiles mirrored. 17. 17/extract/extract/extract. EXTRACT_USING This variable can be set to bsdtar, gtar, nbtar (which is the default value), pax, or an absolute pathname pointing to the command with which tar archives should be extracted. It is preferred to choose bsdtar over gtar if NetBSD's pax-as-tar is not good enough.. 17.8. The patch phase 11. 17.9. The tools phase This is covered in Chapter 18, Tools needed for building or running. 17.10. The wrapper phase This phase creates wrapper programs for the compilers and linkers. The following variables can be used to tweak the wrappers. ECHO_WRAPPER_MSG The command used to print progress messages. Does nothing by default. Set to ${ECHO} to see the progress messages. WRAPPER_DEBUG This variable can be set to yes (default) or no, depending on whether you want additional information in the wrapper log file. WRAPPER_UPDATE_CACHE This variable can be set to yes or no, depending on whether the wrapper should use its cache, which will improve the speed. The default value is yes, but is forced to no if the platform does not support it. WRAPPER_REORDER_CMDS A list of reordering commands. A reordering command has the form reorder:l: lib1:lib2. It ensures that that -llib1 occurs before -llib2. WRAPPER_TRANSFORM_CMDS A list of transformation commands. [TODO: investigate further] 17.) You can add variables to xmkmf's environment by adding them to the SCRIPTS_ENV variable. If the program uses cmake for configuration, the appropriate steps can be invoked by setting USE_CMAKE to "yes". You can add variables to cmake's environment by adding them to the CONFIGURE_ENV variable and arguments to cmake by adding them to the CMAKE_ARGS variable. 
The top directory argument is given by the CMAKE_ARG_PATH variable, which defaults to "." (relative to CONFIGURE_DIRS). If there is no configure step at all, set NO_CONFIGURE to "yes".

17.12. The build phase

For building a package, a rough equivalent of the following code is executed.

.for d in ${BUILD_DIRS}
	cd ${WRKSRC} \
	&& cd ${d} \
	&& env ${MAKE_ENV} \
		${MAKE_PROGRAM} ${BUILD_MAKE_FLAGS} \
		-f ${MAKE_FILE} \
		${BUILD_TARGET}
.endfor

BUILD_DIRS (default: ".") is a list of pathnames relative to WRKSRC. In each of these directories, MAKE_PROGRAM is run with the environment MAKE_ENV and arguments BUILD_MAKE_FLAGS. The variables MAKE_ENV, BUILD_MAKE_FLAGS, MAKE_FILE and BUILD_TARGET may all be changed by the package. The default value of MAKE_PROGRAM is "gmake" if USE_TOOLS contains "gmake", "make" otherwise. The default value of MAKE_FILE is "Makefile", and BUILD_TARGET defaults to "all". If there is no build step at all, set NO_BUILD to "yes".

17.13. The test phase

[TODO]

17.14. The install phase

For installing a package, a rough equivalent of the following code is executed.

.for d in ${INSTALL_DIRS}
	cd ${WRKSRC} \
	&& cd ${d} \
	&& env ${MAKE_ENV} \
		${MAKE_PROGRAM} ${INSTALL_MAKE_FLAGS} \
		-f ${MAKE_FILE} \
		${INSTALL_TARGET}
.endfor

The variables' meanings are analogous to the ones in the build phase. INSTALL_DIRS defaults to BUILD_DIRS. INSTALL_TARGET is "install" by default, plus "install.man" if USE_IMAKE is defined and NO_INSTALL_MANPAGES is not defined. The package is supposed to create all needed directories itself before installing files to them, and to list all other directories here. In the rare cases that a package shouldn't install anything, set NO_INSTALL to "yes". This is mostly relevant for packages in the regress category.

17.15. The package phase

Once the install stage has completed, a binary package of the installed files can be built. These binary packages can be used for quick installation without previous compilation, e.g. by the make bin-install target or by using pkg_add. By default, the binary packages are created in ${PACKAGES}/All and symlinks are created in ${PACKAGES}/category, one for each category in the CATEGORIES variable. PACKAGES defaults to pkgsrc/packages.

17.16.
Cleaning up

Once you're finished with a package, you can clean the work directory by running make clean. If you want to clean the work directories of all dependencies too, use make clean-depends.

17.17. Other helpful targets

pre/post-*
For any of the main targets described in the previous section, two auxiliary targets exist: pre-* and post-*, which are run before and after the main target, respectively.

This is the default value of DEPENDS_TARGET except in the case of make update and make package, where the defaults are "package" and "update", respectively.

bin-install
Install a binary package from local disk and via FTP from a list of sites (see the BINPKG_SITES variable), and do a make package if no binary package is available anywhere. The arguments given to pkg_add can be set via BIN_INSTALL_FLAGS, e.g. to do verbose operation.

The following variables can be used either on the command line or in mk.conf to alter the behaviour of make update:

UPDATE_TARGET
Install target to recursively use for the updated package and the dependent packages. Defaults to DEPENDS_TARGET if set, "install" otherwise for make update. Other good targets are "package" or "bin-install". Do not set this to "update" or you will get stuck in an endless loop!

replace
Update the installation of the current package. This differs from update in that it does not replace dependent packages. You will need to install pkgtools/pkg_tarup for this target to work. Be careful when using this target! There are no guarantees that dependent packages will still work; in particular, they will most certainly break if you make replace a library package whose shared library major version changed between your installed version and the new one. For this reason, this target is not officially supported and only recommended for advanced users.

info
This target invokes pkg_info(1) for the current package. You can use this to check which version of a package is installed.

index
This is a top-level command, i.e. it should be used in the pkgsrc directory.
It creates a database of all packages in the local pkgsrc tree, including dependencies, comment, maintainer, and some other useful information. Individual entries are created by running make describe in the packages' directories. This index file is saved as pkgsrc/INDEX. It can be displayed in verbose format by running make print-index. You can search in it with make search key=something. You can extract a list of all packages that depend on a particular one by running make show-deps PKG=somepackage. Running this command takes a very long time, some hours even on fast machines!

readme
This target generates a README.html file, which can be viewed using a browser such as www/firefox. The target can be run at the toplevel or in category directories, in which case it descends recursively.

readme-all
This is a top-level command, run it in pkgsrc.

print-PLIST
After a complete installation of a new or upgraded package, this target prints an attempt at a new PLIST, based on a "find -newer" run. If the package installs files via tar(1) or other methods that don't update file access times, be sure to add these files manually to your PLIST, as the "find -newer" command used by this target won't catch them! See Chapter 13 for more information.

Chapter 18. Tools needed for building or running

Table of Contents
18.1. Tools for pkgsrc builds
18.2. Tools needed by packages
18.3. Tools provided by platforms
18.4. Questions regarding the tools

18.1. Tools for pkgsrc builds
18.2. Tools needed by packages
18.3. Tools provided by platforms

18.4. Questions regarding the tools

18.4.1. How do I add a new tool?
18.4.2. How do I get a list of all available tools?
18.4.3. How can I get a list of all the tools that a package is using while being built? I want to know whether it uses sed or not.

18.4.1. How do I add a new tool?

TODO

18.4.2. How do I get a list of all available tools?

TODO

18.4.3. How can I get a list of all the tools that a package is using while being built? I want to know whether it uses sed or not.

Currently, you can't. (TODO: But I want to be able to do it.)

Chapter 19. Making your package work

Table of Contents
19.1. General operation
19.1.1.
Portability of packages

One appealing feature of pkgsrc is that it runs on many different platforms. As a result, it is important to ensure, where possible, that packages in pkgsrc are portable. This chapter mentions some particular details you should pay attention to while working on pkgsrc.

19.1.2. How to pull in user-settable variables from mk.conf

The pkgsrc user can configure pkgsrc by overriding several variables in the file pointed to by MAKECONF, which is mk.conf by default. When you want to use those variables in the preprocessor directives of make(1) (for example .if or .for), you need to include the file ../../mk/bsd.prefs.mk before that, which in turn loads the user preferences.

But note that some variables may not be completely defined after ../../mk/bsd.prefs.mk has been included, as they may contain references to variables that are not yet defined. In shell commands this is no problem, since variables are actually macros, which are only expanded when they are used. But in the preprocessor directives mentioned above and in dependency lines (of the form target: dependencies) the variables are expanded at load time.

Note: Currently there is no exhaustive list of all variables that tells you whether they can be used at load time or only at run time, but it is in preparation.

19.1.3. User interaction

Occasionally, packages require interaction from the user, and this can happen in a number of ways:

* When fetching the distfiles, some packages require user interaction such as entering a username/password or accepting a license on a web page.
* When extracting the distfiles, some packages may ask for passwords.

The user can then decide to skip such a package by setting the BATCH variable.

19.1.4. Handling licenses

Authors of software can choose the license under which their software can be copied. This is due to copyright law, and the reasons for license choices are outside the scope of pkgsrc.
The pkgsrc system recognizes that there are a number of licenses which some users may find objectionable or difficult or impossible to comply with. The Free Software Foundation has declared some licenses "Free", and the Open Source Initiative has a definition of "Open Source". The pkgsrc system, as a policy choice, does not label packages which have licenses that are Free or Open Source. However, packages without a license meeting either of those tests are labeled with a license tag denoting the license. Note that a package with no license to copy trivially fails to meet either the Free or Open Source test.

For packages which are not Free or Open Source, pkgsrc will not build the package unless the user has indicated to pkgsrc that packages with that particular license may be built. Note that this documentation avoids the term "accepted the license". The pkgsrc system is merely providing a mechanism to avoid accidentally building a package with a non-free license; judgement and responsibility remain with the user. (Installation of binary packages is not currently subject to this mechanism; this is a bug.)

One might want to install only packages with a BSD license, or only those under the GPL, and not others. The free licenses are added to the default ACCEPTABLE_LICENSES variable. The user can override the default by setting the ACCEPTABLE_LICENSES variable with "=" instead of "+=". The licenses accepted by default are:

public-domain
gnu-gpl-v2
gnu-lgpl-v2
gnu-gpl-v3
gnu-lgpl-v3
original-bsd
modified-bsd
x11
apache-2.0
cddl-1.0
open-font-license

The license tag mechanism is intended to address copyright-related issues surrounding building, installing and using a package, and not to address redistribution issues (see RESTRICTED and NO_SRC_ON_FTP, etc.). Packages with redistribution restrictions should set these tags.
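On the package side, declaring such a license is a single line in the package Makefile; e.g. using the xv-license tag from pkgsrc/licenses:

```make
# Package Makefile fragment: tag a non-free license so that pkgsrc
# refuses to build unless the user accepts it.
LICENSE=	xv-license
```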
Denoting that a package may be copied according to a particular license is done by setting the LICENSE variable to the name of that license. A user attempting to build the package is then shown an error naming the license, and can build the package anyway by adding the license to the ACCEPTABLE_LICENSES variable:

% make ACCEPTABLE_LICENSES+=xv-license

If the user so chooses, the corresponding line can be added to mk.conf to convey to pkgsrc that it should not in the future fail because of that license:

ACCEPTABLE_LICENSES+=xv-license

When adding a package with a new license, the license text should be added to pkgsrc/licenses for displaying. A list of known licenses can be seen in this directory.

When the license changes (in a way other than formatting), please make sure that the new license has a different name (e.g., append the version number if it exists, or the date). Just because a user told pkgsrc to build programs under a previous version of a license does not mean that pkgsrc should build programs under the new license. The higher-level point is that pkgsrc does not evaluate licenses for reasonableness; the only test is a mechanistic test of whether a particular text has been approved by either of two bodies.

The use of LICENSE=shareware, LICENSE=no-commercial-use, and similar language is deprecated because it does not crisply refer to a particular license text. Another problem with such usage is that it does not enable a user to tell pkgsrc to proceed for a single package without also telling pkgsrc to proceed for all packages with that tag.

19.1.5. Restricted packages

Some licenses restrict how software may be re-distributed. Because a license tag is required unless the package is Free or Open Source, all packages with restrictions should have license tags. By declaring the restrictions, package tools can automatically refrain from e.g. placing binary packages on FTP sites.

There are four restrictions that may be encoded, which are the cross product of sources (distfiles) and binaries not being placed on FTP sites and CD-ROMs. Because this is rarely the exact language in any license, and because non-Free licenses tend to be different from each other, pkgsrc adopts a definition of FTP and CD-ROM.
Pkgsrc uses "FTP" to mean that the source or binary file is made available over the Internet at no charge. Pkgsrc uses "CD-ROM" to mean that the source or binary is made available on some kind of media, together with other source and binary packages, which is sold for a distribution charge.

In order to encode these restrictions, the package system defines five make variables that can be set to note them:

* RESTRICTED
This variable should be set whenever a restriction exists (regardless of its kind). Set this variable to a string containing the reason for the restriction. It should be understood that those wanting to understand the restriction will have to read the license, and perhaps seek advice of counsel.

* NO_BIN_ON_CDROM
Binaries may not be placed on a CD-ROM containing other binary packages, for which a distribution charge may be made. In this case, set this variable to ${RESTRICTED}.

* NO_BIN_ON_FTP
Binaries may not be made available on the Internet without charge. In this case, set this variable to ${RESTRICTED}. If this variable is set, binary packages will not be included on the FTP server.

* NO_SRC_ON_CDROM
Distfiles may not be placed on CD-ROM, together with other distfiles, for which a fee may be charged. In this case, set this variable to ${RESTRICTED}.

* NO_SRC_ON_FTP
Distfiles may not be made available via FTP at no charge. In this case, set this variable to ${RESTRICTED}. If this variable is set, the distfile(s) will not be mirrored on the FTP server.

19.1.6. Handling dependencies

If your package needs binaries from another package to build, use the BUILD_DEPENDS definition:

BUILD_DEPENDS+= scons-[0-9]*:../../devel/scons

3. If your package needs a library with which to link and there is no buildlink3.mk file available, create one. Using DEPENDS won't be sufficient because the include files and libraries will be hidden from the compiler.

5.
You can use wildcards in package dependencies:

DEPENDS+= ImageMagick>=6.0:../../graphics/ImageMagick

This means that the package will build using version 6.0 of ImageMagick or newer. Such a dependency may be warranted if, for example, the command line options of an executable have changed. If you need to depend on minimum versions of libraries, see the buildlink section of the pkgsrc guide.

For security fixes, please update the package vulnerabilities file. See Section 19.1.10, "Handling packages with security problems" for more information.

If your package needs files from another package to build, add the relevant distribution files to DISTFILES, so they will be extracted automatically. See the print/ghostscript package for an example. (It relies on the jpeg sources being present in source form during the build.)

19.1.7. Handling conflicts with other packages

Your package may conflict with other packages a user might already have installed on their system, e.g. if your package installs the same set of files as another package in the pkgsrc tree. In this case, set the CONFLICTS variable to the list of conflicting packages.

19.1.8. Packages that cannot or should not be built

Some packages are tightly bound to a specific version of an operating system, e.g. LKMs or sysutils/lsof. Such binary packages are not backwards compatible with other versions of the OS, and should be uploaded to a version specific directory on the FTP server. Mark these packages by setting OSVERSION_SPECIFIC to "yes". This variable is not currently used by any of the package system internals, but may be used in the future.

If the package should be skipped (for example, because it provides functionality already provided by the system), set PKG_SKIP_REASON to a descriptive message. If the package should fail because some preconditions are not met, set PKG_FAIL_REASON to a descriptive message.

19.1.10. Handling packages with security problems

Also, if the fix should be applied to the stable pkgsrc branch, be sure to submit a pullup request! Binary packages already on the FTP server will be handled semi-automatically by a weekly cron job.

19.1.11. How to handle incrementing versions when fixing an existing package

When fixing a package, the PKGREVISION variable is appended as an "nbX" suffix to the version number visible to the package tools, e.g.
DISTNAME=	foo-17.42
PKGREVISION=	9

will result in a PKGNAME of "foo-17.42nb9". If you want to use the original value of PKGNAME without the "nbX" suffix, e.g. for setting DIST_SUBDIR, use PKGNAME_NOREV.

When a new release of the package is released, the PKGREVISION should be removed, e.g. on a new minor release of the above package, things should look like:

DISTNAME=	foo-17.43

PKGREVISION should be incremented for any non-trivial change in the resulting binary package. Without a PKGREVISION bump, someone with the previous version installed has no way of knowing that their package is out of date. Thus, changes without increasing PKGREVISION are essentially labeled "this is so trivial that no reasonable person would want to upgrade", and this is the rough test for when increasing PKGREVISION is appropriate.

Examples of changes that do not merit increasing PKGREVISION are:

* Changing HOMEPAGE, MAINTAINER, OWNER, or comments in Makefile.
* Changing build variables if the resulting binary package is the same.
* Changing DESCR.
* Adding PKG_OPTIONS if the default options don't change.

Examples of changes that do merit an increase to PKGREVISION include:

* Security fixes
* Changes or additions to a patch file
* Changes to the PLIST
* A dependency is changed or renamed.

PKGREVISION must also be incremented when dependencies have ABI changes.

19.1.12. Substituting variable text in the package files (the SUBST framework)

When a package needs to replace the same text in several files, the SUBST framework provides a convenient way to do so, e.g.:

SUBST_CLASSES+=		fix-paths
SUBST_STAGE.fix-paths=	pre-configure
SUBST_FILES.fix-paths+=	scripts/*.sh
SUBST_SED.fix-paths=	-e 's,"/usr/local,"${PREFIX},g'
SUBST_SED.fix-paths+=	-e 's,"/var/log,"${VARBASE}/log,g'

SUBST_CLASSES is a list of identifiers that are used to identify the different SUBST blocks that are defined. The SUBST framework is heavily used by pkgsrc, so it is important to always use the += operator with this variable. Otherwise some substitutions may be skipped. The remaining variables of each SUBST block are parameterized with the identifier from the first line (fix-paths in this case).
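To see concretely what the two SUBST_SED expressions do, here is a stand-alone shell demonstration (outside pkgsrc; the input lines and the values chosen for PREFIX and VARBASE are made up for illustration — inside pkgsrc the framework supplies them):

```shell
# Demonstrate the effect of the two SUBST_SED.fix-paths expressions
# on sample script lines, with example values for PREFIX and VARBASE.
PREFIX=/usr/pkg
VARBASE=/usr/pkg/var

printf '%s\n' 'CONF="/usr/local/etc/foo.conf"' 'LOG="/var/log/foo.log"' |
  sed -e 's,"/usr/local,"'"$PREFIX"',g' \
      -e 's,"/var/log,"'"$VARBASE"'/log,g'
# prints:
#   CONF="/usr/pkg/etc/foo.conf"
#   LOG="/usr/pkg/var/log/foo.log"
```

Note that the leading double quote is part of the pattern, so only quoted absolute paths are rewritten.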
They can be seen as parameters to a function call. For the full documentation of the SUBST framework, see the mk/subst.mk file.

19.2. Fixing problems in the fetch phase

19.2.1. Packages whose distfiles aren't available for plain downloading

If the distfiles have to be downloaded manually, set FETCH_MESSAGE to a list of lines that are displayed to the user before aborting the build. Example:

FETCH_MESSAGE=	"Please download the files"
FETCH_MESSAGE+=	"    "${DISTFILES:Q}
FETCH_MESSAGE+=	"manually from "${MASTER_SITES:Q}"."

19.2.2. How to handle modified distfiles with the 'old' name

Please mention that the distfiles were compared and what was found in your commit message. Then, the correct way to work around this is to set DIST_SUBDIR to a unique directory name, usually based on PKGNAME_NOREV. All DISTFILES and PATCHFILES for this package will be put in that subdirectory of the local distfiles directory. (See Section 19.1.11, "How to handle incrementing versions when fixing an existing package" for more details.) Also increase the PKGREVISION if the installed package is different. Furthermore, a mail to the package's authors seems appropriate telling them that changing distfiles after releases without changing the file names is not good practice.

19.3. Fixing problems in the configure phase

19.3.1. Shared libraries - libtool

So, libtool library versions are described by three integers:

CURRENT The most recent interface number that this library implements.
REVISION The implementation number of the CURRENT interface.
AGE The difference between the newest and oldest interface numbers that this library implements.

6. When installing libraries, use libtool's install mode, e.g.:

${LIBTOOL} --mode=install ${INSTALL_LIB} ${SOMELIB:.a=.la} ${PREFIX}/lib

This will install the static .a, shared library, any needed symlinks, and run ldconfig(8).

7. In your PLIST, include only the .la file (this is a change from previous behaviour).

19.4. Programming languages

19.4.1. C, C++, and Fortran

Compilers for the C, C++, and Fortran languages come with the NetBSD base system. By default, pkgsrc assumes that a package is written in C and will hide all other compilers (via the wrapper framework, see Chapter 14, Buildlink methodology). To declare which language's compiler a package needs, set the USE_LANGUAGES variable. Allowed values currently are "c", "c++", and "fortran" (and any combination). The default is "c".
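For example, a package containing both C and C++ sources would declare (a minimal sketch):

```make
# Package Makefile fragment: the package contains C and C++ sources,
# so pkgsrc must expose both compilers through the wrapper framework.
USE_LANGUAGES=	c c++
```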
Packages using GNU configure scripts, even if written in C++, usually need a C compiler for the configure phase. 19.4.2. Java If a program is written in Java, use the Java framework in pkgsrc. The package must include ../../mk/java-vm.mk. This Makefile fragment provides the following variables: * USE_JAVA defines if a build dependency on the JDK is added. If USE_JAVA is set to "run", then there is only a runtime dependency on the JDK. The default is "yes", which also adds a build dependency on the JDK. * Set USE_JAVA2 to declare that a package needs a Java2 implementation. The supported values are "yes", "1.4", and "1.5". "yes" accepts any Java2 implementation, "1.4" insists on versions 1.4 or above, and "1.5" only accepts versions 1.5 or above. This variable is not set by default. * PKG_JAVA_HOME is automatically set to the runtime location of the used Java implementation dependency. It may be used to set JAVA_HOME to a good value if the program needs this variable to be defined. 19.4.3. Packages containing perl scripts If your package contains interpreted perl scripts, add "perl" to the USE_TOOLS variable and set REPLACE_PERL to ensure that the proper interpreter path is set. REPLACE_PERL should contain a list of scripts, relative to WRKSRC, that you want adjusted. Every occurrence of */bin/perl will be replaced with the full path to the perl executable. If a particular version of perl is needed, set the PERL5_REQD variable to the version number. The default is "5.0". See Section 19.6.6, "Packages installing perl modules" for information about handling perl modules. 19.4.4. Other programming languages Currently, there is no special handling for other languages in pkgsrc. If a compiler package provides a buildlink3.mk file, include that, otherwise just add a (build) dependency on the appropriate compiler package. 19.5. 
Fixing problems in the build phase

The most common failures when building a package are that some platforms do not provide certain header files, functions or libraries, or they provide the functions in a library that the original package author didn't know about. To work around this, you can rewrite the source code in most cases so that it does not use the missing functions, or provide a replacement function.

19.5.1. Compiling C and C++ code conditionally

If a package already comes with a GNU configure script, the preferred way to fix the build failure is to change the configure script, not the code. In the other cases, you can utilize the C preprocessor, which defines certain macros depending on the operating system and hardware architecture it compiles for. These macros can be queried using for example #if defined(__i386). Almost every operating system, hardware architecture and compiler has its own macro. For example, if the macros __GNUC__, __i386__ and __NetBSD__ are all defined, you know that you are using NetBSD on an i386 compatible CPU, and your compiler is GCC.

Which of the following macros for hardware and operating system are defined depends on the compiler that is used. For example, if you want to conditionally compile code on Solaris, don't use __sun__, as the SunPro compiler does not define it. Use __sun instead.

19.5.1.1. C preprocessor macros to identify the operating system

To distinguish between 4.4 BSD-derived systems and the rest of the world, you should use the following code.

#include <sys/param.h>
#if (defined(BSD) && BSD >= 199306)
/* BSD-specific code goes here */
#else
/* non-BSD-specific code goes here */
#endif

If this distinction is not fine enough, you can also test for the individual operating system macros, such as __NetBSD__.

19.5.1.2. C preprocessor macros to identify the hardware architecture

i386: i386, __i386, __i386__
MIPS: __mips
SPARC: sparc, __sparc

19.5.1.3.
C preprocessor macros to identify the compiler

GCC: __GNUC__ (major version), __GNUC_MINOR__
MIPSpro: _COMPILER_VERSION (0x741 for MIPSpro 7.41)
SunPro: __SUNPRO_C (0x570 for Sun C 5.7)
SunPro C++: __SUNPRO_CC (0x580 for Sun C++ 5.8)

19.5.2. How to handle compiler bugs

Some code triggers bugs in certain compilers. A common workaround is to disable optimization for the particular combination of file, MACHINE_ARCH and compiler, and to document it in pkgsrc/doc/HACKS. See that file for a number of examples.

19.5.3. Undefined reference to "..."

This error message often means that a package did not link to a shared library it needs. The following functions are known to cause this error message over and over.

+---------------------------------------------------------+
| Function                  | Library  | Affected platforms|
|---------------------------+----------+-------------------|
| accept, bind, connect     | -lsocket | Solaris           |
| crypt                     | -lcrypt  | DragonFly, NetBSD |
| dlopen, dlsym             | -ldl     | Linux             |
| gethost*                  | -lnsl    | Solaris           |
| inet_aton                 | -lresolv | Solaris           |
| nanosleep, sem_*, timer_* | -lrt     | Solaris           |
| openpty                   | -lutil   | Linux             |
+---------------------------------------------------------+

To fix these linker errors, it is often sufficient to add LIBS.OperatingSystem+= -lfoo to the package Makefile and then run bmake clean; bmake.

19.5.3.1. Special issue: The SunPro compiler

When you are using the SunPro compiler, there is another possibility. That compiler cannot handle the following code:

extern int extern_func(int);

static inline int
inline_func(int x)
{
	return extern_func(x);
}

int main(void)
{
	return 0;
}

It generates the code for inline_func even if that function is never used. This code then refers to extern_func, which usually cannot be resolved.
To solve this problem you can try to tell the package to disable inlining of functions.

19.5.4. Running out of memory

Sometimes packages fail to build because the compiler runs into an operating system specific soft limit. With the UNLIMIT_RESOURCES variable, pkgsrc can be told to unlimit the resources. Currently, the allowed values are "datasize" and "stacksize" (or both). Setting this variable is similar to running the shell builtin ulimit command to raise the maximum data segment size or maximum stack size of a process, respectively, to their hard limits.

19.6. Fixing problems in the install phase

19.6.1. Creating needed directories

The BSD-compatible install supplied with some operating systems cannot create more than one directory at a time. As such, you should call ${INSTALL_*_DIR} like this:

${INSTALL_DATA_DIR} ${PREFIX}/dir1
${INSTALL_DATA_DIR} ${PREFIX}/dir2

You can also just append "dir1 dir2" to the INSTALLATION_DIRS variable, which will automatically do the right thing.

19.6.2. Where to install documentation

In general, documentation should be installed into ${PREFIX}/share/doc/${PKGBASE} or ${PREFIX}/share/doc/${PKGNAME} (the latter includes the version number of the package).

Many modern packages using GNU autoconf allow setting the directory where HTML documentation is installed with the "--with-html-dir" option. Sometimes using this flag is needed because otherwise the documentation ends up in ${PREFIX}/share/doc/html or other places.

An exception to the above is that library API documentation generated with the textproc/gtk-doc tools, for use by special browsers (devhelp), should be left at its default location, which is ${PREFIX}/share/gtk-doc. Such documentation can be recognized from files ending in .devhelp or .devhelp2.
(It is also acceptable to install such files in ${PREFIX}/share/doc/${PKGBASE} or ${PREFIX}/share/doc/${PKGNAME}; the .devhelp* file must be directly in that directory then, no additional subdirectory level is allowed in this case. This is usually achieved by using "--with-html-dir=${PREFIX}/share/doc". ${PREFIX}/share/gtk-doc is preferred though.)

19.6.3. Installing highscore games

When installing highscore games, never hard code file ownership or access permissions; rely on INSTALL_GAME and INSTALL_GAME_DATA to set these correctly.

19.6.4. Adding DESTDIR support to packages

* PKG_DESTDIR_SUPPORT has to be set to "destdir" or "user-destdir". If bsd.prefs.mk is included in the Makefile, PKG_DESTDIR_SUPPORT needs to be set before the inclusion.
* All installation operations have to be prefixed with ${DESTDIR}.
* automake gets this DESTDIR mostly right automatically. Many manual rules and pre/post-install targets often are incorrect; fix them.
* If files are installed with special owner/group, use SPECIAL_PERMS.
* In general, packages should support UNPRIVILEGED to be able to use DESTDIR.

19.6.7. Packages installing info files

Some packages install info files or use the "makeinfo" or "install-info" commands. INFO_FILES should be defined in the package Makefile so that INSTALL and DEINSTALL scripts will be generated to handle registration of the info files in the Info directory file. The "install-info" command used for the info files registration is either provided by the system, or by a special purpose package automatically added as a dependency if needed.

PKGINFODIR is the directory under ${PREFIX} where info files are primarily located. PKGINFODIR defaults to "info" and can be overridden by the user.

The info files for the package should be listed in the package PLIST; however any split info files need not be listed.

A package which needs the "makeinfo" command at build time must add "makeinfo" to USE_TOOLS. If a minimum makeinfo version is required, set TEXINFO_REQD accordingly; the makeinfo tool wrapper then either runs an appropriate makeinfo command or exits with an error.
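A sketch of the corresponding Makefile lines (the minimum version number is an invented example value; what matters for INFO_FILES is merely that the variable is defined):

```make
# Package Makefile fragment: the package installs info files.
INFO_FILES=	yes
# makeinfo is needed at build time:
USE_TOOLS+=	makeinfo
# Minimum makeinfo version (example value):
TEXINFO_REQD=	4.0
```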
19.6.8. Packages installing man pages

All packages that install manual pages should install them into the same directory, so that there is one common place to look for them. In pkgsrc, this place is ${PREFIX}/${PKGMANDIR}, and this expression should be used in packages. The default for PKGMANDIR is "man". Another often-used value is "share/man".

Note: The support for a custom PKGMANDIR is far from complete. The PLIST files can just use man/ as the top level directory for the man page file entries, and the pkgsrc framework will convert as needed. In all other places, the correct PKGMANDIR must be used.

See Section 13.5, "Man page compression" for information on the installation of compressed manual pages.

19.6.9. Packages installing GConf data files

If a package installs .schemas or .entries files, used by GConf, you need to take some extra steps to make sure they get registered in the database:

1. Include ../../devel/GConf/schemas.mk instead of its buildlink3.mk file. This takes care of rebuilding the GConf database at installation and deinstallation time, and tells the package where to install GConf data files. (See Section 9.13, "How do I change the location of configuration files?" for more information.)
4. Define the GCONF_SCHEMAS variable in your Makefile with a list of all .schemas files installed by the package, if any. Names must not contain any directories in them.
5. Define the GCONF_ENTRIES variable in your Makefile with a list of all .entries files installed by the package, if any. Names must not contain any directories in them.

19.6.10. Packages installing scrollkeeper/rarian data files

If a package installs .omf files, used by scrollkeeper/rarian, you need to take some extra steps to make sure they get registered in the database:

1. Include ../../mk/omf-scrollkeeper.mk instead of rarian's buildlink3.mk file. (make print-PLIST does this automatically.)

19.6.15.
Packages using intltool

If a package uses intltool during its build, add intltool to USE_TOOLS, which forces the package to use the intltool package provided by pkgsrc, instead of the one bundled with the distribution file. This tracks intltool's build-time dependencies and uses the latest available version; this way, the package benefits from any bug fixes that may have appeared since it was released.

19.6.16. Packages installing startup scripts

If a package contains an rc.d script, it won't be copied into the startup directory by default, but you can enable this by adding the option PKG_RCD_SCRIPTS=YES in mk.conf. This option will copy the scripts into /etc/rc.d when a package is installed, and it will automatically remove the scripts when the package is deinstalled.

19.6.17. Packages installing TeX modules

If a package installs TeX packages into the texmf tree, the ls-R database of the tree needs to be updated.

Note: Except for the main TeX packages such as kpathsea, packages should install files into ${PREFIX}/share/texmf-dist, not ${PREFIX}/share/texmf.

1. Include ../../print/kpathsea/texmf.mk. This takes care of rebuilding the ls-R database at installation and deinstallation time.
2. If your package installs files into a texmf tree other than the one at ${PREFIX}/share/texmf-dist, set TEX_TEXMF_DIRS to the list of all texmf trees that need database updates.

If your package also installs font map files that need to be registered using updmap, include ../../print/texlive-tetex/map.mk and set TEX_MAP_FILES and/or TEX_MIXEDMAP_FILES.

19.6.18. Packages supporting running binaries in emulation

There are some packages that provide libraries and executables for running binaries from one operating system on a different one (if the latter supports it). One example is running Linux binaries on NetBSD. The pkgtools/rpm2pkg package helps in extracting and packaging Linux rpm packages.
The CHECK_SHLIBS variable can be set to no to avoid the check-shlibs target, which tests whether all libraries used by each installed executable can be found by the dynamic linker. Since the standard dynamic linker is run, this check fails for emulation packages, because the libraries used by the emulation are not in the standard directories.

19.6.19. Packages installing hicolor theme icons

If a package installs images under the share/icons/hicolor hierarchy and/or updates the share/icons/hicolor/icon-theme.cache database, you need to take some extra steps to make sure that the shared theme directory is handled appropriately and that the cache database is rebuilt:

1. Include ../../graphics/hicolor-icon-theme/buildlink3.mk.
2. Check the PLIST and remove the entry that refers to the theme cache.
3. Ensure that the PLIST does not remove the shared icon directories from the share/icons/hicolor hierarchy, because they will be handled automatically.

The best way to verify that the PLIST is correct with respect to the last two points is to regenerate it using make print-PLIST.

19.6.20. Packages installing desktop files

If a package installs .desktop files under share/applications and these include MIME information, you need to take extra steps to ensure that they are registered into the MIME database:

1. Include ../../sysutils/desktop-file-utils/desktopdb.mk.
2. Check the PLIST and remove the entry that refers to the share/applications/mimeinfo.cache file. It will be handled automatically.

The best way to verify that the PLIST is correct with respect to the last point is to regenerate it using make print-PLIST.

19.7. Marking packages as having problems

In some cases one does not have the time to solve a problem immediately. In this case, one can plainly mark a package as broken. For this, one just sets the variable BROKEN to the reason why the package is broken (similar to the RESTRICTED variable).
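For example, a package whose build broke after a hypothetical library update could be marked in its Makefile like this (the reason string is invented for illustration):

```make
# Hypothetical example; the value is free-form text explaining the breakage.
BROKEN=	fails to build after the update to libfoo-2.0
```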
A user trying to build the package will immediately be shown this message, and the build will not even be tried. BROKEN packages are removed from pkgsrc at irregular intervals.

Chapter 20. Debugging

=yes in this step as non-root user will ensure that no files are modified that shouldn't be, especially during the build phase. mkpatches, patchdiff and pkgvi are from the pkgtools/pkgdiff package.

* Look at the Makefile, fix if necessary; see Section 11.
* Repeat the above make print-PLIST command, which shouldn't find anything now:

  # make print-PLIST

* Reinstall the binary package:

  # pkg_add .../examplepkg.tgz

* Play with it. Make sure everything works.
* Run pkglint from pkgtools/pkglint, and fix the problems it reports:

  # pkglint

* Submit (or commit, if you have cvs access); see Chapter 21, Submitting and Committing.

Chapter 21. Submitting and Committing

Table of Contents

21.1. Submitting binary packages

See Section 7.3.8, "Uploading results of a bulk build".

21.2. Submitting source packages (for non-NetBSD-developers)

First, check that your package is complete, compiles and runs well; see Chapter 20, Debugging and the rest of this document. Next, generate a uuencoded gzipped tar(1) archive that contains all files that make up the package. Finally, send this package to the pkgsrc bug tracking system, either with the send-pr(1) command, or if you don't have that, go to the web page, which contains some instructions and a link to a form where you can submit packages. The sysutils/gtk-send-pr package is also available as a substitute for either of the above two tools. If you want to submit several packages, please send a separate PR for each one; it's easier for us to track things that way.

Alternatively, you can also import new packages into pkgsrc-wip ("pkgsrc work-in-progress"); see its homepage for details.

21.3.
General notes when adding, updating, or removing packages

Please note all package additions, updates, moves, and removals in pkgsrc/doc/CHANGES-YYYY. Mention a change in CHANGES-YYYY if it is security related or otherwise relevant. Mass bumps that result from a dependency being updated should not be mentioned. In all other cases it's the developer's decision.

There is a make target that helps in creating proper CHANGES-YYYY entries: make changes-entry. It uses the optional CTYPE and NETBSD_LOGIN_NAME variables. The general usage is to first make sure that your CHANGES-YYYY file is up to date, then to run make changes-entry. Set NETBSD_LOGIN_NAME in mk.conf if your local login name is not the same as your NetBSD login name. The target also automatically removes possibly existing entries for the package in the TODO file. Don't forget to commit the changes, e.g. by using make changes-entry-commit!

If you are not using a checkout directly from cvs.NetBSD.org, but e.g. a local copy of the repository, you can set USE_NETBSD_REPO=yes. This makes the cvs commands use the main repository.

21.6. Renaming a package in pkgsrc

Renaming packages is not recommended. When renaming packages, be sure to fix any references to the old name in other Makefiles, options, buildlink files, etc. Also, when renaming a package, please define SUPERSEDES to the package name and dewey version pattern(s) of the previous package name. This may be repeated for multiple renames. The new package would be an exact replacement.

Note that "successor" in the CHANGES-YYYY file doesn't necessarily mean that it supersedes, as that successor may not be an exact replacement but is a suggestion for the replaced functionality.

21.7. Moving a package in pkgsrc

It is preferred that packages are not renamed or moved, but if needed please follow these steps. In the modified package's Makefile, consider setting PREV_PKGPATH to the previous category/package pathname.
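As a sketch, with invented package names, the two cases described above could look like this in the new package's Makefile:

```make
# Renaming: textproc/foo became textproc/foo2; match every old version.
SUPERSEDES+=	foo-[0-9]*

# Moving: print/bar became textproc/bar (PKGBASE stays the same).
PREV_PKGPATH=	print/bar
```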
The PREV_PKGPATH can be used by tools for doing an update using pkgsrc building; for example, a tool can search the pkg_summary(5) database for PREV_PKGPATH (if there is no SUPERSEDES) and then use the corresponding new PKGPATH for that moved package. Note that it may have multiple matches, so the tool should also check the PKGBASE. The PREV_PKGPATH probably has no value unless SUPERSEDES is not set, i.e. PKGBASE stays the same.

5. cvs import the modified package in the new place.
6. Check if any package depends on it:

   % cd /usr/pkgsrc
   % grep /package */*/Makefile* */*/buildlink*

7. Fix paths in packages from step 6 to point to the new location.
8. cvs rm (-f) the package at the old location.
9. Remove it from oldcategory/Makefile.
10. Add it to newcategory/Makefile.
11. Commit the changed and removed files:

    % cvs commit oldcategory/package oldcategory/Makefile newcategory/Makefile

    (and any packages from step 7, of course).

Chapter 22. Frequently Asked Questions

This section contains the answers to questions that may arise when you are writing a package. If you don't find your question answered here, first have a look in the other chapters, and if you still don't have the answer, ask on the pkgsrc-users mailing list.

22.1. What is the difference between MAKEFLAGS, .MAKEFLAGS and MAKE_FLAGS?
22.2. What is the difference between MAKE, GMAKE and MAKE_PROGRAM?
22.3. What is the difference between CC, PKG_CC and PKGSRC_COMPILER?
22.4. What is the difference between BUILDLINK_LDFLAGS, BUILDLINK_LDADD and BUILDLINK_LIBS?
22.5. Why does make show-var VARNAME=BUILDLINK_PREFIX.foo say it's empty?
22.6. What does ${MASTER_SITE_SOURCEFORGE:=package/} mean? I don't understand the := inside it.
22.7. Which mailing lists are there for package developers?
22.8. Where is the pkgsrc documentation?
22.9. I have a little time to kill. What shall I do?

22.1. What is the difference between MAKEFLAGS, .MAKEFLAGS and MAKE_FLAGS?
MAKEFLAGS are the flags passed to the pkgsrc-internal invocations of make(1), while MAKE_FLAGS are the flags that are passed to the MAKE_PROGRAM when building the package. [FIXME: What is .MAKEFLAGS for?]

22.2. What is the difference between MAKE, GMAKE and MAKE_PROGRAM?

MAKE is the path to the make(1) program that is used in the pkgsrc infrastructure. GMAKE is the path to GNU Make, but you need to say USE_TOOLS+=gmake to use that. MAKE_PROGRAM is the path to the Make program that is used for building the package.

22.3. What is the difference between CC, PKG_CC and PKGSRC_COMPILER?

CC is the path to the real C compiler, which can be configured by the pkgsrc user. PKG_CC is the path to the compiler wrapper. PKGSRC_COMPILER is not a path to a compiler, but the type of compiler that should be used. See mk/compiler.mk for more information about the latter variable.

22.4. What is the difference between BUILDLINK_LDFLAGS, BUILDLINK_LDADD and BUILDLINK_LIBS?

[FIXME]

22.5. Why does make show-var VARNAME=BUILDLINK_PREFIX.foo say it's empty?

For optimization reasons, some variables are only available in the "wrapper" phase and later. To "simulate" the wrapper phase, append PKG_PHASE=wrapper to the above command.

22.6. What does ${MASTER_SITE_SOURCEFORGE:=package/} mean? I don't understand the := inside it.

The := is not really an assignment operator, like you might expect at first sight. Instead, it is a degenerate form of ${LIST:old_string=new_string}, which is documented in the make(1) man page and which you may have seen as in ${SRCS:.c=.o}. In the case of MASTER_SITE_*, old_string is the empty string and new_string is package/. That's where the : and the = fall together.

22.7. Which mailing lists are there for package developers?

tech-pkg: when an infrastructure bug is found, etc.

pkgsrc-bugs: All bug reports in category "pkg" sent with send-pr(1) appear here. Please do not report your bugs here directly; use one of the other mailing lists.

22.8.
Where is the pkgsrc documentation?

There are many places where you can find documentation about pkgsrc:

* The pkgsrc guide (this document) is a collection of chapters that explain large parts of pkgsrc, but some chapters tend to be outdated. Which ones they are is hard to say.
* In the mailing list archives, you can find discussions about certain features, announcements of new parts of the pkgsrc infrastructure and sometimes even announcements that a certain feature has been marked as obsolete. The benefit here is that each message has a date appended to it.
* Many of the files in the mk/ directory start with a comment that describes the purpose of the file and how it can be used by the pkgsrc user and package authors. An easy way to find this documentation is to run bmake help.
* The CVS log messages are a rich source of information, but they tend to be highly abbreviated, especially for actions that occur often. Some contain a detailed description of what has changed, but they are geared towards the other pkgsrc developers, not towards an average pkgsrc user. They also only document changes, so if you don't know what was there before, these messages may not be worth too much to you.
* Some parts of pkgsrc are only "implicitly documented", that is, the documentation exists only in the mind of the developer who wrote the code. To get this information, use the cvs annotate command to see who has written it and ask on the tech-pkg mailing list, so that others can find your questions later (see above). To be sure that the developer in charge reads the mail, you may CC him or her.

22.9. I have a little time to kill. What shall I do?

This is not really an FAQ yet, but here's the answer anyway.

* Run pkg_chk -N (from the pkgtools/pkg_chk package). It will tell you about newer versions of installed packages that are available, but not yet updated in pkgsrc.
* Browse pkgsrc/doc/TODO;
it contains a list of suggested new packages and a list of cleanups and enhancements for pkgsrc that would be nice to have.
* Review packages for which review was requested on the pkgsrc-wip review mailing list.

Chapter 23. GNOME packaging and porting

Table of Contents

23.1. Meta packages
23.2. Packaging a GNOME application
23.3. Updating GNOME to a newer version
23.4. Patching guidelines

Quoting GNOME's web site:

    The GNOME project provides two things: The GNOME desktop environment, an intuitive and attractive desktop for users, and the GNOME development platform, an extensive framework for building applications that integrate into the rest of the desktop.

pkgsrc provides a seamless way to automatically build and install a complete GNOME environment under many different platforms. We can say with confidence that pkgsrc is one of the most advanced build and packaging systems for GNOME due to its included technologies: buildlink3, the wrappers and tools framework, and automatic configuration file management. A lot of effort is put into achieving a completely clean deinstallation of installed software components.

Given that pkgsrc is NetBSD's official packaging system, the above also means that great effort is put into making GNOME work under this operating system. Recently, DragonFly BSD also adopted pkgsrc as its preferred packaging system, contributing lots of portability fixes to make GNOME build and install under it.

This chapter is aimed at pkgsrc developers and other people interested in helping our GNOME porting and packaging efforts. It provides instructions on how to manage the existing packages and some important information regarding their internals.

We need your help! Should you have some spare cycles to devote to NetBSD, pkgsrc and GNOME and are willing to learn new exciting stuff, please jump straight to the pending work list! There is still a long way to go to get a fully-functional GNOME desktop under NetBSD and we need your help to achieve it!

23.1.
Meta packages

pkgsrc includes three GNOME-related meta packages:

* meta-pkgs/gnome-base: Provides the core GNOME desktop environment. It only includes the necessary bits to get it to boot correctly, although it may lack important functionality for daily operation. The idea behind this package is to let end users build their own configurations on top of this one, first installing this meta package to achieve a functional setup and then adding individual applications.
* meta-pkgs/gnome: Provides a complete installation of the GNOME platform and desktop as defined by the GNOME project; this is based on the components distributed in the platform/x.y/x.y.z/sources and desktop/x.y/x.y.z/sources directories of the official FTP server. Developer-only tools found in those directories are not installed unless required by some other component to work properly. Similarly, packages from the bindings set (bindings/x.y/x.y.z/sources) are not pulled in unless required as a dependency for an end-user component. This package "extends" meta-pkgs/gnome-base.
* meta-pkgs/gnome-devel: Installs all the tools required to build a GNOME component when fetched from the CVS repository. These are required to let the autogen.sh scripts work appropriately.

In all these packages, the DEPENDS lines are sorted in a way that eases updates: a package may depend on other packages listed before it, but not on any listed after it. It is very important to keep this order to ease updates, so... do not change it to alphabetical sorting!

23.2. Packaging a GNOME application

Almost all GNOME applications are written in C and use a common set of tools as their build system. Things are different with the newer bindings to other languages (such as Python), but the following will give you a general idea of the minimum required tools:

* Almost all GNOME applications use the GNU Autotools as their build system.
As a general rule you will need to tell this to your package:

      GNU_CONFIGURE=yes
      USE_LIBTOOL=yes
      USE_TOOLS+=gmake

* If the package uses pkg-config to detect dependencies, add this tool to the list of required utilities:

      USE_TOOLS+=pkg-config

  Also use pkgtools/verifypc at the end of the build process to ensure that you did not miss specifying any dependencies in your package and that the version requirements are all correct.

* If the package uses intltool, be sure to add intltool to the USE_TOOLS to handle dependencies and to force the package to use the latest available version.

* If the package uses gtk-doc (a documentation generation utility), do not add a dependency on it. The tool is rather big and the distfile should come with pregenerated documentation anyway; if it does not, it is a bug that you ought to report. For such packages you should disable gtk-doc (unless it is the default):

      CONFIGURE_ARGS+=--disable-gtk-doc

  The default location of installed HTML files (share/gtk-doc/<package-name>) is correct and should not be changed unless the package insists on installing them somewhere else. Otherwise programs such as devhelp will not be able to open them. You can do that with an entry similar to:

      CONFIGURE_ARGS+=--with-html-dir=${PREFIX}/share/gtk-doc/...

GNOME uses multiple shared directories and files under the installation prefix to maintain databases. In this context, shared means that those exact same directories and files are used among several different packages, leading to conflicts in the PLIST. pkgsrc currently includes functionality to handle the most common cases, so you have to forget about using @unexec ${RMDIR} lines in your file lists and omitting shared files from them. If you find yourself doing those, your package is most likely incorrect.

The following table lists the common situations that result in using shared directories or files. For each of them, the appropriate solution is given.
After applying the solution, be sure to regenerate the package's file list with make print-PLIST and ensure it is correct.

Table 23.1. PLIST handling for GNOME packages

* If the package installs OMF files under share/omf, then see Section 19.6.10, "Packages installing scrollkeeper/rarian data files".
* If the package installs icons under the share/icons/hicolor hierarchy or updates share/icons/hicolor/icon-theme.cache, then see Section 19.6.19, "Packages installing hicolor theme icons".
* If the package installs files under share/mime/packages, then see Section 19.6.14, "Packages installing extensions to the MIME database".
* If the package installs .desktop files under share/applications and these include MIME information, then see Section 19.6.20, "Packages installing desktop files".

23.3. Updating GNOME to a newer version

When seeing GNOME as a whole, there are two kinds of updates:

Major update

Given that there is still a very long way to go for GNOME 3 (if it ever appears), we consider a major update one that goes from a 2.X version to a 2.Y one, where Y is even and greater than X. These are hard to achieve because they introduce lots of changes in the components' code and almost all GNOME distfiles are updated to newer versions. Some of them can even break API and ABI compatibility with the previous major version series. As a result, the update needs to be done all at once to minimize breakage. A major update typically consists of around 80 package updates and the addition of some new ones.
Minor update

We consider a minor update one that goes from a 2.A.X version to a 2.A.Y one, where Y is greater than X. These are easy to achieve because they do not update all GNOME components, can be done in an incremental way and do not break API nor ABI compatibility. A minor update typically consists of around 50 package updates, although the numbers here may vary a lot.

In order to update the GNOME components in pkgsrc to a new stable release (either major or minor), the following steps should be followed:

1. Get a list of all the tarballs that form the new release by using the following commands. These will leave the full list of the components' distfiles in the list.txt file:

   % echo ls "*.tar.bz2" | \
     ftp -V | \
     awk '{ print $9 }' >list.txt
   % echo ls "*.tar.bz2" | \
     ftp -V | \
     awk '{ print $9 }' >>list.txt

2. Open each meta package's Makefile and bump its version to the release you are updating it to. The three meta packages should always be consistent in versioning. Obviously remove any PKGREVISIONs that might be in them.

3. For each meta package, update all its DEPENDS lines to match the latest versions as shown by the above commands. Do not list any newer version (even if found on the FTP server) because the meta packages are supposed to list the exact versions that form a specific GNOME release. Exceptions are permitted here if a newer version solves a serious issue in the overall desktop experience; these typically come in the form of a revision bump in pkgsrc, not in newer versions from the developers.

   Packages not listed in the list.txt file should be updated to the latest version available (if found in pkgsrc). This is the case, for example, of the dependencies on the GNU Autotools in the meta-pkgs/gnome-devel meta package.

4. Generate a patch from the modified meta packages and extract the list of "new" lines.
This will provide you an outline of what packages need to be updated in pkgsrc and in what order:

   % cvs diff -u gnome-devel gnome-base gnome | grep '^+D' >todo.txt

5. For major desktop updates it is recommended to zap all your installed packages and start over from scratch at this point.

6. Now comes the longest step by far: iterate over the contents of todo.txt and update the packages listed in it, in order. For major desktop updates none of these should be committed until the entire set is completed, because there are chances of breaking not-yet-updated packages.

7. Once the packages are up to date and working, commit them to the tree one by one with appropriate log messages. At the end, commit the three meta package updates and all the corresponding changes to the doc/CHANGES-<YEAR> and pkgsrc/doc/TODO files.

23.4. Patching guidelines

GNOME is a very big component in pkgsrc, approaching 100 packages. It is very important that you always, always, always feed back any portability fixes you make to a GNOME package to the mainstream developers (see Section 11.3.5, "Feedback to the author"). This is the only way to get their attention on portability issues and to ensure that future versions can be built out-of-the-box on NetBSD. The fewer custom patches in pkgsrc, the easier further updates are. Those developers in charge of issuing major GNOME updates will be grateful if you do that.

The most common places to report bugs are the GNOME Bugzilla and the freedesktop.org Bugzilla. Not all components use these to track bugs, but most of them do. Do not be short on your reports: always provide detailed explanations of the current failure, how it can be improved to achieve maximum portability and, if at all possible, provide a patch against CVS head. The more verbose you are, the higher the chances of your patch being accepted.

Also, please avoid using preprocessor magic to fix portability issues.
While the FreeBSD GNOME people are doing a great job in porting GNOME to their operating system, the official GNOME sources are now plagued by conditionals that check for __FreeBSD__ and similar macros. This hurts portability. Please see our patching guidelines (Section 11.3.4, "Patching guidelines") for more details.

Part III. The pkgsrc infrastructure internals

This part of the guide describes the infrastructure behind the interfaces presented in the developer's guide. A casual package maintainer should not need anything from this part.

Table of Contents

Chapter 24. Design of the pkgsrc infrastructure

Table of Contents

The pkgsrc infrastructure consists of many small Makefile fragments. Each such fragment needs a properly specified interface. This chapter explains what such an interface looks like.

24.1. The meaning of variable definitions

Whenever a variable is defined in the pkgsrc infrastructure, the location and the way of definition provide much information about the intended use of that variable. Additionally, more documentation may be found in a header comment or in this pkgsrc guide.

A special file is mk/defaults/mk.conf, which lists all variables that are intended to be user-defined. They are either defined using the ?= operator or they are left undefined because defining them to anything would effectively mean "yes". All these variables may be overridden by the pkgsrc user in the MAKECONF file.

Outside this file, the following conventions apply: Variables that are defined using the ?= operator may be overridden by a package. Variables that are defined using the = operator may be used read-only at run-time. Variables whose name starts with an underscore must not be accessed outside the pkgsrc infrastructure at all. They may change without further notice.

Note
These conventions are currently not applied consistently to the complete pkgsrc infrastructure.

24.2.
Avoiding problems before they arise

All variables that contain lists of things should default to being empty. Two examples that do not follow this rule are USE_LANGUAGES and DISTFILES. These variables cannot simply be modified using the += operator in package Makefiles (or other files included by them), since there is no guarantee whether the variable is already set or not, and what its value is. In the case of DISTFILES, the packages "know" the default value and just define it as in the following example.

    DISTFILES=  ${DISTNAME}${EXTRACT_SUFX} additional-files.tar.gz

Because of the selection of this default value, the same value appears in many package Makefiles. Similarly for USE_LANGUAGES, but in this case the default value ("c") is so short that it doesn't stand out. Nevertheless it is mentioned in many files.

24.3. Variable evaluation

24.3.1. At load time

Variable evaluation takes place either at load time or at runtime, depending on the context in which the variables occur. The contexts where variables are evaluated at load time are:

* the right-hand side of the := and != operators,
* make directives like .if or .for,
* dependency lines.

A special exception are references to the iteration variables of .for loops, which are expanded inline, no matter in which context they appear.

As the values of variables may change during load time, care must be taken not to evaluate them by accident. Typical examples of variables that should not be evaluated at load time are DEPENDS and CONFIGURE_ARGS. To make the effect clearer, here is an example:

    CONFIGURE_ARGS=     # none
    CFLAGS=             -O
    CONFIGURE_ARGS+=    CFLAGS=${CFLAGS:Q}

    CONFIGURE_ARGS:=    ${CONFIGURE_ARGS}

    CFLAGS+=            -Wall

This code shows how the use of the := operator can quickly lead to unexpected results. The first paragraph is fairly common code. The second paragraph evaluates the CONFIGURE_ARGS variable, which results in CFLAGS=-O.
In the third paragraph, -Wall is appended to CFLAGS, but this addition will not appear in CONFIGURE_ARGS. In actual code, the three paragraphs from above typically occur in completely unrelated files.

24.3.2. At runtime

After all the files have been loaded, the values of the variables cannot be changed anymore. Variables that are used in the shell commands are expanded at this point.

24.4. How can variables be specified?

There are many ways in which the definition and use of a variable can be restricted in order to detect bugs and violations of the (mostly unwritten) policies. See the pkglint developer's documentation for further details.

24.5. Designing interfaces for Makefile fragments

Most of the .mk files fall into one of the following classes. Cases where a file falls into more than one class should be avoided, as this often leads to subtle bugs.

24.5.1. Procedures with parameters

In a traditional imperative programming language some of the .mk files could be described as procedures. They take some input parameters and, after inclusion, provide a result in output parameters. Since all variables in Makefiles have global scope, care must be taken not to use parameter names that already have another meaning. For example, PKGNAME is a bad choice for a parameter name.

Procedures are completely evaluated at preprocessing time. That is, when calling a procedure all input parameters must be completely resolvable. For example, CONFIGURE_ARGS should never be an input parameter, since it is very likely that further text will be added after calling the procedure, which would effectively apply the procedure to only a part of the variable. Also, references to other variables will be modified after calling the procedure.

A procedure can declare its output parameters either as suitable for use in preprocessing directives or as only available at runtime. The latter alternative is for variables that contain references to other runtime variables.
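For illustration, this is roughly how a package "calls" the mk/bsd.options.mk procedure: the input parameters are set first, then the fragment is included, and its output is examined afterwards. The package name examplepkg and the ssl option are made-up:

```make
# Input parameters for the bsd.options.mk "procedure".
PKG_OPTIONS_VAR=	PKG_OPTIONS.examplepkg
PKG_SUPPORTED_OPTIONS=	ssl

.include "../../mk/bsd.options.mk"

# PKG_OPTIONS is an output parameter, usable in preprocessing directives.
.if !empty(PKG_OPTIONS:Mssl)
CONFIGURE_ARGS+=	--with-ssl
.endif
```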
Procedures shall be written such that it is possible to call the procedure more than once. That is, the file must not contain multiple-inclusion guards. Examples of procedures are mk/bsd.options.mk and mk/buildlink3/bsd.builtin.mk. To express that the parameters are evaluated at load time, they should be assigned using the := operator, which should be used only for this purpose.

24.5.2. Actions taken on behalf of parameters

Action files take some input parameters and may define runtime variables. They shall not define load-time variables. There are action files that are included implicitly by the pkgsrc infrastructure, while others must be included explicitly. An example of an action file is mk/subst.mk.

24.6. The order in which files are loaded

Package Makefiles usually consist of a set of variable definitions, and include the file ../../mk/bsd.pkg.mk in the very last line. Before that, they may also include various other *.mk files if they need to query the availability of certain features like the type of compiler or the X11 implementation. Due to the heavy use of preprocessor directives like .if and .for, the order in which the files are loaded matters. This section describes at which point the various files are loaded and gives the reasons for that order.

24.6.1. The order in bsd.prefs.mk

The very first action in bsd.prefs.mk is to define some essential variables like OPSYS, OS_VERSION and MACHINE_ARCH. Then, the user settings are loaded from the file specified in MAKECONF, which is usually mk.conf. After that, those variables that have not been overridden by the user are loaded from mk/defaults/mk.conf.

After the user settings, the system settings and platform settings are loaded, which may override the user settings.

Then, the tool definitions are loaded. The tool wrappers are not yet in effect. This only happens when building a package, so the proper variables must be used instead of the direct tool names.
As the last steps, some essential variables from the wrapper and the package system flavor are loaded, as well as the variables that have been cached in earlier phases of a package build.

24.6.2. The order in bsd.pkg.mk

First, bsd.prefs.mk is loaded. Then, the various *-vars.mk files are loaded, which fill in default values for those variables that have not been defined by the package. These variables may later be used even in unrelated files.

Then, the file bsd.pkg.error.mk provides the target error-check, which is added as a special dependency to all other targets that use DELAYED_ERROR_MSG or DELAYED_WARNING_MSG.

Then, the package-specific hacks from hacks.mk are included.

Then, various other files follow. Most of them don't have any dependencies on what they need to have included before or after them, though some do.

The code to check PKG_FAIL_REASON and PKG_SKIP_REASON is then executed, which restricts the use of these variables to the files that have been included before. Appearances in later files will be silently ignored.

Then, the files for the main targets are included, in the order of later execution, though the actual order should not matter.

At last, some more files are included that don't set any interesting variables but rather just define make targets to be executed.

Chapter 25. Regression tests

Table of Contents

25.1. The regression tests framework
25.2. Running the regression tests
25.3. Adding a new regression test
25.3.1. Overridable functions
25.3.2. Helper functions

25.1. The regression tests framework

25.2. Running the regression tests

You first need to install the pkgtools/pkg_regress package, which provides the pkg_regress command. Then you can simply run that command, which will run all tests in the regress category.

25.3. Adding a new regression test

Every directory in the regress category that contains a file called spec is considered a regression test. This file is a shell program that is included by the pkg_regress command.
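A minimal spec file might look like the following sketch. The test output is invented, and only the helper and variable names mentioned in this chapter are used:

```shell
# Sketch of a regress/*/spec file, included by pkg_regress.
# These definitions override the defaults described in Section 25.3.1.

do_setup() {
	: # nothing to prepare in this example
}

do_test() {
	# A real test would normally run ${TEST_MAKE} with ${MAKEARGS_TEST};
	# a plain echo keeps this sketch self-contained.
	echo "test output"
}

check_result() {
	# Fail if an error marker appears in the output (see output_prohibit
	# in Section 25.3.2).
	output_prohibit "ERROR" "FAILED"
}
```

Since the file is sourced by pkg_regress, the helper functions such as output_prohibit are only available at run time, not when trying the fragment standalone.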
The following functions can be overridden to suit your needs.

25.3.1. Overridable functions

These functions do not take any parameters. They are all called in "set -e" mode, so you should be careful to check the exit codes of any commands you run in the test.

do_setup()

    This function prepares the environment for the test. By default it does nothing.

do_test()

    This function runs the actual test. By default, it calls TEST_MAKE with the arguments MAKEARGS_TEST and writes its output, including error messages, into the file TEST_OUTFILE.

check_result()

    This function is run after the test and is typically used to compare the actual output with the expected output. It can make use of the various helper functions from the next section.

do_cleanup()

    This function cleans everything up after the test has been run. By default it does nothing.

25.3.2. Helper functions

output_prohibit(regex...)

    This function checks, for each of its parameters, that the output from do_test() does not match the given extended regular expression. If any of the regular expressions matches, the test will fail.

Chapter 26. Porting pkgsrc

Table of Contents

26.1. Porting pkgsrc to a new operating system
26.2. Adding support for a new compiler

The pkgsrc system has already been ported to many operating systems, hardware architectures and compilers. This chapter explains the necessary steps to make pkgsrc even more portable.

26.1. Porting pkgsrc to a new operating system

.../platform/MyOS.pkg.dist

    This file contains a list of directories, together with their permission bits and ownership. These directories will be created automatically with every package that explicitly sets USE_MTREE. This feature will be removed...

Appendix C. Directory layout of the pkgsrc FTP server

Table of Contents

C.1. distfiles: The distributed source files
C.2. misc: Miscellaneous things
C.3. packages: Binary packages
C.4. reports: Bulk build reports
C.5. current, pkgsrc-20xxQy: source packages

C.1. distfiles: The distributed source files

C.2. misc: Miscellaneous things

This directory contains things that individual pkgsrc developers find worth publishing.

C.3. packages: Binary packages

C.4. reports: Bulk build reports

Here are the reports from bulk builds, for those who want to fix packages that didn't build on some of the platforms. The structure of the subdirectories should look like the one in Section C.3, "packages: Binary packages".

C.5. current, pkgsrc-20xxQy: source packages

These directories contain the "real" pkgsrc, that is, the files that define how to create binary packages from source archives. The directory pkgsrc contains a snapshot of the CVS repository, which is updated regularly. The file pkgsrc.tar.gz contains the same as the directory, ready to be downloaded as a whole. In the directories for the quarterly branches, there is an additional file called pkgsrc-20xxQy.tar.gz, which contains the state of pkgsrc when it was branched.

Appendix D. Editing guidelines for the pkgsrc guide

Table of Contents

D.1. Make targets
D.2. Procedure

This section contains information on editing the pkgsrc guide itself.

D.1. Make targets

D.2. Procedure

The procedure to edit the pkgsrc guide is:

1. Make sure you have the packages needed to regenerate the pkgsrc guide (and other XML-based NetBSD documentation) installed. These are meta-pkgs/netbsd-doc for creating the ASCII and HTML versions, and meta-pkgs/netbsd-doc-print for the PostScript and PDF versions. You will need both packages installed, to make sure the documentation is consistent across all formats.

2. Run cd doc/guide to get to the right directory. All further steps will take place here.

3. Edit the XML file(s) in files/.

4. /.

5. (cd files && cvs commit)

6. Run bmake clean && bmake to regenerate the output files with the proper RCS Ids.

7. Run bmake regen to install and commit the files in both pkgsrc/doc and htdocs.

Note: If you have added, removed or renamed some chapters, you need to synchronize them using cvs add or cvs delete in the htdocs directory.
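The numbered procedure above corresponds roughly to the following command sequence. This is only a sketch: it assumes a CVS checkout of pkgsrc with the meta-pkgs/netbsd-doc and meta-pkgs/netbsd-doc-print packages installed, and the edited file name is invented.

```shell
cd doc/guide                 # step 2: work in the guide directory
vi files/chapter.xml         # step 3: edit the XML sources (file name invented)
(cd files && cvs commit)     # step 5: commit the edited sources
bmake clean && bmake         # step 6: regenerate output with proper RCS Ids
bmake regen                  # step 7: install and commit to pkgsrc/doc and htdocs
```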
See also: IRC log <trackbot> Date: 23 October 2008 <ed> <anthony> scribe: anthony <ed> ISSUE-2085? <trackbot> ISSUE-2085 -- Spec unclear where focus should initially go when a document is loaded -- OPEN <trackbot> ED: Issue raised by Ikivo ... NH has an action to propose new wording DS: Perhaps we should make a proposal to him <fat_tony> scribeNick: fat_tony <ed> ED: The thing he is complaining about is the steps for how focus navigation behaves ... You can interpret this in a few ways ... that's not necessarily a bad thing DS: Do we talk about document focus vs UA focus <scribe> ACTION: Doug to Propose wording for document focus vs user agent focus that addresses ISSUE-2085 [recorded in] <trackbot> Created ACTION-2324 - Propose wording for document focus vs user agent focus that addresses ISSUE-2085 [on Doug Schepers - due 2008-10-30]. ISSUE-2094? <trackbot> ISSUE-2094 -- accessing rules for traitAccess -- OPEN <trackbot> ED: We have two open actions there ... one is on me ... the other on A Emmons ... and A Emmons had the action to respond to the commenter DS: Who was this for? ED: Julien R ... I'll just check the public mailing list AG: Did you make the change? ED: No he's just asking for an answer ... it's not clear that he wants anything changed DS: What's the justification for the ones we chose not to change? ED: I really hoped A Emmons would take care of this ... he seemed to have a good grasp on it AG: Can we ping him at lunchtime? DS: We can always send him an email ISSUE-2107? <trackbot> ISSUE-2107 -- i18n comment 6: Direction and bidi-override attributes -- RAISED <trackbot> ED: Is this done? DS: We really need Chris here for this ... and the next one 2110 ... these are both internationalisation issues OI: Bidi direction is important to the I18N ... I think there are issues with Right-to-Left and Top-to-Bottom DS: We do not cover vertical text in Tiny
but we do in full AG: The wording in the specification allows for support for Top-to-Bottom text ... but doesn't mandate it DS: Tiny 1.1 is just a subset of features in full ED: We do have tests for Right-to-Left and Top-to-Bottom ... but they are simple tests ... we don't test all the tricky cases ... so with the issue here I do agree with it ... the last comment I believe is a misunderstanding OI: [Reads Issue] ED: There is nothing preventing you from doing that ... [Explains feature] OI: Is inherited like CSS? ED: Yes it is OI: If it is inherited like CSS that is ok ED: We took the direction property from CSS ... so it should be the same OI: The default is Left-to-Right DS: Would it be worth noting that in the spec OI: Could say that the property of RtL is inherited from the top block ... then it will be inherited to the terms inside it ... that's in CSS ... what is it in your terms ED: text content element is the term for it DS: I think this is just an authoring note ... if you put direction in the root of the document ... it will apply only to text elements ... it could confuse people because it's not like "text-direction" ED: It does map to the CSS property OI: You should note that it only applies to text blocks ... if I have a text block with direction RtL and a nested text block is also RtL DS: And of course override-able ED: That's one of his requests ... to have examples ... I think it would be useful to have the inheriting thing explained OI: So people know that there is inheritance? ... may not be necessary DS: More so for authors rather than implementers ... I think I'm going to put in a note ... because it's not really clear that direction applies to text OI: The direction property only applies to text ... not to graphics ED: Another comment he makes here in this issue ... that we say in the spec that authors are discouraged from using direction and unicode-bidi DS: [Reads out issue]
so what this sentence means for people using western languages is don't mess with this OI: Do you understand the issue if you place a dot after it ... the unicode algorithm doesn't know about the dot if it's LtR or RtL ... That's a problem with 'weak directionality' ... they don't have a directionality ED: Just need to clean up the wording on this issue and make an example or two OI: I can review the wording now if you want ED: So in this issue there was no suggested text (wording) ... there's no concrete proposal on this issue OI: I can review this issue, propose wording and make an example ED: If you can make an example sure, that'd be great ... so his example is the one using RtL on the top and unicode-bidi embed DS: Did we decide about the "in most cases"? ED: Ori said he's happy to review any text proposal we have ... if you can propose some new wording that would be great OI: I'm not on the mailing list but I will send you an email ... Here's a small HTML example ... [shows example] <scribe> ACTION: Doug to Propose wording and examples for ISSUE-2107 [recorded in] <trackbot> Created ACTION-2325 - Propose wording and examples for ISSUE-2107 [on Doug Schepers - due 2008-10-30]. ISSUE-2110? <trackbot> ISSUE-2110 -- i18n comment 8: Text rendering order -- RAISED <trackbot> OI: I think this is very similar ... oh wait, this is different ... it would be great if, with RtL direction, the rightmost glyph would be rendered first ... I think it would be nice ... but it is not a must ... that the rightmost glyph would be rendered first ... would be good for slow implementations so you can read the glyphs as they're being rendered ED: I'm not sure that the sentence is saying that OI: That's how I read it ... [reads sentence out] DS: So when it says rendering, I don't think the rendering would be slow ... if they're overlapping then the order matters OI: If they're not overlapping and the rendering order is not fast enough ... then you'll notice it ...
do you think there'll be any slow implementation? ... do you think there'll be a slow application in a hand held device with a slow processor ... it only matters if this is a slow rendering device DS: Honestly I'd be very surprised ... because the text is processed as a block ... and put in order for placing on the canvas ED: I mean rendering part of a text content block would be very strange OI: If the user agent that does the rendering to a canvas, is it possible ... there is actually no meaning defined for the rendering order of blocks DS: Individual characters in the block may overlap due to kerning <ed> ED: Chris has suggested new wording there <shepazu> DS: Here is my reply ... if the glyphs are reordered in the byte stream what happens? OI: Why would that happen? DS: Richard says it could; for example South East Asian characters ... we need Richard for this ED: We may need some tests OI: We need a Hebrew speaker for this ... or Arabic ... and for South East Asian languages ... maybe Richard can supply us an example today DS: There is nothing stopping us from making examples today ... but it needs to be done in two days ED: How is this handled in HTML again? ... is this already addressed OI: Yes, it works fine ... I'm not sure about the rendering order ... in HTML it doesn't matter DS: Yes you can ... because of CSS ... they are adding Opacity in CSS ... so they will run into the same issues ... with letter spacing ... so HTML and CSS have the same problem ED: For consistency we use the same algorithm OI: For consistency can we say it's defined in HTML DS: Probably not OI: The issue is the running order just as Richard sent in ISSUE-2110 ED: I mentioned reasons for reordering ... so I agree to that one ... the one that Chris sent OI: The reason we need to discuss rendering order is for when we have overlap ... I agree with Chris ED: So essentially this is rendering the first character you'd read OI: I agree with Chris ... not too sure about what Doug is saying ...
we need a clarification from Richard ED: Does this address the whole issue <scribe> ACTION: Erik to Add the proposed wording for ISSUE-2110 [recorded in] <trackbot> Created ACTION-2326 - Add the proposed wording for ISSUE-2110 [on Erik Dahlström - due 2008-10-30]. ISSUE-2130? <trackbot> ISSUE-2130 -- Basic Data Types section needs clarifications -- OPEN <trackbot> <ChrisL> RIM states that they use SVG Tiny 1.2 <ChrisL> <ChrisL>. <ChrisL> "We are looking at doing fairly complete support for SVG in all the places an image needs support in a device," said Liam Quinn, who heads browser development at RIM. <ChrisL> issue-2130? <trackbot> ISSUE-2130 -- Basic Data Types section needs clarifications -- OPEN <trackbot> CL: We had a related issue which was about xml:id and the XML Schema ... and we resolved to fix the schema ... which fixes the first part of the issue ... for the second one ... we are asked to supply an example ... this is related to XML namespacing DS: He may simply not understand what QNames are ... I guess we could add a quick example <ChrisL> XML namespaces has two methods, one for elements (unprefixed ones use the default namespace) and one for attributes (where unprefixed ones do not). <ChrisL> Following the tag finding on qnames in attribute values, and common practice, qnames in attribute values also do not use default namespaces <ChrisL> <ChrisL> This wording needs to be clarified <ChrisL> "If the <QName> has a prefix, then the prefix is expanded into an IRI reference using the namespace declarations in effect where the name occurs, and the default namespace is not used for unprefixed names. " <ChrisL> ACTION: Chris to clarify QName expansion to tuples [recorded in] <trackbot> Created ACTION-2327 - Clarify QName expansion to tuples [on Chris Lilley - due 2008-10-30]. <ChrisL> "If the <QName> has a prefix, then the prefix is expanded into a tuple of an IRI reference and a local name, using the namespace declarations in effect where the name occurs. 
Note that, as with unprefixed attributes, the default namespace is not used for unprefixed names. " CL: Proposed sentence in IRC <ChrisL> The relevant part of the tag finding, that we follow, is "Specifications that use QNames to represent {URI, local-name} pairs MUST describe the algorithm that is used to map between them." CL: So an example would be that if you're animating, the attribute name is stroke ... you'd be animating "stroke" and not "svg:stroke" ... [reads out rest of issue] ... so his third one is because in the schema we haven't defined anything as frame target ... do we want to patch up the schema to define something called "frame name" DS: Do we define what a frame target is? ... we can add a definition CL: Where is frame target used ED: Linking, I think the <a> element <ed> <ChrisL> CL: [Reads part out in spec] <ChrisL> "<frame-target>" <ChrisL> Specifies the name of the frame, pane, or other relevant presentation context for display of the linked content. If this already exists, it is re-used, replacing the existing content. If it does not exist, it is created (the same as _blank, except that it now has a name). Note that frame-target must be an XML Name [XML11]. <ChrisL> we could link XML Name to the types chapter <ChrisL> basically frame-target is of type XML Name. So there is no error in the spec CL: The next one handler, string ... XML doesn't call string "strings" ED: What does XML Events say? <ChrisL> for the next two, XML-NMTOKEN is a subset of string ED: is it just an XML token? CL: And we have a link to the types which say NMToken <ChrisL> NMTOKEN is correct, and links to <ChrisL> 'string' is too loose and should be corrected CL: Which chapter is handler in? ED: Scripting <ChrisL> we can edit the spec where it says string ED: Do we just have XML-NMToken as a basic type? CL: Yes ED: We do use XML Name ... on listener ... but not on handler CL: I've corrected handler ...
I don't see it on listener ED: It's using XMLName, it should be using NMToken in that case CL: So this is in the scripting chapter? ED: Yes <ed> heycam: I have an up-to-date tools directory <ed> note that someone might have changed the example, we did discuss it ISSUE-2139? <trackbot> ISSUE-2139 -- Add note regarding eRR attribute and prefetch element -- RAISED <trackbot> ED: A Emmons did send proposed text ... and Cyril responded to say if the text is added then he'll be fine with that <scribe> ACTION: Anthony to Add the text from the wording proposed by A Emmons to the specification regarding ISSUE2139 [recorded in] <trackbot> Created ACTION-2328 - Add the text from the wording proposed by A Emmons to the specification regarding ISSUE2139 [on Anthony Grasso - due 2008-10-30]. ISSUE-2145? <trackbot> ISSUE-2145 -- Clarify media timeline and document timeline -- RAISED <trackbot> DS: He hasn't offered anything helpful regarding the issue ED: Do we want to postpone this DS: We will look at this one in a bit <ed> png? <scribe> scribe: Anthony <scribe> scribeNick: fat_tony JF: Just started the activity in Japan ... for SVG IG ... created a mailing list ... had first f2f meeting on Friday ... the second is on 12th Nov ... and we invited Doug to talk to the group ... the group includes Takagi-san from KDDI and Takahashi-san from Indigo and Mori-san from Osaka City Uni CL: Where is the mailing list? JF: Yes it is in W3C ... there are several people from JIPDEC <ChrisL> JF: JIPDEC is a Japanese standards body <ChrisL> JIPDEC - Japan Information Processing Development Corporation JF: that is managing the standardisation of SVG in Japan <ChrisL> DS: JIPDEC sells copies of the JIS standard <shepazu> s/11th Nov/12th Nov/ JF: Keio W3C will have JPC meeting on the 6th November ...
so we can expect more traffic on SVG IG JP DS: If Kaz can translate the description on the public mailing list page in English to Japanese <scribe> ACTION: Doug to Coordinate on Kaz on JP IG list description [recorded in] <trackbot> Created ACTION-2329 - Coordinate on Kaz on JP IG list description [on Doug Schepers - due 2008-10-30]. <ChrisL> list description is in english currently <ChrisL> JF: Currently our main activity in SVG IG JP is to support the SVG JIS standardisation ... and coordinate with other people. Another activity is to promote the usage of SVG for ... web mapping in Japan ... we will be working on a new module called SVG Map module CL: This is related to the existing JIS standard? JF: No this is a different module and an email to promote it has been sent out ... we will work on a complete module ... so we can make a complete proposal to SVG WG ... we are also considering gathering SVG map data in Japan ... and publish the map data on the SVG community site ... possibly on planet SVG CL: Are there minutes of the F2F meeting? JF: Yes minutes were scribed but they are in Japanese ... by Kaz DS: I want to thank you for all your effort you have put into this <ChrisL> thats good. hope to see them on the mailing list DS: The fact that you are the chair of the SVG IG is really good. I appreciate that JF: On the same date we had the first SVG JIS committee meeting ... Hagino-san of W3C Keio is the manager ... and the goal is to create a JIS Standard for the SVG based format for mapping ... and that includes a complete translated specification of SVG Tiny 1.2 DS: When will they begin translation? JF: As soon as PR is available ... in the first committee meeting we discussed the possible category of the JIS standard ... and also the structure of the standard has been discussed ... there are two possible categories, one is mapping ... and networking category DS: Can it be in both? JF: That is an important point ...
as for the structure four possibilities were suggested ... the first is the one just for SVG ... and one for SVG map module ... the second is to have just one standard just for SVG map module and that will be different from the SVG Tiny 1.2 spec ... and the third possibility is to have a single JIS standard that has a branch for the map module ... and the last possibility is to have a single combined specification which has SVG map and SVG Tiny all in one CL: There are advantages of each JF: There are some people who think that it is not useful outside of the use case of mapping ... we had the conclusion that there will be a map module and a network module ... in the last committee meeting they decided to have a separate standard for SVG Tiny and a separate one for SVG Map module <ChrisL> Its my preference also, to have one standard which is SVGT1.2 in Japanese, and a second one which is SVG Mapping and could be submitted to W3C so we can have it as a W3C spec in English as well JF: so we hope that we can bring the Map module to the working group DS: I know Andreas will be interested in this module ... the other thing is that KDDI is not a member of W3C <ChrisL> I would like to see the Japanese version on the W3C site as an official translation DS: so they can't make a submission JF: It will be a proposal from the IG CL: We need to get patent statements from all of the authors ... agreeing to W3C patent policy JF: People in SVG IG JP agree to W3C patent policy <ChrisL> <ChrisL> Development of location based search engine service using goSVG <ChrisL> Yuzo Matsuzawa <ChrisL> Indigo Corporation JF: The goal is to come up with an SVG Map profile which might be based on SVG 2.0 Core ... or some other useful module like Transforms ... SVG JIS project will be a two-year project ... it started this April and its first year ends in March 2009 ... that is the goal in the first year to do the translation ... of the whole specification ... we have to finish the translation around the end of next March ...
in the second year the committee will take a formal process to publish the standard ... in order to achieve this the timing for SVG Tiny to achieve PR and Rec is important ... and will start the translation as soon as the PR spec is published ... so we hope this will happen in early Nov DS: So you will start translating when it's in PR and not Rec correct? JF: Yes DS: So as part of the translation I'm sure they will find typos ... I'd be surprised if, reading the specification, they are not confused at some point ... will they be able to pose a question ... if the English is not clear JF: The problem is the translation is outsourced ... but perhaps the SVG IG can review the translation CL: If someone perhaps misunderstands something ... then it would be nice for it to come back to us DS: They have an investment in getting it correct ... if something is confusing to the translator they should know that they should communicate with the SVG WG <ChrisL2> its good in that case to ask the WG for clarification. Sometimes the English is not clear to a non native speaker. (Actually some of the spec is written by non-native speakers, too) DS: Our plan is early next week we will do our call for transition ... it is possible as early as Wed ... we think we can go to PR on Nov 3rd and we will go to Rec on Dec 2nd CL: When we go to Rec we will make a press release ... it would be nice to have Canon on the press release DS: Just so you know RIM has announced that it is developing SVG Tiny 1.2 <ed> scribe: Erik <ed> scribeNick: ed <dino> i just sent in comments on svg transformations. Did they make it to the list? <dino> shepazu, are you moderator? <shepazu> dino, just modded it <shepazu> dino: can you join us? <dino> shepazu, thankyou! <dino> yes, i'll come up if you promise to be nice DS: dino sent another message (I'm moderating this now, it got caught) AG: sent the proposal to the public wg list ... tried to keep it as close as possible to the css proposal ...
the one in css uses z-order ... I decided to leave that out for now and use the painters algorithm ... it's an extension of the svg 2d transforms ... it's extended to a 3x4 matrix to give you the extra rotation capabilities ... we think that's all that's necessary ... obviously extending this means we need to extend the DOM interfaces DS: does this have impacts on declarative animations? AG: yes, we need to consider that use-case ... and we need to add a section there to say how that should be handled ... if you have a circle on top of a rectangle and you rotate both as a group you'll still see the circle because it's on top of the rect ... rotate in the 3d sense DJ: the way css transformations work is that anything that has a transform gets a css stacking context ... which means it goes into the z-index system ... z-index doesn't mean position, it's just like an offscreen buffer and the order you draw those ... you can have something with a z value that puts something close to the viewer, but have a z-index that puts it below some other elements ... one question is how the svg transformations are supposed to be rendered CL: it's hard to do a rotation of two elements that rotate in a circle in z if you have two elements in different 3d coordinate systems each group is rendered in two offscreens, so you can't have one element sometimes in front of and sometimes behind DJ: most systems in 3d don't allow you to do grouping CL: opengl doesn't allow that, but PHIGS does ... opengl is immediate mode, PHIGS is more like svg, it has a structure and a DOM, it has nesting, transformations ... PHIGS has the same concept of grouping and transformations as svg, but everything is in the same 3d coordinate system ALG: are you talking about real 3d rendering in svg? is there any use in x3d?
CL: 3d rendering in 2d DJ: you can position things but it's like drawing a postcard in 3d space CL: 2.5d is 3x3 matrices AG: the problem is that it's bad for authoring, it's difficult to write the transformations and hard to animate DS: are we proposing 2.5d or 3d transforms? AG: localized 3d transforms ... it's still sort of 2.5d because you don't have a single 3d coordinate system, you have localized 3d coordinate systems CL: the 3d transforms are still affine transforms? DJ: the proposal canon sent has non-affine transformations available ... 3d affine transforms ... they have true perspective, so they're not 2d affine transforms CL: normally geometry is seen as structure ... in svg, and colors and such are seen as styling, so properties vs attributes DS: but if it's going to be in css it should be properties CL: but then the transforms in svg should also be properties? AG: the reason for having them as properties is to keep it as close to css as possible ED: people have been asking for transforms to be in css for some time (even for svg use-cases) DS: can you still have the animated "starwars" effect? AG/DJ: yes CL: how do we move this forward? AG: what's the priority of transforms in css? DJ: it's not part of css until the group is rechartered ... mozilla just implemented css transforms, but only the 2d parts DS: are you willing to work with the SVGWG to move this forward? DJ: currently css transforms can apply to svg root elements only, like a postcard DS: we need to figure out how SVG and CSS should work together wrt transforms DJ: there are no units in svg in transform values, might be a problem for example AG: i'm happy to work with the css wg to move this DJ: mostly the proposals are the same ... we now do a full 4x4 matrix ... things you need to be aware of ... hit testing AG: and how do you get bounding boxes? <ChrisL2> non-invertible matrices. singularities DJ: we have it on the phones ...
what happens is that sometimes, with overflow hidden in css, you have to flatten the whole tree ... sometimes it looks like 3d but it's really not ... [draws on whiteboard] <fantasai> There were two concerns that came up when we discussed transforms in Beijing <fantasai> one was syntax: making the syntax match more closely CSS conventions <fantasai> the other was the default value of the property <fantasai> which can't be the same as the identity transform <fantasai> because transformation needs to trigger some side effects in CSS <fantasai> such as turning the element into a block formatting context DJ: clipping in 3d is difficult <fantasai> so it contains all its floating children ,etc DJ: you can't preserve 3d and have clipping <ChrisL2> yes. so the initial value needs to be 'no transform applied' so you dont get a css stacking context, right? DJ: the transforms spec isn't very big, but it's probably full of things that needs further discussion ... perspective-transform ...?? ... read our proposal and ask for clarifications where needed ... reason we haven't got much feedback is probably because not many have tried implementing it so far ... css has a lot of edge cases that affect the 3d transforms CL: the initial value for transform needs to be 'none' not identity, because it sets up a stacking context DJ: it's nice to put the perspective on a parent div or svg:g, and then use it for the child elements AG: understand the need to the property DJ: preserve3d is something svg would need too probably AG: it would be nice to have DJ for another F2F for more discussion on this ... we could achieve faster progress that way DJ: i'll join the svg mailinglist JF: it's important to work with the CSSWG on this ... 
we don't want two specs on this RESOLUTION: we won't form a joint taskforce, we'll work with DJ and the CSS WG to ensure that 3d transforms can be used in both CSS, HTML and SVG JF: canon has an experimental implementation of the 3d transforms module in svg, and we can confirm that we can easily describe 3d-based UIs by using this ... maybe at the next F2F we can have a demo ready <ChrisL2> Foundations of Open Media Software (FOMS) AG: we can host the next F2F in sydney, we want to do it after the FOMS conference () <ChrisL2> Developer Workshop <ChrisL2> Thursday 15 - Friday 16 January 2009 <ChrisL2> Hobart (Tasmania), Australia <ChrisL2> AG: so Jan 19 ED: i would prefer to have it a bit later CL: there is an AC meeting in march ED: early february would be better DS: i'll be speaking at a conference in february, *checking when* ... let's schedule the f2f for february 16-20 <fantasai> Minutes from CSSWG meeting in Beijing: <ChrisL2> Scribe: Chris <ChrisL2> scribenick: ChrisL2 <ed> ED: We now have only 9 issues ISSUE-2153 RNG link Link to RelaxNG Schema ISSUE-2153? <trackbot> ISSUE-2153 -- Link to RelaxNG Schema -- RAISED <trackbot> this can be closed, Doug has updated the spec to point to the actual correct RNG that we are all editing also the make script updates this master -> publish ISSUE-2145? <trackbot> ISSUE-2145 -- Clarify media timeline and document timeline -- RAISED <trackbot> ED: Prefer to have AE on this one sync behaviours not currently implemented in Opera postpone to tomorrow? DS: Prefer not to if poss ED: Some suggested text in the comment (we discuss 'locked') CL: On this part "The specification should say what happens when a video/audio element points to a file or a set of streams containing multiple audio tracks (english and french, or AAC and AC3) or multiple video tracks. " the answer is that it depends on whatever fragment syntax is defined for that media type registration.
Depends also on what the media fragments WG comes up with. There are other technologies that relate to this, eg MPEG-21; it's outside our spec. But clearly something that needs to be solved DS: on his second-to-last para ED: .... agree about media timeline rather than 'play time' CL: What does smil say about sync delay and the allowed amount of sync? ED: eg a streaming video, it can be difficult to seek back. That's an example where the media timeline cannot be controlled ... not always possible to restart from time zero DS: Media elements are time containers. So locking one does not lock its parent CL: But if they are locked, and you pause the child, what happens to the parent. That's what he is asking DS: Locking is only in terms of timeline sync, so if you pause an element its timeline is frozen; when it's playing it's synced to the parent CL: I read there "Note that the semantics of syncBehavior do not describe or require a particular approach to maintaining sync; the approach will be implementation dependent. Possible means of resolving a sync conflict may include:" also under "Paused elements and the active duration" "In addition, when an element is paused, the accumulated synchronization offset will increase to reflect the altered sync relationship. See also The accumulated synchronization offset." DS: So in the end it's defined by SMIL there ED: So I can change playtime. will do that now <scribe> ACTION: Doug respond on ISSUE-2145 citing these minutes and pointing to SMIL 2.1 to talk about elapsed time offset when paused [recorded in] <trackbot> Created ACTION-2330 - Respond on ISSUE-2145 citing these minutes and pointing to SMIL 2.1 to talk about elapsed time offset when paused [on Doug Schepers - due 2008-10-30]. ISSUE-2107? <trackbot> ISSUE-2107 -- i18n comment 6: Direction and bidi-override attributes -- OPEN <trackbot> CL: We are almost done on that one ... we had a native Hebrew speaker earlier today check our examples and tests ...
its added to the spec, tested and implemented
... doug has an action to add an example to the spec
DS: Want a step by step example and also a template for people to follow
Still open until the examples are added
<ChrisL> ISSUE-2147?
<trackbot> ISSUE-2147 -- Section on externally referenced documents confusing -- OPEN
<trackbot>
<ChrisL> ED: Cameron wrote about this on public list. 2 script elements with same URI, execute twice not once so section is incorrect
<ChrisL> DS: That section is badly written
<ChrisL> ED: AE did a small rewrite of some of it
<ChrisL> DS: I sent an email with a more extensive rewrite
<ChrisL> ED: There are multiple copy pasted sentences there
<shepazu>
<ChrisL> 14.1.6 Externally referenced documents
<ChrisL>
<ed> DS: my proposed text is the last one in the externalRefs.html file above
<ChrisL> ED: Scripting chapter covers the script case, Cameron fixed it
<ed>
<ed> actually not yet
<ed>
<ChrisL> the questions on external media and external scripts are in fact completely independent of the section on primary and resource documents, which are only about svg
<ChrisL> media-audio-206-t is passed in GPAC
<ChrisL> but animate-elem-226-t fails
fat_tony, ed, ChrisL2 WARNING: Replacing previous Present list. (Old list: Ori_Idan, Fons_Kuijk, Jun_Fujisawa, Doug_Schepers, Erik_Dahlstrom, Anthony_Grasso) Use 'Present+ ... ' if you meant to add people without replacing the list, such as: <dbooth> Present+ Al_gilman, CL, DS, AG, JF, dino Present: Al_gilman CL DS AG JF dino Found Date: 23 Oct 2008 Guessing minutes URL: People with action items: anthony chris doug erik[End of scribe.perl diagnostic output]
http://www.w3.org/2008/10/23-svg-minutes.html
Opened 7 years ago
Closed 7 years ago
Last modified 3 years ago

#8241 closed (fixed)

Primary ForeignKeys don't work with FormSets

Description

Using this UserProfile as an example:

    from django.db import models
    from django.contrib.auth.models import User

    class UserProfile(models.Model):
        user = models.OneToOneField(User, primary_key=True)
        nickname = models.CharField(max_length=30)

    from django.contrib.auth.admin import UserAdmin
    from django.contrib.admin import site, StackedInline

    class UserProfileAdmin(StackedInline):
        model = UserProfile
        max_num = 1

    UserAdmin.inlines = UserAdmin.inlines + [UserProfileAdmin]
    site.unregister(User)
    site.register(User, UserAdmin)

Trying to save a User from the admin causes a KeyError: 'user_id' because the primary key doesn't get rendered in the form.

Attachments (3)
Change History (19)

comment:1 Changed 7 years ago by sciyoshi
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset

comment:2 Changed 7 years ago by jacob
- Component changed from Uncategorized to Admin interface
- milestone set to 1.0
- Triage Stage changed from Unreviewed to Accepted

comment:3 Changed 7 years ago by mremolt

comment:4 in reply to: ↑ 3 Changed 7 years ago by sciyoshi

    Isn't this a duplicate for or at least related to #7947 ?

I don't think so; applying the patch from that ticket doesn't fix the bug, and the same behavior also occurs if we have

    user = models.ForeignKey(User, primary_key=True)

I think the problem could be fixed somewhere around add_fields in BaseModelFormSet:

    def add_fields(self, form, index):
        """Add a hidden field for the object's primary key."""
        if self.model._meta.has_auto_field:
            self._pk_field_name = self.model._meta.pk.attname
            form.fields[self._pk_field_name] = IntegerField(required=False, widget=HiddenInput)
        super(BaseModelFormSet, self).add_fields(form, index)

If the primary key isn't an AutoField it doesn't get added to the fields here...
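To see why the guard quoted in comment 4 skips the profile's primary key, here is a minimal self-contained sketch (plain Python stand-ins, not actual Django classes; `Meta`, `FormSet`, and the attribute names are illustrative) of that dispatch logic:

```python
# Minimal stand-ins (NOT Django code) showing why a model whose primary
# key is not an AutoField never receives a hidden pk form field.

class Meta:
    def __init__(self, has_auto_field, pk_attname):
        self.has_auto_field = has_auto_field
        self.pk = type("PK", (), {"attname": pk_attname})()

class FormSet:
    def __init__(self, meta):
        self.meta = meta
        self.fields = {}

    def add_fields(self):
        # Mirrors the logic quoted above: only auto PKs get a hidden field.
        if self.meta.has_auto_field:
            self.fields[self.meta.pk.attname] = "HiddenInput"

auto = FormSet(Meta(True, "id"))            # ordinary AutoField pk
auto.add_fields()

profile = FormSet(Meta(False, "user_id"))   # OneToOneField(primary_key=True)
profile.add_fields()

print("id" in auto.fields)       # True
print("user_id" in profile.fields)  # False: later lookup raises KeyError
```

Because the profile form never gains a `user_id` field, any later code that indexes the submitted data by the pk attribute fails exactly as the reporter describes.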
comment:5 Changed 7 years ago by sciyoshi

Changed 7 years ago by sciyoshi

comment:6 Changed 7 years ago by sciyoshi
- Has patch set
- Needs tests set
- Patch needs improvement set

comment:7 Changed 7 years ago by semenov
- Patch needs improvement unset

sciyoshi, first of all, thanks for your patch. It inspired me for further analysis of the problem. :) From what I've discovered, your patch suffers from the following:

- It circumvents the naming convention of form fields and adds a form field named user_id (i.e., taken as field.attname, not as field.name). At first glance, that seems to be ok: the field is filled by model_to_dict(), passed thru browser POST, but then ignored by save_instance(). Alas, that would break things if a model had TWO fields like user=OneToOneField() and user_id=CharField(). We shouldn't be messing with the naming convention: all form fields should always use field.name as their names.
- The patch breaks the BaseModelFormSet.add_fields() contract, which currently adds a new field if (and only if) a primary key is an AutoField -- what I consider to be a desired behaviour.

I'm attaching the updated patch (vs [8461]), which resolves both problems.

Changed 7 years ago by semenov

comment:8 Changed 7 years ago by semenov

comment:9 Changed 7 years ago by semenov

Changed 7 years ago by semenov

comment:10 Changed 7 years ago by semenov
Note: I really don't like my drill down until we find a "real" pk approach; but I don't have enough deep knowledge of django/db internals, so that was the only way that seemed correct to me (more or less).

comment:11 Changed 7 years ago by Uninen
- Cc ville@… added

comment:12 Changed 7 years ago by brosner
- Resolution set to invalid
- Status changed from new to closed

It appears this ticket has been caught in the middle of some commit rapid fire. In the fire the issue reported here is fixed. The issue that semenov brings up is not related, but I think should be fixed.
Can you please open a ticket talking about the namespace stomping. Please reopen with detailed and concise use cases if the issue reported has not been fixed on trunk.

comment:13 Changed 7 years ago by brosner
- Resolution invalid deleted
- Status changed from closed to reopened

Ok. I take back that this is fixed. I had a bad test case. Reopening.

comment:14 Changed 7 years ago by brosner
- Owner changed from nobody to brosner
- Status changed from reopened to new

comment:15 Changed 7 years ago by brosner
- Resolution set to fixed
- Status changed from new to closed

comment:16 Changed 3 years ago by jacob
- milestone 1.0 deleted
Milestone 1.0 deleted

That should be
https://code.djangoproject.com/ticket/8241
Opened 7 years ago
Closed 7 years ago
Last modified 4 years ago

#8020 closed (fixed)

python process crashes when using sitemaps after [8088]

Description (last modified by mtredinnick)

When I go to, python process crashes without any errors or core dumps.

Django-version: trunk after [8088]
DB-backends: PostgreSQL or SQLite3
URLs in sitemap: less than 50000

Attachments (3)
Change History (18)

comment:1 Changed 7 years ago by mtredinnick

Changed 7 years ago by Boo
TestCase to repeat the problem

comment:2 Changed 7 years ago by Boo
I've added TestCase to repeat this problem. I use Python-2.5.2, OS: FreeBSD and MacOSX. I run development server and go to, and devserver crashes without any errors.

    Boo:~/projects/testcase boo$ ./manage.py runserver
    Validating models...
    0 errors found

    Django version 1.0-alpha-SVN-8138, using settings 'testcase.settings'
    Development server is running at
    Quit the server with CONTROL-C.
    Boo:~/projects/testcase boo$

comment:3 Changed 7 years ago by Boo
- Needs tests set

comment:4 Changed 7 years ago by evan_schulz

Changed 7 years ago by Boo
Corrected INSTALLED_APPS

comment:5 Changed 7 years ago by julianb

Changed 7 years ago by johndagostino
fixes development server crash sitemaps framework

comment:6 Changed 7 years ago by johndagostino
I can reproduce this on MacOSX with Python 2.5.1. The attached patch fixes the crash for me.

    from django.contrib.sitemaps import Sitemap
    from blog.models import Post

    class PostSitemap(Sitemap):
        changefreq = "never"
        priority = 0.5

        def items(self):
            return Post.objects.all()

        def lastmod(self, obj):
            return obj.pub_date

comment:7 Changed 7 years ago by johndagostino
- Has patch set
- milestone set to 1.0 beta
- Needs tests unset

comment:8 Changed 7 years ago by julianb
- Triage Stage changed from Unreviewed to Ready for checkin

Seems obvious...
comment:9 Changed 7 years ago by mtredinnick
- Resolution set to fixed
- Status changed from new to closed

comment:10 Changed 7 years ago by hvendelbo
- Resolution fixed deleted
- Status changed from closed to reopened

This change recurses for me on Mac OSX, I don't understand why it wouldn't infinitely recurse on all platforms??? Is it because I changed the Sitemap class to a new-style class?

get_urls gets the paginator entering _get_paginator. It already has the property paginator, so it returns the property paginator which triggers a call to _get_paginator as it already has ....

comment:11 Changed 7 years ago by hvendelbo
I believe reverting the fix and making Sitemaps new-style classes fixes the cause of this ticket.

comment:12 Changed 7 years ago by mtredinnick
- milestone changed from 1.0 beta to 1.0

comment:13 Changed 7 years ago by mtredinnick
- Triage Stage changed from Ready for checkin to Accepted

comment:14 Changed 7 years ago by mtredinnick
- Resolution set to fixed
- Status changed from reopened to closed

From your description, it sounds like you are running a version from earlier than [8231], where the infinite loop problem with referring to a bad attribute was fixed. So I'm going to reclose this. If you are still seeing a problem on unmodified code that is up to date, please open a new ticket explaining how to repeat the problem.

comment:15 Changed 4 years ago by jacob
- milestone 1.0 deleted
Milestone 1.0 deleted

There isn't really enough information here to work out what's going on or what the problem being reported is. You say "it crashed", but what do you mean? How can we replicate the problem? Do you have a small example that shows what is going on? A problem that cannot be repeated cannot be fixed.
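The recursion described in comment 10 can be reproduced in miniature. This is an illustrative sketch (plain Python, not Django's actual Sitemap code): a property getter that re-reads the same property calls itself forever, while caching the value on the instance breaks the loop.

```python
# Illustrative sketch of the bug from comment 10: a getter that reads
# back the property it implements recurses until Python gives up.

class Broken:
    def _get_paginator(self):
        return self.paginator            # re-enters _get_paginator forever
    paginator = property(_get_paginator)

class Fixed:
    def _get_paginator(self):
        if not hasattr(self, "_paginator"):
            self._paginator = object()   # build once, cache on the instance
        return self._paginator
    paginator = property(_get_paginator)

try:
    Broken().paginator
except RecursionError:
    print("broken: infinite recursion")

f = Fixed()
print(f.paginator is f.paginator)        # True: cached, no recursion
```

The cached-attribute form is the usual way to make a lazily built property safe regardless of old- vs new-style class quirks.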
https://code.djangoproject.com/ticket/8020
NAME
sd_bus_call, sd_bus_call_async - Invoke a D-Bus method call

SYNOPSIS
#include <systemd/sd-bus.h>

typedef int (*sd_bus_message_handler_t)(sd_bus_message *m, void *userdata, sd_bus_error *ret_error);

int sd_bus_call(sd_bus *bus, sd_bus_message *m, uint64_t usec, sd_bus_error *ret_error, sd_bus_message **reply);

int sd_bus_call_async(sd_bus *bus, sd_bus_slot **slot, sd_bus_message *m, sd_bus_message_handler_t callback, void *userdata, uint64_t usec);

DESCRIPTION
sd_bus_call() takes a complete bus message object and calls the corresponding D-Bus method. On success, the response is stored in reply. usec indicates the timeout in microseconds.

If ret_error is not NULL and sd_bus_call() fails (either because of an internal error or because it received a D-Bus error reply), ret_error is initialized to an instance of sd_bus_error describing the error.

sd_bus_call_async() is like sd_bus_call() but works asynchronously. The callback indicates the function to call when the response arrives. The userdata pointer will be passed to the callback function, and may be chosen freely by the caller. If slot is not NULL and sd_bus_call_async() succeeds, slot is set to a slot object which can be used to cancel the method call at a later time using sd_bus_slot_unref(3). If slot is NULL, the lifetime of the method call is bound to the lifetime of the bus object itself, and it cannot be cancelled independently. See sd_bus_slot_set_floating(3) for details.

callback is called when a reply arrives with the reply, userdata and an sd_bus_error output parameter as its arguments. Unlike sd_bus_call(), the sd_bus_error output parameter passed to the callback will be empty. To determine whether the method call succeeded, use sd_bus_message_is_method_error(3) on the reply message passed to the callback instead.
If the callback returns zero and the sd_bus_error output parameter is still empty when the callback finishes, other handlers registered with functions such as sd_bus_add_filter(3) or sd_bus_add_match(3) are given a chance to process the message. If the callback returns a non-zero value or the sd_bus_error output parameter is not empty when the callback finishes, no further processing of the message is done. Generally, you want to return zero from the callback to give other registered handlers a chance to process the reply as well. (Note that the sd_bus_error parameter is an output parameter of the callback function, not an input parameter; it can be used to propagate errors from the callback handler, it will not receive any error that was received as method reply.)

If usec is zero, the default D-Bus method call timeout is used. See sd_bus_get_method_call_timeout(3).

RETURN VALUE
On success, these functions return a non-negative integer. On failure, they return a negative errno-style error code.

Errors
When sd_bus_call() internally receives a D-Bus error reply, it will set ret_error if it is not NULL, and will return a negative value mapped from the error reply, see sd_bus_error_get_errno(3). Returned errors may indicate the following problems:

-EINVAL: an input parameter is invalid.
-ECHILD: the bus connection was allocated in a parent process and is being reused in a child process after fork().
-ENOTCONN: the bus connection is not connected.
-ECONNRESET: the bus connection was closed while waiting for the reply.
-ETIMEDOUT: a reply was not received within the given timeout.
-ELOOP: the message is addressed to the calling connection itself.
-ENOMEM: memory allocation failed.
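The zero-return convention described above can be sketched language-agnostically. The following is a hypothetical dispatcher in plain Python (not libsystemd code; the function and handler names are invented for illustration): a callback returning 0 lets subsequently registered handlers see the message, while a non-zero return stops further processing.

```python
# Hypothetical dispatcher illustrating the sd-bus callback convention:
# return 0 -> later handlers also run; return non-zero -> stop here.

def dispatch(message, handlers):
    seen = []
    for handler in handlers:
        seen.append(handler.__name__)
        if handler(message) != 0:
            break                      # non-zero: no further processing
    return seen

def logger(msg):
    return 0                           # observe the reply, pass it along

def consumer(msg):
    return 1                           # fully handled: stop the chain

def never_runs(msg):
    return 0

print(dispatch("reply", [logger, consumer, never_runs]))
# ['logger', 'consumer']  -- never_runs is skipped
```

This is why the text recommends returning zero from a reply callback unless you are certain no other registered handler should see the message.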
https://man.archlinux.org/man/sd_bus_call_async.3.en
----------------------------------------------------------------------
diff --git a/docs/guide/misc/migrate-to-0.8.0.md b/docs/guide/misc/migrate-to-0.8.0.md
new file mode 100644
index 0000000..a71d19b
--- /dev/null
+++ b/docs/guide/misc/migrate-to-0.8.0.md
@@ -0,0 +1,32 @@
+---
+layout: website-normal
+title: Migrating to 0.8.0
+---
+
+As noted in the [release notes](release-notes.html),
+this version introduces major package renames.
+
+However migrating your code should not be hard:
+
+* For small Java projects, simply "Optimizing Imports" in your IDE should fix code issues.
+
+* For YAML blueprints and larger projects,
+a set of regexes has been prepared [here](migrate-to-0.8.0-regexes.sed)
+detailing all class renames.
+
+To download and apply this to an entire directory, you can use the following snippet.
+If running this on a Java project, you should enter the `src` directory
+or `rm -rf target` first. For other use cases it should be easy to adapt,
+noting the use of `sed` and the arguments (shown for OS X / BSD here).
+Do make a `git commit` or other backup before applying,
+to make it easy to inspect the changes.
+It may add a new line to any file which does not terminate with one,
+so do not run on binary files.
+
+{% highlight bash %}
+$ curl {{ site.url_root }}{{ site.path.guide }}/misc/migrate-to-0.8.0-regexes.sed -o /tmp/migrate.sed
+$ for x in `find . -type file` ; do sed -E -i .bak -f /tmp/migrate.sed $x ; done
+$ find . -name "*.bak" -delete
+{% endhighlight %}
+
+If you encounter any issues, please [contact us](/website/community/).
----------------------------------------------------------------------
diff --git a/docs/guide/misc/release-notes.md b/docs/guide/misc/release-notes.md
index d174025..5a61833 100644
--- a/docs/guide/misc/release-notes.md
+++ b/docs/guide/misc/release-notes.md
@@ -12,13 +12,14 @@ title: Release Notes
 * Introduction
 * New Features
 * Backwards Compatibility
-* Community Activity
 ### Introduction
-Version 0.7.0 is a major step for Apache Brooklyn. It is the first full release
-of the project as part of the Apache incubator.
+Version 0.8.0 is a rapid, clean-up and hardening release, as we prepare for graduation.
+The biggest change is the package refactoring, discussed in the Backwards Compatibility section.
+Other new features include more machine management (suspend/resume and Windows enhancements),
+MySQL cluster, entitlements enhancements, and pluggable blueprint languages.
 Thanks go to our community for their improvements, feedback and guidance, and
 to Brooklyn's commercial users for funding much of this development.
@@ -26,91 +27,46 @@ to Brooklyn's commercial users for funding much of this development.
 ### New Features
-This release is of a magnitude that makes it difficult to do justice to all of
-the features that have been added to Brooklyn in the last eighteen months. The
-selection here is by no means all that is new.
+New features include:
-1. _Blueprints in YAML_ In a significant boost to accessibility, authors no
- longer need to know Java to model applications. The format follows the
- [OASIS CAMP specification]()
- with some extensions.
+* All classes are in the `org.apache.brooklyn` namespace
-1. _Persistence and rebind_ Brooklyn persists its state and on restart rebinds
- to the existing entities.
+* Port mappings supported for BYON locations: fixed-IP machines can now be configured
+ within subnets
-1. _High availability_ Brooklyn can be run a highly available mode with a
- master node and one or more standby nodes.
+* The Entitlements API is extended to be more convenient and work better with LDAP
-1. _Blueprint versioning_ The blueprint catalogue supports multiple versions
- of blueprints. Version dependencies are managed with OSGi.
+* The blueprint language is pluggable, so downstream projects can supply their own,
+ such as TOSCA to complement the default CAMP dialect used by Brooklyn
-1. _Windows support_ Brooklyn can both run on and deploy to Windows instances.
+* A MySQL master-slave blueprint is added
-1. _Cloud integrations_ Significant support for several clouds, including
- SoftLayer, Google Compute Engine and Microsoft Azure.
-
-1. _Downstream parent_ A new module makes it significantly simpler for downstream
- projects to depend on Brooklyn.
-
-
-Other post-0.7.0-M2 highlights include:
-
-1. New policies: `SshConnectionFailure`, which emits an event if it cannot make
- an SSH connection to a machine, and `ConditionalSuspendPolicy`, which suspends
- a target policy if it receives a sensor event.
-
-1. Brooklyn reports server features in responses to `GET /v1/server/version`.
-
-1. It is much easier for downstream projects to customise the behaviour of
- `JcloudsLocationSecurityGroupCustomiser`.
-
-1. Brooklyn is compiled with Java 7 and uses jclouds 1.9.0.
-
-1. Improvements to the existing Nginx, Riak, RabbitMQ and Bind DNS entities and
- support for Tomcat 8.
+* Misc other new sensors and improvements to Redis, Postgres, and general datastore mixins
+* jclouds version bumped to 1.9.1, and misc improvements for several clouds
+ including Softlayer and GCE
+
 ### Backwards Compatibility
-Changes since 0.7.0-M2:
-
-1. Passwords generated with the `generate-password` command line tool must be
- regenerated. The tool now generates exactly `sha256( salt + password )`.
-
-Changes since 0.6.0:
-
-1. Code deprecated in 0.6.0 has been deleted. Many classes and methods are newly deprecated.
-
-1. Persistence has been radically overhauled.
-In most cases the state files
- from previous versions are compatible but many items have had to change.
-
-1. Location configuration getter and setter methods are changed to match those
- of Entities. This is in preparation for having all Locations be Entities.
-
-1. OpenShift integration has moved from core Brooklyn to the downstream project
-.
-
-Please refer to the release notes for versions
-[0.7.0-M2]()
-and
-[0.7.0-M1]()
-for further compatibility notes.
-
-
-### Community Activity
-
-During development of 0.7.0 Brooklyn moved to the Apache Software Foundation.
+Changes since 0.7.0-incubating:
-Many exciting projects are using Brooklyn. Notably:
+1. **Major:** Packages have been renamed so that everything is in the `org.apache.brooklyn`
+ namespace. This decision has not been taken lightly!
+
+ This **[migration guide](migrate-to-0.8.0.html)** will assist converting projects to
+ the new package structure.
+
+ We recognize that this will be very inconvenient for downstream projects,
+ and it breaks our policy to deprecate any incompatibility for at least one version,
+ but it was necessary as part of becoming a top-level Apache project.
+ Pre-built binaries will not be compatible and must be recompiled against this version.
-* [Clocker](), which creates and manages Docker cloud
- infrastructures.
+ We have invested significant effort in ensuring that persisted state will be unaffected.
-* The Brooklyn Cloud Foundry Bridge, which brings blueprints into the Cloud
- Foundry marketplace with the [Brooklyn Service
- Broker]()
- and manages those services with the Cloud Foundry CLI plugin.
+1. Some of the code deprecated in 0.7.0 has been deleted.
+ There are comparatively few newly deprecated items.
-* [SeaClouds](), an ongoing EU project for
- seamless adaptive multi-cloud management of service based applications.
+For changes in prior versions, please refer to the release notes for
+[0.7.0](/v/0.7.0-incubating/misc/release-notes.html).
http://mail-archives.apache.org/mod_mbox/brooklyn-commits/201509.mbox/%3Cf9a4de20a97742fa9a0ec85971e26244@git.apache.org%3E
There is no doubt that AngularJS – the self-proclaimed "superheroic JavaScript framework" – is gaining traction. I'll refer to it frequently as just "Angular" in this post. I've had the privilege of working on an enterprise web application with a large team (almost 10 developers, soon growing to over 20) using Angular for over half of a year now. What's even more interesting is that we started with a more traditional MVC/SPA approach using pure JavaScript and KnockoutJS before we switched over to using the power-packed combination of TypeScript and Angular. It's important to note that we added comprehensive testing using Jasmine, but overall the team agrees the combination of technologies has increased our quality and efficiency: we are seeing far fewer bugs and delivering features far more quickly. I can only assume other organizations are seeing positive results after adopting Angular. According to Google Trends the popularity of AngularJS (blue) compared to KnockoutJS (red) and "Single Page Applications" (yellow) is exploding. One of the first single-track AngularJS conferences, ng-conf, sold out hundreds of tickets in just a few minutes.

This post isn't intended to bash KnockoutJS or Ember or Backbone or any of the other popular frameworks that you may already be using and are familiar with. Instead, I'd like to focus on why I believe AngularJS is gaining so much momentum so quickly and is something anyone who works on web applications should take very seriously and at least learn more about to decide if it's the right tool to put in your box.

1. AngularJS Gives XAML Developers a Place to Go on the Web

I make this bullet a little "tongue-in-cheek" because the majority of developers using Angular probably haven't touched XAML with a 10 foot pole. That's OK; the reasons why XAML became popular in the Microsoft world through WPF, Silverlight, and now Windows Store app development are important to look at because they translate quite well to Angular. If you.
XAML makes it easy to lay out complex UIs that may change over time. XAML supports inheritance (properties defined as children of parents can pick up values set higher in the tree) and bubbles events similar to the HTML DOM.

Another interesting component of XAML is the support for data-binding. This allows there to exist a declared relationship between the presentation layer and your data without creating hard dependencies between components. The XAML layer understands there is a contract – i.e. "I expect a name to be published" – and the imperative code simply exposes a property without any knowledge of how it will be rendered. This enables any number of testing scenarios, decouples the UI from underlying logic in a way that allows your design to be volatile without having to refactor tons of code, and enables a truly parallel workflow between designers and developers. This may sound like lip-service but I've been on many projects and have seen it in action.

I recall two specific examples. One was a project with Microsoft that we had to finish in around 4 months. We estimated a solid 4 months of hands-on development, and a separate design team required about 4 months of design before all was said and done – they went from wireframes to comps to interactive mock-ups and motion study and other terms that make me thankful I can let the designers do that while I focus on code. Of course, if we followed the traditional, sequential approach, we would have missed our deadline and waited 8 months (4 months of design followed by 4 months of coding). XAML allowed us to work in parallel, by agreeing upon an interface for a screen – "These are the elements we'll expose." The developers worked on grabbing the data to make those properties available and wrote all of the tests around them, and the designers took the elements and manipulated, animated, and moved them around until they reached the desired design. It all came together brilliantly in the end.
The other real world example was a pilot program with a cable company. We were building a Silverlight-based version of their interactive guide. The only problem was that they didn't have the APIs ready yet. We were able to design the system based on a domain model that mapped what the user would experience – listings, times, etc. – then fill those domain objects with the APIs once they were defined and available. Again, it enabled a parallel workflow that greatly improved our efficiency and the flexibility of the design.

I see these same principles reflected in the Angular framework. It enables a separation of concerns that allows a true parallel workflow between various components including the markup for the UI itself and the underlying logic that fetches and processes data.

2. AngularJS Gets Rid of Ritual and Ceremony

Picture Credit: Piotr Siedlecki

Have you ever created a text property on a model that you want to bind to your UI? How is that done in various frameworks? In Angular, this will work without any issues and immediately reflect what you type in the span:

<input data-ng-model='synchronizeThis'/><span>{{synchronizeThis}}</span>

Of course you'll seldom have the luxury of building an app that simple, but it illustrates how easy and straightforward data-binding can be in the Angular world. There is very little ritual or ceremony involved with standing up a model that participates in data-binding. You don't have to derive from an existing object or explicitly declare your properties and dependencies – for the most part, you can just pass something you already have to Angular and it just works. That's very powerful. If you're curious how it works, Angular uses dirty tracking.
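The dirty-tracking idea can be sketched in a few lines. This is an illustrative digest loop in plain Python, not Angular's actual `$digest` implementation: each watcher compares the watched value against the last value it saw, fires its listener on change, and the loop repeats until a full pass finds nothing dirty.

```python
# Illustrative dirty-checking loop (NOT Angular's code): watchers are
# re-evaluated until a full pass produces no changes, so listeners that
# mutate the scope are themselves picked up on the next pass.

def digest(scope, watchers, max_passes=10):
    for _ in range(max_passes):
        dirty = False
        for w in watchers:
            new = w["get"](scope)
            if new != w.get("last"):
                w["last"] = new
                w["listener"](new, scope)
                dirty = True
        if not dirty:
            return
    raise RuntimeError("digest did not stabilize")

scope = {"synchronizeThis": "", "rendered": ""}
watchers = [{
    "get": lambda s: s["synchronizeThis"],
    "listener": lambda new, s: s.__setitem__("rendered", new),
}]

scope["synchronizeThis"] = "hello"   # simulate typing into the input
digest(scope, watchers)
print(scope["rendered"])             # hello
```

The bounded pass count mirrors the framework's guard against watchers that never settle; the key point is that plain objects participate in binding without deriving from anything.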
Although I understand some other frameworks have gotten better with this, moving away from our existing framework, where we had to explicitly map everything over to an interim object to data-bind, to Angular was like a breath of fresh air … things just started coming together more quickly and I felt like I was duplicating less code (who wants to define a contact table, then a contact domain object on the server, then a contact JSON object that then has to be passed to a contact client-side model just to, ah, display details about a contact?)

3. AngularJS Handles Dependencies

Dependency injection is something Angular does quite well. I'll admit I was skeptical we even needed something like that on the client, but I was used to the key scenario being the dynamic loading of modules. Oh, wait – what did you say? That's right, with libraries like RequireJS you can dynamically load JavaScript if and when you need it. Where dependency injection really shines, however, is two scenarios: testing and Single Page Applications.

For testing, Angular allows you to divide your app into logical modules that can have dependencies on each other but are initialized separately. This lets you take a very tactical approach to your tests by bringing in only the modules you are interested in. Then, because dependencies are injected, you can take an existing service like Angular's $HTTP service and swap it out with the $httpBackend mock for testing. This enables true unit testing that doesn't rely on services to be stood up or browser UI to render, while also embracing the ability to create end-to-end tests as well.

Single Page Applications use dynamic loading to present a very "native application" feel from a web-based app. People like to shout the SPA acronym like it's something new but we've been building those style apps from the days of Atlas and Ajax.
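The dependency handling described in this section can be sketched with a tiny injector. This is illustrative plain Python, not Angular's `$injector`; the `Injector` class and service names are invented: services are registered by name, created once and cached, and a test can swap in a mock (like replacing a real http backend) without touching the services that depend on it.

```python
# Minimal name-based injector (illustrative, NOT Angular's $injector):
# one shared instance per service, and tests can swap implementations.

class Injector:
    def __init__(self):
        self._factories = {}
        self._cache = {}

    def register(self, name, factory):
        self._factories[name] = factory
        self._cache.pop(name, None)   # re-registering invalidates the cache

    def get(self, name):
        if name not in self._cache:   # single shared instance per service
            self._cache[name] = self._factories[name](self)
        return self._cache[name]

injector = Injector()
injector.register("http", lambda inj: lambda url: "real:" + url)
injector.register("contacts", lambda inj: lambda: inj.get("http")("/contacts"))

print(injector.get("contacts")())     # real:/contacts

# A test swaps in a mock backend; the contacts service is untouched:
injector.register("http", lambda inj: lambda url: "mock:" + url)
print(injector.get("contacts")())     # mock:/contacts
```

The contacts service asks the injector for "http" by name rather than constructing it, which is exactly what makes the mock swap possible without rewiring anything else.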
It is ironic to think that Ajax today is really what drives SPA despite the fact that there is seldom any XML involved anymore as it is all JSON. What you'll find is these apps can grow quickly with lots of dependencies on various services and modules. Angular makes it easy to organize these and grab them as needed without worrying about things like, "What namespace does it live in?" or "Did I already spin up an instance?" Instead, you just tell Angular what you need and Angular goes and gets it for you and manages the lifetime of the objects for you (so, for example, you're not running around with 100 copies of the same simple service that fetches that contact information).

4. AngularJS Allows Developers to Express UI Declaratively and Reduce Side Effects

In other words, you may have a lot of behaviors and animations that are wired up "behind the scenes" so it's not apparent from looking at the form tags that any validation or transitions are taking place. By declaring your UI and placing markup directly in HTML, you keep the presentation logic in one place and separated from the imperative logic. Once you understand the extended markup that Angular provides, code snippets like the one above make it clear where data is being bound and what it is being bound to. The addition of tools like directives and filters makes it even more clear what the intent of the UI is, but also how the information is being shaped because the shaping is done right there in the markup rather than in some isolated code.

Maintaining large systems – whether large software projects or mid-sized projects with large teams – is about reducing side effects. A side effect is when you change something with unexpected or even catastrophic results. If your jQuery depends on an id to latch onto an element and a designer changes it, you lose that binding. If you are explicitly populating options in a dropdown and the designer (or the customer, or you) decides to switch to a third party component, the code breaks.
A declarative UI reduces these side effects by declaring the bindings at the source, removing the need for hidden code that glues the behaviors to the UI, and allowing data-binding to decouple the dependency on the idea (i.e. "a list") from the presentation of the idea (i.e. a dropdown vs. a bulleted list).

5. AngularJS Embraces 'DD … Er, Testing

It doesn't matter if you embrace Test-Driven Development, Behavior-Driven Development, or any of the driven-development methodologies; Angular embraces this approach to building your application. I don't want to use this post to get into all of the advantages and reasons why you should test (I'm actually amazed that in 2013 people still question the value) but I've recently taken far more of a traditional "test-first" approach and it's helped. I believe that on our project, the introduction of Jasmine and the tests we included were responsible for reducing defects by up to 4x. Maybe it's less (or it could be more) but there was a significant drop-off. This isn't just because of Angular – it's a combination of the requirements, good acceptance criteria, understanding how to write tests correctly and then having the framework to run them – but it certainly was easier to build those tests.

(Photo credit: George Hodan)

If you want to see what this looks like, take a look at my 6502 emulator and then browse the source code. Aside from some initial plumbing, the app was written with a pure test-first approach. That means when I want to add an op code, I write tests for the op code, then I turn around and implement it. When I want to extend the compiler, I write a test for the desired outcome of compilation that fails, then I refactor the compiler to ensure the test passes. That approach saved me time and served both to change the way I structured and thought about the application, and also to document it – you can look at the specs yourself and understand what the code is supposed to do.
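The test-first loop described for the emulator can be sketched framework-agnostically. This is an invented, simplified example (a single hypothetical LDA-immediate op code, not the emulator's real code): the assertion is written as the specification first, and the implementation exists only to satisfy it.

```python
# Framework-agnostic sketch of the test-first loop described above:
# specify the expected behaviour of an op code, then implement just
# enough of the CPU step function to make the spec pass.

def make_cpu():
    return {"a": 0, "pc": 0}

def step(cpu, opcode, operand=None):
    # LDA immediate: load the accumulator with the operand
    # (hypothetical one-opcode subset for illustration).
    if opcode == "LDA":
        cpu["a"] = operand
    cpu["pc"] += 1
    return cpu

# The "spec", written before the implementation existed:
cpu = step(make_cpu(), "LDA", 0x2A)
assert cpu["a"] == 0x2A and cpu["pc"] == 1
print("LDA spec passes")
```

Each new op code repeats the cycle: a failing assertion first, then the smallest change to `step` that makes it pass, so the specs double as documentation of the instruction set.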
The ability to mock dependencies and inject them in Angular is very important and, as you can see from the example, you can test everything from UI behaviors down to your business logic.

With Angular, however, it was straightforward to break down the various actions into their own services and sub-controllers that developers could independently test and code without crashing into each other as often. Obviously for larger projects, this is key. It's not just about the technology from the perspective of how it enables something on the client, but actually how it enables a workflow and process that empowers your company to scale the team.

Although I haven't yet seen a savvy environment where the developers share a "design contract" with the UI/UX team, I doubt it's far off – essentially the teams agree on the elements that will be displayed, then design goes and lays it out however they want while development wires in the $scope with their controllers and other logic, and the two pieces just come together in the end. That's how we did it with XAML and there is nothing preventing you from doing the same with Angular. If you're a Microsoft developer and have worked with Blend … wouldn't it be cool to see an IDE that understands Angular and could provide the UI to set up bindings and design-time data? The ability is there, it just needs to be built, and with the popularity I'm seeing I doubt that will take long.

8. AngularJS Gives Developers Controls

One of the most common complaints I heard about moving to MVC was "what do we do with all of those controls?" The early perception was that controls wouldn't work in the non-ASP.NET space, but web developers who use other platforms know that's just not the case. There are a variety of ways to embrace reusable code on the web, from the concept of jQuery plugins to third-party control vendors like one of my favorites, KendoUI.
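The mocking and injection mentioned at the start of this section rest on a simple idea: a registry that resolves named dependencies and caches singletons, so a test can swap a real service for a fake before anything resolves it. The sketch below illustrates that idea only; it is not Angular's actual injector, and all names are hypothetical.

```javascript
// Minimal sketch of Angular-style dependency injection: factories are
// registered by name, resolved on demand, and cached as singletons.
// Illustration only -- not Angular's real injector.
function Injector() {
  this.factories = {};
  this.cache = {};
}

Injector.prototype.register = function (name, factory) {
  this.factories[name] = factory;
  delete this.cache[name]; // re-registering (e.g. a mock) resets the cache
};

Injector.prototype.get = function (name) {
  if (!(name in this.cache)) {
    if (!(name in this.factories)) {
      throw new Error('Unknown dependency: ' + name);
    }
    this.cache[name] = this.factories[name]();
  }
  return this.cache[name];
};

var injector = new Injector();
injector.register('contactService', function () {
  return { fetch: function () { return 'real network call'; } };
});

// Production code resolves the same cached instance everywhere:
console.log(injector.get('contactService') === injector.get('contactService')); // true

// A unit test swaps in a mock without touching the consuming code:
injector.register('contactService', function () {
  return { fetch: function () { return 'canned test data'; } };
});
console.log(injector.get('contactService').fetch()); // 'canned test data'
```

Because consumers only ever ask for a name, neither the production code nor the test cares which implementation is behind it.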
Angular enables a new scenario known as a "directive" that allows you to create new HTML elements and attributes. In the earlier example, the directive for "data-ng-model" allowed data-binding to take place. In my emulator, I use a directive to create two new tags: a "console" tag that writes the console messages and a "display" tag that uses SVG to render the pixels for the emulator (OK, by this time, if you've checked it out, I realize it's more like a simulator). This gives developers their controls – and more importantly, control over the controls. Our project has evolved with literally dozens of directives, and they all participate in the previous points:

- Directives are testable
- Directives can be worked on in parallel
- Directives enable a declarative way to extend the UI, rather than using code to wire up new constructs
- Directives reduce ritual and ceremony
- Directives participate in dependency injection

Remember how I mentioned the huge grid that is central to the project? We happen to use a lot of grids (as does almost every enterprise web application ever written). We use the KendoUI variant, and there are several steps you must take to initialize the grid. For our purposes, many of the configuration options are consistent across grids, so why make developers type all of the code? Instead, we enable them to drop a new element (directive), tag it with a few attributes (directives), and they are up and running.

9. AngularJS Helps Developers Manage State

I hesitate to add this point because savvy and experienced web developers understand the concept of what HTTP is and how to manage their application state. It's the "illusion" of state perpetuated by ASP.NET that confuses developers when they shift to MVC. I once read on a rather popular forum a self-proclaimed architect declare that MVC was an inferior approach to web design because he had to "build his own state management." What?
That just demonstrates a complete lack of understanding of how the web works. If you rely on a 15K view state for your application to work, you're doing it wrong. I'm referring more to client state and how you manage properties, permissions, and other common cross-cutting concerns across your app in the browser. Angular not only handles dependency injection, but it also manages the lifetime of your components for you. That means you can approach code in a very different way. Here's a quick example to explain what I mean:

One of the portions of the application involved a complex search. It is a traditional pattern: enter your search criteria, click "search" and see a grid with the results, then click on a row to see details. The initial implementation involved two pages: first, a detailed criteria page, then a grid page with a pane that would slide in from the right to reveal the details of the currently selected row. Later in the project, this was refactored to a dialog for the search criteria that would overlay the grid itself, then a separate full-screen page for the details.

In a traditional web application this would involve rewriting a bit of logic. I'd likely have some calls that would get detail information and expect to pass it on the same page to a panel for the detail, then suddenly have to refactor that to pass a detail id to a separate page and have that page make the call, etc. If you've developed for the web for any amount of time, you've had to suffer through some rewrites that felt like they were a bit much for just moving things around. There are multiple pieces of "state" to manage, including the selection criteria and the identifier for the detailed record being shown.

In Angular, this was a breeze. I created a controller for the search dialog, a controller for the grid, and a controller for the detail page. A parent controller kept track of the search criteria and current detail.
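The parent/child controller arrangement just described leans on Angular 1.x's prototypal scope inheritance. Here is a self-contained sketch of that idea using plain Object.create; the controller and property names are hypothetical, and this mimics the mechanism rather than using Angular itself.

```javascript
// Angular 1.x child scopes inherit prototypally from their parent scope.
// This sketch mimics that with Object.create: a parent "controller" owns
// the shared search state, and child scopes for the dialog, grid, and
// detail view all see it, no matter which page or dialog they live in.
var parentScope = {
  // Keep shared state behind an object (the "always have a dot" rule) so
  // writes from a child scope mutate the shared model instead of creating
  // a shadowing property on the child.
  model: { criteria: { name: 'Smith' }, selectedId: null }
};

var criteriaScope = Object.create(parentScope); // search dialog
var gridScope = Object.create(parentScope);     // results grid
var detailScope = Object.create(parentScope);   // detail page

criteriaScope.model.criteria.name = 'Jones'; // user edits the criteria
gridScope.model.selectedId = 42;             // user clicks a row

// The detail "page" reads the same shared state, so reorganizing the
// markup (slide-in pane vs. full page) needs no changes to this logic.
console.log(detailScope.model.criteria.name); // 'Jones'
console.log(detailScope.model.selectedId);    // 42
```

Note the "dot" design choice: writing `gridScope.selectedId = 42` directly would create a new property on the child scope and hide the parent's, which is a classic Angular scope pitfall.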
This meant that switching from one approach to the other really meant just reorganizing my markup. I moved the details to a new page, switched the criteria to a dialog, and the only real code I had to write was a new function to invoke the dialog when requested. All of the other logic – fetching and filtering the data and displaying it – remained the same. It was a fast refactoring. This is because my controllers weren't concerned with how the pages were organized or flowed – they simply focused on obtaining the information and exposing it through the scope. The organization was a concern of routing, and we used Angular's routing mechanisms to "reroute" to the new approach while preserving the same controllers and logic behind the UI. Even the markup for the search criteria remained the same – it just changed from a template that was used as a full page to a template that was used within a dialog. Of course, this type of refactoring was possible due to the fact that the application is a hybrid Single Page Application (SPA).

10. AngularJS Supports Single Page Applications

In case you missed it, this point continues the last one. Single Page Applications can provide an experience that feels almost like a native app in the web. By rendering on the client, they cut down load on the server as well as reduce network traffic – instead of sending a full page of markup, you can send a payload of data and turn it into markup at the client.

In our experience, large apps make more sense to build as hybrid SPA apps. By hybrid I mean that instead of treating the entire application as a single page application, you divide it into logical units of work or paths ("activities") through the system and implement each of those as a SPA. You end up with certain areas that result in a full page refresh, but the key interactions take place in a series of different SPA modules. For example, administration might be one "mini" SPA app while configuration is another.
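Client-side routing is what stitches such a "mini-SPA" area together. Stripped to its core, it is a URL-to-template map with parameter extraction; in Angular 1.x you would configure this with $routeProvider, but the toy resolver below (with made-up route names) shows the underlying idea.

```javascript
// Toy sketch of SPA routing: map URL paths to templates and controllers,
// the way Angular's $routeProvider does. Route names are hypothetical.
var routes = {
  '/search': { templateUrl: 'search.html', controller: 'SearchCtrl' },
  '/detail/:id': { templateUrl: 'detail.html', controller: 'DetailCtrl' }
};

function resolve(path) {
  // Exact match first, then parameterized routes like '/detail/:id'.
  if (routes[path]) return { route: routes[path], params: {} };
  for (var pattern in routes) {
    var keys = [];
    var regex = new RegExp('^' + pattern.replace(/:[^/]+/g, function (k) {
      keys.push(k.slice(1));   // remember the parameter name ('id')
      return '([^/]+)';        // and match one path segment in its place
    }) + '$');
    var m = path.match(regex);
    if (m) {
      var params = {};
      keys.forEach(function (k, i) { params[k] = m[i + 1]; });
      return { route: routes[pattern], params: params };
    }
  }
  return null; // unknown URL; a real router would show a 404 view
}

console.log(resolve('/detail/42').params.id);     // '42'
console.log(resolve('/search').route.controller); // 'SearchCtrl'
```

Because the controllers stay ignorant of the routing table, rearranging which URL shows which template (as in the refactoring story above) is a configuration change, not a logic change.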
Angular provides all of the infrastructure needed to stand up a functional SPA, from routing (being able to take a URL and map it to dynamically loaded pages) and templates to journaling (deep linking, and allowing users to use the built-in browser controls to navigate even though the pages are not refreshing). It is also quite adept at enabling you to share all of the bits and pieces across individual areas or "mini-SPA" sections to give the user the experience of being in a single application.

Conclusion

Whether you already knew about Angular and just wanted to see what my points were, or you're getting exposed to it for the first time, I have an easy "next step" for you to learn more. I recorded a video that lasts just over an hour covering all of the fundamentals you need to get started with writing Angular applications today. Although the video is on our "on demand training" site, I have a code you can use to get free access to both the video and the rest of the courses we have on WintellectNOW. Just head over to my Fundamentals of AngularJS video and use the code LIKNESS-13 to get your free access.
https://csharperimage.jeremylikness.com/2013/09/10-reasons-web-developers-should-learn.html
MEMCPY(3)                Linux Programmer's Manual                MEMCPY(3)

NAME
       memcpy - copy memory area

SYNOPSIS
       #include <string.h>

       void *memcpy(void *dest, const void *src, size_t n);

DESCRIPTION
       The memcpy() function copies n bytes from memory area src to memory
       area dest. The memory areas must not overlap. Use memmove(3) if the
       memory areas do overlap.

RETURN VALUE
       The memcpy() function returns a pointer to dest.

ATTRIBUTES
       ┌──────────┬───────────────┬─────────┐
       │Interface │ Attribute     │ Value   │
       ├──────────┼───────────────┼─────────┤
       │memcpy()  │ Thread safety │ MT-Safe │
       └──────────┴───────────────┴─────────┘

CONFORMING TO
       POSIX.1-2001, POSIX.1-2008, C89, C99, SVr4, 4.3BSD.

NOTES
       Failure to observe the requirement that the memory areas do not
       overlap has been the source of significant bugs. (POSIX and the C
       standards are explicit that employing memcpy() with overlapping
       areas produces undefined behavior.)

SEE ALSO
       bcopy(3), memccpy(3), memmove(3), mempcpy(3), strcpy(3),
       strncpy(3), wmemcpy(3)

COLOPHON
       This page is part of release 4.07 of the Linux man-pages project. A
       description of the project, information about reporting bugs, and
       the latest version of this page, can be found at
       https://www.kernel.org/doc/man-pages/.

                              2015-07-23                         MEMCPY(3)
http://man7.org/linux/man-pages/man3/memcpy.3.html
The Python file object provides various ways to read a text file. A popular way is to use the readlines() method, which returns a list of all the lines in the file. However, it's not suitable for reading a large text file because the whole file content will be loaded into memory.

Reading Large Text Files in Python

We can use the file object as an iterator. The iterator will return each line one by one, which can then be processed. This does not read the whole file into memory, so it's suitable for reading large files in Python. Here is a code snippet that reads a large file in Python by treating it as an iterator.

import resource
import os

file_name = "/Users/pankaj/abcdef.txt"

print(f'File Size is {os.stat(file_name).st_size / (1024 * 1024)} MB')

txt_file = open(file_name)
count = 0
for line in txt_file:
    # we can process the file line by line here; for simplicity I am
    # taking a count of lines
    count += 1
txt_file.close()

print(f'Number of Lines in the file is {count}')
print('Peak Memory Usage =', resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)
print('User Mode Time =', resource.getrusage(resource.RUSAGE_SELF).ru_utime)
print('System Mode Time =', resource.getrusage(resource.RUSAGE_SELF).ru_stime)

When we run this program, the output produced is:

File Size is 257.4920654296875 MB
Number of Lines in the file is 60000000
Peak Memory Usage = 5840896
User Mode Time = 11.46692
System Mode Time = 0.09655899999999999

- The os module is used to print the size of the file.
- The resource module is used to check the memory and CPU time usage of the program.

We can also use a with statement to open the file. In this case, we don't have to explicitly close the file object.

with open(file_name) as txt_file:
    for line in txt_file:
        # process the line
        pass

What if the Large File doesn't have lines?

The above code works great when the large file content is divided into many lines. But if there is a large amount of data in a single line, it will use a lot of memory.
In that case, we can read the file content into a fixed-size buffer and process it.

with open(file_name) as f:
    while True:
        data = f.read(1024)
        if not data:
            break
        print(data)

The above code reads the file data into a buffer of 1024 bytes and prints it to the console. When the whole file has been read, data becomes empty and the break statement terminates the while loop.

This method is also useful for reading binary files such as images, PDFs, Word documents, etc. Here is a simple code snippet to make a copy of a file.

with open(destination_file_name, 'w') as out_file:
    with open(source_file_name) as in_file:
        for line in in_file:
            out_file.write(line)

Reference: StackOverflow Question
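The buffered loop above generalizes nicely into a generator, so any caller can iterate over fixed-size blocks without repeating the while/break plumbing. This is an added sketch, not from the original article; it uses an in-memory file so it is self-contained.

```python
import io


def read_in_chunks(file_obj, chunk_size=1024):
    """Yield successive chunk_size blocks from file_obj until EOF."""
    while True:
        data = file_obj.read(chunk_size)
        if not data:
            return
        yield data


# Usage with an in-memory file standing in for a large file on disk:
f = io.StringIO("a" * 2500)
sizes = [len(chunk) for chunk in read_in_chunks(f, 1024)]
print(sizes)  # [1024, 1024, 452]
```

Opened in binary mode ('rb'), the same generator works unchanged for images and other binary files.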
https://www.journaldev.com/32059/read-large-text-files-in-python
Closed Bug 568249 Opened 12 years ago Closed 12 years ago

Support geolocation on Android

Categories (Core Graveyard :: Widget: Android, defect)
Tracking (Not tracked)
People (Reporter: mwu, Assigned: mwu)
References
Details
Attachments (1 file, 1 obsolete file)

No description provided.

Comment on attachment 447561 [details] [diff] [review]
Add Geolocation provider for Android

This is great!

>+#include "AndroidLocationProvider.h"
>+
>+using namespace mozilla;

Not sure why we need this. If we do, fine. If not, lets remove it.

Attachment #447561 - Flags: review?(dougt) → review+

(In reply to comment #2)
> >+using namespace mozilla;
> Not sure why we need this. If we do, fine. If not, lets remove it.

It's used so we don't have to do mozilla:: when using AndroidBridge. I think we could come up with other ways but this is the simplest way right now AFAICT.

Attachment #447561 - Attachment is obsolete: true
Status: NEW → RESOLVED
Closed: 12 years ago
Resolution: --- → FIXED
Product: Core → Core Graveyard
https://bugzilla.mozilla.org/show_bug.cgi?id=568249
Timeline

11/29/2013:

- 19:13 Ticket #21534 (Admin List Widgets Need To Be Paginated) closed by - worksforme: Actually, you can browse to select other options (click the magnifying …
- 16:57 Changeset [9af7e18] stable/1.7.x stable/1.8.x by - Fixed an unescisarily gendered pronoun in a docstring
- 16:00 Ticket #21535 (Password hash iterations not updating.) created by - If you follow the steps in the …
- 14:51 Ticket #21534 (Admin List Widgets Need To Be Paginated) created by - I connected the Django ORM to an existing postgresDB (which we used to …
- 12:31 Ticket #20867 (Allow Form.clean() to target specific fields when raising ValidationError.) closed by - fixed: In f563c339ca2eed81706ab17726c79a6f00d7c553: […]
- 12:00 Changeset [b72b85af] stable/1.7.x stable/1.8.x by - Removed Form._errors from the docs in favor of the add_error API.
- 12:00 Changeset [f563c33] stable/1.7.x stable/1.8.x by - Fixed #20867 -- Added the Form.add_error() method. Refs #20199 …
- 11:57 Ticket #21527 (Application order effects template loading/overriding) closed by - invalid: If you …
- 10:57 Ticket #21533 (Latest Irish (ga) Translations patch from Transiflex for Django-core core) closed by - invalid: Hi, Django's translations are done on transifex. See …
- 10:53 Ticket #21533 (Latest Irish (ga) Translations patch from Transiflex for Django-core core) created by - Hi, I am a new coordinator for the Django Irish (GA) Translations …
- 10:48 Ticket #21532 (Django URLValidator fails on some valid URLs) closed by - duplicate: This appears to be a duplicate of #20003.
- 10:39 Ticket #21532 (Django URLValidator fails on some valid URLs) created by - This raises a ValidationError, even though the URL is valid: […] …
- 09:20 Ticket #21459 (Allow easy reload of appcache.) closed by - wontfix: See #3591 for more issues into that direction; supporting reload() …
- 08:30 Ticket #21531 (Django 1.6 forms.SlugField validation error) closed by - needsinfo: I just tested against, 1.6, stable/1.6.x and master; can't reproduce …
- 07:48 Ticket #21531 (Django 1.6 forms.SlugField validation error) created by - Hi, I have a problem with SlugField. In my model I have this …
- 07:35 Ticket #21530 (urls.py AttributeError: 'RegexURLPattern' object has no attribute ...) created by - The urls.py file does not support the ability of supplying one URL. …
- 07:01 Ticket #21380 (There is no easy way to set permission to directories in collecting ...) closed by - fixed: In 7e2d61a9724644d6d1c7ce9361d9fd5be3e2ab86: […]
- 07:01 Changeset [7e2d61a9] stable/1.7.x stable/1.8.x by - Fixed #21380 -- Added a way to set different permission for static …
- 06:41 Ticket #21529 (url tag urlencodes arguments irreversibly) created by - Somewhere between 1.4 and 1.6 the url tag started urlencoding …
- 04:33 Ticket #21528 (improve Django Doc with an example for formfield_for_foreignkey ...) created by - Please, Could you add an example for formfield_for_foreignkey ( …
- 00:35 Ticket #21527 (Application order effects template loading/overriding) created by - I am not sure if it was mentioned in the documentation, but it is …

11/28/2013:

- 20:42 Changeset [42ac1380] stable/1.7.x stable/1.8.x by - Fixed a deprecation warning introduced by 96dd48c83f.
- 18:22 Ticket #21522 (Add rendering decorators) closed by - wontfix: Nope. The Golden rule of decorators -- they shouldn't modify the …
- 18:14 Ticket #21521 (Provide a boiler plate free ./manage.py startapp command) closed by - wontfix: I'm not sure I see the value here. Firstly, there's a problem with …
- 17:58 Ticket #21520 (Ship PyMysql as a defaut MySQL driver) closed by - wontfix: We're definitely not going to *ship* PyMySQL with Django. Python's …
- 15:38 Ticket #21474 (ModelForm resets field when field not in data) closed by - needsinfo: Feel free to revive the discussion on the mailing list if you'd like …
- 14:04 Changeset [936dbaf1] stable/1.7.x stable/1.8.x by - Merge pull request #2005 from loic/ticket21169 Use 'update_fields' in …
- 14:01 Changeset [91fce67] stable/1.7.x stable/1.8.x by - Use 'update_fields' in RelatedManager.clear() when bulk=False. Thanks …
- 11:59 Ticket #21526 (register = template.Library(). Is the instance name just a convention?) closed by - invalid: Hi, The documentation is correct here. Django's template code …
- 11:54 Ticket #21526 (register = template.Library(). Is the instance name just a convention?) created by - …
- 11:51 Ticket #21525 (built in manage.py runserver waits for all content to download before ...) created by - Hi, I am not sure if this is a bug or not or if it is a feature that …
- 11:34 Changeset [14ddc1b5] stable/1.6.x by - [1.6.x] Fixed #21496 -- Fixed crash when GeometryField uses TextInput …
- 11:29 Ticket #21496 (Django 1.6 django.contrib.gis GeometryField crashes on alternate widget) closed by - fixed: In 34b8a385588a051814f77481c875047c8bd23b37: […]
- 11:29 Changeset [34b8a38] stable/1.7.x stable/1.8.x by - Fixed #21496 -- Fixed crash when GeometryField uses TextInput Thanks …
- 11:09 Ticket #21524 (Annotation and .extra with group function breaks at database level) created by - This is somewhat related to #14657 and #13274 For some awkward reason …
- 10:41 Changeset [c8b637d8] stable/1.7.x stable/1.8.x by - All the E125 errors hvae been eradicated.
- 09:28 Ticket #21523 (Models DateField to_python method no longer supports mock dates.) created by - In between 1.4 and 1.5 a change was made to …
- 08:10 Ticket #21522 (Add rendering decorators) created by - A lot of frameworks now allow you to do : […] We could apply that …
- 07:53 Ticket #21521 (Provide a boiler plate free ./manage.py startapp command) created by - Django still requires a lot of boiler plate to start with. Running …
- 07:50 Changeset [7477a4f] stable/1.7.x stable/1.8.x by - Fixed E125 pep8 warnings
- 07:49 Ticket #21517 (Add unit test for non-autoincrement primary key with value 0 (zero)) closed by - fixed: In d1df395f3ae768e495a105db2f85352c44ba1c28: […]
- 07:48 Changeset [d1df395f] stable/1.7.x stable/1.8.x by - Fixed #21517 -- Added unit test for non-autoincrement primary key with …
- 07:34 Ticket #21520 (Ship PyMysql as a defaut MySQL driver) created by - Ship PyMsql () with Django, and make …
- 03:52 Ticket #12446 (multipart/mixed in multipart/form-data) closed by - wontfix: Given the previous note regarding the HTML 5 spec, and the lack of any …

11/27/2013:

- 18:00 Changeset [0e98050] stable/1.5.x by - [1.5.x] Fixed #21515 -- Corrected example of template.Context in …
- 17:59 Changeset [02f9e90] stable/1.6.x by - [1.6.x] Fixed #21515 -- Corrected example of template.Context in …
- 17:58 Ticket #21515 (template.Context Documentation: c.pop() and c.push() return nothing in ...) closed by - fixed: In 077af42139db84d88f293ab5eadc989a9169dce1: […]
- 17:57 Changeset [077af421] stable/1.8.x by - Added a warning regarding risks in serving user uploaded media. …
- 15:27 Ticket #21519 (django.db.models: get_apps() includes abstract models, get_models() ...) created by - (I'm uncertain how much of this report is bug vs. new feature; I went …
- 13:44 Ticket #21518 (override_settings(ROOT_URLCONF) doesn't clear resolver cache) created by - If override_settings is used to set different URL mappings on …
- 12:51 Changeset [041a076] stable/1.7.x stable/1.8.x by - Merge pull request #2000 from loic/docs Fixed typo in release notes.
- 12:37 Changeset [ecd8556] stable/1.7.x stable/1.8.x by - Fixed typo in release notes.
- 12:15 Ticket #21169 (Deletion in custom reverse managers) closed by - fixed: In 17c3997f6828e88e4646071a8187c1318b65597d: […]
- 11:45 Changeset [01e8ac47] stable.8.x by - Include deferred SQL in sqlmigrate output
- 09:29 Ticket #21438 (Migrations do not detect many to many fields) closed by - fixed: In 5e63977c0e5d0dff755817b86364a5c8882deaf7: […]
- 09:28 Changeset [5e63977] stable/1.7.x stable/1.8.x by - Fixed #21438: makemigrations now detects ManyToManyFields
- 09:20 Changeset [19b34fbe] stable/1.7.x stable/1.8.x by - Field.deconstruct() howto docs
- 08:54 Ticket #21517 (Add unit test for non-autoincrement primary key with value 0 (zero)) created by - This ticket is related with ticket #17713. MySQL has the capability …
- 07:32 Changeset [eece3c2] stable.8.x by - Fix squashed migration loading ordering issue
- 05:51 DjangoJobs edited by - Added job adverts for FARM Digital. (diff)

11/26/2013:

- 14:33 Ticket #21499 (Migrations won't work if field signature changes) closed by - fixed: In 455e2896b122a331057483634bea9c8074bdc97d: […]
- 14:33 Changeset [0c46ca8] stable.8.x by - Fixed a typo in the documentation
- 13:42 Changeset [655b8bb1] stable/1.6.x by - [1.6.x] Fixed #21448 -- Fixed test client logout with cookie-based …
- 13:42 Ticket #21448 (Test Client logout does not work with cookie-based sessions) closed by - fixed: In 384816fccb6dfc7fc40f8059811341ba3572d9ff: […]
- 13:41 Changeset [384816f] stable/1.7.x stable/1.8.x by - Fixed #21448 -- Fixed test client logout with cookie-based sessions …
- 12:18 Ticket #21516 (Update the import path for the FormSet classes and factories in ...) created by - Since #21489, FormSet classes and factories are exposed directly on …
- 11:04 Changeset [6cd5c67] stable/1.6.x by - [1.6.x] Fixed #21355 -- try importing _imaging from PIL namespace …
- 11:01 Ticket #21355 (Try importing _imaging from PIL namespace first.) closed by - fixed: In 5725236c3ee4323203aa681cb54d1ff1d7f376df: […]
- 10:51 Changeset [5725236] stable/1.7.x stable/1.8.x by - Fixed #21355 -- try importing _imaging from PIL namespace first.
- 10:40 Ticket #21515 (template.Context Documentation: c.pop() and c.push() return nothing in ...) created by - …
- 07:51 Ticket #21514 (Session expiry dates should be in an ISO string instead of datetime) created by - The documentation for Django 1.6 says …
- 07:08 Changeset [9d6c48a] stable/1.7.x stable/1.8.x by - Use str.isupper() to test if a string is uppercased.
- 05:51 Ticket #21513 (method_decorator doesn't work with argumented Class-Based Decorator) created by - was a similar example of …
- 05:11 Ticket #21489 (Expose FormSet on the django.forms package.) closed by - fixed: Fixed in 1c7a83ee8e3da431d9d21dae42da8f1f89973f7c.
- 04:00 DevelopersForHire edited by - (diff)
- 03:59 DevelopersForHire edited by - (diff)
- 03:20 Ticket #21512 (Test of model_fields and model_forms gives incomplete information ...) closed by - fixed: In 16d73d7416a7902703ee8022f093667f7ac9ef5b: […]
- 03:18 Changeset [16d73d7] stable/1.7.x stable/1.8.x by - Fixed #21512 -- Added more complete information about Pillow and PIL …

Note: See TracTimeline for information about the timeline view.
CC-MAIN-2015-27
refinedweb
1,684
58.38
Opened 10 years ago Closed 9 years ago Last modified 7 years ago #9751 closed (fixed) project_directory calculated incorrectly when "settings" is a directory (breaks 'startapp') Description When a Django project's settings is contained in directory-style module instead of the usual " settings.py" file-based module, project_directory (as returned from setup_environ) is calculated incorrectly as " settings", which results in--at least--' startapp' creating new apps inside the settings directory. Whilst the use of a settings directory is non-standard, it helps when splitting larger or more complicated configurations, such as when settings change depending on the hostname, etc. Indeed, this would be completely transparent to Django if it wasn't parsing the __file__ attribute. To reproduce: % django-admin.py startproject myproject % cd myproject % mkdir settings % mv settings.py settings/__init__.py % ./manage.py startapp myapp % tree |-- __init__.py |-- manage.py |-- settings | |-- __init__.py | `-- myapp # <---- | |-- __init__.py | |-- models.py | `-- views.py `-- urls.py Patch attached. Attachments (4) Change History (20) Changed 9 years ago by Changed 9 years ago by 0001-Fix-project_name-location-when-settings-is-a-module.patch comment:1 Changed 9 years ago by Rebasing patch against HEAD. comment:2 Changed 9 years ago by comment:3 Changed 9 years ago by comment:4 Changed 9 years ago by 06-fix-project_name-location-when-settings-is-a-module.patch does fix the bug. It also breaks 37 of the admin_scripts regression tests. Somehow it causes the spawned admin commands to die with a "TypeError: relative imports require the 'package' argument" traceback. comment:5 Changed 9 years ago by Looks like the conditional the patch added (if settings_name == "init") is being excecuted during the tests, even though I don't think they are using a settings directory. Not exactly sure why that is. 
Changed 9 years ago by comment:6 Changed 9 years ago by My patch fixes a few problems: get_commands() was importing project settings each time it's called. That's not necessary. get_commands() was actually passing the project package instead of settings module/package to setup_environ() . - In setup_environ() , check to see if settings module is a package or module by checking to see if its file contains init.py or not. Not sure if this works for Jython as I don't know how Jython filenames work. comment:7 Changed 9 years ago by comment:8 Changed 9 years ago by Trying to write tests for this. Changed 9 years ago by Basic patch, removed the sys.modules import, and it seems to work without all that split() stuff, but if there's a reason that was there, feel free to add it back. comment:9 Changed 9 years ago by All tests pass, and the regression test (first at the top of that patch) fails on current trunk and passes with the patch. comment:10 Changed 9 years ago by comment:11 Changed 9 years ago by The reason for using sys.modules lookup instead of import_module() was to prevent the setting module from unnecessarily being imported again, since it would have to have been imported by this point. comment:12 Changed 9 years ago by import doesn't redo an imoprt if it's already in sys.modules. comment:13 Changed 9 years ago by comment:14 Changed 9 years ago by comment:15 Changed 9 years ago by comment:16 Changed 7 years ago by Milestone 1.1 deleted Rebasing patch against HEAD
https://code.djangoproject.com/ticket/9751
CC-MAIN-2018-26
refinedweb
572
55.95
Hi all, let me inform you that our support for Symfony2 framework and Twig templates (donated by Sebastian Hörl) are now part of standard NetBeans PHP distributions. More details about these features can be found in our previous blog post. That's all for today, as always, please test it and report all the issues or enhancements you find in NetBeans BugZilla (component php, subcomponent Symfony or Twig). Yeah! I like Twig! May be you could also integrate my code templates for the Symfony2 ? At least those related to Twig? I tried to install it, it has been installed without problems, but the code templates in general doesn't seem to work in editor, I added a comment in this bug. Also some nice default Symfony2 file templates would be cool, like Controller, Entity, Repository and when creating file the namespace already set. At least I think it would be cool.
https://blogs.oracle.com/netbeansphp/symfony2-and-twig-part-of-netbeans
CC-MAIN-2020-05
refinedweb
151
71.55
Threads are typically created when you want a program to do two things at once. For example, assume you are calculating pi (3.141592653589...) to the 10 billionth place. The processor will happily begin computing this, but nothing will write to the user interface while it is working. Because computing pi to the 10 billionth place will take a few million years, you might like the processor to provide an update as it goes. In addition, you might want to provide a Stop button so that the user can cancel the operation at any time. To allow the program to handle the click on the Stop button, you will need a second thread of execution. Another common place to use threading is when you must wait for an event, such as user input, a read from a file, or receipt of data over the network. Freeing the processor to turn its attention to another task while you wait (such as computing another 10,000 values of pi) is a good idea, and it makes your program appear to run more quickly. On the flip side, note that in some circumstances, threading can actually slow you down. Assume that in addition to calculating pi, you also want to calculate the Fibonnacci series (1,1,2,3,5,8,13,21...). If you have a multiprocessor machine, this will run faster if each computation is in its own thread. If you have a single-processor machine (as most users do), computing these values in multiple threads will certainly run slower than computing one and then the other in a single thread, because the processor must switch back and forth between the two threads. This incurs some overhead. The simplest way to create a thread is to create a new instance of the Thread class. The Thread constructor takes a single argument: a delegate type. The CLR provides the ThreadStart delegate class specifically for this purpose, which points to a method you designate. This allows you to construct a thread and to say to it, "When you start, run this method." 
The ThreadStart delegate declaration is: public delegate void ThreadStart( ); As you can see, the method you attach to this delegate must take no parameters and must return void. Thus, you might create a new thread like this: Thread myThread = new Thread( new ThreadStart(myFunc) ); myFunc must be a method that takes no parameters and returns void. For example, you might create two worker threads, one that counts up from zero: public void Incrementer( ) { for (int i =0;i<10;i++) { Console.WriteLine("Incrementer: {0}", i); } } and one that counts down from 10: public void Decrementer( ) { for (int i = 10;i>=0;i--) { Console.WriteLine("Decrementer: {0}", i); } } To run these in threads, create two new threads, each initialized with a ThreadStart delegate. These in turn would be initialized to the respective member functions: Thread t1 = new Thread( new ThreadStart(Incrementer) ); Thread t2 = new Thread( new ThreadStart(Decrementer) ); Instantiating these threads does not start them running. To do so you must call the Start method on the Thread object itself: t1.Start( ); t2.Start( ); Example 20-1 is the full program and its output. You will need to add a using statement for System.Threading to make the compiler aware of the Thread class. Notice the output, where you can see the processor switching from t1 to t2. 
    namespace Programming_CSharp
    {
        using System;
        using System.Threading;

        class Tester
        {
            static void Main( )
            {
                // make an instance of this class
                Tester t = new Tester( );
                // run outside static Main
                t.DoTest( );
            }

            public void DoTest( )
            {
                // create a thread for the Incrementer
                // pass in a ThreadStart delegate
                // with the address of Incrementer
                Thread t1 = new Thread( new ThreadStart(Incrementer) );

                // create a thread for the Decrementer
                // pass in a ThreadStart delegate
                // with the address of Decrementer
                Thread t2 = new Thread( new ThreadStart(Decrementer) );

                // start the threads
                t1.Start( );
                t2.Start( );
            }

            // demo function, counts up to 1K
            public void Incrementer( )
            {
                for (int i = 0; i < 1000; i++)
                {
                    Console.WriteLine("Incrementer: {0}", i);
                }
            }

            // demo function, counts down from 1k
            public void Decrementer( )
            {
                for (int i = 1000; i >= 0; i--)
                {
                    Console.WriteLine("Decrementer: {0}", i);
                }
            }
        }
    }

Output:

    Incrementer: 102
    Incrementer: 103
    Incrementer: 104
    Incrementer: 105
    Incrementer: 106
    Decrementer: 1000
    Decrementer: 999
    Decrementer: 998
    Decrementer: 997

The processor allows the first thread to run long enough to count up to 106. Then, the second thread kicks in, counting down from 1000 for a while. Then the first thread is allowed to run again. When I run this with larger numbers, I notice that each thread is allowed to run for about 100 numbers before switching. The actual amount of time devoted to any given thread is handled by the thread scheduler and will depend on many factors, such as the processor speed, demands on the processor from other programs, and so forth.

When you tell a thread to stop processing and wait until a second thread completes its work, you are said to be joining the first thread to the second. It is as if you tied the tip of the first thread onto the tail of the second; hence "joining" them. To join thread 1 (t1) onto thread 2 (t2), write:

    t2.Join( );

If this statement is executed in a method in thread t1, t1 will halt and wait until t2 completes and exits. For example, we might ask the thread in which Main( ) executes to wait for all our other threads to end before it writes its concluding message. In this next code snippet, assume you've created a collection of threads named myThreads.
Iterate over the collection, joining the current thread to each thread in the collection in turn:

    foreach (Thread myThread in myThreads)
    {
        myThread.Join( );
    }
    Console.WriteLine("All my threads are done.");

The final message "All my threads are done" will not be printed until all the threads have ended. In a production environment, you might start up a series of threads to accomplish some task (e.g., printing, updating the display, etc.) and not want to continue the main thread of execution until the worker threads are completed.

At times, you want to suspend your thread for a short while. You might, for example, like your clock thread to suspend for about a second in between testing the system time. This lets you display the new time about once a second without devoting hundreds of millions of machine cycles to the effort. The Thread class offers a public static method, Sleep, for just this purpose. The method is overloaded; one version takes an int, the other a TimeSpan object. Each represents the amount of time you want the thread suspended for, expressed either in milliseconds as an int (e.g., 2000 = 2000 milliseconds or 2 seconds) or as a TimeSpan. Although TimeSpan objects can measure ticks (100 nanoseconds), the Sleep( ) method's granularity is in milliseconds (1,000,000 nanoseconds). To cause your thread to sleep for one second, you can invoke the static method Thread.Sleep, which suspends the thread in which it is invoked:

    Thread.Sleep(1000);

At times, you'll tell your thread to sleep for only one millisecond. You might do this to signal to the thread scheduler that you'd like your thread to yield to another thread, even if the thread scheduler might otherwise give your thread a bit more time.
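The join-all loop and the sleep-to-yield trick translate directly to Python; here is the same counter pair sketched with the standard threading module (illustrative, not the C# API — the output is recorded in a list rather than printed):

```python
import threading
import time

log = []   # record the interleaved output instead of printing it

def incrementer():
    for i in range(5):
        log.append("Incrementer: %d" % i)
        time.sleep(0.001)   # like Thread.Sleep(1): give the other thread a turn

def decrementer():
    for i in range(5, -1, -1):
        log.append("Decrementer: %d" % i)
        time.sleep(0.001)

my_threads = [threading.Thread(target=incrementer),
              threading.Thread(target=decrementer)]
for t in my_threads:
    t.start()
for t in my_threads:
    t.join()            # like myThread.Join(): wait for each thread to finish
print("All my threads are done.")
```

As in the C# version, the final message cannot appear until every joined thread has run to completion.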
If you modify Example 20-1 to add a Thread.Sleep(1) statement after each WriteLine( ), the output changes significantly:

    for (int i = 0; i < 1000; i++)
    {
        Console.WriteLine("Incrementer: {0}", i);
        Thread.Sleep(1);
    }

This small change is sufficient to give each thread an opportunity to run once the other thread prints one value. The output reflects this change:

    Incrementer: 0
    Incrementer: 1
    Decrementer: 1000
    Incrementer: 2
    Decrementer: 999
    Incrementer: 3
    Decrementer: 998
    Incrementer: 4
    Decrementer: 997
    Incrementer: 5
    Decrementer: 996
    Incrementer: 6
    Decrementer: 995

Typically, threads die after running their course. You can, however, ask a thread to kill itself by calling its Abort( ) method. This causes a ThreadAbortException exception to be thrown, which the thread can catch, and thus provides the thread with an opportunity to clean up any resources it might have allocated:

    catch (ThreadAbortException)
    {
        Console.WriteLine("[{0}] Aborted! Cleaning up...",
            Thread.CurrentThread.Name);
    }

The thread ought to treat the ThreadAbortException exception as a signal that it is time to exit, as quickly as possible. You don't so much kill a thread as politely request that it commit suicide.

You might wish to kill a thread in reaction to an event, such as the user pressing the Cancel button. The event handler for the Cancel button might be in thread t1, and the event it is canceling might be in thread t2. In your event handler, you can call Abort on t2:

    t2.Abort( );

An exception will be raised in t2's currently running method that t2 can catch. This gives t2 the opportunity to free its resources and then exit gracefully.

In Example 20-2, three threads are created and stored in an array of Thread objects. Before the threads are started, the IsBackground property is set to true. Each thread is then started and named (e.g., Thread1, Thread2, etc.).
A message is displayed indicating that the thread is started, and then the main thread sleeps for 50 milliseconds before starting up the next thread. After all three threads are started and another 50 milliseconds have passed, the second thread in the array (myThreads[1], named Thread2) is aborted by calling Abort( ). The main thread then joins all three of the running threads. The effect of this is that the main thread will not resume until all the other threads have completed. When they do complete, the main thread prints a message: All my threads are done. The complete source is displayed in Example 20-2.

    namespace Programming_CSharp
    {
        using System;
        using System.Threading;

        class Tester
        {
            static void Main( )
            {
                // make an instance of this class
                Tester t = new Tester( );
                // run outside static Main
                t.DoTest( );
            }

            public void DoTest( )
            {
                // create an array of unnamed threads
                Thread[] myThreads =
                {
                    new Thread( new ThreadStart(Decrementer) ),
                    new Thread( new ThreadStart(Incrementer) ),
                    new Thread( new ThreadStart(Incrementer) )
                };

                // start each thread
                int ctr = 1;
                foreach (Thread myThread in myThreads)
                {
                    myThread.IsBackground = true;
                    myThread.Start( );
                    myThread.Name = "Thread" + ctr.ToString( );
                    ctr++;
                    Console.WriteLine("Started thread {0}", myThread.Name);
                    Thread.Sleep(50);
                }

                // having started the threads
                // tell thread 1 to abort
                myThreads[1].Abort( );

                // wait for all threads to end before continuing
                foreach (Thread myThread in myThreads)
                {
                    myThread.Join( );
                }

                // after all threads end, print a message
                Console.WriteLine("All my threads are done.");
            }

            // demo function, counts down from 1k
            public void Decrementer( )
            {
                try
                {
                    for (int i = 1000; i >= 0; i--)
                    {
                        Console.WriteLine(
                            "Thread {0}. Decrementer: {1}",
                            Thread.CurrentThread.Name, i);
                        Thread.Sleep(1);
                    }
                }
                catch (ThreadAbortException)
                {
                    Console.WriteLine(
                        "Thread {0} aborted! Cleaning up...",
                        Thread.CurrentThread.Name);
                }
                finally
                {
                    Console.WriteLine(
                        "Thread {0} Exiting. ",
                        Thread.CurrentThread.Name);
                }
            }

            // demo function, counts up to 1K
            public void Incrementer( )
            {
                try
                {
                    for (int i = 0; i < 1000; i++)
                    {
                        Console.WriteLine(
                            "Thread {0}. Incrementer: {1}",
                            Thread.CurrentThread.Name, i);
                        Thread.Sleep(1);
                    }
                }
                catch (ThreadAbortException)
                {
                    Console.WriteLine(
                        "Thread {0} aborted! Cleaning up...",
                        Thread.CurrentThread.Name);
                }
                finally
                {
                    Console.WriteLine(
                        "Thread {0} Exiting. ",
                        Thread.CurrentThread.Name);
                }
            }
        }
    }

Output (excerpt):

    Started thread Thread1
    Thread Thread1. Decrementer: 1000
    Thread Thread1. Decrementer: 999
    Thread Thread1. Decrementer: 998
    Started thread Thread2
    Thread Thread1. Decrementer: 997
    Thread Thread2. Incrementer: 0
    Thread Thread1. Decrementer: 996
    Thread Thread2. Incrementer: 1
    Thread Thread1. Decrementer: 995
    Thread Thread2. Incrementer: 2
    Thread Thread1. Decrementer: 994
    Thread Thread2. Incrementer: 3
    Started thread Thread3
    Thread Thread1. Decrementer: 993
    Thread Thread2. Incrementer: 4
    Thread Thread2. Incrementer: 5
    Thread Thread1. Decrementer: 992
    Thread Thread2. Incrementer: 6
    Thread Thread1. Decrementer: 991
    Thread Thread3. Incrementer: 0
    Thread Thread2. Incrementer: 7
    Thread Thread1. Decrementer: 990
    Thread Thread3. Incrementer: 1
    Thread Thread2 aborted! Cleaning up...
    Thread Thread2 Exiting.
    Thread Thread1. Decrementer: 989
    Thread Thread3. Incrementer: 2
    Thread Thread1. Decrementer: 988
    Thread Thread3. Incrementer: 3
    Thread Thread1. Decrementer: 987
    Thread Thread3. Incrementer: 4
    Thread Thread1. Decrementer: 986
    Thread Thread3. Incrementer: 5
    // ...
    Thread Thread1. Decrementer: 1
    Thread Thread3. Incrementer: 997
    Thread Thread1. Decrementer: 0
    Thread Thread3. Incrementer: 998
    Thread Thread1 Exiting.
    Thread Thread3. Incrementer: 999
    Thread Thread3 Exiting.
    All my threads are done.

You see the first thread start and decrement from 1000 to 998. The second thread starts, and the two threads are interleaved for a while until the third thread starts.
After a short while, however, Thread2 reports that it has been aborted, and then it reports that it is exiting. The two remaining threads continue until they are done. They then exit naturally, and the main thread, which was joined on all three, resumes to print its exit message.
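For contrast, Python threads have no Abort( ); the closest idiom to Example 20-2's abort-and-clean-up behaviour is cooperative cancellation, where the worker checks a flag and raises its own exception so that catch/finally-style cleanup still runs. A sketch (names and the exception class are illustrative):

```python
import threading

class AbortRequested(Exception):
    """Raised inside the worker when cancellation is requested."""

events = []   # records what the worker did, like the console output above

def worker(abort_flag):
    try:
        for i in range(1_000_000):
            if abort_flag.is_set():
                raise AbortRequested()   # closest analog to ThreadAbortException
            # ... do one unit of work here ...
    except AbortRequested:
        events.append("aborted! Cleaning up...")
    finally:
        events.append("exiting")

flag = threading.Event()
flag.set()   # "press Cancel" before starting, so the run is deterministic
t = threading.Thread(target=worker, args=(flag,))
t.start()
t.join()
```

As in the C# version, the except block is the cleanup opportunity and the finally block always runs on the way out.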
http://etutorials.org/Programming/Programming+C.Sharp/Part+III+The+CLR+and+the+.NET+Framework/Chapter+20.+Threads+and+Synchronization/20.1+Threads/
Opened 8 years ago
Closed 5 years ago

#14580 closed Cleanup/optimization (fixed)

Clean up duplicate code in admin formset handling

Description

See: The attached patch still might need some tests, though the current test suite passes and should already cover all the cases.

Attachments (3)

Change History (20)

Changed 8 years ago by

comment:1 Changed 8 years ago by

Changed 8 years ago by

comment:2 Changed 8 years ago by

Hmm, I uploaded a new page which returns formsets and inlines instead of the admin_inlines. I am still not really happy with it, though, especially since I am running into this problem now:

    def get_formset_instances(self, request, obj, change):
        f, i = super(RecipeAdmin, self).get_formset_instances(request, obj, change)
        return [f[0]], [i[0]]

That's a naive example for dropping inlines, but it shows my problem… I have two inlines on the page and drop the second; this is fine until I send a POST request. The super call instantiates the form and can't find the management data (obviously, because it's not there) before I get a chance to drop it. I am beginning to think that we shouldn't return the instances but the form classes + inlines. (And maybe clean up the code by using a _construct_formsets function.)

Note: I am aware of the fact that I could just c&p the code from get_formset_instances into my admin and modify it as needed, but that's not really nice… Any ideas?

comment:3 Changed 8 years ago by

s/page/patch/g

comment:4 Changed 8 years ago by

comment:5 Changed 8 years ago by

comment:6 Changed 8 years ago by

comment:7 Changed 7 years ago by

Milestone 1.3 deleted

comment:8 Changed 7 years ago by

comment:9 Changed 7 years ago by

The fix for #8060 did touch some of this code, and would require minor updates to this patch, but unless I'm missing something I don't think it actually fixed the duplication this ticket was addressing.
It added a get_inline_instances method, because the inline instances are now permissions-dependent rather than static; but the actual code that creates formsets based on those inline instances is still duplicated, with minor differences, in several different places in add_view and change_view. I noticed that duplication while working on #8060 and I still think it'd be good to clean it up if we can, so I'd like to leave this ticket open until we do so.

comment:10 Changed 6 years ago by

Changed 5 years ago by

comment:11 Changed 5 years ago by

Adding a patch which is purely a code cleanup -- no change in behavior. I don't see any obvious benefit of making this new method a public API, but let me know.

comment:12 Changed 5 years ago by

I see one downside -- get_formsets calls get_inline_instances, but _create_formsets could work with a different set of instances. The cleanest way might be to pass inline_instances down to get_formsets (maybe as inline_instances=None and call get_inline_instances only if it's indeed None, for backwards compat).

comment:13 Changed 5 years ago by

Okay, my comment about the backwards compat was crap, we can't just add args to the signature of get_formsets :(. If you look at the docs: your current patch will break, since the arguments to zip will no longer have the same length! Essentially we'd need one method which returns the inline and the formset together... FWIW, even the current code is broken in that regard and the docs should drop the inline via get_inline_instances instead.

comment:14 Changed 5 years ago by

Thanks for the feedback. When you say "I see one downside", is that in reference to my comment about not making this a public API? I haven't taken the time to fully understand the issue you are describing, but it seems like it may at least be related to #20702. I can take a closer look at this tomorrow unless you are interested in working up a solution.
comment:15 Changed 5 years ago by

comment:16 Changed 5 years ago by

In 402b4a7a20a4f00fce0f01cdc3f5f97967fdb935:

Fixed #14580 -- Cleaned up duplicate code in admin formset handling. Thanks apollo13 for the report and review.

I don't think InlineAdminFormset is a public API, so I'd rather this method returned "normal" formset objs.
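The cleanup the ticket converged on — one private helper that builds every formset from the inline instances, so the admin views stop duplicating the loop and callers can safely zip formsets with inlines — can be sketched in plain Python. The class and method names below are stand-ins mirroring the discussion, not real Django internals:

```python
# Hypothetical sketch of the deduplication idea: a single helper returns the
# formsets together with the inline instances they came from, so both lists
# are guaranteed to have the same length when callers zip them.

class FakeInline:
    def __init__(self, name):
        self.name = name
    def get_formset(self, request, obj):
        return ("formset-for", self.name, obj)   # stand-in for a real formset

class FakeModelAdmin:
    def __init__(self, inlines):
        self.inlines = inlines
    def get_inline_instances(self, request):
        return list(self.inlines)        # permission checks would filter here
    def _create_formsets(self, request, obj):
        inline_instances = self.get_inline_instances(request)
        formsets = [inline.get_formset(request, obj) for inline in inline_instances]
        return formsets, inline_instances   # same length by construction

admin = FakeModelAdmin([FakeInline("a"), FakeInline("b")])
formsets, inlines = admin._create_formsets(request=None, obj="obj")
```

Returning both lists from one place is what prevents the mismatched-zip problem raised in comment:13.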
https://code.djangoproject.com/ticket/14580
txStatHat 0.2.0

Twisted wrapper for StatHat.com

A Twisted API wrapper for StatHat's EZ API. The usage is as simple as:

    from twisted.internet import reactor
    from twisted.internet.defer import inlineCallbacks

    from txstathat import txStatHat

    @inlineCallbacks
    def doSomeStats():
        sh = txStatHat('ezKeyOrEmail')
        yield sh.count('aCounter')  # Counts by 1 by default
        yield sh.count('anotherCounter', 42)
        yield sh.value('aValue', 0.42)

    reactor.callLater(1, doSomeStats)
    reactor.run()

The ezKeyOrEmail is your e-mail address in the beginning, but can be changed in the account settings to something more safe. There is no such thing as a password.

By default, errors are swallowed silently so disruptions at StatHat don't lead to disruptions in your services by accident. To get network exceptions as well as API error messages, set ignore_errors=False when instantiating txStatHat. You should only do so if you have really good reasons.

Please note: at the moment, StatHat.com does not report an error when an incorrect EZ API key is submitted. Therefore the above example will work without any effect even if you don't replace the API key. StatHat.com seems to have generally a similar attitude towards errors as txStatHat: they return an OK except if you use the API incorrectly (don't supply an API key, for example). The difference is that if ignore_errors is left at the default True, network problems accessing the API are ignored as well.

Depending on the availability of pyOpenSSL, txStatHat uses HTTPS for API calls if possible. While there isn't much real damage an attacker can do to you if (s)he hijacks your API key, I strongly suggest installing and using it.
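Under the hood an EZ-API call is just an HTTP POST. Here is a sketch of what such a call looks like on the wire — the endpoint and field names are assumptions based on StatHat's published EZ API, and the snippet only builds the request rather than sending it:

```python
# Sketch of the EZ API wire format (assumed: POST to api.stathat.com/ez with
# fields ezkey, stat, and either count or value). We construct the request
# body here without performing any network I/O.
from urllib.parse import urlencode

EZ_URL = "http://api.stathat.com/ez"

def ez_count(ezkey, stat, count=1):
    """Build the (url, body) pair for a counter stat."""
    return EZ_URL, urlencode({"ezkey": ezkey, "stat": stat, "count": count})

def ez_value(ezkey, stat, value):
    """Build the (url, body) pair for a value stat."""
    return EZ_URL, urlencode({"ezkey": ezkey, "stat": stat, "value": value})

url, body = ez_count("me@example.com", "aCounter")
```

This also makes visible why there is no password: the ezkey (your e-mail address, by default) is the only credential the API sees.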
- Downloads (All Versions): - 1 downloads in the last day - 35 downloads in the last week - 153 downloads in the last month - Author: Hynek Schlawack - License: MIT - Platform: any - Categories - Development Status :: 4 - Beta - Environment :: Console - Framework :: Twisted - Intended Audience :: Developers - License :: OSI Approved :: MIT License - Operating System :: OS Independent - Programming Language :: Python - Programming Language :: Python :: 2.6 - Programming Language :: Python :: 2.7 - Programming Language :: Python :: 2 :: Only - Topic :: Software Development :: Libraries :: Python Modules - Package Index Owner: hynek - DOAP record: txStatHat-0.2.0.xml
https://pypi.python.org/pypi/txStatHat/
This post is about don'ts. Here are the two most important rules of this post: don't use std::move thoughtlessly and don't slice. Let's start. Here are the don'ts for today: std::move(), new and delete (including delete[]), and slicing.

The first rule is a disguised don't. Most of the time there is no need to explicitly call std::move. The compiler automatically applies move semantics if the source of the operation is an rvalue. An rvalue is an object with no identity: it typically has no name and you cannot take its address. The remaining objects are lvalues. Applying std::move to an lvalue gives most of the time an empty object. The lvalue is afterwards in a so-called moved-from state. This means that it is in a valid but not further specified state. Sounds strange? Right! You just have to keep this rule in mind: after you move from an lvalue, such as std::move(source), you cannot make any assumptions about source. You have to set it to a new value.

Wait a second. The rule says you should only use std::move if you want to move an object to another scope. The classical use cases are objects which cannot be copied, only moved. For example, you want to move an std::promise into another thread.

    // moveExplicit.cpp

    #include <future>
    #include <iostream>
    #include <thread>
    #include <utility>

    void product(std::promise<int>&& intPromise, int a, int b){  // (1)
      intPromise.set_value(a * b);
    }

    int main(){

      int a = 20;
      int b = 10;

      // define the promises
      std::promise<int> prodPromise;

      // get the futures
      std::future<int> prodResult = prodPromise.get_future();

      // calculate the result in a separate thread
      std::thread prodThread(product, std::move(prodPromise), a, b);  // (2)

      // get the result
      std::cout << "20 * 10 = " << prodResult.get() << std::endl;  // 200

      prodThread.join();

    }

The function product (1) gets the std::promise by rvalue reference. A promise cannot be copied, only moved; therefore, std::move is necessary (2) to move the promise into the newly created thread. Here is the big don't!
Don't use std::move in a return statement:

    vector<int> make_vector() {
        vector<int> result;
        // ... load result with data
        return std::move(result);   // bad; just write "return result;"
    }

Trust your optimiser! If you return the object just by copy, the optimiser will do its job. This was best practice until C++14; since C++17 it is an obligatory rule called guaranteed copy elision. Although this technique is called automatic copy elision, move operations are also optimised away with C++11.

RVO stands for Return Value Optimisation and means that the compiler is allowed to remove unnecessary copy operations. What was a possible optimisation step until C++14 becomes a guarantee in C++17.

    MyType func(){
      return MyType{};          // (1) no copy with C++17
    }
    MyType myType = func();     // (2) no copy with C++17

Two unnecessary copy operations can happen in these few lines: the first one in (1) and the second one in (2). With C++17, both copy operations are not allowed. If the return value has a name, it's called NRVO. This acronym stands for Named Return Value Optimization.

    MyType func(){
      MyType myVal;
      return myVal;             // (1) one copy allowed
    }
    MyType myType = func();     // (2) no copy with C++17

The subtle difference is that the compiler can still copy the value myVal according to C++17 (1), but no copy will take place in (2).
// slice.cpp struct Base { int base{1998}; } struct Derived : Base { int derived{2011}; } void needB(Base b){ // ... } int main(){ Derived d; Base b = d; // (1) Base b2(d); // (2) needB(d); // (3) } The lines (1), (2), and (3) have all the same effect: the Derived part of d is removed. I assume that was not your intention. I said in the announcement to this post that slicing is one of the darkest parts of C++. Now it becomes dark. // sliceVirtuality.cpp #include <iostream> #include <string> struct Base { virtual std::string getName() const { // (1) return "Base"; } }; struct Derived : Base { std::string getName() const override { // (2) return "Derived"; } }; int main(){ std::cout << std::endl; Base b; std::cout << "b.getName(): " << b.getName() << std::endl; // (3) Derived d; std::cout << "d.getName(): " << d.getName() << std::endl; // (4) Base b1 = d; std::cout << "b1.getName(): " << b1.getName() << std::endl; // (5) Base& b2 = d; std::cout << "b2.getName(): " << b2.getName() << std::endl; // (6) Base* b3 = new Derived; std::cout << "b3->getName(): " << b3->getName() << std::endl; // (7) std::cout << std::endl; } I created a small hierarchy consisting of the Base and the Derived class. Each object of this class hierarchy should return its name. I made the method getName virtual (1) and overrode it in (2); therefore, I will have polymorphism. This means I can use a derived object via a reference (6) or a pointer to a base object (7). Under the hood, the object is of type Derived. This will not hold, if I just copy Derived d to Base b1 (5). In this case, slicing kicks in and I have a Base object under the hood. In case of copying, the declared or static type is used. If you use an indirection such as a reference or a pointer, the actual or dynamic type is used. 
To keep the rule in mind is quite simple: If your instances of a class should be polymorphic, it should declare or inherit at least one virtual method and you should use its objects via an indirection such as a pointer or a reference. Of course, there is a cure against slicing: provide a virtual clone function. Read the details here: C++ Core Guidelines: Rules for Copy and Move. This post was about don'ts. The next post will start with a do. Use curly braces for initialisation of23 All 1580559 Currently are 136 guests and no members online Kubik-Rubik Joomla! Extensions Read more... Read more...
http://www.modernescpp.com/index.php/c-core-guidelines-do-s-and-don-ts
CC-MAIN-2019-09
refinedweb
1,104
75.3
#include <Pt/Unit/TestProtocol.h>

Protocol for test suites.

Inherited by TextProtocol.

This is the base class for protocols that can be used to run a test suite. The default implementation will simply run each registered test of the test suite without passing it any data. Implementors need to override the method TestProtocol::run.

This method can be overridden to specify a custom protocol for a test suite. The default implementation will simply call each registered method of the test suite. Implementors will most likely call TestSuite::runTest to resolve a test method by name and pass it the required arguments.

Reimplemented in TextProtocol.
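The shape described here — a protocol object whose run hook is handed the suite, with the default behaviour being "run every registered test with no data" — is the classic template-method pattern and is language-neutral. A minimal Python sketch of the same idea (names are illustrative, not the Pt API):

```python
# Illustrative sketch of the TestProtocol idea, not the actual Pt C++ API:
# a protocol's run() drives a suite; subclasses override run() to pass data,
# filter tests, or change ordering.

class TestProtocol:
    def run(self, suite):
        # default: run each registered test, passing it no arguments
        for name in suite.test_names():
            suite.run_test(name)

class Suite:
    def __init__(self):
        self.ran = []
    def test_names(self):
        return ["test_a", "test_b"]
    def run_test(self, name, *args):
        self.ran.append((name, args))   # record what was run, and with what

suite = Suite()
TestProtocol().run(suite)
```

A custom protocol would override run() and call run_test with resolved arguments, which is exactly the role TestSuite::runTest plays in the C++ API.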
http://pt-framework.org/htdocs/classPt_1_1Unit_1_1TestProtocol.html
In this tutorial though, we'll be using gmail as our email service.

Prerequisites:
- Basic sockets

Guide:

Have you ever wondered how to extend your Java knowledge and write some really awesome programs? In this guide, we'll be creating our own program that reads our emails. It won't have any fancy GUI as this is not a GUI tutorial. It will only teach you how to read emails. What you do with the read emails is your choice.

A very important note: THIS IS NOT A TUTORIAL ABOUT SENDING EMAILS. ONLY READING THEM.

Basically, most email services are using a protocol called POP3 (Post Office Protocol, Version 3), which contains a few instructions. For example, by sending "USER userNameHere@gmail.com" to the server and after that sending "PASS passwordHere" to the server, we'll be connected to the server. Pretty easy, eh? In most servers, the port of the email service is 110. But of course, gmail is special and it uses port 995.

Usually, we'd need to create a regular socket to connect to an email service, but gmail isn't regular. Gmail is using what's called "SSL" (Secure Sockets Layer). Instead of regular sockets, gmail is using secured sockets, because security in emails is VERY important (I wouldn't want anyone to know that I told my friend I stole a wallet a week ago, would you?).

How do I know that gmail is using SSL sockets? It's simple: gmail tells us in the following link: Configuring other mail clients - Gmail Help

You can see a lot of information here. For example, the server we'll be connecting to is "pop.gmail.com". The port will be 995 and so on. Let's go to the code now.
First, we'll declare a few global variables that will contain the server's DNS, the port, the username and the password:

    String server = "pop.gmail.com";
    int port = 995;
    String username = "cool1337@gmail.com", password = "whatACoolPassword";

We'll also need to declare a few more global variables that will be responsible for connecting to the server, and for its input and output:

    SSLSocket socket;
    BufferedReader input;
    PrintWriter output;

The input will be used to read input from the server. For example, if I'm asking the server to send me an email, the input will read it and print it on the screen (on the console in our program). The output will be used to send messages to the server. For example, I can ask the server to send me an email.

Now, moving on to the constructor. In our constructor, we'll be initiating our socket, our input and our output variables. The way you do this is as follows:

    public EmailService() {
        try {
            SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
            socket = (SSLSocket) factory.createSocket(server, port);
            input = new BufferedReader(new InputStreamReader(socket.getInputStream()));
            output = new PrintWriter(socket.getOutputStream(), true);
        } catch (UnknownHostException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

Basically, you have to make an SSL socket factory, and then create a new socket using the factory. Just notice that we're sending our server's DNS and the port as arguments. After we've got a socket, we take care of our input and output variables. Notice that each time you send information to the server, the output stream must be flushed, or it won't work properly. The true argument in "new PrintWriter" makes an automatic flush each time you send information to the server.
But if the password is correct, I'll be recieving the following message: +OK Welcome.Important note: After you connect to the server (Not to your account on gmail, to the server!), the server automaticly sends a greeting. For gmail, the greeting is: +OK Gpop ready for requests from 109.64.30.80 l3pf4092546fan.52Each service should send a different greeting. Now, for some code. First of all, I made a new method that reads one line from the input stream. public String readOneLine() throws IOException { return input.readLine(); }Now we'll be printing the greeting message and connect to our account on gmail, using the following code: System.out.print("Greeting message: "); String response = readOneLine(); System.out.println(response); // Username output.println("USER " + username); response = readOneLine(); System.out.println(response); // Password output.println("PASS " + password); response = readOneLine(); System.out.println(response);Notice that after each request from the server, I'm printing the server's response. The output of this code is: Pretty awesome, isn't it? =DPretty awesome, isn't it? =D Greeting message: +OK Gpop ready for requests from 109.64.30.80 c10pf8910277bkc.43 +OK send PASS +OK Welcome. More information about requests in POP3: The way you ask requests from the server using POP3 protocol is very simple. You're sending a command with a possible argument, and recieve a response for your request. For example, you may type: "USER username" and after that "PASS passowrd" and the server will check if such user exists. In any case, the server will response and send you an "+OK" message or an "-ERR" message. There are several commands in POP3. We'll be using USER, PASS and RETR. Reading a message: Now, after we're connected to gmail, it's time to finally read our first message. We'll be using the RETR command to ask the server to give us a message. The RETR commands takes an argument. The message number. The first message, the second message and so on. 
The way you do it is as follows:

    output.println("RETR 1");

(1 is message #1.) After you ask for a message, the server may send "+OK message follows" if there's no problem, or "-ERR Message number out of range." if there's an error (for example, if you want to read message #133333337). After the server sends an okay message, the server sends multiple lines. In POP3, every time multiple lines are sent, the final line contains only one character - a dot (.). That's why, in the code, we'll be reading a line until the line is ".". The code is as follows:

    while (!response.equals(".")) {
        response = readOneLine();
        System.out.println(response);
    }

An example output would be:

    +OK message follows
    Delivered-To: baraklevy21@gmail.com
    Received: by 10.216.21.194 with SMTP id r44cs510787wer; Tue, 16 Mar 2010 05:43:57 -0700 (PDT)
    Received: by 10.142.60.4 with SMTP id i4mr2437100wfa.296.1268743423931; Tue, 16 Mar 2010 05:43:43 -0700 (PDT)
    Return-Path: <noreply@goldmineptc.com>
    Received: from buxhost06.buxhost.net (buxhost06.buxhost.net [208.64.125.194]) by mx.google.com with ESMTP id 36si13219142pxi.53.2010.03.16.05.43.42; Tue, 16 Mar 2010 05:43:42 -0700 (PDT)
    Received-SPF: neutral (google.com: 208.64.125.194 is neither permitted nor denied by best guess record for domain of noreply@goldmineptc.com) client-ip=208.64.125.194;
    Authentication-Results: mx.google.com; spf=neutral (google.com: 208.64.125.194 is neither permitted nor denied by best guess record for domain of noreply@goldmineptc.com) smtp.mail=noreply@goldmineptc.com
    Received: from nobody by buxhost06.buxhost.net with local (Exim 4.69) (envelope-from <noreply@goldmineptc.com>) id 1NrW7Y-0006Pa-JQ for baraklevy21@gmail.com; Tue, 16 Mar 2010 13:43:16 +0100
    To: baraklevy21@gmail.com
    Subject: Welcome to Gold Mine PTC
    Date: Tue, 16 Mar 2010 07:43:40 -0500
    From: Gold Mine PTC <noreply@goldmineptc.com>
    Message-ID: <002058df3d12326f8aaf5c8c68ee32a0@>
    X-Priority: 3
    X-Mailer: PHPMailer (phpmailer.sourceforge.net) [version 2.0.4]
    MIME-Version: 1.0
    Content-Type: multipart/alternative; boundary="b1_002058df3d12326f8aaf5c8c68ee32a0"
    X-AntiAbuse: This header was added to track abuse, please include it with any abuse report
    X-AntiAbuse: Primary Hostname - buxhost06.buxhost.net
    X-AntiAbuse: Original Domain - gmail.com
    X-AntiAbuse: Originator/Caller UID/GID - [99 99] / [47 12]
    X-AntiAbuse: Sender Address Domain - goldmineptc.com

    --b1_002058df3d12326f8aaf5c8c68ee32a0
    Content-Type: text/plain; charset = "iso-8859-1"
    Content-Transfer-Encoding: 8bit

    Welcome to Gold Mine PTC
    Please find your login details below.

    Username: ***

    Important!
    To activate your account click the link below or copy/paste the link into your browser:
    ***

    Kind regards,
    Support Gold Mine PTC

    --b1_002058df3d12326f8aaf5c8c68ee32a0
    Content-Type: text/html; charset = "iso-8859-1"
    Content-Transfer-Encoding: 8bit

    <HTML>
    <HEAD></HEAD>
    <BODY>
    <table width=600 border=0 cellpadding=0 cellspacing=0>
    <tr style='background:#000000; color:white; font-family:Lucida Sans Unicode;' valign=top align=left><td style='padding:10px;'>
    <div style='font-size:26px;'>Gold Mine PTC</div>
    <div style='font-size:16px;'>Easy Money!</div>
    </td></tr>
    <tr><td style='padding:5px; border-right:1px solid #000000; border-left:1px solid #000000;'>
    <br />
    Welcome to Gold Mine PTC<br />
    Please find your login details below.<br />
    <br />
    Username: <b>***</b><br />
    Password: <b>***</b><br />
    <br />
    <font color=darkorange><i><u>Important!</u></i></font><br />
    To activate your account click the link below or copy/paste the link into your browser:<br />
    <a href='***6a7a'>***6a7a</a><br />
    <br />
    Kind regards,<br />
    Support Gold Mine PTC<br />
    <br>
    </td></tr>
    <tr><td style='font-size:11px; color:white; padding:5px; background:#000000;'><b>Important!</b> To reply to this message, please open a <a style='color:white;' href=''><u>support ticket</u></a>.<br>Also want to start a PTC business like Gold Mine PTC?
<a style='color:white;' href=''><u></u></a></td></tr>
</table>
</BODY>
</HTML>

--b1_002058df3d12326f8aaf5c8c68ee32a0--
.

The entire code is as follows:

    import java.io.*;
    import javax.net.ssl.*;
    import java.net.UnknownHostException;

    public class EmailService {

        String server = "pop.gmail.com";
        int port = 995;
        String username = "c00001@gmail.com", password = "awesome1337";
        SSLSocket socket;
        BufferedReader input;
        PrintWriter output;

        public static void main(String[] args) {
            new EmailService();
        }

        // Note: the constructor body was partially lost in this post; the
        // socket setup below is a reconstruction using the standard JSSE
        // default factory so that the class compiles and runs.
        public EmailService() {
            try {
                SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
                socket = (SSLSocket) factory.createSocket(server, port);
                input = new BufferedReader(new InputStreamReader(socket.getInputStream()));
                output = new PrintWriter(socket.getOutputStream(), true);
                connect();
            } catch (UnknownHostException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

        public void connect() throws IOException {
            System.out.print("Greeting message: ");
            String response = readOneLine();
            System.out.println(response);

            // Username
            output.println("USER " + username);
            response = readOneLine();
            System.out.println(response);

            // Password
            output.println("PASS " + password);
            response = readOneLine();
            System.out.println(response);

            // Retrieve message #1 and print it until the terminating "."
            output.println("RETR 1");
            while (!response.equals(".")) {
                response = readOneLine();
                System.out.println(response);
            }
        }

        public String readOneLine() throws IOException {
            return input.readLine();
        }
    }

I hope you enjoyed the tutorial, and most of all learned more about sockets and about practical programming!

Barack.
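One detail the tutorial glosses over: per RFC 1939, a message line that itself begins with a dot is transmitted "byte-stuffed" as "..", so a robust reader should strip the extra leading dot. The helper below is not part of the original post (the class and method names are my own); it sketches how that loop could be written against any BufferedReader:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class Pop3Multiline {

    // Reads a POP3 multiline response up to the terminating "." line,
    // un-stuffing body lines that arrive starting with "..".
    public static List<String> readMultiline(BufferedReader in) throws IOException {
        List<String> lines = new ArrayList<>();
        String line;
        while ((line = in.readLine()) != null && !line.equals(".")) {
            lines.add(line.startsWith("..") ? line.substring(1) : line);
        }
        return lines;
    }

    public static void main(String[] args) throws IOException {
        // Simulated server output: a plain line, a dot-stuffed line, the terminator.
        BufferedReader fake = new BufferedReader(
                new StringReader("Hello\n..hidden dot line\n.\n"));
        System.out.println(readMultiline(fake)); // [Hello, .hidden dot line]
    }
}
```

Because it only depends on BufferedReader, the same helper can be exercised against a StringReader in a test, as shown in main.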
CS::Physics::iPhysicalBody Struct Reference

A base interface of physical bodies.

#include <ivaria/physics.h>

Detailed Description

A base interface of physical bodies. iRigidBody and iSoftBody will be derived from this one.

Definition at line 136 of file physics.h.

Member Function Documentation

Add a force to the whole body.

Apply an impulse on the physical body. The impulse is applied on the center of gravity of the body. The impulse will be applied for the next simulation step. If you want a continuous impulse, then you need to apply it manually at each step.

Get the density of the body.

Get the enabled state of this body.

Get the friction of this body.

Whether this object is affected by gravity.

Get the linear velocity (translational velocity component).

Get the mass of this body.

Get the type of this physical body.

Return the volume of this body.

Get whether this object is currently animated dynamically by the physical simulation, that is, whether it is either a dynamic actor, a soft body, or a dynamic rigid body. The only objects that are not dynamic are the rigid bodies that are either in static or kinematic state.

Query the iRigidBody interface of this body. It returns NULL if the interface is not valid, i.e. if GetPhysicalObjectType() is not CS::Physics::PHYSICAL_OBJECT_RIGIDBODY.

Query the iSoftBody interface of this body. It returns NULL if the interface is not valid, i.e. if GetPhysicalObjectType() is not CS::Physics::PHYSICAL_OBJECT_SOFTBODY.

Set the density of this collider. If the mass of the body was not defined, it will be computed from this; iSoftBody, however, must use SetMass instead. You should be really careful when using densities, because most game physics libraries do not work well when objects with large mass differences interact. It is safer to artificially keep the mass of moving objects in a safe range (from 1 to 100 kilograms, for example).

Set the enabled state of this body.

Set the friction of this body (in the range [0,1] for soft bodies).
Whether this object is affected by gravity.

Set the linear velocity (translational velocity component).

Set the total mass of this body.

The documentation for this struct was generated from the following file: ivaria/physics.h

Generated for Crystal Space 2.1 by doxygen 1.6.1
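The reference above never spells out the arithmetic behind ApplyImpulse. Under the usual rigid-body relation (an assumption on my part, not something these docs state about the engine's internals), an impulse J applied at the center of gravity changes linear velocity by J divided by the mass:

```java
public class ImpulseDemo {
    public static void main(String[] args) {
        // Hypothetical body: 2 kg, moving at 1 m/s along x.
        double mass = 2.0;
        double vx = 1.0;

        // Applying an impulse of 4 N*s along x changes velocity by J/m,
        // which is why the docs warn that the impulse lasts only one step:
        // a continuous push must be re-applied (or use AddForce instead).
        double jx = 4.0;
        vx += jx / mass;

        System.out.println(vx); // 3.0
    }
}
```

This also illustrates the SetDensity warning: a very heavy body hit by the same impulse barely changes velocity, which is one reason large mass ratios behave poorly in game physics engines.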
I'm trying to generate a random letter using the letters A-F but have no idea how to input the data. I know how to use the random generator for numbers, but when it comes to letters I'm not sure what to change to make it work. I'll give a sample of my random number code and see if any suggestions can be made on what to edit to get the letters output instead. Thanks!

    import java.util.Random;
    import java.text.DecimalFormat;

    public class RandomNumber {

        public static void main(String[] args) {
            double pi, total;
            int random;

            pi = 3.14159;
            DecimalFormat f = new DecimalFormat("0.00");
            Random generator = new Random();

            random = generator.nextInt(10) + 1;
            total = random * pi;

            System.out.println("Random: " + random);
            System.out.println("PI: " + pi);
            System.out.println("Random x PI: " + f.format(total));
        }
    }
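One common way to adapt the code above (a suggested approach, not taken from the thread itself) is to keep using nextInt and exploit the fact that Java chars are numeric: generate an offset 0..5 and add it to the character 'A'.

```java
import java.util.Random;

public class RandomLetter {
    public static void main(String[] args) {
        Random generator = new Random();
        // nextInt(6) yields 0..5, so 'A' + offset is one of 'A'..'F'.
        char letter = (char) ('A' + generator.nextInt(6));
        System.out.println("Random letter: " + letter);
    }
}
```

The same trick generalizes to any contiguous range of letters: for A-Z, use nextInt(26).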
Heiko Wundram <heikowu at ceosg.de> wrote in message news:<mailman.3507.1095603871.5135.python-list at python.org>...
> On Sunday, 19 September 2004 at 07:00, you wrote:
> > I am sure the solution is O(n) since the list must
> > only be iterated once and the dictionary is O(1), correct?
> > Thanks for the help!!
>
> In case you haven't found a solution yet, think about the properties of the
> sum of the numbers of the sequence, which is n*(n-1)/2 + x with 0 < x < n,
> where finding out why this equation holds and what x is is up to you.
>
> (n being defined as in your example, a sequence having n elements with the
> elements in 1..n-1 and only one repeated)
>
> Heiko.

Got it!! Thanks for your help. Here is my revised and working code:

    i = [1, 2, 3, 4, 5, 6, 3]
    sum0 = len(i) * (len(i) + 1) // 2
    sum1 = 0
    for a in i:
        sum1 += a
    print((sum1 - sum0) % len(i))

I think my main malfunction was with thinking that this was more complex than it was. By refocusing on the simple problem statement as suggested, the solution came easier. Thanks again.
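Heiko's identity also explains why the modulo trick works: the sum is n*(n-1)/2 + x, so subtracting n*(n+1)/2 leaves x - n, and taking that result mod n recovers x. The same arithmetic written out step by step (here in Java, purely as an illustration; note that Java's % keeps the sign of a negative operand, so the result must be normalized explicitly, unlike Python's %):

```java
public class FindDuplicate {
    public static void main(String[] args) {
        int[] seq = {1, 2, 3, 4, 5, 6, 3}; // n values from 1..n-1, one repeated
        int n = seq.length;                // 7

        int sum = 0;
        for (int a : seq) {
            sum += a;                      // 24 = n*(n-1)/2 + x
        }

        int diff = sum - n * (n + 1) / 2;  // 24 - 28 = -4 = x - n
        int dup = ((diff % n) + n) % n;    // normalize the negative remainder

        System.out.println(dup);           // 3
    }
}
```

The whole computation is still a single pass over the list, so the O(n) bound from the original question holds.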
Version 2.2 on FreeBSD.

After calling ::rivet::parse, a call to abort_page does not stop the execution of the page if the included rvt calls abort_page. If the included rvt does *not* call abort_page, the call in the outer rvt functions as expected.

I'm unclear as to what the intended behavior is. Is the intention that the rvt executed by ::rivet::parse is independent, with an independent lifecycle, or is it simply meant as a "function call" inside the outer rvt page?

Test case:

test_parse.rvt:

    <?
    parse test_parse2.rvt
    abort_page
    puts "Execution continued."
    ?>

test_parse2.rvt:

    <?
    puts "Parsed rvt."
    abort_page
    ?>

As a secondary issue, note that in a rivet + Tcl 8.5 installation, this can be worked around by placing a "return" call after the parse statement. However, with Tcl 8.6, this results in the following error: "errorCode 'TCL UNEXPECTED_RESULT_CODE 0', errorInfo 'command returned bad code: 0 while executing "namespace eval request { puts -nonewline "" ..."

Thank you for reporting this issue. I think you hit something more relevant than a simple bug: the way mod_rivet does script parsing and execution has to be reconsidered and ironed out.

Created attachment 32403 [details]
Tcl error status propagates to the caller when ::rivet::parse is called

The Tcl return error now propagates up to the toplevel script, and script execution is finalized in Rivet_SendContent.

Created attachment 32404 [details]
fix for improper handling of error code returned by parse command

Not only did the Tcl status not unwind the whole call stack, but after_every_script was called unconditionally after *each* parse.

Thank you for the quick response. We'll test and see how it works. I had noticed that after_every_script was called after every parse - which is in fact how I discovered the bug, since our after_every_script calls abort_page. That seemed wrong, which is why I asked about intended behavior.

In case abort_page gets called from within after_every_script, it won't trigger the AbortScript.
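To make the semantics being debated concrete, here is a rough analogue (written in Java purely for illustration; this is not how mod_rivet is implemented) of the behavior the reporter expects: abort_page acts like an exception that unwinds through nested parse calls until a single top-level handler, the AbortScript, runs once.

```java
import java.util.ArrayList;
import java.util.List;

public class AbortSemanticsDemo {

    // Stand-in for the condition ::rivet::abort_page raises.
    static class PageAbort extends RuntimeException {}

    static List<String> output = new ArrayList<>();

    static void testParse2() {                 // plays the role of test_parse2.rvt
        output.add("Parsed rvt.");
        throw new PageAbort();                 // abort_page
    }

    static void testParse() {                  // plays the role of test_parse.rvt
        testParse2();                          // ::rivet::parse test_parse2.rvt
        output.add("Execution continued.");    // expected NOT to run
    }

    public static void main(String[] args) {
        try {
            testParse();
        } catch (PageAbort abort) {
            output.add("AbortScript ran.");    // single top-level handler
        }
        System.out.println(output);            // [Parsed rvt., AbortScript ran.]
    }
}
```

Under this model, parse is a plain "function call" inside the outer page rather than an independent lifecycle, which is one of the two interpretations the bug report asks about.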
AfterEveryScript was executed after AbortScript before I made this patch anyway. And now it works this way all the more so because it gets executed really as very last script. So it must be changed again. Let me check the documentation but I suspect we have been having a quite haphazard setup for these procedures. If I understand you want to have AbortScript to be run whenever ::rivet::abort_page is called (perhaps only ErrorScript shouldn't run it) which makes sense but it's going to add here and there a few more if (rsc->after_every_script != NULL) Tcl_Exec.... please hold on a few more hours then... Created attachment 32406 [details] 3rd patch for ::rivet::abort_page handling it turned out to be easier then I thought at first, but I haven't tested it. Please let me know the patch aimed at fixing bug #57501 was applied (a bit further elaborated to move the charset handling into TclWeb_PrintHeaders) and committed to branches/2.2 I'm going to allow a few more dayes after which if no further problems will surface I will close the bug -- Massimo I apologize, we haven't had a chance to test yet, since we've been using the version of Rivet in FreeBSD ports, and haven't had a chance to patch and build. We are going to test the 2.2 branch tomorrow and will update this thread with our results. Please, let us know when you've done your tests, I wish to roll another version of rivet including the fix of this bug I apologize for the delay again. We have done a test, which unfortunately resulted in an apache core dump. Since the test was conducted on an active dev environment, we have yet to analyze the dump or isolate the problem more thoroughly. I'll keep this bug updated as we proceed. A possible cause of segfaults was found in the 2.2 branch and in trunk as well. The 2.2 branch has been patched and trunk will follow soon. I don't know if it can help to find the source of the segfault you observed in your test environment Any news from your test environments? 
Apologies again for the delay. For a clean install of the 2.2 branch on FreeBSD, I don't get a segfault, and it looks like the initial problem is fixed. However, if I set the AfterEveryScript to abort_page, I get: Rivet AfterEveryScript failed! Page generation terminated by abort_page directive invoked from within "abort_page" Is this expected, or should we not be attempting to call abort_page in the AfterEveryScript? Note of course in production we don't use abort_page as the actual AfterEveryScript, but it is called from it. AfterEveryScript was meant to be exactly that: the last resort for catching anything left dangling. It has no ErrorScript or AbortScript but it makes sense it should be caught by an ErrorScript in case of Tcl errors. I'm not sure AbortScript should be triggered from AfterEveryScript (which won't be run after-every-script anymore), but for uniformity of behavior it might be the case we also assume abort_page to be able to trigger an exception even during this very final stage. I would be glad to add it, I just have a feeling like we are forcing and overloading AbortScript of tasks that should probably be carried out withing AfterEveryScript itself. But I'm a bit too conservative at times Thoughts? > I'm not sure AbortScript should be triggered from AfterEveryScript (which won't be run after-every-script anymore) This is where my understanding of Rivet is a bit limited. My understanding is that abort_page is not meant to trigger an error, but simply as an exit statement from the rivet page. Or is your point that since AfterEveryScript is run after the script is aborted, then calling abort_page is nonsensical and unnecessary? Also to confirm, is the error on abort_page in AfterEveryScript new with your recent changes? It's in our current AfterEveryScript, which operates without issues. Forget about my comment. I misinterpreted the code. I don't even know how I did it, not very accurate but I checked rather carelessly. 
::rivet::abort_page with AfterEveryScript fires the execution of AbortScript. Sorry for confusing it and sorry for getting you on the wrong track. I generally refer to abort_page as a way to generate exceptions because it's the closer analogy I have in mind. It's not exactly like in Java or C++, but the logic behind it is quite similar, you stop the execution and jump to where the a condition can be caught. And abort_page accepts an argument for driving the response of AbortScript. But to add more confusion to the issue...well...yes, abort_page returns a TCL_ERROR code with a reserved error code that triggers the execution of an AbortScript if exists. This is the way the mechanism was designed. The bottom line is: we need also to check if an ErrorScript must be run. Your error is caused by something that failed in the AfterEveryScript. Perhaps the error message could be also improved I did some more accurate analysis of the problem, yesterday wasn't the right time for doing it, today I have more time and less things going on around me. Scripts execution was already changed and performed by calling Rivet_ExecuteAndCheck. Within this function error conditions are examined and in case handled by AbortScript or ErrorScript. These scripts cannot be run through Rivet_ExecuteAndCheck for obvious reasons and their errors must be handled directly. Ideally these scripts must not fail. I spotted a problem in Rivet_ExecuteAndCheck: the error code returned by an ErrorScript has no effect on the code returned by Rivet_ExecuteAndCheck whereas in case of successful execution it simply should return TCL_OK to the caller. This is a problem I'm going to fix right away. Created attachment 32659 [details] extended error handling This patch extends the error handling. The central procedure in mod_rivet for script execution (Rivet_ExecuteAndCheck) has been modified and it's now sided by a new procedure Rivet_ExecuteErrorHandler. 
This procedure runs for both a URL referenced script and for an AbortScript in case of errors. I did some tests and it worked but needs to be tested on a wider set of cases. This patch should also fix the problem you're observing: if an AbortScript or ErrorScript exits successfully it the whole procedure should return TCL_OK. Please let me know what's the result of your tests now Thanks for the patch. When we apply it to the 2.2 branch, the unmodified index.rvt rivet test page gives the same error as previously listed: Rivet AfterEveryScript failed! Page generation terminated by abort_page directive invoked from within "abort_page" GET HTTP responds with code two hundred page loads, YOU GOT SERVED Created attachment 32672 [details] New handling of abort script if abort_page is called but no AbortScript defined calling abort_page it breaks execution but doesn't return TCL_ERROR if no AbortScript is defined Comment on attachment 32672 [details] New handling of abort script if abort_page is called but no AbortScript defined The patch has the wrong extension (.tcl instead of .diff). Sorry for the prolonged messing about this issue. I definitely need some time off work Created attachment 32688 [details] Error handling further improved Improved handling of AbortScript and ErrorScript. ErrorScripts are always run when any other script fails (if defined). If an ErrorScript is defined it's left to the programmer the task of generating a suitable error message if needed Thanks for the latest patch. With this patch, I get the following behavior: - abort_page within a parsed file "test_parse2.rvt" in this instance, does abort the execution of the parsed page, but not the calling page. - Subsequent calls to abort_page from the calling page also have no effect, page execution continues. - Abort_page works properly when no parse is involved. 
Created attachment 32709 [details] Checking abort condition when rolling back through the call stack to prevent spurious error handling This patch uses the globals->page_aborting flag to check if the TCL_ERROR status was returned after calling abort_page. Thus calling abort_page within a nested template shouldn't result in a Tcl error. Please let me know Thanks. The actual behavior of the pages is correct, and behaves as I expect. However, when abort_page is called, either in the calling page, the called page, or in the AfterEveryScript, mod_rivet records an error in the httpd-error.log, for example: [Mon May 04 14:25:35 2015] [error] (20014)Internal error: mod_rivet: Error in Rivet_ParseExecFile exec file '/usr/local/www/apache22/data/test.rvt': Page generation terminated by abort_page directive\n invoked from within\n"abort_page"\n invoked from within\n"parse test_parse2.rvt"\n (in namespace eval "::request" script line 5)\n invoked from within\n"namespace eval request {\nputs -nonewline ""\n\n\nparse test_parse2.rvt\n\n\nputs "Execution continued."\n\n# abort_page\n\nputs -nonewline "\n\n\n \n"\n\n}" Created attachment 32713 [details] Also we don't report abort_page generated errors Yes! top level script execution mustn't report errors generate by abort_page calls. Please check this out, it works for me The latest patch prevents spurious error logging by checking the flag globals->page_aborting (Tcl interpreter associated data) but there also cases when you want to print an error even when abort_page has been called After abort_page is called execution is interrupted and control passed on to AbortScript and AfterEveryScript. If any of these scripts fail for some reason logging must be re-enabled and therefore that flag must be reset each time an error handler is called (Rivet_ExecuteErrorHandler) This simple change is in my working box, I'm certain your AbortScript is fail proof ;) so it won't change your tests. 
I will check it more closely and commit it in a day or two Thanks for the patch. All my tests pass, and I'm no longer seeing errors in httpd.log when calling abort_page. abort_page, when called from within a parsed parsed, aborts the entire page, including the caller. To make sure we're on the same page, AfterEveryScript is not called (I believe) when a parsed page finishes. My test is using abort_page as the AfterEveryScript. When I do that, if I do not call abort_page explicitly from within the parsed page, page execution continues in the caller. I think that's reasonable behavior, although I could see the argument for calling AfterEveryScript after the parsed page, since it too is a script. I'm not sure I understand all the details in your last comment. If you look at the function Rivet_SendContent in src/apache-2/mod_rivet.c you will see the AfterEveryScript (stored in rsc->after_every_script, where rsc is the configuration record) is executed right after Rivet_ParseExecFile regardless the status code returned by this function. The pointer storing the script has to be non NULL to be fired. abort_page sets an internal status flag and subsequent calls to abort_page have no effect until the request has been served. The rationale for this is that the parsed template (or also pure Tcl script for what it matters) is supposed to interrupt execution immediately and hand the control on to the AbortScript from where I don't see the point of calling abort_page again. Then AfterEveryScript was introduced and now we passed it throught Rivet_ExecuteAndCheck, which runs rsc->rivet_abort_script if abort_page is called, so we should have two cases 1) abort_page is invoked from the parsed page and then it should be uneffective from that point on to the end a single request 2) abort_oage is invoked from rsc->after_every_script and it should fire rsc->rivet_abort_page anyway Does it match your tests? 
Do we want AbortPage scripts to be treated differently when called from AfterEveryScript? I apologize; the current behavior works for us, I was just confused about the implementation details.