text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Next.js provides an integrated TypeScript experience, including zero-configuration set up and built-in types for Pages, APIs, and more. create-next-appsupport You can create a TypeScript project with create-next-app using the --ts, --typescript flag like so: npx create-next-app@latest --ts # or yarn create next-app --typescript # or pnpm create next-app --ts To get started in an existing project, create an empty tsconfig.json file in the root folder: touch tsconfig.json Next.js will automatically configure this file with default values. Providing your own tsconfig.json with custom compiler options is also supported. You can also provide a relative path to a tsconfig.json file by setting typescript.tsconfigPath prop inside your next.config.js file. Starting in v12.0.0, Next.js uses SWC by default to compile TypeScript and TSX for faster builds. Next.js will use Babel to handle TypeScript if .babelrcis present. This has some caveats and some compiler options are handled differently. Then, run next (normally npm run dev or yarn dev) and Next.js will guide you through the installation of the required packages to finish the setup: npm run dev # You'll see instructions like these: # # Please install TypeScript, @types/react, and @types/node by running: # # yarn add --dev typescript @types/react @types/node # # ... You're now ready to start converting files from .js to .tsx and leveraging the benefits of TypeScript! A file named next-env.d.tswill be created at the root of your project. This file ensures Next.js types are picked up by the TypeScript compiler. You should not remove it or edit it as it can change at any time. This file should not be committed and should be ignored by version control (e.g. inside your .gitignorefile). TypeScript strictmode is turned off by default. When you feel comfortable with TypeScript, it's recommended to turn it on in your tsconfig.json. Instead of editing next-env.d.ts, you can include additional types by adding a new file e.g. additional.d.tsand then referencing it in the includearray in your tsconfig.json. By default, Next.js will do type checking as part of next build. We recommend using code editor type checking during development. If you want to silence the error reports, refer to the documentation for Ignoring TypeScript errors. For getStaticProps, getStaticPaths, and getServerSideProps, you can use the GetStaticProps, GetStaticPaths, and GetServerSideProps types respectively: import { GetStaticProps, GetStaticPaths, GetServerSideProps } from 'next' export const getStaticProps: GetStaticProps = async (context) => { // ... } export const getStaticPaths: GetStaticPaths = async () => { // ... } export const getServerSideProps: GetServerSideProps = async (context) => { // ... } If you're using getInitialProps, you can follow the directions on this page. The following is an example of how to use the built-in types for API routes: import type { NextApiRequest, NextApiResponse } from 'next' export default (req: NextApiRequest, res: NextApiResponse) => { res.status(200).json({ name: 'John Doe' }) } You can also type the response data: import type { NextApiRequest, NextApiResponse } from 'next' type Data = { name: string } export default (req: NextApiRequest, res: NextApiResponse<Data>) => { res.status(200).json({ name: 'John Doe' }) } App If you have a custom App, you can use the built-in type AppProps and change file name to ./pages/_app.tsx like so: import type { AppProps } from 'next/app' export default function MyApp({ Component, pageProps }: AppProps) { return <Component {...pageProps} /> } Next.js automatically supports the tsconfig.json "paths" and "baseUrl" options. You can learn more about this feature on the Module Path aliases documentation. The next.config.js file must be a JavaScript file as it does not get parsed by Babel or TypeScript, however you can add some type checking in your IDE using JSDoc as below: // @ts-check /** * @type {import('next').NextConfig} **/ const nextConfig = { /* config options here */ } module.exports = nextConfig Since v10.2.1 Next.js supports incremental type checking when enabled in your tsconfig.json, this can help speed up type checking in larger applications. It is highly recommended to be on at least v4.3.2 of TypeScript to experience the best performance when leveraging this feature. Next.js fails your production build ( next build) when TypeScript errors are present in your project. If you'd like Next.js to dangerously produce production code even when your application has errors, you can disable the built-in type checking step. If disabled, be sure you are running type checks as part of your build or deploy process, otherwise this can be very dangerous. Open next.config.js and enable the ignoreBuildErrors option in the typescript config: module.exports = { typescript: { // !! WARN !! // Dangerously allow production builds to successfully complete even if // your project has type errors. // !! WARN !! ignoreBuildErrors: true, }, }
https://nextjs.org/docs/basic-features/typescript
CC-MAIN-2022-40
refinedweb
770
58.99
KDECore KCmdLineArgs Class ReferenceA class for command-line argument handling. More... #include <kcmdlineargs.h> Detailed DescriptionA class for command-line argument handling. KCmdLineArgs provides simple access to the command-line arguments for an application. It takes into account Qt-specific options, KDE-specific options and application specific options. This class is used in main() via the static method init(). A typical KDE application using KCmdLineArgs should look like this: int main(int argc, char *argv[]) { // Initialize command line args KCmdLineArgs::init(argc, argv, appName, programName, description, version); // Tell which options are supported KCmdLineArgs::addCmdLineOptions( options ); // Add options from other components KUniqueApplication::addCmdLineOptions(); .... // Create application object without passing 'argc' and 'argv' again. KUniqueApplication app; .... // Handle our own options/arguments // A KApplication will usually do this in main but this is not // necessary. // A KUniqueApplication might want to handle it in newInstance(). KCmdLineArgs *args = KCmdLineArgs::parsedArgs(); // A binary option (on / off) if (args->isSet("some-option")) .... // An option which takes an additional argument QCString anotherOptionArg = args->getOption("another-option"); // Arguments (e.g. files to open) for(int i = 0; i < args->count(); i++) // Counting start at 0! { // don't forget to convert to Unicode! openFile( QFile::decodeName( args->arg(i))); // Or more convenient: // openURL( args->url(i)); } args->clear(); // Free up some memory. .... } The options that an application supports are configured using the KCmdLineOptions class. An example is shown below: static const KCmdLineOptions options[] = { { "a", I18N_NOOP("A short binary option"), 0 }, { "b <file>", I18N_NOOP("A short option which takes an argument"), 0 }, { "c <speed>", I18N_NOOP("As above but with a default value"), "9600" }, { "option1", I18N_NOOP("A long binary option, off by default"), 0 }, { "nooption2", I18N_NOOP("A long binary option, on by default"), 0 }, { ":", I18N_NOOP("Extra options:"), 0 }, { "option3 <file>", I18N_NOOP("A long option which takes an argument"), 0 }, { "option4 <speed>", I18N_NOOP("A long option which takes an argument, defaulting to 9600"), "9600" }, { "d", 0, 0 }, { "option5", I18N_NOOP("A long option which has a short option as alias"), 0 }, { "e", 0, 0 }, { "nooption6", I18N_NOOP("Another long option with an alias"), 0 }, { "f", 0, 0 }, { "option7 <speed>", I18N_NOOP("'--option7 speed' is the same as '-f speed'"), 0 }, { "!option8 <cmd>", I18N_NOOP("All options following this one will be treated as arguments"), 0 }, { "+file", I18N_NOOP("A required argument 'file'"), 0 }, { "+[arg1]", I18N_NOOP("An optional argument 'arg1'"), 0 }, { "!+command", I18N_NOOP("A required argument 'command', that can contain multiple words, even starting with '-'"), 0 }, { "", I18N_NOOP("Additional help text not associated with any particular option") 0 }, KCmdLineLastOption // End of options. }; The I18N_NOOP macro is used to indicate that these strings should be marked for translation. The actual translation is done by KCmdLineArgs. You can't use i18n() here because we are setting up a static data structure and can't do translations at compile time. Note that a program should define the options before any arguments. When a long option has a short option as an alias, a program should only test for the long option. With the above options a command line could look like: myapp -a -c 4800 --display localhost:0.0 --nooption5 -d /tmp/file Long binary options can be in the form 'option' and 'nooption'. A command line may contain the same binary option multiple times, the last option determines the outcome: myapp --nooption4 --option4 --nooption4 myapp --nooption4 If an option value is provided multiple times, normally only the last value is used: myapp -c 1200 -c 2400 -c 4800 myapp -c 4800 However, an application can choose to use all values specified as well. As an example of this, consider that you may wish to specify a number of directories to use:getOptionList() Tips for end-users: - Single char options like "-a -b -c" may be combined into "-abc" - The option "--foo bar" may also be written "--foo=bar" - The option "-P lp1" may also be written "-P=lp1" or "-Plp1" - The option "--foo bar" may also be written "-foo bar" - Version: - 0.0.4 Definition at line 222 of file kcmdlineargs.h. Constructor & Destructor Documentation Constructor. For internal use only. Constructor. The given arguments are assumed to be constants. Definition at line 989 of file kcmdlineargs.cpp. Destructor. For internal use only. use only. Use clear() if you want to free up some memory. Destructor. Definition at line 1001 of file kcmdlineargs.cpp. Member Function Documentation Add options to your application. You must make sure that all possible options have been added before any class uses the command line arguments. The list of options should look like this: static KCmdLineOptions options[] = { { "option1 <argument>", I18N_NOOP("Description 1"), "my_extra_arg" }, { "o", 0, 0 }, { "option2", I18N_NOOP("Description 2"), 0 }, { "nooption3", I18N_NOOP("Description 3"), 0 }, KCmdLineLastOption } - "option1" is an option that requires an additional argument, but if one is not provided, it uses "my_extra_arg". - "option2" is an option that can be turned on. The default is off. - "option3" is an option that can be turned off. The default is on. - "o" does not have a description. It is an alias for the option that follows. In this case "option2". - "+file" specifies an argument. The '+' is removed. If your program doesn't specify that it can use arguments your program will abort when an argument is passed to it. Note that the reverse is not true. If required, you must check yourself the number of arguments specified by the user: KCmdLineArgs *args = KCmdLineArgs::parsedArgs(); if (args->count() == 0) KCmdLineArgs::usage(i18n("No file specified!")); cmd = myapp [options] file options = (option)* option = --option1 \<argument> | (-o | --option2 | --nooption2) | ( --option3 | --nooption3 ) Instead of "--option3" one may also use "-option3" Usage examples: - "myapp --option1 test" - "myapp" (same as "myapp --option1 my_extra_arg") - "myapp --option2" - "myapp --nooption2" (same as "myapp", since it is off by default) - "myapp -o" (same as "myapp --option2") - "myapp --nooption3" - "myapp --option3 (same as "myapp", since it is on by default) - "myapp --option2 --nooption2" (same as "myapp", because it option2 is off by default, and the last usage applies) - "myapp /tmp/file" - Parameters: - Definition at line 206 of file kcmdlineargs.cpp. Add standard option --tempfile. - Since: - 3.4 Definition at line 1287 of file kcmdlineargs.cpp. Get the appname according to argv[0]. - Returns: - the name of the application Definition at line 199 of file kcmdlineargs.cpp. Read out an argument. - Parameters: - - Returns: - A const char *pointer to the n'th argument. Definition at line 1230 of file kcmdlineargs.cpp. Clear all options and arguments. Definition at line 1010 of file kcmdlineargs.cpp. Read the number of arguments that aren't options (but, for example, filenames). - Returns: - The number of arguments that aren't options Definition at line 1222 of file kcmdlineargs.cpp. Get the CWD (Current Working Directory) associated with the current command line arguments. Typically this is needed in KUniqueApplication::newInstance() since the CWD of the process may be different from the CWD where the user started a second instance. - Returns: - the current working directory Definition at line 194 of file kcmdlineargs.cpp. Enable i18n to be able to print a translated error message. N.B.: This function leaks memory, therefore you are expected to exit afterwards (e.g., by calling usage()). Definition at line 743 of file kcmdlineargs.cpp. Read out a string option. The option must have a corresponding KCmdLineOptions entry of the form: - Parameters: - - Returns: - The value of the option. If the option was not present on the command line the default is returned. If the option was present more than the value of the last occurrence is used. Definition at line 1117 of file kcmdlineargs.cpp. Read out all occurrences of a string option. The option must have a corresponding KCmdLineOptions entry of the form: - Parameters: - - Returns: - A list of all option values. If no option was present on the command line, an empty list is returned. Definition at line 1149 of file kcmdlineargs.cpp. Initialize Class. This function should be called as the very first thing in your application. This method will rarely be used, since it doesn't provide any argument parsing. It does provide access to the KAboutData information. This method is exactly the same as calling init(0,0, const KAboutData *about, true). - Parameters: - - See also: - KAboutData Definition at line 153 of file kcmdlineargs.cpp. Initialize class. This function should be called as the very first thing in your application. It uses KAboutData to replace some of the arguments that would otherwise be required. - Parameters: - Definition at line 162 of file kcmdlineargs.cpp. - Deprecated: - You should convert any calls to this method to use the one above, by adding in the program name to be used for display purposes. Do not forget to mark it for translation using I18N_NOOP. Definition at line 136 of file kcmdlineargs.cpp. Initialize class. This function should be called as the very first thing in your application. - Parameters: - - Since: - 3.2 Definition at line 127 of file kcmdlineargs.cpp. Read out a boolean option or check for the presence of string option. - Parameters: - - Returns: - The value of the option. It will be true if the option was specifically turned on in the command line, or if the option is turned on by default (in the KCmdLineOptions list) and was not specifically turned off in the command line. Equivalently, it will be false if the option was specifically turned off in the command line, or if the option is turned off by default (in the KCmdLineOptions list) and was not specifically turned on in the command line. Definition at line 1179 of file kcmdlineargs.cpp. - Returns: - true if --tempfile was set - Since: - 3.4 Definition at line 1292 of file kcmdlineargs.cpp. Load arguments from a stream. Definition at line 264 of file kcmdlineargs.cpp. Made public for apps that don't use KCmdLineArgs - Parameters: - - Returns: - the url. Definition at line 1251 of file kcmdlineargs.cpp. Access parsed arguments. This function returns all command line arguments that your code handles. If unknown command-line arguments are encountered the program is aborted and usage information is shown. - Parameters: - Definition at line 310 of file kcmdlineargs.cpp. Reset all option definitions, i.e. cancel all addCmdLineOptions calls. Note that KApplication's options are removed too, you might want to call KApplication::addCmdLineOptions if you want them back. You usually don't want to call this method. Definition at line 1019 of file kcmdlineargs.cpp. Made public for apps that don't use KCmdLineArgs To be done before makeURL, to set the current working directory in case makeURL needs it. - Parameters: - Definition at line 516 of file kcmdlineargs.h. Read out an argument representing a URL. The argument can be - an absolute filename - a relative filename - a URL - Parameters: - - Returns: - a URL representing the n'th argument. Definition at line 1246 of file kcmdlineargs.cpp. Print an error to stderr and the usage help to stdout and exit. - Parameters: - Definition at line 757 of file kcmdlineargs.cpp. Print the usage help to stdout and exit. - Parameters: - Definition at line 772 of file kcmdlineargs.cpp. The documentation for this class was generated from the following files:
http://api.kde.org/3.5-api/kdelibs-apidocs/kdecore/html/classKCmdLineArgs.html
CC-MAIN-2014-42
refinedweb
1,825
57.27
Back to article April 28, 2003 Command Prompt Inc. has bolstered SSL support and added support for a slew of capabilities in Mammoth PostgreSQL, its commercially supported distribution of the open-source PostgreSQL database. New features in Mammoth PostgreSQL 7.3.2, standard and deluxe editions, include support for schemas and namespaces, enabling users to create objects in separate namespaces so that two people or applications can have tables with the same name. The product also features a public schema for shared tables. Also included is support for table functions. Users now can call a table function in the SELECT FROM clause and treat its output like a table. PL/pgSQL functions can also now return data sets. The article continues at The Network for Technology Professionals About Internet.com Legal Notices, Licensing, Permissions, Privacy Policy. Advertise | Newsletters | E-mail Offers
http://www.databasejournal.com/news/print.php/2197371
CC-MAIN-2014-42
refinedweb
141
50.53
1.The only time I have seen the add variable be unavailable is when either the .h or cpp is read only. 2.There never was a CListView control in the tool bar. There was a CListCtrl in the toolbar and it's still there. "Citadel85" wrote in message news:f0f03f26.0310310716.6d19e851@posting.google.c om... > OK ... have a bit of experience with VC 6 ... having a lot of problems > convering to the new .net ide. > > 1st... creating control variables is proven to be a pain in the XXX > > I constantly open the dialog in the resource editor and while 'some' > items are class wizard accessible (I want to create a control > variable) most of the time the menu option 'add variable' for the item > I want is not available. Anyone experience this problem???? Yeah, I > know how to do it manaully; I have an MCSD (VC 6) ... but, the wizard > sure saved time for that nitnoid stuff. > > > 2nd ... Have I done a brain dump or something? ... My toolbox control > does not appear to contain the CListView item anymore? ... I have the > CListBox.. but not the ListView. I also noted that .net comes with > Crystal embedded. These control items are also disabled? Do I have > to register these components manually to be able to use them (Crystal > and some module for the CListView)? > I thought I might have simply accidentially moved or deleted it... so, > I opened the toolbar customizer, ordered the avaialabe tools by > namespace and found the Windows.System.Form namespace item - unchecked > and rechecked the item. It added the icon to my 'dialog' control > list.. but, while there,... it is disabled. HELP!!!! > > > Any assistance is appreciated.
http://fixunix.com/programmer/96374-net-problems-listview-class-wizard-add-variable.html
CC-MAIN-2014-41
refinedweb
280
77.94
23 August 2013 23:00 [Source: ICIS news] HOUSTON (ICIS)--Here is Friday’s end of day Americas oil and chemical market summary from ICIS. CRUDE: Oct WTI: $106.42/bbl, up $1.39; Oct Brent: $111.04/bbl, up $1.14 NYMEX WTI crude futures extended the previous session’s gains on pre-weekend buying. A rally in the gasoline futures complex in response to various refinery issues in ?xml:namespace> RBOB: Sep $3.0072 /gal, up 4.24 cents/gal Reformulated blendstock for oxygen blending (RBOB) gasoline futures settled higher on higher crude futures and reports of refinery outages. This was the first time the September contract traded above the $3/gal mark since it took over the prompt spot on 1 August. NATURAL GAS: Sep: $3.485/MMBtu, down 6.0 cents The front month contract on the NYMEX natural gas futures slipped back below the $3.50/MMBtu mark as the market corrected itself following Thursday's near 3% price surge. Bearish sentiment strengthened through the day on concerns over high production and low long-term demand, despite service company Baker Hughes reporting a slight fall in the number of drilling rigs being used across the US in its latest weekly rig count. ETHANE: lower at 24.75 cents/gal Ethane spot prices were slightly lower on a drop in natural gas futures trading. AROMATICS: styrene flat at 76.00-77.00 cents/lb Prompt styrene spot prices were discussed at 76-77 cents/lb FOB (free on board) on Friday, sources said. The range was flat from a week earlier as supply remains tight and trade participants kept to the sidelines. OLEFINS: ethylene traded lower at 54.5 cents/lb, PGP traded higher at 67.25 cents/lb US August ethylene traded three times at 54.50 cents/lb, lower than the previous day’s trade at 54.75 cents/lb, as supply concerns eased. US August polymer-grade propylene (PGP) traded at 67.25 cents/lb on Friday afternoon, higher than the previous day’s trade of 67.00 cents/lb but lower than an early-Friday trade at 67
http://www.icis.com/Articles/2013/08/23/9700299/EVENING-SNAPSHOT---Americas-Markets-Summary.html
CC-MAIN-2014-52
refinedweb
356
69.07
Hi I’ve got a Raspberry Pi Zero with a Grove Hat and three sound sensors plugged into the analogue ports. They are working - I get values being reported using the grove_sound_sensor example. However, they never report below 400, even in a quiet room and will often report 999 when there is little more than birdsong in the distance or one will not respond even if there is a loud noise in the immediate vicinity. Putting them linearly at 500mm spacings and making a loud test sound at one end does not always result in them reporting in the correct order, but I suspect this is a consequential error. They are definitely responding to sounds individually though. I am using this cut-down code: import sys import math import datetime from grove.adc import ADC adc = ADC() TH = 999 detected = [] times = [] V0 = 'v0' V2 = 'v2' V4 = 'v4' print('Detecting sound...') while True: v0 = adc.read(0) v2 = adc.read(2) v4 = adc.read(4) #if len(detected) > 0: # print('.', end='') # sys.stdout.flush() if v0 >= TH and V0 not in detected: detected.append(V0) times.append(datetime.datetime.now().timestamp()) if v2 >= TH and V2 not in detected: detected.append(V2) times.append(datetime.datetime.now().timestamp()) if v4 >= TH and V4 not in detected: detected.append(V4) times.append(datetime.datetime.now().timestamp()) if len(detected) == 3: break print() print(detected) print(times) Am I doing something wrong? Would I be better off switching to Loudness Sensors? Thanks Nick
https://forum.seeedstudio.com/t/sound-sensor-range-of-values-400-999/259602
CC-MAIN-2021-31
refinedweb
250
57.98
Join the community to find out what other Atlassian users are discussing, debating and creating. Hello :) Im wondering is there a way to send post http request via the scriptrunner postfunction feature? The current script i have is JAVA: OkHttpClient client = new OkHttpClient(); MediaType mediaType = MediaType.parse("application/json"); RequestBody body = RequestBody.create(mediaType, "{\r\n\"username\":\"USERNAME\",\r\n\"password\":\"PASSWORD\"\r\n}"); Request request = new Request.Builder() .url(" ADDRESS/authentication") .post(body) .addHeader("content-type", "application/json") .addHeader("cache-control", "no-cache") .addHeader("postman-token", "dabd56ae-0efb-db67-72fe-657b5682a466") .build(); Response response = client.newCall(request).execute(); Anyone care to help to convert into groovyscript. Many thanks! Pon Hi Pon: Doing a search on this I found a community question that might be helpful to you. Here you will find a descriptive example on how to do this with Jira and Scriptrunner. If this isn't working for you, give me a thorough example on what you want to do and I will try to provide a custom script for you. Cheers! Dyelamos Hey Daniel, Thanks for the reply! Im quite new to this, basically what im trying to do is call rest api to return a sucessful response which: i attempted the above, i dont see where i can place the auth/payload when sending the request, i tried searching also around and gotten around this far: package com.hybris.activity import groovyx.net.http.ContentType import groovyx.net.http.RESTClient def activitiRestClient = new RESTClient("") activitiRestClient.auth.basic "USERNAME", "PASSWORD" def response = activitiRestClient.get( path: "/devices" ) println response.data.toString(2) I have a python 3 code which is as follows (and works in python): import http.client conn = http.client.HTTPConnection("10.150.14.164:8090") payload = "{\r\n\"username\":\"USERNAME\",\r\n\"password\":\"PASSWORD\"\r\n}" headers = { 'content-type': "application/json", 'cache-control': "no-cache", 'postman-token': "aace72a6-d5d2-1f44-23bf-4a10b6306b4c" } conn.request("POST", "/authentication", payload, headers) res = conn.getresponse() data = res.read() print(data.decode("utf-8")) which im trying to convert over to groovy script. Any guidance would be very helpful :) Thanks, Pon Hi Pon: In this link you will find official documentation as to how to use httpBuilder to send a post request. That is the utility that we use to build post request in Adaptavist. Cheers DYelamos Thanks Daniel :) got it working. It was so simple all along: import groovyx.net.http.HTTPBuilder import static groovyx.net.http.ContentType.* import groovyx.net.http.ContentType import static groovyx.net.http.Method.* import groovy.json.JsonSlurper import net.sf.json.groovy.JsonSlurper def http = new HTTPBuilder('') http.request(POST) { requestContentType = ContentType.JSON body = [username: 'USERNAME', password: 'PASSWORD'] response.success = { resp, JSON -> return JSON } response.failure = { resp -> return "Request failed with status ${resp.status}" } } Glad you did mate! Thanks for the upvote and good luck in future programming! Cheers Strange! This code gives me errors on multiple lines starting with 'requestContentType' and 'response' are not defined. def url = '' def post = new HttpPost(url) post.setEntity(new StringEntity(jsonRequest, 'UTF-8')) // execute def client = HttpClientBuilder.create().build() def response = client.execute(post) def bufferedReader = new BufferedReader(new InputStreamReader(response.getEntity().getContent())) def jsonResponse = bufferedReader.getText() I used this. I think that groovy code is syntactically incorrect. You must use def for inferred types. @Daniel Yelamos [Adaptavist] Can you help me with the GET method to call a rest api and print the response HI @Daniel Yelamos [Adaptavist] I have a Similar requirement can you please help me With REST Call I Need to Make a REST call Depends Upon the Value Selected in the Custom Field I have posted my requirement here Please go through this once please Can you please me Thanks, Kumar @Daniel Yelamos [Adaptavist] I have the below code, which I want to run in groovy via scriptrunner curl -x "proxyServer:port" -k "http url where I want to post the data" -d '{ "title" : "API Change", "description": "Add loads of DNS records", }' Could you please help me so that I can write the same code in groovy. Regards, Sugand.
https://community.atlassian.com/t5/Marketplace-Apps-Integrations/Send-post-http-request-via-scriptrunner-post-function/qaq-p/598870
CC-MAIN-2019-39
refinedweb
678
50.94
The Visual Studio 2008 CSharp samples include several valuable tools that LINQ developers can use to help expedite the development process. One of the is the Expression Tree Visualizer. This tool works in both Visual Studio Express and the other versions of Visual Studio that support C# development. Note: If you are unfamiliar with expression trees, then you might want to view this post. You can access the Expression Tree Visualizer by choosing Help | Samples from the Visual Studio 2008 menu system. From there you will be able to access links to the updated online version of the samples. After you unzip the samples you should navigate to the LinqSamples directory, open the ExpressionTreeVisualizer project, and build it. By default, F6 is the build key in Visual Studio 2008. After the build process is complete a library called ExpressionTreeVisualizer.dll will be created in the following directory: …\LinqSamples\ExpressionTreeVisualizer\ExpressionTreeVisualizer\bin\Debug Copy this DLL to your Visualizers directory. If the directory does not exist, you can create it inside the default user directory for Visual Studio. That directory would typically be located here: …\Documents\Visual Studio 2008\Visualizers One Windows XP system, the path might look like this: …\My Documents\Visual Studio 2008\Visualizers The Code Snippets, Projects and Templates directories also appear in this same location. You can also find the location of this directory by selecting Tools | Options | Projects and Solutions | General from the Visual Studio 2008 menu. You may need to restart Visual Studio after copying the ExpressionTreeVisualizer library to the Visualizers directory. You can now test the visualizer by opening a project that creates an expression tree such as the Expression Tree Basics sample from the Code Gallery. Alternatively, you can create a default console application, add System.Linq.Expressions to the using statements, and add two lines of code to the program body: using System; using System.Linq.Expressions; namespace ConsoleApplication107 { class Program { static void Main(string[] args) { Expression<Func<int, int, int>> expression = (a, b) => a + b; Console.WriteLine(expression); } } } If you are using the code shown above, set a breakpoint on the WriteLine statement and run the program. Hover your mouse under the variable expression, as shown in Figure 1. A small Quick Info box with a magnifying glass will appear. Figure 1: Examining the variable expression at run time in Visual Studio with Quick Info. If you click on the magnifying class, an option to view the Expression Tree Visualizer will appear, as shown in Figure 2. Figure 2: By clicking on the magnifying class you can get access to the Expression Tree Visualizer. If you select the visualizer option then it will appear and you can view the nodes of your expression tree, as shown in Figure 3. Figure 3: A visualization of the expression tree for a simple lambda expression. If you are working with a LINQ to SQL query, then can use the same technique to view an expression tree. You will, however, need to take one extra step because the expression tree is a member of the IQueryable variable returned from a typical LINQ to SQL query expression. Consider this simple LINQ to SQL query: var query = from c in db.Customers where c.City == "London" select c; At runtime, you can get Quick Info on the variable query, much as we did on the variable expression earlier in this post. Figure 4 shows how it looks: Figure 4: Hover the mouse over the variable query, then drill down to view the expression tree which is found in the queryExpression field. (Double click on this image to see a full-sized version.) When you select the magnifying class, you will be presented with an option to pop up the Expression Tree Visualizer, which will look like the image shown in Figure 3, but with different data. Summary This post describes how to use the Expression Tree Visualizer that ships with the Visual Studio 2008 samples. See also: You’ve been kicked (a good thing) – Trackback from DotNetKicks.com Thanks for this…. I really needed it 😀 Here is the latest in my link-listing series .  Also check out my ASP.NET Tips, Tricks and Tutorials IEnumerable Tales: Expression Tree Visualizer I’m not a stupid man but I can’t for the life of me find the updated samples you’re talking about downloading. How many clicks to get to the bottom of an msdn blog post? The world may never know . . . You have been kicked a good thing To download the samples, you need to click on the "c# samples" link, and then click the ‘Releases’ tab. This is rather non-intuitive. At least, I was confused! Is there a way to construct an expression tree from the string returned by .ToString() ? Thanks! Welcome to the forty-first Community Convergence. The big news this week is that we have moved Future Propecia and online drugs stores. Propecia side effects fre. Propecia reverse impotence. Propecia post side effect. Propecia uk order online. I am as well interested in knowing of if there is a way to construct an expression tree from the string returned by .ToString() ? in other words: I want to offer to user a UI that would allow them to construct a lambda expression string (e.g.: "y => y == ‘5’", in my case TResult of the Func would always be bool) then store it into a database as a business rule. How can i then reconstuct the corresponding Expression<Func<object, bool>> from my string? Thanks! Taking the Magic out of Expression Taking the Magic out of Expression Declare your ObjectDataSource using lambda expressions Hi Chalie, I get this exception whenever I click on the magnifier: Function evaluation disabled because a previous function evaluation timed out. You must continue execution to reenable function evaluation. And no window is shown which show the expression tree. Thanx Hey, calvert is miss-guiding us. I placed it in the documentvs 2008…debugger… folder but every time i run it throws exception. But from Scott Gu’s blog i found this location for the visualizers: Program FilesMicrosoft Visual Studio 9.0Common7PackagesDebuggerVisualizers Now its working. Thank you Calvert. It appears that the current visualizer does not work with 2010 (beta 1). Is there a revised version for 2010? It appears to be missing from the C# samples for 2010 as well. Thanks for the tip! Installed the Visualiser on VS 2008, and it works a treat. Can you give me new link for expression tree visualizer for vs2008? because i can't find the file
https://blogs.msdn.microsoft.com/charlie/2008/02/13/linq-farm-seed-using-the-expression-tree-visualizer/
CC-MAIN-2016-30
refinedweb
1,094
56.35
11 February 2010 04:37 [Source: ICIS news] SINGAPORE (ICIS news)--Korea Petrochemical Industry Co (KPIC) plans to keep operating its 470,000 tonne/year naphtha cracker in Onsan at maximum capacity, even as ethylene prices started to slide in pre-Lunar New Year holidays, a company source said on Thursday. The cracker would run at full rate in March, steady to levels in February and January, he said. “Ethylene prices have weakened but the margins are still healthy. So the Korean crackers are eager to operate at 100%,” said the source. Margins have toned down to around $640/tonne (€467/tonne) from nearly $700/tonne two weeks ago, tracking a softer downstream petrochemicals market, traders said. Asian ethylene prices dropped to below $1,350/tonne CFR (cost and freight) NE (northeast) Asia from close to $1,400/tonne ?xml:namespace> But the margins were considered healthy, and were wide enough to cover the base cost of $250-300/tonne, they added. Another KPIC source said the cracker may undergo a turnaround in October, though the plans were tentative for now. Asia’s second-half March open-spec naphtha rose to $686.50-689.50/tonne CFR (cost and freight) Japan on Thursday midday trade, up $8.50-9.50/tonne CFR Japan a day ago as US crude futures marched towards $75/bbl, based on ICIS pricing data. KPIC bought 300,000 tonnes of full range term naphtha for its October 2009-September 2010 requirement, at discounts of $1.00-2.00/tonne
http://www.icis.com/Articles/2010/02/11/9333683/s-koreas-kpic-to-run-naphtha-cracker-at-full-tilt-in-march.html
CC-MAIN-2014-35
refinedweb
254
61.26
Here is the question: Write a program that creates a queue of Strings and allows the input of an integer value n, followed by n names to be placed in the queue. Then the program should execute a loop that performs the following: 1 It displays the name in the front of the queue and asks the user to specify a number of names to be deleted. The user specifies a value and it deletes that number of names. 2 Each name deleted is displayed on the screen. 3 The two steps are continued until the queue is empty. My problem here is that when i run the code i enter 3 names. But when it comes to delet the names if i enter 3 all the will be delet and the program works correctly but if i enter 1 so that i try to delete 1 by one it shows me the same name that is being deleted each time and when the last name is deleted the program stop working. Any help how i could correct it? here are my codes: #include <iostream> #include <queue> #include <string> using namespace std; int main() { int n,b=1; string x[10000]; int i,c; cout<<"Enter the number of names:"; cin>>n; queue <string> names; cout<<"Enter the names to be placed in the queue:"<<endl; for(i=0;i<n;i++){ cout<<b<<"."; b++ ; cin>>x[i]; names.push(x[i]); } for(i=0;i<n;i++){ cout<< "The person at the front of the queue is " <<names.front()<<endl; cout<< "Enter a number of names to be deleted: "; cin>>c; for(i=0;i<c;i++){ names.pop(); cout<< "The person got deleted is " <<x[i]<<endl; } } cout << "There are currently " << names.size () << " people in the queue" << endl; return 0; }
https://www.daniweb.com/programming/software-development/threads/477559/queue-help
CC-MAIN-2018-51
refinedweb
302
77.57
. The project we are about to dive into is from the Android Developers website called Notepad v2 with modifications to make it geared more towards our RandomQuotes Project. We are using an already made example then modifying it because it covers more advanced ground on the GUI and database sides which is excellent for beginners and great for more advanced users to push on with. Since the items will be displayed to us in a ListView we can no longer entitle this project RandomQuote but will instead use EnhancedQuotes as our project name. Just to be clear, we will be creating a whole new project instead of copying another one over. Here is the required information below to make the project Project Name: EnhancedQuotes Build Target: Android 1.5 Application Name: EnhancedQuotes Package Name: com.gregjacobs.enhancedquotes Create Activity: QuotesMain Min SDK Version: 3 After your project is created we can start some more advanced GUI work and integrate that with some update and delete statements. At this point, I’d like to start dividing our code into different files based on the need of the application. This is important in modern programming because it allows us to stay organized and execute functions for different screens or layouts efficiently and effectively. For this project we are going to split our code into 3 .java files and we are going to have 3 different layout files as well. We will start off with the basics by creating a new class file in our package com.gregjacobs.enhancedquotes called QuotesDBAdapter. This will contain our database code but instead of using the previous database file we created, we will start a new one. Lets look at how Google does it and see whats available other than Raw Queries from the previous tutorial. package com.gregjacobs.enhancedquotes; import java.util.Random; import android.content.ContentValues; import android.content.Context; import android.database.Cursor; import android.database.SQLException; import android.database.sqlite.SQLiteDatabase; import android.database.sqlite.SQLiteOpenHelper; import android.util.Log; public class QuotesDBAdapter { static Random random = new Random(); public static final String KEY_QUOTES = "quotes"; public static final String KEY_ROWID = "_id"; private static final String TAG = "QuotesDbAdapter"; private DatabaseHelper mDbHelper; private SQLiteDatabase mDb; /** * Database creation sql statement */ private static final String DATABASE_CREATE = "create table tblRandomQuotes (_id integer primary key autoincrement, " + "quotes text not null);"; private static final String DATABASE_NAME = "Random"; private static final String DATABASE_TABLE = "tblRandomQuotes"; private static final int DATABASE_VERSION = 2; private final Context mCtx;); } } /** * Constructor - takes the context to allow the database to be * opened/created * * @param ctx the Context within which to work */ public QuotesDBAdapter(Context ctx) { this.mCtx = ctx; } public QuotesDBAdapter open() throws SQLException { mDbHelper = new DatabaseHelper(mCtx); mDb = mDbHelper.getWritableDatabase(); return this; } public void close() { mDbHelper.close(); } Looking at the code above, all of the imports should look familiar as well as everything leading up to this point. This is standard database code to implement in your Android applications. In the code below we start getting into separating our SQL statements into sections and using the functions that were stated in the previous post. public long createQuote(String quotes) { ContentValues initialValues = new ContentValues(); initialValues.put(KEY_QUOTES, quotes); return mDb.insert(DATABASE_TABLE, null, initialValues); } Looking at the insert statement the first variable would be the database table we are inserting into, the next variable is if we have a null set of values we would enter that here and the last is the values being inserted into the table. public boolean deleteQuote(long rowId) { return mDb.delete(DATABASE_TABLE, KEY_ROWID + "=" + rowId, null) > 0; } The delete statement holds three values in its method. The first variable to enter would be the database table, the second being the where statement if there was one. In this case we will need it but for some instances you may not. The last variable is the Where statement arguments but if you included them in the previous part, that will work too. It is good to note that putting “?” in your where statement and defining them in the third variable can be done as well. public Cursor fetchAllQuotes() { return mDb.query(DATABASE_TABLE, new String[] {KEY_ROWID, KEY_QUOTES}, null, null, null, null, null); } public Cursor fetchQuote(long rowId) throws SQLException { Cursor mCursor = mDb.query(true, DATABASE_TABLE, new String[] {KEY_ROWID, KEY_QUOTES}, KEY_ROWID + "=" + rowId, null, null, null, null, null); if (mCursor != null) { mCursor.moveToFirst(); } return mCursor; } FetchAllQuotes runs a query against the database and grabs the id and the quotes field and return all the results to a cursor. The first variable is the database table, the second is the columns the statement should return, third being the rows from the columns to return if there are any, fourth being the selection arguments, fifth is the group by SQL function, sixth is a having SQL statement, and seventh is the order by SQL function. For this we are only filling the first two and the rest can be null. The fetchQuote uses the same function but specifies what row its looking for. public boolean updateQuote(long rowId, String title) { ContentValues args = new ContentValues(); args.put(KEY_QUOTES, title); return mDb.update(DATABASE_TABLE, args, KEY_ROWID + "=" + rowId, null) > 0; } For the update statement we still need the database name, the new variables for any given row and finally the row number in which to update. public int getAllEntries() { Cursor cursor = mDb.rawQuery( "SELECT COUNT(quotes) FROM tblRandomQuotes", null); if(cursor.moveToFirst()) { return cursor.getInt(0); } return cursor.getInt(0); } public String getRandomEntry() { int id = 1; id = getAllEntries(); int rand = random.nextInt(id) + 1; Cursor cursor = mDb.rawQuery( "SELECT quotes FROM tblRandomQuotes WHERE _id = " + rand, null); if(cursor.moveToFirst()) { return cursor.getString(0); } return cursor.getString(0); } } These two functions above were mentioned last post and will be used to generate a random quote on the screen using a Toast. Next we are going to cover all of the .xml files starting with the strings.xml file. this will contain the strings for all three of our layout XML files. The code should be pretty straight forward with already having done two or three examples. The strings.xml is as follows: <?xml version="1.0" encoding="utf-8"?> <resources> <string name="app_name">Quotes Tracker</string> <string name="no_quotes">No Quotes Yet</string> <string name="menu_insert">Add Quote</string> <string name="menu_delete">Delete Quote</string> <string name="title">Quote:</string> <string name="confirm">Confirm</string> <string name="edit_quotes">Edit Quote</string> <string name="genRan">Generate Random Quote!</string> </resources> After the strings.xml file we are going to move onto row.xml in the layout folder. It is not created yet so we are going to create a new XML file. We do this by right clicking on the layout folder and navigating to New and then to Other…. After this we will scroll down until we find the XML folder. Open it and double click on the file called XML. Change the name of the XML file from NewFile.xml to row.xml. The file will be created and the console may come up and present you with an error but we will fix that in a second. Now we get to the code we are going to insert into the XML file: <?xml version="1.0" encoding="utf-8"?> <TextView android: The source code for this layout is a label or TextView that will insert multiple times into the main.xml for every entry we have. We will move onto the main.xml to show you how this is done. <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns: <ListView android: <TextView android: <Button android: </LinearLayout> We are using a LinearLayout above and a ListView and a single Label that displays “No Quotes!” if the database is empty. Even though the items in the database are shown we will want to generate one randomly and that is what the button is doing at the bottom of the ListView. We can now move onto the edit.xml here which a new XML file (same spot as last time) will need to be created: <: <EditText android: </LinearLayout> <Button android: </LinearLayout> Above we have one linear layout after another and that is for a very specific reason. To be able to present a neat and clean layout we must use the first linear layout to align everything vertically and fill the parent window. After that, the second linear layout will align the textbox and label horizontally. If the two linear layouts were not present the textbox would be the size of the current screen instead of the neat one line layout we have now. Other than that, the layout is pretty is basic and there should be no trouble here. Next we are going to create a new .java file in our package com.gregjacobs.enhancedquotes called QuoteEdit and it will contain code to accept any edits we may do on our items. Below is the code and comments on the important stuff you may not know, although it should look pretty familiar because we have used almost all of these functions and methods in previous posts. Here is the code for QuoteEdit.java: package com.gregjacobs.enhancedquotes; import android.app.Activity; import android.content.Intent; import android.os.Bundle; import android.view.View; import android.widget.Button; import android.widget.EditText; public class QuoteEdit extends Activity { private EditText mQuoteText; private Long mRowId; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.edit); mQuoteText = (EditText) findViewById(R.id.title); Button confirmButton = (Button) findViewById(R.id.confirm); mRowId = null; Bundle extras = getIntent().getExtras(); if (extras != null) { String title = extras.getString(QuotesDBAdapter.KEY_QUOTES); mRowId = extras.getLong(QuotesDBAdapter.KEY_ROWID); if (title != null) { mQuoteText.setText(title); } } All above is pretty standard until you get to the Bundle extras = getIntent().getExtras(); part of the code. This code is pulling from the QuotesMain.java using an Intent. Now some beginners may be wondering what an Intent is. An Intent is a passive object to hold data that can pass between applications. In human-speak, its the glue that allows us to get information from the QuotesMain.java file to the QuotesEdit.java file efficiently and easily. Another new term would be a Bundle. A bundle allows use to map strings to objects such as the Intent we just talked about. So with the Bundle entitled extras, we are able to pull the data from the main .java file over to QuotesEdit.java file and vice versa. confirmButton.setOnClickListener(new View.OnClickListener() { public void onClick(View view) { Bundle bundle = new Bundle(); bundle.putString(QuotesDBAdapter.KEY_QUOTES, mQuoteText.getText().toString()); if (mRowId != null) { bundle.putLong(QuotesDBAdapter.KEY_ROWID, mRowId); } Intent mIntent = new Intent(); mIntent.putExtras(bundle); setResult(RESULT_OK, mIntent); finish(); } }); } } The Bundle above will package the current text in the textbox with the original ID of the object and send it back over the QuotesEdit.java to the QuotesMain.java. We are now ready to move onto QuotesMain.java where we are going to pull everything we have done so far together. This code will implement the long press on items as well as utilizing the menu button on any phone to bring up an add function. Here is the code to utilize in QuotesMain.java: package com.gregjacobs.enhancedquotes; import android.app.ListActivity; import android.view.View.OnClickListener; import android.content.Context; import android.content.Intent; import android.database.Cursor; import android.os.Bundle; import android.view.ContextMenu; import android.view.Menu; import android.view.MenuItem; import android.view.View; import android.view.ContextMenu.ContextMenuInfo; import android.widget.ListView; import android.widget.Button; import android.widget.SimpleCursorAdapter; import android.widget.Toast; import android.widget.AdapterView.AdapterContextMenuInfo; Above we have a few new imports to be able to use the more advanced items in this project such as intents, menu’s and menuitem, listview and simplecursoradapters. These will all be explained as they come up. public class QuotesMain extends ListActivity { private static final int ACTIVITY_CREATE=0; private static final int ACTIVITY_EDIT=1; private static final int INSERT_ID = Menu.FIRST; private static final int DELETE_ID = Menu.FIRST + 1; private QuotesDBAdapter mDbHelper; private Cursor mNotesCursor; public Button button; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); mDbHelper = new QuotesDBAdapter(this); mDbHelper.open(); fillData(); registerForContextMenu(getListView()); button = (Button)findViewById(R.id.genRan); button.setOnClickListener(mAddListener); } We are making variables for creating, editing, inserting and deleting and making them static because they are not going to change. In the onCreate function we utilize fillData() which will be defined below. As well you will notice that we register the listview items in the context menu and set a listener for the button. A context menu is best described as kind of a pop-up menu and this will be utilized when we want to delete a item within the listview. private OnClickListener mAddListener = new OnClickListener() { public void onClick(View v) { //long id1 = 0; // do something when the button is clicked try { String quote = ""; quote = mDbHelper(); } } }; private void fillData() { // Get all of the rows from the database and create the item list mNotesCursor = mDbHelper.fetchAllQuotes(); startManagingCursor(mNotesCursor); // Create an array to specify the fields we want to display in the list (only TITLE) String[] from = new String[]{QuotesDBAdapter.KEY_QUOTES}; // and an array of the fields we want to bind those fields to (in this case just text1) int[] to = new int[]{R.id.text1}; // Now create a simple cursor adapter and set it to display SimpleCursorAdapter notes = new SimpleCursorAdapter(this, R.layout.row, mNotesCursor, from, to); setListAdapter(notes); } The button function above is exactly like the previous one that is used to generate a random quote from our list. The new method fillData() as mentioned above is going to be used to get all of the quotes and bind the ID and the actual quote together and add them to the listview using the SimpleCursorAdapter. The SimpleCursorAdapter is used to bind bind columns in a returned cursor to any text we place on the screen. @Override public boolean onCreateOptionsMenu(Menu menu) { super.onCreateOptionsMenu(menu); menu.add(0, INSERT_ID,0, R.string.menu_insert); return true; } @Override public boolean onMenuItemSelected(int featureId, MenuItem item) { switch(item.getItemId()) { case INSERT_ID: createNote(); return true; } return super.onMenuItemSelected(featureId, item); } In the first function above called onCreateOptionsMenu() we are adding the ability to add an item to the database using the menu press option that will bring up dialog asking if we would like to do this. If this completes successfully then the statement will return true. The one below it checks to see if an item has been pressed in the menu. If it has it uses a switch statement to check the value that we defined above. If it matches then we create a note which is defined below. @Override public void onCreateContextMenu(ContextMenu menu, View v, ContextMenuInfo menuInfo) { super.onCreateContextMenu(menu, v, menuInfo); menu.add(0, DELETE_ID, 0, R.string.menu_delete); } @Override public boolean onContextItemSelected(MenuItem item) { switch(item.getItemId()) { case DELETE_ID: AdapterContextMenuInfo info = (AdapterContextMenuInfo) item.getMenuInfo(); mDbHelper.deleteQuote(info.id); fillData(); return true; } return super.onContextItemSelected(item); } private void createNote() { Intent i = new Intent(this, QuoteEdit.class); startActivityForResult(i, ACTIVITY_CREATE); } The function above is used to register the context menu and give the option to delete items using the menu.add function as seen above as well as here. If the context menu item Delete is pressed then the database helper will delete the quote based on the ID. The createNote() function uses an intent to pass the application over to the QuoteEdit file and load a new screen and when done a new intent will send the completed data back over here so we can add it to the listview. @Override protected void onListItemClick(ListView l, View v, int position, long id) { super.onListItemClick(l, v, position, id); Cursor c = mNotesCursor; c.moveToPosition(position); Intent i = new Intent(this, QuoteEdit.class); i.putExtra(QuotesDBAdapter.KEY_ROWID, id); i.putExtra(QuotesDBAdapter.KEY_QUOTES, c.getString( c.getColumnIndexOrThrow(QuotesDBAdapter.KEY_QUOTES))); startActivityForResult(i, ACTIVITY_EDIT); } If an item from the listview is pressed the function above is loaded to initialize an intent and put the information into the intent and pull it over to the QuoteEdit class to be edited. When completed the QuoteEdit class will send the completed data back over and we can continue to add, edit or delete more items. @Override protected void onActivityResult(int requestCode, int resultCode, Intent intent) { super.onActivityResult(requestCode, resultCode, intent); Bundle extras = intent.getExtras();; } } } The method above takes the result of an activity and uses the result to utilize a specific method. The result in this case would either be creating a new quote or editing an existing one. The basis of this switch statement is to utilize the database helper and either insert data or update data within the database. We now have one more file to go over before we could run our application on the emulator. This would be the AndroidManifest.XML file and that will control what is registered and what runs, it is essentially the heart of the program and we need it to recognize that we have 2 parts to our application. Here is the code for the AndroidManifest: <?xml version="1.0" encoding="utf-8"?> <manifest xmlns: <application android: <activity android: <intent-filter> <action android: <category android: </intent-filter> </activity> <activity android:</activity> </application> </manifest>. This just uses it as a reference and will use the specified build target when building your application. The application should build and you will be able to try out the more advanced features of Android programming. The possibilities are endless with the knowledge you learn but what if your database/database code is not working? Thats what the Dalvick Debug Monitor Server (DDMS) is for. When the emulator is running we are able to switch over to the DDMS by going to the top right of your screen and pressing the >> and then clicking on DDMS. If you are new to Android Development this new screen will be very confusing for you. What we are going to take out of going to the DDMS for right now is the ability to add and take from our emulator items which may be of interest. For this particular tutorial we are going to grab a database from the running emulator. Before we get started we will need to download some software I find very useful for SQLite developers. This being SQLite Database Browser (SDB). This software will allow you to open SQLite databases and explore the contents, even modifying them through SQL statements. Once the file is downloaded find the folder and click on the .exe to start it up. Leave this program up and we will get back to it later. To be able to put them into the SDB we need to pull them off the emulator. To do this we have to delve into the running emulator and find the database we want. It is key to remember that databases are application specific so we will need to find the package name and the database will be under a Database Folder. When in DDMS goto the devices tab and click on our emulator. then in the middle of the program should be a tab called File Explorer. Once File Explorer has been clicked we will now see three folders (maybe more depending on what you do with the device) called data,sdcard and system. We will leave system and sdcard alone for right now as we are going to use the data folder so open it. Once open, navigate to another folder called data and open it too. We are now presented with the package names with everything installed on our emulator. Navigate to com.gregjacobs.enhancedquotes and open it. Once open the two folders that appear should be databases and lib. Open databases folder and take the file called Random. Now to be able to take this file we are going to click on it once then navigate to the top of the tab and press the button that looks like a floppy disc with an arrow pointing to the left. Once this icon is clicked a dialog box will appear asking where you want to save the selected file. Choose an easy to locate place and click save. One the file has been taken from the emulator we are going to go back to SDB and click the big open button, find our file we saved and click open. Once the file is open we are able to see the structure of the database and navigate to browse the data. to do this we are going to click on the tab called Browse Data and in the dropdown that says table beside it, we are going to choose tblRandomQuotes. The data in the table will now appear and now you know where to find your data if you ever need to modify something an put it back onto the emulator. The SDB is also good for testing out SQL queries if you are unsure of what the data returned would be. This will be an invaluable tool if you do database applications in Android. Here are the files from my project for comparison: AndroidManifest.xml | edit.xml | main.xml | QuoteEdit.java | QuoteMain.java | QuotesDBAdapter.java | row.xml | strings.xml Now that you have an advanced understanding of some of the GUI options available to you and Database code covered in more detail in this tutorial, you are ready to start making some applications that have a little more depth than just button clicks. With the understanding you have of Intents and Bundles, you can make your programs well rounded and divide your code and layouts to match what your looking to make. If anyone has an idea that they have implemented since following this tutorial feel free to send them to me us so we can check out what you have learned. The next tutorial will cover designing the statistics tracker and using DroidDraw to develop some of the UI. Until the next tutorial, Happy Hacking! Articles Used For Reference: Google Notepad Tutorial – NotepadV2 Continue on to Part 5: DroidDraw & Information Tracker Complete 40 thoughts on “Android Development 101- Part 4:Advanced Database/GUI Code and DD.” this section just caught my eye – there are at least the sentences that don’t make any sense… It might be time to consider multiple page articles. ;) It might be time to avoid software howtos. ;) ohh can an upcoming part include barcodes and qr code ? would be a nice add on to this Please can you make the RSS feed only show a synopsis of articles. Many other sites do it and although most articles are worth reading all the way through, I visit the site to read them, I don’t want the whole thing appearing in my RSS feed. Mowcius Just came through to say the same as mowcius – it makes scrolling through an RSS feed fairly arduous when an article this long is shown in its entirety. Sorry guys. I just changed the wordpress settings to show only a summary in the feeds. I see no difference though. We’ll figure it out. @ AS & Mowcius: It’s called shortcuts. I’m sure there is a “next article” key for your reader. Google Reader: J to go to next article. K to go back. Simple. Screw this. I’m waiting for my AppInventor invite to arrive. How about making two feeds, one with the short version of each article and one with the complete article as before? I enjoyed being able to read articles with google reader instead of having to click on each article. I even read articles I wouldn’t have read otherwise. I have an android phone and I love it (posting from it now), and I enjoy writing apps for the platform. However, this Android SDK tutorial series (and software tutorials in general) have no place on HaD. There is no shortage of information on Androis app development. This isn’t challenging, impressive, or even remotely interesting. Put simply, it lacks hack value. I sincerely hope that HaD does not continue in this direction. This content is far more appropriate for a site like Lifehacker. “Sorry guys. I just changed the wordpress settings to show only a summary in the feeds. I see no difference though. We’ll figure it out.” Nooo? WTF? Because two guys whined? What about the rest of us, who loved the full articles? You don’t have to use a feature just because it’s there :) @Odd Rune, It was a test run, the feed should be back to normal. Not sure how to make 2 feeds, so it will stay this way for a while. Well, I like it.. Nice work. I’m a C# .net dev and this is helpful for explaining the diffs in the IDE as well as how android thinks. @r_d, qwerty and everyone else that wants to bitch about software how-to’s/ I personally enjoy the content. It’s nice to have something original here instead of just a rehash of makezine. That said, you can’t do software hacking without the tools to do so any more than you can solder with the tools and education to do that. I think a how-to solder article would be just as appropriate. Just because it teaches a generalized base skill instead of a specific hack doesn’t make it any less valid for the site. @Jonas – Sorry I didn’t notice it before but I had enclosing what I wrote and WordPress omitted them thinking they were functions. Here is the before and after shot of revisions:. After:. i tried executing the above code,i am getting java.Lang.IllegalArgumentException What’s happening with the following code? It looks like you’re returning cursor.getInt(0) whether the statement is true or false. Why have the if statement at all then? if(cursor.moveToFirst()) { return cursor.getInt(0); } return cursor.getInt(0); @Michael – We use the moveToFirst() method as a precaution to move the cursor to the first row. If the cursor is already there then we wouldn’t go through the if statement. Try taking it out and see if it works and let me know :) I am always overcautious when writing code (saves me from going back later and rewriting code) but if it works without then I would see no problem doing it without the if statement :) I was getting errors if I deleted a quote and then clicked the “Generate Random Quote” button. The query was trying to select records where the _id no longer existed. I added a new method called getIds and changed the getRandomEntry method so that it makes a random selection from the available _id records. I’m still not sure about the if statement. This example works fine if you do this: cursor.moveToFirst(); return cursor; I do think it’s a good idea to check if cursor.moveToFirst() succeeds, but I don’t know what conditions would cause it to fail or how to handle it gracefully if it does. Anyway, here are my changes: public Cursor getIds() { Cursor cursor = mDb.query(true, DATABASE_TABLE, new String[] {KEY_ROWID}, null,null,null,null,null,null); cursor.moveToFirst(); return cursor; } public String getRandomEntry() { Cursor aCursor = getIds(); int max = aCursor.getCount(); int rand = random.nextInt(max); aCursor.moveToPosition(rand); int rId = aCursor.getInt(0); Cursor cursor = mDb.rawQuery(“SELECT quotes FROM tblRandomQuotes WHERE _id = ” + rId, null); if (cursor.moveToFirst()) { return cursor.getString(0); } else { return “MoveToFirst Failed! Handle this somehow!”; } } I’m working on an application in which I have about 110mb of text data I want to be able to provide to users of the application. I tried putting it in a json file but found that there is an ~16mb limit for my application (at least that is where the emulator barfed). Any ideas or suggestions on how to be able to display the data without running out of memory? I was partially thinking of using multiple json files and deleting old data when I need to bring in new data, but still seems like it’ll be a PITA; especially since I’ll want to do searches…. Check out videos at if you want to see more of what you can do with SQLite. There is a recurring bug if you delete any records, there is a chance of getting a CursorIndexOutOfBoundsException. The problem is that you use a Random number generated outside the database table to determine what record _id to select, but the autoincrement key doesn’t re-use numbers once they’ve been deleted. Therefore the higher percentage of deletes you have versus the number of records in your table, the higher chance you have of getting the error. To reproduce: Add a new Quote(first record in db _id=1) Delete that quote Add another new quote(only record in db _id=2) click Generate and you have a 50% chance of getting the error. Anyway, the easy fix is to replace your getRandomEntry() method with the following: Hope this helps. -Brett doh.. 1 bug. change that to : During OnCreate the database gets created and opened but fillData calls fetchAllQuotes which causes an SQL exception “no such column: quotes: , while compiling: SELECT _id, quotes FROM tblRandomQuotes” Any help appreciated, I’ve been trying for two days to resolve this. Got it, the first time I ran the code in the emulator I must have got far enough to create the table “quotestext” as a typo but despite correcting the typo because the table had already been created in the database it persisted and did not get over-written or replace it with the correct table “quotes” if type text. Should have dived into the SQL browser section quicker. Thanks again for the examples. 2 questions: 1) How do I get the delete function to work? I press menu I get add quote but how do I call the delete quote? 2) If I am inside add quote and then press back (the curve arrow icon) I get an error. Am I missing something? Hello, I love your tutorials thanks for doing them they are very helpful… I have completed the database tutorial, and the application works, but when I try to use DDMS with the emulator running, nothing shows up under the devices tab in DDMS, and also there is nothing in the File Explorer. There was when I first tried it but the app crashed due to a type. I was able to locate the proble in the LogCat but the device and files have disappeared, even if I restart Eclipse, the emulator, and DDMS… any suggestions? I would really like to be able to see the database structure in the tool you provided. Thanks again, any help for me is greatly appreciated! Ok tried reseting the ADB and that got the emulator to show under the devices tab, but when I click on the ‘data’ folder under file explorer tab, all items vanish – seems to be buggy for some reason, any clue what to do? So I keep having the same problem after resetting the adb, I can open some of the folders, but everything disappears very quicky before I can grab the database file… this message is display in the console every time: “[2010-10-09 20:58:14 – DeviceMonitor]Sending jdwp tracking request failed! [2010-10-09 20:58:58 – DeviceMonitor]Adb connection Error:An existing connection was forcibly closed by the remote host [2010-10-09 20:58:59 – DeviceMonitor]Connection attempts: 1 [2010-10-09 20:59:11 – DeviceMonitor]Sending jdwp tracking request failed!” okay… sorry to be cluttering up your comment section – I finally grabbed it, I had to add some info to the manifest file like these: and these attributes into the manifest element: android:versionCode=”1″ android:versionName=”1.0″> and that seemed to buy me enough time to quicky race to the db file and pull it, but the adb still disconnected and files disappeared shortly afterwards… from reading forums I am guessing it is a problem with the manifest declarations… Not sure if you want to publish all this stuff but I just thought I’d share, so yeah, FYI… Tks for post! Found and fixed 2 bugs that were causing crash: 1) Cancelling (BACK button) the add quote activity protected void onActivityResult(int requestCode, int resultCode, Intent intent) { super.onActivityResult(requestCode, resultCode, intent); if(resultCode == RESULT_CANCELED) { Log.d(“TAG”, ” Activity “+intent+ ” CANCELED!”); // quick hack return;} 2) Delete a Quote had a problem .. db not open! public boolean onContextItemSelected(MenuItem item) { switch(item.getItemId()) { case DELETE_ID: mDbHelper.open(); AdapterContextMenuInfo info = (AdapterContextMenuInfo) item.getMenuInfo(); mDbHelper.deleteQuote(info.id); fillData(); return true; So i tried out this code, and it seems theres a bug with editing a quote if it has been added since one has been deleted. It looks like the locations are not lining up. I added a quote, deleted it and then added a new one. When I go to edit the new quote, the app crashes, is this because the position says it its the first but the _id is really 2? Thanks for the help Nevermind, found out what I was doing wrong, I moved the edit option into the context menu and wasn’t reading from the database when I wanted to edit, I was trying to use the mNotesCursor, and thats why things weren’t lining up hello, I tried this code but have some small error for below code mDbHelper.createQuote(title); this line createQuote error found so please tell mi what i am do @Override protected void onActivityResult(int requestCode, int resultCode, Intent intent) { super.onActivityResult(requestCode, resultCode, intent); Bundle extras = intent.getExtras(); mDbHelper.open();; } mDbHelper.close(); } } Excellent work! Thanks heaps for putting this up. The code works fine for me except for one little detail. The default text ‘No Quotes!’ that is supposed to be shown when there are no quotes in the database is always showing underneath the quotes on my system. Any idea where this might have come from? I have followed all the tutorial, but when i execute on the emulator the application stop unexpectedly and i need to force close it. I did an execution with the debugger to find at wich line it’s happening. It’s happening in the fillData() function at the line: SimpleCursorAdapter notes = new SimpleCursorAdapter(this, R.layout.row, mNotesCursor, from, to); If someone have a clue for me? I really don’t understand what I did wrong. Hi, Do you have any tutorial on Google maps and gps? Thanks for the helpful tutorial. Hi, Manny thnx for the tutors so far. I cant wait to continue tomorrow ^_^ nice tutorial. but i was getting a force close for the context menu(on long press). solved it by putting mDbHelper.open(); in QuotesMain.java file in OnContextItemSelected method (before mdbhelper.deletequote).
http://hackaday.com/2010/08/02/android-development-101-part-4advanced-databasegui-code-and-ddms/?like=1&source=post_flair&_wpnonce=836876f408
CC-MAIN-2014-52
refinedweb
5,838
65.42
There is a way to extend SAP queries with Extension Fields by simply using Key User Tools. If you need to use those field in the PDI itself you can make use of the “Custom Object References” I tried this with Silverlight; for HTML 5 please look here. - As we use the Key User Tools your Business User should be a Key User. This means the “Application and User Management” work center must be assigned to this user - Go to the UI for which you want to enhance the query. - Switch to the Adaption Mode via the menu - Expand the UI by clicking on “Advanced” so that the individual query parameters are displayed - Switch to Edit Screen via the menu - Now you see on the right side the supported adaptions. Click on the Extension Fields. If your required field is not listed you can create one. - Mark the checkbox Visible for all those field you the query to be extended by. As result these fields appear on the UI. - Finally click on Save or Publish. That’s all, folks. Horst Hello Horst, Does a field added like that to the Advanced Query panel is added to the Queries of the Root node in the SDK? I have extended the Employee BO with a custom field and I would like to add this field to the QueryByIdentificaiotn query, but it seems it does not work as only the Common node is extensible and not the root. Moreover there is no query in the common node. thank you for your attention. Best regards. Jacques-Antoine Ollier Hello Jacques-Antoine, First, this works only with Key User Extension fields, because any Extension Field from SDK may not exist in another tenant in which the Key User content is importet / re-created but no deployment for the resp. solution has happend. Second, as you wrote the Root node of the BO Emplyoee is not extensible. Another point for: No, this won’t work. 🙁 Sorry, Horst Dear Horst schaude, How can i do -> i would like to Display the Approver Name list one by one in purchase Order Form. i try to create Two Field(Approver ,Approver1) In purchase order.xbo and then write query. The Code is Below.(using For each statement) import ABSL; import AP.Common.GDT; var QueryPurch=PurchaseOrder.QueryByElements;// Write the code to get data from QueryByElements; var QueryParams=QueryPurch.CreateSelectionParams();// Set the Selection parameters for query QueryParams.Add(QueryPurch.ID.content,”I”,”EQ”,this.ID.content);// Pass the value in selection parameters var ResultPurch = QueryPurch.Execute(QueryParams);// Execute the Query foreach (var approve in ResultPurch) { if( ResultPurch.Count()>0) { var ApproverParty = ResultPurch.GetFirst().ApproverParty; if(ApproverParty.IsSet()) { var ApproverName = ApproverParty.Party.Name.GetFirst(); if(ApproverName.IsSet()) { this.Approver1 =this.Approver+ApproverName.PartyFormattedName.content; if(!this.Approver.Contains(ApproverName.PartyFormattedName.content)) { this.Approver= ApproverName.PartyFormattedName.content; } } } } } i have two approver 1.krish and 2. Issac but the 2.Issac approver name only Show in Newly created Po.how can i List the approver name 1 & 2 one by one. Hello Yogaraj, If you use the iterator you should use the variable “approve” in the body of the loop instead of direct access like “ResultPurch.GetFirst().” Then you will read also the second approver. HTH, . Horst
https://blogs.sap.com/2015/08/18/how-to-add-extension-fields-to-standard-queries-with-key-user-tools-silverlight/
CC-MAIN-2020-40
refinedweb
544
58.38
After years of work, Mono can now be built out of the dotnet/runtime repository in a .NET 5-compatible mode! This mode means numerous changes in the available APIs, managed and embedding, as well as internal runtime behavioral changes to better align Mono with CoreCLR and the .NET ecosystem. One area with multiple highly impactful changes to the runtime internals is library loading. For managed assemblies, Mono now follows the algorithms outlined on this page, which result from the removal of AppDomains and the new AssemblyLoadContext APIs. The only exception to this is that Mono still supports bundles registered via the embedding API, and so the runtime will check that as part of the probing logic. The managed loading changes are fairly clear and well documented, but unmanaged library loading has changed in numerous ways, some of them far more subtle. Summary of changes - New P/Invoke resolution algorithm - Dropped support for DllMap - Unmanaged library loading defaults to RTLD_LOCAL - Added support for DefaultDllImportSearchPathsAttribute - On non-Windows platforms, Mono and CoreCLR no longer attempt to probe for A/W variants of symbols - Default loader log level changed from INFO to DEBUG, and new log entries added for the new algorithm More detail where appropriate in the sections below. Dropped support for DllMap The new unmanaged loading algorithm makes no mention of DllMap, as Mono has removed its functionality almost entirely in .NET 5. DllMap’s XML config files have have been disabled on every platform out of security concerns. The DllMap embedding APIs are also disabled on desktop platforms, though this may change. In place of DllMap, users are encouraged to utilize the NativeLibrary resolution APIs, which are set in managed code, and the runtime hosting properties, which are set by embedders with the monovm_initialize function. We recognize that this does not sufficiently cover some existing mono/mono scenarios. If the NativeLibrary APIs are insufficient for your use case, please tell us about it! We’re always looking to improve our interop functionality, and in particular with .NET 6 will be evaluating NativeLibrary, so community input would be greatly appreciated. Unmanaged library loading defaults to RTLD_LOCAL A more subtle, yet no less impactful change is that native library loading now defaults to RTLD_LOCAL to be consistent with CoreCLR and Windows, as opposed to our historical behavior of RTLD_GLOBAL. What this means in practice is that on Unix-like platforms, libraries are no longer loaded into a single global namespace and when looking up symbols, the library must be correctly specified. This change prevents symbol collision, and will both break and enable various scenarios and libraries. For more information on the difference, see the dlopen man page. For an example: historically in Mono on Linux, it was possible to load library foo containing symbol bar, and then invoke bar with a P/Invoke like so: // note the incorrect library name [DllImport("asdf")] public static extern int bar(); This will no longer work. For that P/Invoke to function correctly, the attribute would need to use the correct library name: [DllImport("foo")]. A lot of code in the wild that was using incorrect library names will need to be updated. However, this means that when loading two libraries containing the same symbol name, there is no longer a conflict. There have been some embedding API changes as part of this. MONO_DL_MASK is no longer a full mask, as MONO_DL_GLOBAL has been introduced to specify RTLD_GLOBAL. If both MONO_DL_LOCAL and MONO_DL_GLOBAL, are set, Mono will use local. See mono/utils/mono-dl-fallback.h for more info. This also means that dynamically linking libmonosgen and attempting to resolve Mono symbols from dlopen(NULL, ...) will no longer work. __Internal has been preserved as a Mono-specific extension, but its meaning has been expanded. When P/Invoking into __Internal, the runtime will check both dlopen(NULL) and the runtime library in the case that they differ, so that users attempting to call Mono APIs with __Internal will not have those calls break. Added support for DefaultDllImportSearchPathsAttribute Mono now supports the DefaultDllImportSearchPathsAttribute attribute, which can be found in System.Runtime.InteropServices. In particular, passing DllImportSearchPath.AssemblyDirectory is now required to have the loader search the executing assembly’s directory for native libraries, and the other Windows-specific loader options should be passed down when appropriate. Fin And that’s it! If you have any further questions, feel free to ping us on Discord or Gitter.
https://www.mono-project.com/news/
CC-MAIN-2021-39
refinedweb
739
51.99
Graph Algorithms: Basic Guide for Your Next Technical Interview This tutorial is about basic graph algorithms and how these can be used to solve some general problems asked in technical interviews. Want to ace your technical interview? Schedule a Technical Interview Practice Session with an expert now! GraphGraph A graph. Types of Graphs:Types of Graphs: UndirectedUndirected An undirected graph is a graph in which all the edges are bidirectional, that is, edges don’t point in a specific direction. DirectedDirected A directed graph is a graph in which all the edges are unidirectional. WeightedWeighted Graph in which edges have some weight or cost assigned to them. In the image below, we can see that there are many paths with some cost associated with them, for example, to go from vertex 0 to vertex 3, there are two possible paths - 0 -> 2 -> 3 with cost = 6 + 1 = 7 - 0 -> 1 -> 3 with cost = 1 + 3 = 4 A graph is called Cyclic if it contains a path that starts and ends on the same vertex; such paths are called Cycle. A Graph containing no cycle is called Acyclic Graph. A Tree is an Acyclic Graph such that there exists exactly one path between any pair of vertices and have N-1 edges with N vertices. Graph Representation:Graph Representation: Mainly, a graph is represented in these two ways. Adjacency Matrix:Adjacency Matrix: Adjacency matrix is a V x V matrix in which entry A[i][j] = 1 if there exists a path from vertex i to vertex j—else it is 0. We can observe easily that if graph is Undirected then A[i][j] = A[j][i], while the same is not necessary in the case of a directed graph. Adjacency matrix can be easily modified to store cost of path A[i][j] = C(i,j) in case of weighted graph. Time complexity of accessing information from Adjacency Matrix is O(1) while Space complexity is O(N^2) Example of above can be seen in the image below Code in cpp showing the use of Adjacency Matrix: #include <bits/stdc++.h> using namespace std; int main() { int nodes, edges; cin >> nodes; // input number of nodes in graph cin >> edges; // input number of edges in graph int graph[nodes+1][nodes+1]; for( int i =0 ; i <= nodes ; i++) for( int j = 0 ; j <= nodes ; j++) graph[i][j] = 0; // initilise all nodes as disconnected from all nodes. for( int i = 0 ; i < edges ; i++) { int u , v ; cin >> u >> v; graph[u][v] = 1; // graph[u][v] = c if c is weight of edge graph[v][u] = 1; // if graph is Directed this line will be omitted. } for( int i = 1 ; i <= nodes ; i++ ) { cout << " Node " << i << " is connected to : " ; for( int j = 1 ; j <= nodes ; j++) { if( graph[i][j] ) cout << j << " "; } cout << endl; } return 0; } Adjacency List:Adjacency List: The adjacency list is another way to represent a graph. It is a collection of lists (A) where A[i] contains vertices which are neighbors of vertex i. For a weighted graph, we can store weight or the cost of the edge along with the vertex in the list using pairs. The space complexity of Adjacency list is O( V + E ). You can see an example in the image below: Code in cpp showing the use of Adjacency List: #include <bits/stdc++.h> using namespace std; int main() { int nodes, edges; cin >> nodes; // input number of nodes in graph cin >> edges; // input number of edges in graph vector< vector<int> > graph( nodes+1); // or we can use graph[nodes+1] ( array of vectors ) also for( int i = 0 ; i < edges ; i++) { int u , v ; cin >> u >> v; graph[u].push_back(v); // we can use a pair of (v,cost) in case of weighted graph graph[v].push_back(u); // if graph is Directed this line will be omitted. } for( int i = 1 ; i <= nodes ; i++ ) { cout << " Node " << i << " is connected to : " ; for( int u : graph[i]) { cout << u << " "; } cout << endl; } return 0; } Basic Graph AlgorithmsBasic Graph Algorithms Depth First Search (DFS):Depth First Search (DFS): Depth-first search is a recursive algorithm that uses the idea of backtracking. Basically, it involves exhaustive searching of all the nodes by going ahead—if it is possible. Otherwise, it will backtrack. By backtrack, the current path is completely traversed, we will select the next path. #include <bits/stdc++.h> using namespace std; vector< int > vis; vector< vector<int> > graph; void dfs( int x) { vis[x] = 1; cout << x << " "; for( int u : graph[x] ) { if( !vis[u] ) dfs(u); } } int main() { int nodes, edges; cin >> nodes ; // input number of nodes in graph cin >> edges ; // input number of edges in graph graph.resize( nodes+1); vis.resize(nodes+1); for( int i =0 ; i <= nodes ; i++) vis[i] = 0; // marking all nodes as not visited in starting for( int i = 0 ; i < edges ; i++) { int u , v ; cin >> u >> v; graph[u].push_back(v); // we can use a pair of (v,cost) in case of weighted graph graph[v].push_back(u); // if graph is Directed this line will be omitted. } dfs(1); return 0; } Breadth First Search (BFS):Breadth First Search (BFS): Start traversing from a selected node (source or starting node) and traverse the graph layerwise. This means it explores the neighbor nodes (nodes which are directly connected to the source node) and then move towards the next level neighbor nodes. As the name suggests, we move in breadth of the graph, i.e., we move horizontally first and visit all the nodes of the current layer then we move to the next layer. In BFS, all nodes on layer 1 will be traversed before we move to nodes of layer 2. As the nodes on layer 1 is closer from the source node when compared with nodes on layer 2. As the graph can contain cycles, we may cross the same node again while traversing the graph. So to avoid processing the same node again, we use a boolean array which marks off the node if it has been processed. While visiting all the nodes of the current layer of the graph, we will store them in such a way that we can visit the children of these nodes in the same order as they were visited. To accomplish the process, we will use a queue to store the node and mark it as visited. Then we will store it in the queue until all its neighbors (vertices which are directly connected to it) are marked. As the queue follows the FIFO order (First In First Out), nodes that were inserted first in the queue will be visited first. Sample Code for BFS: void bfs() { memset( vis , 0 , sizeof(vis ) ); // Initiliseg the all nodes to no-visited, queue< int > q; q.push(0); vis[0] = 1; while( !q.empty() ) { int x = q.front(); q.pop(); cout << x << " "; for( int i = 0 ; i < graph[x].size() ; ++i ) { if( !vis[ graph[x][i] ] ) { q.push( graph[x][i] ); vis[ graph[x][i] ] = 1; } } } } Sample problems where these can be used.Sample problems where these can be used. Problem 1. Finding connected components in a graph. | Solution code - Let's consider each vertex will have some Id; and vertices in the same connected component will have the same Id. - Make an array of size N to store the Id. - Iterate over all the vertices and call the DFS on the vertex if it's not visited (by DFS called before ). - Send a parameter in DFS call which will assign the Id to vertices discovered in the given DFS call. - Now vertices having the same Id are in the same connected component. - Now we can easily manipulate the data available to answer query related to the connected component. Problem 2: Amazon (Link to problem | Solution Code) - It is clear from the problem statement that we need to find the number of connected components consisting of 'X'. - Considering each (i, j) with value 'X' as a node, and nodes right, left, up, and down as it's neighbors (consider corner case of first and last row; and first and last column). - Iterate over all the nodes and call DFS if its value is 'X' and is not visited until now. - Return number of unique Id's allotted (we don't need to store the Ids of each vertex) Problem 3: Google (Link to problem | Solution Code) - First, we find the connected component of 'O' just like in problem 2. - With little observation, we can say that the conversion of 'O' to 'X' will take the place for a connected component altogether. - The 'O' present on the boundary will not be converted. - The 'O's connected to boundary 'O' will not be converted. - So if in a connected component there lies an 'O' which is on the boundary, then this component will not change, else it will. - We can find Ids of a connected component that will change in step 5. - Finally, we convert the elements that are present in the connected component found in step 6. Problem 4: Amazon (Link to problem) - Consider each (i,j) as node and nodes on up, down, left, and right as neighbors. - We iterate over the matrix to search the starting character of word. - Then we can call DFS( 0 , word) Pseudo code dfs( node , index, word) if ( node.char != word[index] ) return false if ( index == word.length() ) return true toreturn = false for nextnode in neighbor of node toreturn |= dfs( nextnode, index+1, word) return toreturn Wrapping upWrapping up Hopefully, by knowing some basic graph algorithms this will be useful to your career not only during your technical interview but as you develop more applications in the future.
https://www.codementor.io/rishabhdaal/graph-algorithms-interview-questions-du1085u8l
CC-MAIN-2018-22
refinedweb
1,628
68.3
Problem with `editor.set_file_contents` not overwriting an existing Dropbox file Is this me, or a bug? Does anyone know a workaround? Here's some code I run from the Python scratchpad: import editor name = 'test.txt' text = 'hello world' root = 'dropbox' # 'local' or 'dropbox' editor.set_file_contents(name, text, root) editor.set_file_contents(name, text, root) editor.set_file_contentsis meant to overwrite the file if it exists. Here's what I do: - In Settings, reset Dropbox Sync. - Run the code once. It works correctly and creates one file in Dropbox. I can see it in the Dropbox app and in Editorial's file browser. - Wait 30 seconds and run the code again. Two problems: (a) The Dropbox app shows there is now a file test(1).txt as well as test.txt and (b) Editorial's file browser shows test(1).txt, but not test.txt. - Running the code more creates more files, like test(2).txt. And I found: - Resetting Dropbox Sync makes Editorial's file browser match the Dropbox app browser. - In the above code, if I set the root to 'local' it works properly. - The action Set file contents works properly, and always overwrites the Dropbox file if it exists.
https://forum.omz-software.com/topic/396/problem-with-editor-set_file_contents-not-overwriting-an-existing-dropbox-file
CC-MAIN-2017-43
refinedweb
200
71.21
Investors in Williams Cos Inc (Symbol: WMB) saw new options begin trading $27.50 strike price has a current bid of 65 cents. If an investor was to sell-to-open that put contract, they are committing to purchase the stock at $27.50, but will also collect the premium, putting the cost basis of the shares at $26.85 (before broker commissions). To an investor already interested in purchasing shares of WMB, that could represent an attractive alternative to paying $27.79/share today. Because the $27.36% return on the cash commitment, or 20.06% annualized — at Stock Options Channel we call this the YieldBoost. Below is a chart showing the trailing twelve month trading history for Williams Cos Inc , and highlighting in green where the $27.50 strike is located relative to that history: Turning to the calls side of the option chain, the call contract at the $28.00 strike price has a current bid of 48 cents. If an investor was to purchase shares of WMB stock at the current price level of $27.79/share, and then sell-to-open that call contract as a "covered call," they are committing to sell the stock at $28.00. Considering the call seller will also collect the premium, that would drive a total return (excluding dividends, if any) of 2.48% if the stock gets called away at the June 14th expiration (before broker commissions). Of.73% boost of extra return to the investor, or 14.66% annualized, which we refer to as the YieldBoost. The implied volatility in the put contract example is 30%, while the implied volatility in the call contract example is 23%. Meanwhile, we calculate the actual trailing twelve month volatility (considering the last 251 trading day closing values as well as today's price of $27.79) to be 22%..
https://www.nasdaq.com/articles/wmb-june-14th-options-begin-trading-2019-05-02
CC-MAIN-2021-21
refinedweb
309
66.03
Pattern Matching for Java. Pattern matching documents Pattern Matching For Java (this document) — intValue } There are three things going on here: a test (is x an Integer), a conversion (casting obj to Integer), and a destructuring (extracting the intValue component from the Integer). This pattern is straightforward and understood by all Java programmers, but is suboptimal for several reasons. It is tedious; doing both the type test and cast should be unnecessary (what else would you do after an instanceof test?), and the accidental boilerplate of casting and destructuring obfuscates the more significant logic that follows. But most importantly, the needless repetition of the type name provides opportunities for errors to creep unnoticed into programs. This problem gets worse when we want to test against multiple possible target types. We sometimes repeatedly test the same target with annoying and unnecessary., the chain with n branches to the test-and-extract problem, we believe it is time for Java to embrace, C#). A pattern is a combination of a match predicate that determines if the pattern matches a target, along with a set of pattern variables that are conditionally extracted if the pattern matches the target. Many language constructs that test an input, such as instanceof and switch, can be generalized to accept patterns that are matched against the input. One form of pattern is a type pattern, which consists of a type name and the name of a variable to bind the result to, illustrated below in a generalization of instanceof: if (x instanceof Integer i) { // can use i here, of type Integer } Here, x is being matched against the type pattern Integer i. First x is tested to see if it is an instance of Integer. If so, it is cast to Integer, and the result assigned to i. The name i is not a reuse of an existing variable, but instead a declaration of a pattern variable. (The resemblance to a variable declaration is not accidental.) Using patterns with instanceof simplifies commonly messy operations, such as implementation of equals() methods. For a class Point, we might implement equals() as follows: public boolean equals(Object o) { if (!(o instanceof Point)) return false; Point other = (Point) o; return x == other.x && y == other.y; } Using a pattern match instead, we can combine this into a single expression, eliminating the repetition and simplifying the control flow: public boolean equals(Object o) { return (o instanceof Point other) && x == other.x && y == other.y; } Similarly, we could simplify the if..else chain above with type patterns, eliminating the casting and binding boilerplate: String formatted = "unknown"; if (obj instanceof Integer i) { formatted = String.format("int %d", i); } else if (obj instanceof Byte b) { formatted = String.format("byte %d", b); } else if (obj instanceof Long l) { formatted = String.format("long %d", l); } else if (obj instanceof Double d) { formatted = String.format(“double %f", d); } else if (obj instanceof instanceof ...) part is repeated. We’d like to say “choose the block which best describes the target object”, and be guaranteed that exactly one of them will execute. We already have a mechanism for a multi-armed equality test in the language — switch. But switch is currently very limited. You can only switch on a small set of types — numbers, strings, and enums — and you can only test for exact equality against constants. But these limitations are mostly accidents of history; the switch statement is a perfect “match” for pattern matching. Just as the type operand of instanceof can be generalized to patterns, so can case labels. Using a switch expression with pattern cases, we can express our formatting example as: String formatted = switch (obj) { case Integer i -> String.format("int %d", i); case Byte b -> String.format("byte %d", b); case Long l -> String.format("long %d", l); case Double d -> String.format("double %f", d); case String s -> String.format("String %s, s); default -> String.format("Object %s", obj); }; ... Now, the intent of the code is far clearer, because we’re using the right control construct — we’re saying “the expression obj matches at most one of the following conditions, figure it out and evaluate the corresponding expression”. This is more concise, but more importantly it is also safer — we’ve enlisted the language’s aid in ensuring that formatted is always assigned, and the compiler can verify that the supplied cases are exhaustive. As a bonus, it is more optimizable too; in this case we are more likely to be able to do the dispatch in O(1) time. Constant patterns We’re already familiar with a kind of pattern, namely, the constant case labels in today’s switch statement. Currently, case labels can only be numeric, string, or enum constants; going forward, these constant case labels are just constant patterns. Matching a target to a constant pattern means the obvious thing: test for equality against the constant. Previously, a constant case label could only be used to match a target of the same type as the case label; going forward, we can use constant pattern to combine type tests and equality tests, allowing us to match Object against specific constants. Operations on polymorphic data The example above, where we are handed an Object and have to do different things depending on its dynamic type, can be thought of as a sort of “ad-hoc polymorphism.” There is no common supertype to appeal to that would give us virtual dispatch or methods that we could use to differentiate between the various subtypes, so we can resort only to dynamic type tests to answer our question. Often, we are able to arrange our classes into a hierarchy, in which case we can use the type system to make answering questions like this easier. For example, consider this hierarchy for describing an arithmetic expression: interface Node { } class IntNode implements Node { int value; } class NegNode implements Node { Node node; } class MulNode implements Node { Node left, right; } class AddNode implements Node { Node left, right; } An operation we might commonly perform on such a hierarchy is to evaluate the expression; this is an ideal application for a virtual method: interface Node { int eval(); } class IntNode implements Node { int value; int eval() { return value; } } class NegNode implements Node { Node node; int eval() { return -node.eval(); } } class MulNode implements Node { Node left, right; int eval() { return left.eval() * right.eval(); } } class AddNode implements Node { Node left, right; int eval() { return left.eval() + right.eval(); } } In a bigger program, we might define many operations over a hierarchy. Some, like eval(), are intrinsically sensible to the hierarchy, and so we will likely implement them as virtual methods. But some operations are too ad-hoc (such as “does this expression contain any intermediate nodes that evaluate to 42”); it would be silly to put this into the hierarchy, as it would just pollute the API. The Visitor pattern The standard trick for separately specifying a hierarchy from its operations is the visitor pattern, which separate traversal of a data structure from the definition of the data structure itself. For example, if the data structure is a tree that represents a design in a CAD application, nearly every operation requires traversing at least some part of the tree —, which is often a superior way of organizing the code. But, the visitor pattern has costs. To use it, a hierarchy has to be designed for visitation. This involves giving every node an accept(Visitor) method, and defining a Visitor interface: interface NodeVisitor<T> { T visit(IntNode node); T visit(NegNode node); T visit(MulNode node); T visit(AddNode node); } If we wanted to define our evaluation method as a visitor over Node, we would do so like this: too bad. rigid; as visitors get more complicated, it is common to have multiple levels of visitors involved in a single traversal. Visitor has the right idea — separating the operations over a hierarchy from the hierarchy definition itself — but the result is less than ideal. And, if the hierarchy was not designed for visitation — or worse, the elements you are traversing do not even have a common supertype — you are out of luck. In the next section, we’ll see how pattern matching gets us the type-driven traversal that Visitor offers, without its verbosity, intrusiveness, or restrictions. Deconstruction patterns Many classes — like our Node classes — are just typed carriers for structured data; typically, we construct an object from its state with constructors or factories, and then we access this state with accessor methods. If we can access all the state components we pass into the constructor, we can think of construction as being reversible, and the reverse of construction is deconstruction. A deconstruction pattern is like a constructor in reverse; it matches instances of the specified type, and then extracts the state components. If we construct a Node with new IntNode(5) then we can deconstruct a node (assuming IntNode supports deconstruction) with case IntNode(int n) -> ... n is in scope here ... Here’s how we’d implement our eval() method using deconstruction patterns on the Node classes: int eval(Node n) { return switch(n) { case IntNode(int i) -> i; case NegNode(Node n) -> -eval(n); case AddNode(Node left, Node right) -> eval(left) + eval(right); case MulNode(Node left, Node right) -> eval(left) * eval(right); }; } The deconstruction pattern AddNode(Node left, Node right) first tests n to see if it is an AddNode, and if so, casts it to AddNode and extracts the left and right subtrees into pattern variables for further evaluation. This is obviously more compact than the Visitor solution, but more importantly, it is also more direct. We didn’t even need the Node types to have visitor support — or even for there to be a common nontrivial supertype. All we needed was for the Node types to be sufficiently transparent that we could take them apart using deconstruction patterns. Composing patterns Deconstruction patterns are deceptively powerful. When we matched against AddNode(Node x, Node y) in the previous example, it may have looked like Node x and Node y are simply declarations of pattern variables. But, in fact, they are patterns themselves! Assume that AddNode has a constructor that takes Node values for the left and right subtrees, and a deconstructor that yields the left and right subtrees as Nodes. The pattern AddNode(P, Q), where P and Q are patterns, matches a target if: - the target is an AddNode; - the leftnode of that AddNodematches P; - the rightnode of that AddNodematches Q. Because P and Q are patterns, they may have their own pattern variables; if the whole pattern matches, any binding variables in the subpatterns are also bound. So in: case AddNode(Node left, Node right)) -> ... the nested patterns Node left and Node right are just the type patterns we’ve already seen (which happen to be guaranteed to match in this case, based on static type information.) So the effect is that we check if the target is an AddNode, and if so, immediately bind left and right to the left and right sub-nodes. This may sound complicated, but the effect is simple: we can match against an AddNode and bind its components in one go. But we can go further: we can nest other patterns inside a deconstruction pattern as well, either to further constrain what is matched or further destructure the result, as we’ll see below. arm, there’s no problem. For switches over enum, where all enum constants are handled, it is often irritating to have to write a default clause that we expect will never be taken; worse, if we write this default clause, we lose the ability to have the compiler verify that we have exhaustively enumerated the cases. Similarly, for many hierarchies where we might apply pattern matching, such as our Node classes, we would be annoyed to have to include a never-taken default arm when we know we’ve listed all the subtypes. If we could express that the only subtypes of Node are IntNode, AddNode, MulNode, and NegNode, the compiler could use this information to verify that a switch over these types is exhaustive. There’s an age-old technique we can apply here: hierarchy sealing. Suppose we declare our Node type to be sealed; this means that only the subtypes that are co-compiled with it (often discussed separately. Patterns and type inference Just as we sometimes want to let the compiler infer the type of a local variable for us using var rather than spell the type out explicitly, we may wish to do the same thing with type patterns. While it might be useful to explicitly use type patterns in our AddNode example (and the compiler can optimize them away based on static type information, as we’ve seen), we could also use a nested var pattern instead of the nested type patterns. A var pattern uses type inference to map to an equivalent type pattern (effectively matching anything), and binds its target to a pattern variable of the inferred type. A pattern that matches anything may sound silly — and it is silly in itself — but is very useful as a nested pattern. We can transform our eval method into: int eval(Node n) { return switch(n) { case IntNode(var i) -> i; case NegNode(var n) -> -eval(n); case AddNode(var left, var right) -> eval(left) + eval(right); case MulNode(var left, var right) -> eval(left) * eval(right); }; } This version is equivalent to the manifestly typed version — as with the use of var in local variable declarations, the compiler merely infers the correct type for us. As with local variables, the choice of whether to use a nested type pattern or a nested var pattern is solely one of whether the manifest type adds to or distracts from readability and maintainability. Nesting constant patterns Constant patterns are useful on their own (all existing switch statements today use the equivalent of constant patterns), but they are also useful as nested patterns. For example, suppose we want to optimize some special cases in our evaluator, such as “zero times anything is zero”. In this case, we don’t even need to evaluate the other subtree. If IntNode(var i) matches any IntNode, the nested pattern IntNode(0) matches an IntNode that holds a zero value. (The 0 here is a constant pattern.) In this case, we first test the target to see if it is an IntNode, and if so, we extract its numeric payload, and then further try to match that against the constant pattern 0. We can go as deep as we like; we can match against a MulNode whose left component is an IntNode containing zero, and we could optimize away evaluation of both subtrees in this case: int eval(Node n) { return switch(n) { case IntNode(var i) -> i; case NegNode(var n) -> -eval(n); case AddNode(var left, var right) -> eval(left) + eval(right); case MulNode(IntNode(0), var right), MulNode(var left, IntNode(0)) -> 0; case MulNode(var left, var right) -> eval(left) * eval(right); }; } The first MulNode pattern is nested three deep, and it only matches if all the levels match: first we test if the matchee is a MulNode, then we test if the MulNode’s left component is an IntNode; then we test whether that IntNode’s integer component is zero. If our target matches this complex pattern, we know we can simplify the MulNode to zero. Otherwise, we proceed to the next case, which matches any MulNode, which recursively evaluates the left and right subnodes as before. Expressing this with visitors would be circuitous and much harder to read; even though a visitor will handle the outermost layer easily, we would then have to handle the inner layers either with explicit conditional logic, or with more layers of visitors. The ability to compose patterns in this way allows us to specify complicated matching conditions clearly and concisely, making the code easier to read and less error-prone. Any patterns Just as the var pattern matches anything and binds its target to that, the _ pattern matches anything — and binds nothing. Again, this is not terribly useful as a standalone pattern, but is useful as a way of saying “I don’t care about this component.” If a subcomponent is not relevant to the matching, we can make this explicit (and prevent ourselves from accidentally accessing it) by using a _ pattern. For example, we can further rewrite the “multiply by zero” case from the above example using a _ pattern: case MulNode(IntNode(0), _), MulNode(_, IntNode(0)) -> 0; Which says that the other component is irrelevant to the matching logic, and doesn’t need to be given a name — or even be extracted. Patterns are the dual of constructors (and literals) Patterns may appear to be a clever syntactic trick, combining several common operations, but they are in fact something deeper — they are duals of the operations we used to construct, denote, or otherwise obtain values. The literal 0 stands for the number zero; when used as a pattern, it matches the number zero. The expression new Point(1, 2) constructs a Point from a specific (x, y) pair; the pattern Point(int x, int y) matches all points, and extracts the corresponding (x, y) values. For every way we have of constructing or obtaining a value (constructors, static factories, etc), there can be a corresponding pattern that takes apart that value into its component parts. The strong syntactic similarity between construction and deconstruction is no accident. Static patterns Deconstruction patterns are implemented by class members that are analogous to constructors, but which run in reverse, taking an instance and destructuring it into a sequence of components. Just as classes can have static factories as well as constructors, it is also reasonable to have static patterns. And just as static factories are an alternate way to create objects, static patterns can perform the equivalent of deconstruction patterns for types that do not expose their constructors. For example, Optional is constructed with factory methods Optional.of(v) and Optional.empty(). We can expose static patterns accordingly that operate on Optional values, and extract the relevant state: switch (opt) { case Optional.empty(): ... case Optional.of(var v): ... } The syntactic similarly between how the object is constructed and how it is destructured is again not accidental. (An obvious question is whether instance patterns make sense as well; they do, and they provide API designers with some better choices than we currently have. Static and instance patterns will be covered in greater depth in a separate document.) Pattern bind statements We’ve already seen two constructs that can be extended to support patterns: instanceof and switch. Another pattern-aware control construct we might want is a pattern binding statement, which destructures a target using a pattern. For example, say we have: record Point(int x, int y); record Rect(Point p0, Point p1); And we have a Rect which we want to destructure into its bounding points. An unconditional destructuring might look like: Rect r = ... match Rect(var p0, var p1) = r; // use p0, p1 Here, we assert (and the compiler will check) that the pattern is total on the target type, so we destructure the target and bind its components to new variables. If the pattern is partial on the target operand, and thus we cannot guarantee it will match, we can provide an else clause: Object r = ... match Rect(var p0, var p1) = r else throw new IllegalArgumentException("not a Rect"); // use p0, p1 We could even use a nested pattern to extract the corner coordinates in one go: Rect r = ... match Rect(Point(var x0, var y0), Point(var x1, var y1)) = r; // use x0, x1, y0, y1 A match statement can take multiple P=target clauses; in this case, all clauses must match. We could restate the nested match above as follows: match Rect(Point p1, Point p2) = r, Point(var x0, var y0) = p1, Point(var x1, var y1) = p2; More precisely, for a match statement with pattern P, all the bindings of P must be definitely assigned when the match statement completes normally. In general, this means that the else clause must either match something else, or must terminate abruptly (such as by throwing), but we might wish to add a third possibility — an “anonymous matcher” whose bindings are the bindings from the pattern being matched: match Foo(int a, int b) = maybeAFoo else { a = 0; b = 0; } While the operand of the else block looks like an ordinary block, it is type-checked as if it were a matcher whose declaration is matcher anonymous(int a, int b). Like switch, match may throw NullPointerException at runtime if we attempt to destructure a null and do not provide an else clause. Summary of patterns and control flow constructs We’ve now seen several kinds of patterns: - Constant patterns, which test their target for equality with a constant; - Type patterns, which perform an instanceoftest, cast the target, and bind it to a pattern variable; - Deconstruction patterns, which perform an instanceoftest, cast the target, destructure the target, and recursively match the components to subpatterns; - Method patterns, which are more general than deconstruction patterns; - Var patterns, which match anything and bind their target; - The any pattern _, which matches anything and binds nothing. We’ve also seen several contexts in which patterns can be used: - A switchstatement or expresion; - A instanceofpredicate; - A matchstatement. Other possible kinds of patterns, such as collection patterns, could be added later. Similarly, other linguistic constructs, such as catch, could potentially support pattern matching in the future. Pattern matching and records Pattern matching connects quite nicely with another feature currently in development, records (data classes). A data class is one where the author commits to the class being a transparent carrier for its data; in return, data classes implicitly acquire deconstruction patterns (as well as other useful artifacts such as constructors, accessors, equals(), hashCode(), etc.) We can define our Node hierarchy as records quite compactly: sealed interface Node { } record IntNode(int value) implements Node; record NegNode(Node node) implements Node; record SumNode(Node left, Node right) implements Node; record MulNode(Node left, Node right) implements Node; record ParenNode(Node node) implements Node; We now know that the only subtypes of Node are the ones here, so the switch expressions in the examples above will benefit from exhaustiveness analysis, and not require a default arm. (Astute readers will observe that we have arrived at a well-known construct, algebraic data types; records offer us a compact expression for product types, and sealing offers us the other half, sum types.) Scoping Pattern-aware constructs like instanceof have a new property: they may introduce variables from the middle of an expression. An obvious question is: what is the scope of those pattern variables? Let’s look at some motivating examples (the details are in a separate document.) if (x instanceof String s) { System.out.println(s); } Here, the pattern instanceof instanceof String s || s.length() > 0) { // error ... } we should expect an error; s is not well-defined in this context, since the match may not have succeeded in the second subexpression of the conditional. Similarly, s is not well-defined in the else-clause here: if (x instanceof String s) { System.out.println(s + "is a string"); // OK to use s here } else { // error to use s here } But, suppose our condition inverts the match: if (!(x instanceof String s)) { // error to use x here } else { System.out.println(s + "is a string"); // OK to use s here } Here, we want s to be in scope in the else-arm (if it were not, we would not be able to freely refactor if-then-else blocks by inverting their condition and swapping the arms.) Essentially, we want a scoping construct that mimics the definite assignment rules of the language; we want pattern variables to be in scope where they are definitely assigned, and not be in scope when they are not. This allows us to reuse pattern variable names, rather than making up a new one for each pattern, as we would have to here: switch (shape) { case Square(Point corner, int length): ... case Rectangle(Point rectCorner, int rectLength, int rectHeight): ... case Circle(Point center, int radius): ... } If the scope of pattern variables were similar to that of locals, we would be in the unfortunate position of having to make up unique names for every case, as we have here, rather than reusing names like length, which is what we’d prefer to do. Matching scope to definite assignment gives us that — and comports with user expectations of when they should be able to use a pattern variable and when not.
https://openjdk.java.net/projects/amber/design-notes/patterns/pattern-matching-for-java
CC-MAIN-2021-49
refinedweb
4,135
55.47
CA2114610A1 - Airport incursion avoidance system - Google PatentsAirport incursion avoidance system Info - Publication number - CA2114610A1CA2114610A1 CA 2114610 CA2114610A CA2114610A1 CA 2114610 A1 CA2114610 A1 CA 2114610A1 CA 2114610 CA2114610 CA 2114610 CA 2114610 A CA2114610 A CA 2114610A CA 2114610 A1 CA2114610 A1 CA 2114610A1 - Authority - CA - Grant status - Application - Patent type - - Prior art keywords - means - light - airport - system - data - Prior art date - Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.) - Abandoned Links Classifications 2`~1ll613 ~IRPORT I~CURSIO~ ~VOInA~OE S~STE~ Backoround of the Invention This invention relates ~o an airport ground collision avoidance system and in partLcular to an apparatus and method for monitoring, controlling and predicting aircraft i or other vehicle movement primarily on airport taxiways and runways to avoid N nway incursions. ' Currently, ground control of aircraft at an airport is i~ done visually by the air traffic controller in the tower. ~ 10 Low visibility conditions sometime~ make it impossible for ;~ the controller to see all par~s of the field. Ground ¦ su~face radar can help in providing coverage durlng low visibility conditions~ it plays an important part in the lj solution of the runway incursion problem bu~ cannot sol~e -j 15 the entire problem. ~ runway incursion is defined as "any occurrence at an airport involving an aircraft, vehicle, person, or ob~ect on the ground that creates a collision ~ hazard or results~in loss of separation with an aircraft '~t taking off, intending to take off, landing, or intending to land." The U.S. Federal Administration Agency (FAA) has estimated that it can only ~ustify the cost of ground surface radar at 29 of the top 100 airpor~s in the United ~; States. However; such radar only provides location ~-information; it cannot alert the controller to possible conflicts between aircraft. .'J ~. , " . . ~ . ' '' ~ , . . ' "' ~ ' . , , ~' '' ' ~' , ' : ~ `` 2~610 . ! . In the prior art, an airport control and monitoring :. system has been used to ~ense whan an airplane reaches a certain point on a taxiway and controls switching lights on and off to indicate to the pilot when he may proceed on to a ru~way. Such a sys~-e~ sends microwave sensor in~ormation to a computer in the control tower. The computer comprises software for controlling the airport lighting and for ~¦ providing fault information on the airport lighting via l displays or a control panel to an operator. Such a system ¦ 10 is described in sales information provided on a ~i-I directional Series 7 Transceiver (BRITEE) produced by ADB-ALNACO, In~., A Siemens Company, of Columbus, Ohio. However, such a system doe~ not ~how the location of all vehicles on an airfield and i~ not able to detect and avoid a possible vehicle incursion. ~1 A well known approach to airport surface traffic ;;~ control has been the use of scanning radars operating at `~ high frequencies Yuch a~ K-band in order to ob~ain adequate .~$ definition and resolution. An exi~ting airport ground tra~fic control equipment of tha~ type is known in the art as Airport Surface Detection Equipment (ASDE). However, ~i such eguipment provides surveillance only, no discrete identification of aircraft on ~he ~urface being available. Al~o there is a need for a relatively high antenna tower and ., 'i 25 a relatively large rotation antenna system thereon. .~ ,~ :,',1 ; ~ -- 2~1~6~ Another approach to airport ground surveillance is a system descri~ed in U. S. Patent No. 3,872,474, issued March 18, 1974, to Arnold M. Levine and assigned to International Telephone and Telegraph Corporation, New York, NY, referred to as LOCAR ~Localized Cable Radar) which comprises a series of small, lower powered, narrow pul~es, transmitting radar having lLmited range and time sequenced along opposite sides of a runway ramp or taxiway. In another U. S. Patent No. 4,197,536, i~sued on April 8, 1980, to Arnold M. Levine, an airport surface identification and control system is described for aircraft equipped with ATCRBS (Alr Tra~fic Control Radio Beacon Sy~tem) and ILS (In~trument Landing System). However, these approache~ aro expensive, require special cabling and for identification purposes require expensive eguipment to be included on the aircraft and other vehicles. Another approach to vehicle identifica~ion such as types of aircraft by identifying the uniquè characteristic of the ~footprint" presented by the configuration of wheels unique to a particular type of vehicle i8 described in U.S. Patent ~o. 3,872,283, issued ~arch la, 1975, to Gerald R. Smith et al. and assigned to The Cadre Corporation of Atlanta Georgia. ;~ An automatic system for surveillance, guidance and fire-fighting at airports using infrared ensors is ,~ ,j ` , , - , ~ , ; . ., ":., - , ,, ~ : - - ~ , .. .. ... . . .: . ~ ,, , , ~ . - . . , ~ : - - ` 2 ~ described in U. S. Patent No. 4,845,629, issued July 4, 198 to Maria V. Z. Murga. The infrared sensor~ are arranged along the flîght lanes and their output signals are processed by a computer to provide information concerning the aircraft movements along the fligh~ lanes. Position detectors are provided ~or detecting the position of aircraft in the taxiways and parking areas. However, such system does not teach the use of edge light~ along the runwayY and taxiways along with their associated wiring and it is not able to detect and avoid a possible vehicle -incursion. The manner in which the invention deals with the disadvantages of the prior art to provide a low cos~ airport incursion avoidance 5ystem Will be evident as the lS description proceeds. .. . . .. :.. .: , :,:; :.. .. ,: . , . 211~ SummarY_of the Invention Accordingly, it is therefore an object of this invention to provide a system that detects a possible aircraft or vehicle incursion at an airport. It is also an ob~ec~ o~ this invention to pxovide a low cost ai~port incursion avoidance system using edge light assemblies and associated wiring along runways and taxiways. It is another ob~ect of this inYention to provide an airport incursion avoidance system that ~enerates a graphic display of the airport showing the location of all ground traffic including direction and velocity data. It is a further ob~ect of this invention to provide an airpor~.incursion avoidance system that generates a verbal alert to an air traf f iC controller or an aircraft pilot. The ob~ects are further:accomplished by pro~iding an airport incursion avoidance system comprising a plurality .~ of light circuits on an airport, each of the light circuits ;:: comprises a plurality of light assembly means, means for ; . providing power to each of the plurality of light circuits and to each of the light assembly means, means in each of the light a~sembly means for sensing ground traffic on the airport, means for processing data received from each of the light assembly means, means for providing data communication . between each of the light assembly means and the processing means, the processing means comprises means for providing a :: . 5 graphic display of the airport including symbols representing the ground traffic, each of the 3ymbols ha~ing direction and velocity data displayed, the processing means further comprises means for predicting an occurrence of an airport incursion in accordance with the data received from the sensing means, and means for alerting an airport controller or aircraft pilot of the predicted airport incursion. Each of the light circuits are located along the edges of a taxiway or a runway on the airport. The sensing means comprises in~rared detectors. The light a~sembly means comprises light mean~ coupled to the lines of the power providing means for lighting the airport, the in~rared detectors sensing me~ns, microprocessor means coupled to the light means, the sensing means, and the data communication means for providing processing, communication and control for the light assembly means, the microprocessor controlling a plurality of lighting patterns of the light means on the airport, and the data communication means being coupled to the microprocessor means and the lines of the power providing means. The light assembly means further comprises a photocell means coupled to the microprocessor meahs for detecting the light intensity of the light means. The light assembly means further comprises a strobe light coupled to the microprocessor means. The processing means comprises redundant computers for fault tolerance operation. The symbols represen~ing the ground traffic comprise icons having a shape indicating type of aircraft or vehicle. The processing means determines the locations of the symbols on the graphic display of the airport in accordance with the data receive from the light assembly means. The processing means further determines a future path of the ground traffic based on a ground clearance command, the future path being shown on the graphic display. The processing means for predicting an occurrence of an airport incursion comprises means for comparing position, direction and velocity of the ground traffic to predetermined separation minimums for the airport. The power providing means comprises constant current power means for providing a separate line to each of the plurality of light circuits, and network bridge means coupled to the constant current power means for providing a -~ communication channel to the processing means for each line of the constant current power means. The alerting means comprises a speech synthecis unit co~nected to a speaker, ;~ and the alerting means al o comprises a speech synthesis ~; unit connected to a radio transmitter. The ob~ects are further accomplished by a method of providing an airport incursion avoidance system comprising the steps of providing a plurality of light circuits on the airport, each of the light circuits comprises a plurality of light assembly means, providing power to each of the plurality of light circuits, sensing ground traffic on the airport with means in each of the light assembly means, processing data received from each of the light asse~bly means in computer means, providing a graphic display of the airport comprising symbol3 representing the ground traffic, .:1 each of the symbol5 having direction and velocity data ~ displayed, providing data communication b2tween the computer j :j means and each of the light a~sembly means, predicting an :j occurrence of an airport incursion in accordance with the data received from the sensing means, and alerting an 1 airport controller or aircraft pilot of the predicted 3 airport incursion. The step of sensing the ground traffic on the airport comprises the ~teps of lighting the airport with a light means coupled to the microprocessor means and ~., the power lines, providing infrared detectors for sensing ;o1 , the ground traffic, performing processing, communica~ion and control within the light assembly means with a microprocessor means coupled to the light means, the sensing means and data communication means, and coupling the data ,~ communication means between the microprocessor means and the power lines. The step of processing data comprises the step .~ of operating redundant computers for ~ault ~olerance. The step of providing power comprises the steps of providing a separate line to each of the plurality of light circuits ~i ~ 2S with a cons~ant current power means, and providing a .,.~ ..: : 8 :; ,,., ,'fi ,, , .. _ .. _._.. _ .. ~ ., ' ,` 21~l~61~ com~unication channel to.the computer means for each line of the constant current power means using a network bridge means. The step of providing a graphic display comprising symbols representing the ground traffic comprise the step of : 5 indicating a type of aircraft or vehicle with icons of : various shapes. The step of proeessing the data from each of the light assembly means comprises the ~tep of determining a location of the symbols on the graphic display of the airport in accordance with tha data. The step of predicting an occurrence of an airport incursion comprises the step of determining a future path of the ground traffic ;l in accordance with a ground clearance command and showing ~ the future path on the graphic display. ,:j : ~:. :', ' ' :: ' Z ~. ., .~i 9 " ,. ~; .: ~ 211~161 0 srief DescriPtion of_the Drawinqs Other and further feature~ of the invention will become ' apparent in connection with the accompanying drawings ,; wherein: , ,, FIG. 1 is a block diagrEm of the invention of an ~ airport vehicle incursion a~oidance system; .j FIG. 2 is a block diagram of an edge light as~embly showing a sensor electronics unit coupled to an edge light :~ . of an airfield lighting sy~tem; ,, . ;, 10 FIG. 3 is a pictorial diagram of the edge light ,: ~ assembly showing the edge light positioned above the sensor i';~ electronics unit; :~ FIG. 4 is a diagram of an airfield runway or taxiway having a plurality of edge light assemblies posi~ioned along each side of the runway or taxiway for detectin~ various size aircraft as shown; ,.~ : 1 FIG. 5 is a block diagram of the central computer system shown in FIG. l; FIG. 6 shows eleven network variables used in programming the microprocessor of an edge light assembly to interface with a sensor, a light and a strobe light; FIG. 7 i~ a block diagram showing an interconnection of network variables for a plurality of edge light ass~mblies located on both sides of a runway, each comprising a sensor electronics unit 10 positioned along a taxiway or runway; . :~ . .. , ~ , . .. . .. . : :,., ` i ' . . . : : :"-.. , ~ ,: :',,, , - , . ~: ,;.,: 21~ 6 1 {~ . . FIGo 8 sho~s a graphic display of a typical taxiway/runway on a portion of an airport as seen by an operator in a control tower, the display showing the location of vehicles as they are detected by the sensors mounted in the edge light assemblies located along taxiways and runways; and FIG. 9 is a block diagram of the data flow within the ., `, system shown in FIG. 1 and FIG. 5. ' 1 ' ': ::`` . . .:1 . . ~, . ! . ~`;1 : y , . ~ .~, ; 1 ~, . ., ':~ .:, '~ . :, ' ' , , ~ !: . ' , ~ . ~ . 1 . . " f ~ 2 ~ 1 0 . ,. : De~criRtion of the Preferred Embodiment Referring to FIG. 1 a block diagram of the invention of ~, an airport vehicle incursion avoidance system 10 is shown ~, comprising a plurality of light circuits 181n, each of said ~ 5 ligh~ cixcuits 181n comprises a plurality of edge light ;1 assemblies 201n connec~ed via wiring 211n to a ligXting . ~ vault 16 which is connected to a central compu~er system 12 ~ia a wide area network 14. Each of the edge light ~, assemblies 201n comprises an infrared (IR) detector vehicle .. 10 4ensor 50 (FIG. 2). ,.. j, . i The edge light assemblies 20,n are generally located along side the runway~ and taxiways of the airport with an avQrage 100 foot spacing and are interconnected to the , lighting vault 16 by single conductor serie~ edge light ~- 15 wiring 211n. Each of the edge light circuits 181nis powered via the wiring 211n by a constant current supply ~, 24ln located in the lighting vault 16. . . Referring now to FIG. 1 and ~IG. 2, communication c between the edge light assemblies 201n and the central . 20 computer syste~ 12 is accomplished with LON Bridges 221n interconn~cting the edge light wiring 211~ with the Wide Area Network 14. Information from a microprocessor 44 located in edge light assembly 201n is coupled to the edge light wiring 211n via a power line modem 54. The LON bridges 221n tran~fers message information from ~he edge ~ . 12 ,~ , ' , " , ' ,: ,,, ', ' ,:, , ' , ..~ 21~ 4 6 1 ~ light circuits 181 A via the wiring 211n ~o the wide area network 14. The wide area network 14 provides a transmission path to the central computer system 12. These circuit components also provide the return path communications link ~rom the central eomputer system 12 to the microprocessor 44 in each edge light assembly 201n. Other apparatus and methods, known to one of ordinary skill in the art, for data com~unication between the edge light assemblies 201n and the central comFuter system 12 may be employed, such as radio techniques, but ~he present embodiment of providing data communication on the edge light wiring 211n provides a low cost system for present airports. The LON Bridge 22 may be embodied by devices manufactured by Echelon Corporation of Palo Alto, California. The wide area network 14 may be implemented by one of ordinary skill in the art using ~tandard Ethernet or Fiber Di tributed Data Interface (FDDI) componen~s. Irhe cons~ant curren~ supply 24 , ~ , . may be embodied by devices manufactured by Crouse-Hinds of Winslow, Connecticut. Referring now to FIG. 2 and FIG. 3, FIG. 3 shows a , ; pictorial diagram of the edge light assembly 201n. The edge ` light assembly 201n comprises a bezel including an `1~ incandescent lamp 40 and an optional strobe light assembly 48 (FIG. 2) which are mounted above an electronics enclosure 43 comprising the vehicle sensor 50. The electronics .~ . , ! 13 ! :. .,, ,: '~' ' ' ; ~ '~ : ` : , .", !i 2 1 ~ enclosure 43 8itS on ~he top of a tubular shaft extending t from a ba~e support 560 The light assembly bezel with lamp 40 and base support 56 may be embodied by devices manufactured by Crouse-Hinds of Winslow, Connecticut. A block diagram of the contents of the electronics ', enclosure 43 is shown in FIG. 2 which comprises a coupling , transformer 53 connected to the edge light wiring 211n. The .~ coupling tran3former 53 provides power to both the :! incandescent lamp 40 via the lamp control triac 42 and the microprocessor power supply 52; in addition, the coupling . transformer 53 provides a data communicatio~ path between the power line modem 54 and the ~ON ~ridges 221n via the edge light wirLng 211~. The microproces~or 44 provides the computational power to run the internal software program that controls the edge light as3emblies 201 n. The microprocessor 44 i~ powered by the microprocessor power supply 52. Also connected to the microprocessor 44 is the lamp control trLac 42, a lamp monitoring photo cell 46, the optional strobe light assembly 48, the ~ehicle sensor 50, and the data communications modem 54. The microprocessor 44 is used to control the inca~descent edge light 40 inten~ity and optional strobe light a~sembly 48. The us~ of the microprocessor 44 in each light assembly 2O1 D allows complete addressable control over every li~ht on the field. The microprocessor 44 may be embodied by a VLSI device . ~'~',;" ' ' .' ' ,' ;' ' ~' , ,, ' r ~ ., ~ ., ; .: " manufactured by Echelon Corporation of Palo Alto, California 94304, called the ~euron~ chip. Still referring to FIG. 2, the sensor 50 in the present embodiment comprises an infrared ~IR) detector and in other embodiments may comprise other devices such as proximity detectors, CCD cameras, microwave motion detectors, inductance loops, or laser beams. The program in the microprocessor 44 is responsible for the initial filtering of ~he sensor data received from the sensor 50 and responsible for the transmis~ion of such da*a to tha central computer systEm 12. The sensor 50 must perform the following function~: detect a stationa~y ob~ect, detect a moving object, have a range at least half the width of the runway or taxiway, be low power and be immune to false alarms. Thi~ system design does not rely on just one type of sen or. Since sensor fusion functions are performed within the central computer sys~em 12, data inputs from all different types of sensors are acceptable. Each sensor relays a different view of what is happening on the airfield and the central computer system 12 combines them. There are a wide range of sensors that may be used in this system. As a new sensor type becomes available, it can be integrated into this system with a minLmum of difficulty. The initial sensor used i~ an IR proximity detector ba~ed around a piezoelectric strip. These are the kind of sensors you use ~-- 2 1 ~ a . . ... at home to turn on your flood lights when heat and/or ~; movement is detected. Nhen the sensor output provides an analog signal, an analog-to-digital converter readily known j in the art may be used to interface with the microprocessor 44. ~ Another proximity detector that can be used is based .~ . ~ around a microwave Gunn diode oscillator. These are ;j currently in use in such applications as Intrusion Alarms, ~! Door Openers, Distance Measurement, Collision Warning, ,,j~ Railroad Switching, etc. ~hese types of sensor~ have a drawback because they are not passive de~ices and care needs to be taken to select frequencies that would not interfere with other airport equipment. Finally, in locations uch as ~he hold pocition lines on taxiway~, solid state laser and detector combinations could be used between ad~acent taxiway ."i, ; lights. These ~ensor systems create a beam that when x broken would identify the location of the front wheel of the airplane. This type of detector would be used in those locations where the absolute position of a vehicle was needed. The laser beam would be modulated by the ; microprocessor 44 to avoid ~he detector being fooled by any other stray radiation. ~ : Referring to FIG. 2 and FIG. 4, a portion of an airport runway 64 or taxiway i~ shown having a plurality of edge light assemblies 201a positioned along each side of the 16 ` -, ~ ! '; ~ .' -''',` ' :' . ' ' .;:. . '''~'` ~ . ' : '", ' '~'. ' ' ' ' ' ' - - 2 ~ . runway or taxiway for detecting various size aixplanes or vehicles 60, 62. The da hed lines repre~ent the coverage : area of the sensors 50 located in each edge light a~sembly 201a positioned along each side of the runway 64 or taxiway S to insure detection of any ai~plane 60, 62 or other vehicles traveling on such runway 64 or taxiway. The edge light assemblies 201n comprising the sensor 50 are logically connected together in such a way that an entire airport is sensiti~ed to the movement of vehicles. Node to node communication takes place to verify and identify the location of the vehicles. Qnce thi~ i8 done a message is sent to the central computer system 12 reporting the .~, vehicles location. Edge light assemblie~ twithout a sensor ;t . electronics unit 43) and taxiway power wiring currently .,t 15 exist along taxiways, runways and open areas of airports, there~ore, the sensor electronics unit 43 is readily added to existing edge lights and existing taxiway power wiring ,~ . without the inconvenience and expense of clo~ing down run~ays and taxiways while installing new cabling. .~ 20 Referring now to FIG. 1, FIG. 5, FIG. 8 and FIG. 9, the : `.1 central computer syst2m 12 is generally located at a control ;`, tower or terminal area of an airport and is interconnected to the LON Bridge~ 221n located in the lighting ~ault 16 with a Wide Area Network 14. The central computer system 12 comprises two redundant computers, computer #1 26 and ~3 . 17 .~ . ,, " - ~:: . .... ~ - , ... ... - . : .. : ~ ..... . . ,', ' ' ' ~ ' ~ , " -., .. ,. , :: .. .. .; , . . . :; 2 ~ ~ computer #2 28 for fault tolerance, the display 30, speech ,., synthe~is units 29 & 31, alert lights 34, keyboard 27 and a speech recognition unit 33, all of these elements being interconnected by the wide area network 14 for the transfer ~i 5 of information. The two computers 26 and 28 communicate with the microprocessors 44 located in the edge light assemblies 201n. Data raceived from the edge light assembly 201n microprocessors 44 are used as an input to a sensor :i fusion software modula 101 (FIG. ~) run on the redundant computers 26 and 2B. The output of the sensor fusion software module 101 operating in the computers 26, 28 is ~i used to drive the CR~ display 30 which displays the location .'~ , . of each vehicle on the airport ~axiway and runways a~ shown in FIG. 80 The central computer system 12 may be em~odied by devices manufactured by IBM Corporation of White Plains, New York. The Wide Area Ne~work 14 may be embodied by devices manufactured by 3Com Corporation of Santa Clara, California. The speech syntheisis units 29, 31 and the speech recognition unit 33 may be embodied by devices manufactured by B~N of Cambridge, Massachusetts. ~-i ; 1 . . ~i The speech synthesis unit 29 is coupled to a speaker 32. Limited infoxmation is sent to the speech synthesis unit 29 via the wide area network 14 to provide the '1 ', capability to give an air traffic controller a verbal alert. `i 25 The speech synthesis unit 31 is coupled to a radio 37 having i -i 18 .~ . ,. , ~ . , ; ~ :-`` 211 ': an antenna 39 to provide the capability to give the pilots a verbal alert. The voice co~mands from the air traffic controller to the pilots are captured by microphone 35 and sent to the pilots via radio 36 and antenna 38. In the S present embodiment a tap is made and the speech information is sent to ~oth the radio 36 and the speech recognition unit ' 33 which is programmed to recognize the limited air traffic ', control vocabulary used by a controller. ~his includes 3 aixline names, aircraft type, the numbers 0-9, the name of the taxiways and runways and various short phrases such ac ~ "hold shortn, "expediten and "give way to.~ The output of ', the speech recognition unit 33 i3 fed to the computers 26, ,j 28. '1l Referring again to FI&. 2, the power line modem 54 ',~ 15 provides a data co~munication path over the edge light ~ ~1 wiring 211n for the microprocessor 44. This two way path is ~" -~, used for the'passing of command and control information between the various edge light assemblies 20~n and the central computer system 12. A power line tran~ceiver module in the power line modem 54 i9 used to provide a data channel. These modules use a carrier current approach to create the data channel. Power line modems that operate at carrier frequencies in the 100 to 450 Rhz band are available from many manufacturers. ~hese modems provide digital communication paths at data rates of up to lO,Q00 bits per .~ ~ 19 .~ , : . ` 2 ~ 6 ~ ~ ,., second utilizing direct sequence spread spectrum modulation. They conform to FCC power line carrier requireme~ts for ~1 conducted emissions, and can work with up to 55 dB of power .¦ line attenuation. ~he power line modem 54 may ~e embodied `, 5 by a device manufactured by Echelon Corporation of Palo ' Alto, California 94304, called the PLT-10 Power Line Transceiver ~odule. The data channel provides a transport layer or lowes~ layer of the open system interconnection (OSI) protocol used ~ 10 in the data network. ~he Neuron3 chip which implements the .~ microprocessor 44 contains all of the firmwaxe required to mplement a 7 layer OSI protocol. When interconneeted via an appropriate medium the Neuron~ chips automa~ically !~ communicate with one another using a robust Collision Sense Multiple Access (CSMA) protocol with forward error . corrections, error checXing and automatic retransmission of ~: missed messages (ARQ). The command and control in~ormation is placed in data packets and sent over the network in accordance with the 7 Layer OSI protocol. All messages generated by the ~ microprocessor 44 and de~tined for the central computer `~ system 12 are received by the network bridge 22 ~ia the .~ power lines 211n and routed to the central computer system `~ 12 over the wide area network 14. : ~y ~ . . - . . , . ,. , . , . -. . . , :.- . - -, . . . i , : , . . . 2 ~ The Neuron~ chip of the microprocessor 44 comprises three processors (not shown) and ~he firm~are required to support a full 6 layer open systems interconnection (OSI) protocol. The user is allocated one of the processors for the application code. The other two processors give the application program access ~o all of the other Neuron~ chips , in the network~ This acce~s creates a Local Operating Network or LON. A LON can be thought of as a high level local area network LAN. The use of the Neuron chip for the implementation of this invention, reduces the amount of ; ; custom hardware and software that otherwise would have to be `, developed. ; Data from the sensor electronic unit 43 of the edge `~ light assemblie~ 201n is coupled to the central computer ~i, 15 system 12 via ~he existing airport taxiway lighting power `~ wiring 21. Using the existing edge light power line to transfer the sensor data into a LON network has many advantages. Asi previously pointed out, the reuse of the existing edge lights eliminates the inconvenience and expense of closing down runways and taxiways while running .i , new cabl~ and provides for a low cost system. ~-~ The Neuron chip allows the edge light assemblies 201_n to automatically communicate with each other at the applications level. This is accomplished through network ~ 25 variables which allow individual Neuron9 chips to pass data `l 21 ~,~ .. .~ . 2 ~ 1610 between themselves. Each ~euron~ ~C~ program comprises both local and network variables. The local Yariables are used by the Neuron~ program as a scratch pad memory. The network j variables are used by the Neuron program in one of two ways, either a a network output variables or a network input variahles. Both kinds of variables can be initialized, evaluated and modified locally. The difference , comes into play in that onc~ a ne~work output variable i~ modified, network messages are automatically sent to each network input variable that is linked to that output variable. This variable linking is done at installation time. As soon as a new value of a ne~work input ~ariable is received by a Neuron~ chip, the code is vectored off to take appropriate action based upon the value of the network input variable. The advantage to the program is that this message pas~ing scheme is entirely transparent since the message pas~ing code is part of the embedded ~euron~ operating system. Referring now to FIG. 6, eleven network variables have been identified for a sensor program in each microprocessor :1 ' . 44 of the edge light as4emblies 201n. The sensor 50 ;,` function has two output variables: prelim detect 70 and ~ confirmed detect 72. The idea here is to have one output ;~ trigger whenever the sensor 50 detects movement. The other ;~1 25 output does not trigger unless the local sensor and the ~ii 22 : .,., ~ . : . : : : -,., .: : 2 1 ~ , sensor on the edge light across the runway both ~pot movement. Only when the detection is confirmed will the signal be fed back to the central computer system 12. This techni~ue of confirmation helps to reduce fal~e alarms in order to implement this technique the ad~acent sensor 50 has ;~ a~ input variable called ad~_prelim detect 78 that is used ~ to receive the other sensors prelim dstect output 70. Other .~ input variables are upstream detect 74 and downstream detect , 76 which are used when chaining ad~acent sensors together. 'i 10 Also needed is a detector sensiti~ity 80 input that is used ~' by the central computer system 12 to control the d~tection ~ ability of the sensor 50. i~ The incandescent light 40 requires two network ` variables, one input and the other an output variable. The input variable light le~el 84 would be used to control the ~ light~s brigh~ness. The range would be OFF or 0% all the ;~ way to FULL ON or 100%. This range from 0~ to 100% would be ; made in 0.5~ steps. Since the edge light assembly 20~n also , contains the photocell 46, an o~tput variable light failure 84 is created to signal that the lamp did not obtain the desired brightness. The strobe light 48 requires three input variables. The stxobe-mode 86 variable is used to select either the OFF, SEQUENTIAL, or ALTERNATE flash modes. Since the two flash modes require a distinct pat~ern to be created, two ~`.J .~ . ~ 23 ;3 . .~ , . :i `/ ` 2 1 ~ -, input vari,~bles active_delay 88 and fla~h delay 90 are used I to time align the strobe flashes. By setting these individual delay factors and then addressing the Neuron~ chips as a group, allows the creation of a field strobe pattern with 1,ust one command. , ., -1 ~eferring now to FIG. 7, a block diagr,~m of an ~ interconnection of ne~work variables for a plurality of edge $~ light ass~smblies 201n located on both sides of a runway is i shown, each of the edge light as~s~,~mblies 20~n comprising a ., 10 microprocessor 44. Each Ne,uron~ program in the :/ microprocessor 44 is designed with certain network input and '1l output variable3. The user~writes the code for the Neuron ~ chips in the microprocei~or 4~, a3suming that the input3 are, `.'! su~,plied and that the outputs are used. To create an actual i .~ network the user ha~ to ~wire up~ the network by ~ interconnecting the indiYidual nodes with a software ;`1 linker. The resulting di~stributed process is best shown in -~1 schematic form, and a portion of the network interconnect ~ 1 .:`, matrix is shown in Figure 7. The prelim_detect 70 output of ~, 20 a sensor nod,e 44~ i5 connected to ~he ad~, Primary detect 92 input of the sensor node 44, across the taxiway. T~i3 is ;~: `~ used as a means to verify actual detections and eliminate false reports. The communic~tions link between these two ~, ! `~ node~ 441 and 44~ is part of the distributed processing. The two nodes communicate ,among themselves without involving :` 24 ~ :~ : . . ~: :: , . . ~. :, , 2~6~ the central computer system 12. If in the automatic mode or if instructed by the controller, the system will al50 alert ;~) the pilots via audio and visual indicatlons. ~ Re~erring again to FIG. 1 and FIG. 4, the central `1, 5 computer system 12 tracks the movement of vehicles as they pass from the sensor 50 to sensor 50 in each edge light assembly 201n. Using a varia~ion of a radar automatic track algorithm, the sy~tem can track position, velocity and heading of all aircraft or vehicles based upon the sensor 50 readings. New vehicles are entered into the system either 1 upon leaving a boarding gate or landing. Unknown vehicles are also tracked automaticaIly. Since taxiway and runway lights are normally acro~s from each other on the pavement (as shown in FIB. 4 and FIG. 7), the microprocessor 44 in each edge lights assembly 201n i~ programmed to combine their sensor 50 inputs and agree before reporting a contact. A further refinement is to have the microprocessor 44 check with the edge light assemblies 20~n on either side of them to see if their sensors 50 had detected the vehicle. This allows a vehicle to be handed off from sensor electronic ~;i3l unit 43 ~o sensor elec~ronic unit 43 of each edge light~i assembly 20~n as it travels down the taxiway. This alsoas~ures that ~ehicle position reports remain consistent. ~l Vehicle velocity may also be calculated by using the `~ 25 ~~ . ~ 2 ~ ~! , ' `~ dictance between sensors, the sensor pattern and the time between detec~ions. Referring to FIG. 5 and FIG. 8, the display 30 is a color monitor which provides a graphical display of the airport, a portion of which is shown in FIG. 8. This is ~i accomplished by storing a map of the airport in the redundant compu~ers 26 and 28 in a digital format. The display 30 shows the location of airplane~ or vehicles as ,3j they are de~ected by ~he sensors 50 mounted in the edge `i, 10 light assemblies 201n along each taxiway and runway or other , . ~ i airport surface area~. All aircraft or vehicles on the airport surface are displayed as icons/ with the shape of the icons being determined by the vehicle type. Vehicle ~, position is shown b~ the location of the icon on the screen. Vehicle direction is shown by either ~he orientation of the icon or by an arrow emanating from the icon. Vehicle status ~ is conveyed by the color of the icon. The future path of ``d the vehicle as provided by the ground clearance command ;' entered via the controllsrs microphone 35 is shown as a colored line on the display 30. The sta~us of all field ~! lights including each edge light 201n in each edg~ light circuit 181n i~ shown via color on the display 30. Use of ob~ect orientated software provides the basis for building a model of an airport. The automatic i ` : inheritance feature allows a data structure to be defined . ~. . . 1 . , : . , . :l -` 2 ~ once for each object and then replicated automatically for each instance of that ob~ect. Automatic flow down assures that elements of the data base are not corrupted due to . typing errors. It also assures that the code is regular and structured. Rule based ob~ect orLented programming makes it difficult to create unintelligible ~spaghetti code. N Ob~ect oriented programming allows the runways, taxiways, aircraft and sensors, to be decoded directly as ob~ect~. ~ach of these ob~ects contains attributes. Some of these attributes are fixed like runway 22R or flight UA347, and some are ~, variable like vehicle status and position. In conventional programming we describe the attributes of an ob~ect in data struc~ure~ and then describe the behaviors of the ob~ect as procedures that operate on those data structures. Object oriented programming shifts the emphasis and focuses first on the data structure and only secondarily on the procedures. More importantly, ob~ect oriented programming allows us to analyze and design programs in a natural manner. We can think in terms of runways and aircraft instead of focusing on either the behavior or ~he data structures of the runways and aircraft. Table 1 shows a list of objects with corresponding attributes. Each physical ob~ect that is important to the runway incursion proble~ is modeled. The basic airplane or vehicle tracking algorithm is shown in Table 2 in a ProgrEm r~, .;~ . ~ i ~ 2 ~ .. . . Design Language (PDL). The algorithm which handles sensor fusion, incursion avoidance and safety a~erts is shown in a ! 1 . `i single program even though it is implemented as distrlbuted ' system using ~oth the central compu~er Rystem 12 and the ;~! 5 sensor microprocessors 44. " ~rABr~ 1 C~LnUCS ATTRIBU~ D~sr$LcprTc~ ! Son~or ~ocatlon X ~ Y coordin~tes of ~on~or I Circuit AC wlrlng oircult naru ~ numbor ', 1 0 Uniquo addro~u ~ot ~ddreau ror this ~en~or ~nd lt~ ~nte ~i L~mp_intonsitr 0~ to 100~ ln 0 5~ ~tsp~ ! ~ S~robo_st~tu- Blin~ r~te~off i Strokc-dal~y ~ro~ start slgn~ Sonsor st~tus Dctect/no datact Son~or_typo IR, lo-~r, ~ro~cimity, otc ;1 Runw~y Nnmo 22R~ 27, 33L, otc ~oc~tion Y ~ Y coordln~tes of atnrt of center line -~ Length In i'e-t ~;1 Wldth ~n reot Directlon ~n degre~s from north Status Not ~ctivo, ~ctlvn takeoff, ~ctlve lnndlng, ~l~r~ ~i Son~ora ~V) ~iat of llght~/3enaor~ ~long thlg runw~y ~` Intersectlon~ t~V~ Llst oS intora-ctlon~ ~, Vehlclos Ll~t o~ vohlclos on the runw~y T ~ lw~y Namo Namo of t~clwlly ~ocatlon X ~ Y coordln~tes of st rt of center llno ~ongth In foot Wldth ~n foet Dlroctlon In dogroos from north Stiltus Not ~ctlve, Actlvo~ ~lBnn `~ Sonsor~ ~V~ t of lntarsootlons `~ Bold Loc~tlon~ Ilst of holding loc~tlon~ ~I Vahlclo- ~V) Li~t of vehlclo~ on tho runw~y , . .':~ . ' . 28 :~1 .j : - ~ Ihter6ection Na~ I~ter-~ctlon N~m~ LocAtion Int~rs~Ctiqn o2 two c~nter lines StAtU- V~ic_At/Occupiod Sensor~i ~MV) Ll-t o~ sun-o~ cr~tln~ lnt~rssction ~ord~r l 5 Air~r~t Airlino Unit-d '~¦ Mod-l 727-200 ~all-nu~b~r N3274Z i ~mpty w-ight 9 5 tone ~ Fraight w~ight 2.3 ~on~ "1 0 Fuel woight 3 2 tons Top ~&d 59a ~ph ~,~ Vl Lpood 100 ~ph VZ_ p-~d 140 m~h Accel~ration 0.23 g' 9 t 15 Doce~l~r_tion 0.34 9~L ~l ~V - Hulti-vAriDblo or ~rrAiy .~ Table 2 while (forev~rl if ~edge light ahows a detsctLon) ¦ ¦ if ~adjacent light also ~how~ a detection senaor fu~ion) ¦ ¦ ¦ /* CONFI N D DETECTION */ .,, ¦ ¦ ¦ $~ ~prev$oua bIock showed a det~ction~ : ¦ ¦ ¦ ¦ /* ACC$PT ~ANDOFF ~/ ¦ Ujpdate aircraft position asid apead 1 1 1 el~e ~AY BE i~N aNI~A~ OR SE~V~ OE TRUC~ ~/ ¦ ¦ ¦ ¦ Al2rt operator to pos~ible lnaurslon ~ /* MAY BE AN AIRCRAFT ENTERING T~B SYSTE~i */ `:i I I I I Start a n~w track .~ 30 ¦ I else ¦ ¦ ¦ Regyeat ~tatua from adjac~nt light if ~Adjacant llght i~ OX) . 29 ., ;~ , . . . . 1 ,`~'. :'. . : ' ! ~ : ' ' . `. . . 2 1 1 ~ ., . ¦ /* NON CONFIRM~D D~TECTION ~/ eilEie ¦ ¦ ¦ ¦ Flag adjacent light for repair ¦ ¦ endif 1 ¦ endif ` ¦ eindif ¦ lf ~Edge light loEie~i a detaction ANr ~tatuEi i,s OR) ¦ ¦ lf ~N~xt block ~ihowed a detection) * PROPER HANDOFF */ 1 1 el~e ~-~. ¦ ¦ ¦ if ~vehicle Eipeed > ~ takeoff) ¦ ¦ .¦ ¦ Handoff to departurei control el~ie ¦ ¦ ¦ ¦ /* MISSING ~ANDO~F ~/ -¦ ¦ ¦ ¦ AlQrt operator to po3~iibla incur~iion . I I I endif ¦ ¦ endif l ¦ endL~ -i ¦ /* CHECR FOR POSSTBLE COLLTSIONS */ ¦ for ~all tracked aLrcraft) Plot future po~ition lf ~position conflict) Alert operator to po~ible incursion ¦ ¦ endif ~5 ¦ endif ~: Update di~play ~-~ endwhilo , . : 30 -: , , ~, , ~ , . :, . . .. . ,,~. . , , , , ", ",,,". , , , , " , . 2 1 ~ .. . Referring again to YIG. 1 and FIG. 2, the control of taxiway lighting intensity is usually done by placing all the lights on ~he same serie~ circuit and then regulating ,~ ,' the current in that circuit. In the present embodLment the i 5 intensity of the lamp 40 is controlled by 3ending a message ~ with the light intensity value to the microprocessor 44 ;~ located within the light as~embly 201n. The message allows ~or intensity set~ings in the range of 0 to 100~ in 0.5% steps. The use of photocell 46 to che~k the light output ' 10 allows a return message to be sent if the bulb does not ; respond, This in turn generates a maintenance report on the ~'~ light. The strobe light 48 provide~ an additional optional i : ~,~ capability under progri~m control of the microproces~or 44. Each of the microprocessors 44 in the edge li~ht assemblies 20 is ~ndividually addressable. This mean~ eYery lamp on the field is controlled individually by the central computer ~ system 12. ;~ The system 10 can be progrEmmed to provide an Active ~ Runway Indicator by using the strobe lights 48 în tho~e edge ,~ 20 light assemblies 20,~ located on the runway 64 to continue ~ the approach light ~rabbit~ strobe pattern all the way down ;, the runway. This lighting pattern could be turned-on as a ~ plane is cleared for landing and then turned-off after the . aircraft has touched down. A pilot approaching the runway ,,1 ~ 25 along an intersecting taxiway would be alerted in a clear ,~ `~ 31 .,,~ ,~ . .~ ~ - 211~ , ~ 1 ; and unambiguous way that the runway was active and should not be crossed. If an incursion was detected the main computers 26, 28 could switch the runway strobe lights 48 from the ~rabbit~ patterr. to a pattern that alternatively flashes either side of the runway in a wig-wag fashion. A switch to this pattern would ~e interpreted by the pilot of an arriving aircraft as a wave off and a signal to go around. The abrupt switch in the pattern of the strobes would be instantaneously pi~ked up by the air crew in time for them to initiate an aborted landing procedure. During Category III weather condition~ both runway and taxiway visibility are very low. Currently radio based landing ~y~tem~ are used to get the aircra~t from final . . . approach to the runway. Once on the runway it is not always ob~iou3 which taxiways are to be used to reach the airport terminal. In system 10 the main computers 26,28 can control ~ the taxiway lamps 40 as the means for guiding aircraft on ;~ the ground during CAT III conditions. Sincs the intensity 1 20 of the taxiway lamps 40 can be controlled remotely, the `~1 lamp~ ~u~t in front of an aircraft could be intensified or 1 . ~ flashed as a means of guiding it to the terminal. Alternati~ely, a short sequence of the "rabbit" pattern may be programmed into the taxiway strobes ~ust in ront of the aircraft. At intersections, either the unwanted paths ~-" . -~ ` 2 ~ .:, ! may have their lamp3 turned off or the entrance to the ~;~ proper section of taxiway m;~y flach directing ths pilot to head in that direction. of course in a smart system only those lights directly in front of a plane would be controlled, all other lamps on the field would remain in ~ their normal mode. ;I Referring now to FIG. 9, a block diagram is shown of the data flow within the system lO (as shown in FIG. 1 and FIG. 5). The software modules are ~hown that are used to ;`'~ lO process the data within the computers 2~, 28 of the central ~ . computer ~ystem 12. The ~racking of aircraft and other ,.~ vehicles on the airport operates under the control of a sensor fusion software module 101 which re~ides in the -computers 26, 28. ~he sensor fusion softwar~ module lO1 ~ 15 receives data from the plurality of sensor~ 50, a sensor 50 r'~ being located in each edge light asse~bly 201n which reports the heat level detected, and this software module lOl combine this information through the use of rule based artificial intelligence to create a complete picture of all ,,~a, 20 ground traf~ic at the airport on a display 30 of the central computer sy~tem 12. The tracking algorithm starts a track upon the first I xeport of a sensor 50 detecting a heat level that is above the ambient background level of radiation. This detection is then verified by checking the heat level reported by the ;; ~. .. 2 1 1 ~ :; sensor ~irectly across the pavemant from the first reporting sensor. This secondary reading is used to confirm the vehicle detected and to eliminate false alarms. After a vehicle has been confirmed the sensors ad~acent to the first reporting sensor are quer1ed for changei in their detected heat level. AS soon a~ one of the ad~acent i3ensors detects a rise in heat level a direction vector for the vehicle can ! ' be established. Thi~ proceRs continues as the vehicle ii~ -handed off from sensor to sensor in a bucket brigade fa~hion a~ shown in FIG. 7. Vehicle speed can be roughly dztermined by calculating the time between vehicle detection by ad~acent ~ensors. Thiii information is combined with information from a ~ystem data ba~e on the lccation of each sensor to calculate the velocity of the target. Due to hot ' 15 exhauit or ~et bla~t, the sensox~ behind the vehicle may not -~ -, return to a background level immediately. ~ecause of the e `~ condition, the algorithm only u~es the first four sen~ors (two on either side of the taxiway) to calculate the vehicles position. The vehicle is alway-~ assumed to be on 0 the centerline of the pavement and between the first four reporting sen~or~. Vehicle identification can be added to the track either manually or automatically by an automated source that can identify a vehicle by it~ position. An example would be ' 25 prior knowledge of the next aircraft to land on a particular , " , . . . . 2 ~ ., runway. Track~ are ended when a vehicle leaves the detection system. This can occur in one of two ways. The first way is that the vehicle leaves the area covered by the sensors 50. This is determined by a vehicle track moving in the direction of a gateway sensor and then a lack of ~' detection after the gateway sensor has lost contact. A~', second way to leave the detection sys~em is for a track to be lost in the middle of a ~ensor array. This can occur when an aircraft departs or a vehicle runs onto the grass. Takeoff scanarios can be determined by calculating the speed ,j of the vehicle ~ust before detection wai~ lost. If the~,i vehicle speed was increasing and above rotation speed then J the aircraft is assumed to have taken off. If not then the vehicle i~ assumed to have gone on to the grass and an alarm l 15 is sounded. Referring to FIG. 5 and FIG. 9, the ground clearance routing function is performed by the speech recognition unit 33 along with the ground clearance compliance verifier software module 103 running on the computers 26, 28. This software module 103 comprises a vehicle identification routine, clearance path routing, clearance checking routine and a path checking routine. The vehicle identification routine i~ used to receive the airline name and flight nu~ber (i.e. ~Delta 374~) from the speech recognition unit 33 and it highlights the icon of ' ~ 35 "' . ,- ~ ` ' ~ ` - : : , :. ` , ` ` ''~, , ~ ~.~"' ' ` , ; ~ ~ '";~, . ' ' ' ., ' , : , ` that aircraft on the graphic display of the airport on display 30. The clearance pa~h routine takes the remainder of the controller~s phrase (i.e. nouter taxiway to echo, hold short of runway 15 Left~) and provides a graphical display of the clearance on the display 30 ~howing the airport. The clearance checking routine checks the clearance path for possible conflict with other clearances and vehicles. If a conflict is found the portion of the path that would cause an incursion is highlighted in a blinking red and an audible indication is given to the controller via speaker 32. ~ The path checking routine checks ~he actual path of the 3 vehicle as detected by th~ sensors 50 after the clearance path has been entered into the computers 26, 28 and it ~1 -~ monitor~ the actual path for any deviation. If this routine detects that a vehicle has strayed from the as~igned cour~e, the vehicle icon on the graphic display of the airport ~ flashes and an audible indicator is giYen to the controller -j 20 via speaker 32 and optionally the vehicle operator ~ia radio 37. The airport vehicle incursion avoidance system 10 operates under the control of safety logic routines which reside in the collision detection software module 104 l~ 25 running on computers 26, 28. The safety logic routines `s 36 `i A, ,: i , . ~, , 2 ~ ~ receive data from the sensor fusion software module 101 via .~ the tracker software module 102 location program and 1 interpret this information ~hrough the use of rule based . .~ 3 . artificial intelligence to predict possible colli~ions or ru~way incursions. This information is then used by ~he central computer system 12 to alert tower controllers, 3 aircraft pilots and truck opera~or~ to the posaibility of a runway incursion. The tower controllers are alerted by the di~play 30 along with a computer synthesized voice message via speaker 32. Ground traffic is alerted by a combination of ~raffic lights, flashing light~, stop bars and other alert li~hts 34, 1amp8 40 and 48, and computer generated voice ~ommand~ broadcast via radio 36. Knowledge ba~ed problem~ are also called fuzzy problems and their solution~ depend upon both program logic and an interface engine that can dynamically create a decision tree, selecting which heuristics are most appropriate for the specific case being considered. ~ule based systems broaden the scope of possible applications. They allow designers to incorporate ~udgement and experience, and to take a con~istent solution approach acros~ an enti~e problem ~ set. `~ The progrEmming of the rule based incursion detections software is very straight forward. The rules are wIitten in English allowing the experts, in this case the tower ., ~ . ~ 37 .-- . ~ : .--2 ~ ; personnel and the pilots, to re~iew the ~ystem at an under~tandable level. Another feature of the rule based system is that the rules ~tand alone. ~hey can b~ added, deleted or modified without affecting the rest of the code. Thi~ is almost impossible ~o do with code that iB created from scratch. ~n example of a rule we might use is: I~ (Runway StatuR = ActLva) ~ then (stop Bar ~ight~ - ~ED). ;' This is a very simple aIld straight forwaxd rule. It stands alone requiring no extra knowledge except how Runway Statu3 i9 cr~aated. So let's make 60me rule~ affecting .,! . Runway Status. If (Departuro ~ APP~OVED) or (7a~ding ~ INMINEN$), then (Runway tatu~ ~ ACTIVE). For incursio~ detec~ion, another rule is: ¦ If (Runway Status ~ ACTIVE) and (Interg~ction a OCCUPI~D)``i then (Run~ay Incur3ion 3 TRUE). Next, detect that an inter~ection of a runway and taxiway ~¦ are occupied by the rules: If ~Intar~ectlon S~n~or~ 8 DETECT), then (Inter~ection - OCCUPI~D). To predict that an aircraft will run a Hold Position stop, the following rule i~ created: If (Aircraft Stopping Di~tance ~ Di~tance to ~old? osi~ion), then (Int3rseotion - OCCUPI~D). ",, ~ : 1 . , . . ,.,~ ; :-, - : :, 2~ll7~ ~ ?~ In order to show that rules can be added without affecting the reset of the progriam, assume that after a demonstration of the system 10 to tower controllers, they decided that they wanted a ~Panic Button~ in the tower to ; ~ override the rule ba ed software in case they spot a safety violation on the ground. Be~ides installing the button, the only other change would be to add this extra rule. If ~Pi3nic: button - PRESSED), then ~Rtmway Incur~ion - TRUE). It is readily seen that the central rule baæed computer ; program is very straight forward to create, under~tand and J modify. As types of incursions are defined, the sy~tem 10 can be upgraded by adding more rules. Referring again to PIG. 9, the block diagram shows the data flow between the functional elements within the ~ystem -~ lQ (FIG. 1). Vehicles are detected by the sensor 50 in each of the edge light assemblies 201n. This information is passed over the local operating netw~rk (LON) via edge light wiring 211n to the LON bxidges 221n. The individual message packets are then passed to the redundant computers 26 and 28 over the wide area network (WAN) 14 to the WAN interface 108. ~ter arriving at the redundant computers 26 and 28, tha message packet is checked and verified by a mes~ags parser software module 100. The contents o~ the message are then sent to the sensor fusion xoftware module 101. The ~3 ~ ~ 39 ~2`~ ' ~ ' : . '; ~ ' . , ,, . : 211 !~61 . sensor fu~ion software module 101 is used to keep track of the status of all the sensors 50 on the airport; it filters and verifies the data from the airport and ~tores a representative picture of the sensor array in a memory. This information is used directly by the display 30 to show which sensors 50 are responding and used by the tr2cker l software module 102. The tracker software module 102 uses :~ the sensor status information to determine which sensor 50 report~ corre~pond to actual vehicles~ In addition, as the sensor reports and status change, the tracker software module 102 identifies movement of the vehicles and produces a target location and direction outpu~. This information is l used by the display 30 i~ order to display the appropriate -j vehicle icon on the screen. ::l 15 The loeation and direction of the vehicle is also used ~ by the collision detection software module 104. Thi~ module .~ checks all of the vehicles on the ground and plots their expected course. If any two targets are on intersecting paths, this software module generates operator alerts by ~ 20 using the display 30, the alert lights 34, the speech J' synthesis unit 29 coupled to the associated speaker 32, and the speech synthesis unit 31 coupled to radio 37 which is :~ coupled to antenna 39. i Still referring to FIG. 9, another user of target l~ 25 locatio~ and position data is the ground clearance ~ 'I ~0 ` ~ . ~3 ,.;~ . : '~,S,~'' ~ '" . ', .,"' ' ', . ' '", . ., ~ ' ! ' . .',',' ~ . : 2 1 ~ compliance verifier software module 103. This software module 103 receives the ground clearance command from the controller~s microphone 35 via the speech recognition unit 33. Once the cleared route has been determined, it is S stored in the ground clearance compliance verifier software module 103 and used for comparison to the actual route taken by the vehicle. If the information recei~ed from the tracker software moduls 102 shows that the vehicle has deviated from its assigned courss, this ~oftware module 103 ~ 10 generate~ operator alerts by using the display 30, the alert ;i lights 34, the speech synthesis unit 29 coupled to speaker 32, and the speech synthesis unit 31 coupled to radio 37 which is coupled to antenna 39. ~, The keyboard 27 i connected to a keyboard parser '`d 15 software module lO9o When a command has been ~erified by the keyboard parser software module 109, it i~ used to change display 30 options and to reconfi~ure the sensors and network parameters. ~ network configuration data ba~e 106 `~ `~ is updated with these reconfiguration commandc. This information i~ then turned into LON message packets by the ~ command message generator 107 and sen~ to the edge light ``, assemblies 2O1 D via the WAN interface 108 and the ~ON ., j, . )l bridges 221~. This conclude~ the description of the preferred embodiment. ~owever, many modifications ~nd alterations . i~3 ":~ 4 ~ ? ~.., ' - , .. . ,~ ~ 2 ~ will be obvious to one of ordinary skill in the art without departing from the spirit and scope of the inventive concept. Therefore, it is in~ended that the scope of this invention be limited only by the appended claims. . i ~ . . . . I ~ ~.~ : ~ ~:3 ~ " ~ ,. ,:. '~' :. ~ 42 , ~,. .~ Claims (31) a plurality of light circuits on an airport, each of id light circuits comprise a plurality of light assembly ans; means for providing power to each of said plurality of ?ght circuits and to each of said light assembly means; means in each of said light assembly means for sensing ground traffic on said airport; means for processing data received from each of said ?ght assembly means; means for providing data communication between each of ?id light assembly means and said processing means; said processing means comprises means for providing a ?aphic display of said airport comprising symbols ?presenting said ground traffic, each of said symbols ?ving direction and velocity data displayed; said processing means comprises means for predicting an ?currence of an airport incursion in accordance with the ?ta received from said sensing means; and means for alerting an airport controller or aircraft ?lot of said predicted airport incursion. each of said light circuits being located along the edges of a taxiway or a runway on said airport. said sensing means comprises infrared detectors. light means coupled to said lines of said power providing means for lighting said airport; said location of said symbols on said graphic display of said airport in accordance with said data receive from said light assembly means. said processing means determines a future path of said ground traffic based on a ground clearance command, said future path being shown on said graphic display. said processing means for predicting an occurrence of an airport incursion comprises means for comparing position, direction and velocity of said ground traffic to predetermined separation minimums for said airport. constant current power means for providing a separate line to each of said plurality of light circuits; and network bridge means coupled to said constant current power means for providing a communication channel to said processing means for each line of said constant current power means. said alerting means comprises a speech synthesis umit connected to a speaker. said alerting means comprises a speech synthesis unit connected to a radio transmitter. a plurality of light circuits on an airport, each of said light circuits comprises a plurality of light assembly means; constant current power means for providing a separate line to each of said plurality of light circuits; network bridge means coupled to said constant current power means for providing a communication channel to said processing means for each of said constant current power means; infrared detector means in each of said light assembly means for sensing ground traffic on said airport; means for processing ground traffic data received from each of said light assembly means; means for providing data communication on lines of said power providing means between each of said light assembly means and said processing means; said processing means comprises means for providing a graphic display of said airport comprising symbols representing said ground traffic located in accordance with said ground traffic data received from said light assembly means, each of said symbols having direction and velocity data displayed; said processing means comprises means for predicting an occurrence of an airport incursion in accordance with said ground traffic data received from said sensing means including comparing position, direction aand velocity of said ground traffic data to predetermined separation minimums for said airport; and means for alerting an airport controller or aircraft pilot of said predicted airport incursion. each of said light circuits being located along the edges of a taxiway or a runway on said airport. light means coupled to said lines of said power providing means for lighting said airport; said infrared detector constant current future path of said ground traffic based on a ground clearance command, said future path being shown on said graphic display. said alerting means comprises a speech synthesis umit connected to a speaker. said alerting means comprises a speech synthesis unit connected to a radio transmitter. providing a plurality of light circuits on said airport, each of said light circuits comprises a plurality of light assembly means; providing power to each of said plurality of light circuits; sensing ground traffic on said airport with means in each of said light assembly means; processing data received from each of said light assembly means in computer means; providing a graphic display of said airport comprising symbols representing said ground traffic, each of said symbols having direction and velocity data displayed; providing data communication between said computer means and each of said light assembly means; predicting an occurrence of an airport incursion in accordance with the data received from said sensing means; and alerting an airport controller or aircraft pilot of said predicted airport incursion. lighting said airport with a light means coupled to said microprocessor means and said power lines; providing a sensing means; performing processing, communication and control within said light assembly means with a microprocessor means coupled to said light means, said sensing means and data communication means; and coupling said data communication means between said microprocessor means and said power lines. providing a separate line to each of said plurality of light circuits with a constant current power means; and providing a communication channel to said computer means for each line of said constant current power means using a network bridge means.
https://patents.google.com/patent/CA2114610A1/en
CC-MAIN-2018-26
refinedweb
10,340
50.46
Apr 09, 2009 01:19 PM|fretje|LINK I've created a class "StreamedFileResult" analogue to "FileStreamResult", but with the ability to send the file chunked to the client... The main difference is that Response.BufferOutput is set to false. There's also the possibility to supply the size, so the client (browser) can show it to the user before he confirms to download the file. I'm just wandering why this wasn't already in the code... maybe I'm missing something? Also: is this the right way to do this? Any thoughts or suggestions? Here's the code: public class StreamedFileResult : FileResult { // default buffer size as defined in BufferedStream type private const int _bufferSize = 0x1000; public StreamedFileResult(Stream fileStream, string contentType) : base(contentType) { if (fileStream == null) { throw new ArgumentNullException("fileStream"); } FileStream = fileStream; } public Stream FileStream { get; private set; } public long? FileSize { get; set; } protected override void WriteFile(HttpResponseBase response) { response.BufferOutput = false; if (FileSize.HasValue) response.AddHeader("Content-Length", FileSize.ToString()); // grab chunks of data and write to the output stream Stream outputStream = response.OutputStream; using (FileStream) { byte[] buffer = new byte[_bufferSize]; while (true) { int bytesRead = FileStream.Read(buffer, 0, _bufferSize); if (bytesRead == 0) { // no more data break; } outputStream.Write(buffer, 0, bytesRead); } } } } MVC file download link Apr 11, 2009 12:28 AM|arecev|LINK What version of the framework are you using? This MSDN link has information about the FileStream constructor in framework 3.5, including details about methods and properties that are now obsolete. Note that this article also has links to the same information for the 2.0 and 3.0 framework versions of this. Also, this article is somewhat fundamental but it may help. Check the links at the end too: Apr 14, 2009 03:36 PM|fretje|LINK Thanks for the answer but... This is not about FileStream... This is about FileStreamResult... a part of asp.net mvc (This is the forum for asp.net mvc isn't it?) Apr 14, 2009 05:05 PM|CW2|LINK fretjeI'm just wandering why this wasn't already in the code... maybe I'm missing something? fretjeAny thoughts or suggestions? public class StreamedFileResult : FileStreamResult { public StreamedFileResult(Stream fileStream, string contentType) : base(fileStream, contentType) { } public long? FileSize { get; set; } protected override void WriteFile(HttpResponseBase response) { // TODO: Add a property here? response.BufferOutput = false; if(FileSize.HasValue) { response.AddHeader("Content-Length", FileSize.ToString()); } base.WriteFile(response); } } Additionally, you might want to expose output buffering control property, so you can easily enable/disable it if needed (public bool BufferOutput { get; set; }, then in WriteFile() response.BufferOutput = BufferOutput;) Apr 14, 2009 05:15 PM|CW2|LINK You can also add output buffering to FileStreamResult class (or even modify FileResult.ExecuteResult() method), as the source code has been released under open source license. But you will have to merge your changes after the new version is released... Apr 15, 2009 11:30 AM|fretje|LINK CW2The ASP.NET MVC team has done great job , it is just not possible to have everything implemented (in any framework)., it is just not possible to have everything implemented (in any framework). Yes a great job indeed! I've only begun using asp.net mvc for a couple of weeks now, and I almost immediately fell in love with it! I like the way us developers keep control of everything in contrast with web forms where everything is too much abstracted away! If my post came over as rude... I really didn't mean it that way! I was only wandering if I wasn't missing something, as it just didn't feel right to wait 30 seconds (of course depending on the file size) after I clicked on a download link, before the dialog to save the file was displayed. I suppose automated tests don't take the time it takes to do something into account [;)] Anyways, lots of respect and kudos for the team! Also, thanks for the suggestions... I did thought about deriving from FileStreamResult at first, but I ended up creating a new class because I think this should be the functionality of the FileStreamResult, not of a derivative... Especially when you make the BufferOutput a property (very nice suggestion btw), then the same functionality as it used to be (buffering the whole output) can be accomplished with this class, so no need to keep the old one then... Apr 15, 2009 01:46 PM|fretje|LINK CW2the source code has been released under open source license. Yes I'm aware that the code is open source... And I've been looking around to see where to post my problem/suggestion, and I thought this (the asp.net mvc forum) was the best place... Maybe I'll create an issue about this on codeplex, but I first wanted to know if I wasn't missing anything [;)] 6 replies Last post Apr 15, 2009 01:46 PM by fretje
http://forums.asp.net/t/1408527.aspx
CC-MAIN-2014-49
refinedweb
818
66.64
Make install: Fatal error Hello, iam new and i would compile the qtquickcontrols on my pc with qmake && make install. I became the following error: In file included from qquicklinearlayout_p.h:46:0, from plugin.cpp:44: qquickgridlayoutengine_p.h:57:43: fatal error: QtGui/private/qlayoutpolicy_p.h: file or directory not found In the file qquickgridlayoutengine_p.h I found the following lines: // // W A R N I N G // ------------- // // This file is not part of the Qt API. It exists for the convenience // of the graphics view layout classes. This header // file may change from version to version without notice, or even be removed. // // We mean it. // What must i do to compile the qtquickcontrols. I have cloned the stable branch today and i use QT 5.2.1. Thanks for help. Lars Hi, and welcome to the Qt Dev Net! Qt 5.2.1 already contains Qt Quick Controls. You don't need to compile it manually. Just download and install Qt 5.2.1 from Hello. Thank you for your answer. The qtquickcontrols on gitorius are newer. For example he has a calendar control. So i must compile the controls, but it does not work. I hope, anybody can help me. Lars I don't think you can compile a new version of Qt Quick Controls just like that -- it will conflict with the version that came with Qt 5.2.1. If you want the new features, you should download Qt 5.3.0 alpha and compile the whole thing: Ok, i have understand what you mean. I have take a look into the file qquickgridlayoutengine_p.h on Tag v5.2.1 and i find the following includes: #include "qgridlayoutengine_p.h" #include "qquickitem.h" #include "qquicklayout_p.h" #include "qdebug.h" In the stable branch and Tag v5.3.0-alpha1 the file has the following includes: #include <QtGui/private/qgridlayoutengine_p.h> #include <QtGui/private/qlayoutpolicy_p.h> #include <QtCore/qmath.h> #include "qquickitem.h" #include "qquicklayout_p.h" #include "qdebug.h" I think QtGui/private/qgridlayoutengine_p.h is new in QT v5.3.0 and i have no chance to compile this with QT v5.2.1, because the qgridlayoutengine_p.h does not exist. I can wait for the stable of v5.3.0 to test the new controls or i install the alpha in a vm. I understand the warnings in the header file better now. Thanks for the note and have a nice day. Lars Glad I could help :) A pre-beta of Qt 5.3 was JUST made available. You can grab it here: The full release will be at the end of next month. Won't have to wait long!
https://forum.qt.io/topic/38884/make-install-fatal-error
CC-MAIN-2018-34
refinedweb
445
80.68
TypeError: 'module' object is not callable: module object is not callable. Python is telling me my code trying to call something that cannot be called. What is my code trying to call?" - "The code is trying to call on socket. That should be callable! Is the variable socketis what I think it is?` - I should print out what socket is and check print socket Assume that the content of YourClass.py is: class YourClass: # ...... If you use: from YourClassParentDir import YourClass # means YourClass.py In this way, you will get TypeError: 'module' object is not callable if you then tried to call YourClass(). But, if you use: from YourClassParentDir.YourClass import YourClass # means Class YourClass or use YourClass.YourClass(), it works.
https://codehunter.cc/a/python/typeerror-module-object-is-not-callable
CC-MAIN-2022-21
refinedweb
121
79.26
Important: Please read the Qt Code of Conduct - SegFault accessing `ui->tabWidget` but there is warning about vtable entries Warnings I'm getting and trying to correct: Upon building a second time, it builds but segfaults in this function: void MainWindow::zoomCurrentWidget(int times) { auto widget = ui->tabWidget->currentWidget(); if (widget) { auto view = dynamic_cast<ZoomableView*>(widget); if (view) view->zoomTimes(times); } } My mainwindow.his : #ifndef MAINWINDOW_H #define MAINWINDOW_H #include <QMainWindow> QT_BEGIN_NAMESPACE namespace Ui { class MainWindow; } QT_END_NAMESPACE class MainWindow : public QMainWindow { Q_OBJECT public: MainWindow(QWidget *parent = nullptr); ~MainWindow(); void addTabWidget(QWidget* widget, const QString& tabName); private slots: void on_actionZoom_In_triggered(); void on_actionZoom_Out_triggered(); void zoomCurrentWidget(int times); private: Ui::MainWindow *ui; }; #endif // MAINWINDOW_H I've also tried: virtual ~MainWindow()and virtual ~MainWindow() overrideand ~MainWindow() override. All the same issues. It is supposed to be straightforward to work with UI Forms, but I'm getting segfaults... Then you killed the tabWidget somewhere else already. Where do you initialize ui? - enjoysmath last edited by enjoysmath MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent) , ui(new Ui::MainWindow) { ui->setupUi(this); } Code generated by Qt Creator. Also, it's not because I have zoomCurrentWidgetunder private slots. I put it under public slotsand same error. @enjoysmath For this kind of problem, have you at least tried emptying out the whole build directory and rebuilding from scratch? Perhaps not your case, but this has solved quite a few "funnies" from other people in the past, worth ensuring. - click make clean and rebuild your app( better cd the build dir and do make distclean, then click run qmake) - Set a break point at the beginning of that func and show which line causes crashes. Hi It all looks pretty normal. If you open one of the examples and run. Does it compile without any such errors ? Do you link any other libs via the pro file ? I have seen this error message with some nasty pointer errors. Do you do anything interesting with any of the Widgets ? Like keeping them in a list and also insert into a form or anything like that ? Hi well actually the normal one is undefined reference to `vtable for which can be caused by not implementing virtual functions or lack QOBJECT macro among other things. But this error cant find Linker symbol for virtual table seems to be linked to both virtual function and dangling pointers. If i had to guess , it would be on some kind of pointer error. When it crashes, the call stack gives no hint on what it was doing ? - enjoysmath last edited by enjoysmath void MainWindow::zoomCurrentWidget(int times) { auto widget = ui->tabWidget->currentWidget(); if (widget) { auto view = dynamic_cast<ZoomableView*>(widget); if (view) view->zoomTimes(times); } } @mrjj It's in this function where the SegFault happens. It was usually on the first line, but later the Qt Creator had me believing it was also happening further down. @mrjj It's definitely the first line. It doesn't return a working pointer to the QTabWidget, although QtCreator, when you hover over the code in Debug mode, shows that it's a QTabWidget* typed value. So there is a bug with QtCreator. I'm doing nothing special to induce this bug. Does this mean I have to go back to PyQt5 to finish my project? I thought C++ would be a lot nicer for handling large graphics scenes, but apparently there are just too many bugs. Then you killed the tabWidget somewhere else already. @Christian-Ehrlicher that's the first time I'm accessing the QTabWidget. Okay, you were right :) I was doing this: ``` MainWindow w; w.setCentralWidget(&view); w.show(); Which would of course wipe out the QWidget created from the UI Designer. Thank you! :) @enjoysmath Hi Just as a note: - Does this mean I have to go back to PyQt5 to finish my project? Python version is a binding using the c++ classes so if it was a Qt bug same thing would happen in PyQt5. @mrjj @Christian-Ehrlicher @JonB @JonB Thanks all who helped me solve this. I will make it my duty to write an awesome software tool for my users. -EnjoysMath
https://forum.qt.io/topic/127246/segfault-accessing-ui-tabwidget-but-there-is-warning-about-vtable-entries
CC-MAIN-2021-25
refinedweb
684
63.29
This topic was written by Robin Dunn, author of the wxPython wrapper. wxPython is a blending of the wxWidgets GUI classes and the Python programming language. it's upsides and it's: At line 2 the wxPython classes, constants, and etc. are imported into the current module's namespace. If you prefer to reduce namespace pollution you can use "from wxPython import wx" and then access all the wxPython identifiers through the wx module, for example, "wx.wxFrame". At line 13 the frame's sizing and moving events are connected to methods of the class. These helper functions are intended to be like the event table macros that wxWidgets employs. But since static event tables are impossible with wxPython, we use helpers that are named the same to dynamically build the table. The only real difference is that the first argument to the event helpers is always the window that the event table entry should be added to. Notice the use of wxDLG_PNT and wxDLG_SZE in lines 19-29 to convert from dialog units to pixels. These helpers are unique to wxPython since Python can't do method overloading like C++. There is an OnCloseWindow method at line 34 but no call to EVT_CLOSE to attach the event to the method. Does it really get called? The answer is, yes it does. This is because many of the standard events are attached to windows that have the associated standard method names. I have tried to follow the lead of the C++ classes in this area to determine what is standard but since that changes from time to time I can make no guarantees, nor will it be fully documented. When in doubt, use an EVT_*** function. At lines 17 to 21 notice that there are no saved references to the panel or the static text items that are created. Those of you who know Python might be wondering what happens when Python deletes these objects when they go out of scope. Do they disappear from the GUI? They don't. Remember that in wxPython the Python objects are just shadows of the corresponding C++ objects. Once the C++ windows and controls are attached to their parents, the parents manage them and delete them when necessary. For this reason, most wxPython objects do not need to have a del method that explicitly causes the C++ object to be deleted. If you ever have the need to forcibly delete a window, use the Destroy() method as shown on line 36. Just like wxWidgets in C++, wxPython apps need to create a class derived from wxApp (line 56) that implements a method named OnInit, (line 59.) This method should create the application's main window (line 62) and show it. And finally, at line 72 an instance of the application class is created. At this point wxPython finishes initializing itself, and calls the OnInit method to get things started. (The zero parameter here is a flag for functionality that isn't quite implemented yet. Just ignore it for now.) The call to MainLoop at line 73 starts the event loop which continues until the application terminates or all the top level windows are closed.pyt[email protected]hon-[email protected]users[email protected]@lis[email protected]ts.wx[email protected]widg[email protected]ets.o[email protected]rg
https://docs.wxwidgets.org/3.0/overview_python.html
CC-MAIN-2021-49
refinedweb
555
72.16
There's a lot of talk about universal web applications but developing them tends to be harder than it might sound. You will have to worry about the differences between environments, and you will find problems you might not have anticipated. To understand the topic better, I'm interviewing Dylan Piercey, the author of Rill, a framework designed to address this particular problem. I like to tinker. As a kid, I enjoyed modding video games and got into programming when I was 12 years old. I've programmed professionally for about four years now and fell in love with the massive community behind web development.I like to tinker. As a kid, I enjoyed modding video games and got into programming when I was 12 years old. I've programmed professionally for about four years now and fell in love with the massive community behind web development. Open source software has been my peaceful haven since I learned git. For me, programming is fun, especially on my terms, and FOSS is exactly that. Rill is my two-year-old baby. In JavaScript terms that means it's just turned 21. Jokes aside Rill is a tool that allows you to learn fewer tools. It is Koa designed and optimized from the ground up to work in the browser. So how does this help? Well, first of all, you get one router for both the browser and Node, meaning you can drop react-router and Koa. Secondly, you also get to think of building web applications as if you have a zero latency node server in every user's browser. With this, you can quickly create fault-tolerant, progressively-enhanced websites with minimal effort. Finally, it is a flexible abstraction, just like it is on the server-side already in Express and Koa. With Rill I have been able to replace many tools including Redux. It also supports many different rendering engines with more on the way. Rill also plays nicely with all of the other libraries making upgrading a bit easier. Depends on where you look. Rill on the server-side is more or less a rip-off of Koa with some careful forethought, but in the browser things get interesting. In the browser, Rill works by intercepting all <a> clicks and <form> submissions and pumping them through a browser-side router with the same API as on the server-side. It supports pretty much anything you can think of including cookies, redirects and sessions, all isomorphically implemented (i.e. on both the server and browser). There are a few huge wins here. For instance, you don't have to use any particular <Link> tags or similar and you aren't tied to React. The server-side also doesn't need to do anything fancy to handle links and forms. Lastly, you already know how links and forms work, so just use them. If you'd like to take a look at Rill's link/form hijacking logic it has been separated out into @rill/http, making the main Rill repository completely universal! It provides a unified router. While developing universal applications, I often found myself writing routes twice. As if that wasn't bad enough the syntax for the routers were often vastly different - try comparing react-router with Express and you'll see what I mean. Rill aims to simplify that and provides a consistent routing interface between the server and browser. It also works perfectly fine as a standalone router in either one. Take for instance the following example: import rill from 'rill' import bodyMiddleware from '@rill/body' import reactMiddleware from '@rill/react' // Setup app. const app = rill() // Use isomomorphic React renderer. app.use(reactMiddleware()) // Use an isomorphic form-data / body parser. app.use(bodyMiddleware()) // Register our form page route as normal. app.get('/my-form-page', ({ req, res })=> { res.body = <MyForm/> }) // Setup our post route. app.post('/my-form-submit', ({ req, res })=> { // Analyze the response body (works in node and the browser). req.body //-> { email: ... } // Perform the business logic (typically calling some api). ... // Finally, take the user somewhere meaningful. res.redirect('/thank-you') }) // Start app. app.listen({ port: 3000 }) // Simple full page react component with a form. function MyForm (props) { return ( <html> <head> <title>My App</title> </head> <body> <form action="/my-form-submit" method="POST"> Email: <input name="email"> <button type="submit">Subscribe</button> </form> <script src="/app.js"/> </body> </html> ); } Notice how similar this looks to the server only code. You get to use middleware and routing in a way you probably already know. However, the above example when compiled with webpack, Rollup, or Browserify will also work in the browser! For a more detailed example check out the TodoMVC implementation with React and Rill. I've built 20+ websites and applications all of which needed strong SEO and proper fallbacks for our users using legacy browsers. It became a constant struggle to enhance content for modern browsers while maintaining support for older ones. Rather than building a server-side solution and then rebuilding a client side solution my goal was to make a framework that allowed me to do both at once. It was originally inspired by koa-client and monorouter, and it turned out to be a robust solution. Well, that's largely up to what I build next and what the community requires. Rill has been pretty stable for the past year. Most of the major work has caused no breaking changes. One of the more recent changes is that Rill is now able to run in a service worker, which I think could be interesting for offloading the router to another thread. Another thing that I have meant to explore is a creating a Rill middleware that works similarly to ViralJS allowing for distributed rendering of applications. Something that's been in the back of my head for a while now is making Rill work on other platforms. The code has been formatted in such a way that the document logic has all been extracted into a single file, but I have limited experience with native applications and need a kick to get me going on this front. For Rill the future is hard to see. I've mentioned some obvious features above, but the point of it, as with any router, is to be flexible. Rill in my eyes is a foundation for isomorphic/universal apps and what I've built with it so far is only the tip of the iceberg. In general, I think that things are going to get simpler, faster and smaller. It never seems that way while I'm riding the wave of JavaScript frameworks, but at the same time things are constantly popping up like svelte and choo, which are all considerably simpler than their predecessors and also faster and smaller. However, the main reason I think this is the case is that the web will eventually bake in much more of the functionality that is needed for modern applications such as web components. I think the abstractions will get lighter and lighter until they fade away. At least I hope this trend continues. 😜 Make a GitHub/Twitter account and follow everyone who's doing something cool. You have teachers all around you, and excellent open source software sets a standard to which you can eventually live up to. Don't sweat the stuff you don't know but try to be aware of it. Learn things when you need them and actively search out new solutions when you find that yours are lacking. Find something fun to build. It's far too easy for your day job to ruin programming for you. Try to find genuinely interesting and fun things and work on them when you have time. I'd love to hear more from Patrick Steele-Idem on all of the crazy optimizations available with MarkoJS and where the team thinks it's going. I hope a stable Rill integration is coming soon. 😄 I'm also constantly impressed by the quality of modules pumped out by Yoshua Wuyts and would be interested in his approach to building them. Rill is a lesser-known tool and I am always eager to receive community feedback. If anyone has any questions or just wants to chat, you can always find me on the gitter. Thanks SurviveJS for the interview and Rich Harris for the recommendation. Thanks for the interview Dylan! The approach Rill uses is refreshing and I hope people find it. Check out Rill site and Rill GitHub page to learn more about it.
https://survivejs.com/blog/rill-interview/
CC-MAIN-2018-30
refinedweb
1,431
64.51
I created these two classes to make changing the color of your Edit Box text and your Static text easy. I didn't need all the overhead of a CRichEditCtrl, but I did need to change the color of my text as well as the background color of the box. CStatic didn't have an easy way of changing the color of your text either. CRichEditCtrl CStatic These classes are derived from CEdit and CStatic. CEdit Include the files ColorEdit.cpp, ColorEdit.h and Color.h in your project if you are just working with Edit Boxes. If you want to incorporate colored static text also, you would add the files ColorStatic.cpp, ColorStatic.h. In your dialogs header file, add : #include "ColorEdit.h" #include "ColorStatic.h" //only if using colored static text. public: CColorEdit m_ebCtl; CColorStatic m_stText; //only if using colored static text. There are two ways you can associate your control ids with the classes. From now on, I will assume you are using both classes. In your dialogs .cpp file, add: void YourDlg::DoDataExchange(CDataExchange* pDX) { CDialog::DoDataExchange(pDX); //{{AFX_DATA_MAP(CYourDlg) //}}AFX_DATA_MAP DDX_Control(pDX, IDC_ST_TEXT, m_stText); DDX_Control(pDX, IDC_EB_CTL, m_ebCtl); } or: BOOL CYourDlg::OnInitDialog() { // TODO: Add extra initialization here m_ebCtl.SubclassDlgItem(IDC_EB_CTL,this); m_stText.SubclassDlgItem(IDC_ST_TEXT,this); } Now that this is finished, it is time to use the class. There are three functions available for Edit Boxes and two for Static Text. They are as follows: There are three functions available Currently: SetBkColor(COLORREF crColor) // Works for both classes SetTextColor(COLORREF crColor) // Works for both classes SetReadOnly(BOOL flag = TRUE) //This function is for CColorEdit only. In the file Color.h is the following code: // Color.h // Colorref's to use with your Programs #define RED RGB(127, 0, 0) #define GREEN RGB( 0,127, 0) #define BLUE RGB( 0, 0,127) #define LIGHTRED RGB(255, 0, 0) #define LIGHTGREEN RGB( 0,255, 0) #define LIGHTBLUE RGB( 0, 0,255) #define BLACK RGB( 0, 0, 0) #define WHITE RGB(255,255,255) #define GRAY RGB(192,192,192) These are just a few I picked out, but add as many colors as you need. Here is how easy it is to use: m_ebCtl.SetTextColor(BLUE); //Changes the Edit Box text to Blue m_ebCtl.SetBkColor(WHITE); //By default your background color is the //same as your system color(color of dialog) m_ebCtl.SetReadOnly(); //This makes it so nobody can edit the text. //If you disable the box it does not let you //change colors. m_stText.SetTextColor(RED); //Changes the Static Text to Red m_stText.SetBkColor(GREEN); //You probably will not use it, but it's here. I hope someone out there finds this useful :) This article has no explicit license attached to it, but may contain usage terms in the article text or the download files themselves. If in doubt, please contact the author via the discussion board below. A list of licenses authors might use can be found here. m_crBkColor = ::GetSysColor(COLOR_3DFACE); // Initializing the Background Color to the system face color. m_crTextColor = BLACK; // Initializing the text to Black m_brBkgnd.CreateSolidBrush(m_crBkColor); // Create the Brush Color for the Background. General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
https://www.codeproject.com/Articles/1035/Using-Colors-in-CEdit-and-CStatic?msg=4058437
CC-MAIN-2019-43
refinedweb
551
57.67
Navigate through code with the Visual Studio debugger The Visual Studio debugger can help you navigate through code to inspect the state of an app and show its execution flow. You can use keyboard shortcuts, debug commands, breakpoints, and other features to quickly get to the code you want to examine. Familiarity with debugger navigation commands and shortcuts makes it faster and easier to find and resolve app issues. If this is the first time that you've tried to debug code, you may want to read Debugging for absolute beginners and Debugging techniques and tools before going through this article. Get into "break mode" In break mode, app execution is suspended while functions, variables, and objects remain in memory. Once the debugger is in break mode, you can navigate through your code. The most common ways to get into break mode quickly is to either:. For example, from the code editor in Visual Studio, you can use the Run to Cursor command to start the app, debugger attached, and get into break mode, then F11 to navigate code. Once in break mode, you can use a variety of commands to navigate through your code. While in break mode, you can examine the values of variables to look for violations or bugs. For some project types, you can also make adjustments to the app while in break mode. Most debugger windows, like the Modules and Watch windows, are available only while the debugger is attached to your app. Some debugger features, such as viewing variable values in the Locals window or evaluating expressions in the Watch window, are available only while the debugger is paused (that is, in break mode). Note If you break into code that doesn't have source or symbol (.pdb) files loaded, the debugger displays a Source Files Not Found or Symbols Not Found page that can help you find and load the files. See Specify symbol (.pdb) and source files. If you can't load the symbol or source files, you can still debug the assembly instructions in the Disassembly window. Step through code The debugger step commands help you inspect your app state or find out more about its execution flow. Step into code line by line To stop on each statement while debugging, use Debug > Step Into, or press F11. The debugger steps through code statements, not physical lines. For example, an if clause can be written on one line: int x = 42; string s = "Not answered"; if( int x == 42) s = "Answered!"; Dim x As Integer = 42 Dim s As String = "Not answered" If x = 42 Then s = "Answered!" However, when you step into this line, the debugger treats the condition as one step, and the consequence as another. In the preceding example, the condition is true. On a nested function call, Step Into steps into the most deeply nested function. For example, if you use Step Into on a call like Func1(Func2()), the debugger steps into the function Func2. Tip As you execute each line of code, you can hover over variables to see their values, or use the Locals and Watch windows to watch the values change. You can also visually trace the call stack while stepping into functions. (For Visual Studio Enterprise only, see Map methods on the call stack while debugging). Step through code and skip some functions You may not care about a function while debugging, or you know it works, like well-tested library code. You can use the following commands to skip code while code stepping. The functions still execute, but the debugger skips over them. Run to a specific location or function You may prefer to run directly to a specific location or function when you know exactly what code you want to inspect, or you know where you want to start debugging. Run to a breakpoint in code To set a simple breakpoint in your code, click the far left margin next to the line of code where you want to suspend execution. You can also select the line and press F9, select Debug > Toggle Breakpoint, or right-click and select Breakpoint > Insert Breakpoint. The breakpoint appears as a red dot in the left margin next to the code line. The debugger suspends execution just before the line executes. Breakpoints in Visual Studio provide a rich set of additional functionality, such as conditional breakpoints and tracepoints. For details, see Using breakpoints. Run to a function breakpoint You can tell the debugger to run until it reaches a specified function. You can specify the function by name, or you can choose it from the call stack. To specify a function breakpoint by name Select Debug > New Breakpoint > Function Breakpoint In the New Function Breakpoint dialog, type the name of the function and select its language. Select OK. If the function is overloaded or in more than one namespace, you can choose the one you want in the Breakpoints window. To select a function breakpoint from the call stack While debugging, open the Call Stack window by selecting Debug > Windows > Call Stack. In the Call Stack window, right-click a function and select Run To Cursor, or press Ctrl+F10. To visually trace the call stack, see Map methods on the call stack while debugging. Run to a cursor location To run to the cursor location, in source code or the Call Stack window, select the line you want to break at, right-click and select Run To Cursor, or press Ctrl+F10. Selecting Run To Cursor is like setting a temporary breakpoint. Run to Click While paused in the debugger, you can hover over a statement in source code or the Disassembly window, and select the Run execution to here green arrow icon. Using Run to Click eliminates the need to set a temporary breakpoint. Note Run to Click is available starting in Visual Studio 2017. Manually break into code To break in the next available line of code in a running app, select Debug > Break All, or press Ctrl+Alt+Break. Move the pointer to change the execution flow While the debugger is paused, a yellow arrowhead in the margin of the source code or Disassembly window marks the location of the next statement to be executed. You can change the next statement to execute by moving this arrowhead. You can skip over a portion of code, or return to a previous line. Moving the pointer is useful for situations such as skipping a section of code that contains a known bug. To change the next statement to execute, the debugger must be in break mode. In the source code or Disassembly window, drag the yellow arrowhead to a different line, or right-click the line you want to execute next and select Set Next Statement. The program counter jumps directly to the new location, and instructions between the old and new execution points aren't executed. However, if you move the execution point backwards, the intervening instructions aren't undone. Caution - run-time checks enabled, setting the next statement can cause an exception to be thrown when execution reaches the end of the method. - When Edit and Continue is enabled, Set Next Statement fails if you have made edits that Edit and Continue cannot remap immediately. This can occur, for example, if you have edited code inside a catch block. When this happens, an error message tells you that the operation is not supported. - In managed code, you cannot move the next statement if: - The next statement is in a different method than the current statement. - Debugging was started by Just-In-Time debugging. - A call stack unwind is in progress. - A System.StackOverflowException or System.Threading.ThreadAbortException exception has been thrown. Debug non-user code By default, the debugger tries to debug only your app code by enabling a setting called Just My Code. For more details about how this feature works for different project types and languages, and how you can customize it, see Just My Code. To look at framework code, third-party library code, or system calls while debugging, you can disable Just My Code. In Tools (or Debug) > Options > Debugging, clear the Enable Just My Code check box. When Just My Code is disabled, non-user code appears in the debugger windows, and the debugger can step into the non-user code. Note Just My Code is not supported for device projects. Debug system code If you have loaded debugging symbols for Microsoft system code, and disabled Just My Code, you can step into a system call just as you can any other call. To load Microsoft symbols, see Configure symbol locations and loading options. To load symbols for a specific system component: While you're debugging, open the Modules window by selecting Debug > Windows > Modules, or pressing Ctrl+Alt+U. In the Modules window, you can tell which modules have symbols loaded in the Symbol Status column. Right-click the module that you want to load symbols for, and select Load Symbols. Step into properties and operators in managed code The debugger steps over properties and operators in managed code by default. In most cases, this provides a better debugging experience. To enable stepping into properties or operators, choose Debug > Options. On the Debugging > General page, clear the Step over properties and operators (Managed only) check box.
https://docs.microsoft.com/en-gb/visualstudio/debugger/navigating-through-code-with-the-debugger?view=vs-2019
CC-MAIN-2020-29
refinedweb
1,562
61.67
In this exercise/project, I am asked to write a function to determine how much it would cost to ship a package of a certain weight using ground shipping. The instructions are as follows: Write a function for the cost of ground shipping. This function should take one parameter, weight, and return the costof shipping a package of that weight. The instructions include the word “cost” written in code, which makes me believe that I should be returning a variable called “cost” - am I reading this correctly? And, if so, why is it necessary to create a variable within the function to return? It seems like more work to me. Here is the code I wrote for this project, and it worked just fine - even though I didn’t create a variable called “cost”: def ground_shipping(weight): if weight <= 2: return 20 + (1.5 * weight) elif weight <= 6: return 20 + (3 * weight) elif weight <= 10: return 20 + (4 * weight) else: return 20 + (4.75 * weight)
https://discuss.codecademy.com/t/why-should-i-return-the-cost-variable-in-this-project/456895
CC-MAIN-2020-16
refinedweb
165
76.56
Closed Bug 1121297 Opened 6 years ago Closed 6 years ago Make Volatile Buffer threadsafe Categories (Core :: Memory Allocator, defect) Tracking () mozilla38 People (Reporter: seth, Assigned: seth) References Details Attachments (2 files, 7 obsolete files) We are currently experiencing a lot of bugs in ImageLib because, as we move things off-main-thread, we're starting to access VolatileBuffer objects from multiple threads at once. Unforunately, VolatileBuffer isn't threadsafe, and we can't easily work around the issue in ImageLib for a variety of reasons. The best solution is to make VolatileBuffer threadsafe, especially since ImageLib is the only place where VolatileBuffers are used right now. This patch moves VolatileBuffer into libxul, which will make writing part 2 a lot easier since we can use a platform-agnostic mutex. It's worth reviewing |volatilebuffer/moz.build| carefully, as there were a couple of things I copied from |mozalloc/moz.build| without knowing what they do. Attachment #8548592 - Flags: review?(mh+mozilla) This patch adds a mutex to VolatileBuffer and updates all of the implementations to use it. Attachment #8548593 - Flags: review?(mh+mozilla) Given that there are different implementations per-platform and this code is touched by anything that draws images, I think we're going to need a pretty complete try job on this one: Comment on attachment 8548592 [details] [diff] [review] (Part 1) - Move VolatileBuffer into libxul Review of attachment 8548592 [details] [diff] [review]: ----------------------------------------------------------------- Don't copy Makefile.in, you don't need DIST_INSTALL in that directory. ::: memory/volatilebuffer/moz.build @@ -68,4 @@ > > TEST_DIRS += ['tests'] > > GENERATED_INCLUDES += ['/xpcom'] This can probably be removed. @@ -69,5 @@ > TEST_DIRS += ['tests'] > > GENERATED_INCLUDES += ['/xpcom'] > > DISABLE_STL_WRAPPING = True This can definitely be removed. @@ -71,5 @@ > GENERATED_INCLUDES += ['/xpcom'] > > DISABLE_STL_WRAPPING = True > > if CONFIG['CLANG_CXX'] or CONFIG['_MSC_VER']: You should check if this condition can be lifted in either memory/volatilebuffer or memory/mozalloc. ::: moz.build @@ +43,5 @@ > > DIRS += [ > 'mozglue', > 'memory/mozalloc', > + 'memory/volatilebuffer', make it memory/volatile. Attachment #8548592 - Flags: review?(mh+mozilla) → review+ Comment on attachment 8548593 [details] [diff] [review] (Part 2) - Make VolatileBuffer threadsafe Review of attachment 8548593 [details] [diff] [review]: ----------------------------------------------------------------- ::: memory/volatilebuffer/VolatileBuffer.h @@ +41,5 @@ > */ > > namespace mozilla { > > +class MOZALLOC_EXPORT VolatileBuffer Ah, here is something that should be done in part 1: remove MOZALLOC_EXPORT from these files. Attachment #8548593 - Flags: review?(mh+mozilla) → review+ Thanks for the reviews! I'll make those changes. Addressed review comments. Rebased version of part 2. Let's double check that this still builds everywhere, since I changed the warnings-as-errors behavior: I've attempted to switch the tests over to GTest to fix the build errors on that try job, but I haven't had any luck getting the build system to pick the new tests up. OK, this version should address the issues in the previous version of part 3. Attachment #8549416 - Flags: review?(mh+mozilla) Attachment #8549416 - Attachment description: 1121297-switch-the-volatilebuffer-test-to-use-gtest.patch → (Part 3) - Switch the VolatileBuffer tests to use GTest Here's a new try job: Updated this patch to check the return value of posix_memalign / moz_posix_memalign, because not doing so led to a warning that caused the valgrind build to fail. One more try job, to make sure that the valgrind build now works correctly. (And that I haven't inadvertently busted anything with the change in comment 13.) Comment on attachment 8549416 [details] [diff] [review] (Part 3) - Switch the VolatileBuffer tests to use GTest Review of attachment 8549416 [details] [diff] [review]: ----------------------------------------------------------------- This should be folded with part 1. Attachment #8549416 - Flags: review?(mh+mozilla) → review+ Thanks for the review! Part 3 folded into part 1 as requested. Pushed: remote: remote: Backed out in for Cpp unittest bustage: Flags: needinfo?(seth) Looks like this needs to clobber. Pushed again now that inbound has reopened: remote: remote: Flags: needinfo?(seth) Status: NEW → RESOLVED Closed: 6 years ago Resolution: --- → FIXED Target Milestone: --- → mozilla38 Comment on attachment 8550597 [details] [diff] [review] (Part 1) - Move VolatileBuffer into libxul Approval Request Comment [Feature/regressing bug #]: Bug 1116733 is what made this issue visible, though it existed before. [User impact if declined]: Image textures suddenly disappearing or becoming corrupt despite being locked. Affects any platform where we use volatile buffers - Android, B2G, OS X, Windows. [Describe test coverage new/current, TBPL]: Will have been on central for 4 days by the time it gets uplifted; plenty of time to detect any issues for such a simple change. This feature has test coverage. [Risks and why]: Low risk. [String/UUID change made/needed]: None. Attachment #8550597 - Flags: approval-mozilla-aurora? Comment on attachment 8549149 [details] [diff] [review] (Part 2) - Make VolatileBuffer threadsafe Part 2 needs to come along as well. (It does the real work!) Attachment #8549149 - Flags: approval-mozilla-aurora? Tracking 37 as I want to be alerted if this bug is reopened. status-firefox36: --- → unaffected status-firefox37: --- → affected status-firefox38: --- → fixed tracking-firefox37: --- → + Comment on attachment 8550597 [details] [diff] [review] (Part 1) - Move VolatileBuffer into libxul It's early enough in Aurora to take a change like this. Aurora+ Attachment #8550597 - Flags: approval-mozilla-aurora? → approval-mozilla-aurora+ Flags: in-testsuite+
https://bugzilla.mozilla.org/show_bug.cgi?id=1121297
CC-MAIN-2020-29
refinedweb
849
56.76
TMPFS(5) Linux Programmer's Manual TMPFS(5) tmpfs - a virtual memory filesystem size option can be used to specify an upper limit on the size of the filesystem. (The default size is half of the available RAM size.)). The tmpfs facility was added in Linux 2.4, as a successor to the older ramfs facility, which did not provide limit checking or allow for the use of swap space. For a description of the mount options that may be employed when mounting a tmpfs filesystem, see mount(8).. df(1), du(1), mount(8) The kernel source file Documentation/filesystems/tmpfs.txt. This page is part of release 4.11 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. Linux 2017-05-03 TMPFS(5) Pages that refer to this page: fallocate(2), ioctl_userfaultfd(2), lseek(2), madvise(2), memfd_create(2), mmap(2), remap_file_pages(2), swapon(2), shm_open(3), filesystems(5), proc(5), cgroups(7), keyrings(7), user_namespaces(7)
http://man7.org/linux/man-pages/man5/tmpfs.5.html
CC-MAIN-2017-26
refinedweb
177
65.01
0 Hello I am reather new in c++ and I try to solve some exercises but i got blocked. Can anyone help me with this? The program should read in values until a negative value is read in. It should determine the largest value that was read in and display that value once the loop stops. This program involves indefinite iteration using a sentential controlled loop. The sentential value is any negative integer. The fact that negative integers are sententials should help you determine how to initialize the variable that contains the largest integer. I wrote this code but it gives me only the result 0: #include<iostream> using namespace std; int main () { int numar, biggest =0; cout<<"insert an integer "; while( numar > 0) cin>>numar; numar=biggest; if(numar<biggest) { numar = biggest; } cout<<biggest; system("pause"); return 0; } and just don'tknow what is wrong
https://www.daniweb.com/programming/software-development/threads/310724/exercise
CC-MAIN-2017-26
refinedweb
146
62.58
Date pickers and calendar popup are a great way for a user to accurately select the date they require. The main problem with most date pickers is that they take an eternity to popup a new window and initialize, this has a negative effect and the user tends to avoid the popup and enter the date manually which may induce errors. In this article, I�ll make use of one of the best calendars on the web today � Microsoft�s calendar.htc and expose it in a more efficient manner. In my last article, I had demonstrated how to use DHTML behaviours to expose common functionality as a CSS style and briefly talked about element behaviours. The Microsoft calendar.htc is an element behaviour that exposes a fully functional DHTML calendar. Microsoft�s calendar is great, and apart from adding a double click event and a bit of styling, I�ve left it intact. Instead of changing the calendar, I decided to create an element behaviour and embed the calendar.htc into it, this way we leave the calendar relatively untouched. The wrapper will expose the calendar in the form of a popup; the popup object is a special type of overlapped window that is very much like a div/ layer, but unlike the layer it hides itself when losing the focus and can go over select elements. Here are the main benefits of the control: First of all, we need to declare our namespace, this is to ensure that the calendar element has a unique qualifier. The HTML element has a xmlns attribute that declares the namespace astutemedia. Setting this attribute allows us to use the prefix astutemedia in the document. The next step is to import the calendar element into the namespace with the import directive. <html xmlns:astutemedia> <head> <?import namespace="astutemedia" implementation="CalendarPopup.htc"> </head> The import directive is really the key to implementing an element behaviour in the primary document. When the browser begins processing the � import� directive, it waits until the contents of the HTC file have been downloaded completely before continuing. The way this instruction is processed is the reason why the behaviour is bound synchronously to the custom element. Element behaviours are implemented the same as ASP.NET server controls, but all of the processing is done on the client. As you can see from the code snippet below, " Calendar1" has a target attribute that references a textbox, which gets populated by the selected date. The second element doesn�t provide a target but gets the selected date from the onDateSelected event via the selectedDate event attribute. <input class="FormTextBox" id="Date1" type="text" maxlength="10" name="Date1"> <astutemedia:calendar</astutemedia:calendar> <astutemedia:calendar </astutemedia:calendar> The solution has four files: The Calendar.htc behaviour has one minor change that allows us to double click on a date to select it. This is achieved by adding a custom event called OnCalendarDblClick. <public:event This gets called when the inner table of the calendar gets double clicked. We reference the innerTableElem and attach the DblClick function to the ondblclick event. window.document.all.innerTableElem.attachEvent("ondblclick", DblClick); The function� function DblClick() { var oEvent = createEventObject(); onCalendarDblClick.fire(oEvent); } We make use of this event in the Calendar.htm file. On the OnCalendarDblClick event, we call a function called CloseCalendar which exposes the value of the selected date. oncalendardblclick="CloseCalendar()" This function is strictly necessary, as we could have called Unload function from the CalendarPopup.htc directly. But adding this function allows readability. function CloseCalendar() { var val = Calendar.value; var id = parent.document.body.children[0].ParentId; parent.parent.document.getElementById(id).Unload(val); } The CalendarPopup.htc holds the main functionality. It exposes the calendar as a custom element and produces the date picker as a popup. We first start off by declaring the component. <public:component <public:defaults <public:property <public:property <public:event <public:method <public:attach </public:component> As you can see from the above code sample, we expose a Target property and give it a default property of an empty string. We also expose an event called onDateSelected, which gets called when the popup unloads. The Unload method gets called by the CloseCalendar function in the Calendar.htm file. We then attach the � Init� function to an event on the holding page (the page where we want to use the behaviour), the event is onContentReady, this event gets fired when the content on the holding page is fully loaded. The Init function first checks to see if the Target property is set, and if so attaches the onDblClick event to the element. This will allow the double click of the element to show the calendar. We then create the popup itself and populate it with the Calendar.htm file. function Init() { // Check to see if there is a target element. if(Target != null && Target != "") { // Add a double click to the target elem, to show the calendar winDoc.getElementById(Target).attachEvent("ondblclick", ShowPopup); } // Create a popup object popup = win.createPopup(); popupBody = popup.document.body; // Add the scriptlet to the popups body. popupBody.innerHTML = "<object id='cal' width='100%' ParentId='" + id + "' height='100%' type='text/x-scriptlet' data='Calendar.htm'></object>"; } The Unload function fires the onDateSelected event and if a target exists, populate it with the selected date value. It then hides the popup. function Unload(val) { // Create a new event. var e = createEventObject(); // Expose the selected value with the event. e.selectedDate = val; // Fire the event. onDateSelected.fire(e); if(Target != null && Target != "") { // Find the target in the parent document and set the value. winDoc.getElementById(Target).value = val; } // Hide the popup. popup.hide(); } The last function is the ShowPopup function; this function shows the popup when the the calendar icon is clicked or the target element is double clicked. It positions the popup 22 pixels from where the click/double-click occurred. It also makes sure that it doesn�t conflict with the boundaries of the browser window. function ShowPopup() { var wEvent = win.event; var winDocBody = winDoc.body; var popupWidth = 320; var popupHeight = 250; // Set the x and y from where the mouse clicks. x = wEvent.x + 22; y = wEvent.y - 22; // Check to see if the popup goes out of bounds. if (x + popupWidth > winDocBody.clientWidth) x = wEvent.x - (popupWidth + winDocBody.scrollLeft + 22); else x += winDocBody.scrollLeft; if (y + popupHeight > winDocBody.offsetHeight) y = wEvent.y - (popupHeight + winDocBody.scrollTop + 22); else y += winDocBody.scrollTop; popupBody.style.border = "1px solid #333333"; // Show the popup. popup.show(40, -80, popupWidth, popupHeight, document.body); } Finally, we output our calendar icon and add an onClick event to show the popup. <body id="TheBody"> <img src="Calendar.gif" width="16" height="15" style="cursor:hand" onclick="ShowPopup()" title="Click to show calendar" align="absMiddle"> </body> This behaviour makes calendars and date pickers a quick and easy solution to a very common problem. Element behaviours are a great way to implement and encapsulate common functionality and are very quick to use. If you are using ASP.NET, you can wrap the element behaviour in a server control to expose the selected date on the server side. General News Question Answer Joke Rant Admin
http://www.codeproject.com/KB/scripting/CalendarPopup.aspx
crawl-002
refinedweb
1,197
50.12
From: Alan Gutierrez (alan-boost_at_[hidden]) Date: 2005-11-01 02:36:59 * Stefan Seefeld <seefeld_at_[hidden]> [2005-10-31 22:47]: > Some years ago I proposed a XML-related APIs for inclusion > into boost (). > Everybody agreed that such APIs would be very useful as part of boost. > Unfortunately, though, after some weeks of discussing a number of details, > I got distracted, and so I never managed to submit an enhancement. > > I'v now started to look into this topic again, and wrote down a start > of a DOM-like API as I believe would be suitable for boost. > > Here are some highlights: > > * User access dom nodes as <>_ptr objects. All memory management is > hidden, and only requires the document itself to be managed. > > * The API is modular to allow incremental addition of new modules with > minimal impact on existing ones. This implies that implementations > may only support a subset of the API. (For example, no validation.) > > * All classes are parametrized around the (unicode) string type, so > the code can be bound to arbitrary unicode libraries, or even std::string, > if all potential input will be ASCII only. > > * The Implementation uses existing libraries (libxml2, to be specific), > since writing a XML DOM library requires substantial efford. > > A first sketch at a XML API is submitted to the boost file vault under > the 'Programming Interfaces' category. It contains demo code, as well > as some auto-generated documentation. > > I'm aware that this requires some more work before I can attempt > a formal submission. This is simply to see whether there is still > any interest into such an API, and to get some discussion on the > design. I'm going to respond off-the-cuff, so excuse me if what I mention is covered in your sketch. Simply, the Java APIs have moved away from W3C DOM. In that langauge, developers have moved to JDOM, DOM4J, or XOM, for node surgery. The W3C DOM predates namespaces, and namespaces feel kludgy. It permits the construction of documents that are unlikely in the wild. Most documents conform to XML Namespaces. Of those alterate object models noted above, only DOM4J separates interface from implementation as rigidly as W3C DOM, using the factory pattern to create all nodes. More recent object models in Java like XMLBeans move away from modeling XML as a tree of nodes connected by links, and instead models XML as a target node, with a set of axis that are traversed by iterators, rather than node references. This model is the most C++ like. There are also document object models coming out of XPath and XSLT that are not as well known but are all axis based. Saxon's NodeInfo model, Jaxen's Navigator model, and Groovy's GPath model. All of these models are immutable. They support transformations and queries. For many applications this is all that is necessary. XQuery, XSLT, and XPath all generate new documents from immutable documents. The need for document surgery for many in memory applications is not as common as one might think. Transformation is often easier to express. I'd suggest, in any language wide implementation of XML, to attempt to separate transformation and query, from update. They are two very different applications. I'd suggest starting with supporting XML documents that conform to the XPath and Query data model, and working backwards as the need arises. It makes for a much more consice library, and removes a lot of methods for rarely needed, often pathalogical, mutations. Implementing an object model would be much easier, if you ipmlement the 95% that is most frequently used. And if you sepearate the compexity of document mutation from the realative simplicity of iteration and transformation. Cheers. -- Alan Gutierrez - alan_at_[hidden] - Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2005/11/96131.php
CC-MAIN-2019-51
refinedweb
644
56.66
My program is due today and I'm stuck at work trying to figure this out I have been using the web to get this done. Instructions: Write a program that adds the positive odd numbers you enter from the keyboard while ignoring the even numbers, and stops when a negative number or zero are entered. Display the sum of the accepted odd numbers. This is what I have. #include <iostream> using namespace std; int main(void) { int N = 0; int sum = 0; cout << "Enter a whole number:" << endl; cin >> N; if ( N < 0 ) { cout << "The number was not within the proper range." << endl; return 1; } for( int i = 1; i <= N; i = i + 2){ sum = sum + i; } cout << "The total sum is: " << sum << endl; return 0; } I'm just not cofident at all about this!
https://www.daniweb.com/programming/software-development/threads/360913/my-program-doesnt-seem-right-sum-of-odd-numbers
CC-MAIN-2018-26
refinedweb
136
74.32
Hi, I just wasted half day to get rid of a problem, which I, due to lack of experience, have to describe as 'random mod_python lockups'. Turns out they were caused because I was using Sessions, and Sessions lock until the end of the request by default. The 'lockup' occurs when sending concurrent requests to the server, which happens quite frequently when users open a lot of tabs on the page or - worse - download a file via mod_python. The culprit code was actually a subroutine like this: def _get_user_id(req): s = Session.Session(req) if s.is_new(): s['id'] = _generate_unique_id() s.save() return s['id'] Now, first of all I don't want to complain about locking here. The fact that I noticed it so late, checked everywhere but there for a problem, just shows how much I know about locking issues. So it's a good thing that it is taken care of by mod_python. However, I wonder if mod_python could not have handled this case in a smarter way. After all, the session is only used in a separate subroutine. The session object does not even exist anymore after it returns. Yet it may still take quite some time until the request completes. If it is possible in Python when an object has no more references (I'm new to Python so I'm not sure), I would prefer if it could unlock the Session immediately. It would be safer than letting me doing the locking myself, because I could accidentally unlock at a time where it is not safe (because the reference to the session object is still around and could be accessed). Regards Andreas Klauer
https://modpython.org/pipermail/mod_python/2006-October/022326.html
CC-MAIN-2022-21
refinedweb
279
71.34
I'm having a problem with my linked list. I want to add to the beginning of the list and delete from the beginning of the list. I get an AccessViolation which is coming from where I print out the list. I also don't think it's adding elements to the linked list, it just overwrites whats there already. I don't know if the delete function works but when I run it, I get a NullPointer Exception. #include <StdAfx.h> #include <stdio.h> #include <stdlib.h> typedef struct Node{ // This struct is complete. Do not change it. int num; struct Node *next; } Rec; void main(){ // Complete this function int x = 0; int y = 0; int k; Rec *top, *freeNode, *current, *temp; top = NULL; while(x != 3){ printf("Enter 1 to push a number, 2 to pop, and 3 to quit: "); scanf("%d", &x); switch(x){ case 1: freeNode = (Rec*)malloc(sizeof(Rec)); printf("Enter an integer to push: "); scanf("%d", &freeNode -> num); if(top ==NULL) { top = freeNode; current = freeNode; } else { current -> next = freeNode; top = freeNode; } temp = top; while(temp != NULL) { printf("%d", temp -> num); temp = temp -> next; } case 2: current = top; if (current == NULL) printf("List is Empty"); top = current -> next; free(current); } } Thanks in advance!
https://www.daniweb.com/programming/software-development/threads/476513/question-about-linkedlist-in-c
CC-MAIN-2020-34
refinedweb
208
81.02
Mar 22, 2012 03:58 AM|vchany|LINK Hi all, I am getting an error "No overload for method 'add' takes '2' arguments" for the lines delete.add. Can someone please tell me what is the right way to use the delete.add ? Thanks in advance PROJ_TABLE proj_Table = new PROJ_TABLE(); Delete delete = new Delete("PROJ_TABLE"); delete.add("CODE", proj_Table.CODE); delete.add("NAME",proj_Table.NAME); delete.add(Expression.eq("CODE", proj_Table_AG.CODE)); int ret = DB.execute(delete); return ret; Mar 22, 2012 06:15 AM|Mauro_net|LINK I might be a real jerk trying to help you on this matter but I've never seen before such sintaxis. Are you using a framewok? like hibernate? what do you see when you navigate to delete.add's definition ? (F12 over the "add" word) where is detele() object defined? Mar 22, 2012 08:54 AM|vchany|LINK Sorry! It came from a user defined class. I have been going through the codes and still cannot see why it would need 2 arguments based on my first post.. strange. public class Delete { String from; AndList where = new AndList(); public Delete(String table) { from = table; } public Delete add(Expression e) { if (e != null) where.add(e); return this; } public Delete add(OrList e) { if (e != null && e.count > 0) where.add(e); return this; } public Delete add(AndList e) { if (e != null && e.count > 0) where.add(e); return this; } public override string ToString() { StringBuilder str = new StringBuilder(); str.Append("DELETE FROM ").Append(from); if (where.count > 0) { str.Append(" WHERE "); str.Append(where); } return str.ToString(); } } All-Star 122739 Points Moderator Mar 22, 2012 04:00 PM|MetalAsp.Net|LINK 4 replies Last post Mar 22, 2012 04:00 PM by MetalAsp.Net
http://forums.asp.net/p/1783401/4893991.aspx?Re+Deleting+a+row+from+the+database+Oracle+DataAccess+
CC-MAIN-2014-10
refinedweb
290
78.04
Recursive code can be very confusing and frustrating at first, but is a necessity if you are trying to accomplish certain tasks in an expandable, decoupled way. I recently needed to make a function that could build a multi-dimensional array from a form. The problem is that the depth of the array is completely fluid as the form is made by nested “rules” which play off each other, allowing the user to build a very complex rule set. Without going into the details of of the proprietary code for my company, let’s take a look at the technique I chose and how you can use passing variables by reference to build a recursive loop that can go as deep as the array needs (limited only by server resources). Hopefully this technique will bring you a little clarity! Background The form data that gets passed into the function is used as data to implement an abstract class that can nest. As the nesting occurs, the object within the data needs to be able to to take in the sub data, causing it to get deeper and deeper. I decided to pass this “depth” in the form by using a special character (an underscore in this case) in the form field’s name. The first portion of the form field’s name can be considered a “namespace” of the top level name of the variable. Field names are long strings that are basic subcomponents separated by underscores. For example, “namespace_buttons_0_name=Test 1” should translate to this: $namespace = [ 'buttons'=>[ 0=>[ 'name'=>'Test 1' ] ] ]; If you put a couple of these types of fields together, you can end up making a pretty large array: “namespace_buttons_0_name=Test 1&namespace_buttons_0_color=blue&namespace_buttons_0_name=Test 2&namespace_buttons_0_color=red” becomes: $namespace = [ 'buttons'=>[ 0=>[ 'name'=>'Test 1', 'color': 'blue' ], 1=>[ 'name'=>'Test 2', 'color': 'red' ] ] ]; The deeper you go with this object, the more convoluted the field names become. However, there are some good ways to parse these, so let’s break it down! The PHP The first thing we will want to do is instantiate our base variable. Then, we can start iterating over the POST array to find any fields that are part of this “namespace.” We can use explode to break up the key into a simple array of components, checking the first one for the namespace to make sure we aren’t processing a form field that is not part of our array. $namespace = []; foreach ($_POST as $key=>$value){ $keyComponents = explode('_', $key); if ($keyComponents[0]==='namespace'){ //Do something with it - 'cause this is one of the "namespace" fields! } } Now we need a plan to iterate over the sublevels, creating unset subcomponents as arrays, and setting the final component as the value. As we drop down into these sublevels, we will need a way to keep track of where in the variable we should be. If we are going to allow this array to have unlimited nesting, then we are going to have to do this with a second variable, which is a pointer to the current spot in the keyComponents we are processing. Let’s call this pointer “mySpot” for now. $namespace = []; foreach ($_POST as $key => $value){ $keyComponents = explode('_', $key); if ($keyComponents[0] === 'namespace') { $keyComponentCount = count($keyComponents) - 1; $mySpot = &$namespace; foreach ($keyComponents as $keyComponentDepth=>$keyComponent){ if ($keyComponentDepth > 0){ //skip the namespace //ToDo } } } } As we iterate over the keyComponents, we need to skip the first one as it is the namespace. For everything else we can just check to see if we are at the end of the keyComponents (by comparing the depth to the count) and either set the subarray (at mySpot) to an array or the value. $namespace = []; foreach ($_POST as $key => $value){ $keyComponents = explode('_', $key); if ($keyComponents[0] === 'namespace') { $keyComponentCount = count($keyComponents) - 1; $mySpot = &$namespace; foreach ($keyComponents as $keyComponentDepth=>$keyComponent){ if ($keyComponentDepth > 0){ //skip the namespace if (!isset($mySpot[$keyComponent])){ if ($keyComponentCount == $keyComponentDepth) { //the last one! $mySpot[$keyComponent] = $value; } else { $mySpot[$keyComponent] = []; } } $mySpot = &$mySpot[$keyComponent]; } } } } Each time we go a level deeper, we set mySpot to the new current level so the next iteration tacks on to the existing array. Technically this is not a recursive function as it is not a function at all and doesn’t call itself for each level down. I find this style of recursive programming to be slightly more readable than true recursive functions, but I’m sure that is just a preference! Hopefully this technique will help you to find your own way to dig into recursive processing! Let me know your thoughts in the comments below!
https://engagedphp.com/2018/03/recursive-code/
CC-MAIN-2018-47
refinedweb
765
54.46
So far, we have been adding 3D models to an XBAP WPF application. We exported the models from Blender and 3D Studio Max and we were able to include them in a 3D scene. However, we want to do this using Silverlight, which does not have official support for 3D XAML models. How can we create 3D scenes using real-time rendering in Silverlight 3? We can do this using a 3D graphics engine designed to add software based real-time rendering capabilities to Silverlight. We have two excellent open source alternatives for this goal: System.Windows.Media.Media3Dnamespace from WPF. It offers a subset of WPF 3D capabilities and it offers a very fast ... No credit card required
https://www.oreilly.com/library/view/managing-data-and/9781849685641/ch11s07.html
CC-MAIN-2019-26
refinedweb
119
67.04
Sep 18, 2008 04:35 PM|LINK maartenbaCan you try putting the handler in the root of the website? That didn't work either. I think it might have something to do with the Dependency Injection I'm using. I'll keep looking. I'm determined to figure this out! [:)] Thanks. Aug 07, 2009 12:46 AM|LINK FYI: I had this same problem and just solved it by adding an IgnoreRoute: public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.IgnoreRoute("Custom.sfx"); } Provided classic/integration httpHandlers/handlers is setup properly and no authentication is blocking you it should be good to go. Feb 12, 2010 04:16 PM|LINK I was up against this problem last night, and here is how I solved it I added this to my Global.asax file, before I defined my routes : routes.IgnoreRoute("{filename}.ashx"); I added this to my <httpHandlers> node in my web config: <add verb="*" path="*ImgHandler.ashx" type="AMSignsUI.Views.Shared.ImgHandler, AMSignsUI"/> where : AMSignsUI.Views.Shared = the namespace in which I created the handler and AMSignsUI = my application/project name I added this to my <handlers> node in <system.webServer> <add name="ImgHandler" path="*ImgHandler.ashx" verb="*" type="AMSignsUI.Views.Shared.ImgHandler, AMSignsUI"/> Everything worked fine after I completed these steps. Note, the last step applies only if you are using IIS7 12 replies Last post Feb 12, 2010 04:16 PM by N!cky
http://forums.asp.net/p/1320309/3334397.aspx/1?Re+How+to+use+a+custom+HttpHandler+in+MVC+
CC-MAIN-2013-20
refinedweb
241
58.89
Name | Synopsis | Description | Parameters | Errors | Examples | Environment Variables | Attributes | See Also #include <slp.h> SLPError SLPEscape(const char *pcInBuf, char** ppcOutBuf, SLPBoolean isTag); The SLPEscape() function processes the input string in pcInbuf and escapes any SLP reserved characters. If the isTag parameter is SLPTrue, it then looks for bad tag characters and signals an error if any are found by returning the SLP_PARSE_ERROR code.. It must be freed using SLPFree() when the memory is no longer needed. When true, checks the input buffer for bad tag characters. This function or its callback may return any SLP error code. See the ERRORS section in slp_api(3SLP). The following example shows how to convert the attribute tag ,tag-example, to on the wire format: SLPError err; char* escaped Chars; err = SLPEscape(",tag-example,", &escapedChars, SLP_TRUE);
http://docs.oracle.com/cd/E19253-01/816-5170/6mbb5et3e/index.html
CC-MAIN-2014-15
refinedweb
133
55.34
Ensuring guidelines: - Wall off access to volumes in Kubernetes by creating namespaces that define your trust boundaries - Prevent pods from accessing volume mounts on worker nodes by creating an appropriate Kubernetes pod security policy - Restrict volume access to appropriate worker nodes by specifying a security policy through Trident that is appropriate for each backend We get asked questions about security regularly, below are some of the most common questions, however if you have others, please reach out to us using the comments for this post, Slack, or any of our other communications mechanisms. Can access to a persistent volume be restricted to one pod/container? Persistent Volumes (PVs) managed by Trident are created when a Persistent Volume Claim (PVC) is submitted by the application. This triggers Trident to create the volume on the storage system. PVs are global objects, however, PVCs belong to a single namespace. Only an administrator, or Trident because of the permissions granted to the service account it is using, are able to manage PVs. Why is this important? Namespaces are logical boundaries for most resources in Kubernetes. They are a security domain, with the assumption that everything in a namespace can access everything else within it. However, a user or application is prevented from using resources in a different namespace. For example, a pod in one namespace cannot use a PVC in another, even if the user has access to both. Additionally, a PV that is already bound to one PVC cannot be bound to another, regardless of namespace. This means that even if a user attempts to craft a PVC which claims an existing PV from a different namespace, it will fail. When using Trident, the PV and PVC are destroyed at the same time by default. This behavior can be changed so that PVs are retained, but a PV that was bound to a PVC once and then unbound can never be bound again. So, to answer the question: no, an individual PV/PVC cannot be limited to a single pod, however, PVCs are limited to a single namespace in the same way that other resources are. Can a pod see other volumes mounted to a host, and/or see what storage is presented from the array? If a user in a pod were to execute the “ showmount -e” command, or the iSCSI equivalent, against the storage system providing volume to the Kubernetes cluster, they are able to see the list of exports. However, as was stated above, they cannot gain access to another volume from inside a pod. In order to mitigate this situation, the storage system volume access control policy, whether igroups, volume access groups, or export policies, should be restricted to only nodes in the Kubernetes cluster. This prevents mounting the volume from hosts outside of the Kubernetes cluster and bypassing the security controls in place. Additionally, disable “showmount” functionality for the SVM. Can pods on the same node, but from a different namespace, gain access to a mounted volume? No, with one exception: privileged containers. The process in the pod/container on the Kubernetes node does not have the ability to see resources on the system other than what they have been assigned, this is core Linux namespace functionality used by all containers. A user, or an application, do not pose any threat to other volumes by issuing fdisk or mount commands. If privileged containers are an issue, how do we protect against them? The configuration of the storage system and the Kubernetes cluster should be hardened to further ensure that volumes are neither visible, nor accessible, except by the pod which they are assigned to. - Do not allow privileged pods. To be clear, privileged pods are the only method where the potential to “escape” to the host and access storage not assigned is possible. It is very important to prevent the use of privileged pods by unauthorized applications. Standard containers have no ability to mount volumes (NFS or iSCSI), regardless of whether the user inside the container is root or not. Pod security policies should be used to prevent privileged containers from being created by user applications. See the second example policy here for a method of preventing users from creating privileged pods. Limit application users to specific namespaces which belong to them, with no cluster-level permissions. Remember, only an administrator, or Trident, can manage PVs, users cannot create, manage, destroy, or assign PVs to themselves or anyone. In fact, only cluster administrators can view PV details, users can’t view the details for their own PVs. - Prevent pods from directly accessing the storage network using network policies. This stops attempts to garner information about the storage by pods, e.g. showmountfunctionality, before it even has a chance to succeed. Network policies, applied on a per-namespace basis, or using the host firewall to block access by the pod network to the storage network is a simple and easy way to deny access to entire network segments. Will creating a volume with a specific UID and GID help protect the data? No, it really won’t provide additional protection for Kubernetes-based applications. The assumption here is that the volume has the userID (uid) and groupID (gid) specified and the Unix permissions are set to something like “ 700” (Note: Trident does not support setting uid and gid but does allow Unix permissions to be customized). Additionally, the pod is using a security context which specifies matching uid and gid values. Logically, this means that because the uid/gid of the process and the volume all match, access is granted. If the uid/gid don’t match, then even if the volume is mounted the pod would not be able to access the data. Kubernetes even enables the administrator to limit a namespace to specific uid and gid values to prevent the user in a namespace from attempting to use another namespace’s user information. So, why doesn’t that protect the data? NFSv3 assumes that the client has done the authentication/authorization, the values can be arbitrarily specified and no validation is done by the NFSv3 server. This means that any pod (in the same namespace) could use the uid/gid associated with the volume and gain access. Kerberos could solve some of these issues because the NFS server participates in the authorization process, thus ensuring that only a validly authenticated user, with authorization, is accessing data. However, Kerberos is not supported by Kubernetes except for user authentication when using a proxy. Security happens at all layers A user who has, through whatever means, elevated their access on a Kubernetes node to full root has the ability to mount, manipulate, and/or destroy resources in many ways. It is vitally important to secure your cluster, both Kubernetes itself and the underlying host OS which Kubernetes is on top of, in the same manner you would a hypervisor management console or other critical systems. A good place to start is always the STIG (no, not that Stig) for your operating system. Conclusion Using namespaces to provide isolation between security domains, whether that be an application, a team, or something else, is “good enough” for many use cases. This is especially true when the host OS, Kubernetes, and the storage system have been configured to limit access to the storage devices and additional metadata about them. If you want ultimate protection between applications deployed to Kubernetes, having multiple clusters with dedicated resources provides the most robust separation. We know you will have more questions about things which concern you. We haven’t covered every possible scenario, and probably never will, so please use the comments below or reach out to us on our Slack team, GitHub issues, or open a support case. We’re happy to help!
https://netapp.io/2018/06/15/highly-secure-kubernetes-persistent-volumes/
CC-MAIN-2019-04
refinedweb
1,297
50.57
#include <newbie_to_debian.h> What defines a package as being 'selected'? I would think that it would mean those packages that have been selected via the dselect 'Select' stage. However, output seems to suggest that the --(get|set)-selections only print what's installed, not what's been selected. Is my understanding of this incorrect (*very* possible)? ---- ---- Ian J. Alexander email: ija@eaze.net Office: 817-557-3038 Senior Software Engineer Fax: 817-557-3468 EazeNet - Putting the 'S' in 'ISP' --== There is no spoon. ==-- ---- ---- On Wed, 2 Feb 2000, Colin Watson wrote: > ron@wep.tudelft.nl (Ron Rademaker) wrote: > >Can. > > Try dpkg --get-selections (and its inverse, dpkg --set-selections). >
https://lists.debian.org/debian-user/2000/02/msg00228.html
CC-MAIN-2014-15
refinedweb
110
60.92
Artificial neural networks are a class of supervised machine learning algorithms that are based on the biological concept of how neurons are connected in the brain. These "neurons" (or units) are usually organized into interconnected layers with one input layer, one output layer, and one or more "hidden" layers in between. Each layer consists of a set of units. Intuitively, the size of the input layer depends on the number of input features and the size of the output layer depends on the size of the desired output being calculated (e.g. a multi-class classifcation problem should output one unit for each class as a 0 or 1). The hidden layers can have any number of units contained within. Choosing the number and size of the hidden layers is up to you and ultimately impacts performance. For example, having more hidden layers enables the detection of more complex decision boundaries but comes at the price of being more computationally intensive. Similarly, more complex decision boundaries tend to have a high variance and low bias, and thus have a tendancy to overfit the training data and not generalize well to testing data. Determining the ideal number and size of the hidden layers for your specific problem will usually involve some trial and error as you find the right balance of model qualities to suit your application. Passing data into the network to receive an output is known as forward propagation. This involves applying a weight matrix and activation function to the data between each sequential layer. In contrast, a technique known as backpropagation is used to calculate the partial derivatives of a cost fucntion with respect to each of the weights. The results of backpropagation are then used to minimize the cost function using an optimzation algorithm such as gradient descent. First we'll go through the theory and then jump into some code. Let's define some terms that describe the structure of our network: $X$ is our input feature vector $[x_0, x_1, ... x_m]$ $y$ is our vector of output labels where each position in the vector represents a class $K$ is the number of elements in our output vector $L$ is the number of layers including the input and output layers $s_l$ is the number of non-bias units in layer $l$ $\Theta^{(l)}$ is a matrix of weights mapping the units of layer $l$ to layer $l+1$ of shape $s_{l+1}$ by $s_l + 1$ As data is passed from one layer to the next, it is multiplied by the corresponding weight matrix $\Theta$, and an activation function $g(z)$ is applied elementwise to obtain a vector of activation values $a^{(l)}$ for the new layer. Common activation functions include the sigmoid function and the hyperbolic tangent function. Mathematically, forward propagation is carried out like so: 1. The activation of the input layer is simply the input.$$a_i^{(1)} = x_i$$ 2. To move to the next layer, we first apply the weights to the activation of our current layer.$$z_i^{(l + 1)} = \Theta^{l}a_i^{l}$$ 3. The activation of the next layer is then obtained by applying our activation function.$$a_i^{(l)} = g(z_i^{(l)})$$ 4. This process is repeated until we reach the final layer $L$ where our hypothesis is computed as the activation of layer $L$.$$h_\Theta(x) = a^{(L)}$$ Like linear regression and logistic regression, training involves minimizing a cost function. For neural networks this is no different. The cost function we seek to minimize when training our network is shown below:$$J(\Theta) = \frac{1}{m}\sum_{i=1}^m\sum_{k=1}^K\left[-y_k^{(i)}log((h_\Theta(x^{(i)}))_k)-(1 -y_k^{(i)})log(1-(h_\Theta(x^{(i)}))_k)\right] + \frac{\lambda}{2m}\sum_{l=1}^{L-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}}(\Theta_{j,i}^{(l)})^2$$ This looks intimidating. Let's break it down into less intimidating, more intuitive components. There are basically two terms in this equation. The first half has two nested summation terms that sum the error for all training examples $m$ and for all output nodes $k$. If you are familiar with logistic regression, you might also notice that the term being summed over here is essentially the same but generalized for a neural network, which makes sense:$$\frac{1}{m}\sum_{i=1}^m\sum_{k=1}^K\left[-y_k^{(i)}log((h_\theta(x^{(i)}))_k)-(1 -y_k^{(i)})log(1-(h_\theta(x^{(i)}))_k)\right]$$ The second term is our regularization term with $\lambda$ representing our regularization parameter. This term penalizes for having exceedingly large weights and includes only the non-bias $\Theta$ terms:$$\frac{\lambda}{2m}\sum_{l=1}^{L-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}}(\Theta_{j,i}^{(l)})^2$$ Our cost function is not so intimidating now that we know what each part does. This should make sense intuitively. The total cost for a given choice of $\Theta$ accounts for the differences between the predicted labels and actual labels. It also introduces regularization so that after training our model can generalize well to new unseen data. Now, to minimize this function using an algorithm such as gradient descent, we need to know the partial derivatives of $J(\Theta)$ with respect to each of the $\Theta$ terms. In other words, we need to find:$$\frac{\partial}{\partial\Theta_{j,i}^{(l)}}J(\Theta)$$ How do we do that? Backpropagation! Backpropgation is used to calculate the partial derivatives of the cost function $J(\Theta)$ with respect to each weight. The way this is done is by taking the output of our network, stepping backwards through each layer, and calculating the "error" of each unit in each layer. Just as we represented the activation of a unit with $a_i^{(l)}$, we will similarly represent the "error" of that node using $\delta_i^{(l)}$. What this term represents is the difference between our calculated activation value and the "correct" activation value which is based on the labels in our training set. How we arrive at the equations below involves a lengthy derivation which is beyond the scope of this notebook (INSERT LINKS). In it's general form, the backpropagation algorithm goes like this: 1. Calculate the error in the output layer using the training set labels.$$\delta_i^{L} = a_i^{L} - y_i$$ 2. Step backwards through each layer to obtain the corresponding error values, where $g^\prime$ represents the derivative of your activation function (here the $.*$ represents element-wise multiplication).$$\delta_i^{(l)} = ((\Theta^{(l)})^T\delta^{l+1}) .* g^\prime(z^{(l)})$$ 3. Once you've computed the error values for all layers back to and including layer 2 (there is no error term for layer 1, the input layer). You can assemble them into a single term.$$\Delta_{ij}^{(l)} = a_j^{(l)}\delta_i^{(l+1)}$$ 4. Finally, you have your partial derivative terms, calculated across all training examples $m$. (Omit the $\Theta$ term when $j = 0$, so as to not regularize the bias).$$\frac{\partial}{\partial\Theta_{ij}^{(l)}}J(\Theta) = D_{ij}^{(l)} = \frac{1}{m}\Delta_{ij}^{(l)} + \frac{\lambda}{m}\Theta_{ij}^{(l)}$$ Now, when we go to run gradient descent, our updates to $\Theta$ will look like the following using our newly-assembled $\Delta$ term (remember to omit the regularization term in $D$ when updating the bias):$$\Theta_{ij}^{(l)} := \Theta_{ij}^{(l)} - \alpha D_{ij}^{(l)}$$ Another consideration when implementing a neural network is to properly initialize the weights. If your $\Theta$ terms are initialized to all zeros, then the network will fail to learn. A good choice is to initialize the weights to random numbers between $[-\epsilon, \epsilon]$.$$\epsilon = \frac{\sqrt{6}}{\sqrt{Loutput+Linput}}$$ Where $Loutput$ and $Linput$ represent the dimensions of the $\Theta$ term being initialized. For the purposes of this example, we will implement an artificial neural network with 3 input units (the data has two features and we include one bias unit), two hidden layers with 5 units and one bias unit, and two output units. The sigmoid function will be used as our activation function. We will use the backpropagation algorithm and gradient descent to train the network. This translates to the following: Our training data $X$ will have the shape $m$ x 2 where $m$ is the number of training examples. Similarly our training data labels $y$ will be $m$ x 2 as we have a two-class problem where outputs are either [0, 1] or [1, 0]. Our output layer will have two nodes, one for each class. As such our output vector $y$ will have $K = 2$ elements. As we have one input layer, two hidden layers, and one output layer, we have $L = 4$. For each layer $l$, the number of units will be: $s_1 = 2$ (two input features, not counting one bias unit) $s_2 = 5$ (five hidden units, not counting one bias unit) $s_3 = 5$ (five hidden units, not counting one bias unit) $s_4 = 2$ (two output classes) Thus we will have three weight matrices, $\Theta^{(1)}$ with shape 5 x 3, $\Theta^{(2)}$ with shape 5 x 6, and $\Theta^{(3)}$ with shape 2 x 6. Note their dimensions as they do not map to the bias units of the next layer. Their dimensions are $s_{l+1}$ by $s_l + 1$. import numpy as np np.random.seed(0) from sklearn import datasets import matplotlib.pyplot as plt %matplotlib inline %config InlineBackend.figure_format = 'retina' Let's set some variables and define some functions to carry out the math we've laid out above. First is our activation function $g(z)$ (the sigmoid function) and it's derivative $g^\prime(z)$. def sigmoid(z): """ Applies the sigmoid function element-wise to the input array z. """ return 1 / (1 + np.exp(-z)) def sigmoid_gradient(z): """ Applies the derivative of the sigmoid function element-wise to the input array z. """ return sigmoid(z) * (1 - sigmoid(z)) Forward propagation function that returns $z_i^{(l)}$ and $a_i^{(l)}$. We'll store them in a dict so the layer numbering in the math matches that of our code. This helps make the code easier to debug since the layer numbers ($l$) will correspond to the key values in our code. def forward_prop(X, Thetas): """ Takes input training data X and weight matrices Thetas, performs forward propagation, and returns (Z, A). The output labels are the last row in the returned matrix A. """ L = len(Thetas) + 1 m = X.shape[0] # Dicts for storing our calculated values Z = {} A = {} # Add bias column and set first layer values to be X X = np.append(np.ones([X.shape[0], 1]), X, axis=1) A[1] = X layer = A[1] # Iterate through each layer, applying weights, # the activation function, and storing values for i in range(1, L): z = layer.dot(Thetas[i].T) a = sigmoid(z) # Only add bias layer and set up next layer if there is a next layer if i != (len(Thetas)): a = np.append(np.ones([m, 1]), a, axis=1) layer = a # Store values A[i+1] = a Z[i+1] = z return Z, A A shortened version of our forward propagation to quickly output the labels rather than $z_i^{(l)}$ and $a_i^{(l)}$. def predict(X, Thetas): """ Performs forward propagation given the input data X and the network weights in Thetas. Returns class assignments for the input data. """ m = X.shape[0] L = len(Thetas) + 1 a = X for i in range(1, L): z = np.append(np.ones([m, 1]), a, axis=1) a = sigmoid(z.dot(Thetas[i].T)) z = a return np.argmax(a, axis=1) Our cost function from earlier. def calculate_cost(X, y, Thetas, lamb=0): """ Function to calculate the cost function of a neural network for given training data X and network weight parameters Thetas. """ m = X.shape[0] n_labels = np.unique(y).shape[0] # Perform forward propagation Z, A = forward_prop(X, Thetas) # Hypothesis = model output, rehshape to numpy array h = A[len(Thetas) + 1] y_mat = np.eye(n_labels)[y,:] # Calculate cost, NOT including regularization, these multiplications are element-wise J = - (1 / m) * (y_mat * np.log(h) + (1 - y_mat) * np.log(1 - h)).sum() if lamb > 0: # Calculate regluarization over non-bias theta terms reg = (lamb / (2 * m)) * sum([np.sum(np.array(Thetas[i][:,1:]) ** 2) for i in Thetas.keys()]) return J + reg else: return J The back propagation function. I'll admit it took me a bit to get this one working correctly! def backpropagation(X, y, A, Z, Thetas, lamb=0.0): """ Performs backpropagation given the training data, results from forward propagation, and weights. Regulariation is optional and omitted by default. """ # Set some useful variables L = len(Thetas) + 1 n_labels = len(np.unique(y)) m = X.shape[0] # Transform labels into an array where each class # is either a 0 or 1 in an m by 2 array y_mat = np.eye(n_labels)[y,:] # This variable is where we'll store our "error" values d = {} # Set the error of the last layer by comparing # previously calculated predictions with the actual labels d[L] = A[L] - y_mat Deltas = {} # Our error accumulator variable Theta_regs = {} # Regularization terms of the cost function derivative Theta_grads = {} # The final partial derivative terms # Step backwards through the network for i in range(1, L): if L - i > 1: # Calculate error for all but the first layer d[L-i] = d[L-i+1].dot(Thetas[L-i][:,1:]) * sigmoid_gradient(Z[L-i]) # Accumulate error terms Deltas[L-i] = d[L-i+1].T.dot(A[L-i]) # Calculate regularization terms, but ignore bias thetas # To ignore bias thetas, note the appended zeros combined with # the `Thetas[layer][:,1:]` array indexing Theta_regs[L-i] = np.append(np.zeros([Thetas[L-i].shape[0], 1]), (lamb / m) * Thetas[L-i][:,1:], axis=1) # Assemble the final partial derivatives Theta_grads[L-i] = (1 / m) * Deltas[L-i] + Theta_regs[L-i] return Theta_grads Before getting into defining our example network, here's a function we will find useful later on to visualize the decision boundary of our model. def plot_decision_boundary(X, y, Thetas, steps=1000, cmap=plt.cm.Paired): """ Function to plot the decision boundary and data points of a model. Data points are colored based on their actual label. """ # = predict(np.c_[xx.ravel(), yy.ravel()], Thetas) # Plot decision boundary in region of interest z = labels.reshape(xx.shape) fig, ax = plt.subplots() ax.contourf(xx, yy, z, cmap=plt.cm.Paired) # Get predicted labels on training data and plot train_labels = predict(X, Thetas) ax.scatter(X[:,0], X[:,1], c=y, cmap=plt.cm.Paired) return fig, ax Set some variables describing the network structure: L = 4 # There are four layers in our network s1 = 2 # Two units in layer 1 (input features), plus one bias s2 = 5 # Five units in layer 2 (hidden layer 1), plus one bias s3 = 5 # Five units in layer 3 (hidden layer 2), plus one bias s4 = 2 # Two units in layer 4 (output layer), no bias term layers = [s1, s2, s3, s4] Initialize the weights. Again we'll put them in a dict where the key is the layer that it belongs to. Thetas = {} # Initialize random random weights based on size of layers for i in range(1, len(layers)): epsilon = np.sqrt(6) / np.sqrt(layers[i] + layers[i-1] + 1) Thetas[i] = (np.random.rand(layers[i], layers[i-1] + 1) - 0.5) * epsilon for i in Thetas.keys(): print('Theta%d shape: %d by %d' % (i, Thetas[i].shape[0], Thetas[i].shape[1])) Theta1 shape: 5 by 3 Theta2 shape: 5 by 6 Theta3 shape: 2 by 6 Finally, here is the function we will use to train our model, returning the "trained" $\Theta$ matrices and the cost per iteration as we trained. def train_network(X_train, y_train, Initial_thetas, alpha=0.5, max_iter=1000): """ This function trains our network defined by the weight parameters Initial_thetas. """ m = X_train.shape[0] costs = np.zeros(max_iter) # Accumulate costs Thetas = Initial_thetas.copy() for n in range(max_iter): Z, A = forward_prop(X_train, Thetas) cost = calculate_cost(X_train, y_train, Thetas) costs[n] = cost Theta_grads = backpropagation(X_train, y_train, A, Z, Thetas) for i in range(1, len(Thetas) + 1): Thetas[i] = Thetas[i] - alpha * Theta_grads[i] return Thetas, costs X, y = datasets.make_moons(n_samples=1000, noise=0.1, random_state=0) colors = ['darkred' if label == 1 else 'lightblue' for label in y] plt.scatter(X[:,0], X[:,1], color=colors) y.shape, X.shape ((1000,), (1000, 2)) Trained_thetas, costs = train_network(X, y, Thetas, alpha=0.5, max_iter=1000) fig, ax = plt.subplots() ax.plot(costs) ax.set_ylim(0, ax.get_ylim()[1]) ax.set_ylabel('Cost') ax.set_xlabel('Iteration #'); So our cost is decreasing with each iteration of gradient descent, and seems to have converged to a minimum. These are both good things. Let's take a look at the decision boundary of our model. plot_decision_boundary(X, y, Trained_thetas); Ah, our network seems to have only drawn a straight line decision boundary. Usually, this is the behavior we would get with logisitic regression or with a neural network containing only one hidden layer. Perhaps our cost function hasn't completely converged? Let's try train our model again, but this time for a bit longer. We'll go with 5,000 iterations this time instead of only 1,000. Trained_thetas, costs = train_network(X[:500], y[:500], Thetas, alpha=0.5, max_iter=5000) fig, ax = plt.subplots() ax.plot(costs) ax.set_ylim(0, ax.get_ylim()[1]) ax.set_ylabel('Cost') ax.set_xlabel('Iteration #'); Our cost seems pretty low at the end of this extended training. Now for the decision boundary... plot_decision_boundary(X, y, Trained_thetas); Nice! Now our model has a non-linear decision boundary and our cost function is close to zero, meaning the model we have fits this training set very well. You can see that our model was able to fit data that's not linearly separable without introducing additional features. This is a key advantage of using artificial neural networks compared to simple logistic regression. If we wanted these kinds of results with logistic regression, we would probably have to introduce polynomial features (such as $x_1^n$, $x_2^n$, or $x_1x_2$) in addition to what we already have. Generally, the more hidden layers used, the more complex boundaries our network is able to construct. One topic I haven't yet touched on is cross validation, which is a method for evaluating how well a trained model will generalize to unseen data. Even though our final trained model fits our training data very well, that doesn't mean that it will perform well in classifying new data. I plan to give an example of how to cross-validate your model in a future notebook.
https://jonchar.net/notebooks/Artificial-Neural-Network/
CC-MAIN-2020-29
refinedweb
3,129
54.73
Hi there I’m experimenting with dash-flask-login ( and making some changes to it. The GitHub code has a a login page at ‘/’ and a ‘success’ page at ‘/success’, for when a valid login is entered. I want to change this so that the login page is still at ‘/’, but when a valid login is entered, the success page is loaded at ‘/’. How I’ve tried to achieve this is as follows: login.py: @app.callback(Output('urlLogin', 'pathname'), [Input('loginButton', 'n_clicks')], [State('usernameBox', 'value'), State('passwordBox', 'value')]) def sucess(n_clicks, username, password): user = User.query.filter_by(username=username).first() if user: if check_password_hash(user.password, password): login_user(user) return '/' else: pass else: pass I have changed return ‘/success’ to return ‘/’ i.e. stay on same page In index.py, I’m handling page URL routing like so: @app.callback(Output('pageContent', 'children'), [Input('url', 'pathname')]) def displayPage(pathname): if pathname == ‘/’: if current_user.is_authenticated: return home.layout else: return login.layout Where home.layout is my ‘success’ layout. However, this isn’t working as expected. When a valid login is entered, the page just doesn’t change/refresh and it just stays like this: But if I refresh the page manually, I then see home.layout as I am now logged in. Is my callback in login.py correct? It’s like I need to trigger a page refresh rather than returning ‘/’ (which the Url is at anyway) - is that possible?
https://community.plotly.com/t/how-to-trigger-a-page-refresh-modifying-dash-flask-login/18819
CC-MAIN-2022-21
refinedweb
242
60.61
Why is our source code so boring? Andrew (he/him) ・2 min read Last year, I was privileged enough to see Peter Hilton give a presentation about source code typography at geeCON 2018, and the main points of that talk still roll around in my head pretty regularly. In a nutshell: source code is boring Most developers write ASCII source code into plain text files using a fixed-width font, often trying to keep line length under 80 characters out of habit or "best practices": public class MyClass { public static void main (String[] args) { System.out.println("oh god it's so boring"); } } Fixed-width fonts and 80-column limits are a throwback to the IBM 80-column punch card which dominated the industry for decades until it was slowly phased out in the latter half of the twentieth century: When colour terminals became popular, we could add things like syntax highlighting: public class MyClass { public static void main (String[] args) { System.out.println("boring, but in colour"); } } And what Hilton calls "the most recent innovation in source code", ligatures, which burst onto the source code scene around 2012, have been in common use in print typography for hundreds of years: This begs the question... Why is source code stuck in the past? In a best-case scenario, modern users will define different fonts or styles for different keywords in VS Code in order to construct some kind of visual hierarchy within their programs: ...but we're not reaching the full potential of what's possible. Why not drag-and-drop modules? Why not AI-powered automatic code generation? Why not graphical coding like Scratch or Flowgorithm? Why has there not been a single revolution in the way source code is written in nearly a hundred years? Thoughts on legacy code, diversity and inclusion TLDR; If you're still dropping snarky commentaries about PHP on Twitter, just grow up already. Mine is not boring. It contains all kinds of known and unknown bugs. A lot to research by entomologists ... ;) Emojis are entirely valid in Ruby, and I think they kind of fit the paradigm as well. A lot of Ruby methods have ?as the last character to indicate it's a question. Such as my_array.empty?. I could see my_array.🤔being an intuitive way to inspect an array. Defining this method is easy as 🥧 Even making it a method of array, such as above, is a straightforward monkey patch. There are gotchas in terms of emoji unicode rendering across operating systems, but there is something to this. Emojis are a big part of our text-based communication these days, so why not in the source code? P.S. I find it interesting how Erlang uses very english-esque punctuation techniques... I see your point, but it's still just different characters in a text file. It's still the same medium. Computer programs carry more information today than any other form of communication in human history, but they've always been (with few exceptions) text files. Compare that to the information carried through artistic media. You can paint, sculpt, sketch, photoshop, or screen print and they're all just different kinds of visual artistic expression. Why is there only a single dominant form of programmatic expression? Is it due to the exact nature of programs? That we need to tell machines precisely what it is we want them to do? Could we write a programming language where we only describe the jist of what it is we want to accomplish and let the interpreter figure out the rest? You mean like SQL? 😄 The closest conversation we have here is "declarative vs imperative". In frontend world, ReactJS came around to beat the drum claiming its declarative nature, which is fairly true in that you define possible end states and let the program figure out how to make the right changes. It was a pretty big deal, but yeah, not that transformative. I think text-based characters are just so damn useful for telling the computer what we want to do. People are so damn good at typing in general, the draggable modules thing is really hard to draw new power from. It seems like the best tooling in software is augmentative rather than replacing. Linters and autocomplete seems like the kind of intelligence with the most potential to build on top of, and it's generally progressive enhancement allowing you to go all the way down to a text file if you need to. GUIs that compile to code tend to result in gobbledigook that makes it hard to go in both directions. Apple has still been trying stuff like this for a while and the latest iteration might be more of a happy medium... developer.apple.com/xcode/swiftui/ I want to think there is a big leap we can make that is entirely outside of the normal way we code, but I just don't think it's feasible to leap right there. I think it's a one step at a time thing that takes decades and decades, because big leaps can just lack so much edge case coverage we care about. I think art is not the place to be looking for inspiration -- programming languages, while quite restricted in scope, are languages, and in thousands of years we've only come up with so many modes of linguistic expression. It's pretty much just speech and writing, and writing is clearly the superior of the two for this kind of purpose. Although it's interesting to consider programming a computer by means of tying quipu... A quiputer. 😉 I suppose that's true re: writing. But surely there's at least a better way to communicate these ideas. Sometimes I find myself looking at the array of languages and paradigms available and thinking "that's it?" But then again, the book was invented a few hundred years ago and that's still going strong. Maybe people will still be writing FORTRAN in 2520. I do most of my book reading through audiobook these days. I wonder if a programming language optimized for audio consumption that can be effectively reviewed through one's ears. idea.🤔 Morgan Freeman reading LISP sounds terrible and soothing at the same time. "Open parenthesis, open parenthesis, open parenthesis..." And, even in writing, it's pretty much just been glyphs and script ...and it's really only recently that we've sorta started to handle other-than-ASCII something resembling "well". Maybe not FORTRAN, but definitely COBOL. :p I think you hit the nail on the head with Programming is really a layer of abstraction above a very specialized area of mathematics that is made physical in the circuitry of the computer. While computers can be used to create artistic and visual expressions, using those expressions to program computers won't be as effective. Mathematics has evolved in many ways since its start in ancient times, but it still has a defined, formal language that is used to describe mathematical ideas. As programmers, we're still doing math, albeit at a very different layer of abstraction. It makes sense that a system (programming) built on one that is defined by self-consistency and specificity (mathematics) would also use many similar ideas in its language. Forms of communication such as paintings, sculptures, drawings, and even natural language are fraught with ambiguities, something that you definitely don't want when you're telling a non-human thing what to do. Those expressions are great at certain things, but carrying specific, non-ambiguous meaning isn't one of them. We can't even get humans to agree on the meanings behind these expressions – imagine trying to get computers to understand them! Part of the reason behind the computing revolution was the need for machines that could be told specifically what to do, and then do that task the same way every single time. Ask 100 people what a sculpture means, and you'll very often get closer to 100 different answers than 1 single answer. By and large, I wouldn't want a computer to have 100 different answers to problems that I present to it! Great conversation starter! It's definitely intriguing to think about what the next big revolution in computing will be 😃 Line length limits are to keep things easy to follow. Most sane people these days go to 120 or so, not 80 (possibly more), but the concept is more that you're still able to fit the whole line on a single line in your editor, because wrapped lines are hard to follow and horizontal scroll-off makes it easy to miss things. Those aspects haven't really changed since punch cards died off. Some languages are more entrenched in this (some dialects of Forth for example), but it's generically useful no matter where you are, and has stuck around for reasons other than just punch cards. Monospaced fonts are similar, they got used because that's all you had originally (little to do with punch cards there, the earliest text termiansl were character cell designs, partly because of punch cards but also partly because it was just easier to interface with them that way (you didn't need any kind of font handling, and text wrapping was 100% predictable). These days they're still used because some people still code in text only environments, but also because it makes sure that everything lines up in a way it's easy to follow. Proportional fonts have especially narrow spaces in many cases, which makes indentation difficult to follow at times, and the ability to line up a sequence of expressions seriously helps readability. As far as graphical coding environments, the issue there is efficiency. Information density is important, and graphical coding environments are almost always lower information density than textual source code (this, coincidentally, is part of why they're useful for teaching). I can write a program in Scratch and in Python, and the Python code will take up less than half the space on-screen that the Scratch code will. On top of that, it often gets difficult to construct very large programs (read as 'commercially and industrially important programs') in graphical languages and keep track of them, both because of that low information density, and because the flow control gets unwieldy in many cases. As for ligatures, it's hit or miss whether they realistically help. I don't personally like them myself, they often screw with the perceptual alignment of the code (because they often have wider spacing on the sides than the equivalent constructs without ligatures, and it's not unusual for different 2 or 3 character ligatures to have different spacing as well) and they make it easier to accidentally misread code (for example ==versus ===without surrounding code to compare length against). I'm not particularly fond of using fonts to encode keyword differences for a lot of the same reasons as I don't like ligatures. It's also hard to find fonts that are distinctive enough relative to each other but still clearly readable (that sample picture fails the second requirement, even if it is otherwise a good demonstration). You run into the same types of problems though when you start looking at custom symbols instead of plain ASCII text. APL has issues with this because of it's excessive use of symbols, but I run into issues with this just using non ASCII glyphs in string literals on a regular basis (if most people have to ask 'How do I type that on a keyboard?', you shouldn't be using it in your code). The funny thing is, APL probably has a higher "information density" than just about any other programming language that has ever been created, and it was one of the very first languages. But people don't like entirely symbolic code, it seems. We want something in-between a natural, written language and just strings of symbols. Will we be stuck with ASCII forever? There's a 'sweet spot' for information density in written languages (natural or otherwise). Too high, and the learning curve is too high for it to be useful. Too low, and it quickly becomes unwieldy to actually use it. You'll notice if you look at linguistic history that it's not unusual for logographic, ideographic, and pictographic writing systems to evolve towards segmental systems over time (for example, Egyptian Hieroglyphs eventually being replaced with Coptic, or the Classical Yi script givning way to a syllabary), and the same principle is at work there. Most modern textual programming languages are right about there right now. There's some variance one way or the other for some linguistic constructs, but they're largely pretty consistent once you normalize symbology (that is, ignoring variance due to different choices of keywords or operators for the same semantic meaning). The problem with natural language usage for this though is a bit different. The information density is right around that sweet spot, and there's even a rather good amount of erasure coding built in, but it's quite simply not precise enough for most coding usage. I mean, if we wanted to all start speaking Lojban (never going to happen), or possibly Esperanto (still unlikely, but much more realistic than Lojban), maybe we could use 'natural' language to code, but even then it's a stretch. There's quite simply no room in programming for things like metaphors or analogies, and stuff like hyperbole or sarcasm could be seriously dangerous if not properly inferred by a parser (hell, that's even true of regular communication). As far as ASCII, it's largely practicality any more. Almost any sane modern language supports Unicode in the source itself (not just literals), and I've even personally seen some stuff using extended Latin (stuff like å, ü, or é), Greek, or Cyrillic characters in stuff like variable names. The problem with that is that you have to rely on everyone else who might be using the code to have appropriate fonts to be able to read it correctly, as well as contending with multi-byte encodings when you do any kind of text processing on it. It's just so much simpler to use plain ASCII, which works everywhere without issue (provided you don't have to interface with old IBM mainframes). How could you not love EBCDIC? :p Humans are funny critters. There was a recentishly-published study about information-density in various human languages. Basically, whether the language was oriented towards a lower or higher number of phonemes-per-minute, the amount of information conveyed across a given time-span was nearly the same. One of the values of ASCII vice even Unicode is the greater degree of distinctness to the available tokens. I mean, adding support for Unicode in DNS has been a boon for phishers. Further, the simplicity of ASCII means I have fewer letters that I need to pay close attention to. Also means fewer keys I have to create finger-memory for when I want to type at a high speed ...returning us to the phonemes-per-minute versus effective information-density question. I would argue that there has. To me, "source code" itself really isn't a thing. At least, the typography of ASCII-text code is not the important thing in the way that typography on a website or a book or a magazine article is. For that other media, the text is the end-result of the work, and the way it is presented is, in itself, part of the art of the whole piece. But with code, the final presentation, the end-goal, is the execution of the source code and not the text of the source code itself. Text, as presented in a source file, is simply a medium for writing executable programs. Thus, typography does not improve the experience of coding in the same way that it does for reading a book or an article. In fact, usage of ligatures, fancy fonts, emojis, etc. in code can often detract from the experience of coding because it obscures what is actually underneath. A program, when compiled and executed, does not understand the ligature for ⇒. It understands =>. So while ligatures can help improve comprehension of prose materials, they can actually hinder the comprehension of code for someone who is not intimately familiar with the ASCII characters that are actually being interpreted. But source code is not stuck in the past. We just have different mechanisms other than typography to improve comprehension of our code. Things like editor autocomplete, static code analysis, and yes, syntax highlighting, all help to improve our comprehension of the code as a parallel to the way typography improves comprehension for prose. Keep in mind that typography isn't important for its own sake: it is important because it helps our human minds interpret text faster and more accurately. Code is interpreted and understood differently in our brains; namely, we do not read code strictly top-to-bottom and left-to-right. We understand it through its relationships with other parts of code. Thus, we can't simply expect the same things that worked in prose to help in the same way for code. This goes even as far as why most programmers prefer monospace fonts, because the relationships of individual characters in their columns are important, while it is not in prose. So while typography has not changed much for source code, I would argue that is it because typography fundamentally isn't as helpful. There are other ways to help programmers that are better than just improving the visual display of the text in an editor. As for why we don't see more things like graphical programming languages, I could draw a parallel to why you see many more people writing blog posts and books than you see people making videos or photo-essays. It's simply easier and faster for humans to compose our thoughts through text than through other mediums. Consider how many thousands of years humanity has been writing, compared to just decades of alternative media even existing, and it's easy to understand why our brains are adapted to prefer text-based media. Fair enough, the idea of "information density" was discussed above and it makes sense that a flow chart carries less information per unit area of screen space than some equivalent Python code, for instance. And yet we still, generally, arrange code into libraries, packages, and files. If the "atom" of programming is the method / function / routine (maybe variables are the subatomic particles?), why don't we have a better way of visualising and modifying the interrelationships between those atoms? Surely, blocks of code arranged sequentially in a file is not the best representation of those relationships. I can see the adoption of component-based programming as an iteration on this very concept (emergence of React and Vue; the change from Angular 1 to 2+; SwiftUI on iOS and Jetpack Compose on Android; even GraphQL; etc). Take what was previously a big jumble of code, and break them down into components that carry significantly more semantic and contextual meaning than just normal functions. At their core, they're not so different from moving code from one file to another, or from one function to another, but conceptually it really is a big leap forward. Encapsulating/passing data through a tree of components, rather than a system of functions, makes it easier to understand the relationships among them all. These relationships are still expressed through text, but it is objectively better-organized and helps to make the relationships explicit. It feels like the "atom" has taken a big step up from just functions and classes. Actually we have all that stuff already: Furthermore I believe that our mainstream programming languages have made huge progress in the last decade. Think of Java for example. It was designed when most computers had a single core CPU, so there were no nice abstractions for parallelism. You had to spawn threads manually. But now look at this beauty: All the complicated stuff is hidden behind the scenes. And it's so easy to see what the code is doing, it's as if it's speaking to you. Or think of JavaScript and how beautiful they've hidden the complexity of asynchronous processing behind async/await. A picture might be worth a thousand words, but pictures are hard to diff. I prefer modern programming language constructs, which make it possible that 20 words are worth a thousand words. Uh, there has been, many people keep attempting this and they keep failing because nothing anyone has come up with yet beats text. There have been countless attempts at graphical programming, they all fail. Why, because nothing is as flexible as language, language is the best medium to communicate intricate details of operations. People have tried to evolve language as well, see Subtext programming language. Frankly, I abhor the idea of gist-y programming. That's too in the ballpark of "figure out what I mean". Humans that speak the same native language and come from similar socio-economic backgrounds have a hard enough time understanding each other. The more divergent we are on communication starting-points we are, the worse it gets (e.g., how many times do you have to resort to transliteration to try to convey linguistic constructs that don't have true analogues across two languages?). The prospect of getting something so alien as a machine to "understand" a human's vaguely-conveyed intent specified in an arbitrary language? Don't get me wrong, I'm not arguing for going back to the bashing out pure-binary or even assembler …but, precision counts. Personally, what constitutes "boring" is less hewing to conventions like 80-columns of ASCII than the seeming loss of humorous code-comments, error responses and other easter eggs. I'm also old enough to vaguely remember the days when Sendmail would crap-out with a "giving up the ghost" message and it has to have been nearly a decade since I've had to deal with "martian source" networking problems. Drag'n'drop is used by Eclipse and Intellij IDEA for refactoring (moving classes and packages). Graphical coding is not productive. There were a number of IDE's called "Visual Age" by IBM. As far back as the mid of 90's. As tools they were as visual as possible, most coding looked like connecting pieces (UI components) with arrows using mouse. Hardly you can find comparable level of visuality in modern widely used tools. Because it appeared that creating apps this way is slow and distracting. Programming languages are, well, languages. Most languages are more convenient when used in spoken or written form. Computer languages designed with written form in mind. That's why coding is so text oriented. Frederick P. Brooks saw it coming —Essence and Accident in Software Engineering worrydream.com/refs/Brooks-NoSilve... I kinda disagree here. A hundred years ago there was no source code. What started as punching holes through paper eventually became digital, then has gone through probably a thousand permutations to end up in the editors we use today. How is that not considered "a single revolution" ??? It may not be amazing as the ideas in your head, but dude, A LOT has changed and improved over the last 3 decades alone. Side note: IDEs with drag and drop reusable components would be nice, but frankly i'd settle for better performances in the existing IDEs first. And seriously why do none of them come with SSH or SFTP built-in? I get serverless, but a large number of us still work directly with servers too. I quite prefer source code to be 'boring', since all-night bug hunts and major issues in production are what you get when code is not boring. There's a reason why 'may you live in interesting times' is meant as a curse rather than a blessing 😛😄.
https://dev.to/awwsmm/why-is-our-source-code-so-boring-27ld
CC-MAIN-2019-47
refinedweb
4,030
60.14
Previously Bram Moolenaar wrote: > Can you explain your motivation for chosing this solution? > > Where to create the build directory is not an obvious choice: > - In the directory of the recipe that is executed > - In the directory of the source file > - In the toplevel directory of the project - have it configurable I might have the sources on a read-only medium and build elsewhere. Wichert. -- Wichert Akkerman <wichert@...> A random hacker George Bronnikov wrote: > Here's a patch that makes BDIRs reside inside every subdirectory. Not > tested extensively, but at least it passes the internal tests and a couple > of my own. Can you explain your motivation for chosing this solution? Where to create the build directory is not an obvious choice: - In the directory of the recipe that is executed - In the directory of the source file - In the toplevel directory of the project I think we need to make a design decision out of this. That requires looking into the arguments for each alternative. -- hundred-and-one symptoms of being an internet addict: 157. You fum through a magazine, you first check to see if it has a web address. /// Bram Moolenaar -- Bram@... -- \\\ /// Creator of Vim - Vi IMproved -- \\\ \\\ Project leader for A-A-P -- /// \\\ Help AIDS victims, buy at Amazon -- /// Hello, Here's a patch that makes BDIRs reside inside every subdirectory. Not tested extensively, but at least it passes the internal tests and a couple of my own. Goga Index: DoBuild.py =================================================================== RCS file: /cvsroot/a-a-p/Exec/DoBuild.py,v retrieving revision 1.52 diff -u -r1.52 DoBuild.py --- DoBuild.py 23 Jan 2003 16:59:23 -0000 1.52 +++ DoBuild.py 20 Mar 2003 20:11:20 -0000 @@ -112,7 +112,7 @@ def get_bdir(recdict): """Get the value of $BDIR as an absolute path and ending in '/'.""" - bd = os.path.abspath(get_var_val(0, recdict, "BDIR")) + bd = get_var_val(0, recdict, "BDIR") if bd[-1] != '/' and bd[-1] != '\\': bd = bd + os.sep return bd @@ -128,18 +128,24 @@ bd = get_bdir(recdict) bd = fname_fold(bd) + if bd[-1] == '/': + bd = bd[:-1] + + # Rip out the part that should be $BDIR -- the penultimate component of the pathname. + dname, lname = os.path.split (name) + dname, bname = os.path.split (dname) + # Compare with $BDIR and then remove each "-variant" part. The last - # compare will be against "build/". + # compare will be against "build". while 1: - i = len(bd) - if len(name) > i and name[:i] == bd: - if remove: - return name[i:] - return bd + if bname == bd: + if remove: + return os.path.join (dname, lname) + return bd + os.sep i = string.rfind(bd, '-') if i < 0: break - bd = bd[:i] + os.sep + bd = bd[:i] return None Index: Util.py =================================================================== RCS file: /cvsroot/a-a-p/Exec/Util.py,v retrieving revision 1.48 diff -u -r1.48 Util.py --- Util.py 10 Mar 2003 19:06:50 -0000 1.48 +++ Util.py 20 Mar 2003 20:11:22 -0000 @@ -618,7 +618,9 @@ bdir = node.attributes.get("var_OBJSUF") else: objsuf = get_var_val(0, recdict, "OBJSUF") - name = os.path.join(bdir, name) + + dname, bname = os.path.split(name) + name = os.path.join(dname, bdir, bname) i = string.rfind(name, '.') if i > 0 and i > string.rfind(name, '/') and i > string.rfind(name, '\\'):
https://sourceforge.net/p/a-a-p/mailman/a-a-p-develop/?viewmonth=200303&viewday=20
CC-MAIN-2017-51
refinedweb
538
69.48
I: We fixed the start-up settings on the VME crate to look for a TCS startup file on fb0. The settings on the Baja 4700 are now: On the advice of Ben Abbott, I've ordered the Diamond Systems Athena II computer w/DAQ, as well as an I/O board, solid state disk and housing for it. The delivery time is 4-6 weeks. Diamond Systems Athena II The EDT PCIe4 DV C-Link frame grabber arrived this morning. There is a CD of drivers and software with it that I'll back up to the wiki or 40m svn sometime soon. *" I saw this and tried it when i was installing, but I had more flexibility when I copied the files directly to the hard drive. following line to ~/.bashrc export PYTHONPATH=/home/controls/scripts/modules:/usr/local/lib/python #!/usr/bin/python # Import the Dalsa1M60 packzge import Dalsa1M60, subprocess # define the serial command location serial_cmd_location = '/opt/EDTpdv/serial_cmd' # start a loop that continually gets the temperatures getTemperatures = 1 #!/usr/bin/python #. #!/usr/bin/python # Here's the error: Traceback (most recent call last): Given below is a brief overview of calculating rms of spot position changes to test the accuracy/precision of the centroiding code. Centroids are obtained by summing over the array of size 30 by 30 around peak pixels, as opposed to the old method of using matlab built-in functions only. Still peak pixel positions were obtained by using builtin matlab function. Plese see the code detect_peaks_bygrid.m for bit more details. My apologies for codes being well modularised and bit messy... Please unzip the attached file to find the matlab codes. The rest of this log is mainly put together by Kathryn. (EDIT/PS) The attached codes were run with raw image data saved on the hard disk, but it should be relatively easy to edit the script to use images acquired real time. We are yet to play with real-time images, and still operating under Windows XP... --- When calculating the rms, the code outputs the results of two different methods. The "old" method is using the built-in matlab method while the "new" method is one Won constructed and seems to give a result that is closer to the expected value. In calculating and plotting the rms, the following codes were used: - centroid_statics_raw_bygrid.m (main script run to do the analysis) - process_raw.m (takes raw image data and converts them into 2D array) - detect_peaks_bygrid.m (returns centroids obtained by old and new methods) - shuffle.m (used to shuffle the images before averaging) The reference image frame was obtained by averaging 4000 image frames, the test image frames were obtained by averaging 1, 2, 5, 10 ... 500, 1000 frames respectively, from the remaining 1000 images. In order to convert rms values in units of pixels to wavefront aberration, do the following: aberration = rms * pixel_width * hole_spacing / lever_arm pixel_width: 12 micrometer hole_spacing: about 37*12 micrometer lever_arm: 0.01 meter rms of 0.00018 roughly corresponds to lambda over 10000. Note: In order to get smaller rms values the images had to be shuffled before taking averages. By setting shuffle_array (in centroid_statics_raw_bygrid.m) to be false one can turn off the image array shuffling. N_av rms 1 0.004018866673087 2 0.002724680286563 5 0.002319477846009 10 0.001230553835673 20 0.000767638027270 50 0.000432681002432 100 0.000427139665006 200 0.000270955332752 500 0.000226521040455 1000 0.000153760240692 fitted_slope = -0.481436501422376 Here are some plots: --- Next logs will be about centroid testing with simulated images, and wavefront changes due to the change in the camera temperature! (PS) I uploaded the same figure twice by accident, and the site does not let me remove a copy!... Happy Fourth of July! The following is a brief overview of how we are analyzing the wavefront aberration and includes the aberration parameters calculated for 9 different temperature differences. So far we are still seeing the cylindrical power even after removing the tape/glue on the Hartmann plate. Attached are the relevant matlab codes and a couple of plots of the wavefront aberration. We took pictures when the camera was in equilibrium at room temperature and then at each degree increase in camera temperature as we heated the room using the air conditioner. For each degree increase in camera temperature, we compared the spot positions at the increased temperature to the spot positions at room temperature. We used the following codes to generate the aberration parameters and make plots of the wavefront aberration: -build_M.m (builds 8 by 8 matrix M from centroid displacements) -wf_aberration_temperature_bygrid.m (main script) -wf_from_parms.m (generates 2D aberration array from aberation parameters) -intgrad2.m (generates 2D aberration array from an interpolated array of centroid displacements) In order to perform the "inverse gradient" method to obtain contours, we first interpolated the centroid displacement vectors to generate a square array. As this array has some NaN (not a number) values, we cropped out some outer region of the array and used array values from (200,200) to (800,800). Sorry we forgot to put that part of the code in wf_aberration_temperature_bygrid.m. The main script wf_aberration_temperature_bygrid.m needs to be revised so that the sign conventions are less confusing... We will update the code later. The initial and final temperature values are as follows: Aberration parameters: 1) Comparing high temp (+10) with room temp p: 1.888906773203923e-004 al: -0.295042766811686 phi: 0.195737681653530 c: -0.001591869846958 s: -0.003826146141562 b: 0.098283157674967 be: -0.376038636781319 a: 5.967617809296910 2) Comparing +9 with room temp p: 1.629083055002727e-004 al: -0.222506109890745 phi: 0.193334452094940 c: -0.001548838746542 s: -0.003404217451916 b: 0.091368295953142 be: -0.351830698303612 a: 5.764068008962653 3) Comparing +8 with room temp p: 1.485283322069376e-004 al: -0.212605187544093 phi: 0.206716196097728 c: -0.001425962488852 s: -0.003148796701331 b: 0.089936286297599 be: -0.363538909377296 a: 5.546514425485094 4) Comparing +7 with room temp p: 1.284124028380585e-004 al: -0.163672705473379 phi: 0.229219952949728 c: -0.001452457146947 s: -0.002807207555944 b: 0.084090100490331 be: -0.379195428095102 a: 5.289173743478881 5) Comparing +6 with room temp p: 1.141756950753851e-004 al: -0.149439038317734 phi: 0.240503450300707 c: -0.001350015836130 s: -0.002529240946848 b: 0.078118977034120 be: -0.326704416216547 a: 4.847406652448727 6) Comparing +5 with room temp p: 8.833496828581757e-005 al: -0.071871278822766 phi: 0.263210114512376 c: -0.001257787180513 s: -0.002095618522105 b: 0.069587080420443 be: -0.335912998511077 a: 4.542557551218057 7) Comparing +4 with room temp p: 6.217428324604411e-005 al: 0.019965235199575 phi: 0.250991433584904 c: -0.001266061216964 s: -0.001568527823273 b: 0.058323732750548 be: -0.289315790283207 a: 3.957825468583509 8) Comparing +3 with room temp p: 4.781068895714900e-005 al: 0.140720713391208 phi: 0.270865276786418 c: -0.001228146894728 s: -0.001371110045136 b: 0.052794990899554 be: -0.273968130963666 a: 3.591187350052610 9) Comparing +2 with room temp p: 2.491163442408281e-005 al: 0.495136135872766 phi: 0.220727346409557 c: -9.897729773516012e-004 s: -0.001076008621974 b: 0.048467660428427 be: -0.280879088681660 a: 3.315430577872808 10) Comparing +1 with room temp p: 8.160828332639811e-006 al: 1.368853902659128 phi: 0.116300954280238 c: -6.149390553733007e-004 s: -3.621216621887707e-004 b: 0.025454969698557 be: -0.242584267252882 a: 1.809039775332749 The first plot is of the wavefront aberration obtained by integrating the gradient of the aberration and the second plot fits the aberration according to the aberration parameters so is smoother since it is an approximation. contents of tcs_daq: /target/TCS_westbridge.db I plotted a histogram of the total intensity of the Hartmann sensor when illuminated and found that the 128 count problem extends all the way up through the distribution. This isn't unreasonable since that digitizer is going to be called on mutliple times. First things first, the value of 128 equals a 1 in the 8th digitizer, so for a 16-bit number in binary, it looks like this: 0000 0000 1000 0000 and in hex-code 080 0000 0000 1000 0000 080 The values of the peaks in the attached distribution are as follows: 128 080 180 280 380 480 580 680 780 880 980 A80 B80 C80 I. I've attached an image that shows the locations of those pixels that record a number of counts = (2*n-1)*128. The image is the sum of 200 binary images where pixels are given values of 1 if their number of counts = (2*n-1)*128 and 0 otherwise. The excess of counts is clearly coming from the left hand tap. This is good news because the two taps have independent ADCs and it suggests that it is only a malfunctioning ADC on the LHS that is giving us this problem.. I set up the SLED to test its long term performance. The test began, after a couple of false starts, around 9:15AM this morning. The output of the fiber-optic patch cord attached to the SLED is illuminating a photo-detector. The zero-level on the PD was 72.7mV (with the lights on). Once the PD was turned on the output was ~5.50 +/- 0.01V. This is with roughly 900uW exiting the SLED. The instructions from Superlum suggest limiting the amount of power coupled back into the fiber to less than 3%. With the current setup, the fiber is approximately 2" from the photodetector. What is the power coupled back into the fiber? Assume a worst case of 100% of the light reflected from the PD, the wavelength is 830nm and a waist size of about 6um radius at the output of the fiber. The beam size at 4" (from the fiber output to the PD and back again) or ~100mm from the fiber is about 4.4mm radius. Therefore about (6um/4.4mm)^2 or ~2ppm will be coupled back into the fiber. This is sufficiently small. The attached plots from dataviewer show measurements from the SLED (on-board photodetector, on-board temperature sensor, current setpoint, current limit, current to diode) over the last 15 hours. Restarted the HWS EPICS channels on hartmann with the following command: /cvs/opt/epics-3.14.10-RC2-i386/base/bin/linux-x86/softIoc -S HWS.cmd & I added an alias HWSIoc to controls which can be used to start the HWS EPICS softIoc. HWSIoc controls alias HWSIoc='/cvs/cds/caltech/target/softIoc/startHWSIOC.sh' #!/bin/bash cd /cvs/cds/caltech/target/softIoc /cvs/opt/epics-3.14.10-RC2-i386/base/bin/linux-x86/softIoc -S /cvs/cds/caltech/ target/softIoc/HWS.cmd & cd - Below is a table summarizing the results of recent thermal defocus experiments. The values are the calculated change in measured defocus per unit temperature change of the sensor: More detail on these experiments will be available in my second progress report, which will be uploaded to the LIGO DCC by next Monday. The main purpose of this particular eLog is to summarize what functions I wrote and used to do this data analysis, and how I used them. All relevant code which is referenced here can be found on the SVN; I uploaded my most recent versions earlier today. Here is a flowchart summarizing the three master functions which were specifically addressed for each experiment: py4plot.m is probably the most complicated of these three functions, in terms of the amount of data analysis done, so here's a flowchart which shows how the function works and the main subfunctions that it addresses: Also, here is a step-by-step example of how these functions might be used during a particular experiment: (1)Suppose that I have an experiment which I have named "73010a", in which I wish to take 40 images of 200 sums. I would open the code for framesumexport2.py and change lines 7, 8 and 17 to read: 7 LoopN=40 8 SumN=200 17 mainoutbase="73010a" And I would then save the changes. I would double-check that the output basename had indeed been changed to 73010a (it will overwrite existing data files, if you forget to change the basename before running it). I would then let the script run (changing the set temperature of the lab after the first summed image was taken). Note that the total duration of the measurement is a function of how many images are summed and how many summed images are taken (in this example, if I was taking each single image at a rate of 11Hz, data collection would take ~20 seconds and data processing (summing the images) would take ~4 minutes (on the order of ~1 second per image in the sum) (the script isn't very quick at summing images, obviously). EDIT(7/30 3:40pm): I just updated framesumexport2.py so that the program prompts you for this information. I also changed enabled execute permissions on the copy of the code on the Hartmann machine located in /users/jkunert/, so going to that directory and typing ./framesumexport2.py then inputting the information when prompted is all you need to do now. No need to go change around the actual code every time, any more. (2)Once data collection had ceased entirely, I would open MATLAB and enter the following: [x,y,dx,dy,time,M,centroids]=pyanalyze_uni('73010a',40); The function would then look for 73010a.raw and 73010a.txt in ./opt/EDTpdv/ and import the 40 images individually and centroid them. The x and y outputs are the centroid locations. If, for example, 952 centroids were located, x and y would be 952x1x40 arrays. M would be a 40x4 array of the form: [time_before_img_taken time_after_img_taken digitizer_temp sensor_temp] (3)Once MATLAB had finished the previous function, I would input: tG=struct; py4plot('73010a',0,39,x,y,'73010a','200',[1 952],2,tG) The inputs are, respectively: (1)python output basename, (2)first image to analyze (where the first image is image 0), (3)last image to analyze, (4)x data (or, rather, data to analyze. to analyze y instead, just flip around "x" and "y" in the input), (5)y data (or, if you want to analyze the y-direction, "x" would be the entry here), (6)experiment name, (7)number of sums in each image (as a string), (8)range of centroids to include in analysis (if you have 952 centroids, for example, and no ridiculous noise at the edges of the CCD, then [1 952] would be the best entry here), (9)outlier tolerance (number of standard deviations from initial fit line that a datapoint must be within to be included in the second line fitting, in the dx vs x plot), (10)exponential fitting structure (input an empty structure unless the temperature/time exponential fit turns out poorly, in which case a better fit parameter guess can be inputted as field tG.guess). [INCOMPLETE ENTRY] We set up the Hartmann sensor and illuminated it with the output from the fiber-coupled SLED placed about 1m away. The whole arrangement was covered with a box to block out ambient light. The exposure time on the Hartmann sensor was adjusted so that the maximum number of counts in a pixel was about 95% of the saturation level. We recorded a set of 5000 images to file and analyzed them using the Caltech and Adelaide centroiding codes. The results are shown below. Basically, we see the same deviation from ideal improvement that is observed at Adelaide. I: Appended below is the step by step procedure that I used to install and use the frame grabber SDK. Note that the installation process was a lot simpler with the SDK version 4.2.4.3 than the previous version. Lines starting with ":" are my inputs and with ">" the computer outputs. I tried to put this into elog but the web page says the laser password is wrong so I could not. Won --- 0. Turn on or restart the computer. For installation of the frame grabber SDK, go to step 1. If using the existing installation go to step 5. 1. Copy the script EDTpdv_lnx_4.2.4.3.run to my home folder. 2. Ensure that the script is executable. : chmod +x EDTpdv_lnx_4.2.4.3.run 3. Run the script. : sudo ./EDTpdv_lnx_4.2.4.3.run 4. After entering the root password, the script asks for the installation directory. Default is /opt/EDTpdv, to which I type 'y'. The script then runs, printing out a massive log. This completes the installation process. 5. Move to the directory in which the SDK is installed. : cd /opt/EDTpdv 6. Initialise the camera by loading a camera configuration file dalasa_1m60.cfg located in the camera_config folder. : ./initcam -f camera_config/dalsa_1m60.cfg Which will output the message (if successful) opening pdv unit 0.... done 7. Take an image frame. : ./take -f ~/matlab/images/test.raw which will save the raw file as specified above and generate following message on the terminal: reading image from Dalsa 1M60 12 bit dual channel camera link width 1024 height 1024 depth 12 total bytes 2097152 writing 1024x1024x12 raw file to /home/won/matlab/images/test.raw (actual size 2097152) 1 images 0 timeouts 0 overruns Whether the image taken was valid or not, I followed the exactly same procedure. In step 7, when the image was not valid, the message after executing the take command said "1 timeoutouts", and when the image was valid I got "0 timeouts". You will also get "1 timeouts" if you turn off the camera and execute the take command. So at least I know that when an image taken was not valid it is due to the frame grabber failing to obtain the image from the camera. Attached]
http://nodus.ligo.caltech.edu:8080/TCS_Lab/page1?attach=1&sort=Type
CC-MAIN-2022-27
refinedweb
2,944
68.36
Korg Triton Studio Format Name: Triton Studio Company: Korg Extensions: .kmp, .ksc, .pcg Description: To order Triton Studio Sounds , choose the 'Triton' option under 'Format' and the 'Shipping' Zip Disk ($8), CD-ROM ($2), Floppy Disks ($8) or Download Option (instant download) when checking out. Triton sounds ship in .kmp, .ksc, or .pcg files depending on the sound kit. GN Triton Sounds will load in the Triton Classic, Triton LE (need the sampler option installed), Triton Extreme, and the Triton TR (need the sampler option installed). Sound kits that feature 128 sounds will typically contain 2 files: Drums (64) and one containing Effects and Instrument Chops (64). Addon Collections typically feature 96 samples spread across 2-3 files. When using the download option or copying over, these sounds will not play on the computer. Get Triton Studio Sounds and import them into your Korg Triton Studio Sampler today! Add sampling capabilities and a SCSI port to your Triton Studio with the EXB-SMPL! ($199.95) This will enable the import and usage of the Triton Sounds to work with the Triton TR & Triton LE keyboards The EXB-SMPL provides 16 Megs of sample memory (expandable to 64). In addition to two 1/4 in. audio inputs with variable gain and mic/line switching. This option also adds a SCSI connector port. The Triton Sampler option supports AIFF, .WAV. AKAI S1000 & S3000 (samples and mapped multisamples) formats, as well as Korg format. Order the Korg Triton Studio 61 Key Workstation (List: $3640) Our Price: $26 76 Key Workstation (List: $3800) Our Price: $2999 88 Key Workstation (List: $4440) Our Price: $33. KORG TRITON STUDIO SPECIFICATIONS From and 76-key synth action versions, and an 88-key weighted action model. Now comes with a CD-RW factory installed! The new Version 2.0 operating system for the Triton Studio is available as a free download from Korg's site. The new version features many upgrades to the sequencer, improved sample loading & handling, and more. ***Please Note: Upgrading to Version 2.0 will initialize all data in your TRITON STUDIO!! Unless you save your data before you upgrade, all of your programs, combis, etc. will be lost. Superb Sounds, and Incredible Expandability The Triton Studio features Korg's HI (Hyper Integrated) synthesis system and best PCM sounds. Its high-capacity 48 MB PCM ROM, with 429 multi-samples and 417 drum samples, covers every need imaginable. And, in addition to the "classic" Triton's complete ROM, the Triton Studio also includes a new 16 MB stereo acoustic piano with 2 levels of velocity switching. With room for up to 7 PCM expansion boards and up to 96 MB of sample RAM, you can expand the Triton Studio to a whopping 256 MB of waveform data. The Triton Studio delivers a staggering assortment of high-quality programs created by Korg's acclaimed voicing "team." Programs that range from piano, guitar, organ, synth and drums to the latest synth and sound effects. An amazing 1,536 Program locations are provided in user-writable memory (512 preloaded), as well as 256 sounds and 9 drum kits in ROM for GM2 compatibility. For drum Programs you can choose from 144 user drum kits (20 preloaded) and 9 ROM drum kits for GM2. User-writable memory also provides 1,536 Combination locations (512 preloaded), each of which allows you to use up to 8 timbres (8 sound programs) simultaneously. Open memory locations can be used for sounds that you edit/create, load from expansion board packages, or get from other sources. A Sampling Powerhouse The Triton Studio provides high-grade mono/stereo 16-bit sampling at a stunning 48 kHz sampling frequency. It comes with 16 MB of sample memory (RAM) and you can add 72-pin SIMM boards* to expand the sampling memory to a maximum of 96 MB (three 32-MB SIMMs). Samples in RAM can be used in Programs or in drumkits. You can even sample directly to the internal hard disk to create a mono or stereo .WAV file. This is perfect for capturing large files that you wish to move to a computer-based system, and is used for the audio CD recording feature we'll talk more about later. Samples you record to hard disk can be loaded into sample memory for editing and use in Programs and drumkits**. The Triton Studio supports both analog and digital input sampling. AIFF, .WAV, AKAI (S1000/S3000 samples and mapped multi-samples), and Korg format data can be loaded into sample memory. Your sample data can be exported as AIFF or .WAV format sample files. When using the CDRW-1 option (internal ATAPI connection) you can sample any audio CD played on the drive with no other cables needed. The sample editing functions include essential commands such as Normalize, Truncate, Cut, Copy, and Reverse. The graphic waveform display provides a grid function that displays vertical lines to indicate resolution and tempo. There's even a Rate Convert function that changes the sampling frequency to create lo-fi sounds. Sophisticated editing capabilities such as Time Slice, Time Stretch, Crossfade, and Link are provided, giving you the music production power you want. *User installable. Use either 16 MB or 32 MB 72-pin SIMM boards that support Fast Page or EDO, with 11-bit addressing and have an access speed of 60 ns (nanoseconds) or faster. ** Hard disk samples up to 16 MBs (mono) or 32 MB (stereo) can be loaded into sample memory (if sample memory has been expanded to 32 MBs or more). Open Sampling System The Triton Studio allows you to sample in Program, Combination and Sequencer modes as well as in Sampling Mode. In Sampling mode, you can apply insert effects to one or more samples and internally resample the result to create more complex sounds. In Program, Combination, and Sequence modes, a performance using the full functionality of each mode (filters, effects, arpeggiator, sequencer, etc.) can be resampled, as well as sampling external audio sources from the input jacks. This enables you to make new performance loops using the Triton Studio itself, and to expand your polyphony by resampling polyphonic sections as stereo or even mono voices. Polyphony "limitations" disappear as you take advantage of this exciting new feature! In Sequencer mode you can use the "In-Track Sampling" function to sample external audio (like vocals or guitar) while the song plays, and the system will automatically create the appropriate note "trigger" in the track. You can edit the samples and even Time Slice them if you want complete control over tempo and pitch at a later date. A Synergy of Sampling and Synthesis The PCM samples from internal memory (or your own samples) can be assigned to an oscillator and sent through the filter, LFO, and amp sections to process the sound in a variety of ways. For each oscillator, the Triton Studio provides 2 types of filters (Resonant Low pass or Low Pass + High Pass), powerful multi-stage envelope generators, and 2 MIDI-syncable LFOs (with a choice of 21 different LFO waveforms). In addition, its modulation routing system gives you an amazing 42 sources and 55 destinations for controlling parameters in realtime using velocity, key range, the various knobs and switches and more. High-Quality Effects with Flexible Routing The Triton Studio delivers 102 studio quality effects. Its effect section provides 5 insert effects, 2 master effects, a 3-band master EQ and a mixer section that controls the routing between them. Many of the effects can be synced to MIDI or the internal sequencer, and Dynamic Modulation allows you to control effects parameters in realtime. The Triton Studio provides four individual outputs and you can assign any oscillator to any individual output, with or without effects processing. The insert or master effects can even be applied to external sounds, allowing you to use the Triton Studio as a 6-in/6-out MIDI-controlled effect processor. Up to 6 channels of audio I/O The Triton Studio comes with two channel analog and digital (S/PDIF) audio inputs. The S/PDIF inputs supports both 48 kHz and 96 kHz audio sources, ensuring compatibility with digital recording systems. The optional EXB-mLAN interface adds 2 more input channels, for a total of 6. The 6 channels of audio output consist of the main stereo audio outputs, plus 4 individual outs. You can also use stereo S/PDIF output, and add both 6-channel ADAT optical (EXB-DI) and 6-channel mLAN (EXB-mLAN) digital outputs. Oscillators, drums, timbres/tracks, and the sound from the insert effects can all be easily routed to any output. Powerful Polyphonic Arpeggiators The Triton Studio provides powerful dual arpeggiators that can develop polyphonic phrases based on the notes you play. In addition to five basic presets, internal memory contains hundreds of different arpeggio patterns. The arpeggiator even provides a Fixed Note mode that is ideal for drum and percussion sounds. Arpeggios can be synchronized to MIDI Clock, or to the internal sequencer. You can apply 2 different arpeggio patterns to selected sounds in a Combination or Song, or use velocity to switch between them. You can even record their output into the sequencer. RPPR, Realtime Pattern Play/Recording Phrases (patterns) can be played in realtime simply by pressing keys on the keyboard. You can play a different pattern from each key (up to 74 keys per set), and even press multiple keys to play different phrases simultaneously. You can set up the Triton Studio to play patterns from one key range while using another area of the keyboard to play "conventionally" for sophisticated realtime performances. Versatile Sequencing The 16-track sequencer is powerful yet easy to use, has a mammoth capacity of 200 Songs/200,000 notes, and features the "In-Track Sampling" function for seamless integration of samples into sequences. Speed up your music production by using the 16 preset template songs (along with 150 associated preset rhythm patterns), which include programs, track settings and effects suitably voiced for a specific musical style. You can also create and store 16 of your own templates. Patterns make it easy for you create music. Sequenced phrases of up to 99 measures can be joined to quickly build a track. Each song can store 100 patterns that you create (including arpeggiated riffs), and can also use the preset rhythm patterns. You can even extract parts from your own sequences and SMF files and convert them into patterns. In addition to real-time and step recording, the Triton Studio offers overdub and looped recording, and extensive track editing features. Independent track looping is perfect when repetitive phrases are called for. Combinations can easily be copied and recorded in the sequencer including their arpeggiator-generated data. Real-time knob and other controllers can be recorded and a Master Track can be used to control tempo changes. Cue Lists are a great way to build up songs from smaller sections (verse, chorus, bridge, etc.). Up to 99 song steps (each containing 16 tracks) can be arranged in any order you want. Each Song step can be repeated as needed, and you can audition different arrangements by modifying or creating a new Cue List. Up to 20 Cue Lists can be stored in internal memory and can even be converted back into a single linear song if further track editing or additional recording is needed. SMF (Standard MIDI File) formats 0 and 1 are supported for both loading and saving. A Direct Play function can play song data directly from disk, and the Jukebox function lets you specify a setlist of songs to playback automatically. Create Audio CDs The Triton Studio makes it easy for you to create original audio CDs or data backup CDs without using any external equipment. After your song is finished, just sample the output to the internal hard drive and the resulting .WAV files can be written to the optional CDRW-1 (or a SCSI-connected CD-R/RW) to create an audio CD. From initial inspiration to final CD ? what could be easier? Korg's Exclusive TouchView Interface The easy-to-read 320 x 240 pixel screen of the Triton Studio features Korg's exclusive TouchView graphical user interface, letting you operate the instrument by directly touching the screen. Large, responsive, and faster than ever, this display provides visual waveform editing for samples, a clean mixing layout for sequencing and a logical, category-based system for finding sounds, effects and even waveform data. Controllers Extend Your Performance Potential The Triton Studio provides a joystick, ribbon controller, 2 assignable switches, a value slider, 4 real-time control knobs, and 3 arpeggiator control knobs. Along with optional foot switches and pedals, they can be used to control a broad range of synthesis and effect parameters. 3 Models and Superior Keyboard Feel The Triton Studio is available in 3 sizes: 61 and 76-key synth action versions, and an 88-key weighted action model. The 88-key version features the newly developed RH2 (Real Weighted Hammer Action2) keyboard. With a weightier feel in the low register and a lighter, more sensitive feel as you move upward, this piano-touch keyboard will respond to every nuance of your playing. Expand Your Potential A wide range of user-installable options is available for the Triton Studio. You can install a 6-voice DSP MOSS synthesis engine (EXB-MOSS, adding 128 more Programs), up to 7 PCM expansion boards (16 MBs each), add a 6-channel ADAT output connector (EXB-DI), add 6-channel support for the new mLAN digital audio/MIDI network (EXB-mLAN), increase the sample memory, and install an internal 8x CD-RW drive (CDRW-1). Note: Specifications subject to change without notice. TECHNICAL INFO Tone Generator: HI synthesis system: 48kHz Sampling frequency 48MB PCM ROM 429 multisamples + 417 drum samples Sampling: 16-bit 48 kHz stereo/mono sampling 16 MBs memory standard, expandable to 96 MBs Maximum of 1,000multisamples/4,000samples. Up to 128 indices can be assigned to a multi-sample. AIFF, WAVE, AKAI (S1000/S3000 samples and mapped multi-samples), and Korg format sample data can be loaded. Note: TFD-1S ? 4S not supported Single mode: 60 voices (60 oscillators)/Max 120 voices (120 oscillators)* Double Mode: 30 voices (60 oscillators)/Max 60 voices (120 oscillators)** Polyphony: 60 Oscillators to play PCM samples from the internal ROM bank or RAM (user sampling), and 60 oscillators to play PCM samples from the internal Piano bank or an EXB-PCM expansion board Keyboard: 61 Combinations: 1,536 user combinations (512 preload) Sequencer Capacity and Functionality: 16 timbres, 16 tracks, 1/192 resolution, 150 preset/100 user patterns per song, 200 songs, 20 cue lists, 200,000 notes maximum, Reads and writes Standard MIDI File (Format 0 and 1) RPPR: Realtime Pattern Play / Recording, One set per song, 74 patterns per Arpeggiator: 5 preset patterns, 507 user patterns (3 X preload) Controllers: Joystick, Ribbon controller, (2) Assignable switches, (4) Realtime knobs, (3) Arpeggiator control knobs Control Inputs: Damper pedal (responds to half-pedaling), Assignable (SWITCH/PEDAL) Outputs: Main = L/MONO, R, Individual = 1, 2, 3, 4, S/PDIF (optical 24 bits, 48/96 kHz), Headphones Inputs: 1, 2, Level Switch LINE/MIC, Level volume, S/PDIF (optical 24 bits, 48/96 kHz) Disk Drive: 3.5. Get Hip Hop, Dance, Reggaeton, and R&B Triton Samples! Download Free Triton Sounds!
http://www.gotchanoddin.com/store/index.php/index.php?act=viewFormat&formatId=42
CC-MAIN-2017-04
refinedweb
2,574
50.97
Advertisement c++ graphics programs I WANT TO KNOW THE the programs in c++ ebooks free ebooks reqd for maya and 3dmax EasyEclipse for C and C++ EasyEclipse for C and C++ EasyEclipse for C and C++ is all you need to start developing C and C++ code with Eclipse. There are currently 28 comments c++ programs c++ programs i want the syntax for the following programs according to c++ pattern. 1.sum of multiples of an integer upto 10 2.2/4+4/6+6/8+8/10... getting it i m jss starter of this language i want it soon C++Tutorials ) of writing programs using the Win32 API. The language used is C, most C++ compilers...++ is a programming language substantially different from C. Many see C++ as "a better C than... to teach C++ in a way that makes use of what the language can offer. C++ shares C++GraphicsTutorials C++ Graphics Tutorials  ... for changes to compiled code. DirectX Graphics C/C... in this document is correct. C/C++ Windows programmers who want to learn c++ programs c++ programs Write a program with the following (a) A function to read two double type numbers from keyboard (b) A function to calculate the division of these two numbers (c) A try block to throw an exception when a wrong type C Language C Language What's the right declaration for main()? Is void main() correct? in C language ? please help me sir ! Thank You C Language C Language Respected sir, Why does sizeof report a larger size than I expect for a structure type, as if there were padding at the end? help me sir C language C language i want that when i hit any key only * to be print not the hit key in c language The given example will display aestricks on hitting any key. #include "stdio.h" #include "conio.h" #include "ctype.h" int C Language C Language Respected Sir, How can I determine the byte offset of a field within a structure? How can I access structure fields by name at run time? please help me sir . Thank you sir
http://www.roseindia.net/tutorialhelp/allcomments/5608
CC-MAIN-2015-35
refinedweb
357
73.98
So I been trying to code a bunch of DLL stuff, I've done it before, but for some reason this time it's not working. It crashes. Now, bear in mind that to simplify the problem, I've created a simple application that will run one function. But even that crashes! How is this possible? Is it a vista thing? I tried both CodeBlocks and DevCpp. Main PROGRAM: DLL:DLL:Code:#include <iostream> #include <windows.h> typedef int (*DLLFUNCTION)(int); HINSTANCE g_hDLLinst; int main(){ DLLFUNCTION Message = NULL; std::cout << "Running DLL" << std::endl; g_hDLLinst = LoadLibrary("test.dll"); std::cout << "Library Loaded" << std::endl; if(g_hDLLinst != NULL){ Message = (DLLFUNCTION)GetProcAddress(g_hDLLinst, "Message"); } std::cout << "Message Loaded" << std::endl; int x = Message(5); if(x == 5){ std::cout << "Message Worked" << std::endl; } FreeLibrary(g_hDLLinst); return 0; } MAIN.H: MAIN.CPPMAIN.CPPCode:#ifndef __MAIN_H__ #define __MAIN_H__ #include <windows.h> /* To use this exported function of dll, include this header * in your project. */ #define DLL_EXPORT __declspec(dllexport) #ifdef __cplusplus extern "C" { #endif int DLL_EXPORT DLLMessage(int x); #ifdef __cplusplus } #endif #endif // __MAIN_H__ I put both the dll and the exe into the same folder, and it does:I put both the dll and the exe into the same folder, and it does:Code:#include "main.h" // a sample exported function int DLL_EXPORT DLLMessage(int x){ MessageBox(0, "DLL Worked", "DLL Message", MB_OK | MB_ICONINFORMATION); return x; } } Running DLL Library Loaded Message Loaded Crash.......
http://cboard.cprogramming.com/windows-programming/109293-scary-dll-problem-dlls-crash.html
CC-MAIN-2015-48
refinedweb
239
56.86
In the last four articles or so, we’ve done a real whirlwind tour of Haskell libraries. We created a database schema using Persistent and used it to write basic SQL queries in a type-safe way. We saw how to expose this database via an API with Servant. We also went ahead and added some caching to that server with Redis. Finally, we wrote some basic tests around the behavior of this API. By using Docker, we made those tests reproducible. In this article, we’re going to review this whole process by adding another type to our schema. We’ll write some new endpoints for an Article type, and link this type to our existing User type with a foreign key. Then we’ll learn one more library: Esqueleto. Esqueleto improves on Persistent by allowing us to write type-safe SQL joins. As with the previous articles, there’s a specific branch on the Github repository for this series. Go there and take a look at the esqueleto branch to see the complete code for this article. Adding Article to our Schema So our first step is to extend our schema with our Article type. We’re going to give each article a title, some body text, and a timestamp for its publishing time. One new feature we’ll see is that we’ll add a foreign key referencing the user who wrote the article. Here’s what it looks like within our schema: PTH.share [PTH.mkPersist PTH.sqlSettings, PTH.mkMigrate "migrateAll"] [PTH.persistLowerCase| User sql=users ... Article sql=articles title Text body Text publishedTime UTCTime authorId UserId UniqueTitle title deriving Show Read Eq |] We can use UserId as a type in our schema. This will create a foreign key column when we create the table in our database. In practice, our Article type will look like this when we use it in Haskell: data Article = Article { articleTitle :: Text , articleBody :: Text , articlePublishedTime :: UTCTime , articleAuthorId :: Key User } This means it doesn’t reference the entire user. Instead, it contains the SQL key of that user. Since we’ll be adding the article to our API, we need to add ToJSON and FromJSON instances as well. These are pretty basic as well, so you can check them out here if you’re curious. If you’re curious about JSON instances in general, take a look at this article. Adding Endpoints Now we’re going to extend our API to expose certain information about these articles. First, we’ll write a couple basic endpoints for creating an article and then fetching it by its ID: type FullAPI = "users" :> Capture "userid" Int64 :> Get '[JSON] User :<|> "users" :> ReqBody '[JSON] User :> Post '[JSON] Int64 :<|> "articles" :> Capture "articleid" Int64 :> Get '[JSON] Article :<|> "articles" :> ReqBody '[JSON] Article :> Post '[JSON] Int64 Now, we’ll write a couple special endpoints. The first will take a User ID as a key and then it will provide all the different articles the user has written. We’ll do this endpoint as /articles/author/:authorid. ... :<|> "articles" :> "author" :> Capture "authorid" Int64 :> Get '[JSON] [Entity Article] Our last endpoint will fetch the most recent articles, up to a limit of 10. This will take no parameters and live at the /articles/recent route. It will return tuples of users and their articles, both as entities. … :<|> "articles" :> "recent" :> Get '[JSON] [(Entity User, Entity Article)] Adding Queries (with Esqueleto!) Before we can actually implement these endpoints, we’ll need to write the basic queries for them. For creating an article, we use the standard Persistent insert function: createArticlePG :: PGInfo -> Article -> IO Int64 createArticlePG connString article = fromSqlKey <$> runAction connString (insert article) We could do the same for the basic fetch endpoint. But we’ll write this basic query using Esqueleto in the interest of beginning to learn the syntax. With Persistent, we used list parameters to specify different filters and SQL operations. Esqueleto instead uses a special monad to compose the different type of query. The general format of an esqueleto select call will look like this: fetchArticlePG :: PGInfo -> Int64 -> IO (Maybe Article) fetchArticlePG connString aid = runAction connString selectAction where selectAction :: SqlPersistT (LoggingT IO) (Maybe Article) selectAction = select . from $ \articles -> do ... We use select . from and then provide a function that takes a table variable. Our first queries will only refer to a single table, but we'll see a join later. To complete the function, we’ll provide the monadic action that will incorporate the different parts of our query. The most basic filtering function we can call from within this monad is where_. This allows us to provide a condition on the query, much as we could with the filter list from Persistent. selectAction :: SqlPersistT (LoggingT IO) (Maybe Article) selectAction = select . from $ \articles -> do where_ (articles ^. ArticleId ==. val (toSqlKey aid)) First, we use the ArticleId lens to specify which value of our table we’re filtering. Then we specify the value to compare against. We not only need to lift our Int64 into an SqlKey, but we also need to lift that value using the val function. But now that we’ve added this condition, all we need to do is return the table variable. Now, select returns our results in a list. But since we’re searching by ID, we only expect one result. We’ll use listToMaybe so we only return the head element if it exists. We’ll also use entityVal once again to unwrap the article from its entity. selectAction :: SqlPersistT (LoggingT IO) (Maybe Article) selectAction = ((fmap entityVal) . listToMaybe) <$> (select . from $ \articles -> do where_ (articles ^. ArticleId ==. val (toSqlKey aid)) return articles) Now we should know enough that we can write out the next query. It will fetch all the articles that have written by a particular user. We’ll still be querying on the articles table. But now instead checking the article ID, we’ll make sure the ArticleAuthorId is equal to a certain value. Once again, we’ll lift our Int64 user key into an SqlKey and then again with val to compare it in “SQL-land”. fetchArticleByAuthorPG :: PGInfo -> Int64 -> IO [Entity Article] fetchArticleByAuthorPG connString uid = runAction connString fetchAction where fetchAction :: SqlPersistT (LoggingT IO) [Entity Article] fetchAction = select . from $ \articles -> do where_ (articles ^. ArticleAuthorId ==. val (toSqlKey uid)) return articles And that’s the full query! We want a list of entities this time, so we’ve taken out listToMaybe and entityVal. Now let’s write the final query, where we’ll find the 10 most recent articles regardless of who wrote them. We’ll include the author along with each article. So we’re returning a list of of these different tuples of entities. This query will involve our first join. Instead of using a single table for this query, we’ll use the InnerJoin constructor to combine our users table with the articles table. fetchRecentArticlesPG :: PGInfo -> IO [(Entity User, Entity Article)] fetchRecentArticlesPG connString = runAction connString fetchAction where fetchAction :: SqlPersistT (LoggingT IO) [(Entity User, Entity Article)] fetchAction = select . from $ \(users `InnerJoin` articles) -> do Since we’re joining two tables together, we need to specify what columns we’re joining on. We’ll use the on function for that: fetchAction :: SqlPersistT (LoggingT IO) [(Entity User, Entity Article)] fetchAction = select . from $ \(users `InnerJoin` articles) -> do on (users ^. UserId ==. articles ^. ArticleAuthorId) Now we’ll order our articles based on the timestamp of the article using orderBy. The newest articles should come first, so we'll use a descending order. Then we limit the number of results with the limit function. Finally, we’ll return both the users and the articles, and we’re done! fetchAction :: SqlPersistT (LoggingT IO) [(Entity User, Entity Article)] fetchAction = select . from $ \(users `InnerJoin` articles) -> do on (users ^. UserId ==. articles ^. ArticleAuthorId) orderBy [desc (articles ^. ArticlePublishedTime)] limit 10 return (users, articles) Caching Different Types of Items We won’t go into the details of caching our articles in Redis, but there is one potential issue we want to observe. Currently we’re using a user’s SQL key as their key in our Redis store. So for instance, the string “15” could be such a key. If we try to naively use the same idea for our articles, we’ll have a conflict! Trying to store an article with ID “15” will overwrite the entry containing the User! But the way around this is rather simple. What we would do is that for the user’s key, we would make the string something like users:15. Then for our article, we’ll have its key be articles:15. As long as we deserialize it the proper way, this will be fine. Filling in the Server handlers Now that we’ve written our database query functions, it is very simple to fill in our Server handlers. Most of them boil down to following the patterns we’ve already set with our other two endpoints: fetchArticleHandler :: PGInfo -> Int64 -> Handler Article fetchArticleHandler pgInfo aid = do maybeArticle <- liftIO $ fetchArticlePG pgInfo aid case maybeArticle of Just article -> return article Nothing -> Handler $ (throwE $ err401 { errBody = "Could not find article with that ID" }) createArticleHandler :: PGInfo -> Article -> Handler Int64 createArticleHandler pgInfo article = liftIO $ createArticlePG pgInfo article fetchArticlesByAuthorHandler :: PGInfo -> Int64 -> Handler [Entity Article] fetchArticlesByAuthorHandler pgInfo uid = liftIO $ fetchArticlesByAuthorPG pgInfo uid fetchRecentArticlesHandler :: PGInfo -> Handler [(Entity User, Entity Article)] fetchRecentArticlesHandler pgInfo = liftIO $ fetchRecentArticlesPG pgInfo Then we’ll complete our Server FullAPI like so: fullAPIServer :: PGInfo -> RedisInfo -> Server FullAPI fullAPIServer pgInfo redisInfo = (fetchUsersHandler pgInfo redisInfo) :<|> (createUserHandler pgInfo) :<|> (fetchArticleHandler pgInfo) :<|> (createArticleHandler pgInfo) :<|> (fetchArticlesByAuthorHandler pgInfo) :<|> (fetchRecentArticlesHandler pgInfo) One interesting thing we can do is that we can compose our API types into different sections. For instance, we could separate our FullAPI into two parts. First, we could have the UsersAPI type from before, and then we could make a new type for ArticlesAPI. We can glue these together with the e-plus operator just as we could individual endpoints! type FullAPI = UsersAPI :<|> ArticlesAPI type UsersAPI = "users" :> Capture "userid" Int64 :> Get '[JSON] User :<|> "users" :> ReqBody '[JSON] User :> Post '[JSON] Int64 type ArticlesAPI = "articles" :> Capture "articleid" Int64 :> Get '[JSON] Article :<|> "articles" :> ReqBody '[JSON] Article :> Post '[JSON] Int64 :<|> "articles" :> "author" :> Capture "authorid" Int64 :> Get '[JSON] [Entity Article] :<|> "articles" :> "recent" :> Get '[JSON] [(Entity User, Entity Article)] If we do this, we’ll have to make similar adjustments in other areas combining the endpoints. For example, we would need to update the server handler joining and the client functions. Writing Tests Since we already have some user tests, it would also be good to have a few tests on the Articles section of the API. We’ll add one simple test around creating an article and then fetching it. Then we’ll add one test each for the "articles-by-author" and "recent articles" endpoints. So one of the tricky parts of filling in this section will be that we need to make test Article object. But we'll need them to be functions on the User ID. This is because we can’t know a priori what SQL IDs we'll get when we insert the users into the database. But we can fill in all the other fields, including the published time. Here’s one example, but we’ll have a total of 18 different “test” articles. testArticle1 :: Int64 -> Article testArticle1 uid = Article { articleTitle = "First post" , articleBody = "A great description of our first blog post body." , articlePublishedTime = posixSecondsToUTCTime 1498914000 , articleAuthorId = toSqlKey uid } -- 17 other articles and some test users as well … Our before hooks will create all these different entities in the database. In general, we’ll go straight to the database without calling the API itself. Like with our users tests, we'll want to delete any database items we create. Let's write a generic after-hook that will take user IDs and article IDs and delete them from our database: deleteArtifacts :: PGInfo -> RedisInfo -> [Int64] -> [Int64] -> IO () deleteArtifacts pgInfo redisInfo users articles = do void $ forM articles $ \a -> deleteArticlePG pgInfo a void $ forM users $ \u -> do deleteUserCache redisInfo u deleteUserPG pgInfo u It’s important we delete the articles first! If we delete the users first, we'll encounter foreign key exceptions! Our basic create-and-fetch test looks a lot like the previous user tests. We test the success of the response and that the new article lives in Postgres as we expect. beforeHook4 :: ClientEnv -> PGInfo -> IO (Bool, Bool, Int64, Int64) beforeHook4 clientEnv pgInfo = do userKey <- createUserPG pgInfo testUser2 articleKeyEither <- runClientM (createArticleClient (testArticle1 userKey)) clientEnv case articleKeyEither of Left _ -> error "DB call failed on spec 4!" Right articleKey -> do fetchResult <- runClientM (fetchArticleClient articleKey) clientEnv let callSucceeds = isRight fetchResult articleInPG <- isJust <$> fetchArticlePG pgInfo articleKey return (callSucceeds, articleInPG, userKey, articleKey) spec4 :: SpecWith (Bool, Bool, Int64, Int64) spec4 = describe "After creating and fetching an article" $ do it "The fetch call should return a result" $ \(succeeds, _, _, _) -> succeeds `shouldBe` True it "The article should be in Postgres" $ \(_, inPG, _, _) -> inPG `shouldBe` True afterHook4 :: PGInfo -> RedisInfo -> (Bool, Bool, Int64, Int64) -> IO () afterHook4 pgInfo redisInfo (_, _, uid, aid) = deleteArtifacts pgInfo redisInfo [uid] [aid] Our next test will create two different users and several different articles. We'll first insert the users and get their keys. Then we can use these keys to create the articles. We create five articles in this test. We assign three to the first user, and two to the second user: ] ... Now we want to test that we when call the articles-by-user endpoint, we only get the right articles. We’ll return each group of articles, the user IDs, and the list of article IDs: ] firstArticles <- runClientM (fetchArticlesByAuthorClient uid1) clientEnv secondArticles <- runClientM (fetchArticlesByAuthorClient uid2) clientEnv case (firstArticles, secondArticles) of (Right as1, Right as2) -> return (entityVal <$> as1, entityVal <$> as2, uid1, uid2, articleIds) _ -> error "Spec 5 failed!" Now we can write the assertion itself, testing that the articles returned are what we expect. spec5 :: SpecWith ([Article], [Article], Int64, Int64, [Int64]) spec5 = describe "When fetching articles by author ID" $ do it "Fetching by the first author should return 3 articles" $ \(firstArticles, _, uid1, _, _) -> firstArticles `shouldBe` [testArticle2 uid1, testArticle3 uid1, testArticle4 uid1] it "Fetching by the second author should return 2 articles" $ \(_, secondArticles, _, uid2, _) -> secondArticles `shouldBe` [testArticle5 uid2, testArticle6 uid2] We would then follow that up with a similar after hook. The final test will follow a similar pattern. Only this time, we’ll be checking the combinations of users and articles. We’ll also make sure to include 12 different articles to test that the API limits results to 10. beforeHook6 :: ClientEnv -> PGInfo -> IO ([(User, Article)], Int64, Int64, [Int64]) beforeHook6 clientEnv pgInfo = do uid1 <- createUserPG pgInfo testUser5 uid2 <- createUserPG pgInfo testUser6 articleIds <- mapM (createArticlePG pgInfo) [ testArticle7 uid1, testArticle8 uid1, testArticle9 uid1, testArticle10 uid2 , testArticle11 uid2, testArticle12 uid1, testArticle13 uid2, testArticle14 uid2 , testArticle15 uid2, testArticle16 uid1, testArticle17 uid1, testArticle18 uid2 ] recentArticles <- runClientM fetchRecentArticlesClient clientEnv case recentArticles of Right as -> return (entityValTuple <$> as, uid1, uid2, articleIds) _ -> error "Spec 6 failed!" where entityValTuple (Entity _ u, Entity _ a) = (u, a) Our spec will check that the list of 10 articles we get back matches our expectations. Then, as always, we remove the entities from our database. Now we call these tests with our other tests, with small wrappers to call the hooks: main :: IO () main = do ... hspec $ before (beforeHook4 clientEnv pgInfo) $ after (afterHook4 pgInfo redisInfo) $ spec4 hspec $ before (beforeHook5 clientEnv pgInfo) $ after (afterHook5 pgInfo redisInfo) $ spec5 hspec $ before (beforeHook6 clientEnv pgInfo) $ after (afterHook6 pgInfo redisInfo) $ spec6 And now we’re done! The tests pass! … After creating and fetching an article The fetch call should return a result The article should be in Postgres Finished in 0.1698 seconds 2 examples, 0 failures When fetching articles by author ID Fetching by the first author should return 3 articles Fetching by the second author should return 2 articles Finished in 0.4944 seconds 2 examples, 0 failures When fetching recent articles Should fetch exactly the 10 most recent articles Conclusion This completes our overview of useful production libraries. Over these articles, we’ve constructed a small web API from scratch. We’ve seen some awesome abstractions that let us deal with only the most important pieces of the project. Both Persistent and Servant generated a lot of extra boilerplate for us. This article showed the power of the Esqueleto library in allowing us to do type-safe joins. We also saw an end-to-end process of adding a new type and endpoints to our API. In the coming weeks, we’ll be dealing with some more issues that can arise when building these kinds of systems. In particular, we’ll see how we can use alternative monads on top of Servant. Doing this can present certain issues that we'll explore. We’ll culminate by exploring the different approaches to encapsulating effects. Be sure to check out our Haskell Stack mini-course!! It'll show you how to use Stack, so you can incorproate all the libraries from this series! If you’re new to Haskell and not ready for that yet, take a look at our Getting Started Checklist and get going!
https://mmhaskell.com/blog/2017/10/30/join-the-club-type-safe-joins-with-esqueleto
CC-MAIN-2018-51
refinedweb
2,836
52.9
File Transfer Guide Overview The File Transfer API allows developers to easily and reliably transfer binary and text files from the mobile companion to the device, even if the application is not running on the device. The API exposes a simplified file queue, and provides a number of events which allow applications to easily interact with that queue. How It Works When a file is queued in the Outbox, the system will begin transferring it to a temporary folder. The file is held in this temporary folder until the transfer has completed. At this point, the file is automatically moved into a staging folder, but is not yet accessible by the application. Finally, an event is raised that a new file exists, and can be processed. This process is transparent to the developer. The mobile application and device will take care of the connection handshake and retries (if required). Note: File transfers and retries are handled automatically, even when if the application or companion are not currently running. Companion Outbox In order to send a file from the companion to the device, we must first generate or download that file. In this example, we will download an image using the Fetch API, then add that file to the Outbox queue in order to transfer it to the device. import { outbox } from "file-transfer" let srcImage = ""; let destFilename = "kitten.jpg"; // Fetch the image from the internet fetch(srcImage).then(function (response) { return response.arrayBuffer(); }).then(data => { outbox.enqueue(destFilename, data).then(ft => { console.log(`Transfer of '${destFilename}' successfully queued.`); }).catch(err => { throw new Error(`Failed to queue '${destFilename}'. Error: ${err}`); }); }).catch(err => { console.error(`Failure: ${err}`); }); At this stage, we know if our file was queued successfully, or not. We could now queue additional files, or retry if there was an issue. If you queue a file with the same filename, it will overwrite the previous file. You can find out more information in the Companion File Transfer API reference documentation. Device Inbox In the previous example, we saw how to queue a file for transfer from the companion outbox to the device. In order to use that file on the device, we must listen for incoming file transfers, and move files from the staging folder into the application folder ( /private/data/). This is achieved by calling nextFile() on the inbox. The inbox may contain multiple files, so keep processing until the inbox is empty. Files may also be received while the device application is not running, so it's important to process the inbox queue when the application is first launched too. import { inbox } from "file-transfer" function processAllFiles() { let fileName; while (fileName = inbox.nextFile()) { console.log(`/private/data/${fileName} is now available`); } } inbox.addEventListener("newfile", processAllFiles); processAllFiles(); If you want to list the contents of the /private/data/ folder, you can use the listDirSync() method. import { listDirSync } from "fs"; const listDir = listDirSync("/private/data"); do { const dirIter = listDir.next(); if (dirIter.done) { break; } console.log(dirIter.value); } while (true); You can find out more information in the Device File Transfer API reference documentation. Device Outbox In order to send a file from the device to the companion, we must add that file to the Outbox queue in order for the system to transfer it. import { outbox } from "file-transfer"; outbox .enqueueFile("/private/data/app.txt") .then(ft => { console.log(`Transfer of ${ft.name} successfully queued.`); }) .catch(err => { console.log(`Failed to schedule transfer: ${err}`); }) Companion Inbox In the previous example, we saw how to queue a file for transfer from the device outbox to the companion. In order to use that file, we must listen for incoming file transfers, and process each file in the queue. This is achieved by calling pop() on the inbox. The inbox may contain multiple files, so keep processing until the inbox is empty. Files may also be received while the companion is not running, so it's important to process the inbox queue as files are received, and also when the companion is first launched. import { inbox } from "file-transfer"; async function processAllFiles() { let file; while ((file = await inbox.pop())) { const payload = await file.text(); console.log(`file contents: ${payload}`); } } inbox.addEventListener("newfile", processAllFiles); processAllFiles() Best Practices Here's a simple list of best practices to follow when using the File Transfer API: - The name of the file must be less than 64 characters and can only include the following characters: a-z A-Z 0-9 ! # $ % & ' ( ) - @ ^ _ { } ~ + , . ; = [ ] Files with invalid names will be rejected. - Try to minimize the size of your files before transferring them to the device. - Images sent to the device must be in JPG or TXI format only. - Check the inbox queue when your application or companion is launched. File Transfer in Action If you're interested in using the File Transfer API within your application, please review the Companion File Transfer API or Device File Transfer API reference documentation.
https://dev.fitbit.com/build/guides/communications/file-transfer/
CC-MAIN-2022-33
refinedweb
821
56.05
Download presentation Presentation is loading. Please wait. Published byTanya Judkins Modified over 4 years ago 1 Section 5 Lists again. Double linked lists – insertion, deletion. Trees 2 Lists: Revisited class List { protected ListCell head; public void delete(Object o) {...} } head null 3 List cells: Cells containg Objects. Each Cell has a pointer to the next cell in the list. class ListCell {... public ListCell getNext(); // returns the next element. public void setNext(ListCell l); // sets the pointer to point to l public Object getDatum(); // returns the object stored in the cell } 4 Deleting Iterative version: public void delete(Object o) { ListCell current = head, previous = null; while (current != null) { if (current.getDatum().equals(o)) // found the object { if (previous == null) return l.getNext() ; // it was the first one else { previous.setNext(current.getNext()); return l; } } else previous = current; current = current.getNext(); } 5 Deleting 2: Revisited Deleting element: recursive way: Intuition: – If list l is empty, return null. – If first element of l is o, return rest of list l. – Otherwise, return list consisting of first element of l, and the list that results from deleting o from the rest of list l. 6 Deleting 3: Revolutions Notation: (x:xs) – a list which first element is x and the rest is list xs Example: (1:(2:(3:null))) Then (pseudo-code): 1. delete o null = return null; 2. delete o (x:xs) = if x = = o then return xs; 3. else { rest = delete o xs; 4. 5. return (x:(delete o rest)); } 7 Deleting 4: New hope Deleting an element from list: public void delete(Object o) { head = deleteRec(o, head); } public static ListCell deleteRec(Object o, ListCell l) { ListCell rest; 1. if (l = = null) return l; 2. if (l.getDatum().equals(o)) return l.getNext(); 3. rest = deleteRec(l.getNext(), o); 4. l.setNext(rest); 5. return l; } 8 Doubly-linked lists class DLLCell { protected Object datum; protected DLLCell next; protected DLLCell previous; ….. } 9 Doubly-linked lists class Dlist { protected DLLCell head; public void insertAfter(Object o, Object a) // inserts o after a { insertAfterRec(head, o, a); } public void delete(Object o); } 10 DLL: Insertion Intuition: The result of inserting o to the empty list is... The result of inserting o to the list starting with a is... The result of inserting o to the list starting with x is... 11 DLL: Insertion Intuition: The result of inserting o to the empty list is a list containing o. The result of inserting o to the list starting with a is a list containing a, o and the rest of original list. The result of inserting o to the list starting with x is the list containing x and the result of inserting o to the rest of the original list. 12 DLL: Insertion DLLCell insertAfterRec(DLLCell l, Object o, Object a) { if (l == null) // empty list return new DLLCell(o, null, null); if (l.getDatum().equals(a)) // list starting with a { DLLCell cell = new DLLCell(o, l, l.getNext()); l.setNext(cell); return l; } //otherwise l.setNext(insertAfterRec(l.getNext(), o, a)); return l; } 13 DLL: Deletion Intuition: The result of deleting o from the empty list is... The result of deleting o from the list starting with o is... The result of deleting o from the list starting with x <> o is... 14 DLL: Deletion Intuition: The result of deleting o from the empty list is the empty list The result of deleting o from the list starting with o is the rest of the list The result of deleting o from the list starting with x <> o is the list containing x and the result of deleting o from the rest of the list 15 DLL: Deletion DLLCell deleteRec(DLLCell l, Object o) { DLLCell rest; if (l == null) // empty list return null; if (l.getDatum().equals(o)) // list starting with o { return l.getNext(); } //otherwise rest = deleteRec(l.getNext(), o); l.setNext(rest); rest.setPrev(l); // to make sure links are updated return l; } 16 Trees! Class for binary tree cells class TreeCell { protected Object datum; protected TreeCell left; protected TreeCell right; public TreeCell(Object o) { datum = o; } public TreeCell(Object o, TreeCell l, TreeCell r) { datum = o; left = l; right = r; } methods called getDatum, setDatum, getLeft, setLeft, getRight, setRight with obvious code } 17 Height of a tree Intuition: The height of a an empty tree is -1 The height of a tree containing leaf is... The height of a tree containing subtrees l1 and l2 is... 18 Height of a tree int height(TreeCell t) { if (t == null) return -1; if (isLeaf(t)) return 0; return Math.max( height(t.getLeft()), height(t.getRight())) + 1; } 19 Number of nodes in the tree int nodes(TreeCell t) { if (t == null) return 0; return nodes(t.getLeft()) + nodes(t.getRight()) + 1; } Similar presentations © 2018 SlidePlayer.com Inc.
http://slideplayer.com/slide/1607654/
CC-MAIN-2018-39
refinedweb
805
63.8
A Very Brief Summary Discussion of major changes is pointless in the absence of a familiarity with internals and implementations. Discussion of minor changes leads to a whole bunch of minor changes which don't end up signifying a whole lot. Discussion was frequently disregarded by proposal authors. Problems with Proposals of Major Changes Somewhere in the middle of the RFC process, I posted a satirical RFC proposing that cars should get 200 miles per gallon of gasoline. I earnestly enumerated all the reasons why this would be a good idea. Then I finished by saying I confess that I'm not an expert in how cars work. Nevertheless, I'll go out on a limb and assert that this will be relatively easy to implement, with relatively few entangling side-issues. This characterizes a common problem with many of the RFCs. Alan Perlis, a famous wizard, once said: When someone says ``I want a programming language in which I need only say what I wish done,'' give him a lollipop. Although it may sometimes seem that when we program a computer we are playing with the purest and most perfectly malleable idea-stuff, bending it into whatever shape we desire, it isn't really so. Code is a very forgiving medium, much easier to work than metal or stone. But it still has its limitations. When someone says that they want hashes to carry a sorting order, and to always produce items in sorted order, they have failed to understand those limitations. Yes, it would be fabulous to have a hash which preserves order and has fast lookup and fast insert and delete and which will also fry eggs for you, but we don't know how to do that. We have to settle for the things that we do know how to do. Many proposals were totally out of line with reality. It didn't matter how pretty such a proposal sounds, or even if Larry accepts it. If nobody knows how to do it, it is not going to go in. Back in August I read the IMPLEMENTATION section of the 166s RFC that were then extant. 15/166 had no implementation section at all. 14/166 had an extensive implementation section that neglected to discuss the implementation at all, and instead discussed the programming language interface. 16/166 contained a very brief remark to the effect that the implementation would be simple, which might or might not have been true. 34 of 166 RFCs had a very brief section with no substantive discussion or a protestation of ignorance: ``Dammit, Jim, I'm a doctor, not an engineer!'' ``I'll leave that to the internals guys. :-) '' ``I've no real concrete ideas on this, sorry.'' RFC 128 proposed a major extension of subroutine prototypes, and then, in the implementation section, said only ``Definitely S.E.P.''. (``Someone Else's Problem'') I think this flippant attitude to implementation was a big problem in the RFC process for several reasons. It leads to a who-will-bell-the-cat syndrome, in which people propose all sorts of impossible features and then have extensive discussions about the minutiae of these things that will never be implemented in any form. You can waste an awful lot of time discussing whether you want your skyhooks painted blue or red, and in the RFC mailing lists, people did exactly this. It distracts attention from concrete implementation discussion about the real possible tradeoffs. In my opinion, it was not very smart to start a perl6-internals list so soon, because that suggested that the other lists were not for discussion of internals. As a result, a lot of the discussion that went on on the perl6-language-* lists bore no apparent relation to any known universe. One way to fix this might have been to require every language list to have at least one liaison with the internals list, someone who had actually done some implementation work and had at least a vague sense of what was possible. Finally, on a personal note, I found this flippancy annoying. There are a lot of people around who do have some understanding of the Perl internals. An RFC author who knows that he does not understand the internals should not have a lot of trouble finding someone to consult with, to ask basic questions like ``Do you think this could be made to work?'' As regex group chair, I offered more than once to hook up RFC authors with experienced Perl developers. RFC authors did not bother to do this themselves, preferring to write ``S.E.P.'' and ``I have no idea how difficult this would be to implement'' and ``Dunno''. We could have done better here, but we were too lazy to bother. Problems with Proposals of Minor Changes Translation Issues Ignored or Disregarded Translation issues were frequently ignored. Larry has promised that 80% of Perl 5 programs would be translatable to Perl 6 with 100% compatibility, and 95% with 95% compatibility. Several proposals were advanced which would have changed Perl so as to render many programs essentially untranslatable. If the authors considered such programs to be in the missing 5%, they never said so. Even when the translation issues were not entirely ignored, they were almost invariably incomplete. For example, RFC 74 proposed a simple change: Rename import and unimport to be IMPORT and UNIMPORT, to make them consistent with the other all-capitals subroutine names that are reserved to Perl. It's not clear what the benefit of this is, since as far as I know nobody has ever reported that they tried to write an import subroutine and then were bitten by the special meaning that Perl attaches to this name, but let's ignore this and suppose that the change is actually useful. The MIGRATION section of this RFC says, in full: The Perl5 -> Perl6 translator should provide a importalias for the IMPORTroutine to ease migration. Likewise for unimport. It's not really clear what that means, unless you suppose that the author got it backwards. A Perl 5 module being translated already has an import routine, so it does not need an import alias. Instead, it needs an IMPORT alias that points at import, which it already has. Then when it is run under perl6, Perl will try to call the IMPORT routine, and, because of the alias, it will get the import routine that is actually there. Now, what if this perl5 module already has an IMPORT subroutine also? Then you can't make IMPORT an alias for import because you must clobber the real IMPORT to do so. Maybe you can rename the original IMPORT and call it _perl5_IMPORT. Now you have broken the following code: $x = 'IMPORT'; &$x(...); because now the code calls import instead of the thing you now call _perl5_IMPORT. It's easy to construct many cases that are similarly untranslatable. None of these cases are likely to be common. That is not the point. The point is that the author apparently didn't give any thought to whether they were common or not; it seems that he didn't get that far. Lest anyone think that I'm picking on this author in particular (and I'm really not, because the problem was nearly universal) I'll just point out that a well-known Perl expert had the same problem in RFC 271. As I said in email to this person, the rule of thumb for the MIGRATION ISSUES section is that 'None' is always the wrong answer. An Anecdote About Translation Issues Here's a related issue, which is somewhat involved, but which I think perfectly demonstrates the unrealistic attitudes and poor understanding of translation issues and Larry's compatibility promises. Perl 5 has an eval function which takes a string argument, compiles the string as Perl code, and executes it. I pointed out that if you want a Perl 5 program to have the same behavior after it is translated, eval'ed code will have to be executed with Perl 5 semantics, not Perl 6 semantics. Presumably the Perl 6 eval will interpret its argument as a Perl 6 program, not a Perl 5 program. For example, the Memoize module constructs an anonymous subroutine like this: my $wrapper = eval "sub $proto { unshift \@_, qq{$cref}; goto &_memoizer; }"; Suppose hypothetically that the unshift function has been eliminated from Perl 6, and that Perl 5 programs that use unshift are translated so that unshift @array, LIST becomes splice @array, 0, 0, LIST instead. Suppose that Memoize.pm has been translated similarly, but the unshift in the statement above cannot be translated because it is part of a string. If Memoize.pm is going to continue working, the unshift in the string above will need to be interpreted as a Perl 5 unshift (which modifies an array) instead of as a Perl 6 unshift (which generates a compile-time error.) The easy solution to this is that when the translator sees eval in a Perl 5 program, it should not translate it to eval, but instead to perl5_eval. perl5_eval would be a subroutine that would call the Perl5-to-Perl6 translator on the argument string, and then the built-in (Perl 6) eval function on the result. A number of people objected to this, and see if you can guess why: Performance! I found this incredible. Apparently these people all come from the planet where it is more important for the program to finish as quickly as possible than for it to do what it was intended to do. Tunnel Vision Probably the largest and most general problem with the proposals themselves was a lack of overall focus in the ideas put forward. Here is a summary of a typical RFC: Feature XYZ of Perl has always bothered me. I do task xyzpdq all the time and XYZ is not quite right for it; I have to use two lines of code instead of only one. So I propose that XYZ change to XY'Z instead. RFCs 148 and 272 are a really excellent example of this. They propose two versions of the same thing, each author having apparently solved his little piece of the problem without considering that the Right Thing is to aim for a little more generality. RFC 262 is also a good example, and there are many, many others. Now, fixing minor problems with feature XYZ, whatever it is, is not necessarily a bad idea. The problem is that so many of the solutions for these problems were so out of proportion to the size of the problem that they were trying to solve. Usually the solution was abetted by some syntactic enormity. The subsequent discussions would usually discover weird cases of tunnel vision. One might say to the author that the solution they proposed seemed too heavyweight to suit the problem, like squashing a mosquito with a sledgehammer. But often the proponent wouldn't be able to see that, because for them, this was an unusually annoying mosquito. People would point out that with a few changes, the proposal could also be extended to cover a slightly different task, xyz'pdq, also, and the proponent would sometimes reply that they doesn't consider that to be an important problem to solve. It's all right to be so short-sighted when you're designing software for yourself, but when you design a language that will be used by thousands or millions of people, you have to have more economy. Every feature has a cost in implementation and maintenance and documentation and education, so the language designer has to make every feature count. If a feature isn't widely useful to many people for many different kinds of tasks, it has negative value. In the limit, to accomplish all the things that people want from a language, unless most of your features are powerful and flexible, you have to include so very many of them that the language becomes an impossible morass. (Of course, there is another theory which states that this has already happened.) This came as no surprise to me. I maintain the Memoize module, which is fairly popular. People would frequently send me mail asking me to add a certain feature, such as timed expiration of cached data. I would reply that I didn't want to do that, because it would slow down the module for everyone, and it would not help the next time I got a similar but slightly different request, such as a request for data that expires when it has been used a fixed number of times. The response was invariably along the lines of ``But what would anyone want to do that for?'' And then the following week I would get mail from someone else asking for expiration of data after it had been used a fixed number of times, and I would say that I didn't want to put this in because it wouldn't help people with the problem of timed expiration and the response would be exactly the same. A module author must be good at foreseeing this sort of thing, and good at finding the best compromise solution for everyone's problems, not just the butt-pimple of the week. A language designer must be even better at doing this, because many, many people will be stuck with the language for years. Many of the people producing RFCs were really, really bad at it. Miscellaneous Problems Lack of Familiarity with Prior Art Many of the people proposing features had apparently never worked in any language other than Perl. Many features were proposed that had been tried in other language and found deficient in one way or another. (For example, the Ruby language has a feature similar to that proposed in RFC 162.) Of course, not everyone knows a lot of other languages, and one has to expect that. It wouldn't have been so bad if the proponents had been more willing to revise their ideas in light of the evidence. Worse, many of the people proposing new features appeared not to be familiar with Perl. RFC 105 proposed a change that had already been applied to Perl 5.6. RFC 158 proposed semantics for $& that had already been introduced in Perl 5.000. Too Much Syntax Too many of the proposals focused on trivial syntactic issues. This isn't to suggest that all the syntactic RFCs were trivial. I particularly appreciated RFC 9's heroic attempt to solve the reference syntax mess. An outstanding example of this type of RFC: The author of RFC 274 apparently didn't like the /${foo}bar/ syntax for separating a variable interpolation from a literal string in a regex, because he proposed a new syntax, /$foo(?)bar/. Wonderful, because then when Perl 7 comes along we can have an RFC that complains that "${foo}bar" works in double-quoted strings but "$foo(?)bar" does not, points out that beginners are frequently confused by this exception, and proposes to fix it by making "(?)" special in double-quoted strings as well. This also stands out as a particularly bad example of the problem of the previous section, in which the author is apparently unfamiliar with Perl. Why? Because the syntaxes /$foo(?:)bar/ and /$foo(?=)bar/ both work today and do what RFC 274 wanted to do, at the cost of one extra character. (This part of the proposal was later withdrawn.) Working Group Chairs Useless Maybe 'regex language working group chair' is a good thing to put on your résumé, but I don't think I'll be doing that soon, because when you put something like that on your résumé, you always run the risk that an interviewer will ask what it actually means, and if that happened to me I would have to say that I didn't know. I asked on the perl6-meta list what the working group chair's duties were, and it turned out that nobody else knew, either. Working group chairs are an interesting idea. Some effort was made to chose experienced people to fill the positions. This effort was wasted because there was nothing for these people to do once they were appointed. They participated in the discussions, which was valuable, but calling them 'working group chairs' did not add anything. Overall Problems Discussion was of Unnecessarily Low Quality The biggest problem with the discussion process was that it was essentially pointless, except perhaps insofar as it may have amused a lot of people for a few weeks. What I mean by 'pointless' is that I think the same outcome would have been achieved more quickly and less wastefully by having everyone mail their proposals directly to Larry. Much of the discussion that I saw was of poor quality because of lack of familiarity with other languages, with Perl, with basic data structures, and so forth. But I should not complain too much about this because many ill-informed people were still trying in good faith to have a reasonable discussion of the issues involved. That is all we can really ask for. Much worse offenses were committed regularly. I got really tired of seeing people's suggestions answered with 'Blecch'. Even the silliest proposal does not deserve to be answered with 'Blecch'. No matter how persuasive or pleasing to the ear, it's hard to see 'Blecch' as anything except a crutch for someone who's too lazy to think of a serious technical criticism. The RFC author's counterpart of this tactic was to describe their own proposal as 'more intuitive' and 'elegant' and everything else as 'counter-intuitive' and 'ugly'. 'Elegant' appears to be RFCese for 'I don't have any concrete reason for believing that this would be better, but I like it anyway.' Several times I saw people respond to technical criticism of their proposals by saying something like ``It is just a proposal'' or ``It is only a request for comments''. Perhaps I am reading too much into it, but that sounds to me like an RFC author who is becoming defensive, and who is not going to listen to anyone else's advice. One pleasant surprise is that the RFCs were mostly free of references to the 'beginners'; I only wish it had been as rare in the following discussion. One exasperated poster said: ``Beginners are confused by X'' is a decent bolstering argument as to why X should be changed, but it's a lousy primary argument. A final bright point: I don't think Hitler was invoked at any point in the discussion. Too Much Criticism and Discussion was Ignored by RFC Authors Here's probably the most serious problem of the whole discussion process: Much of the criticism that wasn't low-quality was ignored anyway; it was not even incorporated into subsequent revisions of the RFCs. And why should it have been? No RFC proponent stood to derive any benefit from incorporating criticism, or even reading it. Suppose you had a nifty idea for Perl 6, and you wrote an RFC. Then three people pointed out problems with your proposal. You might withdraw the RFC, or try to fix the problem, and a few people did actually do these things. But most people did not, or tried for a while and then stopped. Why bother? There was no point to withdrawing an RFC, because if you left it in, Larry might accept it anyway. Kill 'em all and Larry sort 'em out! As a thought experiment, let's try to give the working group chairs some meaning by giving them the power to order the withdrawal of a proposal. Now the chair can tell a recalcitrant proposer that their precious RFC will be withdrawn if they don't update it, or if they don't answer the objections that were raised, or if they don't do some research into feasible implementations. Very good. The proposal is forcibly withdrawn? So what? It is still on the web site. Larry will probably look at it anyway, whether or not it is labeled 'withdrawn'. So we ended up with 360 RFCs, some contradictory, some overlapping, just what you would expect to come out of a group of several hundred people who had their fingers stuck in their ears shouting LA LA LA I CAN'T HEAR YOU. Bottom Line I almost didn't write this article, for several reasons: I didn't have anything good to say about the process, and I didn't have much constructive advice to offer either. I was afraid that it wouldn't be useful without examples, but I didn't want to have to select examples because I didn't want people to feel like I was picking on them. However, my discussions with other people who had been involved in the process revealed that many other people had been troubled by the same problems that I had. They seemed to harbor the same doubts that I did about whether anything useful was being accomplished, and sometimes asked me what I thought. I always said (with considerable regret) that I did not think it was useful, but that Larry might yet prove me wrong and salvage something worthwhile from the whole mess. Larry's past track record at figuring out what Perl should be like has been fabulous, and I trust his judgment. If anyone is well-qualified to distill something of value from the 360 RFCs and ensuing discussion, it is Larry Wall. That is the other reason to skip writing the article: My feelings about the usefulness of the process are ultimately unimportant. If Larry feels that the exercise was worthwhile and produced useful material for him to sort through, then it was a success, no matter how annoying it was to me or anyone else. Nevertheless, we might one day do it all over again for Perl 7. I would like to think that if that day comes we would be able to serve Larry a little better than we did this time.
http://www.perl.com/pub/2000/10/
CC-MAIN-2015-48
refinedweb
3,697
58.82
The QDataSource class is an asynchronous producer of data. More... #include <qasyncio.h> Inherits QAsyncIO. Inherited by QIODeviceSource. List of all member functions. A data source is an object which provides data from some source in an asynchronous manner. This means that at some time not determined by the data source, blocks of data will be taken from it for processing. The data source is able to limit the maximum size of such blocks which it is currently able to provide. See also QAsyncIO, QDataSink, and QDataPump. For example, a network connection may choose to utilize a disk cache of input only if rewinding is enabled before the first buffer-full of data is discarded, returning FALSE in rewindable() if that first buffer is discarded. Reimplemented in QIODeviceSource. The data source should return a value indicating how much data it is ready to provide. This may be 0. If the data source knows it will never be able to provide any more data (until after a rewind()), it may return -1. Reimplemented in QIODeviceSource. Reimplemented in QIODeviceSource. The default returns FALSE. Reimplemented in QIODeviceSource. This function is called to extract data from the source, by sending it to the given data sink. The count will be no more than the amount indicated by the most recent call to readyToSend(). The source must use all the provided data, and the sink will be prepared to accept at least this much data. Reimplemented in QIODeviceSource. This file is part of the Qtopia platform, copyright © 1995-2005 Trolltech, all rights reserved.
http://doc.trolltech.com/qtopia2.2/html/qdatasource.html
crawl-001
refinedweb
258
68.16
It's almost always more convenient to work with a windows GUI other than have to use the DOS command prompt. It's easy to convert a DOS command line tool into a Windows like application in C# by simply adding some Windows Forms tools interacting with some code. Here I take the RegenResx.exe as an example. I built a simple windows interface so that I could easily convert old version resx files to be compatible with any version of the .NET framework. This command line tool is very useful when upgrading older version resource files. The tool itself can be found at RegenResx Conversion Tool, or Upgrading from the .NET Framework Beta Versions. I use the System.Diagnostics namespace and the Process.Start Method. This simple approach could similarly be applied to run other command line tools, DOS commands, and other windows applications. Some examples are also provided in the demo project. The demo project is a windows form application. A TabControl is put on a Windows form. Each of the three tabs demonstrates a simple situation, either to run a command line tool, a DOS command, or to run another windows application, such as launch Internet Explorer with given address. using System; using System.Drawing; using System.Collections; using System.ComponentModel; using System.Windows.Forms; using System.Data; regenresx "+textBox1.Text + " " +textBox2.Text; System.Diagnostics.Process.Start("CMD.exe",strCmdLine); process1.Close(); ... Take the command line tool RegenResx as an example, write the windows interface to convert old version resx files to be compatible with any version of the .NET framework. It's easy and fun to write simple useful application in C# using the powerful .NET Framework. General News Question Answer Joke Rant Admin
http://www.codeproject.com/KB/cs/wincmdline.aspx
crawl-002
refinedweb
288
51.75
LeakDB 0.1 LeakDB is a very simple and fast key value store for Python Why ? For the fun o/ Overview LeakDB is a very simple and fast key value store for Python. All data is stored in memory and the persistence is defined by the user. A max queue size can be defined for a auto-flush. API >>> from leakdb import PersistentQueueStorage >>> leak = PersistentQueueStorage(filename='/tmp/foobar.db') # set the value of a key >>> leak.set('bar', {'foo': 'bar'}) >>> leak.set('foo', 2, key_prefix='bar_') # increment a key >>> leak.incr(key='bar_foo', delta=5) 7 >>> leak.incr(key='foobar', initial_value=1000) 1000 # looks up multiple keys >>> leak.get_multi(keys=['bar', 'foobar']) {u'foobar': 1000, u'bar': {u'foo': u'bar'}} # ensure changes are sent to disk >>> print leak /tmp/foobar.db 12288 bytes :: 3 items in queue :: 3 items in storage memory >>> leak.flush(force=True) /tmp/foobar.db 12338 bytes :: 0 items in queue :: 3 items in storage memory >>> leak.close() STORAGE - DefaultStorage :: The default storage, all API operations are implemented set set_multi incr decr get_multi delete - QueueStorage :: Use the DefaultStorage with a queue. You can override the QueueStorage.worker_process method and make what you want when the flush method is called. from leakdb import QueueStorage class MyQueueStorage(QueueStorage): def worker_process(self, item): """ Default action execute by each worker. Must return a True statement to remove the item, otherwise the worker put the item into the queue. """ logger.info('process item :: {}'.format(item)) return True - PersistentStorage :: Use the DefaultStorage, otherwise each operation is stored through the shelve module. - PersistentQueueStorage :: Use the QueueStorage and the PersistentStorage. # see also the API part from leakdb import PersistentQueueStorage storage = PersistentQueueStorage(filename="/tmp/foobar.db", maxsize=1, workers=1) # the queue is auto-flush, each operations check the queue size storage.set('foo', 1) TODO - finish the transport layer through zeroMQ - cleanup the code - write the unittests - write a CLI - benchmark each storage - Downloads (All Versions): - 1 downloads in the last day - 55 downloads in the last week - 210 downloads in the last month - Author: Lujeni - Categories - Package Index Owner: lujeni - DOAP record: LeakDB-0.1.xml
https://pypi.python.org/pypi/LeakDB/0.1
CC-MAIN-2015-40
refinedweb
354
58.18
Introduction II. Logic III. Notation IV. Terminology V. Partitioning the Variance VI. The F Test VII. Formal Example 1. Research Question 2. Hypotheses 3. Assumptions 4. Decision Rules 5. Computation - [Minitab] 6. Decision VIII.Comparisons Among Means - [Minitab] [Spreadsheet] IX. Relation of F to t Homework I.an . Number Number Pairs Groups of Means 3 4 5 6 7 8 3 6 10 15 21 28 effect & error between groups estimate and the other to error within groups estimate an extended. As usual, the critical values are given by a table. Going into the table, one needs to know the degrees of freedom for both the between and within groups variance estimates, as well as the alpha level. For example, if we have 3 groups and 10 subjects in each, then: DfB = p - 1 DfW = p(n - 1) or with unequal N's: =31=2 = 3 * (10-1) = 27 DfT = N - 1 = 30 - 1 = 29 Note that the df add up to the total and with =.05, Fcrit= 3.35 In Symbols In Words HO 1= 2= HA 3. 3 The presence of others does not influence helping. The presence of others does influence helping. Not Ho Assumptions 1) The subjects are sampled randomly. 2) The groups are independent. 3) The population variances are homogenous. 4) The population distribution of the DV is normal in shape. 5) The null hypothesis. 4. Decision rules Given 3 groups with 4, 5, and 5 subjects, respectively, we have (3-1=) 2 df for the between groups variance estimate and (3+4+4=) 11 df for the within groups variance estimate. (Note that it is good to check that the df add up to the total.) Now with an level of .05, the table shows a critical value of F is 3.98. If Fobs Fcrit, reject Ho, otherwise do not reject Ho. 5. Computation - [Minitab] Here is the data (i.e., the number of seconds it took for folks to help): # people present 0 25 30 20 32 2 30 33 29 40 36 168 4 32 39 35 41 44 191 107 26.8 33.6 38.2 For the analysis, we will use a grid as usual for most of the calculations: 0 X2 2 X2 4 X2 25 30 20 625 900 400 30 29 900 841 32 1024 39 1521 35 1225 41 1681 44 1936 191 5 =466 =14 33 1089 40 1600 36 1296 107 4 168 5 32 1024 T N 26.8 33.6 38.2 2949 5726 7387 =16062 II 2862.25 5644.8 7296.2 =15803.25 III Now we need the grand totals and the three intermediate quantities: I. II. III. And: Thus: SSB = III I = 15803.25-15511.14 = 292.11 SSW = II III = 16062-15803.25 = 258.75 SST = II I = 16062-15511.14 = 550.86 Source Within Total SS df MS 23.520 F p Between 292.11 2 146.056 6.21 <.05 258.75 11 550.86 13 6. Decision Since Fobs (6.21) is > Fcrit (3.98), reject Ho and conclude that the more people present, the longer it takes to get help. Post hoc Have a significant overall (or omnibus) F & then w localize the effect. In this case, we might not even compute the omnibus F (this Are more commonly used than preplanned compa approach is somewhat analogous to a one-tailed test). In addition, there are "simple" (involving two means) and "complex" (involving more than two means) comparisons. With three groups (Groups 1, 2 & 3), the following 6 comparisons are possible. Simple Complex 1 vs. 2 (1 + 2) vs. 3 1 vs. 3 1 vs. (2 + 3) 2 vs. 3 (1 + 3) vs. 2 As the number of groups increases, so does the number of comparisons that are possible. Some of these can tell us about trend (a description of the form of the relationship between the IV and DV). The problem with post hoc tests is that the type I error rate increases the more comparisons we perform. This is a somewhat controversial area and there are a number of methods currently in use to deal with this problem. We will consider one of the more simple methods below. The protected t test - [Minitab] [Spreadsheet] It is performed only when the omnibus F is significant. This technique is protected because it requires the omnibus F to be significant (which tells us there is at least one comparison between means that is significant). So, in other words, it is protected because we are not just shooting in the dark. It uses a more stable estimate of the population variance than the t test (i.e., instead of (Where the df's are 1 for the numerator and dfw for the denominator.) So, for our example the critical value of F is 4.84 (from the table) and: Thus, the only comparison that is significant is that between the first and third groups. IX. Relation of F to t Since the F test is just an extension of the t test to more than two groups, they should be related and they are. With two groups, F For example, consider the critical values for df = (1, 15) with = .05: Fcrit (1, 15) = tcrit (15)2 Obtaining the values from the tables, we can see that this is true: 4.54 = 2.1312 Anova Formula See Also Send comments on this topic. Overview An Anova test is used to determine the existence, or absence of a statistically significant difference between the mean values of two or more groups of data. 1. 2. probability: the alpha value (probability). inputSeriesNames: the names of two or more input series. Each series must exist in the series collection at the time of the method call, and have the same number of data points. Statistic Return: An AnovaResult object, which has the following members: DegreeOfFreedomBetweenGroups DegreeOfFreedomTotal DegreeOfFreedomWithinGroups FCriticalValue Note Make sure that all data points have their XValue property set, and that their series' XValueIndexed property has been set to false.: 1. 2. 3. Each group from which a sample is taken is normal. Each group is randomly selected and independent. The variables from each group come from distribution with approximately equal standard deviation. Calculation 1. Calculate the sample average for each group: Example This example demonstrates how to perform an Anova Test, using Series1, Series2, and Series3 for the input series. The results are returned in an AnovaResult object. Visual Basic Copy Code Copy Code. Figure 1: Two Charts; One containing Series data (left), and the other containing the AnovaResult object (right). Visual Basic Copy Code Imports Dundas.Charting.WinControl ... ' The Anova Result Object is Created Dim result As AnovaResult =()) C# Copy Code using Dundas.Charting.WinControl; ... // The Anova Result Object is Created AnovaResult result =()); See Also Financial Formulas Formulas Overview Statistical Formulas Using Statistical Formulas Statistical Formula Listing
https://de.scribd.com/document/59523332/Analysis-of-Variance
CC-MAIN-2019-22
refinedweb
1,150
66.54
Many who want to use the Mail binding want to do so using their GMail account. This tutorial will walk you through how to configure the Mail binding to send mail using the GMail servers. 2-Factor Authentication If you have 2FA configured on your Google account you will need to generate an app password. If you do not have 2FA configured, go do it now. I have an Android phone so I like to use Phone. It used to be the case that you couldn’t have App Passwords (see below) when using a Security Key but it may be supported now. Personally I use Phone and an Authenticator app as a backup. App Password Because you have 2FA enabled, you need to generate an App Password to give openHAB access to your GMail. See for instructions. Select “Mail” for the app: Enter “openHAB” for the device and hit generate: You will get your app password in the yellow box: NOTE: I deleted this app password before posting this tutorial so it’s useless. This will be the only time you see the actual password. If you don’t copy the password now and enter into OH you will have to delete this one and generate a new one. So leave this page open until you are ready to enter the password into openHAB. Install the Binding and configure the Thing In PaperUI browse to Add-ons -> Bindings -> Mail and install it. Once installed you need to manually create an SMTP Thing. Click on Inbox and the blue + icon. Select “Mail Binding” from the list and then select “SMTP Server” from the next list. Click on “Show more” and you should see the following: Enter the following: - Name: GMail (or something else meaningful to you) - Thing ID: gmail (or something else meaningful to you) - Sender: your gmail address (it doesn’t seem to matter if you put something else in here) - Server Hostname: smtp.gmail.com - SMTP Server Security Protocol: STARTTLS - SMTP Server Username: your GMail address - SMTP Server Password: your App password generated above, don’t include the spaces - Server Port: 587 Click the blue check icon at the top to save the Thing. Sending Mail from a Rule Rule Actions in OH 2.x bindings need to be acquired and then called in a Rule. In Scripted Automation Python: from core.rules import rule from core.triggers import when @rule("Send alert") @when("Item Alert received command") def send_alert(event): (actions.get("mail", "mail:smtp:gmail") .sendMail(admin_email, "openHAB Alert", event.itemCommand.toString())) NOTE: admin_email is defined in configuration.py per Python helper libraries instructions. Rules DSL: rule "Send alert" when Item Alert received command then getActions("mail", "mail:smtp:gmail").sendMail(email, "openHAB Alert", receivedCommand.toString) end
https://community.openhab.org/t/how-to-configure-the-mail-binding-to-use-gmail-for-sending-email-from-openhab/98255/9
CC-MAIN-2022-27
refinedweb
461
73.27
SQL Server and ADO.NET. Note To use the .NET Framework Data Provider for SQL Server, an application must reference the System.Data.SqlClient namespace. In This Section SQL Server Security Provides an overview of SQL Server security features, and application scenarios for creating secure ADO.NET applications that target SQL Server. SQL Server Data Types and ADO.NET Describes how to work with SQL Server data types and how they interact with .NET Framework data types. SQL Server Binary and Large-Value Data Describes how to work with large value data in SQL Server. SQL Server Data Operations in ADO.NET Describes how to work with data in SQL Server. Contains sections about bulk copy operations, MARS, asynchronous operations, and table-valued parameters. SQL Server Features and ADO.NET Describes SQL Server features that are useful for ADO.NET application developers. LINQ to SQL Describes the basic building blocks, processes, and techniques required for creating LINQ to SQL applications. For complete documentation of the SQL Server Database Engine, see SQL Server Books Online for the version of SQL Server you are using.
https://docs.microsoft.com/ar-sa/dotnet/framework/data/adonet/sql/
CC-MAIN-2020-10
refinedweb
183
52.66
Character class [\W_] clarification Discussion in 'Perl Misc' started by Fiaz Idris, Dec 10, 2003. Want to reply to this thread or ask your own question?It takes just 2 minutes to sign up (and it's free!). Just click the sign up button to choose a username and then you can ask your own questions on the forum. - Similar Threads clarification on Shared class constructorPaul Wu, May 2, 2005, in forum: ASP .Net - Replies: - 2 - Views: - 421 - Paul Wu - May 5, 2005 class definition - namespace nomenclature clarificationKarthik Kumar, Sep 15, 2004, in forum: C++ - Replies: - 2 - Views: - 4,004 - Howard - Sep 15, 2004 class design - clarificationSpitfire, Feb 16, 2007, in forum: Java - Replies: - 1 - Views: - 309 - Chris Smith - Feb 16, 2007 clarification on character handlingaegis, Aug 8, 2005, in forum: C Programming - Replies: - 21 - Views: - 792 - Tim Rentsch - Aug 18, 2005 Regex: Any character in character classSebastian, Jan 30, 2013, in forum: Java - Replies: - 17 - Views: - 366 - Gene Wirchenko - Feb 4, 2013
http://www.thecodingforums.com/threads/character-class-w_-clarification.884108/
CC-MAIN-2014-49
refinedweb
163
61.29
Wanted to ask if anyone else has experienced this, or could provide any insight or ideas. I'd used PythonWin for scripting with arcpy in the past, and it seemed quick and light, no real issues. On and off over the last year or so I've experimented with other more modern IDEs / editors with more bells and whistles, but I've noticed that my scripts take longer to run in every other program... In PythonWin, usually the first run of a .py file takes a few extra seconds, I'm guessing when importing arcpy - but subsequent runs are usually very quick. In the other programs I've tried to-date, each run takes nearly as long as the first, usually adding anywhere from ~10 to 20sec when compared to PythonWin. For example, in PythonWin, a first run might take ~12sec to run, but subsequent runs are under a second. In the others, time elapsed would always be close to the first run time taken. I've observed this behavior in Visual Studio Code, Sublime Text, PyCharm (and I believe Spyder a while back too). I really like at least some features available in each of the programs listed, and the extra time is not a deal-breaker, just bothersome. Would this have something to do with how the programs themselves run, or is something else more likely, such as configuration, or install locations, etc.? I do see that PythonWin is installed in C:\Python27\ArcGIS10.6\Lib\site-packages, so maybe that is part of the answer... PythonWin running 2.7, other programs 2.7 or 3.6 (same running time issue observed in 3.6) Thanks Paul Does your script import packages/modules? Spyder, for instance, has an option to force the reloading of user modules at runtime. This can be disabled if you want and you can force reload of modules in python 3. Some IDE's, like pythonwin don't, forcing you to use "reload" from importlib. (pythonwin running python 3.6.8 from the ArcGIS Pro distribution). Also some can't be 'reloaded' like arcpy, but you can access arcpy functionality by circumventing that constraint and just importing the modules you want from it like gp or env or arcgisscripting. Just examine the __init__.py script in C:\Your_Install_Path\Resources\ArcPy\arcpy for your ArcGIS Pro installation to get an idea what "import arcpy" actually does and why it fails on reload. In any event, in order to compare the run times between IDE's you have to make sure they are being compared on equal footing, and I am pretty sure you won't find a difference beyond what it takes to get beyond the imports and get python running. If this related to importing arcpy, you should really invest some time to examine what happens, so you can skip importing unneed clutter in your namespace and perhaps cut down on import times and permit reloads if desired. One short missive on the topic Import arcpy.... what happens
https://community.esri.com/message/897514-re-time-to-run-scripts-in-various-ides-editors?commentID=897514
CC-MAIN-2020-16
refinedweb
504
71.85
Some Graphics In object-oriented programming, we group together data and operations on that data into objects. First we create the objects - lists, windows, circles, etc., and then we use them. Objects contain data, and they know how to do things (perform operations). Example: Circle Object Data: My center is (20, 20), my radius is 10, my color is blue Operations: draw myself on a graphics window, move myself to a different location on the window, set the color of my interior or outline We haven't used any graphics objects before. We will use John Zelle's graphics module to simplify the creation and use of objects such as circles, rectangles, lines, and graphic windows. To use this module, save the graphics.py file in the same directory as your python program. (More information in the project 8 specification). Example: # import all the functions from graphics from graphics import * # create a graphics window win = GraphWin("My First Graphics Window Title", 300, 300) # window is 300x300 - top left corner is (0, 0) and # bottom right corner is (300, 300) # create a Circle object with center coordinates (150, 150) and radius 40 cir = Circle(Point(150, 150), 40) # set color of circle's interior to green cir.setFill('green') # draw the circle on the window cir.draw(win) # close window when mouse is clicked on it win.getMouse() Output: Windows We create a graphic window using a GraphWin() object. We can then draw graphical images on the window object. A program can create multiple graphic windows. Operations for GraphWin() objects: GraphWin(title, width, height) - constructs a new graphics window on the screen. All parameters are optional - the default size is 200x200. setBackground(color) - set the window's background to the specified color (which is a string) - some options are 'red', 'cyan', 'green', 'blue', 'purple', 'yellow' getMouse() - pause for a mouse click in the graphics window, and return the Point at which the mouse was clicked. We will use a call to this method to allow us to close the graphics window by clicking anywhere on the window. Example: creating a graphics window from graphics import * win = GraphWin() win2 = GraphWin("Second window", 300, 300) Methods for the drawable objects Point, Line, Circle, Oval, Rectangle setFill(color) - set the interior color of the object to the specified color setOutline(color) - set the outline of the object to the specified color setWidth(pixels) - set the width of the object's outline to the specified number of pixels draw(graphicWindow) - draw the object on the specified graphics window undraw() - undraws the objects move(dx, dy) - moves the object dx units in the horizontal direction and dy units in the vertical direction Point Objects Points are often used to define the position of other objects, but can also be drawn on the window. Example: pt = Point(15, 55) # x coord is 15, y coord is 55 pt.setOutline('purple') # set outline color of point to purple pt.draw(win) # draw the point on the window win Operations for Point objects: Point(x, y) - construct a Point with the specified coordinates x and y getX() - returns the x coordinate of the point getY() - returns the y coordinate of the point Circle Objects A Circle is defined by its center coordinates (given as a Point) and its radius. Example: cir = Circle(Point(50, 100), 25) # center = (50, 100), radius = 25 cir.setFill('yellow') # set circle's interior to yello cir.draw(win) # draw the circle on window Operations for Circle objects: Circle(centerPoint, radius) - constructs a Circle with specified center and radius getRadius() - returns the radius of the circle Line Objects Example: win = GraphWin("Another window", 200, 200) diag = Line(Point(0, 0), Point(200, 200)) # line from top left to bottom right of window diag.setWidth(15) # make the line wider diag.setOutline('blue') # make the line blue diag.draw(win) # draw line on window Operations on Line objects: Line(point1, point2) - construct a line from point1 to point2 setArrow(string) - set the arrow status of the line. Arrows can occur at the first point, the end point, or both - possible values of the argument are 'first', 'last', 'both', or 'none'. The default is 'none'. Rectangle Objects Example: win = GraphWin() rec1 = Rectangle(Point(3, 4), Point(8, 10) # opposite corners at specified points rec1.draw(win) Operations on Rectangle objects: Rectangle(point1, point2) - constructs a Rectangle with opposite corners point1 and point2 Exercise: Draw the following shapes on a window: d Exercise: Produce the following window:
http://www.cs.utexas.edu/~eberlein/cs303e/Graphics.html
CC-MAIN-2016-36
refinedweb
749
50.16
SYNOPSISpackage require Tcl package require irc ?0.6.1? ::irc::config ?key? ?value? ::irc::connection ::irc::connections net registerevent event script net getevent event script net eventexists event script net connect hostname ?port? net config ?key? ?value? net log level message net logname net connected net sockname net peername net socket net user username localhostname localdomainname userinfo net nick nick net ping target net serverping net join channel ?key? net part channel ?message? net quit ?message? net privmsg target message net notice target message net ctcp target message net kick channel target ?message? net mode target args net topic channel message net invite channel target net send text net destroy who ?address? action target additional header msg DESCRIPTIONThis package provides low-level commands to deal with the IRC protocol (Internet Relay Chat) for immediate and interactive multi-cast communication. - ::irc::config ?key? ?value? - Sets configuration ?key? to ?value?. The configuration keys currently defined are the boolean flags logger and debug. logger makes irc use the logger package for printing error. debug requires logger and prints extra debug output. If no ?key? or ?value? is given the current values are returned. - ::irc::connection - The command creates a new object to deal with an IRC connection. Creating this IRC object does not automatically create the network connection. It returns a new irc namespace command which can be used to interact with the new IRC connection. NOTE: the old form of the connection command, which took a hostname and port as arguments, is deprecated. Use connect instead to specify this information. - ::irc::connections - Returns a list of all the current connections that were created with connection PER-CONNECTION COMMANDS In the following list of available connection methods net represents a connection command as returned by ::irc::connection. - net registerevent event script - Registers a callback handler for the specific event. Events available are those described in RFC 1459. In addition, there are several other events defined. defaultcmd adds a command that is called if no other callback is present. EOF is called if the connection signals an End of File condition. The events defaultcmd, defaultnumeric, defaultevent, and EOF are required. script is executed in the connection namespace, which can take advantage of several commands (see Callback Commands below) to aid in the parsing of data. - net getevent event script - Returns the current handler for the event if one exists. Otherwise an empty string is returned. - net eventexists event script - Returns a boolean value indicating the existence of the event handler. - net connect hostname ?port? - This causes the socket to be established. ::irc::connection created the namespace and the commands to be used, but did not actually open the socket. This is done here. NOTE: the older form of 'connect' did not require the user to specify a hostname and port, which were specified with 'connection'. That form is deprecated. - net config ?key? ?value? - The same as ::irc::config but sets and gets options for the net connection only. - net log level message - If logger is turned on by config this will write a log message at level. - net logname - Returns the name of the logger instance if logger is turned on. - net connected - Returns a boolean value indicating if this connection is connected to a server. - net sockname - Returns a 3 element list consisting of the ip address, the hostname, and the port of the local end of the connection, if currently connected. - net peername - Returns a 3 element list consisting of the ip address, the hostname, and the port of the remote end of the connection, if currently connected. - net socket - Return the Tcl channel for the socket used by the connection. - net user username localhostname localdomainname userinfo - Sends USER command to server. username is the username you want to appear. localhostname is the host portion of your hostname, localdomainname is your domain name, and userinfo is a short description of who you are. The 2nd and 3rd arguments are normally ignored by the IRC server. - net nick nick - NICK command. nick is the nickname you wish to use for the particular connection. - net ping target - Send a CTCP PING to target. - net serverping - PING the server. - net join channel ?key? - channel is the IRC channel to join. IRC channels typically begin with a hashmark ("#") or ampersand ("&"). - net part channel ?message? - Makes the client leave channel. Some networks may support the optional argument message - net quit ?message? - Instructs the IRC server to close the current connection. The package will use a generic default if no message was specified. - net privmsg target message - Sends message to target, which can be either a channel, or another user, in which case their nick is used. - net notice target message - Sends a notice with message message to target, which can be either a channel, or another user, in which case their nick is used. - net ctcp target message - Sends a CTCP of type message to target - net kick channel target ?message? - Kicks the user target from the channel channel with a message. The latter can be left out. - net mode target args - Sets the mode args on the target target. target may be a channel, a channel user, or yourself. - net topic channel message - Sets the topic on channel to message specifying an empty string will remove the topic. - net invite channel target - Invites target to join the channel channel - net send text - Sends text to the IRC server. - net destroy - Deletes the connection and its associated namespace and information. CALLBACK COMMANDS These commands can be used within callbacks - who ?address? - Returns the nick of the user who performed a command. The optional keyword address causes the command to return the user in the format "[email protected]". - action - Returns the action performed, such as KICK, PRIVMSG, MODE, etc... Normally not useful, as callbacks are bound to a particular event. - target - Returns the target of a particular command, such as the channel or user to whom a PRIVMSG is sent. - additional - Returns a list of any additional arguments after the target. - header - Returns the entire event header (everything up to the :) as a proper list. - msg - Returns the message portion of the command (the part after the :).
https://manpages.org/irc/3
CC-MAIN-2022-33
refinedweb
1,029
57.77
This document covers what functionality and API Prometheus client libraries should offer, with the aim of consistency across libraries, making the easy use cases easy and avoiding offering functionality that may lead users down the wrong path. There are 10 languages already supported at the time of writing, so we’ve gotten a good sense by now of how to write a client. These guidelines aim to help authors of new client libraries produce good libraries. MUST/MUST NOT/SHOULD/SHOULD NOT/MAY have the meanings given in In addition ENCOURAGED means that a feature is desirable for a library to have, but it’s okay if it’s not present. In other words, a nice to have. Things to keep in mind: Take advantage of each language’s features. The common use cases should be easy. The correct way to do something should be the easy way. More complex use cases should be possible. The common use cases are (in order): Counters without labels spread liberally around libraries/applications. Timing functions/blocks of code in Summaries/Histograms. Gauges to track current states of things (and their limits). Monitoring of batch jobs. Clients MUST be written to be callback based internally. Clients SHOULD generally follow the structure described here. The key class is the Collector. This has a method (typically called ‘collect’) that returns zero or more metrics and their samples. Collectors get registered with a CollectorRegistry. Data is exposed by passing a CollectorRegistry to a class/method/function "bridge", which returns the metrics in a format Prometheus supports. Every time the CollectorRegistry is scraped it must callback to each of the Collectors’ collect method. The interface most users interact with are the Counter, Gauge, Summary, and Histogram Collectors. These represent a single metric, and should cover the vast majority of use cases where a user is instrumenting their own code. More advanced uses cases (such as proxying from another monitoring/instrumentation system) require writing a custom Collector. Someone may also want to write a "bridge" that takes a CollectorRegistry and produces data in a format a different monitoring/instrumentation system understands, allowing users to only have to think about one instrumentation system. CollectorRegistry SHOULD offer unregister() functions, and a Collector SHOULD be allowed to be registered to multiple CollectorRegistrys. Client libraries MUST be thread safe. For non-OO languages such as C, client libraries should follow the spirit of this structure as much as is practical. Client libraries SHOULD follow function/method/class names mentioned in this document, keeping in mind the naming conventions of the language they’re working in. For example, set_to_current_time() is good for a method name Python, but SetToCurrentTime() is better in Go and setToCurrentTime() is the convention in Java. Where names differ for technical reasons (e.g. not allowing function overloading), documentation/help strings SHOULD point users towards the other names. Libraries MUST NOT offer functions/methods/classes with the same or similar names to ones given here, but with different semantics. The Counter, Gauge, Summary and Histogram metric types are the primary interface by users. Counter and Gauge MUST be part of the client library. At least one of Summary and Histogram MUST be offered. These should be primarily used as file-static variables, that is, global variables defined in the same file as the code they’re instrumenting. The client library SHOULD enable this." - which rarely tends to go well). There MUST be a default CollectorRegistry, the standard metrics MUST by default implicitly register into it with no special work required by the user. There MUST be a way to have metrics not register to the default CollectorRegistry, for use in batch jobs and unittests. Custom collectors SHOULD also follow this. Exactly how the metrics should be created varies by language. For some (Java, Go) a builder approach is best, whereas for others (Python) function arguments are rich enough to do it in one call. For example in the Java Simpleclient we have: class YourClass { static final Counter requests = Counter.build() .name("requests_total") .help("Requests.").register(); } This will register requests with the default CollectorRegistry. By calling build() rather than register() the metric won’t be registered (handy for unittests), you can also pass in a CollectorRegistry to register() (handy for batch jobs). Counter is a monotonically increasing counter. It MUST NOT allow the value to decrease, however it MAY be reset to 0 (such as by server restart). A counter MUST have the following methods: inc(): Increment the counter by 1 inc(double v): Increment the counter by the given amount. MUST check that v >= 0. A counter is ENCOURAGED to have: A way to count exceptions throw/raised in a given piece of code, and optionally only certain types of exceptions. This is count_exceptions in Python. Counters MUST start at 0. Gauge represents a value that can go up and down. A gauge MUST have the following methods: inc(): Increment the gauge by 1 inc(double v): Increment the gauge by the given amount dec(): Decrement the gauge by 1 dec(double v): Decrement the gauge by the given amount set(double v): Set the gauge to the given value Gauges MUST start at 0, you MAY offer a way for a given gauge to start at a different number. A gauge SHOULD have the following methods: set_to_current_time(): Set the gauge to the current unixtime in seconds. A gauge is ENCOURAGED to have: A way to track in-progress requests in some piece of code/function. This is track_inprogress in Python. A way to time a piece of code and set the gauge to its duration in seconds. This is useful for batch jobs. This is startTimer/setDuration in Java and the time() decorator/context manager in Python. This SHOULD match the pattern in Summary/Histogram (though set() rather than observe()). A summary samples observations (usually things like request durations) over sliding windows of time and provides instantaneous insight into their distributions, frequencies, and sums. A summary MUST NOT allow the user to set "quantile" as a label name, as this is used internally to designate summary quantiles. A summary is ENCOURAGED to offer quantiles as exports, though these can’t be aggregated and tend to be slow. A summary MUST allow not having quantiles, as just _count/ _sum is quite useful and this MUST be the default. A summary MUST have the following methods: observe(double v): Observe the given amount A summary/Histogram. Summary _count/ _sum MUST start at 0. Histograms allow aggregatable distributions of events, such as request latencies. This is at its core a counter per bucket. A histogram MUST NOT allow le as a user-set label, as le is used internally to designate buckets. A histogram MUST offer a way to manually choose the buckets. Ways to set buckets in a linear(start, width, count) and exponential(start, factor, count) fashion SHOULD be offered. Count MUST exclude the +Inf bucket. A histogram SHOULD have the same default buckets as other client libraries. Buckets MUST NOT be changeable once the metric is created. A histogram MUST have the following methods: observe(double v): Observe the given amount A histogram/Summary. Histogram _count/ _sum and the buckets MUST start at 0. Further metrics considerations Providing additional functionality in metrics beyond what’s documented above as makes sense for a given language is ENCOURAGED. If there’s a common use case you can make simpler then go for it, as long as it won’t encourage undesirable behaviours (such as suboptimal metric/label layouts, or doing computation in the client). Labels are one of the most powerful aspects of Prometheus, but easily abused. Accordingly client libraries must be very careful in how labels are offered to users. Client libraries MUST NOT under any circumstances allow users to have different label names for the same metric for Gauge/Counter/Summary/Histogram or any other Collector offered by the library. Metrics from custom collectors should almost always have consistent label names. As there are still rare but valid use cases where this is not the case, client libraries should not verify this. While labels are powerful, the majority of metrics will not have labels. Accordingly the API should allow for labels but not dominate it. A client library MUST allow for optionally specifying a list of label names at Gauge/Counter/Summary/Histogram creation time. A client library SHOULD support any number of label names. A client library MUST validate that label names meet the documented requirements. The general way to provide access to labeled dimension of a metric is via a labels() method that takes either a list of the label values or a map from label name to label value and returns a "Child". The usual .inc()/ .dec()/ .observe() etc. methods can then be called on the Child. The Child returned by labels() SHOULD be cacheable by the user, to avoid having to look it up again - this matters in latency-critical code. Metrics with labels SHOULD support a remove() method with the same signature as labels() that will remove a Child from the metric no longer exporting it, and a clear() method that removes all Children from the metric. These invalidate caching of Children. There SHOULD be a way to initialize a given Child with the default value, usually just calling labels(). Metrics without labels MUST always be initialized to avoid problems with missing metrics. Metric names must follow the specification. As with label names, this MUST be met for uses of Gauge/Counter/Summary/Histogram and in any other Collector offered with the library. Many client libraries offer setting the name in three parts: namespace_subsystem_name of which only the name is mandatory. Dynamic/generated metric names or subparts of metric names MUST be discouraged, except when a custom Collector is proxying from other instrumentation/monitoring systems. Generated/dynamic metric names are a sign that you should be using labels instead. Gauge/Counter/Summary/Histogram MUST require metric descriptions/help to be provided. Any custom Collectors provided with the client libraries MUST have descriptions/help on their metrics. It is suggested to make it a mandatory argument, but not to check that it’s of a certain length as if someone really doesn’t want to write docs we’re not going to convince them otherwise. Collectors offered with the library (and indeed everywhere we can within the ecosystem) SHOULD have good metric descriptions, to lead by example. Clients MUST implement the text-based exposition format outlined in the exposition formats documentation. Reproducible order of the exposed metrics is ENCOURAGED (especially for human readable formats) if it can be implemented without a significant resource cost. Client libraries SHOULD offer what they can of the Standard exports, documented below. These SHOULD be implemented as custom Collectors, and registered by default on the default CollectorRegistry. There SHOULD be a way to disable these, as there are some very niche use cases where they get in the way. These metrics have the prefix process_. If obtaining a necessary value is problematic or even impossible with the used language or runtime, client libraries SHOULD prefer leaving out the corresponding metric over exporting bogus, inaccurate, or special values (like NaN). All memory values in bytes, all times in unixtime/seconds. In addition, client libraries are ENCOURAGED to also offer whatever makes sense in terms of metrics for their language’s runtime (e.g. garbage collection stats), with an appropriate prefix such as go_, hotspot_ etc. Client libraries SHOULD have unit tests covering the core instrumentation library and exposition. Client libraries are ENCOURAGED to offer ways that make it easy for users to unit-test their use of the instrumentation code. For example, the CollectorRegistry.get_sample_value in Python. Ideally, a client library can be included in any application to add some instrumentation without breaking the application. Accordingly, caution is advised when adding dependencies to the client library. For example, if you add a library that uses a Prometheus client that requires version x.y of a library but the application uses x.z elsewhere, will that have an adverse impact on the application? It is suggested that where this may arise, that the core instrumentation is separated from the bridges/exposition of metrics in a given format. For example, the Java simpleclient simpleclient module has no dependencies, and the simpleclient_servlet has the HTTP bits. As client libraries must be thread-safe, some form of concurrency control is required and consideration must be given to performance on multi-core machines and applications. In our experience the least performant is mutexes. Processor atomic instructions tend to be in the middle, and generally acceptable. Approaches that avoid different CPUs mutating the same bit of RAM work best, such as the DoubleAdder in Java’s simpleclient. There is a memory cost though. As noted above, the result of labels() should be cacheable. The concurrent maps that tend to back metric with labels tend to be relatively slow. Special-casing metrics without labels to avoid labels()-like lookups can help a lot. Metrics SHOULD avoid blocking when they are being incremented/decremented/set etc. as it’s undesirable for the whole application to be held up while a scrape is ongoing. Having benchmarks of the main instrumentation operations, including labels, is ENCOURAGED. Resource consumption, particularly RAM, should be kept in mind when performing exposition. Consider reducing the memory footprint by streaming results, and potentially having a limit on the number of concurrent scrapes. This documentation is open-source. Please help improve it by filing issues or pull requests.
https://prometheus.io/docs/instrumenting/writing_clientlibs/
CC-MAIN-2019-43
refinedweb
2,260
55.95
> > # if > 32 || <=== flagged error here > This is valid ANSI C and ISO C 99 code. When you don't pass "-w2" to your > compiler, does it show an error or a warning about this? Omitting -w2 doesn't change anything. (Not surprisingly, since -w2 doesn't soften the effect of true errors but only the effect of warnings.) I agree with you that a strictly conforming compiler would have to close its eyes on the murky "> 32 ||" in the context of the preceeding "#if 0". None of the IDO cc's option to switch to other preprocessor variants ("-acpp", "-oldcpp") will improve the situation. I will invest some time to resolve this specific issue cleanly in some way. > >. No, I still get this error. The responsible stdint_.h code is: #if address@hidden@ -==> #if !0 in stdint.h typedef signed char int8_t; #endif I'll try to have a look at the details. I'd would help me to know what gnulib's approach is or, more important perhaps, how gnulib is used for the CVS project. If somebody could check the following list, that might get me oriented more quickly: [ ] (a) CVS uses gnulib to built upon the C89 standard on any platform. [ ] (b) CVS uses gnulib to built upon the C99 standard ... [ ] (c) CVS uses gnulib to built upon the first POSIX standard. [ ] (d) CVS uses gnulib to built upon the latest POSIX/SUSv3 standard. [ ] (e) CVS ... Martin
http://lists.gnu.org/archive/html/bug-gnulib/2006-06/msg00233.html
CC-MAIN-2015-35
refinedweb
240
74.69
Hello, Since I am no guru, I am having a hard time placing this question. I don’t know if it is an Adruino IDE, ATOM, Platformio IDE, or windows 10 problem. Thus being said, if this is not the right area, I ask the Arduino guru’s to take pitty on me and help me please. When trying to compile, even the original unmodified ATOM platformio IDE build that successfully compiles in that program, even before modifying any parameters, if I try to compile in Arduino 1.8.10, I receive the following error. I did notice the error (in .png form attached to this post) in ATOM platfomrio IDE I uninstalled Arduino and reinstalled, I also re-uploaded a new project to ATOM and started over from scratch, but I encountered the exact same problem with the second attempt. I am definitely not a C++, g++, python guru ! please keep it really simple. I thank you in advance. Arduino: 1.8.10 (Windows 10), Board: “Arduino/Genuino Uno” sketch\src\HAL\HAL_AVR\u8g_com_HAL_AVR_sw_spi.cpp:65:10: fatal error: U8glib.h: No such file or directory #include <U8glib.h> ^~~~~~~~~~ compilation terminated. exit status 1 Error compiling for board Arduino/Genuino Uno. This report would have more information with “Show verbose output during compilation” option enabled in File → Preferences.
https://forum.arduino.cc/t/error-when-compiling-u8glib-no-such-directory/621593
CC-MAIN-2021-43
refinedweb
219
58.69
Are you sure? This action might not be possible to undo. Are you sure you want to continue? As a middle school teacher, author Bart King listened carefully to the wisdom of his girl students..! Bibliography Hello! Take your time with this book. It may be the best book you’ll ever read. (Of course, the odds against that are pretty high, but you never know!) We hope you enjoy it. Now, let’s skip the other boring intro below and get started! First things first, second things never. The collaborators on this book include dozens of girls, young women, teachers, and mothers. [¹] It’s our hope that a preteen, ’tween, or teen girl can find some good laughs, empowerment, and maybe even inspiration in the following pages. We’ve kept a light tone throughout most of the book, partly because that’s more fun but also because there is already plenty of high drama in the literature for and about girls in this age group. Although the voyage from girlhood to the teen years can be a tough one, we strongly believe that being a girl today is exciting and enjoyable. And while this book includes many nontraditional female activities, it also features (for lack of a better term) plenty of classic girl stuff. Why did we do this? Because girls asked us to! Although ostensibly written for eight- to fourteen-year-old girls, The Big Book of Girl Stuff also may appeal to immature adults. And, as a special bonus, we have included a few deliberate and outrageous mistakes in this book to keep you, the adult, vigilant in your reading. If you can find them, write to us (care of the publisher), and you may be eligible for an Amazing Reward. [²] Finally, there are several gratuitous references to loofahs in this book. We just think it’s a funny word. [¹] For the full list of contributors, please see the Acknowledgments. [²] We assume that you will find the satisfaction in being a perfectionist to be Amazingly Rewarding. We spend the first 12 months of our children’s lives teaching them to walk and talk, and the next 12 months telling them to sit down and be quiet. Babysitting is fantastic! You get paid to hang out with younger kids, and if you do a good job, you’ll be their hero. You get paid Plus, almost every culture in the world trusts girl babysitters more than boy babysitters. That’s because girls are responsible (of course!), and they know the secrets of being a good babysitter. These secrets have been recorded in The Ancient Book of Babysitting Wisdom [³] as follows: You really should take a babysitting or first-aid class before you accept responsibility for other people’s kids. (The Red Cross or your local parks department probably offers these in your area.) Besides, you get a cool-looking official card! Do not call the children you babysit any of the following names: orcs rug rats house-apes hobgoblins ankle-biters munchkins droolers li’l monsters smurfs Pay attention to the kids you’re babysitting and play with them. (Stay off the phone!) Even if the parents have weird rules, like Junior should wear his safety helmet and body armor if he goes outside, it’s best to follow them. This keeps the kids in their routine. Once you break one rule, they’ll want to break all the other ones. Clean up any messes that get made. (Better yet, have the kids clean up!) And if you really want to impress the parents, straighten up messes that were there before you arrived. Don’t invite any friends (especially boys) over! That is a big no-no. When the parents come home, tell them some funny stories about what the kids did while they were gone. Parents love to hear stuff like this. Experienced babysitters know that all children are legally required to say, "But my mom always lets me do [fill in the blank] when she’s here." This is often a lie. What the child is asking to do might be really kooky, like feeding a goldfish to his little sister or washing his hands with soap and water. Don’t let yourself be tricked! Instead, use the amazing comeback in the following illustration. It is so clever, no child has ever come up with a response to it. Ta-dah! Problem solved. So you’ve made a babysitting connection. Yes! You have a job! Then comes the hard part. The parents ask how much money you charge to babysit. Uh-oh! How much money should you charge for babysitting? You can always babysit for free, but don’t get into the habit of it, because people will take advantage of your good nature. We think you should get paid at least $10 an hour for taking care of one child. Add another kid, add another $2 to that rate. At the very least, you should get the federal minimum wage, which was $7.15 an hour in 2014. And did you know that states have their own minimum wages? Over 40 states have minimum wages greater than (or equal to) the federal minimum. Sweet! But not all things are equal. Are you a beginning babysitter who’s taking care of one child? Or are you an experienced, in-demand, Red Cross–certified babysitter in charge of four kids? We can agree that these two girls shouldn’t make the same! After all, that second girl might be making over $22 an hour. (And she’s worth a lot more than that!) You can always ask your friends what they charge and use that for a comparison. If you’re still really not sure, ask the parents what they think is fair. Try to be honest if it’s not what you had in mind. A mother’s helper is a girl who helps take care of children while the mother is actually there. Because this isn’t as much responsibility as babysitting, she’s usually paid from 60 to 75 percent of what a babysitter makes. Okay, to earn that money, you’ll end up having to do this sooner or later, so let’s get to the ugly truth of: If baby went poopy, remember to breathe through your mouth. You will need: A strong stomach, a baby, a dirty diaper, a clean diaper, a warm, wet washcloth or wipes, and a changing area covered with a towel. Optional: A toy (to distract the baby with), diaper rash ointment, an assistant. First of all, when changing a boy’s diapers, know that the boy may decide to pee as soon as you get his diapers off, and the pee will go UP . . . toward you! We suggest putting a clean diaper over his private parts as soon as you get his diaper off. (That, or just steer clear of boy babies.) It’s a good idea to get your supplies ready before laying the baby down for a change. That’s because as soon as you try to change a baby’s diapers, she will usually fight against it. We don’t know if the smell of their own poop makes them mad or if they think the diapers you picked out for them are ugly and unfashionable, but babies never make this process easy. Make sure the baby can’t escape your changing area. You don’t want to have to explain to the parents why their new couch got poopified. After getting your supplies ready and washing your hands, put the baby on her back. Open up the diaper. Breathe through your mouth and sing to the baby as you remove the toxic waste. Give her a toy to distract her. With one hand, hold the baby’s ankles gently together and lift up. With the other hand, pull out the dirty diaper. Once the dirty diaper is off, move it off to the side (and don’t step in it later!). Take your baby wipes and start wiping off little baby’s hindquarters. Wipe from back to front with boys, and from front to back with girls. Then stick the dirty wipes into the poopy diaper and wrap the whole mess up tightly. (This is what adults call hazardous material.) Whew! Now baby’s clean. As far as getting a clean diaper on, just make sure the front is laid down on the towel closest to you. Raise the baby’s legs like before, slide the clean diaper under her, rest her down on the diaper, and lift it up between her legs. Fasten the diaper tightly, and you’re done. If the baby’s wearing cloth diapers, make sure to get them on snug and pull those little plastic pants over them. You don’t want anything sneaking out of the diapers and going down Junior’s leg! The baby will show her appreciation for your hard work by going poop right away. This is a baby’s way of saying, I recognize your talent. Another demonstration, please! Do Babies Need Diapers? In China, India, and over 70 other countries, most kids are diaper-free. They either go bottomless or have split-tail pants. Their parents are good at spotting when the little one has to go and then just holds the baby over a toilet or other good spot. No more diaper rash or yucky stuff sticking to the kid’s bottom! And no more landfills of baby diapers—those are really gross and a huge source of pollution. A good trick is to have little prizes or treats with you when you babysit. Depending on the age of the kids, these can come in pretty handy. For example, if you had some M&M’s, Life Savers, or Tic Tacs, you could put them in a container or small bag and call them Proton Pills. If the kids do something good, like pick up their toys or foil a bank robbery, give them a Proton Pill! (Naturally, these give them superpowers.) Or you can use the treats as Sweet Dream candies for when it’s bedtime (but before the kids brush their teeth!). But first be sure it’s okay with their parents to give them a little candy. Silly tricks like these are priceless in babysitting. Let’s say the kids don’t want to go to bed. You could threaten them with a wet noodle until they flee in terror, but why not make it a game? Turn going to bed into a bus ride. You’re the school bus driver, and you pull up in the bus. All aboard! The kids need to line up behind you as the passengers. Then you start driving (with the kids following behind you). Stop anywhere you need to for them to get ready for bed. Once they’re in their pajamas and have brushed their teeth, drive them right to their bed and tuck them in. Cool Word! pajamafy (pa-JAM-uh-fie): The act of getting the kids into their pajamas. It can make going to bed a fun adventure. Babysitter: Who wants to be pajamafied? Little Timmy: I do! It’s pajamafication time! But no matter how skilled a babysitter you are, the time will come when you can’t keep the kids from screaming and running around. Since parents don’t want their children to disappear or be hauled away in an ambulance before they return ( picky, picky, picky !), here’s some important safety advice. DO: Let the baby sit in your lap. DON’T: Sit on the baby. DO: Make sure the child eats his dinner and goes to bed. DON’T: Eat his dinner and go to bed. DO: Let the kids play in the sandbox. DON’T: Let the kids play in the quicksand box. One of the best ways to keep the kids from going nuts is to have some good tricks up your sleeve. The following tricks and activities in this chapter will work with any person, kids included. We have listed them from the easiest to the most challenging. If the kid you’re babysitting is really young, all of these will be a little too advanced. (But this would be a boring chapter if all we had were tips on baby talk and snuggling!) The best trick a babysitter can bring is patience and a smile. Try this trick yourself once, to see how simple it is. Ask Junior to clench his hands together so that his fingers are all interlocking. Have him squeeze tightly for about 20 seconds in a good kung fu grip. Then, while his hands are still gripping each other, have him stick his forefingers (the ones closest to the thumbs) straight out so that they don’t touch. Quickly wave your hands over his hands and say a magic word, such as googly-moogly. The fingers will magically start moving toward each other! (They would have even if you hadn’t said the magic word, but still.) The first sounds out of a little baby’s mouth (besides crying) are usually long vowel sounds that are easy to make, " Aaaa! being the most frequent, followed by Ooooo! Some of the easiest consonants to put in front of these vowels are G, and P." Put it all together and you get Googoo, Mama, Dada, and lots and lots of Poopoo ! M,M, D,D, If you push your belly button, will your legs fall off? [⁴] You will need: Belly buttons, some pets in the house. Ask Junior to show you his belly button. Once you’ve established that he has one, tell him that all people have them. (As you know, before a person is born, an umbilical cord leads into this spot and gives the baby all of its food and oxygen.) Ask Junior how many total belly buttons there are in the house. We’re not sure how smart your kid will be, but the correct answer will be however many mammals there are in the house. Almost all mammals have belly buttons, including any humans, dogs, cats, guinea pigs, mice, ferrets, or woolly mammoths that are around. Note: Anything born in an egg won’t have a belly button, so don’t count fish, frogs, insects, reptiles, or the strange neighbor kid who’s hanging out in your family room. Challenge your challenging child to flex his tootsies! You will need: Any person, an open door. Kids love hearing a challenge like I bet you can’t [insert challenge here]. Start your kid off with some easy challenges. Try, I bet you can’t wink! or I bet you can’t inhale oxygen and exhale carbon dioxide! Once the little troll has met these challenges, try this one: I bet you can’t stand on your tippy-toes! Naturally, the child will be able to do it. Then say, No, that’s the easy way to do it. Try it like this. Open up a door and have the child stand up against the outside edge of the door, so that his feet are on either side and his nose and stomach touch the door. Now, without using his hands, have him try to stand on tiptoe! He won’t be able to because it’s impossible. (Try it yourself!) But that won’t stop him from trying. Kids love candy! Use this to your advantage. You will need: Candy, a rug that’s more than a few feet across (not wall-to-wall carpeting). Put a piece of candy in the middle of a good-sized rug. (Unlike a carpet, a rug isn’t wall-to-wall.) Tell the kids that if they can pick up the candy without their feet touching the rug, they can have it. The key is that the first thing to touch the candy must be one of their hands. Let them brainstorm on that for a while. If you feel nice, let them use tools to reach out to it, even though these won’t be that useful for this challenge. (The easiest solution is just to roll or push the rug up until you are close enough to pick up the candy. Be sure to give them hints as they try to figure it out so they don’t give up and so they can earn the candy.) Don’t let crayons color your opinion of this trick! You will need: Some crayons. Duh! Get from four to 20 crayons and spread them out on a table. Tell Junior that you will magically be able to identify a crayon that he picks without looking. Turn your back on Junior and put your hands behind your back. Tell him to take a mystery crayon and to put it in one of your hands. Then have him come around to where you can see him in front. While he’s coming around, take the hand that’s not holding the crayon and scratch the crayon slightly with your fingernail. When Junior comes around, look him in the eyes and tell him you’re going to read his mind! Place the hand not holding the crayon on Junior’s shoulder or head as you begin reading him. At some point, glance at your fingernail to see what color crayon you scratched. Make a big deal out of getting a color signal from him, and amaze him with the correct answer! Step 1: Secure the Area! Make sure there is nothing painful or uncomfortable afflicting the poor child. Is he stepping on a Lego? Remove the Lego. Are his diapers dirty? Good luck with that! Is he lying on a sledgehammer? What the heck is a sledgehammer doing in his crib? Step 2: TLC! The little sweetie might just need some tender loving care and attention. Pick her up and give her some. In case there is any gas in her system, you might want to put her in burping position. Make sure to get a towel or cloth over your shoulder, so that when baby burps she doesn’t ruin your evening gown. Step 3: Is He Hungry? Offer him a bottle. Not just any bottle, but one of his baby bottles with a rubber nipple at the end. (And make sure there’s some formula in it, too.) Step 4: Bored or Tired? Assuming that her teeth aren’t coming in and there’s no diaper rash, our best guess now is that she is either bored (play with her) or really tired and grumpy (quietly soothe her with lullabies or maybe a loofah). Step 5: Last Resort! If Junior is really getting hysterical and nothing you can do is working, you might want to consider texting or calling his folks. They might have an idea for you. Free Tip! The wrong time for horseplay and games is right after a baby or young kid has eaten (unless you really want your clothes covered with Gerber baby barf) or right before bedtime (unless you want them to be too hyper to go to bed). A good babysitter keeps cool in the face of emergencies. Here are some things that might happen to you, and some solutions to keep the situation under control. It’s a conniption fit! Do you remember how on the first day of kindergarten some kids would completely spazz out as their moms dropped them off for school? Wow. (Maybe you were one of those kids!) A little kid can really hit the wall when his trusted parents leave him with a total stranger. If this happens (or before it can happen), you need to imagine being the kindergarten teacher who is trying to calm down a child and make him feel welcome and safe. One advantage you have over a teacher is you can ask the child to show you his room. Wow! Are all these toys yours? Which one is your favorite? I like this one. Try to engage him by giving him attention; read a book together, go outside to play, or ask for a home tour. Steer the conversation away from his parents. Anything to distract him—even tickling can work! If all else fails, you can try this amazing Jumping Stuffed Animal trick. The Jumping Stuffed Animal If you have rhythm, this one will amaze younger kids. You will need: Some coordination, a small stuffed animal (or any light object that isn’t bouncy), and something you can sit or stand behind, like a table or sofa, for example. This trick is easy, but you’ll want to practice it a few times before doing it for Junior. Sit at a table. (Your audience will be on the other side of it.) Take a small stuffed animal (or any small, nonbouncy item) and hold it out to your shoulder height. Have the toes of one of your feet raised up. Say something like Hey, watch this! Your tiger can jump! Then bring your hand down below the edge of the table, as if you were throwing it on the floor. The next steps happen quickly. As you’re bringing your hand down below the table’s edge, turn your wrist so that the animal is facing up. Tap your toe down, to make a noise, as if the animal hit the floor. And then, with a flick of your wrist (not your arm!) toss the animal straight up. You should practice your timing at home beforehand, so that this looks right. As the animal bounces upwards, look at it, amazed! Junior will have fun trying to figure out how you did it. Teresa Crane: Now before I go, do you have any questions for me? Adrian Monk: Yes, yes, I have a couple of questions. What does [a two-year-old] eat? Teresa Crane: He . . . eats food. He eats whatever you eat, only in smaller portions. Adrian Monk: Oh. So he’s like a person. —From Monk Fun contests to stop you from losing your mind! For some reason, little kids are REALLY good at making a lot of noise and a lot of messes. Since their parents probably don’t want them to do too much of either one of these things, here’s a way to deal with that problem. You will need: The ability to get kids excited about a game. Optional: Silly prizes at the end for all those who enter the contests. Contest 1. If the kids are really screaming and spazzing out, tell them it’s time for a Very Important Contest. Tell them that wise women invented this contest many years ago to see who the best child in a village was. It’s called the Silent Contest. The contestants must sit at a table and see who can be quiet for the longest amount of time. Get everyone seated and then officially start the time. Don’t expect the kids to stay quiet for too long—these contests tend to turn into giggle fests! But anything is better than screaming. Contest 2. Sheesh, there are toys everywhere! The easiest way to get them picked up is to have a See Who Can Pick Up the Most Toys Contest. As before, you may want to give the kids some background about how, in the olden days, the Child Who Could Pick Up the Most Toys was destined to be a great leader. Plus, the kid who wins gets candy! (Make sure you have candy before saying this.) If you can sell this contest to them, any mess the kids have made can be cleaned up in no time. (It’s amazing how fast kids can move when they want to!) Singing a song while you do it will help. Clean up, clean up, everyone do their share. Clean up, clean up, the babysitter’s very unfair. Contest 3. Believe it or not, there may be times when the kids you’re in charge of are really shy and quiet. Maybe they’re naturally quiet and shy. Or maybe they’re afraid of you! Anyway, you’re worried that this visit will be boring for them and you. To pump these kids up and get some energy in the house, put on some music. Then say, "I want any bad kids in this room to be really quiet." At this, the kids should start screaming and going nuts (okay, or at least start talking). If not, it’s time for a Dance Contest. Anyone can be a judge for a Dance Contest. There might be categories for Slickest Moves, Coolest Outfit, or Most Likely to Hurt Himself When Dancing. Why do kids do things like this? You will need: Nail polish remover or hot water. At least once in a kid’s life, he will goof around with Super Glue, and get stuck to something . . . like himself. And for whatever reason, he will do it on your watch. The solution is pretty simple. If there is any nail polish remover around, soak some tissues or cotton balls in it and wedge them between the kid and whatever he is glued to. The acetone in the nail polish quickly dissolves the glue. If you don’t have any nail polish remover, soak the stuck parts in hot water. (Note: Don’t do this if he stuck his lips, eyes, or nostrils together.) This is much slower than using nail polish, but it’s better than listening to him yell. (Note: If he stuck his lips together, the bright side is he won’t be yelling!) Kids are so goofy and funny, and they don’t even realize it! Here are some of the cutest things that the kids you babysit might do. Playing hide-and-seek, you find your three-year-old in the middle of the room with her eyes covered, hiding from you. She figures that if she can’t see you, then you can’t see her! How precious is that? You ask your two-year-old if she has a sister. She says, Yes. Then you ask her if her sister has a sister. She looks at you like you’re nuts and shakes her head. Cute! You tell Junior that you will give him five shiny dimes for his dollar bill. He thinks he’s getting a good deal, and trades with you. Darling! Here. Then announce that you are going to turn the bucket upside down—but no water will spill. (The children will gasp!) Grab the bucket farther, making the whole scene more dramatic. WARNING: Do not ever kick the bucket! This is instantly fatal, and results in death, too. Two of the least cute things Junior can do: Get his head stuck in the railings. Clog the toilet. If he has either of the above two accidents, be sure to consult the Emergencies! chapter of The Big Book of Boy Stuff. You really should have a copy of that book around in case of trouble. The great thing about it is that you can give it to Junior to read after he gets his head out of the railings. (As a matter of fact, it might be the only thing that keeps him from crying!) Upside-Down Wa-Wa: The Encore. Okay, you babysat for a certain family once or twice, and it was a pretty bad experience. Let’s be honest: those kids are brats! If you really feel this way, you shouldn’t work for the family anymore. It gets tricky, though, because they are going to keep asking for you because you are so great. What to do? Sure, if you have the guts you can just tell the parents that you might not be the best person for their kids. The problem is that they will know what this means. And saying Your kids are brats! doesn’t sound like such a good idea. (Honesty may not be the best policy here.) We think this is a good problem to get your parents’ help with. If that family calls, have your own family excuse, like Tonight is family night or I need to study tonight or something. Brainstorm with your mom and dad. They may have some great ideas. After you say no one or two times, the family will get the idea and find another babysitter. [³] This selection from The Ancient Book of Babysitting Wisdom is reprinted with permission of the publisher. [⁴] Okay, just checking to see if you’re awake. Have the courage and the daring to think that you can make a difference. That’s what being young is all about. The United States has slightly more women than men. Yes! And compared to the men, a higher percentage of those women vote. Yes, yes! So, since the United States is a democracy and elects most of its politicians, about half of its politicians must be women, right? WRONG! In the United States, there are 100 senators. Yet
https://www.scribd.com/book/257415437/The-Big-Book-of-Girl-Stuff
CC-MAIN-2018-43
refinedweb
4,858
83.56
NULL, 0, and nullptrNULL). But read on. Uninitialized MemoryIf I declare a variable of a built-in type and I don't provide an initializer, the variable is sometimes? Because it can lead to unnecessary work at runtime. There's no reason to set a variable to zero if, for example, the first thing you do is pass it to a routine that assigns it a value. So let's take a page out of D's book (in particular, page 30 of The D Programming Language) and zero-initialize built-ins by default, but specify that void as an initial value prevents initialization: int x; // always zero-initialized int x = void; // never zero-initializedThe? I do. Break those eggs! This does not make me a crazy man. Keep reading. std::list::remove and std::forward_list::removeTen! Hold your fire. I'm not done yet. overrideC+ Effective Modern C++, but in a blog post such as this, it seems tacky to refer to something not available online for free, and that Item isn't available for free--at least not legally. So kindly allow me to refer you to this article as well as this StackOverflow entry for details on how using override improves your code. Given the plusses that override brings to C++, why do we allow overriding functions to be declared without it? Making it possible for compilers to check for overriding errors is nice, but why not require that they do it? It's not like we make type checking optional, n'est-ce pas?. Backward Compatibility very good reason. Or maybe not. Maybe a reason that's merely decent suffices as long as existing code can be brought into conformance with a revised C++ specification in a way that's automatic, fast, cheap, and reliable. If I have a magic wand that allows me to instantly and flawlessly. All we need is a magic wand that works instantly and flawlessly. very good reason. I want a wand that's so reliable, the Committee could responsibly consider changing the language for reasons that are merely decent..) - The wand should replace all variable definitions that lack explicit initializers and that are currently not zero-initialized with an explicit initializer of void. -.) - The wand should add override to all overriding functions. Clang Revisiting Backward CompatibilityIn recent years, the Standardization Committee's approach to backward compatibility has been to preserve it at all costs unless (1) it could be demonstrated that only very. Incidentally, code using ++ to set a bool to true is another example of the kind of thing that a tool like clang-tidy should be able to easily perform. (Just replace the use of ++ with an assignment from true.) many users. It's not just the pacemaker programmers who care about it.. A Ten-Year Process Here's how I envision this working: - Stage 1a:. - Stage 1b:. - Stage 2a: The Committee looks at the results of Stage 1b and reevaluates the desirability and feasibility of eliminating the features in question. For the features where they like what they see, they deprecate them in the next Standard. - Stage 2b: More time passes. The community gets more experience with the source code transformation tools needed to automatically convert bad eggs (old constructs) to good ones (the semantically equivalent new ones). - Stage 3:. One Little ProblemThe here.] For now, I'm interested in your thoughts on the ideas above. What do you think? 88 comments: I think that if you're going to make a magic backwards compatibility wand and wave it, then the samples you've identified are a tiny tiny subset of what could be acheived. Just make a new language with the desired semantics and then use the wand to instead backwards compat with your new language. I like it! And to be honest it's way overdue. It wouldn't break code either if it comes with a new language version. Legacy projects are likely to very carefully upgrading compilers and language versions. What about code bases that need to compile with older and newer compilers? I think before starting stage 1a, the target of the automatic replacement must be well-established (e.g. feel free to do b = true instead of b++, but might be too soon to try inserting nullptrs everywhere). Bold Stance. I like it :) One solution would probably be for compilers to provide a "strict" mode where deprecated constructs wouldn't be available ? I know Qt used clang-tidy to add override everywhere, with great effect (while keeping compatibility with old compilers through the use of a macro that conditionally resolves to override) I offer a counterexample. Despite the existence of an automated tool to transform most python2 code to python3, python2 remains quite popular. One of the reasons I've heard cited for the difficulty in porting to python3 is getting dependencies ported as well. I can see the same argument applying here. If I am unable to port to (a hypothetical) C++50 until all the C90 headers get their act together, then I won't be able to upgrade to C++50. I'm not going to port Windows.h or unistd.h or QT or whatever on behalf of those vendors / suppliers. Please enlight me: why don't we have a "#pragma version" we can put in our sources and crack even bigger eggs? Nothing would break, but every new file we added to a legacy project would be written in a way superior language... I agree backward compatibility cannot be an ever-increasing burden on the language, and I applaud your proposals in principle. Standard compliance could be a big problem with the wand tool though, and I speak as from experience with MSVC. Correct and compilable that is accepted by MSVC is often not Standard compliant, so I assume Clang would reject it. Equally, MSVC cannot compile many Standard constructs, so a re-written source from Clang may not compile in MSVC. I imagine the latter would be less of an issue if Clang only made very localised changes relevant to the deprecations. I fear that macros will make a lot of legacy code wand-proof, ruinning the day once again. That said, I'm all for deprecating awful things that have a replacement, and seeing if we can remove them 10 years later, be that thanks to automated tools, manual churn, or legacy codebases not planning to update to the latest standard anyways. It's nice to see that C++17 will remove some old entirely superseded stuff, maybe there is some hope for ramping up the deprecations a little bit. Removing operator++ on bool has been voted into C++17, see The major fly in this ointment is C compatibility. Being able to compile random C code as C++ is really useful and getting rid of 0 and NULL for nullptr would break that. The proper solution would be to get nullptr & co into core C but that will be even harder. C does not seem to have a group of people working to improve it through standardisation like C++ has. Thanks for writing this. It's a fun thing to consider, and I know others have not-unrelated things in mind. It's all quite exciting really. One of the advantages of expressing your thought in English rather than C++ is that no one can claim you meant ++b should become b = true when you actually meant (b = true). Operator precedence is one tiny example of why this whole thing isn't going to be easy. To me it seems there's an important distinction between a standard change that turns valid code invalid and requiring a compiler diagnostic vs. leaving it valid but with a different meaning at runtime. It shouldn't be hard to guess that I'd strongly prefer the former, but for thoroughness here's why I'm thinking that: * No one has to die. These deaths only come from runtime bugs. * Someone might forget to run clang-tidy. Compiler errors make that a non-option. * Requiring that the compiler detect the situation makes it not insane to think that many compiler vendors will provide an option (or separate binary from the same sources) that will fix your source for you. Point being if everyone has to implement detecting it, and the fix is straightforward enough, the tools for automatically fixing it may become abundant and convenient. * The other category of changes can still be made, they just have to be broken into multiple decade-long steps. For what it's worth I think there's plenty of other changes we could make if we're willing to take this approach. How about const being the default and non-const being a (better-chosen) keyword? I know some people advocate statically-checked throw specifications similar to what is found in Java, and that others hate the idea on the face of it. But it's always been a purely academic discussion because there's no way we could make such a radically breaking change. Or can we? "break exiting programs", perhaps not what you intend to say. Jussi Pakkanen: in case of 0/NULL/nullptr this can be easily worked around to compile C as C++, I think: #if defined(__cplusplus) && __cplusplus >= 201103L #define NULL nullptr #endif but I agree that being able to compile C code as C++ is valuable, especially for libraries. And so is supporting both old and newer versions of C++, also for libraries. As long as supporting C++98 is a thing, updating libs to only support C++11 and newer is is a bad idea. Adding some hacky #defines for NULL and override might be doable, but workarounds for renamed methods are harder/more invasive. Is there an actual benefit to nullptr that's so important as to break compatibility? Let me rephrase that. Imagine a C++11 program that consistently uses nullptr instead of 0/NULL. Now imagine I do a search & replace of nullptr with 0. What's the absolute worst thing that can realistically happen? @Arseny Kapoulkine: suppose you have two overloads: void fun(int) and void fun(double*). What should fun(NULL) do? It would call fun(int) if NULL was defined as 0, but it would be ambiguous if NULL was defined as 0L. In contrast, fun(nullptr) calls fun(double*). Hi Scott, I'm a big fan of yours but I feel like I'm missing something really basic here. So you want a magic wand, that when run will magically update someone's code to a newer version of the standard. So this is a tool which effectively has to be a backwards compatible compiler for a language, yet exists so that the language itself- and the compilers which implement it- does *not* have to be backwards compatible. First off: as someone who has worked in "modernization" companies producing tools that magically upgrade old code bases, those tools by their very nature do not work as perfectly as you describe. There is always hardship and pain and manual labor. Second: we have those tools today. They are called C++ compilers, which are backwards compatable because the language is backwards compatable. If you want a new version of the language that is not backwards compatable, that also exists in the form of the warnings given off by most compilers. If you don't allow any warnings when compiling, you effectively have a non-backwards compatable version of C++. Additionally, the static analysis tool demo'd by Herb Sutter at CppCon seems to take things another several steps in the right direction. So let me get to my point. If a company making pace makers won't make builds pass with zero warnings or update their CI process to involve additional static analysis tools even when the world's foremost C++ consultant says to, they for damn sure aren't going to deal with upgrading to a backwards-incompatible compiler that advertises breaking changes even if some magic wand tool claims to exist. The day C++0z shipped without support for the version their code used would be the day they never upgraded C++ again. MAJOR KUDOS to the person who mentions Python 2 / 3 above. As someone who spends all day working on Python code I can guarantee you that the grass is *NOT* greener on the other side. There is nothing really that sexy about Python 3 compared to 2, yet the thought leaders of that community thought it was crucial they violate backward compatibility to achieve their goals. In the end they left behind everybody who had a large and already working code base, as well as people who enjoyed or needed to be using libraries that did not yet work in Python 3. I have literally never met a Python fan in real life who strongly advocated dropping 2 in favor of 3. Even people who are in love with the language act like Python 3 is idiotic because it's just different enough to make things a huge pain in a language that is all about ease of use. If C++11 had taken the same path as Python 3, nobody would be using it today but a small fraction of the C++ community that didn't care about backwards compatibility and could afford to update everything, including their libraries. In that nightmare of a parallel timeline C++11 would be competing with other languages like D, Haskell, Rust, etc, because if you used C++11 you would already need to write so much from scratch that you might as well consider something completely different. So the best state of the world is to leave things optional, meaning we can have our cake and eat it too, not alienating the legions of devs who can't or won't update 100% of their code while still keeping them involved with the language itself while the rest of us can continue to apply best practices, use ever better static analysis tools and get to the world you're describing for free. @Rein Halbersma: Yeah, sure, but that is not catastrophic - you just get an ambiguous overload error? I can imagine weird overloads being picked up - I think I have seen code like this once or twice: std::string s(0); Where the developer intended to do a number to string comparison but got a crash; but these cases almost never come up in my experience. If you want to break some eggs, why not do #undef NULL #define NULL nullptr "NULL" is three characters less than "nullptr", so let's save some typing. In my opinion, C++ breaking C's "NULL", and requiring "0" was the original problem. Why not fix it? Requiring yet another way to declare a null pointer just seems silly. Everyone suspicious about the magic wand (clang plugins and rewriters) should have a deep look at: and for stuff that is used in production today. The future is now. If I may, Being a (recent) member of WG21 (the C++ standard committee), what I can say from your (interesting) post is that the committee already pretty much follows a path like the one you recommend here. In Kona, a number of features that were already marked as deprecated were removed from C++ starting at C++17, including such wonderfully (!) weird things as operator++(bool), which behave differently in C and C++ and for which there was no operator--(bool) in C++ anyway. In some cases, there was a push to remove features that we could legitimately place in the «bad eggs» basket, but that was blocked due to the fact that these features had not been marked as deprecated previously. This might not be the proper way to do things according to some, but it does make sense from the perspective of some not-unsignificant parts of the industry, and the removal of such features will probably happen around C++20, after «deprecated» has ha a chance to sink in. Clang tools indeed do a very good job, and I'm glad you mentioned them. I doubt the standard will advocate specific tools, but proof-of-concept is a good approach. We need more tools like these. I think, thus, that what you are suggesting is close to what's actually being done. WG21 is a big group with various interests and viewpoints, which slows it down according to some perspectives but I actually think it's a good thing. Cheers! Thanks for the interesting suggestions! Python is not a good comparison. The move from Python 2 to Python 3 had semantic changes that were impossible to provide automatic translation to. There were several major changes which were effectively impossible to see syntactically, and which required major modifications of logic. In many cases, the fallback option an automatic conversion would require just didn't exist. Further, Python also changed the C API. This meant that any C library (and there were a lot) needed to go through a manual migration process, and for some of these this took a long time and a lot of work. There is certainly no automatic tool for that. Discussing only those changes verified to have perfectly semantics preserving automatic transformations makes this a totally different ballgame. Whether this is the right option is, of course, a different question. Suppose you had such tools, compiler and translator. Their composition, compiler · translator, would be exactly identical to a backwards compatible standards revision. The only difference, then, is that with the two separate one would have to run it over one's code - and its dependencies too. Having them pre-composed, as we do, with an optional clang-tidy step is functionally identical and much less likely to aggravate or cause issues. One should thus focus not on separating them, but in making changes in the C++ standard purposely compatible with such code transformations. That, to me, is the best of both worlds. I'd really like to see something like the Python 2 -> 3 break. Along the lines of the first C++ after 2020 becomes C++2, and has no guarantee of backwards compatibility, and just sweeps out a whole bunch of crap. It's incredibly frustrating that there are so many situations where the simple and obvious code is now considered bad practice. To make things worse, the new and improved waya of doing stuff are usually complicated and ugly because they had to be crammed in some way that would be backwards compatible. It's negative reinforcement for people to do things the wrong way. The correct code should be simple and obvious. If I need to do something unusual, I don't mind jumping through hoops, but I shouldn't have to jump through them all the time for every day stuff. The reality is there are 30 years of books, and 20 years of internet tutorials telling people to use C++ in ways that are frowned upon now, and those aren't going away. As new people start using the language they're just going to add to the backwards compatibility burden when they see some 15 year old tutorial and all the code still works. For the override proposal, a I don't think a magic wand is possible. With templates, you can have code that would be invalid with override and (under proposed rules) invalid without it. Here's an example: How would the new uninitialized memory rule affect arrays? or the special rules around dynamically allocated PODs () If we are going to break backwards compatibility, this seems like a half-measure: int x; // always zero-initialized int x = void; // never zero-initialized Why should we still allow int x; to compile at all? The problem is that allowing the implicit zero initialization can still hide bugs that sanitizers would no longer be able to find. All it does is make buggy code produce repeatable results, which can make it harder, not easier, to find the bug. How about making it so you always have to specify an initializer or void? I would also use this chance to remove two-phase name lookup and replace it with something more sane. Two-phase name lookup is what requires you to write this->m_x instead of m_x if and only if m_x was declared in a base class that has a type parameter (template). It makes absolutely no sense that accessing base-class members is different depending on whether the base class has a type parameter or not. When I write m_x, I don't care what the base class looks like. Having to put 'this->' everywhere pollutes the code badly. The current two-phase name lookup feels like a compiler writers' cop-out. The MS compiler shows that one can do without. This is the first rotten egg I would address in C++. The semantics for uninitialized variables 'int a = void' is nice. I'd prefer something that easier to grep for though... especially in the context of a magic wand tool that is going to automatically update my source. Thanks to everyone for the comments to date. I won't try to address each individually, but I will try to offer a few clarifications: - Tools such as I propose would be useful in practice only if they handle real source code, and that means being able to process platform-specific extensions. Few, if any, nontrivial systems are written in 100% pure ISO Standard C++. I don't know how well current Clang-based tools handle this, but Microsoft already has some support for Clang in Visual Studio, and if they haven't done it yet, I'd be surprised if they didn't do what it takes to get their version of Clang to handle MS-specific language extensions. (A wonderful article about the difference between "standard" programs and "real" ones is A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World.) - The situation as regards Python 2 and Python 3 is a cautionary tale for the Committee, and I'm sure they're well aware of it. If the Committee tries to lead where the community as a whole doesn't want to go, the result is likely to be a splintered community. To avoid that problem, you avoid breaking changes that are not widely accepted by the community. Clang-based tools can demonstrate to the community that a change they already accept in concept is practical, but the tools alone are unlikely to build consensus about breaking changes that important parts of the community oppose in concept. - I agree that the need to compile some headers under both C and C++ would complicate attempts to get rid of NULL. Perhaps C++ could be specified to define NULL only if another preprocessor symbol--something like __C_COMPATIBILITY--is defined. This wouldn't break new conceptual ground. The behavior of the assert macro already depends on whether NDEBUG is defined. - I'm pleased to hear that it looks like C++17 will finally get rid of ++ on bools, but the fact that it will have remained deprecated for nearly 20 years underscores my point that the Committee is currently very conservative about removing language features. More later when I have more time. I'm truly grateful for all the feedback. This might be a minor detail given the intent of the article, but as former C programmer, this seems like a very strange attempt at revisionism to me: "NULL came from C. It interfered with type-safety (it depends on an implicit conversion from void* to typed pointers), so C++ introduced 0 as a better way to express null pointers." This is just flat out wrong. From the very beginning (in C), the null pointer was the integer "0". NULL is just preproceesor macro that turns into "0" to distuingish (in the code, not in the compiler) between an integer and a null pointer. See also I wouldn't usually bother with such a statement on the Internet, but from Scott Meyer? C'mon!! :) I think this would be great in getting C++ to be more competitive with the newer languages nipping at its heels that aren't burdened with design mistakes made 20 years ago. The one place where it becomes challenging is third-party libraries as someone mentioned because unless you fork them, you have to wait for them to update their code. And they may be prevented from upgrading due to another customer. For example, imagine a third-party header: struct bar { int x; int y; }; bar foo () { bar b; b.x = 5; return b; } If compiled under the new mode, this would give y a value of 0. If compiled in the old mode this would be an uninitialized value. If you did also run the tool on that header, then the problem is that now you've had to fork the 3rd-party library. I don't think it's an unsolvable problem. Perhaps it can be managed with compiler switches that only apply the new behavior on files that exist within certain paths or have a pragma & fall back to old behaviour otherwise. Alternatively, it's quite possible this just becomes a non-issue with modules but it's going to take a few years to get good penetration with modules. +1 In Haskell we didn't do this and the new improved spelling for lots of things is now #ifdef Old #else New #endif Couldn't disagree more. Your criticism is inconsistent and picks on issues that hardly matter in real life programming. Uninitialised arrays? If every local array is zeroed, you won't make many friend with those who use C++ for writing highly optimized code. Just as we despise nanny-states we should not make our compilers act like nannies for us by taking care to initialize every variable for us. Backwards compatibility is merely also compatibility with C, algorithms should work on bare-metal embedded systems as well as on modern CPU. Against macros you fail to provide any argument but your personal dislike, and I am happy to know that these will never go. As soon as some committee favours their abolition, I shall happily roll my own preprocessor. Macros are a powerful thing of C/C++ that most other languages fail to provide. If for some undisclosed reason you don't like them, there is an easy solution: don't use them. The Python 2/3 argument IMHO is a showstopper argument against introducing versions of C++ code (once the wand did it's job, the code should not be compiled with an older compiler anymore). So the most basic requirement should be that by default all old code is compiled with backwards-compatibility just as it is right now. However (!) there is no reason why proactive C++ developers who want to get rid of the old cruft cannot be allowed a compiler option to say so. By default, all the old constructs should cause a special compiler warning like "ISO deprecated", which all the legacy maintainers are free to ignore. However everyone who systematically wants to write modern constructs now has a tool to find and fix all the places in his code. "Your criticism is inconsistent and picks on issues that hardly matter in real life programming" - how can Scott Meyers know what a real life programming is. He is an amateur who never wrote a line of production code, not even wanted to do so. He is a sneak oil salesman, that's all he is. The various ways, old and new, to write code make it harder to teach/learn. It also reduce the coherency of a base code shared by people with various habits, and add a useless cognitive load when reading code. Also, newcomers may not be very receptive to the backward compatibility necessity, it's difficult to accept that one must know all the language rusty constructs... My suggestion in N points. First, the commitee should define a set of syntaxical constructs, covering features that can be written in several ways (such as defining 3 syntaxical construct for the null pointer initialisation, one for "= NULL", one for "= 0" and the last one for "= nullptr") Step 2: the commitee mark a subset of thoses syntaxical constructs as deprecated. Step 3: compilers should warn when using deprecated syntaxical constructs Step 4: ignored by legacy code mainteners, enjoyed by everyone else. An other example, for the [default initialized value]. Detect "int i;" as one (deprecated) lexical contruct, and "int i = void;" as a prefered syntaxic construction, with equivalent semantic. - I'm not a compiler developer, at all, but I feel like it's not the hardest thing to implement (sorry if I'm wrong, I'm just ignorant). - I feel like nothing is broken in legacy code as nothing is removed, added or modified in the language itself - the standard is shared by everyone as the deprecated syntaxical constructs are in the ISO text. - easy to handle as a newcomer, or a more experienced programmer cause we are all used to warnings. Something like "x.cpp, line:42 : prefer using int* i = nullptr, instead of the deprecated form" would be great. I'd love a feedback on this :) @Anonymous: Regarding C, NULL, 0, and C++, this is from Bjarne Stroustrup's 1994 The Design and Evolution of C++, page 230: "Unfortunately, there is no portable correct definition of NULL in K&R C. In ANSI C, (void*) 0 is a reasonable and increasingly popular definition for NULL." "However, (void*) 0 is not a good choice for the null pointer in C++. ... A void* cannot be assigned to anything without a cast. Allowing implicit conversions of void* to other pointer types would open a serious hole in the type system." I believe this backs what I wrote in the blog post. I'm certain that when I was working with C++ in the late 1980s, C's NULL was often defined as (void*)0, and if this Wikipedia article is correct, "In C, ... the macro NULL is defined as an implementation-defined null pointer constant, which in C99 can be portably expressed as the integer value 0 converted implicitly or explicitly to the type void*." @Martin Moene: Typo fixed, thanks. @Scott Meyers You lack not only a practical knowledge but also academical. Nobody uses Wikipedia to back their arguments. Unless you're amateur. Then you do things like you just did. Here are a few more general remarks motivated by comments that have been posted: - C++ weighs backward compatibility highly, but the Committee has been willing to introduce breaking changes when they felt it was worth it. As I mentioned, C++11 changed the semantics of auto and introduced a number of new keywords. (Introduction of new keywords is always a breaking change.) The new idea in my post is not that the Committee introduce breaking changes, it's that they consider being more aggressive about it by taking into account how the impact of such changes can be mitigated by Clang-based source-to-source transformation tools. - Experience shows that relying on compiler warnings to inform programmers about "bad" practices is not reliable. In Item 12 of Effective Modern C++, I show code with four different overriding-related mistakes (i.e., valid code where derived class functions look like they should override base class virtuals, but don't). I then write: "With two of the compilers I checked, the code was accepted without complaint, and that was with all warnings enabled. (Other compilers provided warnings about some of the issues, but not all of them.)" To date, the Standardization Committee has shied away from addressing compiler warnings, so there is no such thing as a "mandatory" warning. -. @rnburn: Good point about templates and the indeterminacy of whether a function should be declared override. I don't know if that's necessarily a deal-killer for requiring override on overriding functions, but it's certainly a notable obstacle. Thanks for pointing this out. As the teenagers says "True Dat." Also, I think the "pacemaker example" is a bit of a strawman argument. No one building a pacemaker changes any part of their toolchain without a complete regression test. Death concerns don't need to be a consideration when changing C++. The "keep them living" responsibility lies solely with the people that are creating safety-critical devices. Either they see the benefit of the new version and then update all of their code to conform (and test that they have done so), or they stay on the older version. I'm wholeheartedly with you on cleaning up C++ and breaking some legacy C compatibility. But if you want to break all the eggs, I suggest you go much further. Lets drop all the crud and leave "the much smaller and cleaner language struggling to get out" Bjarne is talking about. This will be painful, but probably not as much as Python 2 vs. Python 3 struggle, which accomplished not that much considering the ramifications. Breaking backward compatibility is a serious business, so let's break it good. I'm pretty sure DEC C on VAX/VMS is just one example of a compiler that used to define NULL as (void*) 0. To "Knowing me, knowing you, a-ha": Scott has written lots of good books that I've found incredibly useful so I don't care what Scott's academic or employment record is - he does good work and that's all that matters. Leave Scott alone and get your facts right. @Anonymous: Yes, my use of the pacemaker example and the risk of people dying was simplified and exaggerated for dramatic effect. I'd expect any company developing safety-critical systems to employ extensive regression testing any time any part of the build process changed. In addition, I'd expect such companies to employ detailed code reviews for any kind of change to their safety-critical source code. Adoption of any new compiler version presumably means that the company incurs the costs associated with regression testing, but breaking changes to the language may additionally cause such companies to incur costs associated with changes to their source code, which, for all I know, involve not just internal code reviews but also new rounds of government certification. My fundamental point is that revising C++ such that old code requires modification in order to retain its semantics can have a dramatic and costly impact on the language's users. This is why the Standardization Committee is very reluctant to adopt breaking changes. @Arseny Kapoulkine: If that C++11 program uses a C interface of some library then the problem might exist when C interface exposes a function with variadic parameters. On 64bit platforms passing a nullptr will send 64 bits set to zero (or to any pattern interpreted as a NULL pointer). Passing literal zero - usually only 32 bits, as literal numbers are usually treated as ints and usually int is 32 bits wide even on 64 bit platforms. So replacing nullptr with a 0 in this case will end with C library expecting a pointer reading 32 zero bits and 32 bits of garbage. An example fix done by me: Passing NULL would probably solve it too. But that's another problem - sometimes you can pass NULL, but not 0, and probably sometimes you can pass 0, but not NULL. With nullptr, you don't have that problem. @Anonymous "Scott has written lots of good books that I've found incredibly useful so I don't care what Scott's academic or employment record is - he does good work and that's all that matters. Leave Scott alone and get your facts right." I got my facts right. His books? Books written by amateur, who never wrote production code. Those are **facts**. I prefer books written by professionals who actually wrote production code in their lives. Like Alexandrescu or Bjarne. Amen brother. C++ can be an elegant language, if consequently avoiding the old and embracing the new features. While I can do that myself, it's not nice to read someone else's code and find all the old idiocraties. It would be great to get a C++ compiler that simply refuses to support the legacy stuff unless some obscure #pragma is set. Or at least a #pragma to support only new features that I can set. Well, you could also drop K&R compatibility, some of the freaky ways of calling functions and I'm guessing C-style casting would get the purge. I'm pretty sure you'd just split the development community. While all these things are lovely in theory, in practice there are large areas where such modern innovations as STL and exceptions are still a bit racy and C++11 is just crazy talk (and for practical reasons too, not just techno-dinosauring. Good luck doing all this if Visual Studio 2005 is the only compiler that works on your codebase). Given that only about one in five developers is even interested in using no-brainer static analysers, or paying attention to compiler warnings, they aren't going to take kindly to forced backwards incompatibility. Dear internet troll of literally gigantic proportions named "knowing me knowing you,aha", Could you please do yourself - and by that, us as well - a favor and consult psychiatric assistance as soon as possible ? In your dissing every single post of Scott Meyers - for weeks or months now - you are resembling a tibetan prayer wheel constantly repeating the same phrases over and over again as a response. It is clear for everyone in here - except you - that your issues are serious and probably pathologic. Get help ! Should you opt for ignoring my sound advice rest instead assured that the whole community reading the blog has already recognized that you are obviously a production code writing genius who knows everything better than the entire abysmal rest of the "programming world", or at least better than Scott Meyers. So, there is actually no need for you adding further comments, right ? Scott: I need to save this blog so I can look at your book. At least for me,I am happy to see that you have written something on the subject of making C++ robust and not offended at the shameless plug.. @Anonymous: Please see my earlier comment that starts with "my use of the pacemaker example and the risk of people dying was simplified and exaggerated for dramatic effect" and concludes with "My fundamental point is that revising C++ such that old code requires modification in order to retain its semantics can have a dramatic and costly impact on the language's users." I definitely missed your point. What I said amplifies your point that changes can have a dramatic and costly impact.. I tend to think of each new version of the standard as a new language and thus there is less reason to have backward compatibility. The whole point of a new version of the standard is to create a new version that is better than the last. C++11 and C++14 are *not* the same language. Driving a stake through the heart of things like ++ on a boolean should absolutely break old code. The idiots who did that operation deserve as much pain as possible since a numeric operation on a boolean has no meaning. FALSE++ and --TRUE are just plain stupid. I like your suggestions on what "features" to just kill. I am looking forward to getting your book. Anonymous CdrJameson said... "Good luck doing all this if Visual Studio 2005 is the only compiler that works on your codebase)." If you are stuck using an older toolchain for the foreseeable future, why does it matter what direction future C++ takes? People will inevitably embrace new features, making their code incompatible with your toolchain. "Driving a stake through the heart of things like ++ on a boolean should absolutely break old code. The idiots who did that operation deserve as much pain as possible since a numeric operation on a boolean has no meaning." Those people are usually not the ones affected. The people that inherited their code base typically are. Those people may be unable to convince management to upgrade because the new version has too many incompatibilities. "If you are stuck using an older toolchain for the foreseeable future, why does it matter what direction future C++ takes? " If the standard breaks a lot of things, then upgrading becomes expensive, making it even more likely that you stay stuck with the old toolchain. The new C++11 features are absolutely better than what they replace. Were the old features so broken and error prone that they need to be broken at compile time though? How many bugs will that fix? If the goal is to fix / prevent bugs, then I can definitely see significant justification for changing the behavior of uninitialized variables, and possibly see justification for forcing override. I think that those are two of the more expensive changes though. People who keep wanting to turn C and C++ into Java need to rethink their priorities or just switch to Java. One of the first descriptions I ever heard of "What is C?" (this was back in the early 90's) was "Gentleman's Assembly". You're trying to morph a language that had a philosophy of "trust the programmer" into a philosophy (much agreed upon by Java developers everywhere) that "Programmers are too stupid to handle pointers and other language features, so let's take those features away from these incompetent developers". When you start believing in your own incompetence, you're going to be incompetent. You're trying to make a system that can prevent developers from doing things that might create bugs. You might as well try to get people to use adverbs correctly when asked, "How are you doing?" Five out of six well educated checkers at my local grocery store answer using "well". People in general tend to answer using "good". I get the feeling there are folks in your community of readers who would like to have such people outfitted with shock collars. Interesting blog post. Thanks Scott! As a matter of fact I had thoughts on this subject some time ago, I even posted a question on StackOverflow [1], as I had no idea where to ask this... I'm not sure my idea would be a step in the right direction but I agree this part of C++ needs improvement. [1] @Adam Romanek: I think many people agree that unintentionally uninitialized memory is a problem waiting to happen, but there are also many who feel passionately that initialization of built-ins should be optional. I like D's solution (zero-initialization by default, no initialization by request), and I think that Clang makes it possible to create a path from the current C/C++ approach to the D approach in a way that breaks no code and forces nothing on anybody (other than a slightly different syntax for new code that wants to avoid zero initialization). I'm struggling with default initialisation. If the argument that defined is better than undefined, or that zero-initialised is 'best'? I don't have experience with languages that default initialise to zero, but my (possibly biased thinking because I grew up with C and then C++) thinking would be that best practice should explicitly initialise before use regardless. An object is default initialise to a state via its ctor, and conceptually has a 'good' default state, defined by the behaviour implementation. But the same isn't true for fundamental type -- what makes 0 a better default that -1 or std::numeric_limits<>::min() or max(), or 18374? The more I think abut this, I think the default uninitialised is actually the right choice :) Scott, what are your thoughts on sanitizers? If there is broken code because of an uninitialized value, that code may still be broken if you implicitly initialize it to 0, but you can no longer use a sanitizer to look for it, because the sanitizer cannot tell the difference between accidentally implicitly using zero initialization vs. deliberately implicitly using zero initialization. This is why, as I mentioned earlier, that I'd rather just ban default initialization in these circumstances. @Craig Henderson: My argument for using 0 is consistency. Both C and C++ already perform zero-initialization on objects of static storage duration. Aggregates that are list-initialized and given too few initializers use 0 as a default value for the members with no corresponding initializer. In the STL, vectors of built-in types are zero-initialized by default. Like it or not, 0 is the default initialization value in many places in C++. I think there's much to be gained and nothing to be lost by extending this from "many places" to everywhere--with the understandings that (1) there will be a way to opt out of zero initialization where you don't want it and (2) there will be a practical way to migrate legacy code to the new default-initialization rules without any change in semantics. @Nevin ":-)": In this post, my interest is in a way to change the language in a way that preserves full backward compatibility, so any context in which sanitizers are currently useful would remain a context where sanitizers would be useful on programs that had been transformed. Note that due to my focus on maintaining strict backward compatibility in this post, nothing I'm proposing would cause existing code that currently has an uninitialized value to become implicitly zero-initialized. My sense is that you'd like to get rid of implicit zero initialization entirely, and I believe that that, too, is something amenable to "magic wand" legacy program transformation: have a Clang-based tool replace all code that currently gets implicitly zero-initialized with the corresponding syntax for explicit zero initialization.. // disables things that make it easy for you to shoot yourself in the foot. lang mode strict; // modern code, sensible language subset lang enable c_varargs; // some code that interfaces C lang disable c_varargs; As an aside, It's also INSANE that -Wall on gcc doesn't really mean everything. This is again for backwards compatibility. No!, if I mean everything I really mean it. If I'm upgrading compiler and I wanted to have yesterday's -Wall, then it should be versioned: -Wall-gcc49. Otherwise what's the point of all those compiler devs spending their time and effort trying to make my life easier if it's so hard to access their efforts?. You are basically allowing more things to legally compile and run, so there is less checking that compilers and sanitizers can do. Example #1: the following does not compile with warnings enabled: int i; std::cout << i << std::endl; If the behavior were changed so that i was initialized to 0, this code would have to legally compile. Example #2: this code compiles but is caught by the sanitizer: foo(int& ii) { std::cout << ii << std::endl; } int i; foo(i); With your proposed change, sanitizers could no longer diagnose such code, as it would be perfectly legal. So, why make the change?. Reason #2: this is a common mistake for beginners. Beginners (as well as experts and everyone in between) ought to be using sanitizers. On the whole, this kind of change seems to mask bugs instead of preventing bugs. What am I missing? . I'm inclined to agree with Nevin on default initialization not being 0, at least with regards to simple stack types like int. I don't want to look at a declaration and assume that the author intended to initialise to zero, I'd rather explicitly know.. One could argue that the other way though, that they'd rather auto initialize to 0 so that if that wasn't intended, one can predict what will/has happened to the data/system better. If one likes that latter view, then could say default initialization to zero is done, but the user/compiler should still not let default initialization to zero affect it's compile time analysis of errors. But it will likely affect runtime. But overall, so far, I'm not in favour of automatic initialization, at least of simple types like int. I want the compiler and the user to be able to assume that anything not explicitly initialized is a potential source of error.. Great idea! Except, there are already enough languages which are perfect candidates for this. So just use one of them instead of making a new one! . Note that /Wall is extremely noisy, as it turns on all the warnings. It is roughly equivalent to GCC's and Clang's -Weverything. P.D I am ready for your next book ;-) Hi Scott,. Cheers, Jens I agree it's time to do this break those eggs, as for the comment on python 2to3 two points 1). the change is happening but slowly and 2). the change has been much slowed by the fact that the 2to3 utility was sloppily implemented it couldn't even do trivial changes like print args --> print(args) we can do better I have found one use for nullptr and deprecation or removing the automatic cast from 0 to nullptr would break it. When you want to provide enums as bitmask.types (17.5.2.1.3), you have to define a few operations on them. set, clear, xor and negate are easy and even documented in the standard. Now taking a enum bitmask and 2 instances X,Y, defined as enum class bitmask : int_type{....}; bitmask X,Y; you would have to support 2 additional operations (noted in the standard): (X & Y) == 0; (X & Y) != 0; In other words, you need operator== and operator!=, which ideally ONLY TAKE CONSTANT 0. The solution I came up with was: constexpr bool operator ==(bitmask X, const decltype(nullptr)) { return X == bitmask(); }. About override why not just write: class ... { ... override type fun(paramlist); }; instead of: virtual type fun(paramlist) override; @npl: From what I can tell, your operator== function returns whether all bits in the bitmask are 0, so I don't see why you want a binary operator to test that. Why don't you just define a function something like this? constexpr bool noBitsAreSet(bitmask X) { return X == bitmask(); } @Vincent G.: I'm not familiar with the history of the placement of "override" at the end of the function declaration, sorry.r: I should have remembered that; I write about it in Effective Modern C++, Item 12. Thanks for reminding me! @Greg Marr: Thank you for the explanation.. Cause the interface of a bitmasktype requires it: Its supposed to be identical in use / interchangeable with a pre11 plain enum @npl: Okay, I think I see what you mean. However, I believe that [bitmask.types]/4 is simply defining terminology, not required expressions. ("The following terms apply...") As such, I think cppreference's interpretation of that clause of the Standard is incorrect. Even if we assume that [bitmask.types]/4 requires that the expression "(X & Y)" be testable to see if it's nonzero, I don't see any requirement that the zero value be a compile-time constant. That is, I believe this would be valid: template<typename T> void setToZero(T& param) { param = 0; } int variable; setToZero(variable); if ((X & Y) == variable) ....: You know, I am talking about an ideal, egg-free omlett world (sounds somehow implausible).: enum EType { eType_Somebit = 1 << 0, eType_Otherbit = 1 << 1, }; bool foo(EType e) { return (e & eType_Somebit) != 0; } // easily transformed (search + replace mainly) to omlette: enum class EType : unsigned { Somebit = 1 << 0, Otherbit = 1 << 1, } // define & | ~ &= |= == != operators for EType, a MACRO can do this bool foo(EType e) { return (e & EType::Somebit) != 0; } BTW, I had written some lines to explain why there cant be a "standard" way to test for constant 0 argument - but while writing I might have found one =) : Its fine, helped me thinking about the problem again. Thanks for your time. The reason I want only constant zero is simply to enforce that code, some simple canonical "bitcheck" that has a well defined meaning whatever the type is. x == 0 is a good candidate because its pretty simple, "builtin" for plain enums, widely known and used. Approach with third party tools is completely wrong and is likely to fail. No "magic wand" will help if we have 10M lines of sources with requirement to do code review and write (and physically sign!) "formal code review reports" (this is what FDA requires from healthcare related project!). The best solution for this problem is to adopt for C++ something like "use strict" does for JavaScript. Now it is up to developer (!) to decide: do we need to update all the 10000files of project sources with "magic wand" (and write tons of those "formal code review reports"), or use this new "strict" mode only for new or refactored files!. The new "#pragma strict" or "using strict" or whatever we call it will be different from this hell just because it is part of C++ standard and every conforming compiler is forced to support this new feature! No more reinvention the wheel and no more trying to tie square wheels invented by my company to triangle wheels invented by third party companies! This blog awesome and i learn a lot about programming from here.The best thing about this blog is that you doing from beginning to experts level. Love from
http://scottmeyers.blogspot.com/2015/11/breaking-all-eggs-in-c.html?showComment=1447797928589
CC-MAIN-2018-17
refinedweb
8,963
60.75
Spiritual? Solution Preview Please allow my notes to guide you as you develop your own ideas about the topic of moral or spiritual development. As a science teacher, I feel as though a moral intelligence could be easily integrated into teaching. I think it would be easier, legally, to do a "moral" intelligence as opposed to a spiritual one, as many people have strong and very different religious views. Another problem with a spiritual teaching of any kind is the motivation behind it. For example, the motivation behind a Christian's behavior is to serve God whereas a Morman's motivation lies behind trying to achieve a god-like status via hard work. Taking the motivation out of the teaching leaves a lot to be desired, but is necessary to keep the peace between all the children of various religions in a public classroom. The alternative is to teach every motivation or religion, which is virtually impossible, due to time constraints and due to the numerous religions around the world. However, nearly all religions teach similar morals, such as honesty and kindness. Therefore, helping students develop a moral intelligence is not only possible, but I believe beneficial. Lawrence Kohlberg is one of the most famous researchers of moral development. I think learning about his research will help you decide the how and why of the possibility of teaching a "moral intelligence." Kohlberg expanded on Piaget's original work, which outlined only two stages of moral development. He divided the process into three levels, with each level being broken down into two stages. The stages are thought to be worked through in order, which young children starting in Level 1, Stage 1, and adults ... Solution Summary This solution addresses the pros and cons of adding a spiritual/moral intelligence.
https://brainmass.com/psychology/abnormal-psychology/spiritual-intelligence-244579
CC-MAIN-2017-30
refinedweb
298
51.68
Managing DFS Namespaces (File and Print) Question The Last! The Staging quota of the replicated folder? Victorious Secret Question The First Delegation of the namespace? The referral settings of the namespace Why is referral settings the right answer!? The Cache duration of the namespace? The properties of Active Directory SITE LINKS? Your network contains an Active Directory domain named contoso.com. The domain contains two servers named Server1 and Server2. Both servers run Windows Server 2012.? The referral order of the namespace? Picking which server is FIRST or LAST in a namespace won't help at ALL with bandwidth! Remember we are trying to CONSERVE bandwidth. Priority doesn't do that with REPLICATION. The schudule of the replication group! Why is this the RIGHT answer?! The properties of Active Directory sites? Scheduling Will make to so that the files are syncronized at a single point in time, making for less overall traffic, but a larger amount at the schedualed time. While this might sound good, this option is only storing the information for the Referal computer in the name space. Staging quota will allow a file that has been changed twice between the replication periods to be stored with the newer version being replicated and the older modified version going into the staging DFS Namespaces (File and Print) No description byTweet Adam Smithon 16 October 2013 Please log in to add your comment.
https://prezi.com/pzrkfmfmxin8/managing-dfs-namespaces-file-and-print/
CC-MAIN-2017-51
refinedweb
233
67.65
#include <StelSkyDrawer.hpp> Constructor. Destructor. Init parameters from config file. Update with respect to the time and StelProjector/StelToneReproducer state. Get the painter currently used or NULL. Set the proper openGL state before making calls to drawPointSource. Finalize the drawing of point sources. Draw a point source halo. Draw a disk source halo. The real surface brightness is smaller as if it were a point source because the flux is spread on the disk area Terminate drawing of a 3D model, draw the halo. Compute RMag and CMag from magnitude. Report that an object of luminance lum with an on-screen area of area pixels is currently displayed This information is used to determine the world adaptation luminance This method should be called during the update operations of the main loop. To be called before the drawing stage starts. Compute the luminance for an extended source with the given surface brightness. Compute the surface brightness from the luminance of an extended source. Convert quantized B-V index to float B-V. Convert quantized B-V index to RGB colors. Set the way brighter stars will look bigger as the fainter ones. Get the way brighter stars will look bigger as the fainter ones. Set the absolute star brightness scale. Get the absolute star brightness scale. Set source twinkle amount. Get source twinkle amount. Set flag for source twinkling. Get flag for source twinkling. Set flag for displaying point sources as GLpoints (faster on some hardware but not so nice). Get flag for displaying point sources as GLpoints (faster on some hardware but not so nice). Set the parameters so that the stars disapear at about the limit given by the bortle scale The limit is valid only at a given zoom level (around 60 deg) See. Get the current Bortle scale index. Get the magnitude of the currently faintest visible point source It depends on the zoom level, on the eye adapation and on the point source rendering parameters. Get the luminance of the faintest visible object (e.g. RGB<0.05) It depends on the zoom level, on the eye adapation and on the point source rendering parameters Set the value of the eye adaptation flag. Get the current value of eye adaptation flag.
http://www.stellarium.org/doc/0.10.2/classStelSkyDrawer.html
CC-MAIN-2013-20
refinedweb
375
67.96
Tips and Tricks for Developing with Windows SharePoint Services Marco Bellinaso Code Architects Srl December 2005 Applies to: Microsoft Windows SharePoint Services Microsoft Office FrontPage 2003 Summary: Learn programming techniques that can help make your. (26 printed pages) Contents Setting Default Values for Web Part Properties in the .dwp File Writing Web Parts with Preview Support for FrontPage Customizing the Web Part Shortcut Menu Using Impersonation to Access Data that the Current User Cannot Access Referring to Lists and Fields Making Updates with the AllowUnsafeUpdates Property of SPWeb Class Filtering a Collection of List Items by Using Views or CAML Queries Using the DataTable Object to Work with List Items in Memory Customizing the Context Menu of Document Library Items Conclusion About the Author Introduction Learn programming techniques that can help make your Microsoft. Setting Default Values for Web Part Properties in the .dwp File The .dwp files are XML files that are added to Microsoft Windows SharePoint Services Web Part catalogs. These files tell the SharePoint site the Web Part's name and description, which are shown on the catalog sidebar, and also the assembly and type name of the Web Part class, which is necessary to load and instantiate it. In addition to the <Title>, <Description>, <Assembly>, and <TypeName> tags that define these settings, you can add other tags named after built-in or custom Web Part properties, to specify their default value. The following code example shows the content of a sample .dwp file that defines the defaults for the built-in FrameType property (set to None, so that the Web Part does not have borders and title bar) and the custom Operator property. <?xml version="1.0" encoding="utf-8"?> <WebPart xmlns="" > <Title>Calculator</Title> <Description>A scientific online calculator</Description> <Assembly>MsdnParts</Assembly> <TypeName>MsdnParts.Calculator</TypeName> <FrameType>None</FrameType> <Operator xmlns="MsdnParts">Addition</Operator> </WebPart> The only difference between setting a built-in and a custom property is that for the latter case, you also need to specify the xmlns attribute after the property name. Its value must be the same as that used for the Namespace property of the XmlRoot attribute added to the Web Part class, as shown in the following example. [ToolboxData("<{0}:Hello runat=server></{0}:Hello>"), XmlRoot(Namespace="MsdnParts")] public class Hello : Microsoft.SharePoint.WebPartPages.WebPart { ... } Setting the default this way, instead of just from code within the class itself, allows the developer to distribute a single, compiled package. Web site administrators can then change the default values on their own, without requesting a change and recompilation of the source code. Writing Web Parts with Preview Support for FrontPage When you try to modify a Web Part from Microsoft Office FrontPage 2003, you may get an error message saying that a preview for that Web Part is not available. To add a preview for FrontPage 2003, a Web Part must implement the Microsoft.SharePoint.WebControls.IDesignTimeHtmlProvider interface. This interface exposes a single method, GetDesignTimeHtml, which returns the static HTML shown from within the editor. Often, what you can display at design time is not what the Web Part actually displays once deployed, because the output may depend on dynamic queries to a database, on the role of the current user, and on other factors. So, what you return from GetDesignTimeHtml is sample HTML, which should give the designer an idea of what the Web Part will look like. You can, however, build the resulting HTML according to the Web Part's style properties, so that at least the appearance of the Web Part—if not the content—does not change at run time. In the simplest case, you may even return a static HTML description, saying that in that place, the user will see the real Web Part. Here is an example that shows how to implement the interface to produce a preview representing a descriptive string over a yellow background. public class Calculator : Microsoft.SharePoint.WebPartPages.WebPart, Microsoft.SharePoint.WebPartControls.IDesignTimeHtmlProvider { public string GetDesignTimeHtml() { return @"<div style=""border: black 1px solid; background-color: yellow""> The definitive online calculator will be shown here...</div>"; } // the rest of the Web Part's code... } Figure 1 shows the preview from inside FrontPage 2003. Figure 1. The Web Part preview from inside FrontPage 2003 An alternative solution consists of using a static image as a preview. To do this, you set the GetDesignTimeHtml method to output an <img> tag that refers to it. This approach has an important limitation: the output is fixed. That is, it does not change according to the values of the Web Part's style properties. An advantage, however, is that you can change the preview image without modifying and recompiling the source code. If you want to try this approach, add the image to your project, set its Build Action property to "Content", and set a reference for it in the Manifest.xml file. If you have a CAB project in your Microsoft Visual Studio .NET solution and have added the Primary Project's Content Files to it, all the Web Part project's files for which the Build Action property is set to "Content" are included in the resulting .cab file, including the Manifest.xml file and all the .dwp files. When you run the Stsadm.exe tool, it reads the Manifest.xml file and deploys all the referenced resources (the preview image and other files that you may have, such as documentation or license files) to a directory named as the assembly and located in the path, /wpresources. This is something you must do by hand if you do not use Stsadm.exe and .cab packages. At this point, in the GetDesignTimeHtml method, you return the HTML code that displays the image, whose virtual path is automatically retrieved through the ClassResourcePath property of the WebPart base class, as shown in the following example. public string GetDesignTimeHtml() { return string.Format( @"<img border=""0"" src=""{0}"">", this.ClassResourcePath + "/preview.jpg"); } Customizing the Web Part Shortcut Menu By default, a Web Part's shortcut menu shows selections that allow you to minimize, restore, close, or delete the Web Part, or create a connection to another Web Part. You can customize this menu by modifying existing items (to hide or disable them, according to a custom logic) or adding new ones. To do this, you must override the CreateWebPartMenu method of the WebPart base class and access the MenuItems collection of the Web Part's WebPartMenu property. Every single menu item is represented by a MenuItem instance, which exposes properties that allow you to set the item's visibility, enabled state, title, client-side JavaScript, and a reference to a server-side event handler. It is possible to get a reference to an existing item by its index or ID. Here is a list of the default items' IDs: - MSOMenu_Minimize - MSOMenu_Restore - MSOMenu_Close - MSOMenu_ - MSOMenu_Edit - MSOMenu_Connections - MSOMenu_Export - MSOMenu_Help The next code example does the following things: Gets a reference to the "Minimize" item (ID = MSOMenu_Minimize), disables it, and sets its BeginSection property to true so that a separator line is created between it and the item above it. Creates a new menu item, added to the top of the menu, that posts back to the server when clicked. The server-side event handler sets the Web Part's Text property to a log message. Creates another new menu item, added below the previous new item, that when clicked executes client-side JavaScript code that displays a greeting message for the current user. public override void CreateWebPartMenu() { MenuItem mnuMinimize = this.WebPartMenu.MenuItems.ItemFromID( "MSOMenu_Minimize"); mnuMinimize.BeginSection = true; mnuMinimize.Enabled = false; SPWeb web = SPControl.GetContextWeb(this.Context); MenuItem mnuGreeting = new MenuItem( "Test Client-side command", string.Format( "javascript:return alert('Hi {0}, nice work!');", web.CurrentUser.Name)); MenuItem mnuPostBack = new MenuItem( "Test server-side command", "MSDNCMD1", new EventHandler(mnuPostBack_Click)); this.WebPartMenu.MenuItems.Insert(0, mnuGreeting); this.WebPartMenu.MenuItems.Insert(0, mnuPostBack); } private void mnuPostBack_Click(object sender, EventArgs e) { this.Text = "Successful postback!"; } Figure 2 shows the customized shortcut menu. Figure 2. The customized Web Part shortcut menu Using Impersonation to Access Data that the Current User Cannot Access Windows SharePoint Services uses Microsoft Windows integrated security to authenticate users. Windows SharePoint Services is based on the Microsoft ASP.NET infrastructure that allows impersonating the authenticated user so that the page requests run under the current user's context. This allows the administrator to give different permissions, at the Web and list level, to different users and groups. If the user tries to gain access to a page of a list or document library to which they do not have access, they are prompted to insert the user name and password for a user with the proper privileges. A user may also have permission to read data but not to modify it. Many security checks are done at the object model level. This means that if you access Windows SharePoint Services classes from a custom Web Part or page and you try to read or modify some data, the code runs under the context of the user sending the request. If the user does not have permissions for that operation, calls to the object model classes send back an error code that makes the browser ask for a new user name and password. At times, however, you may want to do something with the object model that the current user is not allowed to do directly. For example, you may need to read or update data from a lookup list placed in a top-level site and shared among many subsites that the user does not know and does not have access to. Alternatively, you may want to deny to all users the option to upload a file to a document library through the default user interface. You may want to build a custom page to do this and check the user's permissions against your own database. Typically, this is necessary when you create a portal that integrates multiple external services and you have some sort of external single sign-on database. In these cases, you can temporarily impersonate a user that does have all the permissions your code needs to run, and then revert to the original user as soon as your procedure completes. The LogonUser function of the Microsoft Windows API takes as input the user name, password, and domain of a Windows user and returns (as an output by-ref parameter) a token. This token is used to create an instance of the .NET System.Security.Principal.WindowsIdentity class representing that Windows user. Then, you impersonate the specified user by calling the object's Impersonate method. In the following example, the class wraps the details of the LogonUser function and exposes an easy-to-use method. class SecurityHelpers { private SecurityHelpers() {} ); public static WindowsIdentity CreateIdentity( string userName, string domain, string password) { IntPtr tokenHandle = new IntPtr(0); const int LOGON32_PROVIDER_DEFAULT = 0; const int LOGON32_LOGON_NETWORK_CLEARTEXT = 3; tokenHandle = IntPtr.Zero; bool returnValue = LogonUser(userName, domain, password, LOGON32_LOGON_NETWORK_CLEARTEXT, LOGON32_PROVIDER_DEFAULT, ref tokenHandle); if (false == returnValue) { int ret = Marshal.GetLastWin32Error(); throw new Exception("LogonUser failed with error code: " + ret); } WindowsIdentity id = new WindowsIdentity(tokenHandle); CloseHandle(tokenHandle); return id; } } The call to the Impersonate method of the WindowsIdentity class returns an instance of WindowsImpersonationContext, whose Undo method cancels the impersonation and reverts to the previous user. The following code example represents a skeleton for impersonating an administrative user, doing some actions that require high-level privileges, and canceling the impersonation. WindowsImpersonationContext wic = null; try { wic = SecurityHelpers.CreateIdentity( "AdminName", "DomainName", "AdminPwd").Impersonate(); } finally { if ( wic != null ) wic.Undo(); } Of course, you should save the user name, password, and domain settings to a configuration file (typically, web.config, in the case of a Web application) or the registry, and not hard-code these items. Then, you do not have to modify and recompile the code if you need to rename the account or impersonate a different user. An additional aspect of security in Windows SharePoint Services has to do with Code Access Security (CAS). The .NET runtime can grant different assemblies different permissions (to do I/O on some parts of the local or remote disk drive, run queries against a database, execute unmanaged API functions, and the like), according to the origin, strong name, digital certificate, or other properties of the assembly. For ASP.NET applications, and thus for Web Parts, the available permissions are defined by a policy file referred to in the machine.config file (at the machine level) or the web.config file (at the site level). Windows SharePoint Services comes with two custom policy files that are added to the standard ones: wss_minimaltrust.config and wss_mediumtrust.config. These policy files are referred to in the web.config file located in the root directory of the IIS site extended by Windows SharePoint Services (typically drive:\Inetpub\wwwroot). By default, the trust level used by Windows SharePoint Services is set to WSS_Minimal (corresponds to wss_minimaltrust.config), which defines very limited permissions. With this trust level, you cannot perform any I/O on the disk (or on the Isolated Storage), access the Windows registry and Event Log, query a database, use Reflection, and more. The WSS_Medium trust level (corresponds to wss_mediumtrust.config) allows you to do more—such as query a Microsoft SQL Server database through the SqlClient managed provider or access a Web service deployed on the same Web server—but the medium level can still be too constraining in many situations. The following code example shows an extract of web.config that defines these settings. <?xml version="1.0" encoding="utf-8"?> <configuration> <!—other SharePoint configuration settings... --> <system.web> > <trust level="WSS_Minimal" originUrl="" /> <!-- more settings... --> </system.web> </configuration> This example uses a couple of Windows API functions to impersonate a specific account, but neither of the two Windows SharePoint Services policy files includes the permissions to execute those unmanaged functions. As a developer, you have two possible solutions for this: - Write your own policy file, add a reference to it into web.config, and set the "level" attribute of the <trust> tag to the name of the new <trustLevel> entry. This approach is the safest and most flexible, because it allows you to give a particular Web Part (or maybe all Web Parts with a particular strong name or publisher) just the permissions it actually needs, and nothing more. However, it is also the most difficult to implement, because it requires you to know in detail many permission classes that have to do with CAS. - Set the "level" attribute of the <trust> tag to "Full". In this solution, you assign full-trust (permissions for doing everything) to all assemblies located under the local /bin folder. This could, of course, seem like it decreases the overall level of security, but you should also consider that when you install a Web Part into the GAC (by executing the Stsadm.exe tool with the –globalinstall parameter), you are assigning full trust to it anyway. A more comprehensive coverage of CAS is beyond the scope of this article, because it is something that has to do with ASP.NET and the .NET runtime in general, and is not specific to Windows SharePoint Services. You can find more detailed information about this topic in the following documents: - Code Access Security in the Microsoft .NET Framework Developer's Guide - Microsoft Windows SharePoint Services and Code Access Security, a technical article in the MSDN Library Referring to Lists and Fields There are two ways you can refer to a specific list from the Lists collection of an SPWeb object: by ID or by title. The ID is a GUID, and as such is unique and is generated when the list is created, so you cannot know it at design-time. Using the title seems much easier, because you can decide to title a list with a particular name, which you can expect to find at run time. However, the problem with referring to a list by title is that a Web designer or administrator can change the title of a list and thus break your custom code. The simplest and probably best solution to the problem is to save the list title in the web.config file under the IIS site's root folder, or under your custom application's root folder. You can then retrieve it from code when you need to refer to the list. Here is how to add a new setting to the web.config file. <?xml version="1.0" encoding="utf-8" ?> <configuration> <appSettings> <add key="TasksListTitle" value="CustomTasks" /> </appSettings> <system.web> ... </system.web> </configuration> Here is how you retrieve the value and get a reference to the correct list. If you are within a Web context, you can use the following SPControl object. SPWeb web = SPControl.GetContextWeb(this.Context); Otherwise, you must instantiate a Web object by using a full URL, as follows. SPWeb web = new SPSite("").OpenWeb(); string listTitle = ConfigurationSettings.AppSettings["TasksListTitle"]; SPList tasks = web.Lists[listTitle]; Every time the administrator renames a list, the administrator must also change the value in the configuration file. A slightly different approach is to store the list ID instead of the title in the web.config file, so that the administrator has to set the value only once, when the list is created. In fact, since the ID is a unique GUID, it can always (and actually does) safely remain the same, even if the list is renamed. Your setting would look like the following. <add key="TasksListID" value="dba4c9f4-5ad5-4a0e-8463-55f9c839b738" /> From your code, you retrieve the value as a string, and then create a new GUID from it, as follows. Guid listID = new Guid(ConfigurationSettings.AppSettings["TasksListID"]); SPList tasks = web.Lists[listID]; To find the ID of a list, you can write a custom ASP.NET page—to be deployed to a folder under the standard /_layouts virtual folder—that executes the following code to print the title and ID of each list for the Web site under whose context the custom page is to run. SPWeb web = SPControl.GetContextWeb(this.Context); foreach ( SPList list in web.Lists ) { Response.Write("<li>" + list.Title + " - " + list.ID.ToString() + "</li>"); } You encounter a similar problem when you refer to fields, because they also can be renamed. If you refer to them in code by using a fixed title, you may get an exception because the field is not found. Here, the solution is simpler, because each field can be identified by two means from the Fields collection of the SPList object: by title or by internal name. The title is the string that you see on the screen in the list's page. The internal name can be the same as the title, but it can also be anything else. There are two important differences between title and internal name. First, there can be multiple fields with the same title, but the internal name must be unique. Second, the title can be renamed, but the internal name cannot. Thus, it is easy to guess that identifying the field by internal name is recommended, because if you know it, you can be sure that it is not renamed. You can be sure that you refer to the right field and not to another one with the same caption. In addition, when you access the Fields indexer property and pass a string, internally this string is first interpreted as an internal name. Only if there is no field named as such is it used to search the field by title. Therefore, passing the internal name is safer and even faster than using the title. Here is how to retrieve the mappings between the titles and the internal names. The following code example shows how to loop through all the lists of the current Web and print their fields' titles, internal names, and types. SPWeb web = SPControl.GetContextWeb(this.Context); foreach ( SPList list in web.Lists ) { Response.Write("<b>" + list.Title + "</b><ul>"); foreach ( SPField field in list.Fields ) Response.Write("<li>" + field.Title + " (" + field.InternalName + ") - " + field.TypeAsString + "</li>"); Response.Write("</ul><p>"); } As an example, here is the output for the default Announcements list: - ID (ID) – Counter - Title (Title) – Text - Modified (Modified) – DateTime - Created (Created) – DateTime - Created By (Author) – User - Modified By (Editor) – User - owshiddenversion (owshiddenversion) – Integer - Attachments (Attachments) – Attachments - Approval Status (_ModerationStatus) – ModStat - Approver Comments (_ModerationComments) – Note - Edit (Edit) – Computed - Title (LinkTitleNoMenu) – Computed - Title (LinkTitle) – Computed - Select (SelectTitle) – Computed - InstanceID (InstanceID) – Integer - Order (Order) – Number - GUID (GUID) – Guid - Body (Body) – Note - Expires (Expires) – DateTime Alternatively, you can use a free tool from Navigo Systems A/S, called SharePoint Explorer, to retrieve the mappings. This freeware desktop application uses the Windows SharePoint Services object model to provide an easy-to-navigate, treeview-based representation of the virtual server/top-level site/subsites hierarchy deployed on the local server. It also displays a list of properties for each type of object, including the list IDs, the fields' titles and names, and many other properties. You can use one of the aforementioned two approaches for the built-in lists, but for custom lists, you must decide how to name and title the columns. When you first create a column, the name you specify is used for both the internal name and the title. After you have created a column, you can only rename the title. The internal name cannot contain spaces, accented letters, or other special characters, because when you create a column using one of these things, the characters are replaced with their hexadecimal representation. For example, "Due date" becomes "Due_x0020_date" and "E-mail" becomes "E_x002d_mail". You should use a simple name when you first create the column—for example "DueDate" or "Email"—so that the internal name does not get modified, and then rename the title of the column. When you refer to the column from code, you can still safely get it by passing "DueDate" and "Email" to the Fields indexer. Making Updates with the AllowUnsafeUpdates Property of SPWeb Class Updating the properties of a Web, list, or list item requires that one of the following two conditions is true: A <FormDigest> control instance is placed on the Web Form that is executing the update. This control generates a security token that is validated when the page is posted to the server and the code performing the update runs. This has the limitation that the update can be done only from within a POST request (that is, a postback in ASP.NET pages and Web Parts), and that you can, of course, place this control only on Web pages. You cannot use it from inside other types of applications such as Windows Forms or console programs. The following code shows the control reference and declaration into a Web page. <%@ Page <SharePoint:FormDigest /> ... The AllowUnsafeUpdates property of SPWeb is set to true so that the security token validation is not performed. This allows running updates from GET and POST requests and from any type of client application, thus making this the preferred approach in most situations. The following code example shows how to disable the security checks temporarily so that you can rename the current Web's title. SPWeb web = SPControl.GetContextWeb(this.Context); web.AllowUnsafeUpdates = true; web.Title = web.Title + " MOD"; web.Update(); web.AllowUnsafeUpdates = false; Note that once you enable "unsafe" updates at the Web level, you can also perform updates on its child lists and list items. Filtering a Collection of List Items by Using Views or CAML Queries Many developers retrieve content from a list just by calling the Items property of an SPList instance, or its GetItems method without any arguments. By doing this, they retrieve all the list items contained in the list, which can easily mean several thousand items. Most of the time, you only need to work with a few elements, but first retrieving the complete collection and then checking items one by one to find those that match a certain condition is definitely the wrong approach. The correct approach is to apply some sort of filtering up front so that you directly retrieve only items that you actually need to work with. That way, you do not waste database, memory, and CPU resources. You can choose from a couple of approaches to retrieving filtered results, such as using pre-built static views or using Collaborative Application Markup Language (CAML) to define one or more filter conditions. In the former case, you have a view, defined from inside the browser, that filters the data with hard-coded conditions and values. Consider, for example, the "Active Tasks" view of the Tasks lists. You can use an existing view from your code to retrieve only those items that match that view's conditions. A view is represented by the SPView class, and you get a reference to a view through the Views collection of SPWeb by specifying its title. Once you have the SPView object, you can use it as an input parameter when you call the GetItems method of SPList to get back an SPListItemCollection reference, as shown in the following example. SPWeb web = SPControl.GetContextWeb(this.Context); SPList tasks = web.Lists["Tasks"]; SPView activeTasks = tasks.Views["Active Tasks"]; SPListItemCollection items = tasks.GetItems(activeTasks); foreach ( SPListItem item in items ) { // process this task item... } This approach is acceptable in some situations; however, most of the time you must apply dynamic filters according to the user's input. In this situation, predefined views are not much help. Instead of passing an SPView object to the GetItems method, you pass an SPQuery object, which defines the filtering conditions and optionally the sorting options. The following example shows how to retrieve all the task items whose Title field contains the word "Approve". SPQuery query = new SPQuery(); query.Query = @"<Where><Contains> <FieldRef Name='Title'/><Value Type='Text'>Approve</Value> </Contains></Where>"; // retrieve the filtered items SPListItemCollection items = tasks.GetItems(query); Of course, you typically build the query string dynamically according to the user's search criteria. The query is written in CAML, the XML language used to define the schema of any Windows SharePoint Services object (site, list, field, view, and so forth). The query resides either in the site and list definition XML files (located in the path, drive:\Program Files\Common Files\Microsoft Shared\Web server extensions\60\Template) that serve as templates when creating new sites and lists, or in the content database's Lists table where the lists are configured. The structure of a simple CAML query is defined as follows. <Where><[Operator]> <FieldRef Name='[FieldTitle]'/><Value Type='[FieldType]'>[Value]</Value> </[Operator]></Where> You can replace the [Operator] placeholder with one of the following operators: - Eq = equal to - Neq = not equal to - BeginsWith = begins with - Contains = contains - Lt = less than - Leq = less than or equal to - Gt = greater than - Geq = greater than or equal to - IsNull = is null - IsNotNull = is not null Possible values that can replace [FieldType] are Boolean, Choice, Currency, DateTime, Guid, Integer, Lookup, Note, Text, User, and the others listed in the SPListItem Class****topic in the Microsoft SharePoint Products and Technologies Software Development Kit. You can also define a query that contains multiple OR/AND conditions. Here is how you can define a query with two OR conditions, to select all items whose Name field is equal to either "Tony" or "John": <Where> <Or> <Eq><FieldRef Name='Name'/><Value Type='Text'>Tony</Value></Eq> <Eq><FieldRef Name='Name'/><Value Type='Text'>John</Value></Eq> </Or> </Where> A limitation of the <Or> and <And> blocks is that they can contain just two conditions. If you want to have more, you have to define an <Or> / <And> section that contains an inner <Or> / <And> section in place of one of the two conditions. The following examples show how to add a further possible value for the previously described query. <Where> <Or> <Eq><FieldRef Name='Name'/><Value Type='Text'>Tony</Value></Eq> <Or> <Eq><FieldRef Name='Name'/><Value Type='Text'>John</Value></Eq> <Eq><FieldRef Name='Name'/><Value Type='Text'>Mary</Value></Eq> </Or> </Or> </Where> As soon as you need to build more complex queries with even more conditions and different operators, you will start to feel the need for a tool that can make things smoother to write and test. The CAML Builder tool available from U2U Community Tools is freeware that allows you to build the CAML query by selecting the fields and operators from pre-filled list boxes. You can define a single condition, join multiple conditions with AND / OR operators, and preview the results selected from the local content database. Sometimes it can be tricky to guess the correct type of list field, because it may not be immediately clear whether it is a Choice, Text, or Calculated field, or something else. In this situation, a tool like Ontolica SharePoint Explorer, introduced previously, is useful. You can also use it to look at the XML schemas of existing (built-in or custom) views, which you can use as templates for your own dynamic queries. This article provides only as much information about CAML as you need to build queries. For a more comprehensive coverage of CAML, with regard to either schema definition or output generation, refer to the Collaborative Application Markup Language section of the Microsoft SharePoint Products and Technologies Software Development Kit (SDK), or download the SharePoint Products and Technologies 2003 Software Development Kit (SDK) to work with programming tasks and samples. Using the DataTable Object to Work with List Items in Memory The SPListItemCollection class represents a collection of list items (SPListItem instances) and is returned by either the Items property of SPList, which returns the complete collection of list items, or by the GetItems method of SPList described previously. SPListItemCollection exposes a method called GetDataTable. This method returns an ADO.NET DataTable object that has the same schema as the parent SharePoint list and that is filled with the items of the SPListItemCollection instance. DataTable is useful for a number of reasons: - You can easily cache the data in memory (usually with the ASP.NET Cache, HttpApplicationState, and HttpSessionState classes), or add the table to a DataSet. If you add it to a DataSet, you can call its WriteXml method to serialize the items to disk as an XML file and retrieve them later by using the ReadXml method of the DataSet object. - You can use the ADO.NET DataView class to have multiple views on the same physical DataTable, each one with different filter conditions and sorting options. Multiple views are useful when you want to filter items or sort them without querying the database (by means of CAML queries, through the object model, as shown previously) multiple times. Using data caching and in-memory sorting and filtering can dramatically increase the performance of your application. - You can easily bind a DataTable/DataView to a DataGrid, DataList, Repeater, or any other ASP.NET data-bound control, to have the data rendered to the client from your custom pages or Web Parts. You may choose to write methods that wrap the code to retrieve the list's data and return a DataTable, and then the rest of the .aspx page would not differ at all from a typical .aspx page. The following code example shows how to retrieve a DataTable from an SPListItemCollection instance, get its default view, apply a filter to it, and output all the items of the resulting view. SPList tasks = web.Lists["Tasks"]; DataTable table = tasks.Items.GetDataTable(); DataView view = table.DefaultView; view.RowFilter = "Title LIKE 'Approve'"; for ( int i = 0; i <= view.Count-1; i++ ) Response.Write(view[i]["Title"].ToString() + "<br>"); The code can be even simpler if you have wrapper methods that hide the code to access the raw data through the Windows SharePoint Services object model. public DataTable GetTasks() { // if the DataTable is not already in cache, get a reference // to the SPList object, retrieve the DataTable from its // SPListItemCollection, put it in cache for 10 minutes, // and return it. Return it immediately if already in cache. if ( this.Cache["TaskItems"] == null ) { SPWeb web = SPControl.GetContextWeb(this.Context); SPList tasks = web.Lists["Tasks"]; DataTable table = tasks.Items.GetDataTable(); this.Cache.Insert("TaskItems", table, null, DateTime.Now.AddMinutes(10), Cache.NoSlidingExpiration); return table; } else { return (DataTable) this.Cache["TaskItems"]; } } // ... DataTable table = GetTasks(); // the rest does not change... The only limitation of using a DataTable instead of accessing the SPListItem object directly is that there is no built-in method to synchronize a modified DataTable (with edited or added rows) with the original SharePoint list. Thus, you should use this method every time you cache a list's data and need to apply various filters and sorting to it. As the following example shows, you still must retrieve the real SPListItem reference (typically through the GetItemById method of SPList, by passing in the ID value read from a DataRow/DataRowView) when you want to update it. int taskID = (int) view[rowIndex]["ID"]; SPListItem task = tasks.GetItemById(taskID); task["Title"] = task["Title"].ToString() + " MOD"; task.Update(); Response.Write(task["ID"].ToString() + " - " + task["Title"].ToString()); Customizing the Context Menu of Document Library Items Each document in a document library has a context menu with items such as Check In, Check Out, View Properties, Edit Properties, and others. This menu is created by the AddListMenuItems JavaScript function of the Ows.js file located in the path, C:\Program Files\Common Files\Microsoft Shared\web server extensions\60\TEMPLATE\LAYOUTS\1033. The function starts as shown in the following example. function AddListMenuItems(m, ctx) { if (typeof(Custom_AddListMenuItems) != "undefined") { if (Custom_AddListMenuItems(m, ctx)) return; } // ... } This code checks whether another function named Custom_AddListMenuItems exists, and if it does, it calls it. That function is not defined by Windows SharePoint Services itself but is a function that you can write. The function serves as an injection point to add your own menu items to the document's standard context menu. If the function returns a value of true, the standard AddListMenuItems exits immediately; otherwise, it adds the default items. Here is an example implementation that adds a new item called "Say hello" and a separator line. The new item command item shows a greeting message when clicked. function Custom_AddDocLibMenuItems(m, ctx) { CAMOpt(m, 'Say hello', 'alert("Hello there!"); ', '/images/greeting.gif'); CAMSep(m); return false; } As you see, you can add a new item by calling CAMOpt, which takes as input the target menu (received as a parameter by the Custom_AddDocLibMenuItems function), the name of the item, the JavaScript that runs when the item is clicked, and the URL of an image shown to the left of the item. CAMSep adds the separator line. This example is simple to write and understand but is not realistic. You typically need to add new commands according to a number of rules and conditions that make up your business logic. For example, some commands may be available only to picture documents; others may be available only if the item is not checked out or according to the current user's roles. You could also retrieve some of this information from the JavaScript code, by using DHTML DOC to read them directly from the page's HTML, or from the ctx parameter that represents the context. However, in more complex cases, you must retrieve the list of available commands from the server, because only there you can run your business logic and perhaps get the commands from a custom database. Typically, you want to do this if you are implementing a workflow solution where each document has its own process state, with commands associated to it. The solution for this situation is to have the Custom_AddDocLibMenuItems dynamically call a custom ASP.NET page. This page takes the ID of the document library and the specific item on the query string, and returns an XML string containing all the information for the commands available for that particular document. These commands are available according to the document's process status (or some other custom business logic). The returned XML may be something like the following. <?xml version="1.0" encoding="UTF-8" ?> <Commands> <Command> <Name><![CDATA[Say hello]]></Name> <ImageUrl><![CDATA[/images/greeting.gif]]></ImageUrl> <Script><![CDATA[alert('Hello there');]]></Script> </Command> ...other commands... </Commands> The sample page that generates this XML can return either one or two commands: - If the document is an image (its FieldType attribute is "GIF", "BMP", "JPG" or "PNG"), it returns a command that opens the file in a secondary browser window, without toolbars, menu bars, and status bar. - For any type of document, it returns a command that, when clicked, runs a search for that document's title on the Google search service. In reality, you could do both these things directly from the JavaScript function, but this is just an example to show how to write and call server-side logic from the client, for each document's context menu. The following code example shows the sample page's Load event handler. SPWeb web = SPControl.GetContextWeb(this.Context); Guid listID = new Guid(this.Request.Params["ListID"]); int itemID = int.Parse(this.Request.Params["ItemID"]); SPListItem item = web.Lists[listID].Items.GetItemById(itemID); string fileUrl = (web.Url + "/" + item.File.Url); string fileName = item["Name"].ToString(); this.Response.ClearHeaders(); this.Response.ClearContent(); this.Response.Cache.SetCacheability(HttpCacheability.NoCache); this.Response.AddHeader("Content-type", "text/xml" ); string cmdPattern = @"<Command> <Name><![CDATA[{0}]]></Name> <ImageUrl><![CDATA[{1}]]></ImageUrl> <Script><![CDATA[{2}]]></Script> </Command>"; this.Response.Write(@"<?xml version=""1.0"" encoding=""UTF-8"" ?>"); this.Response.Write("<Commands>"); string fileType = item["File_x0020_Type"].ToString().ToUpper(); if ( fileType == "BMP" || fileType == "GIF" || fileType == "JPG" || fileType == "PNG" ) { string jsOpenWin = "window.open('" + fileUrl + "', '', 'menubar=no,toolbar=no,status=no,scrollbars=yes,resizable=yes');"; this.Response.Write(string.Format(cmdPattern, "View in new window", Page.ResolveUrl("~/images/preview.gif"), jsOpenWin)); } string jsSearch = "location.href='" + fileName + "';"; this.Response.Write(string.Format(cmdPattern, "Search on the web", Page.ResolveUrl("~/images/search.gif"), jsSearch)); this.Response.Write("</Commands>"); this.Response.End(); At this point, Custom_AddDocLibMenuItems is written as a generic function that uses the XMLHTTP Microsoft ActiveX object to send a request to the custom ASP.NET page. The function passes the current list and document IDs to the query string and then parses the returned XML. For each <Command> element it finds, it adds a new menu item with the specified name, image, and JavaScript URL, as shown in the following example. <script language="javascript"> function Custom_AddDocLibMenuItems(m, ctx) { var request; var url = ctx.HttpRoot + "/_layouts/CustomMenuItems/GetCommands.aspx?ListID=" + ctx.listName + "&ItemID=" + currentItemID + "&DateTime=" + Date(); if ( window.XMLHttpRequest ) { request = new XMLHttpRequest(); request.open("GET", url, false); req.send(null); } else if ( window.ActiveXObject ) { request = new ActiveXObject("Microsoft.XMLHTTP"); if ( request ) { request.open("GET", url, false); request.send(); } } if ( request ) { var commands = request.responseXML.getElementsByTagName("Command"); // for each command found in the returned XML, extract the name, // image Url and script, and a new menu item with these properties for ( var i = 0; i < commands.length; i++ ) { var cmdName = commands[i].getElementsByTagName( "Name")[0].firstChild.nodeValue; var imageUrl = commands[i].getElementsByTagName( "ImageUrl")[0].firstChild.nodeValue; var js = commands[i].getElementsByTagName( "Script")[0].firstChild.nodeValue; CAMOpt(m, cmdName, js, imageUrl); } // if at least one command was actually added, add a separator if ( commands.length > 0 ) CAMSep(m); // returning false makes SharePoint render the rest of the standard menu return false; } } </script> Notice that the function appends the current date and time to the query string of the GetCommands.aspx page that is called. This is done so that each request is different from a previous one, to avoid caching of the response XML. This is necessary if you are implementing a workflow solution, and a different user may change the document's state after you load the document library's page. If you consider that the Custom_AddDocLibMenuItems function is called every time the drop-down menu pops up, this trick allows you to retrieve the commands to the real current document state, without refreshing the whole page. Figure 3 shows the customized menu. Figure 3. The new items for the document's context menu In this sample implementation, GetCommands.aspx returns commands with simple JavaScript code associated to them. However, in a real workflow, you typically return JavaScript that executes another XMLHTTP request to a page that actually performs some server-side action on the clicked document. Conclusion The Windows SharePoint Services object model allows you to have access to its services and data from your own applications and Web Parts, so that you can really integrate Windows SharePoint Services with your information processes and business logic. In this article, you learned some best practices and tricks that, together with the product documentation, can help you write faster and more reliable code, and customize small aspects of the default user interface for workflow integration. About the Author Marco Bellinaso works as a consultant, developer, and trainer for Code Architects Srl, and focuses on Web programming with ASP.NET, Windows SharePoint Services, and related technologies. He is a frequent speaker at important Microsoft conferences in Italy, and was a co-author for a number of Wrox Press books including ASP.NET Website Programming. He also writes articles for programming magazines and Web sites, and recently founded the Italian User Group for SharePoint. This article was produced in partnership with A23 Consulting.
https://docs.microsoft.com/en-us/previous-versions/office/developer/sharepoint2003/dd583162(v=office.11)
CC-MAIN-2018-34
refinedweb
6,902
53.21
Grails: execute sql script in Bootstrap This blog might help you to speed up your bootstrap process, especially when you need to populate records in tables. Earlier we used populate our database table by reading line by line from a CSV file and creating Domain Class object ad Save. But this was taking a huge time. And in our case, this data (more like a static information) was always same. So we came up with an idea – we took mysqldump for this particular table and saved it to our application’s web-app/data directory. Now the thing was, we just need to execute this “sql” file during bootstrap! Below is the example how to execute database dump files in bootstrap! import groovy.sql.Sql import org.codehaus.groovy.grails.commons.ConfigurationHolder as CH String sqlFilePath = ApplicationHolder.application.parentContext.servletContext.getRealPath("/data/table_dump.sql") String sqlString = new File(sqlFilePath).text Sql sql = Sql.newInstance(CH.config.dataSource.url, CH.config.dataSource.username, CH.config.dataSource.password, CH.config.dataSource.driverClassName) sql.execute(sqlString) Hope this helped! Cheers! Anshul Sharma GrewGreat !!!! And how executing a mysqldump in the same way ? I mean … How to execute a backup dump in a specific directory ? Thanks I am working on grails project with Database MySQL, i want to take a backup of Database through project. How to do this thing. Please give me a solution as soon as Possible. Thanks. Bhavesh Shah Hi Anshul…. Thanks foe nice post.I have one query regarding running sql script in bootstrap.. Suppose we have to run total Database backup (mydatabase.sql) which contains all create,insert queries of tables. How can we do this.? Or we have to create separate separate sql files for only insert..Please guide me.. Thanking You.. hi Anshul, thanks for the post. The Oracle databases do not run with semicolon at the end of sentence. Then I have to do a variation: String sqlString = new File(sqlFilePath).eachLine { sql.execute(it) } Separating sentences in lines and executing each one. Hope this help to Oracle victims
http://www.tothenew.com/blog/grails-execute-sql-script-in-bootstrap/?replytocom=37750
CC-MAIN-2019-18
refinedweb
343
61.63
Dart scrabble word unscrambler, for use in word games (i.e. Scrabble) or as an int32 benchmark reference. Unscrambler requires a word list, sowpods is included (official Scrabble word list, over 200000 words). The program will preprocess each word and convert it into a list of int32 values. To find matches, it will then run over all word int32 values and find matches using only 2 bitwise operations. The algo is quite fast, on my machine it can find all matching words for a given scrambled word in about 4ms. Add the unscramble package to your pubspec.yaml file: dependencies: unscramble: any void main() { final String V = 'stbalet'; // scrambled word final int numBlanks = 0; // Scrabble blank letters final String C = new File('bin/sowpods.txt').readAsStringSync(); // word list final Dictionary D = new Dictionary(C); print(match(D, V, numBlanks)); // all matches print(anagrams(D, V)); // all anagrams of the same word length } example/example.dart import 'dart:io'; import 'package:unscrambler/unscrambler.dart'; void main() { /// Fetch the English words dictionary file final allWords = new File('bin/sowpods.txt').readAsStringSync(), dictionary = new Dictionary(allWords); /// let's define a Scrabble play state: /// We have 7 letters, 6 of those are random, 1 is blank (wildcard) const rndLetters = 'paenxd', numWildcards = 1; final allWordMatches = dictionary.match(rndLetters, numWildcards); /// now we only care for words that are 7 letters large print(allWordMatches.where((bin) => bin.word.length >= 7)); /// yields (expands, spandex) /// you can also list all anagrams of any given word: print(dictionary.anagrams('battles', 0)); /// yields [batlets, battels, battles, blatest, tablets] } Add this to your package's pubspec.yaml file: dependencies: unscrambler: ^1.0.0+2 You can install packages from the command line: with pub: $ pub get Alternatively, your editor might support pub get. Check the docs for your editor to learn more. Now in your Dart code, you can use: import 'package:unscrambler/unscrambler.
https://pub.dartlang.org/packages/unscrambler
CC-MAIN-2019-18
refinedweb
312
56.96
Download presentation Presentation is loading. Please wait. Published byJevon Winchester Modified over 2 years ago 1 Data Abstraction UW CSE 190p Summer 2012 2 Recap of the Design Exercise You were asked to design a module – a set of related functions. Some of these functions operated on the same data structure – a list of tuples of measurements – a dictionary associating words with a frequency count Both modules had a common general form – One function to create the data structure from some external source – Multiple functions to query the data structure in various ways – This kind of situation is very common 3 What we’ve learned so far data structure – a collection of related data – the relevant functions are provided for you – Ex: list allows append, sort, etc. – What if we want to make our own kind of “list,” with its own special operations? module – a named collection of related functions – but shared data must be passed around explicitly – What if we want to be sure that only our own special kind of list is passed to each function? 4 What if we want to make our own kind of collection of data, with its own special operations? What if we want to be sure that only our own special kind of list is passed to each function? First attempt: Write several fn 5 Text Analysis def read_words(filename): “””Return a dictionary mapping each word in filename to its frequency””” words = open(filename).read().split() wordcounts = {} for w in words: cnt = wordcounts.setdefault(w, 0) wordcounts[w] = cnt + 1 return wordcounts def wordcount(wordcounts, word): “””Return the count of the given word””” return wordcounts[word] def topk(wordcounts, k=10): “””Return top 10 most frequent words””” scores_with_words = [(s,w) for (w,s) in wordcounts.items()] scores_with_words.sort() return scores_with_words[0:k] def totalwords(wordcounts): “””Return the total number of words in the file””” return sum([s for (w,s) in wordcounts]) # program to compute top 10: wordcounts = read_words(filename) result = topk(wordcounts, 10) 6 Quantitative Analysis import matplotlib.pyplot as plt def read_measurements(filename): “””Return a dictionary mapping column names to data. Assumes the first line of the file is column names.””” datafile = open(filename) rawcolumns = zip(*[row.split() for row in datafile]) columns = dict([(col[0], col[1:]) for col in rawcolumn return columns def STplot(measurements): “””Generate a scatter plot comparing salinity and temperature””” xs = tofloat(measurements, “salt”) ys = tofloat(measurements, “temp”) plt.plot(xs, ys) plt.show() def minimumO2(measurements): “””Return the minimum value of the oxygen measurement””” return min(tofloat(measurements, “o2”)) def tofloat(measurements, columnname): “””Convert each value in the given iterable to a float””” return [float(x) for x in measurements[columnname]] 7 Abstraction: Emphasis on exposing a useful interface. Encapsulation: Emphasis on hiding the implementation details. Information Hiding: The process by which you achieve encapsulation. Your job: Choose which details to hide and which details to expose. Terms of Art 8. Grady Booch 9 Data abstraction Data structures can get complicated We don’t want to have to say “a dictionary mapping strings to lists, where each list has the same length and each key corresponds to one of the fields in the file.” We want to say “FieldMeasurements” Why? 10 Tools for abstraction: Default Values As you generalize a function, you tend to add parameters. Downsides: – A function with many parameters can be awkward to call – Existing uses need to be updated 11 def twittersearch(query): ""”Return the responses from the query””” url = "" + query remote_file = urllib.urlopen(url) raw_response = remote_file.read() response = json.loads(raw_response) return [tweet["text"] for tweet in response["results"]] def twittersearch(query, page=1): ""”Return the responses from the query for the given page””” resource = “” qs = “?q=" + query + “&page=“ + page url = resource + qs remote_file = urllib.urlopen(url) raw_response = remote_file.read() response = json.loads(raw_response) return [tweet["text"] for tweet in response["results"]] 13 Procedural Abstraction: Similar presentations © 2017 SlidePlayer.com Inc.
http://slideplayer.com/slide/3550843/
CC-MAIN-2017-17
refinedweb
652
52.8
This is a short & sweet UUID function. Uniqueness is based on network address, time, and random. Another good point: does not create a "hot spot" when used in a b-tree (database) index. In other words, if your IDs look something like "abc100" and "abc101" and "abc102"; then they will all hit the same spot of a b-tree index, and cause the index to require frequent reorganization. On the other hand, if your IDs look more like "d29fa" and "67b2c" and "e5d36" (nothing alike); then they will spread out over the index, and your index will require infrequent reorganization. Discussion Down side: Not especially fast. I would integrate a similar C or java version of this (without the MD5) into my python code if this is a problem. socket missing... It's a tad faster if actually import socket... Beware the blanket exception. What about compliance with the UUID spec? This algorithm does not appear to generate correct UUIDs. UUIDs are officially and specifically defined as part of the ISO-11578 standard [1]. The WebDAV spec [2] also defines, in section 6.4.1, a safe way to calculate the 'node' data required by the UUID algorithm in situations where a network address is either not available or could be a security risk. I have not seen ISO-11578 but I understand that it is very similar to the algorithm defined in the UUIDs and GUIDs Internet Draft [3] which does not appear to be similar to this code at all. Am I missing something obvious here? [1] ISO/IEC 11578 - Remote Procedure Call (RPC) [2] HTTP Extensions for Distributed Authoring -- WEBDAV [3] UUIDs and GUIDs If you are on a UNIX system with the uuidgen command. You can use: import commands def uuidgen(): return commands.getoutput('uuidgen') That should be. A UUID module is now included in the standard distribution. A uuid module is included in python 2.5+. 2.3 and 2.4 users can get the module from
http://code.activestate.com/recipes/213761/
crawl-002
refinedweb
332
74.29
Nick Coghlan <ncoghlan at gmail.com> wrote: > > Jim Jewett wrote: > > That said, I'm not sure the benefit is enough to justify the > > extra complications, and your suggestion of allowing strings > > for method names may be close enough. I agree that the > > use of strings is awkward, but ... probably no worse than > > using them with __dict__ today. > > An idea that was kicked around on c.l.p a long while back was "statement local > variables", where you could define some extra names just for a single simple > statement: > > x = property(get, set, delete, doc) given: > def get(self): > try: > return self._x > except AttributeError: > self._x = 0 > return 0 > def set(self, value): > if value >= 5: raise ValueError("value too big") > self._x = x > def delete(self): > del self._x > > As I recall, the idea died due to problems with figuring out how to allow the > simple statement to both see the names from the nested block and modify the > surrounding namespace, but prevent the names from the nested block from > affecting the surrounding namespace after the statement was completed. You wouldn't be able to write to the surrounding namespace, but a closure would work fine for this. def Property(fcn): ns = fcn() return property(ns.get('get'), ns.get('set'), ns.get('delete'), ns.get('doc')) class foo(object): @Property def x(): doc = "Property x (must be less than 5)" def get(self): try: return self._x except AttributeError: self._x = 0 return 0 def set(self, value): if value >= 5: raise ValueError("value too big") self._x = value def delete(self): del self._x return locals() In an actual 'given:' statement, one could create a local function namespace with the proper func_closure attribute (which is automatically executed), then execute the lookup of the arguments to the statement in the 'given:' line from this closure, but assign to surrounding scope. Then again, maybe the above function and decorator approach are better. An unfortunate side-effect of with statement early-binding of 'as VAR' is that unless one works quite hard at mucking about with frames, the following has a wholly ugly implementation (whether or not one cares about the persistance of the variables defined within the block, you still need to modify x when you are done, which may as well cause a cleanup of the objects defined within the block...if such things are possible)... with Property as x: ... > Another option would be to allow attribute reference targets when binding > function names: *shivers at the proposal* That's scary. It took me a few minutes just to figure out what the heck that was supposed to do. - Josiah
https://mail.python.org/pipermail/python-dev/2005-October/057420.html
CC-MAIN-2016-40
refinedweb
441
62.07
package Algorithm::LUHN; $Algorithm::LUHN::VERSION = '1.02'; use 5.006; use strict; use warnings; use Exporter; our @ISA = qw/Exporter/; our @EXPORT = qw//; our @EXPORT_OK = qw/check_digit is_valid valid_chars/; our $ERROR; # The hash of valid characters. my %map = map { $_ => $_ } 0..9; =pod =head1 NAME Algorithm::LUHN - Calculate the Modulus 10 Double Add Double checksum =head1 SYNOPSIS"); =head1 DESCRIPTION". =head1 FUNCTION =over 4 =cut =item is_valid CHECKSUMMED_NUM This function takes a credit-card number and returns true if the number passes the LUHN check. Ie it) For example, C<4242 4242 4242 4242> is a valid Visa card number, that is provided for test purposes. The final digit is '2', which is the right check digit. If you change it to a '3', it's not a valid card number. Ie: is_valid('4242424242424242'); # true is_valid('4242424242424243'); # false =cut sub is_valid { my $N = shift; my $c = check_digit(substr($N, 0,length($N)-1)); if (defined $c) { if (substr($N,length($N)-1, 1) eq $c) { return 1; } else { $ERROR = "Check digit incorrect. Expected $c"; return ''; } } else { # $ERROR will have been set by check_digit return ''; } } =item check_digit NUM This function returns the checksum of the given number. If it cannot calculate the check_digit it will return undef and set $Algorithm::LUHN::ERROR to contain the reason why. =cut sub check_digit { my @buf = reverse split //, shift; my $totalVal = 0; my $flip = 1; foreach my $c (@buf) { unless (exists $map{$c}) { $ERROR = "Invalid character, '$c', in check_digit calculation"; return; } my $posVal = $map{$c}; $posVal *= 2 unless $flip = !$flip; while ($posVal) { $totalVal += $posVal % 10; $posVal = int($posVal / 10); } } return (10 - $totalVal % 10) % 10; } =item valid_chars> =E<gt> C<value>. For example, Standard & Poor's maps A..Z to 10..35 so the LIST to add these valid characters would be (A, 10, B, 11, C, 12, ...) Please note that this I<adds> or I<re-maps> characters, so any characters already considered valid but not in LIST will remain valid. If you do not provide LIST, this function returns the current valid character map. =cut sub valid_chars { return %map unless @_; while (@_) { my ($k, $v) = splice @_, 0, 2; $map{$k} = $v; } } sub _dump_map { my %foo = valid_chars(); my ($k,$v); print "$k => $v\n" while (($k, $v) = each %foo); } =back =cut 1; __END__ =head1 SEE ALSO L<Algorithm::CheckDigits> provides a front-end to a large collection of modules for working with check digits. L<Business::CreditCard> provides three functions for checking credit card numbers. L<Business::CreditCard::Object> provides an OO interface to those functions. L<Business::CardInfo> provides a class for holding credit card details, and has a type constraint on the card number, to ensure it passes the LUHN check. L<Business::CCCheck> provides a number of functions for checking credit card numbers. L<Regexp::Common> supports combined LUHN and issuer checking against a card number. L<Algorithm::Damm> implements a different kind of check digit algorithm, the L<Damm algorithm|> (Damm, not Damn). L<Math::CheckDigits> implements yet another approach to check digits. I have also written a L<review of LUHN modules|>, which covers them in more detail than this section. =head1 REPOSITORY L<> =head1 AUTHOR This module was written by Tim Ayers (). =head1 COPYRIGHT Copyright (c) 2001 Tim Ayers. All rights reserved. =head1 LICENSE This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. =cut
https://metacpan.org/release/Algorithm-LUHN/source/lib/Algorithm/LUHN.pm
CC-MAIN-2019-51
refinedweb
569
63.8
30 May 2016 A number of articles have been written about the new Configuration model in ASP.NET Core, but one of the things which does not seem to be highlighted quite often, is how it can protect you from accidently checking secrets (such as connection string passwords or OAuth keys) into source control. The have been various cases in the media over the past number of years where people have ended up on the wrong side of an Amazon Web Services bill after an unscrupulous operator have managed to get a hold of their AWS keys and used it to create EC2 instances. In ASP.NET Core this is dead-simple. Let us first look at a sample piece of code from the Startup class generated by one of the default ASP.NET Core application templates:(); } // Rest of class omitted for brevity... } As you can see in the configuration of the ConfigurationBuilder , by default the configuration will be read from 3 different sources. - The appsettings.jsonfile - The appsettings file which correlates with the current environment, e.g. appsettings.Development.json - The environment variables Configuration settings will be read from these 3 sources in order. Using the enviroment-specific appsettings file One of the first ways you can avoid checking in secrets is by using the environment-specific appsettings file and excluding that file from source control. So in the configuration specified above, if you run on your local machine and have configured the Development environment (read more about Working with Multiple Environments ), then the ASP.NET Core runtime is going to try and load settings from an optional appsettings.Development.json file. As you can see in the code snippet, this file is specified as optional, so what you can do is to specify your secret values inside this file, e.g. { "twitter": { "consumerKey": "your consumer key goes here", "consumerSecret": "your consumer secret goes here" } } This will make the configuration settings with the keys twitter:consumerKey and twitter:consumerSecret will be available inside your application. All you need to do is exlude the file from source conrol, so if you use Git then simply add the file to your .gitignore file , so it does not get checked in. You can even make it more explicit that the file contains secrets, by naming it secrets.json and excluding the secrets.json file from source control: public class Startup { public Startup(IHostingEnvironment env) { var builder = new ConfigurationBuilder() .SetBasePath(env.ContentRootPath) .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true) .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true) .AddJsonFile($"secrets.json", optional: true) .AddEnvironmentVariables(); Configuration = builder.Build(); } // Rest of class omitted for brevity... } Store them as environment variables You may have noticed the call to AddEnvironmentVariables in the code samples above. What this does is that it will load configuration values from environment variables. So using the example of the Twitter Consumer Key and Secret above, I can simple specify environment variables twitter:consumerKey and twitter:consumerSecret with the relevant values: And because of the call to AddEnvironmentVariables , the configuration settings with the keys twitter:consumerKey and twitter:consumerSecret will once again be available inside my application Use the Secret Manager tool One more (and probably the best) way in which you can do this is to actually use the Secret Manager Tool which is available as a .NET Core tool and was built specifically for this purpose. You can read the article above for more detail on exactly how to use this, but what it boils down to is that there is a .NET Core tool available which you can add to your application, called the Secret Manager Tool. You can use this tool you can specify the values for any secrets you use inside your application, and it will be stored securely on your local machine without any chance of them being checked into source control. At runtime you can use the AddUserSecrets method to load the values of the configuration variables from the secret storage:(); } A handy side-effect for our Auth0 samples Let’s quickly look again at the code for specifying the configuration sources inside our application: var builder = new ConfigurationBuilder() .SetBasePath(env.ContentRootPath) .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true) .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true) .AddEnvironmentVariables(); Configuration = builder.Build(); I mentioned before that the environment variables get loaded in the order in which the configuration sources are specified, but what I did not make clear is that all configuration sources can declare the configuration settings with the same key. What will happen in this case is that the values from a subsequent configuration source will override the values from a previous configuration source. This is useful for me when developing our Auth0 samples, because we have a clever little trick where we replace configuration values with the actual values from your Auth0 instance. Here is the contents from the configuration file of one of our samples : { "AppSettings": { "SiteTitle": "Auth0 - ASP.NET 5 Web App Sample" }, "Auth0": { "ClientId": "{CLIENT_ID}", "ClientSecret": "{CLIENT_SECRET}", "Domain": "{DOMAIN}", "RedirectUri": "" } } Do you see those values {CLIENT_ID} , {CLIENT_SECRET} and {DOMAIN} ? When you download this sample application through our documentation website, and you are signed in to your Auth0 account, we will automatically replace those with the correct values from your Auth0 instance, so you do not have to do any configuration of the application after you have downloaded it – you can just run it immediately and it is pre-configured to work with your specific Auth0 instance. Now previously when I worked on these samples to code and test them, I had to set the values for those configuration settings to the actual values. So instead of {CLIENT_ID} , I would have to specify the actual Client ID. I then also had to remember that everytime I checked a sample application in to GitHub that I once again replaced the actual Client ID I used while testing the sample application with the string {CLIENT_ID} , so our sample downloader worked correctly. From time to time I forgot to do this… With the new multiple configuration sources in ASP.NET Core, this is a thing of the past. I never have to touch the values of those configuration settings in appsettings.json again. All I do is to specify environment variables with the correct values which will then override the values in the appsettings.json file becuase of the call to AddEnvironmentVariables . So when I use them on my computer, the environment variables I use get specified, but when a user downloads the sample they will have the correct values specified in appsettings.json and I do not have to worry about messing things up by accident. 评论 抢沙发
http://www.shellsec.com/news/23704.html
CC-MAIN-2017-26
refinedweb
1,111
50.36
WarpScript has support for Apache Arrow Format. Discover what this format is with examples using WarpScript with R, Python and Spark. Apache Arrow format is an increasingly popular format for columnar in-memory data. Its core goal is to allow a fast, flexible and standardized way to share in-memory data between processes. In WarpScript, it is possible to convert an object to and from this format using the functions ->ARROW and ARROW->. These functions are available from the extension warp10-ext-arrow. Installation If you are an administrator of you warp10 instance, you can install the extension from WarpFleet: wf g io.warp10 warp10-ext-arrow --confDir=/path/to/warp10/etc/conf.d --libDir=/path/to/warp10/lib Else, ask an administrator to install it. Alternatively, you can try it on the sandbox where the extension is installed. Arrow format The format used by Apache Arrow is columnar. The columns are called fields and they are named. An object in this format is a set of fields that can be associated with some metadata. For example, an object in this format can be represented like this: In memory, its data is represented by a set of field-related buffers that can be shared between processes with zero-copy. There are two kinds of optimization that are done to represent such an object efficiently in memory. First, fields with few possible values (like the three first fields in the previous example) are dictionary-encoded. Second, NULL values are identified by a validity bit buffer that is associated with the value buffer of each field, so that they don't take more space. In order to share this format, the fields are specified by a schema. In WarpScript, this schema is defined under the hood by the ->ARROW function. To Arrow format The function ->ARROW converts a list of GTS into Arrow streaming format (a byte array). At the same time, it moves the data off-heap, so that other processes can pick up the data buffers with zero-copy. The result, a byte buffer in Arrow format, will have at least a field for the classnames. Then, it will have one field per existing label and attribute key. If there are datapoints, then there will be a field for each type of values that can be found in the input list: timestamp, latitude, longitude, elevation, double, long, string, bytes, boolean. Here is an example: We obtain a byte array that can be represented as: The input of ->ARROW was a list of three GTS. Note that a row has also been generated when a GTS was empty (Seattle). Some metadata are also encoded in the output byte array: the platform number of time unit per second, its revision and the input type. Also, note that the input list of ->ARROW can also contain GTS encoders or a mix of GTS and GTS encoders. From Arrow format The ARROW-> function takes an array of bytes as the argument. If these bytes encode an object in Arrow format, then it will output a list with two objects: a map of metadata, and a map of lists. For example, append ARROW-> at the end of the previous WarpScript to obtain: [ { "WarpScriptTimeUnitsPerSecond": "1000000", "WarpScriptVersion": "2.2.0", "WarpScriptType": "LIST" }, { "classname": [ "temperature", "temperature", "temperature", "temperature", "temperature", "temperature", "temperature", "temperature" ], "city": [ "Portland", "Portland", "Portland", "Portland", "San Francisco", "San Francisco", "San Francisco", "Seattle" ], "state": [ "Oregon", "Oregon", "Oregon", "Oregon", "California", "California", "California", "Washington" ], "DOUBLE": [ 289.63, 289.63, 289.26, 289.11, 302.82, 301.71, 302.08, null ], "timestamp": [ 1508968800000000, 1508965200000000, 1508961600000000, 1508958000000000, 1508968800000000, 1508965200000000, 1508961600000000, null ] } ] Other input types The ->ARROW function also supports other input types even though we encourage to use lists of GTS or GTS encoders as presented above. For example, you can give as input to ->ARROW a single GTS or a GTS Encoder. In this case, there will be no classname and label/attribute fields in the output but instead, they will be wrapped in its metadata. In this case, ARROW-> will be able to convert such results directly back into a GTS. More generally, you can also give as input to ->ARROW a list with a map of metadata and a map of lists, similar to the output of ARROW->. Example with R To have a working example with R, you need to be able to post a WarpScript (for example with the package warp10r available from its github repository). You also need the R arrow package (installation instructions here). In the WarpScript example above, use ->ARROW ->HEX to retrieve a hexadecimal string representation of a byte array from the WarpScript json response, then you can use it in R to efficiently retrieve GTS data: > library(warp10r) > library(arrow) > script <- "[ @senx/dataset/temperature 0 2 SUBLIST + bucketizer.mean 1508968800000000 1 h 4 ] BUCKETIZE 'data' STORE + [ $data 0 GET $data 1 GET -3 SHRINK $data 2 GET 0 SHRINK ] ->ARROW ->HEX" > ep <- "" > response <- postWarpscript(script, outputType="list", endpoint=ep) Status: 200 > reader <- RecordBatchStreamReader(hex2raw(response)) > tbl <- read_arrow(reader) > data.frame(tbl) classname city state timestamp DOUBLE 1 temperature Portland Oregon 1508968800000000 289.63 2 temperature Portland Oregon 1508965200000000 289.63 3 temperature Portland Oregon 1508961600000000 289.26 4 temperature Portland Oregon 1508958000000000 289.11 5 temperature San Francisco California 1508968800000000 302.82 6 temperature San Francisco California 1508965200000000 301.71 7 temperature San Francisco California 1508961600000000 302.08 8 temperature Seattle Washington <NA> NA Example with Python: Arrow vs Pickle Another example of a library that also supports Arrow is the Pandas library in Python. To convert a Geo Time Series into a Pandas DataFrame, we can then either use Pickle or Arrow. Which one is faster? Let's find out. You will need Warp 10 Jupyter ( pip install warp10-jupyter ) and the Py4J plugin installed on your Warp10 instance. Then start a Jupyter notebook. %load_ext warpscript %%warpscript --stack stack NEWGTS 'name' RENAME 1 1000000 <% s RAND RAND RAND 1000 * TOLONG RAND ADDVALUE %> FOR 'gts' STORE import pickle as pk import pyarrow as pa import pandas as pd WarpScript to Pandas via Arrow %%timeit -n 1 -r 10 stack.exec('$gts ->ARROW') // $gts has 1M datapoints pa.RecordBatchStreamReader(stack.pop()).read_pandas() 3.34 s ± 98.6 ms per loop (mean ± std. dev. of 10 runs, 1 loop each) WarpScript to Pandas via Pickle %%timeit -n 1 -r 10 stack.exec('$gts ->PICKLE') // $gts has 1M datapoints gts = pk.loads(stack.pop()) gts.pop('classname') gts.pop('labels') gts.pop('attributes') pd.DataFrame.from_dict(gts) 5.55 s ± 157 ms per loop (mean ± std. dev. of 10 runs, 1 loop each) As observed, using Arrow is much faster. This is due to a more compact serialization scheme and a zero-copy read of data buffers. Example with PySpark Spark also has built-in support for Apache Arrow format. To enable Arrow in Spark, you have to set 'spark.sql.execution.arrow.enabled' to 'true' in Spark configuration. Then, conversions between Pandas and Spark dataframe will be done using Arrow under the hood. Start another notebook to test that: %load_ext warpscript import pyspark import numpy as np import pandas as pd import pyarrow as pa from pyspark.sql import SparkSession from pyspark.ml.feature import VectorAssembler from pyspark.ml.clustering import KMeans import matplotlib.pyplot as plt %matplotlib inline Generate a random Geo Time Series and convert to Arrow We generate a GTS indexed from 1s to 1000s, with random latitude and longitude, without elevation nor value: %%warpscript -v NEWGTS 1 1000 <% s RAND RAND NaN NaN ADDVALUE %> FOR ->ARROW If you want more details about how to generate random GTS, follow this link. Set the Spark session and configuration to enable Arrow spark = SparkSession.builder.appName('single_machine_test').getOrCreate() spark.conf.set("spark.sql.execution.arrow.enabled", "true") Create a Spark DataFrame from GTS data pdf = pa.RecordBatchStreamReader(stack.pop()).read_pandas() # zero-copy read df = spark.createDataFrame(pdf) # also uses Arrow buffers under the hood df.show(10) +---------+----------+-----------+------+ |timestamp| latitude| longitude|DOUBLE| +---------+----------+-----------+------+ | 1000000|0.15794925|0.042424582| null| | 2000000|0.77125925| 0.19479936| null| | 3000000| 0.8854866| 0.5375395| null| | 4000000| 0.8703175| 0.02251245| null| | 5000000|0.57104236| 0.6803281| null| | 6000000|0.56812847| 0.69224846| null| | 7000000|0.77577066| 0.5169972| null| | 8000000| 0.5041205| 0.9198266| null| | 9000000| 0.5558393| 0.42196748| null| | 10000000|0.27434605| 0.4202067| null| +---------+----------+-----------+------+ only showing top 10 rows Example of using an algorithm implemented in Spark df = VectorAssembler(inputCols=["latitude", "longitude"], outputCol="features").transform(df) model = KMeans(k=4, seed=1).fit(df.select('features')) df = model.transform(df) df.select('timestamp', 'latitude', 'longitude', 'prediction').show(10) +---------+----------+-----------+----------+ |timestamp| latitude| longitude|prediction| +---------+----------+-----------+----------+ | 1000000|0.15794925|0.042424582| 0| | 2000000|0.77125925| 0.19479936| 2| | 3000000| 0.8854866| 0.5375395| 1| | 4000000| 0.8703175| 0.02251245| 2| | 5000000|0.57104236| 0.6803281| 1| | 6000000|0.56812847| 0.69224846| 1| | 7000000|0.77577066| 0.5169972| 1| | 8000000| 0.5041205| 0.9198266| 1| | 9000000| 0.5558393| 0.42196748| 2| | 10000000|0.27434605| 0.4202067| 0| +---------+----------+-----------+----------+ only showing top 10 rows Reading data back in Pandas for visualization (still using Arrow buffers under the hood) pdf = df.select('latitude', 'longitude', 'prediction').toPandas() fig = plt.scatter(pdf.latitude, pdf.longitude, c=pdf.prediction) In this simple example, we converted a Geo Time Series in an Arrow format in WarpScript, then Spark and Pandas were able to use the same Arrow buffer. The same kind of processing pipeline can also be done with R and SparklyR. Conclusion Arrow provides a powerful data representation format that enables fast communication between processes. Arrow is supported in WarpScript through the extension warp10-ext-arrow.
https://blog.senx.io/conversions-to-apache-arrow-format/
CC-MAIN-2020-45
refinedweb
1,600
58.18
function in array...thank you for your explaination...but sorry,,i don't really get your point.can you explain more. function in array....i got a problem in running this pgrm.supposely this coding is to print the output of random frequen... why the exe. file print nothing when i try to run it???#include <iostream> using namespace std; int main() {int n,even,odd; while(n>0); cout<<"pls... switch.i need to make a pgrmm using this function and have to test it for a given time .but it is not work it!why?#include <iostream> using namespace std; int main() { int x,charge; float a,b,c,; cout<<"case... help with switch!mine is not work it...#include <iostream> #include<cmath> using namespace std; int main() {int x; char time, charge; ... This user does not accept Private Messages
http://www.cplusplus.com/user/su_li/
CC-MAIN-2017-04
refinedweb
142
81.29
Subject: Re: [boost] C++17 detection From: James E. King, III (jking_at_[hidden]) Date: 2017-09-11 01:11:28 I did the same thing with smart_ptr, allowing for a transition between boost and std in the Apache Thrift project going from C++03 to C++11. Unfortunately it was a bunch of trial and error as no macro seemed to cover it completely. - Jim On Sun, Sep 10, 2017 at 8:28 PM, Niall Douglas via Boost < boost_at_[hidden]> wrote: > On 10/09/2017 20:07, Glen Fernandes via Boost wrote: > > On Sun, Sep 10, 2017 at 1:30 PM, Robert Ramey via Boost wrote: > >> How do I get ready for C++17? > >> > >> I have a piece of code which requires C++14. I want to use something > from > >> C++17 but I also want my code to work now. So I have > >> > >> #if C++14 being used > >> namespace std { > >> // implement C++ function > >> } > > > > Don't define those things inside namespace std. Instead: > > > > namespace boost { > > namespace yours { > > namespace detail { > > #if /* C++17 thing available */ > > using std::thing; > > #else > > /* Define thing yourself */ > > #endif > > } } } > > > > And then use boost::yours::detail::thing in your library. > > And here is an example of use of exactly that technique which uses > std::optional if available with the current compiler's configuration, > otherwise a conforming optional<T> implementation: > > > >
https://lists.boost.org/Archives/boost/2017/09/238567.php
CC-MAIN-2021-39
refinedweb
221
66.47
Re: High Memory Consumption of Classes and Arrays - From: "Frank Hileman" <frankhil@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> - Date: Sun, 1 May 2005 05:13:06 -0700 Rüdiger Klaehn is correct; the most efficient way to use memory in .NET is to use arrays of structs. If you don't need polymorphism on your "objects" this works very well: there are 0 bytes of overhead per struct in an array. Only the array itself has overhead. If you have too many arrays you might have to think of pooling mechanisms such as the struct linked list. Linked lists do have overhead though, even if each node is a struct in an array. The least amount of overhead is to store the representation of each "list" in your object as an index and a count into a common contiguous array of structs. This makes insertion wasteful (all higher indices invalidated), so you might only use this for a read-only list. There are hybrid list/array structures like a deque that have some of the savings of an array and flexibility of a list. It could be that your data structures are too fine-grained and you need a whole different algorithm. ArrayList is a bit wasteful if you examine it in Reflector. You should never put a struct into an ArrayList either; that boxes it and takes just as much memory as a reference type (class). Regards, Frank Hileman Animated vector graphics system Integrated Visual Studio .NET graphics editor "Christian Rattat" <groupanswers@xxxxxxxxxx> wrote in message news:ewpApJjTFHA.548@xxxxxxxxxxxxxxxxxxxxxxx > >(note that all numbers from here assume you are using a >32bit > processor, if >>you are using a 64 bit proc and runtime then references >and probably > the >>object header will have to increase). > > I have a 32 bit architecture. > >>I'm pretty sure it should be 8(which is the size of the >class header, > 12 if >>you consider the reference as well). How are you >determining the size > of a >>given object? > > I use the GC GetTotalMemory in a loop where I allocate my objects. To > prevent side-effects I do this very often and you can see that after > each object created the free memory decreased by (in my example) 52 > bytes. If I delete the objects the other way round I can measure that 52 > bytes are freed again. I don't say this is the size of the object, but > this is the size the runtime allocates for each object. > >>Then you were mistaken. First off, why would you assume >that two > collections >>are only the size of their reference? While the data >contained in > *your* >>object will only be 4 bytes, the ArrayList class's fields >take up...16 >>bytes. These bytes are taken up by > > I have never assumed that a collection is of the ref size only, haven't > I? As the collection that I use (ArrayList) does nothing else than > manging an array. I would have assumed that it just has the size of the > objects stored in an array plus maybe 4-8 bytes for management > information. > > > .. > >>And assuming that the way C++ does things is the way >everyone does > this will >>be your undoing. Learn the framework and its >idiosynchrosies instead > of >>trying to use it like another language. > > First off, I'm an 12 years experienced software developer I pretty well > know the dotnet platform but also c++, java, perl and other programming > languages. A have participated in very huge software projects with > extreme high performance requirements (Sun Enterprise 10000 clusters > with 32 processors to manage several millions of customer request per > day). So I think I don't need to discuss on that level... > > .. > >>Because there is no way to accuratly determine the size >of a managed > type in >>managed memory. The runtime is pretty much free to layout >a class in > any >>order it wants(unless you tell it otherwise, and even >then I don't > think tis >>required to for reference types) or even theoretically >adjust the > space >>taken up by a variable if it can statically determine it >safe, > therefore the >>size of an object can vary between one runtime and >another or > potentially >>even on different runs(or maybe even the same instance, >if a garbage > collect >>happens between the two sizeof calls). > > If you read the text above, you have given the answer, that there is > obviously a mechanism that calculates the size. The clr must know at a > specific point how much memory is required to place an object into > memory. So, how could any memory be allocated if there is no information > about the size of that object? Impossible, right? So finally, why isn't > that information available to the user? > My answer: poor design > > > >>Why would you say a sealed class wouldn't need >polymorphic stuff? Can > a >>sealed class not override ToString or other virtual >methods it > inherited? >>Does being sealed mean the class cannot be refered to by >a variable > with a >>base type? This statement makes no sense at all. If >sealed classes > don't >>need a vtable pointer then what would the following do?(Mind you, > string is >>sealed) > > Well, this is true. Must have been my anger that had blocked my brain > .. > > .. > >>Without knowing your architecture, I certainly can't >offer much > advice. Have >>you tried using structures instead of classes? > > Sure. Structures are of same size as classes. I also tried to > explicitely layout the class and structures, pack it (using different > sizes 1,2,4) and so on. > > > Finally, none of my problems has been solved. Fact is: > a) instances of > > public class MyClass > { > } > > which have no fields, methods (only the default ctor) > > consume 24 bytes. You can simply check it. > > b) The code > > object[] o = new object[0]; > > consumes 16 bytes. Check it, too. > > c) Due to a) and b) any class that manages a list-like structure will at > least consume 40 bytes of memory. > > > As my structure requires to have 2 of this lists each element of this > structure will at least consume > > 2 x 40 bytes + 24 bytes for the containing class itself = 104 bytes. > > Now 104 bytes for each node means for a million nodes 104000000 bytes > and still none of my data is included. > In other words this structure will need around onehundred megabyte > memory just to manage the structure. By using the ArrayList class memory > consumption increases more again. > > This really sucks. I think I'm very deep into the dotnet paradigm and > know the clr/cls and most of the concepts of dotnet in-deep as well. > > So, what do you want to tell me? Do you have any ideas how I will be > able to significantly decrease the size of the objects instead? From my > current understanding, there is no way. So what's left to say: The > concepts of the dotnet platform regarding object memory storage seem to > be poor designed. There is obviously no way to manage large collections > of small objects in memory in any application based on the dotnet > platform. In my case this means concretely that managing a million > objects with actually 24 million bytes of data consume around 480 > megabytes of memory. > > If not this is poor design then what would you call poor design? > > > > > > > > > > > > > > > > > > > > > > > > > *** Sent via Developersdex *** . - Follow-Ups: - Re: High Memory Consumption of Classes and Arrays - From: Christian Rattat - References: - Re: High Memory Consumption of Classes and Arrays - From: Daniel O'Connell [C# MVP] - Re: High Memory Consumption of Classes and Arrays - From: Christian Rattat - Prev by Date: Re: High Memory Consumption of Classes and Arrays - Next by Date: Re: High Memory Consumption of Classes and Arrays - Previous by thread: Re: High Memory Consumption of Classes and Arrays - Next by thread: Re: High Memory Consumption of Classes and Arrays - Index(es):
http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.framework.performance/2005-05/msg00006.html
crawl-002
refinedweb
1,301
70.33
Can I Interest You in 5000 Classes? Scott Swigart Swigart Consulting LLC. March 2006 Applies to: Microsoft Visual Basic 6 Microsoft Visual Basic .NET 2003 Microsoft Visual Basic 2005 Microsoft Visual Studio 2005 Summary: In the previous article (Using the .NET Framework Class Library from Visual Basic 6), you saw how it was possible to start to unlock the functionality in the .NET framework to make it available to your Visual Basic 6 applications. This time, you will see how anything in the .NET Framework can be utilized in Visual Basic 6 by creating simple wrapper classes. This can let you quickly add powerful functionality to existing Visual Basic 6 applications, without the need to rewrite those applications in .NET. (13 printed pages) Contents Prerequisites Accessing the Web Regular Expressions Putting It All Together Conclusion Resources Prerequisites To use the code for this article, you should have Visual Basic 6.0, and either Visual Studio 2005 or Visual Basic Express. Visual Basic Express is a free and relatively small download, available to all from the MSDN site. Accessing the Web In the previous article, you saw how you could use the .NET WebClient.Download file method to download a file from the Internet and save it to disk. However, the WebClient class has a lot more functionality that you might want to use. For example, you might want to download a file into a string so that you can work with it programmatically. The WebClient has the ability to do just that, through the OpenRead method, but you can't call this method directly from Visual Basic 6. To access the method, you need to write a trivial wrapper around the WebClient class. To see the wrapper in action, you can download the code associated with this article, and double-click the included "Install.bat" file. If you want to create the wrapper by hand, you can perform the steps in the following walkthrough: Walkthrough 1: Creating a wrapper for WebClient - If you're using Visual Studio Express, download the code files for this article, and copy ComClass.zip to My Documents\Visual Studio 2005\Templates\ItemTemplates\Visual Basic. This installs the ComClass item template, which makes it easy to create COM objects in Visual Basic .NET. If you have Visual Studio 2005, this template is already installed. - Start Visual Studio 2005 or Visual Basic Express. - Select the File | New Project menu command. - If you're using Visual Studio 2005, for Project Type select Visual Basic. This step is not necessary if you're using Visual Basic Express, as it automatically creates Visual Basic projects. - For Templates, select Class Library. - For Name enter NetFxWrapper, and click OK. - In the Solution Explorer, delete Class1.vb. - Select the Project | Add new item menu command. - For Template select COM Class. - For Name enter WebClientWrapper, and click Add. - At the very top of the file, enter the following lines of code: Before the "End Class" statement, enter the following function. The DownloadFileAsString function uses the .NET WebClient class to open the URL and start reading the contents. The entire contents of the URL are read into a string, and the string is returned. - Select the Build | Build NetFxWrapper menu command. This will compile the wrapper and register it as a COM object. Using this from your Visual Basic 6 application is simple. Walkthrough 2: Using the wrapper from Visual Basic 6.0 - Start Visual Basic 6.0. - In the New Project dialog select Standard Exe, and click Open. - Select the Project | References menu command. - Select NetFxWrapper, and click OK. - Add controls to the user interface so that it appears as follows. Let the controls keep their default names. Figure 1. Constructing the Visual Basic 6 user interface - Set the MultiLine property of the larger text box to True. - Double-click the Download button to generate the code for its event handler. Enter the following code. - Press F5 to run the application. - For the URL, enter. - Click Download. The text for the Web page should appear in the text box. Figure 2. Application that downloads a page using the wrapper You can see that with just a few lines of .NET code, a significant amount of new functionality became available to Visual Basic 6. At this point, it's worth dissecting the .NET wrapper code, and talking about what's required to register this code so that it can be referenced from Visual Basic 6. The first couple of lines of code import namespaces: Think of these as almost being like a global With statement. If you don't import these namespaces, then you need to prefix them to every .NET class that you want to use with the full namespace. For example, with the Imports, you can declare an instance of WebClient as: Without the imports, you would have to specify the fully qualified namespace that WebClient is part of, as in: The next section of code declares the wrapper class itself: The first line applies an attribute to the class that will make it COM callable. Next, a wrapper method is declared: Normally, the OpenRead method of the WebClient class is not directly callable from a Visual Basic 6 application. However, DownloadFileAsString wraps this method so that you can pass in a URL as a string, and get the page contents back as a string. Because it is taking and returning types that Visual Basic 6 can easily deal with, the DownloadFileAsString method can be called right from Visual Basic 6. The method doesn't have to do much. It just uses the WebClient class to open the URL, read the entire contents, and return the results. At this point, the class is complete, and ready to be used from your Visual Basic 6 application. The only tasks remaining are to compile the class and register it so that it will show up in the list of Visual Basic 6 available references. A batch file is provided, called "Build And Register.bat," which accomplishes this. If you want to examine the contents of the batch file, you will see that it does a few things. First, it installs the wrapper into the global assembly cache (GAC). The GAC is a common location at which .NET DLLs can be placed so that they are easily usable by multiple applications. Next, the registry entries are created so that your .NET class appears as a regular COM object, and a type library is created for your class. The wrapper class can now be used from Visual Basic 6, just like any COM object. From a Visual Basic 6 application, you can use the Project | References menu to add a reference to NetFxWrapper: Figure 3. Referencing the wrapper class from Visual Basic 6 The Visual Basic 6 code to use the wrapper is now trivial: You can create an instance of the WebClientWrapper, just like any other COM object, and just call the DownloadFileAsString method to download a Web page and get the results as a string. Regular Expressions Now that you have a string of potentially interesting information, you'll see how you can use regular expressions to search the string and extract data from it. Regular Expressions are classes that can be used to match patterns (such as phone numbers, e-mail addresses, zip codes, and so on) in strings and extract them. To access this powerful functionality, a simple wrapper has been provided for the .NET Regex class in the code download for this article. Imports System.Text.RegularExpressions <ComClass(RegexWrapper.ClassId, RegexWrapper.InterfaceId, RegexWrapper.EventsId)> _ Public Class RegexWrapper Private r As Regex #Region "COM GUIDs" ' These GUIDs provide the COM identity for this class ' and its COM interfaces. If you change them, existing ' clients will no longer be able to access the class. Public Const ClassId As String = "88fbf42f-26e1-4909-9c18-4694fbbbbd80" Public Const InterfaceId As String = "0ae77958-9197-4642-9cdd-821091252c85" Public Const EventsId As String = "4b973489-094e-4c2f-8a70-5d9b29c57a33" #End Region ' A creatable COM class must have a Public Sub New() ' with no parameters, otherwise, the class will not be ' registered in the COM registry and cannot be created ' via CreateObject. Public Sub New() MyBase.New() End Sub Public Sub SetExpression(ByVal patternString As String) r = New Regex(patternString) End Sub Public Function Matches(ByVal input As String) As String() Dim matchList As MatchCollection matchList = r.Matches(input) Dim matchValues(matchList.Count - 1) As String For i As Integer = 0 To matchList.Count - 1 matchValues(i) = matchList(i).Groups(0).Value Next Return matchValues End Function End Class There are two important methods in this class. The first method, SetExpression, is used to define the pattern that will be searched for. This is how you determine if this regular expression should extract numbers, e-mail addresses, and so on. The second method, Matches, returns an array of strings for all the extracted data. In other words, an array of e-mail addresses, zipcodes, and so on. The "Regex Test" project, included with this article, includes this functionality to extract a number of kinds of data from a string. To use "Regex Test," simply execute the "install.bat" file in the code download for this article. You can then open the "Regex Test" project in Visual Basic 6, and run it: Figure 4. Extracting data using a regular expression You can see in that this application is able to use a regular expression to extract information like zip codes, phone numbers, and words beginning with a capital letter from a string. The regular expression (in this case \d{5}) looks for a pattern of five consecutive numbers to find zip codes. The code for this application uses the RegexWrapper to utilize the .NET Regex class to perform the search. Private Sub cmdSearch_Click() txtResults = "" Dim r As NetFxWrapper.RegexWrapper Set r = New NetFxWrapper.RegexWrapper r.SetExpression (txtExpression) Dim s As Variant For Each s In r.Matches(txtData) txtResults = txtResults & s & vbCrLf Next End Sub Private Sub optCapWords_Click() txtExpression = "[A-Z]\w*" End Sub Private Sub optPhone_Click() txtExpression = "\d{3}-\d{3}-\d{4}" End Sub Private Sub optZipcode_Click() txtExpression = "\d{5}" End Sub You can see that the code simply sets the regular expression based on the value of txtExpression, and then calls the Matches method to get back an array of resulting matching strings. Constructing Regular Expressions You can spend literally years learning all the ins and outs of regular expressions, so just the basics will be covered here. In its simplest form, a regular expression can used to search for literal strings. For example, the regular expression Fred could be used to search a string for all occurrences of "Fred." However, regular expressions are really useful when you know what the data will "look" like, but you don't have a literal string to search for. For example, a zip code is typically five numbers, but it could be practically any five numbers. For this kind of search, you could use /d{5} This means match any number (0 - 9), exactly five consecutive times. So this would match 11111, but would not match 11X111. This kind of expression could be put together to match a phone number, as /d{3}-/d{3}-/d{4}, which means, search for three numbers, followed by a dash, followed by three more numbers, followed by another dash, followed by four numbers. Regular expressions can be made much more sophisticated to, for example, match a five or nine digit zip code, or match a phone number that follows a variety of patterns (such as (503)111-1111). There is extensive information about regular expressions online, and in a variety of books. Putting It All Together It's now time to use these classes to accomplish something I found personally useful. One problem that I frequently encounter is that sites will often post a set of useful slides or applications, but each slide or application will be posted as a separate file. For example, when Microsoft posted the slides from the Professional Developers Conference (PDC), they posted each PowerPoint as a separate download. I just didn't want to have to click on each one, and download it individually. Also, the Tablet PC PowerToys are all posted as separate downloads. "Wouldn't it be nice," I thought, "if I could build an application that would search the Web page and download all files of a specific type." With the WebClient and Regex classes, you have all the tools you need. WebClient can be used to download the page as a string. Regex can be used to search all links to files of a specific type, and then WebClient can be used again to perform the actual file downloads. First, the WebClient class is used to download the page containing the links, and then a regular expression is used to extract the link href information. The results are returned as an array of strings. Private Function GetHrefs(url As String) As String() ' ' Download the source page ' Dim wc As WebClientWrapper Set wc = New WebClientWrapper Dim file As String file = wc.DownloadFileAsString(url) ' ' Use a regular expression to extract the hrefs from ' the page. ' Dim r As RegexWrapper Set r = New RegexWrapper r.SetIgnoreCase r.SetExpression ("(?<=href\s*=\s*[""']).*?(?=[""'])") GetHrefs = r.Matches(file) End Function You can see that the first part of the function downloads the URL as a string. The next part of the function creates an instance of the Regex class and specifies the expression. This expression is a little complex, but given a string such as <a href=''>, this regular expression would return just the "" part. If you use this against an entire Web page, you get all the URLs of the anchor tags. At this point, the application has what it needs to start downloading all the documents referenced on the page: For Each url In urls If Right(url, 4) = ".ppt" Or Right(url, 4) = ".zip" Or _ Right(url, 4) = ".doc" Or Right(url, 4) = ".pdf" Or _ Right(url, 4) = ".exe" Then ' Get the filename from the end of the URL ' Dim fileName As String fileName = Right(url, Len(url) - InStrRev(url, "/")) fileName = Replace(fileName, "%20", " ") ' ' Download files ' If Not f.FileExists(dest & "\" & fileName) Then txtFileList = txtFileList & "Downloading " & _ fileName & vbCrLf DoEvents Dim w As WebClient Set w = New WebClient w.DownloadFile url, dest & "\" & fileName End If End If Next Here the application loops through all the URLs, and examines them to see if they point to a desired file type. In this case, that's any PowerPoint, Word Document, PDF, ZIP, or executable file. The application then uses the WebClient class to download the file. Conclusion This application would have been quite challenging to write with just Visual Basic 6. With the .NET Framework and some simple wrapper classes, the needed functionality became immediately available. WebClient and Regex are just a couple examples of the literally thousands of classes in the Framework that can speed your development. By creating simple wrapper classes, the full power of the Framework is at your disposal. Resources Deploying Hybrid Visual Basic 6 and Visual Basic .NET Applications.
http://msdn.microsoft.com/en-us/library/Aa719105
CC-MAIN-2014-15
refinedweb
2,527
63.7
27 January 2012 09:48 [Source: ICIS news] By Nurluqman Suratman and Samuel Wong ?xml:namespace> In December 2011, About 70% of In terms of volume, the country’s overseas shipments of most petrochemical products had an annual increase in December, data from the For aromatics, the exports of benzene surged by 45.4%, while toluene shipments more than doubled to 106,250 tonnes. (please see table below) Paraxylene (PX) exports more than doubled to 168,258 tonnes, while shipments of naphtha increased by 44.5%, it showed. While export volumes of petrochemicals may continue to grow in the next few months, the outlook on prices is not as buoyant despite the high values of upstream crude and naphtha, said Kim of Woori Securities. “Despite strong export numbers polymer prices have remained almost stagnant since November [2011],” Kim said. Over the next six months, a double-digit growth in petrochemical exports posted in December is highly unlikely to be seen, since even the Chinese economy is slowing down, said “We forecast a recovery phase for the petrochemical industry [when] Asian economies start to head into an uptrend from the second half of 2012,” Park said. Growth in A shift to a more loose monetary policy in the country is widely expected this year, in support of economic growth, as inflation has been contained. “ For the whole of 2011, Its export of petroleum products, meanwhile, surged 63.9% to $51.7bn, the data showed. “Even amid external uncertainties, including political unrest in the Middle East and the devastating earthquake
http://www.icis.com/Articles/2012/01/27/9527292/s-korea-h1-petrochemical-exports-to-slow-down-on-poor-demand.html
CC-MAIN-2014-35
refinedweb
258
60.55
This glossary is taken from my book Object-Oriented Programming with Java: An Introduction, published by Prentice Hall. I would welcome corrections or requests for additional terms. David Ashby has kindly provided both a PDF version and a a Word version of this glossary. c:\Java\bin\javac.exeSee relative filename. abstractreserved word in its header. Abstract classes are distinguished by the fact that you may not directly construct objects from them using the newoperator. An abstract class may have zero or more abstract methods. abstractreserved word in its header. An abstract method has no method body. Methods defined in an interface are always abstract. The body of an abstract method must be defined in a sub class of an abstract class, or the body of a class implementing an interface. java.awtpackages. Included are classes for windows, frames, buttons, menus, text areas, and so on. Related to the AWT classes are those for the Swing packages. privateattribute of a class. By convention, we name accessors with a getprefix followed by the name of the attribute being accessed. For instance, the accessor for an attribute named speedwould be getSpeed. By making an attribute private, we prevent objects of other classes from altering its value other than through a mutator method. Accessors are used both to grant safe access to the value of a private attribute and to protect attributes from inspection by objects of other classes. The latter goal is achieved by choosing an appropriate visibility for the accessor. // Create an anonymous array of integers. YearlyRainfall y2k = new YearlyRainfall( new int[]{ 10,10,8,8,6,4,4,0,4,4,7,10,});An anonymous array may also be returned as a method result. quitButton.addActionListener(new ActionListener(){ public void actionPerformed(ActionEvent e){ System.exit(0); } }); private Point[] vertices = { new Point(0,0), new Point(0,1), new Point(1,1), new Point(1,0), };See anonymous class, as these often result in the creation of anonymous objects. Appletor JAppletclasses. They are most closely associated with the ability to provide active content within Web pages. They have several features which distinguish them from ordinary Java graphical applications, such as their lack of a user-defined main method, and the security restrictions that limit their abilities to perform some normal tasks. +, -, *, /and %take arithmetic expressions as their operands and produce arithmetic values as their results. +, -, *, /and %, that produce a numerical result, as part of an arithmetic expression. int[] pair = { 4, 2, };is equivalent to the following four statements. int[] pair; pair = new int[2]; pair[0] = 4; pair[1] = 2; =) used to store the value of an expression into a variable, for instance variable = expression;The right-hand-side is completely evaluated before the assignment is made. An assignment may, itself, be used as part of an expression. The following assignment statement stores zero into both variables. x = y = 0; int[] numbers;the base type of numbersis int. Where the base type is a class type, it indicates the lowest super type of objects that may be stored in the array. For instance, in Ship[] berths;only instances of the Shipclass may be stored in berths. If the base type of an array is Object, instances of any class may be stored in it. 0and 1are used. Digit positions represent successive powers of 2. See bit. +, -, *, /and %, and the boolean operators &&, ||and ^, amongst others. 0and 1. Bits are the fundamental building block of both programs and data. Computers regularly move data around in multiples of eight-bit units (bytes for the sake of efficiency). &, |and ^, that are used to examine an manipulate individual bits within the bytes of a data item. The shift operators, <<, >>and >>>, are also bit manipulation operators. {and }). For instance, a class body is a block, as is a method body. A block encloses a nested scope level. booleantype has only two values: trueand false. boolean, i.e. gives a value of either trueor false. Operators such as &&and ||take boolean operands and produce a boolean result. The relational operators take operands different types and produce boolean results. java.lang, java.ioand java.iopackages. IndexOutOfBoundsExceptionexception being thrown. .classfiles. numbers Arrays.sort(numbers);The sortmethod will change the order of the values stored in the object referred to by numbers. However, it is impossible for the sortmethod to change which array numbersrefers to - a sorted copy, for instance. Some languages provide an argument passing semantics known as call-by-reference, in which an actual argument's value may be changed. Java does not provide this, however. \rcharacter. Also used as a synonym for the `Return' or `Enter' key used to terminate a line of text. The name derives from the carriage on a mechanical typewriter. 'A') or lower-case (e.g., 'a'). ClassCastExceptionexception will be thrown for illegal ones. finaland static. extendsa super class or implementsany interfaces. privateaccess modifier. public static void main(String[] args)The arguments are stored as individual strings. ~, is used to invert the value of each bit in a binary pattern. For instance, the complement of 1010010is 0101101. ?:) is used in the form bexpr ? expr1 : expr2where bexpris a boolean expression. The the boolean expression has the value truethen the result of the operation is the value of expr1, otherwise it is the value of expr2. public class Ship { public Ship(String name){ ... } ... }A class with no explicit constructor has an implicit no-arg constructor, which takes no arguments and has an empty body. public class Point { // Use p's attributes to initialize this object. public Point(Point p){ ... } ... }The argument is used to define the initial values of the new object's attributes. synchronizedmethods or statements. --) that adds one to its operand. It has two forms: pre-decrement ( --x) and post-decrement ( x--). In its pre-decrement form, the result of the expression is the value of its argument after the decrement. In its post-decrement form, the result is the value of its argument before the decrement is performed. After the following, int a = 5, b = 5; int y,z; y = --a; z = b-- yhas the value 4and zhas the value 5. Both aand bhave the value 4. double, float, int, longand short. The remaining three are used to representing single-bit values ( boolean), single byte values ( byte) and two-byte characters from the ISO Unicode character set ( char). 0to 9are used. Digit positions represent successive powers of 10. int numStudents = 23; Ship argo = new Ship(); Student[] students = new Student[numStudents];Instance variables that are not explicitly initialized when they are declared have a default initial value that is appropriate to their type. Uninitialized local variables have an undefined initial value. booleanvariables have the value false, charvariables have the value \u0000and object references have the value null. The initial values of local variables are undefined, unless explicitly initialized. false. The statements in the loop body will always be executed at least once. // Downcast from Object to String String s = (String) o;See upcast. privateand channeling access to them through accessor and mutator methods. public interface States { public static final int Stop = 0, Go = 1; }However, the compiler type checking usually available with enumerated types is not available with this form. Throwableclass. See checked exception and unchecked exception. ^) is both a boolean operator and a bit manipulation operator. The boolean version gives the value trueif only one of its operands is true, otherwise it gives the value false. Similarly, the bit manipulation version produces a 1bit wherever the corresponding bits in its operands are different. DataInputStreamand DataOutputStream. finalreserved word in its header. A final class may not be extended by another class. finalizemethod is called. This gives it the opportunity to free any resources it might be holding on to. finalreserved word in its header. A final method may not be overridden by a method defined in a sub class. finalreserved word in its declaration. A final may not assigned to once it has been initialized. Initialization often takes place as part of its declaration. However, the initialization of an uninitialized final field (known as a blank final variable) may be deferred to the class's constructor, or an initializer. true. The third expression is evaluated after each completion of the loop's body. The loop terminates when the termination test gives the value false. The statements in the loop body might be executed zero or more times. package oddments; class Outer { public class Inner { ... } ... }The fully qualified name of Inneris oddments.Outer.Inner +, are fully evaluating. In contrast, some boolean operators, such as &&, are short-circuit operators. HashMap. hashValuemethod, inherited from the Objectclass, to define their own hash function. 0to 9and the letters Ato Fare used. Arepresents 10 (base 10), Brepresents 11 (base 10), and so on. Digit positions represent successive powers of 16. if(boolean-expression){ // Statements performed if expression is true. ... } else{ // Statements performed if expression is false. ... }It is controlled by a boolean expression. See if statement. if(boolean-expression){ // Statements performed if expression is true. ... }It is controlled by a boolean expression. See if-else statement. Stringclass are immutable, for instance - their length and contents are fixed once created. ++) that adds one to its operand. It has two forms: pre-increment ( ++x) and post-increment ( x++). In its pre-increment form, the result of the expression is the value of its argument after the increment. In its post-increment form, the result is the value of its argument before the increment is performed. After the following, int a = 5, b = 5; int y,z; y = ++a; z = b++ yhas the value 6and zhas the value 5. Both aand bhave the value 6. Ycalling method X, when an existing call from Xto Yis still in progress. false. Sometimes this is a deliberate act on the part of the programmer, using a construct such as while(true) ...or for( ; ; ) ...but it can sometimes be the result of a logical error in the programming of a normal loop condition or the statements in the body of the loop. private, is one of the ways that we seek to promote information hiding. Objectclass is the ultimate ancestor of all classes - at the top of the hierarchy. Two classes that have the same immediate super class can be thought of as sibling sub classes. Multiple inheritance of interfaces gives the hierarchy a more complex structure than that resulting from simple class inheritance. byte, short, intand longare used to hold integer values within narrower or wider ranges. InterruptedExceptionobject being received by the interrupted thread. Waiting for an interrupt is an alternative to polling. Iteratorand ListIteratorinterfaces. <<) is a bit manipulation operator. It moves the bits in its left operand zero or more places to the left, according to the value of its right operand. Zero bits are added to the right of the result. &&, ||, &, |and ^that take two boolean operands and produce a boolean result. Used as part of a boolean expression, often in the condition of a control structure. 12could mean many different things - the number of hours you have worked today, the number of dollars you are owed by a friend, and so on. As far as possible, such values should be associated with an identifier that clearly expresses their meaning. final int maxSpeed = 50;If stored in a final variable, it is unlikely that any execution overhead will be incurred by doing so. public static void main(String[] args) void. privateattribute of a class. By convention, we name mutators with a setprefix followed by the name of the attribute being modified. For instance, the mutator for an attribute named speedwould be setSpeed. By making an attribute private, we prevent objects of other classes from altering its value other than through its mutator. The mutator is able to check the value being used to modify the attribute and reject the modification if necessary. In addition, modification of one attribute might require others to be modified in order to keep the object in a consistent state. A mutator method can undertake this role. Mutators are used both to grant safe access to the value of a private attribute and to protect attributes from modification by objects of other classes. The latter goal is achieved by choosing an appropriate visibility for the mutator. \ncharacter. newoperator publicaccess. Its role is purely to invoke the no-arg constructor of the immediate super class. \u0000character. Care should be taken not to confuse this with the nullreference. nullreference object, usually via the newoperator. When an object is created, an appropriate constructor from its class is invoked. argo Ship argo;is capable of holding an object reference, but is not, itself, an object. It can refer to only a single object at a time, but it is able to hold different object references from time to time. 0to 7are used. Digit positions represent successive powers of 8. \ddd, where each d is an octal digit. This may be used for characters with a Unicode value in the range 0-255. -, ==or ?:taking one, two or three operands and yielding a result. Operators are used in both arithmetic expressions and boolean expressions. readmethod of InputStreamreturns -1to indicate that the end of a stream has been reached, for instance, instead of the normal positive byte-range value. public, protectedor privateaccess modifier {access!modifier} have package visibility. Public classes and interfaces may be imported into other packages via an import statement. package java.lang; Iteratorencapsulate a pattern of access to the items in a collection, while freeing the client from the need to know details of the way in which the collection is implemented. PipedInputStreamand PipedOutputStream. waitand notifymechanism associated with threads. class Rectangle extends Polygon implements Comparablean object whose dynamic type is Rectangle can behave as all of the following types: Rectangle, Polygon, Comparable, Object. x+y*z, the multiplication is performed before the addition because *has a higher precedence than -. boolean, byte, char, double, float, int, longand short. protectedaccess modifier. Such a member is accessible to all classes defined within the enclosing package, and any sub classes extending the enclosing class. publicaccess modifier. All such members are visible to every class within a program. 5/3, 5is the dividend and 3is the divisor. This gives a quotient of 1and a remainder of 2. Readerabstract, defined in the java.iopackage. Reader classes translate input from a host-dependent character set encoding into Unicode. See Writer class. doubleand floatare used to represent real numbers. public static void countDown(int n){ if(n >= 0){ System.out.println(n); countDown(n-1); } // else - base case. End of recursion. }See direct recursion, indirect recursion and mutual recursion for the different forms this can take. Classclass, and other classes in the java.lang.reflectpackage. Reflection makes it possible, among other things, to create dynamic programs. <, >, <=, >=, ==and !=, that produce a boolean result, as part of a boolean expression. ../bin/javac.exeA relative filename could refer to different files at different times, depending upon the context in which it is being used. See absolute filename. class, int, public, etc. Such words may not be used as ordinary identifiers. voidreturn type may only have return statements of the following form return;A method with any other return type must have at least one return statement of the form return expression;where the type of expressionmust match the return type of the method. voidin public static void main(String[] args)or Point[]in public Point[] getPoints() >>) is a bit manipulation operator. It moves the bits in its left operand zero or more places to the right, according to the value of its right operand. The most significant bit from before the shift is replicated in the leftmost position - this is called sign extension. An alternative right shift operator ( >>>) replaces the lost bits with zeros at the left. ivaris an intvariable, the following statement is syntactically correct ivar = true;However, it is semantically incorrect, because it is illegal to assign a booleanvalue to an integer variable. &&) and logical-or ( ||) operators are the most common example, although the conditional operator ( ?:) also only ever evaluates two of its three operands. See fully evaluating operator. 1bit indicates a negative number, and a 0bit indicates a positive number. bytevariable contains the bit pattern, 10000000. If this is stored in a shortvariable, the resulting bit pattern will be 1111111110000000. If the original value is 01000000, the resulting bit pattern will be 0000000001000000. // This line will be ignored by the compiler. ;) is used to indicate the end of a statement. staticreserved word. A static initializer is defined outside the methods of its enclosing class, and may only access the static fields and methods of its enclosing class. staticreserved word in its header. Static methods differ from all other methods in that they are not associated with any particular instance of the class to which they belong. They are usually accessed directly via the name of the class in which they are defined. staticreserved word in its header. Unlike inner classes, objects of static nested classes have no enclosing object. They are also known as nested top-level classes. staticvariable defined inside a class body. Such a variable belongs to the class as a whole, and is, therefore, shared by all objects of the class. A class variable might be used to define the default value of an instance variable, for example, and would probably also be defined as final, too. They are also used to contain dynamic information that is shared between all instances of a class. For instance the next account number to be allocated in a bank account class. Care must be taken to ensure that access to shared information, such as this, is synchronizedwhere multiple threads could be involved. Class variables are also used to give names to application-wide values or objects since they may be accessed directly via their containing class name rather than an instance of the class. Stringclass. Strings consist of zero or more Unicode characters, and they are immutable, once created. A literal string is written between a pair of string delimiters ( "), as in "hello, world" extendsits super class. A sub class inherits all of the members of its super class. All Java classes are sub classes of the Objectclass, which is at the root of the inheritance hierarchy. See sub type Objectclass as a super class. See super type. javax.swingpackages. They provide a further set of components that extend the capabilities of the Abstract Windowing Toolkit (AWT). Of particular significance is the greater control they provide over an application's look-and-feel. switch(choice){ case 'q': quit(); break; case 'h': help(); break; ... default: System.out.println("Unknown command: "+choice); break; } // Initialise with default values. public Heater() { // Use the other constructor. this(15, 20); } // Initialise with the given values. public Heater(int min,int max) { ... } public Heater(int min,int max) { this.min = min; this.max = max; ... } talker.talkToMe(this); Threadclass in the java.langpackage. public int find(String s) throws NotFoundException throw new IndexOutOfBoundsException(i+" is too large."); trueand false, onand off, or 1and 0. try{ statement; ... } catch(Exception e){ statement; ... } finally{ statement; ... }Either of the catch clause and finally clause may be omitted, but not both. 1bit indicates a negative number, and a 0bit indicates a positive number. A positive number can be converted to its negative value by complementing the bit pattern and adding 1. The same operation is used to convert a negative value to its positive equivalent. -, +, !, !, ++and --.) a name (e.g. index.html) and a scheme (e.g. http). // Upcast from VariableController to HeaterController VariableController v; ... HeaterController c = v;See downcast. Java's rules of polymorphism mean that an explicit upcast is not usually required. null. A variable of a class type acts as a holder for a reference to an object that is compatible with the variable's class type. Java's rules of polymorphism allow a variable of a class type to hold a reference to any object of its declared type or any of its sub types. A variable with a declared type of Object, therefore, may hold a reference to an object of any class, therefore. 80is the well-known port number for servers using the HyperText Transfer Protocol (HTTP). false. The statements in the loop body might be executed zero or more times. java.langpackage. They consist of a class for each primitive type: Boolean, Byte, Character, Double, Float, Integer, Longand Short. These classes provide methods to parse strings containing primitive values, and turn primitive values into strings. The Doubleand Floatclasses also provide methods to detect special bit patterns for floating point numbers, representing values such as NaN, +infinity and -infinity. Writerabstract, defined in the java.iopackage. Writer classes translate output from Unicode to a host-dependent character set encoding . See Reader class.
https://www.cs.kent.ac.uk/people/staff/djb/book/glossary.html
CC-MAIN-2019-22
refinedweb
3,453
58.69
Information gathering 4 files By argumentumgenerates a function that returns all results as an array from the WMI query. 2015.11.28 added a StatusBar. ( was not easy to resize in win10 ) fixed COM handler in generated code 2015.06.11 added a "nice" COM error handler ( in Autoit v3.2 if there is a COM error it'll tell you and no more run, in v3.3 it will let it slide but you don't realize there was one. So I put together a COM error handler that will gather all the errors and show'em to you in an array displayed by _ArrayDisplay. It includes the line number and the line of code it self. nice. ) added the Scite lexer. ( There are code generated that is over 6000 lines long and that is a bit too much for the edit control, so, I decided that since is gonna run in a PC that most likely is to have ScITE, using the DLL just makes sense. Everything that is needed is taken from your installation, colors, fonts, etc. In case that ScITE is not there, then, the edit control would be used. ) 2015.06.08 changed the CIMv2 button to switch between CIMv2 and WMI. ( is a more practical use of the button ) added some support for remote connections. ( executes remotely based in the classes discovered in local PC ) added Save to Disk for the filter by right-click the button. ( is anoying having to set it every time ) fix CPU usage was higher than needed in the main loop. ( ..less abuse on the PC ) added the position in the array to the "select properties". ( when an error pops up, the position is there, making it easier to find it in the listview ) 2015.05.25 fixed "Send to ScITE" ( wasn't working well ) added the ability to remove fields/properties from the generated arrays 2015.05.16 fixed the combobox not working everywhere. added setting to, in addition to Dynamic Classes, to include Abstract Classes. added a filter ( easyer to look for what you need ). 2015.05.15 added custom default setting for display array limit. added custom GoogleIt search ( @SelectedClass is the macro word ). added cache for the Class too ( since is much faster and there is no need to discover every time it runs ). change cache from an entry in the ini to a file of its own and created a subfolder for them when in portable mode ( cleaner that way ). changed the function generation to not have to pass an integer. changed function names when longer than 60 characters ( Au3Check don't like infinitely long names ). changed how F5 works. Now F5 runs and ESC stops. changed code generation from "$Output = $Output &" to $sReturn &=". added \root\namespace:class to the title bar. added a class description above the list of methods ( it just makes sense ). change the default spacing of Array_Join() ( to better identify it was joined ). added a watcher for "Autoit error", to move it to the current screen ( ANSI version ). fixed "Send To ScITE" incomplete send ( it was too much at once ). added to "Send To ScITE" the option to send to a new tab or cursor position. added to the ini,) for the user to change the default color and style changed the debug info. to ToolTip, to be better aware of running status. changed the way it writes TEXT and HTML to be so every 100 records. added the ability to open the files from failed or prior runs by double-clicking the radio button. changed file creation naming format to better identify them. 2015.05.06 Better readability for HTML and Text outputs. Left the state of the source code ready for the current version. Added a v3.3.12.0 compiled version ( better behavior under Win 8.1 and 10 ) and renamed the ANSI version. 2015.05.05 Fix a logic that would say fail when it was a limited listing of Namespaces, it now tells that the listing is limited. 2015.05.04 Added announcement of Admin. rights ( user might not know ) rethought the "_Array2D_toHtml.au3" to be an #include ( as it should have been ) reworked the Namespace combobox loading went from MsgBox and ToolTip to TrayTip and TrayIconDebug ( less intrusive ) www search now "Google it" ( MSDN change the links format ) Fixed the compiled _ArrayDisplay from displaying off center 2015.05.03 Added an array rotator ( at times I'd rather see it rotated ) Added an array to html ( to see the array on the browser ), that also can be rotated ( it uses the array rotator ) 2015.05.02 Enable Ctrl-A, C, V, X, Z to use the keyboard Prettified the output a bit more and corrected some typos. 2015.05.01 And added press F5 like in ScITE, ( my finger just goes there ) to the Run button. Also a "STOP" to the run ( at times I mess up and need to ProcessClose ) And set $MB_TOPMOST to the MsgBox ( so I don't loose them from sight ) And made the output of the array to be a function ( easier to use in a script ) And prettified the GUI a bit, not much. ==== EoF ==== In the zip is the source code for the current version and an AutoIt v3.2.12.1 ANSI compiled file that should run on any Windows version. 663 downloads Updated By TheDcoderPlease refer to the main thread 106 downloads Updated By UEZis a small tool in widget style to show the clock, current cpu usage, cpu speed, memory usage and network activity (tcp, ip and udp). Additionally you can use it as an alarm clock (to stop alarm clock tone press the left LED (mail) or wait 60 seconds). The current cpu usage code is beta and might be not working for some CPU models! Autoit SysInfo Clock should work with all operating systems beginning from Windows XP. Br, UEZ This project is discontinued! 895 downloads Updated By JonThis! 22,819 downloads Updated
https://www.autoitscript.com/forum/files/category/13-information-gathering/
CC-MAIN-2017-13
refinedweb
1,001
73.27
From: Aleksey Gurtovoy (agurtovoy_at_[hidden]) Date: 2003-06-12 05:19:21 Hi Eric, First of all, thanks for the report! Eric Friedman wrote: > I've found that mpl::is_sequence fails to operate correctly on > certain types under MSVC7. To be precise, on class types that have a member named 'begin' that is not a typename. > I haven't tested extensively, but there certainly seems to be some > problem with class templates from namespace std. (The problem likely > extends to other types in other namespaces, and perhaps other > compilers, but I have not investigated the issue thoroughly enough.) Nope, it doesn't have anything to do with namespaces, see the above. The affected compilers are MSVC 6.5 and 7.0. > > In particular, this is posing a problem for me in incorporating variant > into the next Boost release. Is this a known problem? Nope, it wasn't. > Any insight welcome. Fixed in the CVS. Aleksey Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2003/06/48959.php
CC-MAIN-2022-40
refinedweb
177
70.39
19 September 2012 07:27 [Source: ICIS news] SINGAPORE (ICIS)--?xml:namespace> The company has completed unit tests at the plant on 8 September, the source said. The plant is equipped with a 4,500 cbm storage tank and will mainly consume feedstock gas supplied by PetroChina’s Changqing Oilfield, the source added. “Output from the LNG plant will be supplied to households and our LNG-refuelling stations in The company owns one LNG-refuelling station in the city and is planning to build more in future, according to the source. Baotou-based Xinyuan Natural Gas largely engages in related businesses of LNG production and
http://www.icis.com/Articles/2012/09/19/9596642/chinas-xinyuan-natural-gas-to-run-trials-in-lng-unit-late-sept.html
CC-MAIN-2014-15
refinedweb
105
60.24
Need help cloning? Learn how to clone a repository. Atlassian SourceTree is a free Git and Mercurial client for Windows. Atlassian SourceTree is a free Git and Mercurial client for Mac. to download and install the scons-{version}.tar.gz or scons-{version}.zip package rather than to work with the packaging logic in this tree. -To the extent that this tree is about building SCons packages, the -*full* development cycle (enforced by Aegis) is not -- (Optional.) Install from a pre-packaged SCons package that does not require distutils: - Red Hat Linux scons-0.96.96.noarch.rpm + Red Hat Linux scons-0.97.noarch.rpm Debian GNU/Linux use apt-get to get the official package - Windows scons-0.96.96.win32.exe + Windows scons-0.97.win32.exe -- (Recommended.) Download the latest distutils package from the following URL: You can also execute the local SCons directly from the src/ subdirectory by first setting the SCONS_LIB_DIR environment variable to the local -src/engine subdirectory, and then execute the local src/script/scons.py +src/engine subdirectory, and then executing the local src/script/scons.py script to populate the build/scons/ subdirectory. You would do this as follows on a Linux or UNIX system (using sh or a derivative like bash or ksh): By default, the above commands will do the following: - -- Install the version-numbered "scons-0.96.96" and "sconsign-0.96.96" + -- Install the version-numbered "scons-0.97" and "sconsign-0.97" scripts in the default system script directory (/usr/bin or C:\Python*\Scripts, for example). This can be disabled by specifying the "--no-version-script" option on the command-0.96.96" and "sconsign-0.96.96" scripts - by specifying the "--hardlink-scons" or "--symlink-scons" - options on the command line. + making it the default on your system. - -- Install "scons-0.96.96.bat" and "scons.bat" wrapper scripts in the + On UNIX or Linux systems, you can have the "scons" and "sconsign" + scripts be hard links or symbolic links to the "scons-0.97" and + "sconsign-0.97" scripts by specifying the "--hardlink-scons" or + "--symlink-scons" options on the command line. + + -- Install "scons-0.97-0.96.96.bat" - and "scons.bat" files installed in the default system script - directory, which is useful if you want to install SCons in a - shared file system directory that can be used to execute SCons - from both UNIX/Linux and Windows systems. + on the command line. + On UNIX or Linux systems, the "--install-bat" option may be + specified to have "scons-0.97-0.96.96 or C:\Python*\scons-0.96.96, for example). + (/usr/lib/scons-0.97 or C:\Python*\scons-0.97, for example). See below for more options related to installing the build engine library. modules that make up SCons. The src/script/scons.py wrapper script exists mainly to find the appropriate build engine library and then execute it. -In order to make your own change locally and test them by hand, simply -edit modules in the local src/engine/SCons subdirectory tree and -either use the local bootstrap.py script: +In order to make your own changes locally and test them by hand, simply +edit modules in the local src/engine/SCons subdirectory tree and either +use the local bootstrap.py script: $ python bootstrap.py [arguments] set up environment variables to do this on a UNIX or Linux system: $ setenv MYSCONS=`pwd`/src - $ setenv SCONS_LIB_DIR=$MYSCONS + $ setenv SCONS_LIB_DIR=$MYSCONS/engine $ python $MYSCONS/script/scons.py [arguments] Or on Windows: C:\scons>set MYSCONS=%cd%\src - C:\scons>set SCONS_LIB_DIR=%MYSCONS% + C:\scons>set SCONS_LIB_DIR=%MYSCONS%\engine C:\scons>python %MYSCONS%\script\scons.py [arguments] You can use the -C option to have SCons change directory to another "con" on Windows). By adding Trace() calls to the SCons source code: def sample_method(self, value): - fromn SCons.Debug import Trace + from SCons.Debug import Trace Trace('called sample_method(%s, %s)\n' % (self, value)) You can then run automated tests that print any arbitrary information the screen: Trace('called sample_method(%s, %s)\n' % (self, value), file='trace.out') ^D $ - -- Now debug the test failures and fix them, either by changing + -- Now debug the test failures and fix them, either by changing SCons, or by making necessary changes to the tests (if, for example, you have a strong reason to change functionality, or if you find that the bug really is in the test script itself). Repeat this until all of the tests that originally failed now pass. - -- Now you need to go back and validate that any changes you - made while getting the tests to pass didn't break the fix you - originally put in, or introduce any *additional* unintended side - effects that broke other tests: + -- Now you need to go back and validate that any changes you + made while getting the tests to pass didn't break the fix + you originally put in, and didn't introduce any *additional* + unintended side effects that broke other tests: $ python script/scons.py -C /home/me/broken_project . $ python runtest.py -a If you find any newly-broken tests, add them to your "failed.txt" file and go back to the previous step. -Of course, the above is only one suggested workflow. In practice, there's -a lot of room for judgment and experience to make things go quicker. ". Depending on the utilities installed on your system, any or all of the following packages will be built: - build/dist/scons-0.96.96-1.noarch.rpm - build/dist/scons-0.96.96-1.src.rpm - build/dist/scons-0.96.96.linux-i686.tar.gz - build/dist/scons-0.96.96.tar.gz - build/dist/scons-0.96.96.win32.exe - build/dist/scons-0.96.96.zip - build/dist/scons-doc-0.96.96.tar.gz - build/dist/scons-local-0.96.96.tar.gz - build/dist/scons-local-0.96.96.zip - build/dist/scons-src-0.96.96.tar.gz - build/dist/scons-src-0.96.96.zip - build/dist/scons_0.96.96-1_all.deb + build/dist/scons-0.97-1.noarch.rpm + build/dist/scons-0.97-1.src.rpm + build/dist/scons-0.97.linux-i686.tar.gz + build/dist/scons-0.97.tar.gz + build/dist/scons-0.97.win32.exe + build/dist/scons-0.97.zip + build/dist/scons-doc-0.97.tar.gz + build/dist/scons-local-0.97.tar.gz + build/dist/scons-local-0.97.zip + build/dist/scons-src-0.97.tar.gz + build/dist/scons-src-0.97.zip + build/dist/scons_0.97-1_all.deb The SConstruct file is supposed to be smart enough to avoid trying to build packages for which you don't have the proper utilities installed. SCons itself -- a copy of xml_export, which can retrieve project data from SourceForge + -- scripts and a Python module for translating the SCons + home-brew XML documentation tags into DocBook and + man page format bootstrap.py A build script for use with Aegis. This collects a current copy SCons documentation. A variety of things here, in various stages of (in)completeness. -etc/ - A subdirectory for miscellaneous things that we need. Right - now, it has copies of Python modules that we use for testing, - and which we don't want to force people to have to install on - their own just to help out with SCons development. - gentoo/ Stuff to generate files for Gentoo Linux. the licensing terms are for SCons itself, not any other package that includes SCons. +QMTest/ + The Python modules we use for testing, some generic modules + originating elsewhere and some specific to SCons. README What you're looking at right now. REPORTING BUGS ============== -Please report bugs by following the "Tracker - Bugs" link on the SCons -project page and filling out the form: +Please report bugs by following the detailed instructions on our Bug +Submission page: - + -You can also send mail to the SCons developers mailing list: +You can also send mail to the SCons developers' mailing list: - scons-devel@lists.sourceforge.net + dev@scons.tigris.org -But please make sure that you also submit a bug report to the project -page bug tracker, because bug reports in email can sometimes get lost -in the general flood of messages. +But even if you send email to the mailing list please make sure that you +ALSO submit a bug report to the project page bug tracker, because bug +reports in email often get overlooked in the general flood of messages. MAILING LISTS
https://bitbucket.org/scons/scons/diff/README?diff2=284bdbfc9dbc&at=default
CC-MAIN-2015-48
refinedweb
1,421
55.95
Time::Moment can save time A long time ago in a galaxy far, far away, the rebel alliance ran into a slight problem when the starship carrying the princess left two hours late because its software was in the wrong time zone, running into an imperial cruiser that was patrolling an hour early for a similar reason. The bad guys unwittingly solved the rebels’ problem by removing the wrong time zone when they removed that special case—a solution familiar to programmers. The rebels exploited an imperial bug when a literal hole in their defense was left open an hour late. You might think that we are in the computer revolution (Alan Kay says we aren’t), but for all of our fancy hardware, the cheap or free platforms and services, and the amazing programming tools we have, the way we handle and times is often a mess. Y2K has nothing on this. When Dave Rolsky came out with DateTime, everyone rejoiced. It’s a masterful piece of software that strives to be pedantically correct down to the nanosecond and leap seconds. Before then, I used a hodge-podge of modules to deal with dates and avoided date math. DateTime can represent dates and tell me various things about them, such as the day of the quarter, give me locale-specific names, format them in interesting ways, and also give me the difference between dates: use Date::Time; my $dt = DateTime->new( year => 2014, month => 12, day => 18, hour => 12, minute => 37, second => 57, nanosecond => 0, time_zone => 'UTC', ); my $quarter = $dt->quarter; my $day_of_quarter = $dt->day_of_quarter; my $month_name = $dt->month_name; # can be locale specific my $ymd = $dt->ymd('/'); # 2015/02/06 my $now = DateTime->now; my $duration = $now - $dt; DateTime doesn’t parse dates. Separate modules in the same namespace can do that while returning a DateTime object. For instance, the DateTime::Format::W3CDTF module parses dates and turn them into objects:); Brilliant. DateTime is the standard answer to any date question. It works with almost no thought on my side. But DateTime has a problem. It creates big objects and in the excitement to use something that works (slow and correct is better than fast and wrong), I might end up with hundreds of those objects, not leaving much space for other things. Try dumping one of these objects to see its extent. I won’t waste space with that in this article. Although DateTime is exactingly correct, sometimes I’d like to be a little less exact and quite a bit faster. That’s where Christian Hansen’s Time::Moment comes in (see his Time::Moment vs DateTime). It works in UTC, ignores leap seconds, and limits its dates to the years 1 to 9999. It’s objects are immutable, so it can be a bit faster. To get a new datetime, you get a new object. And, it has many of the common features and an interface close to DateTime. The Time::Moment distribution comes with a program, dev/bench.pl, that allows me to compare the performance. Here’s some of the output: $ perl dev/bench.pl Benchmarking constructor: ->new() Rate DateTime Time::Moment DateTime 14436/s -- -99% Time::Moment 1064751/s 7276% -- Let’s make a more interesting benchmark that constructs an object from a string, add a day to it, and check if it’s before today. As with every benchmark, you have to check it against your particular use: use Benchmark; use DateTime; use Time::Moment; use DateTime::Format::W3CDTF; my $dtf_string ='2014-02-01T13:01:37-05:00'; my $time_moment = sub { my $tm = Time::Moment->from_string( $dtf_string ); my $tm2 = $tm->plus_days( 1 ); my $now = Time::Moment->now; my $comparison = $now > $tm2; }; my $datetime = sub { my $w3c = DateTime::Format::W3CDTF->new; my $dt = $w3c->parse_datetime( $dtf_string ); $dt->add( days => 1 ); my $now = DateTime->now; my $comparison = $now > $dt; }; Benchmark::cmpthese( -10, { 'Time::Moment' => $time_moment, 'DateTime' => $datetime, }); Time::Moment is still really fast. Amazingly fast: $ perl dtf_bench.pl Rate DateTime Time::Moment DateTime 1889/s -- -99% Time::Moment 273557/s 14384% -- If my problem is within the limits of Time::Moment (and, who ever needs more than 640k?), I can get big wins. When that no longer applies, with a little work I can switch to DateTime. Either way, you might want to wipe the memory of your droids. This article was originally posted on PerlTricks.com.
https://www.perl.com/article/148/2015/2/2/Time-Moment-can-save-time/
CC-MAIN-2022-40
refinedweb
733
60.04
Additional profile information on Alfred Thompson at Google+ Since I brought up C++ the other day I’ve had a few questions in email. Some of the common questions I get revolve around doing more interesting things with the console. By interesting I mean different colors, placing the cursor at specific parts of the screen and the like. You know, all those things we used to do the hard way back in the day before really GUI (Graphical User Interface) objects like we have today with Windows Forms or some of the objects available in Java with Swing. Well I have good news for you. Using a modern C++ compiler like the one in Visual Studio 2005 does not mean you can’t do all that fun stuff. Today I decided to play with a bit of it. The object you want to use is the Console object and you want to have the IDE create a CLR Console application. The following code is fairly self-explanatory. Console::Title sets the title of the DOS/Console window that is opened. The code in the loop sets the background color to black and prints out two spaces to replace characters in row 10 and position i. Then a sort of arrow (foreground already set to red) is printed on a white background. Then a sleep. Note that you need to add “using namespace System::Threading;” after the line that says “using namespace System;” for the call to the Threading object to work. The int i; Console::Title = "Rocket Ship?"; Console::CursorTop = 10; Console::ForegroundColor = ConsoleColor::Red; for (i=0; i < 76; i++) { Console::BackgroundColor = ConsoleColor::Black; Console::CursorLeft = i; Console::Write(L" "); Console::BackgroundColor = ConsoleColor::White; Console::Write(L"->"); Thread::Sleep(10); } There are other methods available like Clear and Beep and more. Give it a try if you do interesting console applications with your students. Since this Console object is part of .NET you can do the same kind of things in console applications written in C# or Visual Basic as well. Now. Yesterday.
http://blogs.msdn.com/b/alfredth/archive/2006/07.aspx?PageIndex=1&PostSortBy=MostViewed
CC-MAIN-2014-52
refinedweb
344
62.78
Session is a well understood term for all of us and as per our common understanding, it is (well, less or more) some duration in which entities recognize each other. Some of us might have played with it in ASP.NET as well. The concept is almost similar in WCF although the technique and usage are a bit different. In WCF, there is always a service class instance that handles incoming service requests. These instances may already be there (at server when request arrives) or may be created as needed. In WCF, the concept of session is mainly to manage these service instances so that server can be utilized in an optimized way. At the server, there is one special class named InstanceContext that creates/loads service class instance and dispatches requests to it. The correlation can be perceived as: InstanceContext You can see here how stuff is engaged. When some request arrives, it is routed to service instance via instance context. Suppose there is a hit of thousand requests, then service will have to create thousand instance contexts (which in turn will create thousand service instances) to handle these requests. If requests are served in this way, then service is called PERCALL service as each request is served by a new instance context and service instance object (call them as service objects onwards). Consider there is a client that made 100 requests. If service identifies this client and always serves it by a dedicated service object, then this type of service will be known as PERSESSION service as it recognizes the client and serves it by a single instance of service object. On the other hand, if all the requests irrespective of client are served by a single instance of service objects, then the service will be known as SINGLETON service. The following pictures summarize the concept: To configure sessions in WCF, one should know the following three elements: Binding NetNamedPipeBinding SessionMode Allowed Required NotAllowed Required InstanceContextMode PerCall PerSession Single Let’s consolidate this. First a suitable binding should be there which makes the ground to support the session. Then SessionMode of service contract should be supportive so that service could allow sessionful requests and at last InstanceContextMode should be configured such that it responds in a sessionful manner. The last setting is crucial as it is the only one that decides whether a request will be served by a new service instance or by an existing one. There can be numerous combinations of such settings where one has to judge meaningful ones only. E.g. If you specify session supportive binding, Allowed session mode but specify Per Call instance context mode then there is no meaning of such session as each request will be served by a new instance of service class in spite of all other supports. Download the attached sample code and follow this article. Doing practical along with theory will make you understand the concept better. In the attached sample code, there are two projects - one a window app and another, a console app. Window project works like a client while another is WCF server. There are two bindings: The two button handlers call WCF service on two different bindings. At the service side, the called operation analyzes three objects – Instance context, service context and session Id. Here the intention is to check how different objects and session id are created to handle incoming requests. The form application has a label control that displays session id at the client side. Now run the code and click the Http button. You will observe that there is no session id at form and neither at service side, now click this button twice, see the instance context and service objects are different each time. This scenario depicts Per Call service as service always uses new instance context and service instance. Now click TCP button twice and observe the values at service and at client. You will find the session id is not same at service and client, still the service responds by the same service objects. When the TCP button is clicked, the client makes a request to service over sessionful channel, service accepts this session responds as sessionful service that’s why each request is served by same instance of service object (Instance context + service instance). There is one thing to understand that unlike ASP.NET in WCF, session id is not mandatorily to be same at service and client. This is because for the two bindings - NetTCPBinding and NetNamedPipeBinding, WCF identifies the client by the underlying transport channel but for some other bindings (like WSHttp) which uses connectionless HTTP as underlying channel, WCF needs some way to identify the client. In such cases, it adds a new field – Session Id to message header to identify the client. Let’s check this behavior. Replace basicHttpBinding with wsHttpBinding in service’s app.config and update the client with the newer files. Your new endpoint in service’s app.config should look like this: NetTCPBinding NetNamedPipeBinding WSHttp basicHttpBinding wsHttpBinding <endpoint address="pqr" binding="wsHttpBinding" contract="wcfservice.Iservice" name="b"/> Run the app and click on HTTP button twice. You will observe that both client and server have the same session id, instance context and service instance during the two service calls. These objects remained persistent because the default value for InstanceContextMode is PerSession and default value for SessionMode is Allowed. Therefore a sessionful binding’s session was accepted by service and served by sessioned service objects. To experiment further, let’s decorate our code with these attributes. Your code should look like this: Allowed [__strong__][ServiceContract(SessionMode = SessionMode.Allowed)] public interface Iservice { //some code… } [ServiceBehavior(InstanceContextMode = InstanceContextMode.[__strong__]PerSession)] public class serviceclass : Iservice { // some code… } When you run the code after these changes and click the HTTP button, you will not observe any change from the previous result. Now change instance context mode to PerCall, generate new proxy (by svcutil) and update the client. Your code should look like this: PerCall [ServiceBehavior(InstanceContextMode = InstanceContextMode.[__strong__]PerCall)] public class serviceclass : Iservice { // some code… } Run the code and click the Http button twice, you will notice that this time always a new pair of service objects is created to serve the request. The same will happen even if you click the Tcp button. Now just make the InstanceContextMode to single. Your code should look like this: [ServiceBehavior(InstanceContextMode = InstanceContextMode.[__strong__]Single)] public class serviceclass : Iservice { // some code… } Now run the code and click both the buttons. This time you will see that both Tcp and Http requests are served by the same service objects. This is because InstanceContextMode.Single configures the service to serve all the requests by same service object. InstanceContextMode.Single I hope the concept covered so far is clear to you. Sometimes you can derive the result yourself even without running the code by just analyzing the value of different configuration of SessionMode and InstanceContextMode property. I would recommend you to make some more random configurations and just play with it to make the concept clearer a bit more. [Do remember that by default, InstanceContextMode is PerSession and SessionMode is Allowed. If the underlying binding is not supportive to session, then though having sessionful the service responds as Per call.] Demarcation is to annotate service operations by some special attribute to determine the first and last operation in their execution order. Consider a service having 4 methods/operations named- SignIn(), GetDetails(). TransferFund() and SignOut(). In such a scenario, a user must be signed in and then try to fetch details and do transfer. If user signs out, then he should not be allowed for further requests until he signs in. To configure such type of execution order, Demarcation is required. There are two attributes: GetDetails() TransferFund() SignOut() IsInitiating True IsTerminating False These attributes decide which operation should be called first and which should be last? For the above four operations, following can be one possible sequence: [OperationContract(IsInitiating = True)] Bool SignIn() [OperationContract(IsInitiating = false)] String GetDetails() [OperationContract(IsInitiating = false)] Bool TransferFund() [OperationContract(IsInitiating = false, IsTerminating = True)] Bool SignOut() Here initiation and termination refers to a session which is mandatory for demarcation, as service needs to know whether client has followed the particular sequence or not. Here operation 2,3 and 4 are set to IsInitiating = false so cannot be called first but can be called after an Isinitiating = True operation is called. Similarly, Operation 4 is annotated as IsTerminating = True that’s why when it is called, it terminates the session (along with underlying channel) and then client cannot make further calls until a fresh proxy is created and an IsInitiating = True operation is called. To use demarcation, the following configuration is necessary: IsInitiating = false Isinitiating = True IsTerminating = True IsInitiating = True When IsTerminating operation is called, WCF discards the channel and never accepts further messages from it. If an operation is not decorated explicitly with any of these attributes, then default value of these attributes will be applicable to it. That’s all for now. At last, just recapitulate once - There are 3 things to remember for WCF session: Sessionful Demarcation defines first and last operation in execution order. WCF session is somewhat different than ASP.NET sessions in the following ways as in WCF: Hope this small article has given you some brief idea about WCF session and now you can feel some base for further reading. Please do some experiments on your own and refer to MSDN for in depth analysis of the concept. Please let me know if something is missing or needs correction as it will be helpful for all of us. You can refer to my other article here for WCF transaction. Thanks..
https://www.codeproject.com/Articles/188749/WCF-Sessions-Brief-Introduction?msg=4395575
CC-MAIN-2017-39
refinedweb
1,620
52.8
ES2020 — Dynamic Import A really cool new feature that’s part of the new JavaScript standard, ES2020, is dynamic import, which we can use with lazy loading (if you’re into that sort of thing). But first off, let’s get things going with a very brief explanation of regular import. Standard JavaScript code is typically divided into various modules. These modules can be classes or functions, but at the end of the day we need to organize our code into sections. Each file is generally organized as a module that has an export. So for example, if I want to create a particular module, I’ll create it with export: export default class MyModule { constructor() { this.myText = 'Hello World!'; } sayHello() { console.log(this.myText); } } Then I can use it by doing something like this: import MyModule from ('/path/to/MyModule.js'); const myModule = new MyModule; myModule.sayHello(); This way, I can organize my code into several different files. There are of course a few different ways to do import/export, but in general it looks like that. So what’s the problem with this type of import? In short, it’s static. It’s found at the top of the file and is loaded by default. If you try to use it as a condition, you’ll get an error: SyntaxError: ... 'import' and 'export' may only appear at the top level. What that problem? If I want to delay the loading and have it be dependant upon an action taken by the user, I can’t do it. This delayed loading is known as lazy loading—loading that happens in accordance with a user action and prevents a long load time. If we want to use lazy loading in the front end, we’d need to use webpack to do so. That was the only solution, until ES2020 came along. The new JavaScript standard allows us to use import as a function in any piece or part of our code. We’re talking about an asynchronous function that returns a promise. If you don’t know what a promise is, check out my article about promises in JavaScript here. Now, we can use import in a variety of ways. First, we can just pass the name of the file that we want to pass to it. Then, it’s just as if we used script src. Ah, but wait—it will only load when we want it to. For instance, in this example the import only happens when the button is clicked on. If there’s an export that we want to receive, here’s what the promise returns. For instance, this example from MDN: let module = await import('/modules/my-module.js'); We can use async await’s syntax with a regular promise. For instance, if we take our example module that I mentioned earlier, something like this: import('/path/to/MyModule.js') .then((MyModule) => { const myModule = new MyModule; myModule.sayHello(); }); For the die-hard webpack users out there, this may look like the most useless thing ever. But still, this is a fantastic and important feature that’s now in the new JavaScript standard. For those writing simple scripts, it will make it easier to keep their modules organized and to use lazy loading in a more elegant way. It’s not a complicated feature, but it is an important one.
http://www.discoversdk.com/blog/es2020-%E2%80%94-dynamic-import
CC-MAIN-2020-40
refinedweb
564
65.93
3. Library calls (functions within program libraries) FREADSection: Linux Programmer's Manual (3) Updated: 2015-07-23 Index | Return to Main Contents NAMEfread, fwrite - binary stream input/output SYNOPSIS #include <stdio.h> size_t fread(void *ptr, size_t size, size_t nmemb, FILE *stream); size_t fwrite(const void *ptr, size_t size, size_t nmemb, FILE *stream); DESCRIPTIONFor an explanation of the terms used in this section, see attributes(7). CONFORMING TOPOSIX.1-2001, POSIX.1-2008, C89. SEE ALSOread(2), write(2), feof(3), ferror(3), unlocked_stdio(3) COLOPHONThis page is part of release 4.15 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. Index Return to Main Contents
https://eandata.com/linux/?chap=3&cmd=fread
CC-MAIN-2020-16
refinedweb
124
53.71
Frank da Cruz The Kermit Project, Columbia University As of: C-Kermit 9.0.302, 20 August 2011 This page last updated: Sun Aug 21 12:08:52 2011 (New York USA Time) IF YOU ARE READING A PLAIN-TEXT version of this document, note it is a plain-text dump of a Web page. You can visit the original (and possibly more up-to-date) Web page here: Since the material in this file has been accumulating since 1985, some (much) of it might be dated. [ C-Kermit ] [ Installation Instructions ] [ TUTORIAL ] Quick Links: [ Linux ] [ *BSD ] [Mac OS X] [ AIX ] [ HP-UX ] [ Solaris ] [ SCO ] SECTION CONTENTS 1.1. Documentation 1.2. Technical Support 1.3. The Year 2000 1.4. The Euro THIS IS WHAT USED TO BE CALLED the "beware file" for the Unix version of C-Kermit, previously distributed as ckubwr.txt and, before that, as ckuker.bwr, after the fashion of old Digital Equipment Corporation (DEC) software releases that came with release notes (describing what had changed) and a "beware file" listing known bugs, limitations, "non-goals", and things to watch out for. The C-Kermit beware file has been accumulating since 1985, and it applies to many different hardware platforms and operating systems, and many versions of them, so it is quite large. Prior to C-Kermit 8.0, it was distributed only in plain-text format. Now it is available as a Web document with links, internal cross references, and so on, to make it easier to use. This document applies to Unix C-Kermit in general, as well as to specific Unix variations like Linux, AIX, HP-UX, Solaris, and so on, and should be read in conjunction with the platform-independent C-Kermit beware file, which contains similar information, but applying to all versions of C-Kermit (VMS, Windows, OS/2, AOS/VS, VOS, etc, as well as to Unix). There is much in this document that is (only) of historical interest. The navigation links should help you skip directly to the sections that are relevant to you. Numerous offsite Web links are supposed to lead to further information but, as you know, Web links go stale frequently and without warning. If you can supply additional, corrected, updated, or better Web links, please feel free to let me know. C-Kermit 6.0 is documented in the book Using C-Kermit, Second Edition, by Frank da Cruz and Christine M. Gianone, Digital Press, Burlington, MA, USA, ISBN 1-55558-164-1 (1997), 622 pages. This remains the definitive C-Kermit documentation. Until the third edition is published (sorry, there is no firm timeframe for this), please also refer to: For information on how to get technical support, please visit: The Unix version of C-Kermit, release 6.0 and later, is "Year 2000 compliant", but only if the underlying operating system is too. Contact your Unix operating system vendor to find out which operating system versions, patches, hardware, and/or updates are required. (Quite a few old Unixes are still in operation in the new millenium, but with their date set 28 years in the past so at least the non-year parts of the calendar are correct.) As of C-Kermit 6.0 (6 September 1996), post-millenium file dates are recognized, transmitted, received, and reproduced correctly during the file transfer process in C-Kermit's File Attribute packets. If post-millenium dates are not processed correctly on the other end, file transfer still takes place, but the modification or creation date of the received file might be incorrect. The only. The \v(ndate) is a numeric-format date of the form yyyymmdd, suitable for both lexical and numeric comparison and sorting: e.g. 19970208 or 20011231. If the underlying operating system returns the correct date information, these variables will have the proper values. If not, then scripts that make decisions based on these variables might not operate correctly. Most date-related code is based upon the C Library asctime() string, which always has a four-digit year. In Unix, the one bit of code in C-Kermit that is an exception to this rule is several calls to localtime(), which returns a pointer to a tm struct, in which the year is presumed to be expressed as "years since 1900". The code depends on this assumption. Any platforms that violate it will need special coding. As of this writing, no such platforms are known. Command and script programming functions that deal with dates use C-Kermit specific code that always uses full years. C-Kermit 7.0 and later support Unicode (ISO 10646), ISO 8859-15 Latin Alphabet 9, PC Code Page 858, Windows Code Pages 1250 and 1251, and perhaps other character sets, that encode the Euro symbol, and can translate among them as long as no intermediate character-set is involved that does not include the Euro. It is often dangerous to run a binary C-Kermit (or any other) program built on a different computer. Particularly if that computer had a different C compiler, libraries, operating system version, processor features, etc, and especially if the program was built with shared libraries, because as soon as you update the libraries on your system, they no longer match the ones referenced in the binary, and the binary might refuse to load when you run it, in which case you'll see error messages similar to: Could not load program kermit Member shr4.o not found or file not an archive Could not load library libcurses.a[shr4.o] Error was: No such file or directory (These samples are from AIX.) To avoid this problem, we try to build C-Kermit with statically linked libraries whenever we can, but this is increasingly impossible as shared libraries become the norm. It is often OK to run a binary built on an earlier OS version, but it is rarely possible (or safe) to run a binary built on a later one, for example to run a binary built under Solaris 8 on Solaris 2.6. Sometimes even the OS-or-library patch/ECO level makes a difference. A particularly insidious problem occurs when a binary was built on a version of the OS that has patches from the vendor (e.g. to libraries); in many cases you won't be able to run such a binary on an unpatched version of the same platform. When in doubt, build C-Kermit from the source code on the computer where it is to be run (if possible!). If not, ask us for a binary specific to your configuration. We might have one, and if we don't, we might be able to find somebody who will build one for you. SECTION CONTENTS 3.0. C-KERMIT ON PC-BASED UNIXES 3.1. C-KERMIT AND AIX 3.2. C-KERMIT AND HP-UX 3.3. C-KERMIT AND LINUX 3.4. C-KERMIT AND NEXTSTEP 3.5. C-KERMIT AND QNX 3.6. C-KERMIT AND SCO 3.7. C-KERMIT AND SOLARIS 3.8. C-KERMIT AND SUNOS 3.9. C-KERMIT AND ULTRIX 3.10. C-KERMIT AND UNIXWARE 3.11. C-KERMIT AND APOLLO SR10 3.12. C-KERMIT AND TANDY XENIX 3.0 3.13. C-KERMIT AND OSF/1 (DIGITAL UNIX) (TRU64 UNIX) 3.14. C-KERMIT AND SGI IRIX 3.15. C-KERMIT AND THE BEBOX 3.16. C-KERMIT AND DG/UX 3.17. C-KERMIT AND SEQUENT DYNIX 3.18. C-KERMIT AND {FREE,OPEN,NET}BSD 3.19. C-KERMIT AND MAC OS X 3.20. C-KERMIT AND COHERENT The following sections apply to specific Unix versions. Most of them contain references to FAQs (Frequently Asked Questions), but these tend to be ephemeral. For possibly more current information see: One thread that runs through many of them, and implicitly perhaps through all, concerns the problems that occur when trying to dial out on a serial device that is (also) enabled for dialing in. The "solutions" to this problem are many, varied, diverse, and usually gross, involving configuring the device for bidirectional use. This is done in a highly OS-dependent and often obscure manner, and the effects (good or evil) are also highly dependent on the particular OS (and getty variety, etc). Many examples are given in the OS-specific sections below. An important point to keep in mind is that C-Kermit is a cross-platform, portable software program. It was not designed specifically and only for your particular Unix version, or for that matter, for Unix in particular at all. It also runs on VMS, AOS/VS, VOS, and other non-Unix platforms. All the Unix versions of C-Kermit share common i/o modules, with compile-time #ifdef constructions used to account for the differences among the many Unix products and releases. If you think that C-Kermit is behaving badly or missing something on your particular Unix version, you might be right -- we can't claim to be expert in hundreds of different OS / version / hardware / library combinations. If you're a programmer, take a look at the source code and send us your suggested fixes or changes. Or else just send us a report about what seems to be wrong and we'll see what we can do. Also see:. SECTION CONTENTS 3.0.1. Interrupt Conflicts 3.0.2. Windows-Specific Hardware 3.0.3. Modems 3.0.4. Character Sets 3.0.5. Keyboard, Screen, and Mouse Access 3.0.6. Laptops PCs are not the best platform for real operating systems like Unix. The architecture suffers from numerous deficiencies, not the least of which is the stiflingly small number of hardware interrupts (either 7 or 15, many of which are preallocated). Thus adding devices, using multiple serial ports, etc, is always a challenge and often a nightmare. The free-for-all nature of the PC market and the lack of standards combined with the diversity of Unix OS versions make it difficult to find drivers for any particular device on any particular version of Unix. Of special interest to Kermit users is the fact that there is no standard provision in the PC architecture for more than 2 communication (serial) ports. COM3 and COM4 (or higher) will not work unless you (a) find out the hardware address and interrupt for each, (b) find out how to provide your Unix version with this information, and (c) actually set up the configuration in the Unix startup files (or whatever other method is used). Watch out for interrupt conflicts, especially when using a serial mouse, and don't expect to be able to use more than two serial ports. The techniques for resolving interrupt conflicts are different for each operating system (Linux, NetBSD, etc). In general, there is a configuration file somewhere that lists COM ports, something like this: com0 at isa? port 0x3f8 irq 4 # DOS COM1 com1 at isa? port 0x2f8 irq 3 # DOS COM2 The address and IRQ values in this file must agree with the values in the PC BIOS and with the ports themselves, and there must not be more than one device with the same interrupt. Unfortunately, due to the small number of available interrupts, installing new devices on a PC almost always creates a conflict. Here is a typical tale from a Linux user (Fred Smith) about installing a third serial port: ...problems can come from a number of causes. The one I fought with for some time, and finally conquered, was that my modem is on an add-in serial port, cua3/IRQ5. By default IRQ5 has a very low priority, and does not get enough service in times when the system is busy to prevent losing data. This in turn causes many resends. There are two 'fixes' that I know of, one is to relax hard disk interrupt hogging by using the correct parameter to hdparm, but I don't like that one because the hdparm man page indicates it is risky to use. The other one, the one I used, was to get 'irqtune' and use it to give IRQ5 the highest priority instead of nearly the lowest. Completely cured the problem. Here's another one from a newsgroup posting: After much hair pulling, I've discovered why my serial port won't work. Apparently my [PC] has three serial devices (two comm ports and an IR port), of which only two at a time can be active. I looked in the BIOS setup and noticed that the IR port was activated, but didn't realize at the time that this meant that COM2 was thereby de-activated. I turned off the IR port and now the serial port works as advertised. To complicate matters, the PC platform is becoming increasingly and inexorably Windows-oriented. More and more add-on devices are "Windows only" -- meaning they are incomplete and rely on proprietary Windows-based software drivers to do the jobs that you would expect the device itself to do. PCMCIA, PCI, or "Plug-n-Play" devices are rarely supported on PC-based Unix versions such as SCO; Winmodems, Winprinters, and the like are not supported on any Unix variety (with a few exceptions). The self-proclaimed Microsoft PC 97 (or later) standard only makes matters worse since its only purpose to ensure that PCs are "optimized to run Windows 95 and Windows NT 4.0 and future versions of these operating systems". With the exception noted (the Lucent modem, perhaps a handful of others by the time you read this), drivers for "Win" devices are available only for Windows, since the Windows market dwarfs that of any particular Unix brand, and for that matter all Unixes (or for that matter, all non-Windows operating systems) combined. If your version of Unix (SCO, Linux, BSDI, FreeBSD, etc) does not support a particular device, then C-Kermit can't use it either. C-Kermit, like any Unix application, must access all devices through drivers and not directly because Unix is a real operating system. Don't waste time thinking that you, or anybody else, could write a Linux (or other Unix) driver for a Winmodem or other "Win" device. First of all, these devices generally require realtime control, but since Unix is a true multitasking operating system, realtime device control is not possible outside the kernel. Second, the specifications for these devices are secret and proprietary, and each one (and each version of each one) is potentially different. Third, a Winmodem driver would be enormously complex; it would take years to write and debug, by which time it would be obsolete. A more recent generation of PCs (circa 1999-2000) is marketed as "Legacy Free". One can only speculate what that could mean. Most likely it means it will ONLY run the very latest versions of Windows, and is made exclusively of Winmodems, Winprinters, Winmemory, and Win-CPU-fans (Legacy Free is a concept pioneered by Microsoft). Before you buy a new PC or add-on equipment, especially serial ports, internal modems, or printers, make sure they are compatible with your version of Unix. This is becoming an ever-greater challenge; only a huge company like Microsoft can afford to be constantly cranking out and/or verifying drivers for the thousands of video boards, sound cards, network adapters, SCSI adapters, buses, etc, that spew forth in an uncontrolled manner from all corners of the world on a daily basis. With very few exceptions, makers of PCs assemble the various components and then verify them only with Windows, which they must do since they are, no doubt, preloading the PC with Windows. To find a modern PC that is capable of running a variety of non-Windows operating systems (e.g. Linux, SCO OpenServer, Unixware, and Solaris) is a formidable challenge requiring careful study of each vendor's "compatibility lists" and precise attention to exact component model numbers and revision levels. External modems are recommended: Internal PC modems (even when they are not Winmodems, which is increasingly unlikely in new PCs) are always trouble, especially in Unix. Even when they work for dialing out, they might not work for dialing in, etc. Problems that occur when using an internal modem can almost always be eliminated by switching to an external one. Even when an internal modem is not a Winmodem or Plug-n-Play, it is often a no-name model of unknown quality -- not the sort of thing you want sitting directly on your computer's bus. (Even if it does not cause hardware problems, it probably came without a command list, so no Unix software will know how to control it.) For more about Unix compatible modems, see: Remember that PCs, even now -- more than two decades after they were first introduced -- are not (in general) capable of supporting more than 2 serial devices. Here's a short success story from a recent newsgroup posting: "I have a Diamond SupraSonic II dual modem in my machine. What I had to end up doing is buying a PS/2 mouse and port and install it. Had to get rid of my serial mouse. I also had to disable PnP in my computer bios. I was having IRQ conflicts between my serial mouse and 'com 3'. Both modems work fine for me. My first modem is ttyS0 and my second is ttyS1." Special third-party multiport boards such as DigiBoard are available for certain Unix platforms (typically SCO, maybe Linux) that come with special platform-specific drivers. PCs generally have PC code pages such as CP437 or CP850, and these are often used by PC-based Unix operating systems, particularly on the console. These are supported directly by C-Kermit's SET FILE CHARACTER-SET and SET TERMINAL CHARACTER-SET commands. Some PC-based Unix versions, such as recent Red Hat Linux releases, might also support Microsoft Windows code pages such as CP1252, or even Latin Alphabet 1 itself (perhaps displayed with CP437 glyphs). (And work is in progress to support Unicode UTF8 in Linux.) Certain Windows code pages are not supported directly by C-Kermit, but since they are ISO Latin Alphabets with nonstandard "extensions" in the C1 control range, you can substitute the corresponding Latin alphabet (or other character set) in any C-Kermit character-set related commands: Windows Code Page Substitution CP 1004 Latin-1 CP 1051 HP Roman-8 Other Windows code pages are mostly (or totally) incompatible with their Latin Alphabet counterparts (e.g. CP1250 and Latin-2), and several of these are already supported by C-Kermit 7.0 and later (1250, 1251, and 1252). Finally, note that as a real operating system, Unix (unlike Windows) does not provide the intimate connection to the PC keyboard, screen, and mouse that you might expect. Unix applications can not "see" the keyboard, and therefore can not be programmed to understand F-keys, Editing keys, Arrow keys, Alt-key combinations, and the like. This is because: (To be filled in . . .) SECTION CONTENTS 3.1.1. AIX: General 3.1.2. AIX: Network Connections 3.1.3. AIX: Serial Connections 3.1.4. AIX: File Transfer 3.1.5. AIX: Xterm Key Map For additional information see: and/or read the comp.unix.aix newsgroup. About AIX version numbers: "uname -a" tells the two-digit version number, such as 3.2 or 4.1. The three-digit form can be seen with the "oslevel" command (this information is unavailable at the API level and is reportedly obtained by scanning the installed patch list). Supposedly all three-digit versions within the same two-digit version (e.g. 4.3.1, 4.3.2) are binary compatible; i.e. a binary built on any one of them should run on all others, but who knows. Most AIX advocates tell you that any AIX binary will run on any AIX version greater than or equal to the one under which it was built, but experience with C-Kermit suggests otherwise. It is always best to run a binary built under your exact same AIX version, down to the third decimal place, if possible. Ideally, build it from source code yourself. Yes, this advice would be easier to follow if AIX came with a C compiler. File transfers into AIX 4.2 or 4.3 through the AIX Telnet or Rlogin server have been observed to fail (or accumulate huge numbers of correctable errors, or even disconnect the session), when exactly the same kind of transfers into AIX 4.1 work without incident, as do such transfers into all non-AIX platforms on the same kind of connections (with a few exceptions noted elsewhere in this document). AIX 4.3.3 seems to be particularly fragile in this regard; the weakness seems to be in its pseudoterminal (pty) driver. High-speed streaming transfers work perfectly, however, if the AIX Telnet server and pty driver are removed from the picture; e.g, by using "set host * 3000" on AIX. The problem can be completely cured by replacing the IBM Telnet server with MIT's Kerberos Telnet server -- even if you don't actually use the Kerberos part. Diagnosis: AIX pseudoterminals (which are controlled by the Telnet server to give you a login terminal for your session) have quirks that not even IBM knows about. The situation with AIX 5.x is not known, but if it has the same problem, the same cure is available. Meanwhile, the only remedy when going through the IBM Telnet server is to cut back on Kermit's performance settings until you find a combination that works: In some cases, severe cutbacks are required, e.g. those implied by the ROBUST command. Also be sure that the AIX C-Kermit on the remote end has "set flow none" (which is the default). NOTE: Maybe this one can also be addressed by starting AIX telnetd with the "-a" option. The situation with SSH connections is not known, but almost certainly the same. When these problems occur, the system error log contains: LABEL: TTY_TTYHOG IDENTIFIER: 0873CF9F Type: TEMP Resource Name: pts/1 Description TTYHOG OVER-RUN Failure Causes EXCESSIVE LOAD ON PROCESSOR Recommended Actions REDUCE SYSTEM LOAD. REDUCE SERIAL PORT BAUD RATE Before leaving the topic of AIX pseudoterminals, it is very likely that Kermit's PTY and SSH commands do not work well either, for the same reason that Telnet connections into AIX don't work well. A brief test with "pty rlogin somehost" got a perfectly usable terminal (CONNECT) session, but file-transfer problems like those just described. Reportedly, telnet from AIX 4.1-point-something to non-Telnet ports does not work unless the port number is in the /etc/services file; it's not clear from the report whether this is a problem with AIX Telnet (in which case it would not affect Kermit), or with the sockets library (in which case it would). The purported fix is IBM APAR IX61523. C-Kermit SET HOST or TELNET from one AIX 3.1 (or earlier) system to another won't work right unless you set your local terminal type to something other than AIXTERM. When your terminal type is AIXTERM, AIX TELNET sends two escapes whenever you type one, and the AIX telnet server swallows one of them. This has something to do with the "hft" device. This behavior seems to be removed in AIX 3.2 and later. In AIX 3, 4, or 5, C-Kermit won't be able to "set line /dev/tty0" (or any other dialout device) if you haven't installed "cu" or "uucp" on your system, because installing these is what creates the UUCP lockfile directory. If SET LINE commands always result in "Sorry, access to lock denied", even when C-Kermit has been given the same owner, group, and permissions as cu: -r-sr-xr-x 1 uucp uucp 67216 Jul 27 1999 cu and even when you run it as root, then you must go back and install "cu" from your AIX installation media. According to IBM's "From Strength to Strength" document (21 April 1998), in AIX 4.2 and later "Async supports speeds on native serial ports up to 115.2kbps". However, no API is documented to achieve serial speeds higher than 38400 bps. Apparently the way to do this -- which might or might not work only on the IBM 128-port multiplexer -- is: cxma-stty fastbaud /dev/tty0 which, according to "man cxma-stty": fastbaud Alters the baud rate table, so 50 baud becomes 57600 baud. -fastbaud Restores the baud rate table, so 57600 baud becomes 50 baud. Presumably (but not certainly) this extrapolates to 110 "baud" becomes 76800 bps, and 150 becomes 115200 bps. So to use high serial speeds in AIX 4.2 or 4.3, the trick would be to give the "cxma-stty fastbaud" command for the desired tty device before starting Kermit, and then use "set speed 50", "set speed 110", or "set speed 150" to select 56700, 76800, or 115200 bps. It is not known whether cxma-stty requires privilege. According to one report, "Further investigation with IBM seems to indicate that the only hardware capable of doing this is the 128-port multiplexor with one (or more) of the 16 port breakout cables (Enhanced Remote Async Node 16-Port EIA-232). We are looking at about CDN$4,000 in hardware just to hang a 56kb modem on there. Of course, we can then hang 15 more, if we want. This hardware combo is described to be good to 230.4kbps." Another report says (quote from AIX newsgroup, March 1999): The machine type and the adapter determine the speed that one can actually run at. The older microchannel machines have much slower crystal frequencies and may not go beyond 76,800. A feature put into AIX 421 allows one to key in non-POSIX baud rates and if the uart can support that speed, it will get set. this applies also to 43p's and beyond. 115200 is the max for the 43P's native serial port. As crytal frequencies continue to increase, the built-in serial ports speeds will improve. To use 'uucp' or 'ate' at the higher baud rates, configure the port for the desired speed, but set the speed of uucp or ate to 50. Any non-POSIX speeds set in the ttys configuration will the be used. In the case of the 128-port adapters or the ISA 8-port or PCI 8-port adapter, there are only a few higher baud rates. - Change the port to enable high baud rates: - B50 for 57600 - B75 for 76800 - B110 for 115200 - B200 for 230000 - chdev -l ttyX -a fastbaud=enable - For the 128 ports original style rans, only 57600 bps is supported. - For the new enhanced RANs, up to 230Kbps is supported. In AIX 2.2.1 on the RT PC with the 8-port multiplexer, SET SPEED 38400 gives 9600 bps, but SET SPEED 19200 gives 19200 (on the built-in S1 port). Note that some RS/6000s (e.g. the IBM PowerServer 320) have nonstandard rectangular 10-pin serial ports; the DB-25 connector is NOT a serial port; it is a parallel printer port. IBM cables are required for the serial ports, (The IBM RT PC also had rectangular serial ports -- perhaps the same as these, perhaps different.) If you dial in to AIX through a modem that is connected directly to an AIX port (e.g. on the 128-port multiplexer) and find that data is lost, especially when uploading files to the AIX system (and system error logs report buffer overruns on the port): Downloads -> Software Fixes -> Download FixDist gets an application for looking up known problems. Many problems reported with bidirectional terminal lines on AIX 3.2.x on the RS/6000. Workaround: don't use bidirectional terminal lines, or write a shell-script wrapper for Kermit that turns getty off on the line before starting Kermit, or before Kermit attempts to do the SET LINE. (But note: These problems MIGHT be fixed in C-Kermit 6.0 and later.) The commands for turning getty off and on (respectively) are /usr/sbin/pdisable and /usr/sbin/penable. Evidently AIX 4.3 (I don't know about earlier versions) does not allow open files to be overwritten. This can cause Kermit transfers to fail when FILE COLLISION is OVERWRITE, where they might work on other Unix varieties or earlier AIX versions. Transfer of binary -- and maybe even text -- files can fail in AIX if the AIX terminal has particular port can have character-set translation done for it by the tty driver. The following advice from a knowledgeable AIX user: [This feature] has to be checked (and set/cleared) with a separate command, unfortunately stty doesn't handle this. To check: $ setmaps input map: none installed output map: none installed If it says anything other than "none installed" for either one, it is likely to cause a problem with kermit. To get rid of installed maps: $ setmaps -t NOMAP However, I seem to recall that with some versions of AIX before 3.2.5, only root could change the setting. I'm not sure what versions - it might have only been under AIX 3.1 that this was true. At least with AIX 3.2.5 an ordinary user can set or clear the maps. On the same problem, another knowledgeable AIX user says: The way to get information on the NLS mapping under AIX (3.2.5 anyway) is as follows. From the command line type: lsattr -l tty# -a imap -a omap -E -H Replace the tty number for the number sign above. This will give a human readable output of the settings that looks like this; # lsattr -l tty2 -a imap -a omap -E -H attribute value description user_settable imap none INPUT map file True omap none OUTPUT map file True If you change the -H to a -O, you get output that can easily be processed by another program or a shell script, for example: # lsattr -l tty2 -a imap -a omap -E -O #imap:omap none:none To change the settings from the command line, the chdev command is used with the following syntax. chdev -l tty# -a imap='none' -a omap='none' Again substituting the appropriate tty port number for the number sign, "none" being the value we want for C-Kermit. Of course, the above can also be changed by using the SMIT utility and selecting devices - tty. (...end quote) In 2007 I noticed the following on high-speed SSH connections (local network) into AIX 5.3: streaming transfers into AIX just don't work. The same might be true for Telnet connections; I have no way to check. It appears that the AIX pty driver and/or the SSH (and possibly Telnet) server are not capable of receiving a steady stream of incoming data at high speed. Solution: unknown. Workaround: put "set streaming off" in your .kermrc or .mykermrc file, since streaming is the default for network connections. Here is a sample configuration for setting up an xterm keyboard for VT220 or higher terminal emulation on AIX, courtesy of Bruce Momjian, Drexel Hill, PA. Xterm can be started like this: xterm $XTERMFLAGS +rw +sb +ls $@ -tm 'erase ^? intr ^c' -name vt220 \ -title vt220 -tn xterm-220 "$@" & --------------------------------------------------------------------------- XTerm*VT100.Translations: #override \n\ <Key>Home: string(0x1b) string("[3~") \n \ <Key>End: string(0x1b) string("[4~") \n vt220*VT100.Translations: #override \n\ Shift <Key>F1: string("[23~") \n \ Shift <Key>F2: string("[24~") \n \ Shift <Key>F3: string("[25~") \n \ Shift <Key>F4: string("[26~") \n \ Shift <Key>F5: string("[K~") \n \ Shift <Key>F6: string("[31~") \n \ Shift <Key>F7: string("[31~") \n \ Shift <Key>F8: string("[32~") \n \ Shift <Key>F9: string("[33~") \n \ Shift <Key>F10: string("[34~") \n \ Shift <Key>F11: string("[28~") \n \ Shift <Key>F12: SECTION CONTENTS 3.2.0. Common Problems 3.2.1. Building C-Kermit on HP-UX 3.2.2. File Transfer 3.2.3. Dialing Out and UUCP Lockfiles in HP-UX 3.2.4. Notes on Specific HP-UX Releases 3.2.5. HP-UX and X.25 REFERENCES For further information, read the comp.sys.hp.hpux newsgroup. C-Kermit is included as part of the HP-UX operating system by contract between Hewlett Packard and Columbia University for HP-UX 10.00 and later. Each level of HP-UX includes a freshly built C-Kermit binary in /bin/kermit, which should work correctly. Binaries built for regular HP-UX may be used on Trusted HP-UX and vice-versa, except for use as IKSD because of the different authentication methods. Note that HP does not update C-Kermit versions for any but its most current HP-UX release. So, for example, HP-UX 10.20 has C-Kermit 6.0; 11.00 has C-Kermit 7.0, and 11.22 has 8.0. Of course, as with all software, older Kermit versions have bugs (such as buffer overflow vulnerabilities) that are fixed in later versions. From time to time, HP discovers one of these (long-ago fixed) bugs and issues a security alert for the older OS's, recommending some draconian measure to avoid the problem. The true fix in each situation is to install the current release of C-Kermit. Some HP workstations have a BREAK/RESET key. If you hit this key while C-Kermit is running, it might kill or suspend the C-Kermit process. C-Kermit arms itself against these signals, but evidently the BREAK/RESET key is -- at least in some circumstances, on certain HP-UX versions -- too powerful to be caught. (Some report that the first BREAK/RESET shows up as SIGINT and is caught by C-Kermit's former SIGINT handler even when SIGINT is currently set to SIG_IGN; the second kills Kermit; other reports suggest the first BREAK/RESET sends a SIGTSTP (suspend signal) to Kermit, which it catches and suspends itself. You can tell C-Kermit to ignore suspend signals with SET SUSPEND OFF. You can tell C-Kermit to ignore SIGINT with SET COMMAND INTERRUPTION OFF. It is not known whether these commands also grant immunity to the BREAK/RESET key (one report states that with SET SUSPEND OFF, the BREAK/RESET key is ignored the first four times, but kills Kermit the 5th time). In any case: When HP-UX is on the remote end of the connection, it is essential that HP-UX C-Kermit be configured for Xon/Xoff flow control (this is the default, but in case you change it and then experience file-transfer failures, this is a likely reason). This section applies mainly to old (pre-10.20) HP-UX version on old, slow, and/or memory-constrained hardware. During the C-Kermit 6.0 Beta cycle, something happened to ckcpro.w (or, more precisely, the ckcpro.c file that is generated from it) which causes HP optimizing compilers under HP-UX versions 7.0 and 8.0 (apparently on all platforms) as well as under HP-UX 9.0 on Motorola platforms only, to blow up. In versions 7.0 and 8.0 the problem has spread to other modules. The symptoms vary from the system grinding to a halt, to the compiler crashing, to the compilation of the ckcpro.c module taking very long periods of time, like 9 hours. This problem is handled by compiling the modules that tickle it without optimization; the new C-Kermit makefile takes care of this, and shows how to do it in case the same thing begins happening with other modules. On HP-UX 9.0, a kernel parameter, maxdsiz (maximum process data segment size), seems to be important. On Motorola systems, it is 16MB by default, whereas on RISC systems the default is much bigger. Increasing maxdsiz to about 80MB seems to make the problem go away, but only if the system also has a lot of physical memory -- otherwise it swaps itself to death. The optimizing compiler might complain about "some optimizations skipped" on certain modules, due to lack of space available to the optimizer. You can increase the space (the incantation depends on the particular compiler version -- see the makefile), but doing so tends to make the compilations take a much longer time. For example, the "hpux0100o+" makefile target adds the "+Onolimit" compiler flag, and about an hour to the compile time on an HP-9000/730. But it *does* produce an executable that is about 10K smaller :-) In the makefile, all HP-UX entries automatically skip optimization of problematic modules. Telnet connections into HP-UX versions up to and including 11.11 (and possibly 11.20) tend not to lend themselves to file transfer due to limitations, restrictions, and/or bugs in the HP-UX Telnet server and/or pseudoterminal (pty) driver. In C-Kermit 6.0 (1996) an unexpected slowness was noted when transferring files over local Ethernet connections when an HP-UX system (9.05 or 10.00) was on the remote end. The following experiment was conducted to determine the cause. C-Kermit 6.0 was used; the situation is slightly better using C-Kermit 7.0's streaming feature and HP-UX 10.20 on the far end. The systems were HP-UX 10.00 (on 715/33) and SunOS 4.1.3 (on Sparc-20), both on the same local 10Mbps Ethernet, packet length 4096, parity none, control prefixing "cautious", using only local disks on each machine -- no NFS. In the C-Kermit 6.0 (ACK/NAK) case, the window size was 20; in the streaming case there is no window size (i.e. it is infinite). The test file was C-Kermit executable, transferred in binary mode. Conditions were relatively poor: the Sun and the local net heavily loaded; the HP system is old, slow, and memory-constrained. C-Kermit 6.0... C-Kermit 7.0... Local Remote ACK/NAK........ Streaming...... Client Server Send Receive Send Receive Sun HP 36 18 64 18 HP HP 25 15 37 16 HP Sun 77 83 118 92 Sun Sun 60 60 153 158 So whenever HP is the remote we have poor performance. Why? BUT... If I start HP-UX C-Kermit as a TCP service: set host * 3000 server and then from the client "set host xxx 3000", I get: C-Kermit 6.0... C-Kermit 7.0... Local Remote ACK/NAK........ Streaming...... Client Server Send Receive Send Receive Sun HP 77 67 106 139 HP HP 50 50 64 62 HP Sun 57 85 155 105 Sun Sun 57 50 321 314 Therefore the HP-UX telnet server or pty driver seems to be adding more overhead than the SunOS one, and most others. When going through this type of connection (a remote telnet server) there is little Kermit can do improve matters, since the telnet server and pty driver are between the two Kermits, and neither Kermit program can have any influence over them (except putting the Telnet connection in binary mode, but that doesn't help). (The numbers for the HP-HP transfers are lower than the others since both Kermit processes are running on the same slow 33MHz CPU.) Matters seem to have deteriorated in HP-UX 11. Now file transfers over Telnet connections fail completely, rather than just being slow. In the following trial, a Telnet connection was made from Kermit 95 to HP-UX 11.11 on an HP-9000/785/B2000 over local 10Mbps Ethernet running C-Kermit 8.00 in server mode (under the HP-UX Telnet server): Text........ Binary...... Stream Pktlen GET SEND GET SEND On 4000 Fail Fail Fail Fail Off 4000 Fail Fail Fail Fail Off 2000 OK Fail OK Fail On 2000 OK Fail OK Fail On 3000 Fail Fail Fail Fail On 2500 Fail Fail Fail Fail On 2047 OK Fail OK Fail On 2045 OK Fail OK Fail Off 500 OK OK OK OK On 500 OK Fail OK Fail On 240 OK Fail OK Fail As you can see, downloads are problematic unless the receiver's Kermit packet length is 2045 or less, but uploads work only with streaming disabled and the packet length restricted to 500. To force file transfers to work on this connection, the desktop Kermit must be told to: set streaming off set receive packet-length 2000 set send packet-length 500 However, if a connection is made between the same two programs on the same two computers over the same network, but this time a direct socket-to-socket connection bypassing the HP-UX Telnet server and pty driver (tell HP-UX C-Kermit to "set host /server * 3000 /raw"; tell desktop client program to "set host blah 3000 /raw"), everything works perfectly with the default Kermit settings (streaming, 4K packets, liberal control-character unprefixing, 8-bit transparency, etc): Text........ Binary...... Stream Pktlen GET SEND GET SEND On 4000 OK OK OK OK And in this case, transfer rates were approximately 900,000 cps. To verify that the behavior reported here is not caused by the new Kermit release, the same experiment was performed on a Telnet connection from the same PC over the same network to the old 715/33 running HP-UX 10.20 and C-Kermit 8.00. Text and binary uploads and downloads worked perfectly (albeit slowly) with all the default settings -- streaming, 4K packets, etc. HP workstations do not come with dialout devices configured; you have to do it yourself (as root). First look in /dev to see what's there; for example in HP-UX 10.00 or later: ls -l /dev/cua* ls -l /dev/tty* If you find a tty0p0 device but no cua0p0, you'll need to creat one if you want to dial out; the tty0p0 does not work for dialing out. It's easy: start SAM; in the main Sam window, double-click on Peripheral Device, then in the Peripheral Devices window, double-click on Terminals and Modems. In the Terminals and Modems dialog, click on Actions, then choose "Add modem" and fill in the blanks. For example: Port number 0, speed 57600 (higher speeds tend not to work reliably), "Use device for calling out", do NOT "Receive incoming calls" (unless you know what you are doing), leave "CCITT modem" unchecked unless you really have one, and do select "Use hardware flow control (RTS/CTS)". Then click OK. This creates cua0p0 as well as cul0p0 and ttyd0p0 If the following sequence: set line /dev/cua0p0 ; or other device set speed 115200 ; or other normal speed produces the message "?Unsupported line speed". This means either that the port is not configured for dialout (go into SAM as described above and make sure "Use device for calling out" is selected), or else that speed you have given (such as 460800) is supported by the operating system but not by the physical device (in which case, use a lower speed like 57600). In HP-UX 9.0, serial device names began to change. The older names looked like "/dev/cua00", "/dev/tty01", etc (sometimes with only one digit). The newer names have two digits with the letter "p" in between. HP-UX 8.xx and earlier have the older form, HP-UX 10.00 and later have the newer form. HP-UX 9.xx has the newer form on Series 800 machines, and the older form on other hardware models. The situation is summarized in the following table (the Convio 10.0 column applies to HP-UX 10 and 11). Converged HP-UX Serial I/O Filenames : TTY Mux Naming --------------------------------------------------------------------- General meaning Old Form S800 9.0 Convio 10.0 --------------------------------------------------------------------- tty* hardwired ports tty<YY> tty<X>p<Y> tty<D>p<p> diag:mux<X> diag:mux<D> --------------------------------------------------------------------- ttyd* dial-in modems ttyd<YY> ttyd<X>p<Y> ttyd<D>p<p> diag:ttyd<X>p<Y> diag:ttyd<D>p<p> --------------------------------------------------------------------- cua* auto-dial out cua<YY> cua<X>p<Y> cua<D>p<p> diag:cua<X>p<Y> --------------------------------------------------------------------- cul* dial-out cul<YY> cul<X>p<Y> cul<D>p<p> diag:cul<X>p<Y> --------------------------------------------------------------------- <X>= LU (Logical Unit) <D>= Devspec (decimal card instance) <Y> or <YY> = Port <p>= Port For dialing out, you should use the cua or cul devices. When C-Kermit's CARRIER setting is AUTO or ON, C-Kermit should pop back to its prompt automatically if the carrier signal drops, e.g. when you log out from the remote computer or service. If you use the tty<D>p<d> (e.g. tty0p0) device, the carrier signal should be ignored. The tty<D>p<d> device should be used for direct connections where the carrier signal does not follow RS-232 conventions (use the cul device for hardwired connections through a true null modem). Do not use the ttyd<D>p<d> device for dialing out. Kermit's access to serial devices is controlled by "UUCP lockfiles", which are intended to prevent different users using different software programs (Kermit, cu, etc, and UUCP itself) from accessing the same serial device at the same time. When a device is in use by a particular user, a file with a special name is created in: /var/spool/locks (HP-UX 10.00 and later) /usr/spool/uucp (HP-UX 9.xx and earlier) The file's name indicates the device that is in use, and its contents indicates the process ID (pid) of the process that is using the device. Since serial devices and the locks directory are not both publicly readable and writable, Kermit and other communication software must be installed setuid to the owner (bin) of the serial device and setgid to the group (daemon) of the /var/spool/locks directory. Kermit's setuid and setgid privileges are enabled only when opening the device and accessing the lockfiles. Let's say "unit" means a string of decimal digits (the interface instance number) followed (in HP-UX 10.00 and later) by the letter "p" (lowercase), followed by another string of decimal digits (the port number on the interface), e.g.: "0p0", "0p1", "1p0", etc (HP-UX 10.00 and later) "0p0", "0p1", "1p0", etc (HP-UX 9.xx on Series 800) "00", "01", "10", "0", etc (HP-UX 9.xx not on Series 800) "00", "01", "10", "0", etc (HP-UX 8.xx and earlier) Then a normal serial device (driver) name consists of a prefix ("tty", "ttyd", "cua", "cul", or possibly "cuad" or "culd") followed by a unit, e.g. "cua0p0". Kermit's treatment of UUCP lockfiles is as close as possible to that of the HP-UX "cu" program. Here is a table of the lockfiles that Kermit creates for unit 0p0: Selection Lockfile 1 Lockfile 2 /dev/tty0p0 LCK..tty0p0 (none) * /dev/ttyd0p0 LCK..ttyd0p0 (none) /dev/cua0p0 LCK..cua0p0 LCK..ttyd0p0 /dev/cul0p0 LCK..cul0p0 LCK..ttyd0p0 /dev/cuad0p0 LCK..cuad0p0 LCK..ttyd0p0 /dev/culd0p0 LCK..culd0p0 LCK..ttyd0p0 <other> LCK..<other> (none) (* = Dialin device, should not be used.) In other words, if the device name begins with "cu", a second lockfile for the "ttyd" device, same unit, is created, which should prevent dialin access on that device. The <other> case allows for symbolic links, etc, but of course it is not foolproof since we have no way of telling which device is really being used. When C-Kermit tries to open a dialout device whose name ends with a "unit", it searches the lockfile directory for all possible names for the same unit. For example, if user selects /dev/cul2p3, Kermit looks for lockfiles named: LCK..tty2p3 LCK..ttyd2p3 LCK..cua2p3 LCK..cul2p3 LCK..cuad2p3 LCK..culd2p3 If any of these files are found, Kermit opens them to find out the ID (pid) of the process that created them; if the pid is still valid, the process is still active, and so the SET LINE command fails and the user is informed of the pid so s/he can use "ps" to find out who is using the device. If the pid is not valid, the file is deleted. If all such files (i.e. with same "unit" designation) are successfully removed, then the SET LINE command succeeds; up to six messages are printed telling the user which "stale lockfiles" are being removed. When the "set line" command succeeds in HP-UX 10.00 and later, C-Kermit also creates a Unix System V R4 "advisory lock" as a further precaution (but not guarantee) against any other process obtaining access to the device while you are using it. If the selected device was in use by "cu", Kermit can't open it, because "cu" has changed its ownership, so we never get as far as looking at the lockfiles. In the normal case, we can't even look at the device to see who the owner is because it is visible only to its (present) owner. In this case, Kermit says (for example): /dev/cua0p0: Permission denied When Kermit releases a device it has successfully opened, it removes all the lockfiles that it created. This also happens whenever Kermit exits "under its own power". If Kermit is killed with a device open, the lockfile(s) are left behind. The next Kermit program that tries to assign the device, under any of its various names, will automatically clean up the stale lockfiles because the pids they contain are invalid. The behavior of cu and other communication programs under these conditions should be the same. Here, by the way, is a summary of the differences between the HP-UX port driver types from John Pezzano of HP: There are three types of device files for each port. The ttydXXX device file is designed to work as follows: - The process that opens it does NOT get control of the port until CD is asserted. This was intentional (over 15 years ago) to allow getty to open the port but not control it until someone called in. If a process wants to use the direct or callout device files (ttyXXX and culXXX respectively), they will then get control and getty would be blocked. This eliminated the need to use uugetty (and its inherent problems with lock files) for modems. You can see this demonstrated by the fact that "ps -ef" shows a ? in the tty column for the getty process as getty does not have the port yet. - Once CD is asserted, the port is controlled by getty (or the process handling an incoming call) if there was no process using the port. The ? in the "ps" command now shows the port. At this point, the port accepts data. Therefore you should use either the callout culXXX device file (immediate control but no data until CD is asserted) or the direct device file ttyXXX which gives immediate control and immediate data and which ignores by default modem control signals. The ttydXXX device should be used only for callin and my recommendation is to use it only for getty and uugetty. 3.2.4.1. HP-UX 11 3.2.4.2. HP-UX 10 3.2.4.3. HP-UX 9 3.2.4.4. HP-UX 8 3.2.4.5. HP-UX 7 and Earlier As noted in Section 3.2.2, the HP-UX 11 Telnet server and/or pseudoterminal driver are a serious impediment to file transfer over Telnet connections into HP-UX. If you have a Telnet connection into HP-UX 11, tell your desktop Kermit program to: set streaming off set receive packet-length 2000 set send packet-length 500 File transfer speeds over connections from HP-UX 11 (dialed or Telnet) are not impeded whatsoever, and can go at whatever speed is allowed by the connection and the Kermit partner on the far end. PA-RISC binaries for HP-UX 10.20 or later should run on any PA-RISC system, S700 or S800, as long as the binary was not built under a later HP-UX version than the host operating system. HP-UX 11.00 and 11.11 are only for PA-RISC systems. HP-UX 11.20 is only for IA64 (subsequent HP-UX releases will be for both PA-RISC and IA64). To check binary compatibility, the following C-Kermit 8.0 binaries were run successfully on an HP-9000/785 with HP-UX 11.11: Beginning in HP-UX 10.10, libcurses is linked to libxcurses, the new UNIX95 (X/Open) version of curses, which has some serious bugs; some routines, when called, would hang and never return, some would dump core. Evidently libxcurses contains a select() routine, and whenever C-Kermit calls what it thinks is the regular (sockets) select(), it gets the curses one, causing a segmentation fault. There is a patch for this from HP, PHCO_8086, "s700_800 10.10 libcurses patch", "shared lib curses program hangs on 10.10", "10.10 enhanced X/Open curses core dumps due to using wrong select call", 96/08/02 (you can tell if the patch is installed with "what /usr/lib/libxcurses.1"; the unpatched version is 76.20, the patched one is 76.20.1.2). It has been verified that C-Kermit works OK with the patched library, but results are not definite for HP-UX 10.20 or higher. To ensure that C-Kermit works even on non-patched HP-UX 10.10 systems, separate makefile entries are provided for HP-UX 10.00/10.01, 10.10, 10.20, etc, in which the entries for 10.10 and above link with libHcurses, which is "HP curses", the one that was used in 10.00/10.01. HP-UX 11.20 and later, however, link with libcurses, as libHcurses disappeared in 11.20. HP-UX 9.00 and 9.01 need patch PHNE_10572 (note: this replaces PHNE_3641) for hptt0.o, asio0.o, and ttycomn.o in libhp-ux.a. Contact Hewlett Packard if you need this patch. Without it, the dialout device (tty) will be hung after first use; subsequent attempts to use will return an error like "device busy". (There are also equivalent patches for s700 9.03 9.05 9.07 (PHNE_10573) and s800 9.00 9.04 (PHNE_10416). When C-Kermit is in server mode, it might have trouble executing REMOTE HOST commands. This problem happens under HP-UX 9.00 (Motorola) and HP-UX 9.01 (RISC) IF the C-Shell is the login shell AND with the C-Shell Revision 70.15. Best thing is to install HP's Patch PHCO_4919 for Series 300/400 and PHCO_5015 for the Series 700/800. PHCO_5015 is called "s700_800 9.X cumulative csh(1) patch with memory leak fix" which works for HP-UX 9.00, 9.01, 9.03, 9.04, 9.05 and 9.07. At least you need C-Shell Revision 72.12! C-Kermit works fine -- including its curses-based file-transfer display -- on the console terminal, in a remote session (e.g. when logged in to the HP 9000 on a terminal port or when telnetted or rlogin'd), and in an HP-VUE hpterm window or an xterm window. To make C-Kermit work on HP-UX 8.05 on a model 720, obtain and install HP-UX patch PHNE_0899. This patch deals with a lot of driver issues, particularly related to communication at higher speeds. One user reports: On HP-UX 8 DON'T install 'tty patch' PHKL_4656, install PHKL_3047 instead! Yesterday I tried this latest tty patch PHKL_4656 and had terrible problems. This patch should fix RTS/CTS problems. With text transfer all looks nice. But when I switched over to binary files the serial interface returned only rubish to C-Kermit. All sorts of protocol, CRC and packed errors I had. After several tests and after uninstalling that patch, all transfers worked fine. MB's of data without any errors. So keep your fingers away from that patch. If anybody needs the PHKL_3047 patch I have it here. It is no longer available from HP's patch base. When transferring files into HP-UX 5 or 6 over a Telnet connection, you must not use streaming, and you must not use a packet length greater than 512. However, you can use streaming and longer packets when sending files from HP-UX on a Telnet connection. In C-Kermit 8.0, the default receive packet length for HP-UX 5 and 6 was changed to 500 (but you can still increase it with SET RECEIVE PACKET-LENGTH if you wish, e.g. for non-Telnet connections). Disable streaming with SET STREAMING OFF. The HP-UX 5.00 version of C-Kermit does not include the fullscreen file-transfer because of problems with the curses library. If HP-UX 5.21 with Wollongong TCP/IP is on the remote end of a Telnet connection, streaming transfers to HP-UX invariably fail. Workaround: SET STREAMING OFF. Packets longer than about 1000 should not be used. Transfers from these systems, however, can use streaming and/or longer packets. Reportedly, "[there is] a bug in C-Kermit using HP-UX version 5.21 on the HP-9000 series 500 computers. It only occurs when the controlling terminal is using an HP-27140 six-port modem mux. The problem is not present if the controlling terminal is logged into an HP-27130 eight-port mux. The symptom is that just after dialing successfully and connecting Kermit locks up and the port is unusable until both forks of Kermit and the login shell are killed." (This report predates C-Kermit 6.0 and might no longer apply.) Although C-Kermit presently does not include built-in support for HP-UX X.25 (as it does for the Sun and IBM X.25 products), it can still be used to make X.25 connections as follows: start Kermit and then telnet to localhost. After logging back in, start padem as you would normally do to connect over X.25. Padem acts as a pipe between Kermit and X.25. In C-Kermit 7.0, you might also be able to avoid the "telnet localhost" step by using: C-Kermit> pty padem address This works if padem uses standard i/o (who knows?). SECTION CONTENTS 3.3.1. Problems Building C-Kermit for Linux 3.3.2. Problems with Serial Devices in Linux 3.3.3. Terminal Emulation in Linux 3.3.4. Dates and Times 3.3.5. Startup Errors 3.3.6. The Fullscreen File Transfer Display (August 2010) Reportedly C-Kermit packages for certain Linux distributions such as Centos and Ubuntu have certain features disabled, for example the SSH command, SET HOST PTY /SSH, and perhaps anything else to do with SSH and/or pseudoterminals and who knows what else. If you download the regular package ("tarball") from the Kermit Project and build from it ("make linux"), everything is fine. C-Kermit in Ubuntu 10.04 and 9.10 was reported slow to start because it was trying to resolve the IP address 255.255.255.255. Later, also in recent Debian versions. The following is seen in the strace:REFERENCES write(3, "RESOLVE-ADDRESS 255.255.255.255\n", 32) This is not Kermit Project code. Turns out to be something in glibc's resolver, and can be fixed by changing /etc/nsswitch.conf, but it might break other software, such as Avahi or anything (such as Gnome, Java, or Cups) that depends on it. I'm not sure where it happens; I don't think Kermit tries to get its IP address at startup time, only when it's needed or asked for, e.g. when making a connection or evaluating \v(ipaddress). For further information, read the comp.os.linux.misc, comp.os.linux.answers, and other Linux-oriented newsgroups, and see: Also see general comments on PC-based Unixes in Section 3.0. What Linux version is it? -- "uname -a" supplies only kernel information, but these days it's the distribution that matters: Red Hat 7.3, Debian 2.2, Slackware 8.0, etc. Unfortunately there's no consistent way to get the distribution version. Usually it's in a distribution-specific file: Did you know: DECnet is available for Linux? See: (But there is no support for it in C-Kermit -- anybody interested in adding it, please let me know). Before proceeding, let's handle the some of the most frequently asked question in the Linux newsgroups: Modern Linux distributions like Red Hat give you a choice at installation whether to include "developer tools". Obviously, you can't build C-Kermit or any other C program from source code if you have not installed the developer tools. But to confuse matters, you might also have to choose (separately) to install the "curses" or "ncurses" terminal control library; thus it is possible to install the C compiler and linker, but omit the (n)curses library and headers. If curses is not installed, you will not be able to build a version of C-Kermit that supports the fullscreen file-transfer display, in which case you'll need to use the "linuxnc" makefile target (nc = No Curses) or else install ncurses before building. There are all sorts of confusing issues caused by the many and varied Linux distributions. Some of the worst involve the curses library and header files: where are they, what are they called, which ones are they really? Other vexing questions involve libc5 vs libc6 vs glibc vs glibc2 (C libraries), gcc vs egcs vs lcc (compilers), plus using or avoiding features that were added in a certain version of Linux or a library or a distribution, and are not available in others. As of C-Kermit 8.0, these questions should be resolved by the "linux" makefile target itself, which does a bit of looking around to see what's what, and then sets the appropriate CFLAGS. Also see: "man setserial", "man irqtune". And: Sections 3.0, 6, 7, and 8 of this document. NOTE: Red Hat Linux 7.2 and later include a new API that allows serial-port arbitration by non-setuid/gid programs. This API has not yet been added to C-Kermit. If C-Kermit is to be used for dialing out on Red Hat 7.2 or later, it must still be installed as described in in Sections 10 and 11 of the Installation Instructions. Don't expect it to be easy. Queries like the following are posted to the Linux newsgroups almost daily: Problem of a major kind with my Compaq Presario 1805 in the sense that the pnpdump doesn't find the modem and the configuration tells me that the modem is busy when I set everything by hand! I have <some recent SuSE distribution>, kernel 2.0.35. Using the Compaq tells me that the modem (which is internal) is on COM2, with the usual IRQ and port numbers. Running various Windows diagnostics show me AT-style commands exchanged so I have no reason to believe that it is a Winmodem. Also, the diagnostics under Win98 tell me that I am talking to an NS 16550AN. [Editor's note: This does not necessarily mean it isn't a Winmodem.] Under Linux, no joy trying to talk to the modem on /dev/cua1 whether via minicom, kppp, or chat; kppp at least tells me that tcgetattr() failed. Usage of setserial: setserial /dev/cua1 port 0x2F8 irq 3 autoconfig setserial -g /dev/cua1 tells me that the uart is 'unknown'. I have tried setting the UART manually via. setserial to 16550A, 16550, and the other one (8550?) (I didn't try 16540). None of these manual settings resulted in any success. A look at past articles leads me to investigate PNP issues by calling pnpdump but pnpdump returns "no boards found". I have looked around on my BIOS (Phoenix) and there is not much evidence of it being PNP aware. However for what it calls "Serial port A", it offers a choice of Auto, Disabled or Manual settings (currently set to Auto), but using the BIOS interface I tried to change to 'manual' and saw the default settings offered to be were 0x3F8 and IRQ 4 (COM1). The BIOS menus did not give me any chance to configure COM2 or any "modem". I ended up not saving any BIOS changes in the course of my investigations. You can also find out a fair amount about your PC's hardware configuration in the text files in /proc, e.g.: -r--r--r-- 1 root 0 Sep 4 14:00 /proc/devices -r--r--r-- 1 root 0 Sep 4 14:00 /proc/interrupts -r--r--r-- 1 root 0 Sep 4 14:00 /proc/ioports -r--r--r-- 1 root 0 Sep 4 14:00 /proc/pci From the directory listing they look like empty files, but in fact they are text files that you "cat": $ cat /proc/pci Bus 0, device 14, function 0: Serial controller: US Robotics/3Com 56K FaxModem Model 5610 (rev 1). IRQ 10. I/O at 0x1050 [0x1057]. $ setserial -g /dev/ttyS4 /dev/ttyS4, UART: 16550A, Port: 0x1050, IRQ: 10 $ cat /proc/ioports 1050-1057 : US Robotics/3Com 56K FaxModem Model 5610 1050-1057 : serial(auto) $ cat /proc/interrupts CPU0 0: 7037515 XT-PIC timer 1: 2 XT-PIC keyboard 2: 0 XT-PIC cascade 4: 0 XT-PIC serial 8: 1 XT-PIC rtc 9: 209811 XT-PIC usb-uhci, eth0 14: 282015 XT-PIC ide0 15: 6 XT-PIC ide1 Watch out for PCI, PCMCIA and Plug-n-Play devices, Winmodems, and the like (see cautions in Section 3.0 Linux supports Plug-n-Play devices to some degree via the isapnp and pnpdump programs; read the man pages for them. (If you don't have them, look on your installation CD for isapnptool or download it from sunsite or a sunsite mirror or other politically correct location du jour). PCI modems do not use standard COM port addresses. The I/O address and IRQ are assigned by the BIOS. All you need to do to get one working, find out the I/O address and interrupt number with (as root) "lspci -v | more" and then give the resulting address and interrupt number to setserial. Even when you have a real serial port, always be wary of interrupt conflicts and similar PC hardware configuration issues: a PC is not a real computer like other Unix workstations -- it is generally pieced together from whatever random components were the best bargain on the commodity market the week it was built. Once it's assembled and boxed, not even the manufacturer will remember what it's made of or how it was put together because they've moved on to a new model. Their job is to get it (barely) working with Windows; for Linux and other OS's you are on your own. "set line /dev/modem" or "set line /dev/ttyS2", etc, results in an error, "/dev/modem is not a tty". Cause unknown, but obviously a driver issue, not a Kermit one (Kermit uses "isatty()" to check that the device is a tty, so it knows it will be able to issue all the tty-related ioctl's on it, like setting the speed & flow control). Try a different name (i.e. driver) for the same port, e.g. "set line /dev/cua2" or whatever. To find what serial ports were registered at the most recent system boot, type (as root): "grep tty /var/log/dmesg". "set modem type xxx" (where xxx is the name of a modem) followed by "set line /dev/modem" or "set line /dev/ttyS2", etc, hangs (but can be interrupted with Ctrl-C). Experimentation shows that if the modem is configured to always assert carrier (&C0) the same command does not hang. Again, a driver issue. Use /dev/cua2 (or whatever) instead. (Or not -- hopefully none of these symptoms occurs in C-Kermit 7.0 or later.) "set line /dev/cua0" reports "Device is busy", but "set line /dev/ttyS0" works OK. In short: If the cua device doesn't work, try the corresponding ttyS device. If the ttyS device doesn't work, try the corresponding cua device -- but note that Linux developers do not recommend this, and are phasing out the cua devices. From /usr/doc/faq/howto/Serial-HOWTO: It was discovered during development of C-Kermit 7.0 that rebuilding C-Kermit with -DNOCOTFMC (No Close/Open To Force Mode Change) made the aforementioned problem with /dev/ttyS0 go away. It is not yet clear, however, what its affect might be on the /dev/cua* devices. As of 19 March 1998, this option has been added to the CFLAGS in the makefile entries for Linux ("make linux"). Note that the cua device is now "deprecated", and new editions of Linux will phase (have phased) it out in favor of the ttyS device. See (if it's still there): (no, of course it isn't; you'll have to use your imagination). One user reported that C-Kermit 7.0, when built with egcs 1.1.2 and run on Linux 2.2.6 with glibc 2.1 (hardware unknown but probably a PC) dumps core when given a "set line /dev/ttyS1" command. When rebuilt with gcc, it works fine. All versions of Linux seem to have the following deficiency: When a modem call is hung up and CD drops, Kermit can no longer read the modem signals; SHOW COMMUNICATIONS says "Modem signals not available". The TIOCMGET ioctl() returns -1 with errno 5 ("I/O Error"). The Linux version of POSIX tcsendbreak(), which is used by C-Kermit to send regular (275msec) and long (1.5sec) BREAK signals, appears to ignore its argument (despite its description in the man page and info topic), and always sends a regular 275msec BREAK. This has been observed in Linux versions ranging from Debian 2.1 to Red Hat 7.1. C-Kermit is not a terminal emulator. For a brief explanation of why not, see Section 3.0.5. For a fuller explanation, ClICK HERE. In Unix, terminal emulation is supplied by the Window in which you run Kermit: the regular console screen, which provides Linux Console "emulation" via the "console" termcap entry, or under X-Windows in an xterm window, which gives VTxxx emulation. An xterm that includes color ANSI and VT220 emulation is available with Xfree86: Before starting C-Kermit in an xterm window, you might need to tell the xterm window's shell to "stty sane". To set up your PC console keyboard to send VT220 key sequences when using C-Kermit as your communications program in an X terminal window (if it doesn't already), create a file somewhere (e.g. in /root/) called .xmodmaprc, containing something like the following: keycode 77 = KP_F1 ! Num Lock => DEC Gold (PF1) keycode 112 = KP_F2 ! Keypad / => DEC PF1 keycode 63 = KP_F3 ! Keypad * => DEC PF3 keycode 82 = KP_F4 ! Keypad - => DEC PF4 keycode 111 = Help ! Print Screen => DEC Help keycode 78 = F16 ! Scroll Lock => DEC Do keycode 110 = F16 ! Pause => DEC Do keycode 106 = Find ! Insert => DEC Find keycode 97 = Insert ! Home => DEC Insert keycode 99 = 0x1000ff00 ! Page Up => DEC Remove keycode 107 = Select ! Delete => DEC Select keycode 103 = Page_Up ! End => DEC Prev Screen keycode 22 = Delete ! Backspace sends Delete (127) Then put "xmodmap filename" in your .xinitrc file (in your login directory), e.g. xmodmap /root/.xmodmaprc Of course you can move things around. Use the xev program to find out key codes. Console-mode keys are mapped separately using loadkeys, and different keycodes are used. Find out what they are with showkey. For a much more complete VT220/320 key mapping for Xfree86 xterm, CLICK HERE. If C-Kermit's date-time (e.g. as shown by its DATE command) differs from the system's date and time: C-Kermit should work on all versions of Linux current through March 2003, provided it was built on the same version you have, with the same libraries and header files (just get the source code and "make linux"). Binaries tend not to travel well from one Linux machine to another, due to their many differences. There is no guarantee that a particular C-Kermit binary will not stop working at a later date, since Linux tends to change out from under its applications. If that happens, rebuild C-Kermit from source. If something goes wrong with the build process, look on the C-Kermit website for a newer version. If you have the latest version, then report the problem to us. Inability to transfer files in Red Hat 7.2: the typical symptom would be if you start Kermit and tell it to RECEIVE, it fails right away with "?/dev/tty: No such device or address" or "?Bad file descriptor". One report says this is because of csh, and if you change your shell to bash or other shell, it doesn't happen. Another report cite bugs in Red Hat 7.2 Telnetd "very seldom (if ever) providing a controlling tty, and lots of other people piled on saying they have the same problem.") A third theory is that this happens only when Linux has been installed without "virtual terminal support". A search of RedHat's errata pages shows a bug advisory (RHBA-2001-153) issued 13 November 2001, but updated 6 December, about this same symptom (but with tcsh and login.) Seems that login was not always assigning a controlling TTY for the session, which would make most use of "/dev/tty" somewhat less than useful. Quoting: "Due to terminal handling problems in /bin/login, tcsh would not find the controlling terminal correctly, and a shell in single user mode would exhibit strange terminal input characteristics. This update fixes both of these problems." Since the Red Hat 5.1 release (circa August 1998), there have been numerous reports of prebuilt Linux executables, and particularly the Kermit RPM for Red Hat Linux, not working; either it won't start at all, or it gives error messages about "terminal type unknown" and refuses to initialize its curses support. The following is from the Kermit newsgroup: From: rchandra@hal9000.buf.servtech.com Newsgroups: comp.protocols.kermit.misc Subject: Red Hat Linux/Intel 5.1 and ncurses: suggestions Date: 22 Aug 1998 15:54:46 GMT Organization: Verio New York Keywords: RedHat RPM 5.1 Several factors can influence whether "linux" is recognized as a terminal type on many Linux systems. - Your program, or the libraries it linked with (if statically linked), or the libraries it dynamically links with at runtime, are looking for an entry in /etc/termcap that isn't there. (not likely, but possible... I believe but am not certain that this is a very old practice in very old [n]curses library implementations to use a single file for all terminal descriptions.) - Your program, or the libraries...are looking for a terminfo file that just plain isn't there. (also not so likely, since many people in other recent message threads said that other programs work OK). - Your program, or the libraries...are looking for a terminfo file that is stored at a pathname that isn't expected by your program, the libraries--and so on. I forgot if I read this in the errata Web page or where exactly I discovered this (Netscape install? Acrobat install?), but it may just be that one libc (let's say for sake of argument, libc5, but I don't know this to be true) expects your terminfo to be in /usr/share/terminfo, and the other (let's say libc6/glibc) expects /usr/lib/terminfo. I remember that the specific instructions in this bugfix/workaround were to do the following or equivalent: cd /usr/lib ln -s ../share/terminfo ./terminfo or: ln -s /usr/share/terminfo /usr/lib/terminfo So what this says is that the terminfo database/directory structure can be accessed by either path. When something goes to reference /usr/lib/terminfo, the symlink redirects it to essentially /usr/share/terminfo, which is where it really resides on your system. I personally prefer wherever possible to use relative symlinks, because they still hold, more often than break, across mount points, particularly NFS mounts, where the directory structure may be different on the different systems. Evidently the terminfo file moved between Red Hat 5.0 and 5.1, but Red Hat did not include a link to let applications built prior to 5.1 find it. Users reported that installing the link fixes the problem. Starting with ncurses versions dated 1998-12-12 (about a year before ncurses 5.0), ncurses sets the terminal for buffered i/o, but unfortunately is not able to restore it upon exit from curses (via endwin()). Thus after a file transfer that uses the fullscreen file transfer display, the terminal no longer echos nor responds immediately to Tab, ?, and other special command characters. The same thing happens on other platforms that use ncurses, e.g. FreeBSD. Workarounds: In Red Hat 7.1, when using C-Kermit in a Gnome terminal window, it was noticed that when the fullscreen file transfer display exits (via endwin()), the previous (pre-file-transfer-display) screen is restored. Thus you can't look at the completed display to see what happened. This is a evidently a new feature of xterm. I can only speculate that initscreen() and endwin() must send some kind of special escape sequences that command xterm to save and restore the screen. To defeat this effect, tell Linux you have a vt100 or other xterm-compatible terminal that is not actually an xterm, or else tell Kermit to SET TRANSFER DISPLAY to something besides FULLSCREEN. Run C-Kermit in a Terminal, Stuart, or xterm window, or when logged in remotely through a serial port or TELNET connection. C-Kermit does not work correctly when invoked directly from the NeXTSTEP File Viewer or Dock. This is because the terminal-oriented gtty, stty, & ioctl calls don't work on the little window that NeXTSTEP pops up for non-NeXTSTEP applications like Kermit. CBREAK and No-ECHO settings do not take effect in the command parser -- commands are parsed strictly line at a time. "set line /dev/cua" works. During CONNECT mode, the console stays in cooked mode, so characters are not transmitted until carriage return or linefeed is typed, and you can't escape back. If you want to run Kermit directly from the File Viewer, then launch it from a shell script that puts it in the desired kind of window, something like this (for "Terminal"): Terminal -Lines 24 -Columns 80 -WinLocX 100 -WinLocY 100 $FONT $FONTSIZE \ -SourceDotLogin -Shell /usr/local/bin/kermit & C-Kermit does not work correctly on a NeXT with NeXTSTEP 3.0 to which you have established an rlogin connection, due to a bug in NeXTSTEP 3.0, which has been reported to NeXT. The SET CARRIER command has no effect on the NeXT -- this is a limitation of the NeXTSTEP serial-port device drivers. Hardware flow control on the NeXT is selected not by "set flow rts/cts" in Kermit (since NeXTSTEP offers no API for this), but rather, by using a specially-named driver for the serial device: /dev/cufa instead /dev/cua; /dev/cufb instead of /dev/cub. This is available only on 68040-based NeXT models (the situation for Intel NeXTSTEP implementations is unknown). NeXT-built 68030 and 68040 models have different kinds of serial interfaces; the 68030 has a Macintosh-like RS-422 interface, which lacks RTS and CTS signals; the 68040 has an RS-423 (RS-232 compatible) interface, which supports the commonly-used modem signals. WARNING: the connectors look exactly the same, but the pins are used in completely DIFFERENT ways -- different cables are required for the two kinds of interfaces. IF YOU GET LOTS OF RETRANSMISSIONS during file transfer, even when using a /dev/cuf* device and the modem is correctly configured for RTS/CTS flow control, YOU PROBABLY HAVE THE WRONG KIND OF CABLE. On the NeXT, Kermit reportedly (by TimeMon) causes the kernel to use a lot of CPU time when using a "set line" connection. That's because there is no DMA channel for the NeXT serial port, so the port must interrupt the kernel for each character in or out. One user reported trouble running C-Kermit on a NeXT from within NeXT's Subprocess class under NeXTstep 3.0, and/or when rlogin'd from one NeXT to another: Error opening /dev/tty:, congm: No such device or address. Diagnosis: Bug in NeXTSTEP 3.0, cure unknown. See also: The comp.os.qnx newsgroup. Support for QNX 4.x was added in C-Kermit 5A(190). This is a full-function implementation, thoroughly tested on QNX 4.21 and later, and verified to work in both 16-bit and 32-bit versions. The 16-bit version was dropped in C-Kermit 7.0 since it can no longer be built successfully (after stripping most most features, I succeeded in getting it to compile and link without complaint, but the executable just beeps when you run it); for 16-bit QNX 4.2x, use C-Kermit 6.0 or earlier, or else G-Kermit. The 32-bit version (and the 16-bit version prior to C-Kermit 7.0) supports most of C-Kermit's advanced features including TCP/IP, high serial speeds, hardware flow-control, modem-signal awareness, curses support, etc. BUG: In C-Kermit 6.0 on QNX 4.22 and earlier, the fullscreen file transfer display worked fine the first time, but was fractured on subsequent file transfers. Cause and cure unknown. In C-Kermit 7.0 and QNX 4.25, this no longer occurs. It is not known if it would occur in C-Kermit 7.0 or later on earlier QNX versions. Dialout devices are normally /dev/ser1, /dev/ser2, ..., and can be opened explicitly with SET LINE. Reportedly, "/dev/ser" (no unit number) opens the first available /dev/sern device. Like all other Unix C-Kermit implementations, QNX C-Kermit does not provide any kind of terminal emulation. Terminal specific functions are provided by your terminal, terminal window (e.g. QNX Terminal or xterm), or emulator. QNX C-Kermit, as distributed, does not include support for UUCP line-locking; the QNX makefile entries (qnx32 and qnx16) include the -DNOUUCP switch. This is because QNX, as distributed, does not include UUCP, and its own communications software (e.g. qterm) does not use UUCP line locking. If you have a UUCP product installed on your QNX system, remove the -DNOUUCP switch from the makefile entry and rebuild. Then check to see that Kermit's UUCP lockfile conventions are the same as those of your UUCP package; if not, read the UUCP lockfile section of the Installation Instructions and make the necessary changes to the makefile entry (e.g. add -DHDBUUCP). QNX does, however, allow a program to get the device open count. This can not be a reliable form of locking unless all applications do it, so by default, Kermit uses this information only for printing a warning message such as: C-Kermit>set line /dev/ser1 WARNING - "/dev/ser1" looks busy... However, if you want to use it as a lock, you can do so with: SET QNX-PORT-LOCK { ON, OFF } This is OFF by default; if you set in ON, C-Kermit will fail to open any dialout device when its open count indicates that another process has it open. SHOW COMM (in QNX only) displays the setting, and if you have a port open, it also shows the open count. As of C-Kermit 8.0, C-Kermit's "open-count" form of line locking works only in QNX4, not in QNX6 (this might change in a future C-Kermit release). SECTION CONTENTS 3.6.1. SCO XENIX 3.6.2. SCO UNIX and OSR5 3.6.3. Unixware 3.6.4. Open UNIX 8 REFERENCES The same comments regarding terminal emulation and key mapping apply to SCO operating systems as to all other Unixes. C-Kermit is not a terminal emulator, and you can't use it to map F-keys, Arrow keys, etc. The way to do this is with xmodmap (xterm) or loadkeys (console). For a brief explanation, see Section 3.0.5. For a fuller explanation, ClICK HERE. Also see general comments on PC-based Unixes in Section 3.0. Old Xenix versions... Did you know: Xenix 3.0 is *older* than Xenix 2.0? In Xenix 2.3.4 and probably other Xenix versions, momentarily dropping DTR to hang up a modem does not work. DTR goes down but does not come up again. Workaround: Use SET MODEM HANGUP-METHOD MODEM-COMMAND. Anybody who would like to fix this is welcome to take a look at tthang() in ckutio.c. Also: modem signals can not be read in Xenix, and the maximum serial speed is 38400. There is all sorts of confusion among SCO versions, particularly when third- party communications boards and drivers are installed, regarding lockfile naming conventions, as well as basic functionality. As far as lockfiles go, all bets are off if you are using a third-party multiport board. At least you have the source code. Hopefully you also have a C compiler :-) Xenix 2.3.0 and later claim to support RTSFLOW and CTSFLOW, but this is not modern bidirectional hardware flow control; rather it implements the original RS-232 meanings of these signals for unidirectional half-duplex line access: If both RTSFLOW and CTSFLOW bits are set, Xenix asserts RTS when it wants to send data and waits for CTS assertion before it actually starts sending data (also, reportedly, even this is broken in Xenix 2.3.0 and 2.3.1). SCO systems tend to use different names (i.e. drivers) for the same device. Typically /dev/tty1a refers to a terminal device that has no modem control; open, read, write, and close operations do not depend on carrier. On the other hand, /dev/tty1A (same name, but with final letter upper case), is the same device with modem control, in which carrier is required (the SET LINE command does not complete until carrier appears, read/write operations fail if there is no carrier, etc). SCO OpenServer 5.0.5 and earlier do not support the reading of modem signals. Thus "show comm" does not list modem signals, and C-Kermit does not automatically pop back to its prompt when the modem hangs up the connection (drops CD). The ioctl() call for this is simply not implemented, at least not in the standard drivers. OSR5.0.6 attempts to deal with modem signals but fails; however OSR5.0.6a appears to function properly. Dialing is likely not to work well in SCO OpenServer 5.0.x because many of the serial-port APIs simply do not operate when using the standard drivers. For example, if DTR is dropped by the recommended method (setting speed to 0 for half a seconds, then restoring the speed), DTR and RTS go down but never come back up. When in doubt SET MODEM HANGUP-METHOD MODEM-COMMAND or SET DIAL HANGUP OFF. On the other hand, certain functions that might not (do not) work right or at all when using SCO drivers (e.g. high serial speeds, hardware flow control, and/or reading of modem signals) might work right when using third-party drivers. (Example: hardware flow control works, reportedly, only on uppercase device like tty1A -- not tty1a -- and only when CLOCAL is clear when using the SCO sio driver, but there are no such restrictions in, e.g., Digiboard drivers). One user reports that he can't transfer large files with C-Kermit under SCO OSR5.0.0 and 5.0.4 -- after the first 5K, everything falls apart. Same thing without Kermit -- e.g. with ftp over a PPP connection. Later, he said that replacing SCO's SIO driver with FAS, an alternative communications driver, made the problem go away: With regard to bidirectional serial ports on OpenServer 5.0.4, the following advice appeared on an SCO-related newsgroup: No amount of configuration information is going to help you on 5.0.4 unless it includes the kludge for the primary problem. With almost every modem, the 5.0.4 getty will barf messages and may or may not connect. There are 2 solutions and only one works on 5.0.4. Get the atdialer binary from a 5.0.0 system and substitute it for the native 5.0.4 atdialer. The other solution is to upgrade to 5.0.5. And, most of all, on any OpenServer products, do NOT run the badly broken Modem Manager. Configure the modems in the time honored way that dates back to Xenix. Use SCO-provided utilities for switching the directionality of a modem line, such as "enable" and "disable" commands. For example, to dial out on tty1a, which is normally set up for logins: disable tty1a kermit -l /dev/tty1a enable tty1a If a tty device is listed as an ACU in /usr/lib/uucp/Devices and is enabled, getty resets the ownership and permissions to uucp.uucp and 640 every time the device is released. If you want to use the device only for dialout, and you want to specify other owners or permissions, you should disable it in /usr/lib/uucp/Devices; this will prevent getty from doing things to it. You should also changes the device's file modes in /etc/conf/node.d/sio by changing fields 5-7 for the desired device(s); this determines how the devices are set if you relink the kernel. One SCO user of C-Kermit 5A(190) reported that only one copy of Kermit can run at a time when a Stallion Technologies multiport boards are installed. Cause, cure, and present status unknown (see Section 14 for more info regarding Stallion). Prior to SCO OpenServer 5.0.4, the highest serial port speed supported by SCO was 38400. However, in some SCO versions (e.g. OSR5) it is possible to map rarely-used lower speeds (like 600 and 1800) to higher ones like 57600 and 115200. To find out how, go to and search for "115200". In OSR5.0.4, serial speeds up to 921600 are supported through the POSIX interface; C-Kermit 6.1.193 or later, when built for OSR5.0.4 using /bin/cc (NOT the UDK, which hides the high-speed definitions from CPP), supports these speeds, but you might be able to run this binary on earlier releases to get the high serial speeds, depending on various factors, described by Bela Lubkin of SCO: Serial speeds under SCO Unix / Open Desktop / OpenServer ======================================================== Third party drivers (intelligent serial boards) may provide any speeds they desire; most support up to 115.2Kbps. SCO's "sio" driver, which is used to drive standard serial ports with 8250/16450/16550 and similar UARTs, was limited to 38400bps in older releases. Support for rates through 115.2Kbps was added in the following releases: SCO OpenServer Release 5.0.0 (requires supplement "rs40b") SCO OpenServer Release 5.0.2 (requires supplement "rs40a" or "rs40b") SCO OpenServer Release 5.0.4 or later SCO Internet FastStart Release 1.0.0 or later SCO supplements are at; the "rs40" series are under directory /Supplements/internet Kermit includes the high serial speeds in all OSR5 builds, but that does not necessarily mean they work. For example, on our in-house 5.0.5 system, SET SPEED 57600 or higher seems to succeed (no error occurs) but when we read the speed back the driver says it is 50. Similarly, 76800 becomes 75, and 115200 becomes 110. Testing shows the resulting speed is indeed the low one we read back, not the high one we asked for. Moral: Use speeds higher than 38400 with caution on SCO OSR5. Reportedly, if you have a script that makes a TCP/IP SET HOST (e.g. Telnet) connection to SCO 3.2v4.2 with TCP/IP 1.2.1, and then does the following: script $ exit hangup this causes a pseudoterminal (pty) to be consumed on the SCO system; if you do it enough times, it will run out of ptys. An "exit" command is being sent to the SCO shell, and a HANGUP command is executed locally, so the chances are good that both sides are trying to close the connection at once, perhaps inducing a race condition in which the remote pty is not released. It was speculated that this would be fixed by applying SLS net382e, but it did not. Meanwhile, the workaround is to insert a "pause" between the SCRIPT and HANGUP commands. (The situation with later SCO releases is not known.) SCO UNIX and OpenServer allow their console and/or terminal drivers to be configured to translate character sets for you. DON'T DO THIS WHEN USING KERMIT! First of all, you don't need it -- Kermit itself already does this for you. And second, it will (a) probably ruin the formatting of your screens (depending on which emulation you are using); and (b) interfere with all sorts of other things -- legibility of non-ASCII text on the terminal screen, file transfer, etc. Use: mapchan -n to turn off this feature. Note that there is a multitude of SCO entries in the makefile, many of them exhibiting an unusually large number of compiler options. Some people actually understand all of this. Reportedly, things are settling down with SCO OpenServer 5.x and Unixware 7 (and Open UNIX 8 and who knows what the next one will be -- Linux probably) -- the SCO UDK compiler is said to generate binaries that will run on either platform, by default, automatically. When using gcc or egcs, on the other hand, differences persist, plus issues regarding the type of binary that is generated (COFF, ELF, etc), and where and how it can run. All of this could stand further clarification by SCO experts. Unixware changed hands several times before landing at SCO, and so has its own section in this document. (Briefly: AT&T UNIX Systems Laboratories sold the rights to the UNIX name and to System V R4 (or R5?) to Novell; later Novell spun its UNIX division off into a new company called Univel, which eventually was bought by SCO, which later was bought by Caldera, which later sort of semi-spun-off SCO...) SCO was bought by Caldera in 2000 or 2001 and evolved Unixware 7.1 into Caldera Open UNIX 8.00. It's just like Unixware 7.1 as far as Kermit is concerned (the Unixware 7.1 makefile target works for Open UNIX 8.00, and in fact a Unixware 7.1 Kermit binary built on Unixware 7.1 runs under OU8; a separate OU8 makefile target exists simply to generate an appropriate program startup herald). Open Unix is now defunct; subsequent releases are called UnixWare again (e.g. UnixWare 7.1.3). SECTION CONTENTS 3.7.1. Serial Port Configuration 3.7.2. Serial Port Problems 3.7.3. SunLink X.25 3.7.4. Sun Workstation Keyboard Mapping 3.7.5. Solaris 2.4 and Earlier REFERENCES And about serial communications in particular, see "Celeste's Tutorial on Solaris 2.x Modems and Terminals": In particular: For PC-based Solaris, also see general comments on PC-based Unixes in Section 3.0. Don't expect Solaris or any other kind of Unix to work right on a PC until you resolve all interrupt conflicts. Don't expect to be able to use COM3 or COM4 (or even COM2) until you have configured their addresses and interrupts. Your serial port can't be used -- or at least won't work right -- until it is enabled in Solaris. For example, you get a message like "SERIAL: Operation would block" when attempting to dial. This probably indicates that the serial port has not been enabled for use with modems. You'll need to follow the instructions in your system setup or management manual, such as (e.g.) the Desktop SPARC Sun System & Network Manager's Guide, which should contain a section "Setting up Modem Software"; read it and follow the instructions. These might (or might not) include running a program called "eeprom", editing some system configuration file (such as, for example: /platform/i86pc/kernel/drv/asy.conf and then doing a configuration reboot, or running some other programs like drvconfig and devlinks. "man eeprom" for details. Also, on certain Sun models like IPC, the serial port hardware might need to have a jumper changed to make it an RS-232 port rather than RS-423. eeprom applies only to real serial ports, not to "Spiff" devices (serial port expander), in which case setup with Solaris' admintool is required. Another command you might need to use is pmadm, e.g.: pmadm -d -p zsmon -s tty3 pmadm -e -p zsmon -s tty3 You can use the following command to check if a process has the device open: fuser -f /dev/term/3 In some cases, however (according to Sun support, May 2001) "It is still possible that a zombie process has hold of the port EVEN IF there is no lock file and the fuser command comes up empty. In that case, the only way to resolve the problem is by rebooting." If you can't establish communication through a serial port to a device that is not asserting CD (Carrier Detect), try setting the environment variable "ttya-ignore-cd" to "true" (replace "ttya" with the port name). Current advice from Sun is to always the /dev/cua/x devices for dialing out, rather than the /dev/term/x. Nevertheless, if you have trouble dialing out with one, try the other. Reportedly, if you start C-Kermit and "set line" to a port that has a modem connected to it that is not turned on, and then "set flow rts/cts", there might be some (unspecified) difficulties closing the device because the CTS signal is not coming in from the modem. The built-in SunLink X.25 support for Solaris 2.3/2.4./25 and SunLink 8.01 or 9.00 works OK provided the X.25 system has been installed and initialized properly. Packet sizes might need to be reduced to 256, maybe even less, depending on the configuration of the X.25 installation. On one connection where C-Kermit 6.0 was tested, very large packets and window sizes could be used in one direction, but only very small ones would work in the other. In any case, according to Sun, C-Kermit's X.25 support is superfluous with SunLink 8.x / Solaris 2.3. Quoting an anonymous Sun engineer: ... there is now no need to include any X.25 code within kermit. As of X.25 8.0.1 we support the use of kermit, uucp and similar protocols over devices of type /dev/xty. This facility was there in 8.0, and should also work on the 8.0 release if patch 101524 is applied, but I'm not 100% sure it will work in all cases, which is why we only claim support from 8.0.1 onwards. When configuring X.25, on the "Advanced Configuration->Parameters" screen of the x25tool you can select a number of XTY devices. If you set this to be > 1, press Apply, and reboot, you will get a number of /dev/xty entries created. Ignore /dev/xty0, it is a special case. All the others can be used exactly as if they were a serial line (e.g. /dev/tty) connected to a modem, except that instead of using Hayes-style commands, you use PAD commands. From kermit you can do a 'set line' command to, say, /dev/xty1, then set your dialing command to be "CALL 12345678", etc. All the usual PAD commands will work (SET, PAR, etc). I know of one customer in Australia who is successfully using this, with kermit scripts, to manage some X.25-connected switches. He used standard kermit, compiled for Solaris 2, with X.25 8.0 xty devices. Hints for using a Sun workstation keyboard for VT emulation when accessing VMS, from the comp.os.vms newsgroup: From: Jerry Leichter <leichter@smarts.com> Newsgroups: comp.os.vms Subject: Re: VT100 keyboard mapping to Sun X server Date: Mon, 19 Aug 1996 12:44:21 -0400 > I am stuck right now using a Sun keyboard (type 5) on systems running SunOS > and Solaris. I would like to use EVE on an OpenVMS box with display back to > the Sun. Does anyone know of a keyboard mapping (or some other procedure) > which will allow the Sun keyboard to approximate a VT100/VT220? You can't get it exactly - because the keypad has one fewer key - but you can come pretty close. Here's a set of keydefs I use: Put this in a file - I use "keydefs" in my home directory and feed it into xmodmap: xmodmap - <$HOME/keydefs This takes care of the arrow keys and the "calculator" key cluster. The "+" key will play the role of the DEC "," key. The Sun "-" key will be like the DEC "-" key, though it's in a physically different position - where the DEC PF4 key is. The PF4 key is ... damn, I'm not sure where "key 105" is. I *think* it may be on the leftmost key of the group of four just above the "calculator" key cluster. I also execute the following (this is all in my xinitrc file): xmodmap -e 'keysym KP_Decimal = KP_Decimal' xmodmap -e 'keysym BackSpace = Delete BackSpace' \ -e 'keysym Delete = BackSpace Delete' xmodmap -e 'keysym KP_Decimal = Delete Delete KP_Decimal' xmodmap -e 'add mod1 = Meta_R' xmodmap -e 'add mod1 = Meta_L' Beware of one thing about xmodmap: Keymap changes are applied to the *whole workstation*, not just to individual windows. There is, in fact, no way I know of to apply them to individual windows. These definitions *may* confuse some Unix programs (and/or some Unix users). If you're using Motif, you may also need to apply bindings at the Motif level. If just using xmodmap doesn't work, I can try and dig that stuff up for you. The following is a report from a user of C-Kermit 8.0 on Solaris 8 and 9, who had complained that while Kermit file transfers worked perfectly on direct (non-PPP) dialout connections, they failed miserably on PPP connections. We suggested that the PPP dialer probably was not setting the port and/or modem up in the same way that Kermit did: I want to get back on this and tell you what the resolution was. You pointed me in the direction of flow control, which turned out to be the key. Some discussion on the comp.unix.solaris newsgroup led to some comments from Greg Andrews about the need to use the uucp driver to talk to the modem (/dev/cua/a). I had to remind Greg that no matter what the manpages for the zs and se drivers say, the ppp that Sun released with Solaris 8 7/01, and has in Solaris 9, is a setuid root program, and simply trying to make a pppd call from user space specifying /dev/cua/a would fail because of permissions. Greg finally put the question to the ppp people, who came back with information that is not laid out anywhere in the docs available for Solaris users. Namely, put /dev/cua/a in one of the privileged options files in the /etc/ppp directory. That, plus resetting the OBP ttya-ignore-cd flag (this is Sun hardware) to false, seems to have solved the problems. While I note that I had installed Kermit suid to uucp to use /dev/cua/a on this particular box, it seems to run fine through /dev/term/a. Not so with pppd. With this change in place, I seem to be able to upload and download through telnet run on Kermit with the maximum length packets. I note that the window allocation display does show STREAMING, using telnet. Running ssh on Kermit, I see the standard 1 of 30 windows display, and note that there appears to be a buffer length limit between 1000 and 2000 bytes. Run with 1000, and it's tick-tock, solid as a rock. With 2000 I see timeout errors and RTS/CTS action on the modem. Kermit's packet-length and other controls let you make adjustments like this to get around whatever obstacles might be thrown up -- in this case (running Kermit over ssh), the underling Solaris PTY driver. C-Kermit can't be compiled successfully under Solaris 2.3 using SUNWspro cc 2.0.1 unless at least some of the following patches are applied to cc (it is not known which one(s), if any, fix the problem): With unpatched cc 2.0.1, the symptom is that certain modules generate truncated object files, resulting in many unresolved references at link time. The rest of the problems in this section have to do with bidirectional terminal ports and the Solaris Port Monitor. A bug in C-Kermit 5A ticked a bug in Solaris. The C-Kermit bug was fixed in version 6.0, and the Solaris bug was fixed in 2.4 (I think, or maybe 2.5). Reportedly, "C-Kermit ... causes a SPARCstation running Solaris 2.3 to panic after the modem connects. I have tried compiling C-Kermit with Sun's unbundled C compiler, with GCC Versions 2.4.5 and 2.5.3, with make targets 'sunos51', 'sunos51tcp', 'sunos51gcc', and even 'sys5r4', and each time it compiles and starts up cleanly, but without fail, as soon as I dial the number and get a 'CONNECT' message from the modem, I get: BAD TRAP kermit: Data fault kernel read fault at addr=0x45c, pme=0x0 Sync Error Reg 80 <INVALID> ... panic: Data Fault. ... Rebooting... The same modem works fine for UUCP/tip calling." Also (reportedly), this only happens if the dialout port is configured as in/out via admintool. If it is configured as out-only, no problem. This is the same dialing code that works on hundreds of other System-V based Unix OS's. Since it should be impossible for a user program to crash the operating system, this problem must be chalked up to a Solaris bug. Even if you SET CARRIER OFF, CONNECT, and dial manually by typing ATDTnnnnnnn, the system panics as soon as the modem issues its CONNECT message. (Clearly, when you are dialing manually, C-Kermit does not know a thing about the CONNECT message, and so the panic is almost certainly caused by the transition of the Carrier Detect (CD) line from off to on.) This problem was reported by many users, all of whom say that C-Kermit worked fine on Solaris 2.1 and 2.2. If the speculation about CD is true, then a possible workaround might be to configure the modem to leave CD on (or off) all the time. Perhaps by the time you read this, a patch will have been issued for Solaris 2.3. The following is from Karl S. Marsh, Systems & Networks Administrator, AMBIX Systems Corp, Rochester, NY: Environment: Solaris 2.3 Patch 101318-45 C-Kermit 5A(189) (and presumably this applies to 188 and 190 also). eeprom setting: ttya-rts-dtr-off=false ttya-ignore-cd=false ttya-mode=19200,8,n,8,- To use C-Kermit on a bidirectional port in this environment, do not use admintool to configure the port. Use admintool to delete any services running on the port and then quit admintool and issue the following command: pmadm -a -p zsmon -s ttyb -i root -fu -v 1 -m "`ttyadm -b -d /dev/term/b \ -l conttyH -m ldterm,ttcompat -s /usr/bin/login -S n`" [NOTE: This was copied from a blurry fax, so please check it carefully] where: -a = Add service -p = pmtag (zsmon) -s = service tag (ttyb) -i = id to be associated with service tag (root) -fu = create utmp entry -v = version of ttyadm -m = port monitor-specific portion of the port monitor administrative file entry for the service -b = set up port for bidirectional use -d = full path name of device -l = which ttylabel in the /etc/ttydefs file to use -m = a list of pushable STREAMS modules -s = pathname of service to be invoked when connection request received -S = software carrier detect on or off (n = off) "This is exactly how I was able to get Kermit to work on a bi-directional port without crashing the system." On the Solaris problem, also see SunSolve Bug ID 1150457 ("Using C-Kermit, get Bad Trap on receiving prompt from remote system"). Another user reported "So, I have communicated with the Sun tech support person that submitted this bug report [1150457]. Apparently, this bug was fixed under one of the jumbo kernel patches. It would seem that the fix did not live on into 101318-45, as this is EXACTLY the error that I see when I attempt to use kermit on my system." Later (Aug 94)... C-Kermit dialout successfully tested on a Sun4m with a heavily patched Solaris 2.3. The patches most likely to have been relevant: Still later (Nov 94): another user (Bo Kullmar in Sweden) reports that after using C-Kermit to dial out on a bidirectional port, the port might not answer subsequent incoming calls, and says "the problem is easy enough to fix with the Serial Port Manager; I just delete the service and install it again using the graphical interface, which underneath uses commands like sacadm and pmadm." Later Bo reports, "I have found that if I run Kermit with the following script then it works. This script is for /dev/cua/a, "-s a" is the last a in /dev/cua/a: #! /bin/sh kermit sleep 2 surun pmadm -e -p zsmon -s a For additional information, see "Celeste's Tutorial on SunOS 4.1.3+ Modems and Terminals": For FAQs, etc, from Sun, see: For history of Sun models and SunOS versions, see (should be all the same): Sun SPARCstation users should read the section "Setting up Modem Software" in the Desktop SPARC Sun System & Network Manager's Guide. If you don't set up your serial ports correctly, Kermit (and other communications software) won't work right. Also, on certain Sun models like IPC, the serial port hardware might need to have a jumper changed to make it an RS-232 port rather than RS-423. Reportedly, C-Kermit does not work correctly on a Sun SPARCstation in an Open Windows window with scrolling enabled. Disable scrolling, or else invoke Kermit in a terminal emulation window (xterm, crttool, vttool) under SunView (this might be fixed in later SunOS releases). On the Sun with Open Windows, an additional symptom has been reported: outbound SunLink X.25 connections "magically" translate CR typed at the keyboard into LF before transmission to the remote host. This doesn't happen under SunView. SET CARRIER ON, when used on the SunOS 4.1 version of C-Kermit (compiled in the BSD universe), causes the program to hang uninterruptibly when SET LINE is issued for a device that is not asserting carrier. When Kermit is built in the Sys V universe on the same computer, there is no problem (it can be interrupted with Ctrl-C). This is apparently a limitation of the BSD-style tty driver. SunOS 4.1 C-Kermit has been observed to dump core when running a complicated script program under cron. The dump invariably occurs in ttoc(), while trying to output a character to a TCP/IP TELNET connection. ttoc() contains a write() call, and when the system or the network is very busy, the write() call can get stuck for long periods of time. To break out of deadlocks caused by stuck write() calls, there is an alarm around the write(). It is possible that the core dump occurs when this alarm signal is caught. (This one has not been observed recently -- possibly fixed in edit 190.) On Sun computers with SunOS 4.0 or 4.1, SET FLOW RTS/CTS works only if the carrier signal is present from the communication device at the time when C-Kermit enters packet mode or CONNECT mode. If carrier is not sensed (e.g. when dialing), C-Kermit does not attempt to turn on RTS/CTS flow control. This is because the SunOS serial device driver does not allow characters to be output if RTS/CTS is set (CRTSCTS) but carrier (and DSR) are not present. Workaround (maybe): SET CARRIER OFF before giving the SET LINE command, establish the connection, then SET FLOW RTS/CTS It has also been reported that RTS/CTS flow control under SunOS 4.1 through 4.1.3 works only on INPUT, not on output, and that there is a patch from Sun to correct this problem: Patch-ID# T100513-04, 20 July 1993 (this patch might apply only to SunOS 4.1.3). It might also be necessary to configure the eeprom parameters of the serial port; e.g. do the following as root at the shell prompt: eeprom ttya-ignore-cd=false eeprom ttya-rts-dtr-off=true There have been reports of file transfer failures on Sun-3 systems when using long packets and/or large window sizes. One user says that when this happens, the console issues many copies of this message: chaos vmunix: zs1: ring buffer overflow This means that SunOS is not scheduling Kermit frequently enough to service interrupts from the zs serial device (Zilog 8350 SCC serial communication port) before its input silo overflows. Workaround: use smaller packets and/or a smaller window size, or use "nice" to increase Kermit's priority. Use hardware flow control if available, or remove other active processes before running Kermit. SunLink X.25 support in C-Kermit 5A(190) was built and tested successfully under SunOS 4.1.3b and SunLink X.25 7.00. See also: The comp.unix.ultrix and comp.sys.dec newsgroups. There is no hardware flow control in Ultrix. That's not a Kermit deficiency, but an Ultrix one. When sending files to C-Kermit on a Telnet connection to a remote Ultrix system, you must SET PREFIXING ALL (or at least prefix more control characters than are selected by SET PREFIXING CAUTIOUS). Reportedly, DEC ULTRIX 4.3 is immune to C-Kermit's disabling of SIGQUIT, which is the signal that is generated when the user types Ctrl-\, which kills the current process (i.e. C-Kermit) and dumps core. Diagnosis and cure unknown. Workaround: before starting C-Kermit -- or for that matter, when you first log in because this applies to all processes, not just Kermit -- give the following Unix command: stty quit undef Certain operations driven by RS-232 modem signal do not work on DECstations or other DEC platforms whose serial interfaces use MMP connectors (DEC version of RJ45 telephone jack with offset tab). These connectors convey only the DSR and DTR modem signals, but not carrier (CD), RTS, CTS, or RI. Use SET CARRIER OFF to enable communication, or "hotwire" DSR to CD. The maximum serial speed on the DECstation 5000 is normally 19200, but various tricks are available (outside Kermit) to enable higher rates. For example, on the 5000/200, 19200 can be remapped (somehow, something to do with "a bit in the SIR", whatever that is) to 38400, but in software you must still refer to this speed as 19200; you can't have 19200 and 38400 available at the same time. 19200, reportedly, is also the highest speed supported by Ultrix, but NetBSD reportedly supports speeds up to 57600 on the DECstation, although whether and how well this works is another question. In any case, given the lack of hardware flow control in Ultrix, high serial speeds are problematic at best. See also: Also see general comments on PC-based Unixes in Section 3.0. By the way, this section is separate from the SCO (Caldera) section because at the time this section was started, Unixware was owned by a company called Univel. Later it was sold to Novell, and then to SCO. Still later, SCO was sold to Caldera. In Unixware 2.0 and later, the preferred serial device names (drivers) are /dev/term/00 (etc), rather than /dev/tty00 (etc). Note the following correspondence of device names and driver characteristics: New name Old name Description /dev/term/00 /dev/tty00 ??? /dev/term/00h /dev/tty00h Modem signals and hardware flow control /dev/term/00m /dev/tty00m Modem signals(?) /dev/term/00s /dev/tty00s Modem signals and software flow control /dev/term/00t /dev/tty00t ??? Lockfile names use device.major.minor numbers, e.g.: /var/spool/locks/LK.7679.003.005 The minor number varies according to the device name suffix (none, h, m, s, or t). Only the device and major number are compared, and thus all of the different names for the same physical device (e.g. all of those shown in the table above) interlock effectively. Prior to UnixWare 7, serial speeds higher than 38400 are not supported. In UnixWare 7, we also support 57600 and 115200, plus some unexpected ones like 14400, 28800, and 76800, by virtue of a strange new interface, evidently peculiar to UnixWare 7, discovered while digging through the header files: tcsetspeed(). Access to this interface is allowed only in POSIX builds, and thus the UnixWare 7 version of C-Kermit is POSIX-based, unlike C-Kermit for Unixware 1.x and 2.x (since the earlier UnixWare versions did not support high serial speeds, period). HOWEVER, turning on POSIX features engages all of the "#if (!_POSIX_SOURCE)" clauses in the UnixWare header files, which in turn prevent us from having modem signals, access to the hardware flow control APIs, select(), etc -- in short, all the other things we need in communications software, especially when high speeds are used. Oh the irony. And so C-Kermit must be shamelessly butchered -- as it has been so many times before -- to allow us to have the needed features from the POSIX and non-POSIX worlds. See the UNIXWAREPOSIX sections of ckutio.c. After the butchery, we wind up with Unixware 2.x having full modem-signal capability, but politically-correct Unixware 7.x lacking the ability to automatically detect a broken connection when carrier drops. Meanwhile the Unixware tcsetspeed() function allows any number at all (any long, 0 or positive) as an argument and succeeds if the number is a legal bit rate for the serial device, and fails otherwise. There is no list anywhere of legal speeds. Thus the SET SPEED keyword table ("set speed ?" to see it) is hardwired based on trial and error with all known serial speeds, the maximum being 115200. However, to allow for the possibility that other speeds might be allowed in the future (or with different port drivers), the SET SPEED command for UnixWare 7 only allows you to specify any number at all; a warning is printed if the number is not in the list, but the number is accepted anyway; the command succeeds if tcsetspeed() accepts the number, and fails otherwise. In C-Kermit 8.0 testing, it was noticed that the POSIX method for hanging up the phone by dropping DTR (set speed 0, pause, restore speed) did not actually drop DTR. The APIs do not return any error indication, but nothing happens. I changed tthang() to skip the special case I had made for Unixware and instead follow the normal path: if TIOCSDTR is defined use that, otherwise blah blah... It turns out TIOCSDTR *is* defined, and it works. So in Unixware (at least in 2.1.3) we can read modem signals, hangup by toggling DTR, and so on, BUT... But once the remote hangs up and Carrier drops, the API for reading modem signals ceases to function; although the device is still open, the TIOCMGET ioctl always raises errno 6 = ENXIO, "No such device or address". Old business: Using C-Kermit 6.0 on the UnixWare 1.1 Application Server, one user reported a system panic when the following script program is executed: set line /dev/tty4 set speed 9600 output \13 connect The panic does not happen if a PAUSE is inserted: set line /dev/tty4 set speed 9600 pause 1 output \13 connect This is using a Stallion EasyIO card installed as board 0 on IRQ 12 on a Gateway 386 with the Stallion-supplied driver. The problem was reported to Novell and Stallion and (reportedly) is now fixed. Reportedly, version 5A(190), when built under Apollo SR10 using "make sr10-bsd", compiles, links, and executes OK, but leaves the terminal unusable after it exits -- the "cs7" or "cs8" (character size) parameter has become cs5. The terminal must be reset from another terminal. Cause and cure unknown. Suggested workaround: Wrap Kermit in a shell script something like: kermit @* stty sane C-Kermit 7.0 was too big to be built on Tandy Xenix, even in a minimum configuration; version 6.0 is the last one that fits. Reportedly, in C-Kermit 6.0, if you type lots of Ctrl-C's during execution of the initialization file, ghost Kermit processes will be created, and will compete for the keyboard. They can only be removed via "kill -9" from another terminal, or by rebooting. Diagnosis -- something strange happening with the SIGINT handler while the process is reading the directory (it seems to occur during the SET PROMPT [\v(dir)] ... sequence). Cure: unknown. Workaround: don't interrupt C-Kermit while it is executing its init file on the Tandy 16/6000. While putting together and testing C-Kermit 8.0, it was discovered that binaries built for one version of Tru64 Unix (e.g. 4.0G) might exhibit very strange behavior if run on a different version of Tru64 Unix (e.g. 5.1A). The typical symptom was that a section of the initialization file would be skipped, notably locating the dialing and/or network directory as well as finding and executing the customization file, ~/.mykermrc. This problem also is reported to occur on Tru64 Unix 5.0 (Rev 732) even when running a C-Kermit binary that was built there. However, the Tru64 5.1A binary works correctly on 5.0. Go figure. When making Telnet connections to a Digital Unix or Tru64 system, and your Telnet client forwards your user name, the Telnet server evidently stuffs the username into login's standard input, and you see: login: ivan Password: This is clearly going to play havoc with scripts that look for "login:". Workaround (when Kermit is your Telnet client): SET LOGIN USER to nothing, to prevent Kermit from sending your user ID. Before you can use a serial port on a new Digital Unix system, you must run uucpsetup to enable or configure the port. Evidently the /dev/tty00 and 01 devices that appear in the configuration are not usable; uucpsetup turns them into /dev/ttyd00 and 01, which are. Note that uucpsetup and other uucp-family programs are quite primitive -- they only know about speeds up to 9600 bps and their selection of modems dates from the early 1980s. None of this affects Kermit, though -- with C-Kermit, you can use speeds up to 115200 bps (at least in DU4.0 and later) and modern modems with hardware flow control and all the rest. Reportedly, if a modem is set for &S0 (assert DSR at all times), the system resets or drops DTR every 30 seconds; reportedly DEC says to set &S1. Digital Unix 3.2 evidently wants to believe your terminal is one line longer than you say it is, e.g. when a "more" or "man" command is given. This is has nothing to do with C-Kermit, but tends to annoy those who use Kermit or other terminal emulators to access Digital Unix systems. Workaround: tell Unix to "stty rows 23" (or whatever). Reportedly, there is some bizarre behavior when trying to use a version of C-Kermit built on one Digital Unix 4.0 system on another one, possibly due to differing OS or library revision levels; for example, the inability to connect to certain TCP/IP hosts. Solution: rebuild C-Kermit from source code on the system where you will be using it. Digital Unix tgetstr() causes a segmentation fault. C-Kermit 7.0 added #ifdefs to avoid calling this routine in Digital Unix. As a result, the SCREEN commands always send ANSI escape sequences -- even though curses knows your actual terminal type. Reportedly the Tru64 Unix 4.0E 1091 Telnet server does not tolerate streaming transfers into itself, at least not when the sending Kermit is on the same local network. Solution: tell one Kermit or the other (or both) to "set streaming off". This might or might be the case with earlier and/or later Tru64, Digital Unix, and OSF/1 releases. See also: About IRIX version numbers: "uname -a" tells the "two-digit" version number, such as "5.3" or "6.5". The three-digit form can be seen with "uname -R". (this information is unavailable at the simple API level). Supposedly all three-digit versions within the same two-digit version (e.g. 6.5.2, 6.5.3) are binary compatible; i.e. a binary built on any one of them should run on all others. The "m" suffix denotes just patches; the "f" suffix indicates that features were added. An IRIX binary built on lower MIPS model (Instruction Set Architecture, ISA) can run on higher models, but not vice versa: Furthermore, there are different Application Binary Interfaces (ABIs):Furthermore, there are different Application Binary Interfaces (ABIs): Thus a prebuilt IRIX binary works on a particular machine only if (a) the machine's IRIX version (to one decimal place) is equal to or greater than the version under which the binary was built; (b) the machine's MIPS level is greater or equal to that of the binary; and (c) the machine supports the ABI of the binary. If all three conditions are not satisfied, of course, you can build a binary yourself from source code since, unlike some other Unix vendors, SGI does supply a C compiler and libraries. SGI did not supply an API for hardware flow control prior to IRIX 5.2. C-Kermit 6.1 and higher for IRIX 5.2 and higher supports hardware flow control in the normal way, via "set flow rts/cts". For hardware flow control on earlier IRIX and/or C-Kermit versions, use the ttyf* (modem control AND hardware flow control) devices and not the ttyd* (direct) or ttym* (modem control but no hardware flow control) ones, and obtain the proper "hardware handshaking" cable from SGI, which is incompatible with the ones for the Macintosh and NeXT even though they look the same ("man serial" for further info) and tell Kermit to "set flow keep" and "set modem flow rts/cts". Serial speeds higher than 38400 are available in IRIX 6.2 and later, on O-class machines (e.g. Origin, Octane) only, and are supported by C-Kermit 7.0 and later. Commands such as "set speed 115200" may be given on other models (e.g. Iris, Indy, Indigo) but will fail because the OS reports an invalid speed for the device. Experimentation with both IRIX 5.3 and 6.2 shows that when logged in to IRIX via Telnet, that remote-mode C-Kermit can't send files if the packet length is greater than 4096; the Telnet server evidently has this restriction (or bug), since there is no problem sending long packets on serial or rlogin connections. However, it can receive files with no problem if the packet length is greater than 4096. As a workaround, the FAST macro for IRIX includes "set send packet-length 4000". IRIX 6.5.1 does not have this problem, so evidently it was fixed some time after IRIX 6.2. Tests show file-transfer speeds are better (not worse) with 8K packets than with 4K packets from IRIX 6.5.1. Reportedly some Indys have bad serial port hardware. IRIX 5.2, for example, needs patch 151 to work around this; or upgrade to a later release. Similarly, IRIX 5.2 has several problems with serial i/o, flow control, etc. Again, patch or upgrade. Reportedly on machines with IRIX 4.0, Kermit cannot be suspended by typing the suspend ("swtch") character if it was started from csh, even though other programs can be suspended this way, and even though the Z and SUSPEND commands still work correctly. This is evidently because IRIX's csh does not deliver the SIGTSTP signal to Kermit. The reason other programs can be suspended in the same environment is probably that they do not trap SIGTSTP themselves, so the shell is doing the suspending rather than the application. Also see notes about IRIX 3.x in the C-Kermit for Unix Installation Instructions. If you have problems making TCP/IP connections in versions of IRIX built with GCC 2.95.2, see the bugs section of:. Reportedly, if you allow gcc to compile C-Kermit on Irix you should be aware that there might be problems with some of the network code. The specifics are at; scroll down to the "known bugs" section at the end of the document. See also: The comp.sys.be newsgroup. The BeBox has been discontinued and BeOS repositioned for PC platforms. The POSIX parts of BeOS are not finished, nor is the sockets library, therefore a fully functional version of C-Kermit is not possible. In version 6.0 of C-Kermit, written for BeOS DR7, it was possible to: The following do not work: C-Kermit does not work on BeOS DR8 because of changes in the underlying APIs. Unfortunately not enough changes were made to allow the regular POSIX-based C-Kermit to work either. Note: the lack of a fork() service requires the select()-based CONNECT module, but there is no select(). There is a select() in DR8, but it doesn't work. C-Kermit 7.0 was built for BeOS 4.5 and works in remote mode. It does not include networking support since the APIs are still not there. It is not known if dialing out works, but probably not. Be experts are welcome to lend a hand. Somebody downloaded the C-Kermit 6.0 binary built under DG/UX 5.40 and ran it under DG/UX 5.4R3.10 -- it worked OK except that file dates for incoming files were all written as 1 Jan 1970. Cause and cure unknown. Workaround: SET ATTRIBUTE DATE OFF. Better: Use a version of C-Kermit built under and for DG/UX 5.4R3.10. Reportedly, when coming into a Sequent Unix (DYNIX) system through an X.25 connection, Kermit doesn't work right because the Sequent's FIONREAD ioctl returns incorrect data. To work around, use the 1-character-at-a-time version of myread() in ckutio.c (i.e. undefine MYREAD in ckutio.c and rebuild the program). This is unsatisfying because two versions of the program would be needed -- one for use over X.25, and the other for serial and TCP/IP connections. Some NebBSD users have reported difficulty escaping back from CONNECT mode, usually when running NetBSD on non-PC hardware. Probably a keyboard issue. NetBSD users have also reported that C-Kermit doesn't pop back to the prompt if the modem drops carrier. This needs to be checked out & fixed if possible. (All the above seems to work properly in C-Kermit 7.0 and later.) Mac OS X is Apple's 4.4BSD Unix variety, closely related to FreeBSD, but different. "uname -a" is singularly uninformative, as in Linux, giving only the Darwin kernel version number. The way to find out the actual Mac OS X version is with /usr/bin/sw_vers -productNameor: /usr/bin/sw_vers -productVersion fgrep -A 1 'ProductVersion' /System/Library/CoreServices/SystemVersion.plist Here are some points to be aware of: set file eol cr set file character-set apple-quickdraw send /text filename C-Kermit 9.0 works "out of the box" with third-party serial ports on Mac OS X, because it is built by default ("make macosx") without the "UUCP lockfile" feature. If you have C-Kermit 9.0 on a personal Macintosh, you can skip the next section. (where xxxx is the name of the group for users to whom serial-port access is to be granted). Use "admin" or other existing group, or create a new group if desired. NB: In the absence of official guidance from Apple or anyone else, we choose /var/spool/lock as the lockfile directory because this directory (a) already exists on vanilla Mac OS X installations, and (b) it is the directory used for serial-port lockfiles on many other platforms. chmod g+rw,o-rw /dev/cu.* If you do the above, then there's no need to become root to use Kermit, or to make Kermit suid or sgid. Just do this: chmod 775 wermit mv wermit /usr/local/kermit (or whatever spot is more appropriate, e.g. /usr/bin/). For greater detail about installation, CLICK HERE. Alternatively, to build a pre-9.0 version of C-Kermit without UUCP lockfile support, set the NOUUCP flag; e.g. (for Mac OS 10.4): make macosx10.4 KFLAGS=-DNOUUCP This circumvents the SET PORT failure "?Access to lockfile directory denied". But it also sacrifices Kermit's ability to ensure that only one copy of Kermit can have the device open at a time, since Mac OS X is the same as all other varieties of Unix in that exclusive access to serial ports is not enforced in any way. But if it's for your own desktop machine that nobody else uses, a -DNOUUCP version might be adequate and preferable to the alternatives. To build C-Kermit 9.0 with UUCP support, do: make macosx KFLAGS=-UNOUUCP (note: "-U", not "-D). Keyspan also sells a USB Twin Serial Adapter that gives you two Mini-Din8 RS-422 ports, that are no better (or worse) for communicating with modems or serial devices than a real Mac Din-8 port was. In essence, you get Data In, Data Out, and two modem signals. It looks to me as if the signals chosen by Keyspan are RTS and CTS. This gives you hardware flow control, but at the expense of Carrier Detect. Thus to use C-Kermit with a Keyspan USB serial port, you must tell C-Kermit to: set modem type none ; (don't expect a modem) set carrier-watch off ; (ignore carrier signal) set port /dev/cu.USA19H3b1P1.1 ; (open the port) set flow rts/cts ; (this is the default) set speed 57600 ; (or whatever) connect ; (or DIAL or whatever) Use Ctrl-\C in the normal manner to escape back to the C-Kermit> prompt. Kermit can't pop back to its prompt automatically when Carrier drops because there is no Carrier signal in the physical interface. Here's a typical sequence for connecting to Cisco devices (using a mixture of command-line options and interactive commands at the prompt): $ ckermit -l /dev/cu.USA19H3b1P1.1 -b 9600 C-Kermit> set carrier-watch off C-Kermit> connect Instructions for the built-in modem (if any) remain to be written due to lack of knowledge. If you can contribute instructions, hints, or tips, please send them in. Also see: Mark Williams COHERENT was perhaps the first commercial Unix-based operating system for PCs, first appearing about 1983 or -84 for the PC/XT (?), and popular until about 1993, when Linux took over. C-Kermit, as of version 8.0, is still current for COHERENT 386 4.2 (i.e. only for i386 and above). Curses is included, but lots of other features are omitted due to lack of the appropriate OS features, APIs, libraries, hardware, or just space: e.g. TCP/IP, floating-point arithmetic, learned scripts. Earlier versions of COHERENT ran on 8086 and 80286, but these are to small to build or run C-Kermit, but G-Kermit should be OK (as might be ancient versions of C-Kermit). You can actually build a version with floating point support -- just take -DNOFLOAT out of CFLAGS and add -lm to LIBS; NOFLOAT is the default because COHERENT tends to run on old PCs that don't have floating-point hardware. You can also add "-f" to CFLAGS to have it link in the floating-point emulation library. Also I'm not sure why -DNOLEARN is included, since it depends on select(), which COHERENT has. [/usr/olga] C-Kermit> (In C-Kermit 7.0 the square braces were replaced by round parentheses to avoid conflicts with ISO 646 national character sets.) If that directory is on an NFS-mounted disk, and NFS stops working or the disk becomes unavailable, C-Kermit will hang waiting for NFS and/or the disk to come back. Whether you can interrupt C-Kermit when it is hung this way depends on the specific OS. Kermit has called the operating systems's getcwd() function, and is waiting for it to return. Some versions of Unix (e.g. HP-UX 9.x) allow this function to be interrupted with SIGINT (Ctrl-C), others (such as HP-UX 8.x) do not. To avoid this effect, you can always use SET PROMPT to change your prompt to something that does not involve calling getcwd(), but if NFS is not responding, C-Kermit will still hang any time you give a command that refers to an NFS-mounted directory. Also note that in some cases, the uninterruptibility of NFS-dependent system or library calls is considered a bug, and sometimes there are patches. For HP-UX, for example: replaced by: HP-UX 10.20 libc PHCO_8764 PHCO_14891/PHCO_16723 HP-UX 10.10 libc PHCO_8763 PHCO_14254/PHCO_16722 HP-UX 9.x libc PHCO_7747 S700 PHCO_13095 HP-UX 9.x libc PHCO_6779 S800 PHCO_11162 The same can apply to any other environment in which the user's session is captured, monitored, recorded, or manipulated. Examples include the 'script' program (for making a typescript of a session), the Computronics PEEK package and pksh (at least versions of it prior to 1.9K), and so on. You might try the following -- what we call "doomsday Kermit" -- settings to push packets through even the densest and most obstructive connections, such as "screen" and "splitvt" (and certain kinds of 3270 protocol emulators): Give these commands to BOTH Kermit programs: SET FLOW NONE SET CONTROL PREFIX ALL SET RECEIVE PACKET-LENGTH 70 SET RECEIVE START 62 SET SEND START 62 SET SEND PAUSE 100 SET BLOCK B If it works, it will be slow. C-Kermit> DATE 20011028 05:01:02 GMT ; EDT 20011028 01:01:02 C-Kermit> DATE 20011028 06:01:02 GMT ; EST 20011028 01:01:02 C-Kermit> but the implicit change in timezone offset is not recognized: C-Kermit> echo \fdiffdate(20011028 05:01:02 GMT, 20011028 06:01:02 GMT) +0:00 C-Kermit>Date/time arithmetic, offsets, delta times, and timezone support are new to C-Kermit 8.0, and might be expected to evolve and improve in subsequent releases. On some platforms, files downloaded with HTTP receive the current timestamp, rather than the HTTP "Last Modified" time (this can be fixed by including utime.h, e.g. in SunOS and Tru64...). SSH and PTY commands can fail if (a) all pseudoterminals are in use; or (b) you do not have read/write access to the pseudoterminal that was assigned. An example of (b) was reported with the Zipslack Slackware Linux distribution, in which the pseudoterminals were created with crw-r--r-- permission, instead of crw-rw-rw-. C-Kermit's initialization file for Unix is .kermrc (lowercase, starts with period) in your home directory, unless Kermit was built with the system-wide initialization-file option (see the C-Kermit for Unix Installation Instructions). C-Kermit identifies your home directory based on the environment variable, HOME. Most Unix systems set this variable automatically when you log in. If C-Kermit can't find your initialization file, check your HOME variable: echo $HOME (at the Unix prompt) or: echo \$(HOME) (at the C-Kermit prompt) If HOME is not defined, or is defined incorrectly, add the appropriate definition to your Unix .profile or .login file, depending on your shell: setenv HOME full-pathname-of-your-home-directory (C-Shell, .login file) or: HOME=full-pathname-of-your-home-directory (sh, ksh, .profile file) export HOME NOTE: Various other operations depend on the correct definition of HOME. These include the "tilde-expansion" feature, which allows you to refer to your home directory as "~" in filenames used in C-Kermit commands, e.g.: send ~/.kermrc as well as the \v(home) variable. Prior to version 5A(190), C-Kermit would look for its initialization file in the current directory if it was not found in the home directory. This feature was removed from 5A(190) because it was a security risk. Some people, however, liked this behavior and had .kermrc files in all their directories that would set up things appropriately for the files therein. If you want this behavior, you can accomplish it in various ways, for example: alias kd="kermit -Y ./.kermrc" take ./.kermrc Suppose you need to pass a password from the Unix command line to a C-Kermit script program, in such a way that it does not show up in "ps" or "w" listings. Here is a method (not guaranteed to be 100% secure, but definitely more secure than the more obvious methods): echo mypassword | kermit myscript The "myscript" file contains all the commands that need to be executed during the Kermit session, up to and including EXIT, and also includes an ASK or ASKQ command to read the password from standard input, which has been piped in from the Unix 'echo' command, but it must not include a CONNECT command. Only "kermit myscript" shows up in the ps listing. Version-7 based Unix implementations, including 4.3 BSD and earlier and Unix systems based upon BSD, use a 4-bit field to record a serial device's terminal speed. This leaves room for 16 speeds, of which the first 14 are normally: 0, 50, 75, 110, 134.5, 150, 200, 300, 600, 1200, 1800, 2400, 4800, and 9600 The remaining two are usually called EXTA and EXTB, and are defined by the particular Unix implementation. C-Kermit determines which speeds are available on your system based on whether symbols for them are defined in your terminal device header files. EXTA is generally assumed to be 19200 and EXTB 38400, but these assumptions might be wrong, or they might not apply to a particular device that does not support these speeds. Presumably, if you try to set a speed that is not legal on a particular device, the driver will return an error, but this can not be guaranteed. On these systems, it is usually not possible to select a speed of 14400 bps for use with V.32bis modems. In that case, use 19200 or 38400 bps, configure your modem to lock its interface speed and to use RTS/CTS flow control, and tell C-Kermit to SET FLOW RTS/CTS and SET DIAL SPEED-MATCHING OFF. The situation is similar, but different, in System V. SVID Third Edition lists the same speeds, 0 through 38400. Some versions of Unix, and/or terminal device drivers that come with certain third-party add-in high-speed serial communication interfaces, use the low "baud rates" to stand for higher ones. For example, SET SPEED 50 gets you 57600 bps; SET SPEED 75 gets you 76800; SET SPEED 110 gets 115200. SCO ODT 3.0 is an example where a "baud-rate-table patch" can be applied that can rotate the tty driver baud rate table such that 600=57600 and 1800=115k baud. Similarly for Digiboard multiport/portservers, which have a "fastbaud" setting that does this. Linux has a "setserial" command that can do it, etc. More modern Unixes support POSIX-based speed setting, in which the selection of speeds is not limited by a 4-bit field. C-Kermit 6.1 incorporates a new mechanism for finding out (at compile time) which serial speeds are supported by the operating system that does not involve editing of source code by hand; on systems like Solaris 5.1, IRIX 6.2, and SCO OSR5.0.4, "set speed ?" will list speeds up to 460800 or 921600. In C-Kermit 7.0 and later: When Kermit is given a "set speed" command for a particular device, the underlying system service is called to set the speed; its return code is checked and the SET SPEED command fails if the return code indicates failure. Regardless of the system service return status, the device's speed is then read back and if it does not match the speed that was requested, an error message is printed and the command fails. Even when the command succeeds, this does not guarantee successful operation at a particular speed, especially a high one. That depends on electricity, information theory, etc. How long is the cable, what is its capacitance, how well is it shielded, etc, not to mention that every connection has two ends and its success depends on both of them. (With the obvious caveats about internal modems, is the cable really connected, interrupt conflicts, etc etc etc). Note, in particular, that there is a certain threshold above which modems can not "autobaud" -- i.e. detect the serial interface speed when you type AT (or whatever else the modem's recognition sequence might be). Such modems need to be engaged at a lower speed (say 2400 or 9600 or even 115200 -- any speed below their autobaud threshold) and then must be given a modem-specific command (which can be found in the modem manual) to change their interface speed to the desired higher speed, and then the software must also be told to change to the new, higher speed. For additional information, read Section 9.5 of the Installation Instructions, plus any platform-specific notes in Section 3 above. Similarly, if you give a SET MODEM TYPE HAYES (or USR, or any other modem type besides DIRECT, NONE, or UNKNOWN) and then SET LINE to an empty port, the subsequent close (implicit or explicit) is liable to hang or even crash (through no fault of Kermit's -- the hanging or crashing is inside a system call such as cfsetospeed() or close()). The SET CARRIER-WATCH command works as advertised only if the underlying operating system and device drivers support this feature; in particular only if a read() operation returns immediately with an error code if the carrier signal goes away or, failing that, if C-Kermit can obtain the modem signals from the device driver (you can tell by giving a "set line" command to a serial device, and then a "show communications" command -- if modem signals are not listed, C-Kermit won't be able to detect carrier loss, the WAIT command will not work, etc). Of course, the device itself (e.g. modem) must be configured appropriately and the cables convey the carrier and other needed signals, etc. If you dial out from Unix system, but then notice a lot of weird character strings being stuck into your session at random times (especially if they look like +++ATQ0H0 or login banners or prompts), that means that getty is also trying to control the same device. You'll need to dial out on a device that is not waiting for a login, or else disable getty on the device. As of version 7.0, C-Kermit makes explicit checks for the Carrier Detect signal, and so catches hung-up connections much better than 6.0 and earlier. However, it still can not be guaranteed to catch every ever CD on-to-off transition. For example, when the HP-UX version of C-Kermit is in CONNECT mode on a dialed connection and CARRIER-WATCH ON or AUTO, and you turn off the modem, HP-UX is stuck in a read() that never returns. (C-Kermit does not pop back to its prompt automatically, but you can still escape back.) If, on the other hand, you log out from the remote system, and it hangs up, and CD drops on the local modem, C-Kermit detects this and pops back to the prompt as it should. (Evidently there can be a difference between CD and DSR turning off at the same time, versus CD turning off while DSR stays on; experimentation with &S0/&S1/&S2 on your modem might produce the desired results). When Unix C-Kermit exits, it closes (and must close) the communications device. If you were dialed out, this will most likely hang up the connection. If you want to get out of Kermit and still use Kermit's communication device, you have several choices: If you are having trouble dialing: Make sure your dialout line is correctly configured for dialing out (as opposed to login). The method for doing this is different for each kind of Unix system. Consult your system documentation for configuring lines for dialing out (for example, Sun SparcStation IPC users should read the section "Setting up Modem Software" in the Desktop SPARC Sun System & Network Manager's Guide; HP-9000 workstation users should consult the manual Configuring HP-UX for Peripherals, etc). Symptom: DIAL works, but a subsequent CONNECT command does not. Diagnosis: the modem is not asserting Carrier Detect (CD) after the connection is made, or the cable does not convey the CD signal. Cure: Reconfigure the modem, replace the cable. Workaround: SET CARRIER OFF (at least in System-V based Unix versions). For Berkeley-Unix-based systems (4.3BSD and earlier), Kermit includes code to use LPASS8 mode when parity is none, which is supposed to allow 8-bit data and Xon/Xoff flow control at the same time. However, as of edit 174, this code is entirely disabled because it is unreliable: even though the host operating system might (or might not) support LPASS8 mode correctly, the host access protocols (terminal servers, telnet, rlogin, etc) generally have no way of finding out about it and therefore render it ineffective, causing file transfer failures. So as of edit 174, Kermit once again uses rawmode for 8-bit data, and so there is no Xon/Xoff flow control during file transfer or terminal emulation in the Berkeley-based versions (4.3 and earlier, not 4.4). Also on Berkeley-based systems (4.3 and earlier), there is apparently no way to configure a dialout line for proper carrier handling, i.e. ignore carrier during dialing, require carrier thereafter, get a fatal error on any attempt to read from the device after carrier drops (this is handled nicely in System V by manipulation of the CLOCAL flag). The symptom is that carrier loss does not make C-Kermit pop back to the prompt automatically. This is evident on the NeXT, for example, but not on SunOS, which supports the CLOCAL flag. This is not a Kermit problem, but a limitation of the underlying operating system. For example, the cu program on the NeXT doesn't notice carrier loss either, whereas cu on the Sun does. On certain AT&T Unix systems equipped with AT&T modems, DIAL and HANGUP don't work right. Workarounds: (1) SET DIAL HANGUP OFF before attempting to dial; (2) If HANGUP doesn't work, SET LINE, and then SET LINE <device> to totally close and reopen the device. If all else fails, SET CARRIER OFF. C-Kermit does not contain any particular support for AT&T DataKit devices. You can use Kermit software to dial in to a DataKit line, but C-Kermit does not contain the specialized code required to dial out from a DataKit line. If the Unix system is connected to DataKit via serial ports, dialout should work normally (e.g. set line /dev/ttym1, set speed 19200, connect, and then see the DESTINATION: prompt, from which you can connect to another computer on the DataKit network or to an outgoing modem pool, etc). But if the Unix system is connected to the DataKit network through the special DataKit interface board, then SET LINE to a DataKit pseudodevice (such as /dev/dk031t) will not work (you must use the DataKit "dk" or "dkcu" program instead). In C-Kermit 7.0 and later, you can make Kermit connections "though" dk or dkcu using "set line /pty". In some BSD-based Unix C-Kermit versions, SET LINE to a port that has nothing plugged in to it with SET CARRIER ON will hang the program (as it should), but it can't be interrupted with Ctrl-C. The interrupt trap is correctly armed, but apparently the Unix open() call cannot be interrupted in this case. When SET CARRIER is OFF or AUTO, the SET LINE will eventually return, but then the program hangs (uninterruptibly) when the EXIT or QUIT command (or, presumably, another SET LINE command) is given. The latter is probably because of the attempt to hang up the modem. (In edit 169, a timeout alarm was placed around this operation.) With SET DIAL HANGUP OFF in effect, the DIAL command might work only once, but not again on the same device. In that case, give a CLOSE command to close the device, and then another SET LINE command to re-open the same device. Or rebuild your version of Kermit with the -DCLSOPN compile-time switch. The DIAL command says "To cancel: Type your interrupt character (normally Ctrl-C)." This is just one example of where program messages and documentation assume your interrupt character is Ctrl-C. But it might be something else. In most (but not necessarily all) cases, the character referred to is the one that generates the SIGINT signal. If Ctrl-C doesn't act as an interrupt character for you, type the Unix command "stty -a" or "stty all" or "stty everything" to see what your interrupt character is. (Kermit could be made to find out what the interrupt character is, but this would require a lot of platform-dependent coding and #ifdefs, and a new routine and interface between the platform-dependent and platform-independent parts of the program.) In general, the hangup operation on a serial communication device is prone to failure. C-Kermit tries to support many, many different kinds of computers, and there seems to be no portable method for hanging up a modem connection (i.e. turning off the RS-232 DTR signal and then turning it back on again). If HANGUP, DIAL, and/or Ctrl-\H do not work for you, and you are a programmer, look at the tthang() function in ckutio.c and see if you can add code to make it work correctly for your system, and send the code to the address above. (NOTE: This problem has been largely sidestepped as of edit 188, in which Kermit first attempts to hang up the modem by "escaping back" via +++ and then giving the modem's hangup command, e.g. ATH0, when DIAL MODEM-HANGUP is ON, which is the default setting.) Even when Kermit's modem-control software is configured correctly for your computer, it can only work right if your modem is also configured to assert the CD signal when it is connected to the remote modem and to hang up the connection when your computer drops the DTR signal. So before deciding Kermit doesn't work with your modem, check your modem configuration AND the cable (if any) connecting your modem to the computer -- it should be a straight-through modem cable conducting the signals FG, SG, TD, RD, RTS, CTS, DSR, DTR, CD, and RI. Many Unix systems keep aliases for dialout devices; for example, /dev/acu might be an alias for /dev/tty00. But most of these Unix systems also use UUCP lockfile conventions that do not take this aliasing into account, so if one user assigns (e.g.) /dev/acu, then another user can still assign the same device by referring to its other name. This is not a Kermit problem -- Kermit must follow the lockfile conventions used by the vendor-supplied software (cu, tip, uucp). The SET FLOW-CONTROL KEEP option should be given *before* any communication (dialing, terminal emulation, file transfer, INPUT/OUTPUT/TRANSMIT, etc) is attempted, if you want C-Kermit to use all of the device's preexisting flow-control related settings. The default flow-control setting is XON/XOFF, and it will take effect when the first communication-related command is given, and a subsequent SET FLOW KEEP command will not necessarily know how to restore *all* of the device's original flow-control settings. If file transfer does not work through a host to which you have rlogin'd, use "rlogin -8" rather than "rlogin". If that doesn't work, tell both Kermit programs to "set parity space". The Encore TELNET server does not allow long bursts of input. When you have a TELNET connection to an Encore, tell C-Kermit on the Encore to SET RECEIVE PACKET-LENGTH 200 or thereabouts. SET FLOW RTS/CTS is available in Unix C-Kermit only when the underlying operating system provides an Application Program Interface (API) for turning this feature on and off under program control, which turns out to be a rather rare feature among Unix systems. To see if your Unix C-Kermit version supports hardware flow control, type "set flow ?" at the C-Kermit prompt, and look for "rts/cts" among the options. Other common situations include: System V R4 based Unixes are supposed to supply a <termiox.h> file, which gives Kermit the necessary interface to command the terminal driver to enable/disable hardware flow control. Unfortunately, but predictably, many implementations of SVR4 whimsically place this file in /usr/include/sys rather than /usr/include (where SVID clearly specifies it should be; see SVID, Third Edition, V1, termiox(BA_DEV). Thus if you build C-Kermit with any of the makefile entries that contain -DTERMIOX or -DSTERMIOX (the latter to select <sys/termiox.h>), C-Kermit will have "set flow rts/cts" and possibly other hardware flow-control related commands. BUT... That does not necessarily mean that they will work. In some cases, the underlying functions are simply not coded into the operating system. WARNING: When hardware flow control is available, and you enable in Kermit on a device that is not receiving the CTS signal, Kermit can hang waiting for CTS to come up. This is most easily seen when the local serial port has nothing plugged in to it, or is connected to an external modem that is powered off. C-Kermit is not a terminal emulator. Refer to page 147 of Using C-Kermit, 2nd Edition: "Most versions of C-Kermit -- Unix, VMS, AOS/VS, VOS, etc -- provide terminal connection without emulation. These versions act as a 'semitransparent pipe' between the remote computer and your terminal, terminal emulator, console driver, or window, which in turn emulates (or is) a specific kind of terminal." The environment in which you run C-Kermit is up to you. If you are an X Windows user, you should be aware of an alternative to xterm that supports VT220 emulation, from Thomas E. Dickey: Unix C-Kermit's SET KEY command currently can not be used with keys that generate "wide" scan codes or multibyte sequences, such as workstation function or arrow keys, because Unix C-Kermit does not have direct access to the keyboard. However, many Unix workstations and/or console drivers provide their own key mapping feature. With xterm, for example, you can use 'xmodmap' ("man xmodmap" for details); here is an xterm mapping to map the Sun keyboard to DEC VT200 values for use with VT-terminal oriented applications like VMS EVE: Users of Linux consoles can use loadkeys ("man dumpkeys loadkeys keytables" for details. The format used by loadkeys is compatible with that used by Xmodmap, although it is not definitely certain that the keycodes are compatible for different keyboard types (e.g. Sun vs HP vs PC, etc). On most platforms, C-Kermit can not handle files longer than 231 or 232 bytes long, because it uses the traditional file i/o APIs that use 32-bit words to represent the file size. To accommodate longer files, we would have to switch to a new and different API. Unfortunately, each platform has a different one, a nightmare to handle in portable code. The C-Kermit file code was written in the days long before files longer than 2GB were supported or even contemplated in the operating systems where C-Kermit ran. If uploads (or downloads) fail immediately, give the CAUTIOUS command to Kermit and try again. If they still fail, then try SET PREFIXING ALL. If they still fail, try SET PARITY SPACE. If they still fail, try ROBUST. If reception (particularly of large files and/or binary files) begins successfully but then fail consistently after a certain amount of bytes have been sent, check: If none of these seem to explain it, then the problem is not size related, but reflects some clash between the file contents and the characteristics of the connection, in which case follow the instructions in the first paragraph of this section. Suppose two copies of Kermit are receiving files into the same directory, and the files have the same name, e.g. "foo.bar". Whichever one starts first opens an output file called "foo.bar". The second one sees there is already a foo.bar file, and so renames the existing foo.bar to foo.bar.~1~ (or whatever). When the first file has been received completely, Kermit goes to change its modification time and permissions to those given by the file sender in the Attribute packet. But in Unix, the APIs for doing this take a filename, not a file descriptor. Since the first Kermit's file has been renamed, and the second Kermit is using the original name, the first Kermit changes the modtime and permissions of the second Kermit's file, not its own. Although there might be a way to work around this in the code, e.g. using inode numbers to keep track of which file is which, this would be tricky and most likely not very portable. It's better to set up your application to prevent such things from happening, which is easy enough using the script language, filename templates, etc. Suppose you start C-Kermit with a command-line argument to send or receive a file (e.g. "kermit -r") and then type Ctrl-\c immediately afterwards to escape back and initiate the other end of the transfer, BUT your local Kermit's escape character is not Ctrl-\. In this case, the local Kermit passes the Ctrl-\ to the remote system, and if this is Unix, Ctrl-\ is likely to be its SIGQUIT character, which causes the current program to halt and dump core. Well, just about the first thing C-Kermit does when it starts is to disable the SIGQUIT signal. However, it is still possible for SIGQUIT to cause Kermit to quit and dump core if it is delivered while Kermit is being loaded or started, before the signal can be disabled. There's nothing Kermit itself can do about this, but you can prevent it from happening by disabling SIGQUIT in your Unix session. The command is usually something like: stty quit undef Unix C-Kermit does not reject incoming files on the basis of size. There appears to be no good (reliable, portable) way to determine in advance how much disk space is available, either on the device, or (when quotas or other limits are involved) to the user. Unix C-Kermit discards all carriage returns from incoming files when in text mode. If C-Kermit has problems creating files in writable directories when it is installed setuid or setgid on BSD-based versions of Unix such as NeXTSTEP 3.0, it probably needs to be rebuilt with the -DSW_ACC_ID compilation switch. If you SET FILE DISPLAY FULLSCREEN, and C-Kermit complains "Sorry, terminal type not supported", it means that the terminal library (termcap or termlib) that C-Kermit was built with does not know about a terminal whose name is the current value of your TERM environment variable. If this happens, but you want to have the fullscreen file transfer display, EXIT from C-Kermit and set a Unix terminal type from among the supported values that is also supported by your terminal emulator, or else have an entry for your terminal type added to the system termcap and/or terminfo database. If you attempt to suspend C-Kermit during local-mode file transfer and then continue it in the background (via bg), it will block for "tty output" if you are using the FULLSCREEN file transfer display. This is apparently a problem with curses. Moving a local-mode file transfer back and forth between foreground and background works correctly, however, with the SERIAL, CRT, BRIEF, or NONE file transfer displays. If C-Kermit's command parser no longer echoes, or otherwise acts strangely, after returning from a file transfer with the fullscreen (curses) display, and the curses library for your version of Unix includes the newterm() function, then try rebuilding your version of C-Kermit with -DCK_NEWTERM. Similarly if it echoes doubly, which might even happen during a subsequent CONNECT session. If rebuilding with -DCK_NEWTERM doesn't fix it, then there is something very strange about your system's curses library, and you should probably not use it. Tell C-Kermit to SET FILE DISPLAY CRT, BRIEF, or anything else other than FULLSCREEN, and/or rebuild without -DCK_CURSES, and without linking with (termlib and) curses. Note: This problem seemed to have escalated in C-Kermit 7.0, and -DCK_NEWTERM had to be added to many builds that previously worked without it: Linux, AIX 4.1, DG/UX, etc. In the Linux case, it is obviously because of changes in the (n)curses library; the cause in the other cases is not known. C-Kermit creates backup-file names (such as "oofa.txt.~1~") based on its knowledge of the maximum filename length on the platform where it is running, which is learned at compile time, based on MAXNAMLEN or equivalent symbols from the system header files. But suppose C-Kermit is receiving files on a Unix platform that supports long filenames, but the incoming files are being stored on an NFS-mounted file system that supports only short names. NFS maps the external system to the local APIs, so C-Kermit has no way of knowing that long names will be truncated. Or that C-Kermit is running on a version of Unix that supports both long-name and short-name file systems simultaneously (such as HP-UX 7.00). This can cause unexpected behavior when creating backup files, or worse. For example, you are sending a group of files whose names are differentiated only by characters past the point at which they would be truncated, each file will overwrite the previous one upon arrival. SECTION CONTENTS 11.1. C-Kermit as an External Protocol 11.2. Invoking External Protocols from C-Kermit Unix C-Kermit can be used in conjunction with other communications software in various ways. C-Kermit can be invoked from another communications program as an "external protocol", and C-Kermit can also invoke other communication software to perform external protocols. This sort of operation makes sense only when you are dialing out from your Unix system (or making a network connection from it). If the Unix system is the one you have dialed in to, you don't need any of these tricks. Just run the desired software on your Unix system instead of Kermit. When dialing out from a Unix system, the difficulty is getting two programs to share the same communication device in spite of the Unix UUCP lockfile mechanism, which would normally prevent any sharing, and preventing the external protocol from closing (and therefore hanging up) the device when it exits back to the program that invoked it. (This section deleted; see Using C-Kermit, 2nd Ed, Chapter 14.) "pcomm" is a general-purpose terminal program that provides file transfer capabilities itself (X- and YMODEM variations) and the ability to call on external programs to do file transfers (ZMODEM and Kermit, for example). You can tell pcomm the command to send or receive a file with an external protocol: Send Receive ZMODEM sz filename rz Kermit kermit -s filename kermit -r pcomm runs external programs for file transfer by making stdin and stdout point to the modem port, and then exec-ing "/bin/sh -c xxx" (where xxx is the appropriate command). However, C-Kermit does not treat stdin and stdout as the communication device unless you instruct it: Send Receive Kermit kermit -l 0 -s filename kermit -l 0 -r The "-l 0" option means to use file descriptor 0 for the communication device. In general, any program can pass any open file descriptor to C-Kermit for the communication device in the "-l" command-line option. When Kermit is given a number as the argument to the "-l" option, it simply uses it as a file descriptor, and it does not attempt to close it upon exit. Here's another example, for Seyon (a Linux communication program). First try the technique above. If that works, fine; otherwise... If Seyon does not give you a way to access and pass along the file descriptor, but it starts up the Kermit program with its standard i/o redirected to its (Seyon's) communications file descriptor, you can also experiment with the following method, which worked here in brief tests on SunOS. Instead of having Seyon use "kermit -r" or "kermit -s filename" as its Kermit protocol commands, use something like this (examples assume C-Kermit 6.0): kermit -YqQl 0 -r <-- to receive kermit -YqQl 0 -s filename(s) <-- to send one or more files kermit -YqQF 0 -r <-- to receive kermit -YqQF 0 -s filename(s) <-- to send one or more files Command line options: Y - skip executing the init file Q - use fast file transfer settings (default in 8.0) l 0 - transfer files using file descriptor 0 for a serial connection F 0 - transfer files using file descriptor 0 for a Telnet connection q - quiet - no messages r - receive s - send [ Top ] [ Contents ] [ Section Contents ] [ Previous ] (This section is obsolete, but not totally useless. See Chapter 14 of Using C-Kermit, 2nd Edition). After you have opened a communication link with C-Kermit's SET LINE (SET PORT) or SET HOST (TELNET) command, C-Kermit makes its file descriptor available to you in the \v(ttyfd) variable so you can pass it along to other programs that you RUN from C-Kermit. Here, for example, C-Kermit runs itself as an external protocol: C-Kermit>set modem type hayes C-Kermit>set line /dev/acu C-Kermit>set speed 2400 C-Kermit>dial 7654321 Call complete. C-Kermit>echo \v(ttyfd) 3 C-Kermit>run kermit -l \v(ttyfd) Other programs that accept open file descriptors on the command line can be started in the same way. You can also use your shell's i/o redirection facilities to assign C-Kermit's open file descriptor (ttyfd) to stdin or stdout. For example, old versions of the Unix ZMODEM programs, sz and rz, when invoked as external protocols, expect to find the communication device assigned to stdin and stdout with no option for specifying any other file descriptor on the sz or rz command line. However, you can still invoke sz and rz as exterior protocols from C-Kermit if your current shell ($SHELL variable) is ksh (the Korn shell) or bash (the Bourne-Again shell), which allows assignment of arbitrary file descriptors to stdin and stdout: C-Kermit> run rz <&\v(ttyfd) >&\v(ttyfd) or: C-Kermit> run sz oofa.zip <&\v(ttyfd) >&\v(ttyfd) In version 5A(190) and later, you can use C-Kermit's REDIRECT command, if it is available in your version of C-Kermit, to accomplish the same thing without going through the shell: C-Kermit> redirect rz or: C-Kermit> redirect sz oofa.zip A complete set of rz,sz,rb,sb,rx,sx macros for Unix C-Kermit is defined in the file ckurzsz.ini. It automatically chooses the best redirection method (but is redundant since C-Kermit 6.0, which now has built-in support for external protocols via its SET PROTOCOL command). Note that external protocols can be used on C-Kermit SET LINE or SET HOST connections only if they operate through standard input and standard output. If they open their own connections, Kermit can't redirect them over its own connection. As of version 7.0, C-Kermit supports a wide range of security options for authentication and encryption: Kerberos 4, Kerberos 5 / GSSAPI, SSL/TLS, and SRP. See the separate security document for details. Date: Thu, 12 Mar 92 1:59:25 MEZ From: Walter Mecky <walter@rent-a-guru.de> Subject: Help.Unix.sw To: svr4@pcsbst.pcs.com, source@usl.com PRODUCT: Unix RELEASE: Dell SVR4 V2.1 (is USL V3.0) MACHINE: AT-386 PATHNAME: /usr/lib/libc.so.1 /usr/ccs/lib/libc.a ABSTRACT: Function ttyname() does not close its file descriptor DESCRIPTION: ttyname(3C) opens /dev but never closes it. So if it is called often enough the open(2) in ttyname() fails. Because the broken ttyname() is in the shared lib too all programs using it can fail if they call it often enough. One important program is uucico which calls ttyname for every file it transfers. Here is a little test program if your system has the bug: #include <stdlib.h> #include <stdio.h> main() { int i = 0; while (ttyname(0) != NULL) i++; perror("ttyname"); printf("i=%d\n", i); } If this program runs longer than some seconds you don't have the bug. WORKAROUND: None FIX: Very easy if you have source code. Another user reports some more explicit symptoms and recoveries: > What happens is when invoking ckermit we get one of the following > error messages: > You must set line > Not a tty > No more processes. > One of the following three actions clears the problem: > shutdown -y -g0 -i6 > kill -9 the ttymon with the highest PID > Invoke sysadm and disable then enable the line you want to use. > Turning off respawn of sac -t 300 and going to getty's and uugetty's > does not help. > > Also C-Kermit reports "?timed out closing /dev/ttyxx". > If this happens all is well. ------------------------------(Note: the following problem also occurs on SGI and probably many other Unix systems): From: James Spath <spath@jhunix.hcf.jhu.edu> To: Info-Kermit-Request@cunixf.cc.columbia.edu Date: Wed, 9 Sep 1992 20:20:28 -0400 Subject: C-Kermit vs uugetty (or init) on Sperry 5000 We have successfully compiled the above release on a Unisys/Sperry 5000/95. We used the sys5r3 option, rather than sys5r2 since we have VR3 running on our system. In order to allow dialout access to non-superusers, we had to do "chmod 666 /dev/tty###, where it had been -rw--w--w- (owned by uucp), and to do "chmod +w /usr/spool/locks". We have done text and binary file transfers through local and remote connections. The problem concerning uucp ownership and permissions is worse than I thought at first. Apparently init or uugetty changes the file permissions after each session. So I wrote the following C program to open a set of requested tty lines. I run this for any required outgoing line prior to a Kermit session. ------ cut here ------- /* opentty.c -- force allow read on tty lines for modem i/o */ /* idea from: restrict.c -- System 5 Admin book Thomas/Farrow p. 605 */ /* /jes jim spath {spath@jhunix.hcj.jhu.edu } */ /* 08-Sep-92 NO COPYRIGHT. */ /* this must be suid to open other tty lines */ /* #define DEBUG */ #define TTY "/dev/tty" #define LOK "/usr/spool/locks/LCK..tty" #include <stdio.h> /* allowable lines: */ #define TOTAL_LINES 3 static char allowable[TOTAL_LINES][4] = { "200", "201", "300" }; static int total=TOTAL_LINES; int allow; /* states: */ #define TTY_UNDEF 0 #define TTY_LOCK 1 #define TTY_OKAY 2 main(argc, argv) int argc; char *argv[]; { char device[512]; char lockdev[512]; int i; if (argc == 1) { fprintf(stderr, "usage: open 200 [...]\n"); } while (--argc > 0 && (*++argv) != NULL ) { #ifdef DEBUG fprintf(stderr, "TRYING: %s%s\n", TTY, *argv); #endif sprintf(device, "%s%s", TTY, *argv); sprintf(lockdev, "%s%s", LOK, *argv); allow = TTY_UNDEF; i = 0; while (i <= total) { /* look at all defined lines */ #ifdef DEBUG fprintf(stderr, "LOCKFILE? %s?\n", lockdev); #endif if (access(lockdev, 00) == 0) { allow=TTY_LOCK; break; } #ifdef DEBUG fprintf(stderr, "DOES:%s==%s?\n", allowable[i], *argv); #endif if (strcmp(allowable[i], *argv) == 0) allow=TTY_OKAY; i++; } #ifdef DEBUG fprintf(stderr, "allow=%d\n", allow); #endif switch (allow) { case TTY_UNDEF: fprintf (stderr, "open: not allowed on %s\n", *argv); break; case TTY_LOCK: fprintf (stderr, "open: device locked: %s\n", lockdev); break; case TTY_OKAY: /* attempt to change mode on device */ if (chmod (device, 00666) < 0) fprintf (stderr, "open: cannot chmod on %s\n", device); break; default: fprintf (stderr, "open: FAULT\n"); } } exit (0); } Unix versions, especially those for PCs (SCO, Unixware, etc) might be augmented by third-party communication-board drivers from Digiboard, Stallion, etc. These can sometimes complicate matters for Kermit considerably since Kermit has no way of knowing that it is going through a possibly nonstandard driver. Various examples are listed in the earlier sections of this document; search for Stallion, Digiboard, etc. Additionally: - World Wide Escalation Support, Stallion Technologies, Toowong QLD, support@stallion.oz.au. Later (December 1997, from the same source): [ Top ] [ Contents ] [ C-Kermit Home ] [ C-Kermit 8.0 Overview ] [ Kermit Home ]
http://www.columbia.edu/kermit/ckubwr.html
CC-MAIN-2014-42
refinedweb
29,881
62.48
On Thu, Jan 13, 2005 at 08:00:28PM +0100, Iago Rubio wrote: >. SYN cookies will not be used unless the SYN queue is full, if the queue is full the connection would be dropped if SYN cookies are not enabled. Using cookies lets you serve the majority of clients instead of none at all. The document you quoted says that SYN cookies should not be as a fallback facility when legitimate traffic is overwhelming the server. >From linux 2.4.24 net/ipv/tcp_ipv4.c: 1417 if (tcp_synq_is_full(sk) && !isn) { 1418 #ifdef CONFIG_SYN_COOKIES 1419 if (sysctl_tcp_syncookies) { 1420 want_cookie = 1; 1421 } else 1422 #endif 1423 goto drop; 1424 } Cheers, Oskari
http://www.redhat.com/archives/fedora-devel-list/2005-January/msg00484.html
CC-MAIN-2015-11
refinedweb
110
73.58
XML proprty file, printout order wayne morton Greenhorn Joined: May 17, 2012 Posts: 28 posted Oct 24, 2012 06:18:08 0 I am starting to play with XML files to store my data and i was wondering if someone could explain to me the logic behind the way it adds information to the file.e.g. i have this simple little code import java.io.*; import java.util.*; public class PropertyFileCreator { public PropertyFileCreator() { Properties prop = new Properties(); prop.setProperty("a", "one"); prop.setProperty("b", "two"); prop.setProperty("c", "three"); try { File propFile = new File("properties.xml"); FileOutputStream propStream = new FileOutputStream(propFile); Date now = new Date(); prop.storeToXML(propStream, "Created on " + now); } catch (IOException exception) { System.out.println("Error: " + exception.getMessage()); } } public static void main(String[] arguments) { PropertyFileCreator pfc = new PropertyFileCreator(); } } All works fine apart from the order it stores the information in the file.i.e. the XML file looks like this <?xml version="1.0" encoding="UTF-8" standalone="no"?> <!DOCTYPE properties SYSTEM ""> <properties> <comment>Created on Wed Oct 24 12:59:00 BST 2012</comment> <entry key="b">two</entry> <entry key="a">one</entry> <entry key="c">three</entry> </properties> Why does it store the data in that order and not a, b, c? I would prefer it did a ,b ,c rather than b, a, c, mainly for clarity reasons when i am checking it has stored the data properly and the first step towards that goal is understanding why it prints in the order it does. I guess my obvious next question is tips for ordering it or where to find information on how to order it? Paul Clapham Bartender Joined: Oct 14, 2005 Posts: 18563 8 I like... posted Oct 24, 2012 10:30:18 0 Because Properties extends Hashmap, which is an unordered map. And so when you extract data from a Properties object and write it to XML, the order of that data is undefined. Tips for ordering the data: the easiest way to deal with it is to stop wanting it. However if you absolutely must have your data in a specific order in the properties file, then don't use a Properties object. (And I'm going to move this post out of the Swing forum because it isn't related to Swing at all.) Paul Clapham Bartender Joined: Oct 14, 2005 Posts: 18563 8 I like... posted Oct 24, 2012 10:54:28 0 Here's another way to look at it: When you have a properties file, you have two choices. One choice is to maintain it with a text editor, in which case you can put the entries in any way you like. When you read them into a Properties object, they become unordered, but you don't care because the ordering means nothing to the application which uses the properties. Or you can maintain them with some application, which works with the Properties file and knows what order to put the entries in when it displays them to you. When the Properties object writes them out to the file, they become unordered, but you don't care because you're maintaining them with something which knows what it's doing. It's when you start using both ideas at the same time that you run into trouble with "unordered" properties. So just pick one of the two approaches and you won't care. wayne morton Greenhorn Joined: May 17, 2012 Posts: 28 posted Oct 25, 2012 02:47:28 0 Thanks for the reply Paul. It would seem for future reference that using Properties is not the ideal way so i will look at other ways for transferring data in future, for now having it ordered would have just been nice rather than any form of necessity. Thanks for moving the post also, i thought i put it in a different section than i did. I agree. Here's the link: subject: XML proprty file, printout order Similar Threads save properties file without affecting existing comments. Simple text file records using properties or ? Reading xml data Including property file in the jar java.io.FileNotFoundException: FileOutputStream error? All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter JForum | Paul Wheaton
http://www.coderanch.com/t/596033/java/java/XML-proprty-file-printout-order
CC-MAIN-2014-41
refinedweb
709
61.77
random-coderandom-code ContributionsContributions If you'd like to contribute to this project, check out this blog post to understand how to build and test the code locally. Need some code for your project? We've got you covered. Choose your language. Choose how much code. BÄM! You got code. Install packageInstall package npm install @whitep4nth3r/random-code Get random codeGet random code There are two functions available for you to use: import { getLanguages, generateRandomCode } from "@whitep4nth3r/random-code"; Use getLanguages() to return an Object with key/value pairs of available languages. Use generateRandomCode() to generate some random code with two optional parameters. - language (a key from getLanguages()) - lines (an integer of how many lines of code you require) If you do not provide these parameters, your result will be random. Here's an example request: const myRandomCode = generateRandomCode("js", 3); Here's an example response: { "code": "const replaceObject = () => {\n /* FIXME: For some reason this is causing the code below to error out? */\n const property = true;\n return 0;\n}", "lines": "3", "languageKey": "js", "languageValue": "JavaScript", "contributors": ["whitep4nth3r", "lukeocodes", "negue", "isabellabrookes"] } ProfitProfit And lol.
https://www.npmjs.com/package/@whitep4nth3r/random-code
CC-MAIN-2021-43
refinedweb
185
55.74