API Versioning in .NET Core
.NET Core is a great framework for building front-end and back-end applications. Read on to learn how to version the APIs your apps call using this framework.
In this post, we will see how to use different options for versioning in .NET Core API projects. Versioning APIs is very important and it should be implemented in any API project. Let's see how to achieve this in .NET Core.
Prerequisites:
- Visual Studio 2017 community edition, download here.
- .NET Core 2.0 SDK from here (I have written a post to install SDK here).
Create the API App Using a .NET Core 2.0 Template in VS 2017
Once you have all these installed, open Visual Studio 2017 -> Create New Project -> select ASP.NET Core Web Application:
Click on Ok and in the next window, select API as shown below:
Visual Studio will create a well-structured application for you.
Install the NuGet Package for API Versioning
The first step is to install the NuGet package for API Versioning.
Search with "Microsoft.AspNetCore.Mvc.Versioning" in the NuGet Package Manager and click on Install:
This NuGet package is a service API versioning library for Microsoft ASP.NET Core.
Changes in Startup Class
Once the NuGet package is installed, the next step is to add the API versioning service in the ConfigureServices method, as shown below:

services.AddApiVersioning(o =>
{
    o.ReportApiVersions = true;
    o.AssumeDefaultVersionWhenUnspecified = true;
    o.DefaultApiVersion = new ApiVersion(1, 0);
});
Some points here:
- The ReportApiVersions flag adds the supported API versions to the response header.
- The AssumeDefaultVersionWhenUnspecified flag sets the default version when the client has not specified one. Without this flag, an UnsupportedApiVersion exception occurs when the client does not specify a version.
- The DefaultApiVersion property sets the default API version (1.0 here).
Create Multiple Versions of the Sample API
Once the API versioning service is added, the next step is to create multiple versions of our Values API.
For now, just keep the GET method and remove the rest of the methods and create version 2 of the same API, as shown below:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

namespace ApiVersioningSampleApp.Controllers
{
    [ApiVersion("1.0")]
    [Route("api/Values")]
    public class ValuesV1Controller : Controller
    {
        // GET api/values
        [HttpGet]
        public IEnumerable<string> Get()
        {
            return new string[] { "Value1 from Version 1", "value2 from Version 1" };
        }
    }

    [ApiVersion("2.0")]
    [Route("api/Values")]
    public class ValuesV2Controller : Controller
    {
        // GET api/values
        [HttpGet]
        public IEnumerable<string> Get()
        {
            return new string[] { "value1 from Version 2", "value2 from Version 2" };
        }
    }
}
In the above code:
- We have applied the attribute [ApiVersion("1.0")] for Version 1.
- We have applied the attribute [ApiVersion("2.0")] for Version 2.
- We also changed the GET return values so we can tell which version is being called.
Run the application and you will see that the Version 1 API is called; because we did not specify a version, the default (1.0 in our case) is used.
There are several ways to specify the version of the API, discussed below.
Query String-Based Versioning
With this approach, you specify the version of the API in the query string. For example, to call version 2 of the Values API, the below call should work:
/api/values?api-version=2.0
URL-Based Versioning
Many people do not like query string-based patterns; in that case, we can implement URL-based versioning by changing the route as shown below:
[Route("api/{v:apiVersion}/Values")]
In such a case, the following call will return version 2 of the API:
/api/2.0/values
This approach is more readable.
HTTP Header-Based Versioning
If you do not wish to change the URL of the API, you can send the version in an HTTP header instead.
To enable this, a version reader needs to be added in the ConfigureServices method as shown below:
o.ApiVersionReader = new HeaderApiVersionReader("x-api-version");
Once you enable this, the query string approach will no longer work. If you wish to enable both, use the code below instead:
o.ApiVersionReader = new QueryStringOrHeaderApiVersionReader("x-api-version");
Once the API version reader is enabled, you can specify the API version when calling the API. For example, you can pass version 1.0 in the x-api-version header when calling the API from Postman.
Some Useful Features
Deprecating the Versions
Sometimes we need to deprecate a version of the API without removing it completely.
In such cases, we can set the Deprecated flag to true for that version, as shown below:

[ApiVersion("1.0", Deprecated = true)]
[Route("api/Values")]
public class ValuesV1Controller : Controller
{
    //// Code
}
This will not remove the version of the API, but it will return the list of deprecated versions in the api-deprecated-versions response header.
Assign the Versions Using Conventions
If you have lots of versions of the API, instead of putting the ApiVersion attribute on every controller, we can assign the versions using conventions.
In our case, we can add the convention for both versions as shown below:

o.Conventions.Controller<ValuesV1Controller>().HasApiVersion(new ApiVersion(1, 0));
o.Conventions.Controller<ValuesV2Controller>().HasApiVersion(new ApiVersion(2, 0));

This means we are no longer required to put the [ApiVersion] attribute on the controllers.
API Version Neutral
There might be a case where we need to opt out of versioning for some specific APIs, for example, health check APIs used for pinging. In such cases, we can opt out by adding the [ApiVersionNeutral] attribute as shown below (note that in ASP.NET Core the route is declared with [Route], not the old Web API [RoutePrefix]):

[ApiVersionNeutral]
[Route("api/[controller]")]
public class HealthCheckController : Controller
{
    //// Code
}
Other Features to Consider:
- Add MapToApiVersion to the attribute if you wish to apply versioning only to specific action methods instead of the whole controller. For example:

[HttpGet, MapToApiVersion("2.0")]
public IEnumerable<string> Get()
{
    return new string[] { "value1 from Version 2", "value2 from Version 2" };
}
- We can get the version information from the method HttpContext.GetRequestedApiVersion(); this is useful to check which version has been requested by the client.
- Version Advertisement can be used by each service to advertise the supported and deprecated API versions it knows about. This is generally used when the service API versions are split across hosted applications.
- We can allow clients to request a specific API version by media type. This option can be enabled by adding the below line to the API versioning options in the ConfigureServices method:
options => options.ApiVersionReader = new MediaTypeApiVersionReader();
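Tying the bullets above together, a versioned action can inspect the version the client asked for via HttpContext.GetRequestedApiVersion(). The GreetingsController below is a hypothetical example, not from the sample project:

```csharp
using Microsoft.AspNetCore.Mvc;

[ApiVersion("1.0")]
[ApiVersion("2.0")]
[Route("api/[controller]")]
public class GreetingsController : Controller
{
    [HttpGet]
    public IActionResult Get()
    {
        // Returns the version the client requested (or the default
        // when AssumeDefaultVersionWhenUnspecified is enabled).
        var requestedVersion = HttpContext.GetRequestedApiVersion();
        return Ok($"You requested version {requestedVersion}");
    }
}
```

Because the controller is decorated with both version attributes, the same action serves 1.0 and 2.0 and can branch on the requested version if the behavior needs to differ.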
Hope this helps.
You can find all my .NET Core posts here.
Published at DZone with permission of Neel Bhatt, DZone MVB. See the original article here.
Source: https://dzone.com/articles/api-versioning-in-net-core (CC-MAIN-2018-47)
Adding state management with Redux in a CRA + SSR project
Or how to initialize your Redux store on the server, then pick it up and hydrate it on the client.
This is part 2 of my CRA+SSR series:
- Upgrading a create-react-app project to SSR + code splitting
- Adding state management with Redux in a CRA + SSR project
I personally like Redux because it uses a single object to represent the entire state of the application. This single object is also composable, so you can split its management into smaller, independent bits. I also find it useful to have a one-way data flow, so when I click a button in one place, I trigger a chain of events, ending with the update of the UI based on the new state, maybe even in another place of the app, without having to pass props and execute callbacks all around.
If you’re not too familiar with Redux and how it works, I suggest you read this quick article for a bird's-eye view, or go into much more detail and even find out how it differs from other libraries.
What we’ll cover in this article:
- Adding Redux on the client
- Adding Redux on the server
- Rehydrating the client store from the server
First: Client side
Let’s install Redux and its helpers:
yarn add redux react-redux redux-thunk
We need to create a reducer in /src/store/appReducer.js, aka a pure function taking two arguments (the previous state, and an action/modifier object) and returning the new state as an immutable object.
const initialState = {
  message: null,
};

export const appReducer = (state = initialState, action) => {
  switch (action.type) {
    case 'SET_MESSAGE':
      return {
        ...state,
        message: action.message,
      };
    default:
      return state;
  }
};
We’re spreading the state object here to keep the old properties unchanged, and only replace what we need. In our case, this is not strictly needed, as we only have the message property, but in a larger application this is what you typically do.
Let’s also write an action creator, that is, a function that returns an action object. I usually like to keep things together, so we’ll add this in our reducer file. But feel free to create a separate file if you want to group things differently.
export const setMessage = messageText => ({ type: 'SET_MESSAGE', message: messageText });
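Because the reducer and action creator are pure functions with no React or Redux dependencies, you can exercise them standalone to convince yourself of the one-way data flow. A minimal sketch (the reducer and action creator from above are inlined so the snippet is self-contained):

```javascript
// Inlined copies of the reducer and action creator from above.
const initialState = { message: null };

const appReducer = (state = initialState, action) => {
  switch (action.type) {
    case 'SET_MESSAGE':
      return { ...state, message: action.message };
    default:
      return state;
  }
};

const setMessage = messageText => ({ type: 'SET_MESSAGE', message: messageText });

// Unknown actions fall through to default and return the previous state.
const before = appReducer(undefined, { type: '@@INIT' });
console.log(before.message); // null

// SET_MESSAGE produces a brand-new object; the old state is untouched.
const after = appReducer(before, setMessage('hello'));
console.log(after.message);  // 'hello'
console.log(before.message); // still null
```

This immutability is what lets Redux (and React) detect changes by reference comparison.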
Now we’ll create our store initializer in /src/store/configureStore.js.
import {
  createStore,
  combineReducers,
  compose,
  applyMiddleware,
} from 'redux';
import ReduxThunk from 'redux-thunk';
import { appReducer } from './appReducer';

// if you're using redux-thunk or other middlewares, add them here
const createStoreWithMiddleware = compose(applyMiddleware(
  ReduxThunk,
))(createStore);

const rootReducer = combineReducers({
  app: appReducer,
});

export default function configureStore(initialState = {}) {
  return createStoreWithMiddleware(rootReducer, initialState);
}
We’re wrapping our createStore in a function so we can pass the initial state when initializing. This will help us when hydrating the state from the server.
Now let’s use it in our app. Wrap the main App component in a Redux provider in /src/index.js:
import React from 'react';
import ReactDOM from 'react-dom';
import Loadable from 'react-loadable';
import { Provider as ReduxProvider } from 'react-redux';
import App from './App';
import configureStore from './store/configureStore';

const store = configureStore();

const AppBundle = (
  <ReduxProvider store={store}>
    <App />
  </ReduxProvider>
);

window.onload = () => {
  Loadable.preloadReady().then(() => {
    ReactDOM.hydrate(
      AppBundle,
      document.getElementById('root')
    );
  });
};
Next, we’ll display the message in our App. We’ll also set a default message on the client if the initial value is empty.
import React, { Component } from 'react';
import { connect } from 'react-redux';
import { setMessage } from './store/appReducer';

class App extends Component {
  componentDidMount() {
    if (!this.props.message) {
      this.props.updateMessage("Hi, I'm from client!");
    }
  }

  render() {
    return (
      <div className="App">
        {/* ... */}
        <p>
          Redux: { this.props.message }
        </p>
      </div>
    );
  }
}

export default connect(
  ({ app }) => ({
    message: app.message,
  }),
  dispatch => ({
    updateMessage: (txt) => dispatch(setMessage(txt)),
  })
)(App);
That’s it! Now run the app with yarn start and see the “Hi, I’m from client!” message displayed after the app loads.
Next: Server side
Remember our serverRenderer middleware which renders our app to a string? Let’s modify that a little bit. We’ll wrap it in another function, so we can pass the store from outside. We’ll also wrap our main App component in a Redux provider, just like on the client.
export default (store) => (req, res, next) => {
// ...
const html = ReactDOMServer.renderToString(
<ReduxProvider store={store}>
<App />
</ReduxProvider>
);
// ...
}
Now we need to initialize our store and pass it as a prop when using the renderer middleware in our router (in /server/index.js):
import serverRenderer from './middleware/renderer';
import configureStore from '../src/store/configureStore';

// ...

const store = configureStore();

router.use('^/$', serverRenderer(store));

// ...
In a real application, you will want to move this code into a controller, to decouple the logic of the app from the initialization of the Express server. You’ll also want some controller actions that hold more complex logic, maybe even based on the request URL. In fact, let’s do this now. We’ll move the code for the router initialization and write an index action that handles the Redux store initialization in /server/controllers/index.js:
import express from "express";
import serverRenderer from '../middleware/renderer';
import configureStore from '../../src/store/configureStore';
const router = express.Router();
const path = require("path");
const actionIndex = (req, res, next) => {
const store = configureStore();
serverRenderer(store)(req, res, next);
};
// root (/) should always serve our server rendered page
router.use('^/$', actionIndex);
// other static resources should just be served as they are
router.use(express.static(
path.resolve(__dirname, '..', '..', 'build'),
{ maxAge: '30d' },
));
export default router;
As we moved the code in a subdirectory, please make sure that in the route for static files you add an extra ‘..’, so path.resolve() will point to the right location.
Our action is just another middleware that will call the serverRenderer middleware after the Redux store has been initialized. We can even dispatch an action before calling the renderer.
import { setMessage } from '../../src/store/appReducer';

// ...

const actionIndex = (req, res, next) => {
  const store = configureStore();
  store.dispatch(setMessage("Hi, I'm from server!"));

  serverRenderer(store)(req, res, next);
};

// ...
Now we can clean our /server/index.js entry point:
import express from 'express';
import indexController from './controllers/index';

const app = express();

app.use(indexController);

// start the app
// ...
Build the app and start the node server. You should see the message rendered on the server before the client app initializes.
yarn build && node server/bootstrap.js
Finally: Rehydrate the client store from the server
As you may have already noticed, when the client app runs, the store on the client is still empty. As such, the message property will be empty, so the code in the componentDidMount will set our default message, overwriting the one rendered on the server. We need to sync the state on the client with the one on the server.
Because we’re already writing code in our HTML on the server, let’s send the data in the same place. We’ll add a placeholder in /public/index.html:
<div id="root"></div>
<script type="text/javascript" charset="utf-8">
window.REDUX_STATE = "__SERVER_REDUX_STATE__";
</script>
Now, let’s replace this on the server with a JSON representation of our data. Update the serverRenderer:
export default (store) => (req, res, next) => {
  // ...

  const html = ReactDOMServer.renderToString(
    <ReduxProvider store={store}>
      <App />
    </ReduxProvider>
  );

  const reduxState = JSON.stringify(store.getState());

  // ...

  return res.send(
    htmlData
      .replace(
        '<div id="root"></div>',
        `<div id="root">${html}</div>`
      )
      .replace(
        '</body>',
        extraChunks.join('') + '</body>'
      )
      .replace('"__SERVER_REDUX_STATE__"', reduxState)
  );
}
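The state hand-off itself is just a string replacement on the HTML template. A standalone sketch of that one step (htmlData and the state object here are stand-ins for the real template and store.getState() result):

```javascript
// A stripped-down stand-in for the template from /public/index.html.
const htmlData =
  '<div id="root"></div><script>window.REDUX_STATE = "__SERVER_REDUX_STATE__";</script>';

// Pretend this came from store.getState() on the server.
const reduxState = JSON.stringify({ app: { message: "Hi, I'm from server!" } });

// Same replacement the renderer performs: the quoted placeholder
// (including its quotes) is swapped for the raw JSON literal.
const page = htmlData.replace('"__SERVER_REDUX_STATE__"', reduxState);

console.log(page.includes('window.REDUX_STATE = {"app"')); // true
```

Replacing the placeholder together with its surrounding quotes is what turns the string assignment into a JavaScript object literal on the client.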
OK, now we’ll pick this up on the client and initialize the store before rendering the app. Update /src/index.js:
const store = configureStore(window.REDUX_STATE || {});

const AppBundle = (
  <ReduxProvider store={store}>
    <App />
  </ReduxProvider>
);
Build the app and start the node server one last time.
Going further
If you really want to, you can even fetch data on the server from a remote API backend. Just remember that the server will take longer to respond, because it has to wait for the request to complete before doing anything. Also, depending on how much data you have to fetch, your HTML file will increase in size, because you have to include the entire initial state in the response.
// src/store/appReducer.js
export const setAsyncMessage = messageText => dispatch => (
  new Promise((resolve, reject) => {
    setTimeout(() => resolve(), 2000);
  })
  .then(() => dispatch(setMessage(messageText)))
);

// server/controllers/index.js
const actionIndex = (req, res, next) => {
  const store = configureStore();
  store.dispatch(setAsyncMessage("Hi, I'm from server!"))
    .then(() => {
      serverRenderer(store)(req, res, next);
    });
};
While this works, I personally discourage its use. The main idea behind SSR is to render the app with a minimum initial state, so the user sees something until the app loads in the browser. Also, client-side applications should only hold the logic (aka data manipulation), while the actual data can be fetched async from whatever remote backend / API source.
Many production apps, such as Facebook and Slack, prefer this approach: they render an empty application “shell”, then fetch the data asynchronously and show it once it’s downloaded.
On Snipit.io, I personally use this technique as well. The only data I place in the store on the server is whether the user is logged in or not. Based on that, I load a separate UI on the client. Then, only after the app is initialized on the client, I get the data asynchronously from the backend API. Until the data is fetched, the user will see placeholders for the content that will be rendered. This way, the UI loads almost instantly, so the entire perceived performance is better.
What do you think about the techniques explained in these articles? Do you find them useful for your project? Let me know in the comments.
You can also follow me here on Medium or on Twitter @andreiduca for more stories like this.
Source: https://medium.com/bucharestjs/adding-state-management-with-redux-in-a-cra-srr-project-9798d74dbb3b (CC-MAIN-2019-47)
LINQ is clearly gaining a fair amount of traction, given the number of posts I see about it on Stack Overflow. However, I’ve noticed an interesting piece of coding style: a lot of developers are using query expressions for every bit of LINQ they write, however trivial.
Now, don’t get the wrong idea – I love query expressions as a helpful piece of syntactic sugar. For instance, I’d always pick the query expression form over the “dot notation” form for something like this:
from line in new LineReader(file)
let entry = new LogEntry(line)
where entry.Severity == Severity.Error
select file + ": " + entry.Message;
(Yes, it’s yet another log entry example – it’s one of my favourite demos of LINQ, and particularly Push LINQ.) The equivalent code using just the extension methods would be pretty ugly, especially given the various range variables and transparent identifiers involved.
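To see just how ugly, here is roughly what the dot-notation form of that query looks like once the let clause's transparent identifier is written out by hand as an anonymous type (a sketch; LineReader and LogEntry come from the log-entry example, and the variable name is mine):

```csharp
var messages = new LineReader(file)
    .Select(line => new { line, entry = new LogEntry(line) })
    .Where(x => x.entry.Severity == Severity.Error)
    .Select(x => file + ": " + x.entry.Message);
```

The intermediate anonymous type exists purely to carry both range variables through the pipeline, which is exactly the plumbing the query expression hides.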
However, look at these two queries instead:
from person in people
where person.Salary > 10000m
select person;
var dotNotation = people.Where(person => person.Salary > 10000m);
In this case, we’re just making a single method call. Why bother with three lines of query expression? If the query becomes more complicated later, it can easily be converted into a query expression at that point. The two queries are exactly the same, even though the syntax is different.
My guess is that there’s a “black magic” fear of LINQ – many developers know how to write query expressions, but aren’t confident about what they’re converted into (or even the basics of what the translation process is like in the first place). Most of the C# 3.0 and LINQ books that I’ve read do cover query expression translation to a greater or lesser extent, but it’s rarely given much prominence.
I suspect the black magic element is reinforced by the inherent “will it work?” factor of LINQ to SQL – you get to write the query in your favourite language, but you may well not be confident in it working until you’ve tried it; there will always be plenty of little gotchas which can’t be picked up at compile time. With LINQ to Objects, there’s a lot more certainty (at least in my experience). However, the query expression translation shouldn’t be part of what developers are wary of. It’s clearly defined in the spec (not that I’m suggesting that all developers should learn it via the spec) and benefits from being relatively dumb and therefore easy to predict.
So next time you’re writing a query expression, take a look at it afterwards – if it’s simple, try writing it without the extra syntactic sugar. It may just be sweet enough on its own.
14 thoughts on “You don’t have to use query expressions to use LINQ”
A new developer recently joined the team I work for and explained to me that he did not like Linq because he does not trust what’s running behind the curtain.
He does not like declarative programming either (attributes…) and prefers to have everything ‘under control’ imperatively.
I think you get much more confidence in a technology when you start understanding it rather than just knowing it. This is the main reason why I like books such as “C# in depth”, they allow me to understand how things work and not just write the code from the book blindly.
Three things I’m still sure about:
Parts of my code are really clearer with Linq and I really miss it when I need it on <3.5 projects
Other developers from other environments to whom I show Linq (to objects and entities) find it really neat
Never forget someone else might have to read your code in the future…
I agree, I see a fair amount of the more verbose syntax from fellow developers for such simple queries and at a glance (at least to me) the intention isn’t as clear.
I tried to +1 but I couldn’t find the vote button. Oh yeah this isn’t SO.
Wow, it’s the exact opposite for me, where query expressions seem like black magic, whereas a method call seems understandable.
I could use that aspect without reading any documentation outside of intellisense, yay for discoverablility. :)
Of course I don’t know SQL, and only use LINQ to Objects.
I think some of this comes from the fact that a lot of developers' introduction to LINQ is through LINQ to SQL examples. I think this sometimes leads some developers to write all LINQ code like they would an SQL statement.
Agreed. Add to that the number of overloads and additional methods (both standard and bespoke) that can *only* be called through dot syntax.
From what I’ve seen, people with too much query-syntax affinity regularly miss out on a simple, elegant way to do something simply because query-syntax can’t express it. Even simple *core* things like Skip/Take.
@CQ – the “+1” made me laugh ;-p
@Vincent – don’t forget that you can use LINQ (to objects, at least) with .NET 2.0 (as long as you are using C# 3.0) via things like LINQBridge.
Thanks Marc
Usually I just need to maintain the application and try to respect the same coding techniques as for the rest of the application.
I’d love to change all those FindAll(delegate) but it is not worth the effort
As you say it depends on the complexity, but pretty simple ones still sometimes look better with the sugar. For example take the following which has a mix of styles:
return (from d in descendants where d.Key == descendantKey select d.Value).Single();
in dot notation:
return descendants.Where(d => d.Key == descendantKey).Select(d => d.Value).Single();
Either works for me, but I’m pretty sure the first is more readable to all skill levels of LINQ.
Marc has a great point in that anything outside the sugar can easily be missed if you rely on it… I’m thinking perhaps I should always code in dot notation first and if ugly see if I can then sugar it, rather than the other way around.
I agree. This could be a perfect refactoring tip in ReSharper.
Today (for example) it suggests to invert “if” statements, if it can make the code shorter and less indented.
It could easily find LINQ expressions that could be shortened by their equivalent query expressions, and offer to convert them for you.
JetBrains: The ball is in your court :-)
I’ve heard Luke Hoban saying that query expressions were introduced in C# because the learning curve for LINQ still was quite high.
And to that point, I’m really interested in the percentage of those who use expressions (“from .. in .. select ..”) among the whole bunch of us using LINQ.
Cause from what I can see, it’s just a tiny fraction of “expressionists”. Maybe it could have been a wiser choice to introduce LINQ and *then* consider changing the language??
Just curious. I might be wrong, of course.
It’s funny, I originally used query expressions because they were so heavily advertised and they had a certain novelty. Nowadays, I often forget they even exist. I use LINQ all the time, but I virtually never use query expressions unless I really need to declare an intermediate with ‘let’. Once you get comfortable with the standard LINQ operators, you can leverage the framework much more effectively (and often more efficiently) using the extension methods.
It is interesting that most of the usages in LINQ (>90%) use the sugar-free syntax. It just seems more natural in many situations. It could be that for a C# developer there is less of an impedance mismatch when you look at the traditional syntax over a query expression.
I certainly find I have to think a little harder about LINQ which is written using the expression syntax.
@Jon,
A first “query expression” example, with its two “from” lines one after another, looks surprisingly similar to Scala for expression syntax. Inside Scala’s “for” you can also do it: select files from a directory, select lines from files, filter results and then use it inside {} block of “for”. That’s one of examples in “Programming in Scala”
Source: https://codeblog.jonskeet.uk/2009/01/07/you-don-t-have-to-use-query-expressions-to-use-linq/?like_comment=10677&_wpnonce=a2fb47392f (CC-MAIN-2021-17)
|
Comparing strings in Python using == or is
In the following Python code:
>>> s2 is s1
True
If I compare Var1 is Var2 in a conditional statement, it fails to give the correct answer, but if I write Var1 == Var2 instead, it returns True. Why is that happening?
is is used for identity testing and == is used for equality testing; that is, a is b is equivalent to id(a) == id(b).
So your code will be evaluated in the following way:

>>> a = 'hi'
>>> b = ''.join(['h', 'i'])
>>> a == b
True
>>> a is b
False
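You can see the distinction directly with id(). Note that CPython may intern short string literals, so two equal literals can sometimes share an identity, which is exactly why is should not be used for value comparison:

```python
a = 'hi'                 # string literal (interned in CPython)
b = ''.join(['h', 'i'])  # equal value, but built at runtime, so a distinct object

print(a == b)          # True: same value
print(a is b)          # False in CPython: different objects
print(id(a) == id(b))  # False: 'is' compares identities, i.e. id()s
```

Rule of thumb: use == for comparing values, and reserve is for singletons like None (e.g. `if x is None:`).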
Source: https://www.edureka.co/community/21043/comparing-strings-in-python-using-or-is (CC-MAIN-2020-40)
Slashback: Cinelerra, Dolphiname, Phoenix 259
And you don't want your database being a Flipper. MySQL has finally announced the name of their logo dolphin, and the winner is... Sakila! The name, submitted by Ambrose Twebaze of Swaziland, was chosen from 6357 entries "because it represents the global reach of MySQL as well as the friendly, open nature of the company." Slashdot covered the contest back in January, and MySQL listed some of the more popular names submitted back in April.
Perverse incentives are the most fun. Mark Barnett writes in reference to the ongoing pets.com lawsuit story Update: 10/04 00:18 GMT by T : Sorry, that's "PetsWarehouse," not "pets.com.":
"I was one of the settling parties. I did not settle out of fear. I settled because he wanted me to run his banner on my website for 120 days. The settlement did not say anything about the number of times it had to run. I ran it once per day at about 4 AM EST for 120 days. It was my joke on him. I think I got the better deal. I ran the defense fund banner about 1.5 million times versus his 120 times."
Wings for a lizard. Espectr0 writes "Phoenix 0.2 has been released!. Improvements include the return of the sidebar, extensions management and web form autocomplete. It's also a little smaller and faster, and 0.3 will be released in about a week. Get the scoop here."
Unsolicited testimonial. boomerang_56 writes "Wanting to see what the fuss was about, I just installed Red Hat 8. For me, working IEEE1394 features are a must. It was nice to see that now I don't have to recompile the kernel just to have Firewire working. So I downloaded and compiled Kino, and was able to capture from my camcorder, and even control it, without the major tweaks I used to have to do. Then I found out that Cinelerra has been released at version 1.0!!! So I downloaded and installed it via RPM (Pentium II binaries). I had to install an old version of libstdc++-3, but that was easy. No "--force" or the other hassles we used to have to go through. So the first time I fired up Cinelerra, after changing the preferences for IEEE1394 capture, I was impressed to see it actually captured on the first try. I guess the bottom line for this submission is as a user I wanted to say "thanks" for all the developers working on this kind of thing. We all know that besides gaming, video editing is the big killer app. It's really nice to be able to have this kind of power in open source software and not have to boot to Windows just to edit video now. It's not easy enough for my mom yet, but the way things are going, it won't be long. Oh, links... get Cinelerra here (check out the screenshots too). Get Kino here."
Blinkenlampen ueber Paris. fluxdvd writes "In celebration of the Nuit Blanche art festival in Paris, Project Blinkenlights has transformed Tower T2 of the Bibliothèque nationale de France into what is claimed to be the world's largest computer screen. The system used to drive the display runs an embedded version of Linux.
Read the story at Linuxdevices.com. They have live streams of the building at night (Paris time) and replay the previous night's display during the day. It's quite impressive :)"
We mentioned the plans for this display a few weeks ago.
Don't you hunger for a patent-free, royalty-free, better-at-identical-bitrate alternative? The release of Red Hat 8.0 included the notable, intentional ommission of MP3 software, a decision Red Hat made on the basis of possible patent and royalty problems.
Now SnowDeath writes "After two days of trying to get my ALSA install to work correctly in RedHat 8.0 (Psyche), I finally headed over to the xmms website to see if there were any known bugs with ALSA. Lo and behold, the first thing my eyes read tells how RedHat Software decided to not include the mp3 plugin in their xmms install in Psyche in fear of pending patent problems. So, do not despair, there is an rpm "update" for this particular problem on the xmms site."
CS (Score:4, Funny)
Am I the only one thinking this was someone's plan to play counter-strike on the worlds biggest screen?
Sakila (Score:5, Funny)
Shit, that's what I thought when I first glanced at that name.
Re:Sakila (Score:2)
Mmmm, Shakira [geocities.com]...
Re:Sakila (Score:3, Funny)
I had a Japanese roommate/friend/coworker for a while, and I'd beg to differ. We worked for weeks trying to get his English R's right (while I worked on trying to get my Japanese ra, ri, ru, re, & ro right), especially on words that ended with an r (like door, more, etc).
One day we were leaving the apartment and he absentmindedly referred to closing the door and said it exactly right. "Iwi!" I almost yelled, "you said it!" Unfortunately, he never could reproduce it again.
Don't say "R" (Score:2, Offtopic)
When I congratulated her, she said:All that work for nothing....
Re:Sakila (Score:2)
It's more complex than that. The Japanese "r" sound, found in common words like "ryori," meaning cooking or cuisine, is sort of a cross between the American "r" and "l" sounds. Whereas Americans (typically) make the "r" sound with their lips, and the "l" sound with their tongue, the Japanese "r" sound is made with a little bit of lip and a little bit of tongue. (Er, you know what I mean.)
So Japanese speakers often have trouble with both initial "r" and initial "l" when speaking English. Terminal "r" and "l" sounds, of course, just disappear entirely. "Door" becomes "doh," and "pull" becomes "puh."
But the average American has a much harder time pronouncing a Japanese word like "ryori" than Japanese people seem to have saying a world like "library."
Re:Sakila (Score:2)
I can't figure out how it's even possible to make an "r" sound with one's lips... I make an "r" sound by curling my tongue back so the tip presses against the middle of my palate, then making some sort of noise with my vocal cords. My lips aren't particularly involved... they just stay open during the process.
Japanese "r" keeps the tongue flatter and taps the tip close to the front of the mouth, making it sound sort of like a soft "d". Check this page [thejapanesepage.com] for a more complete explanation, plus sound samples.
Re:Sakila (Score:2)
Just trying this myself, and I can't understand how you could make an "r" sound while pressing your tongue against your palate.
It seems to me that *my* tongue remains fairly neutral on the "r" sound. It seems to be mostly made by the shape of my lips.
pets.com? (Score:4, Informative)
Re:pets.com? (Score:3, Funny)
Mark Barnett (Score:5, Insightful)
Sure it's funny, but now they can tell other people that previous suits have been successfully settled out of court and they had better pay up.
All actions have consequences.
"Weaseling out of things is what makes us different from animals... except the weasel." -- H. Simpson
Re:Mark Barnett (Score:3, Insightful)
All a settlement means is that both parties agreed
to something. If one -- obviously insane -- party
says the terms were favorable to them, and you
believe them without checking, then you're a fool.
As a programmer... (Score:4, Interesting)
Each distro also demands tradeoffs. Redhat sacrifices everything to be "easy to install". Debian sacrifices currentness for stability (ha-ha). SuSE sacrifices compatibility with other distros for ease-of-use. And Sorcerer sacrifices that compatibility even more.
When Redhat removes another component like they did here, it's just business as usual in the Linux distro world. But for those developers out here who want to write applications, it's really hard with moving targets like these.
Re:As a programmer... (Score:3, Insightful)
Program to only one platform and ignore the others, and you better not tell your users that you support them.
Re:As a programmer... (Score:3, Insightful)
Of course, you should learn how to package for your favorite distro.
Re:As a programmer... (Score:3, Informative)
Programming under Linux is a bit tricky. Basically its a tradeoff between using the libraries specified in the LSB (which doesn't help you at all for GUI programming), or to simply target a specific set of libraries (probably the ones bundled with the newest RedHat). Too many of Linux's APIs are currently in flux, and so it is a crapshoot which versions people will have installed.
The good news is that fixing the problem is usually as easy as making sure the right libraries are installed.
This problem, however, is a horse of a different color. This doesn't have anything to do with shifting APIs or the difference between distributions. This has to do with the fact that MP3 compression is patented, and the patent holders have changed the terms for use of the patents. RedHat can't distribute MP3 codecs without paying royalties, and so they don't distribute the libraries that XMMS uses to decode MP3s.
Re:As a programmer... (Score:3, Interesting)
RedHat can't distribute MP3 codecs without paying royalties, and so they don't distribute the libraries that XMMS uses to decode MP3s.
Err, umm.... well, xmms.org [xmms.org] says:
So all this about RedHat not being able to distribute MP3 codecs without paying royalties actually appears to be, as we say, a bunch of FUD. Maybe they have different reasons, but it's not about royalties.
ummm... (Score:2)
It's really quite simple, it IS about royalties, either they pay them to cover their ass so Thomson can't sue them a year down the road from now, or they simply don't include an MP3 decoder.
Re:As a programmer... (Score:2)
Last I checked RedHat sold RedHat Linux, and that's almost certainly the catch. It's easy enough to take Thomson's word that they aren't going to prosecute, but the fact of the matter is that you don't need to protect patents like you protect trademarks. As long as RedHat doesn't have it in writing that they are free to distribute MP3 codecs then RedHat is liable for royalties (and penalties as well should it go to court).
Thomson probably isn't going to go after the folks working on XMMS, but RedHat Linux could easily be categorized as a "commercially sold decoder." Nullsoft pays licensing fees, by the way, and they clearly give their WinAmp away as well. The world of law is a murky place where it is always better to be safe than sorry. You can't blame RedHat for staying clear of potential problems.
Re:As a programmer... (Score:2)
What about The Stable And Secure, Slackware [slackware.com]?
Re:As a programmer... (Score:2)
Re:As a programmer... (Score:2)
Because setting up such things can be more of a pain than writing the application itself. Autoconf is a good idea only because it works under some fairly extreme conditions. There's nothing else at all to recommend it.
Re:As a programmer... (Score:2)
Re:As a programmer... (Score:2)
I've used automake for several projects, and never for portability. It is a nice build system. Automake+CVS+a test suite can be a beautiful rapid development environment.
Re:As a programmer... (Score:4, Funny)
Re:As a programmer... (Score:2)
Re:As a programmer... (Score:2)
Write to the standard (Score:2)
Do you want to know more? [linuxbase.org]
Linux Standard Base & GCC 3.2 (Score:2)
If you develop in C++, make the effort to upgrade to GCC 3.2 and the new style standard C++ library style of programming. Believe me, it's worth the effort. The only exception to this is if you're interacting/recompiling with older KDE or Mozilla. The latter needs GCC-2.96 to load plugins.
Re:Linux Standard Base & GCC 3.2 (Score:2)
Do not set LD_LIBRARY_PATH. [visi.com] Compile with -R or LD_RUN_PATH instead.
That essay about LD_LIBRARY_PATH is one of the most interesting things I've ever read. Reading it helped me understand not only the issues involved, but affected a lot of my thinking about programming in general. It's good for you!
:) I never can remember where it is, but I see that it's the first hit you get when you search for LD_LIBRARY_PATH on google now.
Re:As a programmer... (Score:2)
Re:As a programmer... (Score:2)
Pretty much. The difference being that on Linux, you can report bugs you find and they actually get fixed. They don't just ignore bugs for 7 years [microsoft.com] which have never been fixed.
Another nice thing is that if you release the source for your application, the distro people themselves generally do all the work necessary to get your stuff working. In my experience, the parent question has really been a non-issue.
Lossy formats are louse (Score:4, Interesting)
Re:Lossy formats are louse (Score:2, Insightful)
By sharing music, are we really showing record companies they don't need to exist, or are we showing them they need to tighten the reins on people sharing music so they can top off their profit margin?
Re:Lossy formats are louse (Score:2)
First off, not all of the record labels out there exist to screw you out of your hard earned cash. While you may think you're 'fighting the man' by swapping music with your buds, ultimately, you ARE doing damage to the artists. The bigger a problem swapping becomes, the more money the labels are going to spend trying to fight it, and legislate it, and ultimately, that means a tighter grip on artist rights and material. The labels are draconian enough, and like enough, as copyright owners, they hold the cards. You're not being Robin Hood by trading those mp3s.
Next, using 30 megs of space/bandwidth for a single song is more than ludicrous, it's flat out stupid. One, I've got better uses for the disk space (like porn). Two, I've got better uses for the bandwidth (like streaming porn). Moreover, pegging out the pipes on your schools network just costs them more money, and by extension, the students.
Like as not, there are things in this country you may THINK you have fundamental rights to, but you're operating on borrowed time if you expect to go forward in life with that attitude. Here's an idea: how would you feel if I wandered up to your house at 2 am with a handset, tapped the J box on the side of your house, and spent an hour on the phone to Tibet? How about if I did this every night for a month until you got the bill for it? The usage pattern alone is enough for the phone company to tell you to take a hike when you say it wasn't you. You still wind up paying.. for my usage. While abstract, this is roughly how it works out for colleges and businesses across the country, footing the bill for your playtime.
You want to give the RIAA the finger? Good for you! Do it by producing your own quality material, and don't license it to them. You want a nice phat digital on-demand archive of quality audio in your home? Pay for the damn CD. Think it costs too much? Wait a few months and buy it used, or GET A DAMN JOB.
While our nation may be founded on acts of civil insubordination, I hardly find the tyranny of the RIAA to be affecting my life to such a degree that I need to resort to what amounts to petty theft from an artist who spent more than a few years busting ass playing shitty bars and clubs because they believed enough in their music to keep at it. Sure, I've swapped mp3's with people. The things I didn't like, I deleted. The stuff I liked, I bought. Don't screw it for the rest of us because you're a cheap bastard.
Re:Lossy formats are louse (Score:2)
Put up or shut up
Re:Lossy formats are louse (Score:2)
Re:Lossy formats are louse (Score:2)
If that's the case, sending checks directly to the band probably won't help much either, since they would be forced to hand them over to their label anyway.
Re:Lossy formats are louse (lousy) (Score:2)
Start with a 50 MB WAV file. Compress with Flac, and you *might* get 60% cut out. I honestly don't see much advantage over gzip or zip! Why bother with a new format name? ("Oh, our FLAC is better than zip/gzip, because... eh... we have a different name! We offer 2.2% better compression ratios!")
So now you have a 20 MB file. Lesee, over a 28.8 modem connection, you have...
A royal nightmare.
Ogg, on the other hand, compresses comparably to MP3. Your 50 MB WAV file might compress down to 4 MB with reasonable audio quality.
Lesee, over a 28.8 modem connection, you have...
Something reasonable.
Free beer! (Score:3, Funny)
FLAC is champagne, and mp3 is beer.
Ogg is quality beer, and MP3 is Bud beer. How is Bud beer like repairing your filesystem on a boat? They're both fscking close to water.
Re:Free beer! (Score:2)
As a Canadian, and therefore a connaisseur of all things alcoholic
Re:you are ignorant (Score:2)
Yeah, sure. A real lively music scene. How do you listen to it once you get home?
Perfect example: There's a guy here in Memphis who plays solo quite a bit, Ron Franklin (website [rfentertainers.com]). I really love his solo stuff, a good bluesy/folk/rock/gospel mix. But he only releases albums w/ the Entertainers. Don't get me wrong, I love RFE, too. But sometimes I'd rather listen to his solo stuff. I could probably get permission to record a show (I'm friendly w/ him), but then I'd have to mike it, try & get decent levels, pipe it into line in,
But hey! If you want to do all the work for me, I'd be glad to give you $10 for some
Re:Lossy formats are louse (Score:2)
As both a programmer and a musician who doesn't get paid for either (though I am employed in the tech field), I can honestly say that there will always be people who are willing to create music, or programs, for free. Along the same vein (but off the subject entirely), when I read about Microsoft bad-mouthing a huge group of coders who are coding just for the love of code (the OSS movement), it sparks a flame deep inside me; it seems fundamentally wrong. I won't elaborate too far on this, because I tend to get far too deep in my comparisons, but face it -- a big company (or rather, since it's the software industry and a monopoly, THE big company) is criticizing a worldwide volunteer effort, bad-mouthing it in a way that would land most people in court for slander (or is that libel? It's been too long since I brushed up on my legal...) if the tables were turned, and regular people are actually buying into it. Maybe I should write about those nasty communist orphanages?
big screen (Score:4, Funny)
Re:big screen (Score:2, Funny)
Oh yeah! You like hyperthreading, don't you baby?? Who's your daddy? Red hat's your daddy!
Re:big screen (Score:2)
But in all seriousness..time to find out if that bitch has an active internet connection...
You start smoking pot (Score:2)
MySQL Control Center (Score:3, Informative)
PROS:
1) Sleek User Interface (graphically shows PRI keys and I believe you can map relations (FK), but I haven't figured that out yet, also graphically shows indices).
2) Some queries download faster than web browser and telnet/ssh. Some SQL statements execute quite quickly like DELETE and INSERT.
3) Multi-window display helps to show historical SQL statements and current actions.
CONS:
1) System crashes with "large" queries. Kind of bad that I tried a simple SELECT of one of my "large" tables with 2,500 rows/records and my computer crashed. Yea, I quoted "large" because it is relative between my tables, not to the maximum number of rows that can be stored in MySQL tables. Your mileage may vary as I have a really old computer at home - (64 MB RAM, Pentium I, 32-bit Virtual Memory, Windows 95b).
2) Not very user-friendly in terms of SQL beginners. You have to know SQL in order to operate the application via the SQL pane.
3) Compared to other products like MS SQL Server Enterprise Manager, some of the screens are difficult to interpret (related to #2).
Hope this helps
Re:MySQL Control Center (Score:2)
Phoenix: Everything I always wanted in a browser! (Score:5, Interesting)
1) Customizable Toolbars
2) Home button where it SHOULD BE!
3) Inline form management (Mozilla's form manager is all but worthless unless you've already filled out 20+ pages of forms.)
4) Theme that respects my system colors! (Go ahead, change your system colors, Phoenix changes with them!)
5) No bundled software--I just want a browser! And if you use Mozilla for the mail, don't worry, the Mail client will be getting the same overhaul as the browser. It's a project called Minotaur [mozilla.org], and will be started on roughly when Phoenix hits
There are tons of other things to mention here like the extensions manager, default popup blocking, tabs, worthwhile sidebars, ability to remove the throbber, a clean statusbar that actually works, etc., but it's best if you just see it for yourself! Go grab a copy, and then while you're enjoying it, thank Asa Dotzler, Blake Ross, Dave Hyatt, and the other guys who are making this a reality!
Thanks guys!
Not to mention... (Score:3, Funny)
From the release notes:
6. Why would I want to use 0.2?
It has a cool build ID. 20021001 (October 1, 2002).
...nifty
Re:Phoenix: Everything I always wanted in a browse (Score:2, Interesting)
Not sure about the others, but Dave Hyatt is/was one of the principals on the Chimera project and you can really see the similarity between these two browsers -- even to the point of the OS X style slide-out preference sheets. Very nice.
Re:Phoenix: Everything I always wanted in a browse (Score:2)
I wish this was true. Phoenix has an interface for disabling extensions. But the uninstallation button is disabled because Mozilla still doesn't implement the functionality. (And Phoenix is a rewrite of the GUI portions. It doesn't implement anything new in the base.)
The uninstall functions in existing packages have been a pain to implement for the developers of the extensions. It's still several hundred lines of code to provide an uninstall button.
Just to show how open MySQL is... (Score:5, Funny)
bastardo 14
absolutely hilarious
Phoenix Review (Score:5, Informative)
Phoenix is going to be the default browser in all Windows boxes that I admin - simply because it doesn't need to "install". Just plunk the directory over the network when a new version comes out and - wham! New browser!
No "Updating Windows Installer"
No rebooting.
No IE vulnerabilities!
No unnecessary features from Mozilla.
No EULA to click through.
Oh. No rebooting!
Re:Phoenix Review (Score:2)
Doesn't the
Re:Phoenix Review (Score:2)
Windows 2000 was supposed to end all the rebooting. How soon we forget the promises.
I have learned since I set up a Samba PDC at home that some of the installation features play well with network domains. You don't want everything to be on the network. Some things are local to the computer, some things are local to the user, and some things are totally global. For instance, your OE mailboxes can be on the network, but the server information is kept in the registry.
M$ does not employ stupid people. Their products are nice, but their management is questionable.
Red Hat and software patents (Score:3, Troll)
What patents are Molnar and Red Hat applying for? Why, patents on parts of Linux itself. See applications 20020059330 and 20020091868 at
Re:Red Hat and software patents (Score:3, Informative)
20020059330
Method and apparatus for atomic file look-up. The request can be redirected by the application to a process that includes blocking point handling. An operating system according to the present invention includes a file system including a file system namespace, and an operating system kernel is operatively connected to the file system. The operating system kernel includes the file system namespace cache and the atomic look-up operation.
20020091868
Method and apparatus for handling communication requests at a server without context switching. An application protocol subsystem and protocol modules are disposed within an operating system kernel at a server. The protocol subsystem creates an "in-kernel" protocol stack that stores information regarding application protocol requests, such as HTTP and FTP requests, in a kernel request structure. A user space application can then continue execution while the operating system responds to the application protocol request without context switching. In this way, application protocol requests received over a network are handled and responded to by the server without causing a context switch.
---------
What has Red Hat done to cause you not to trust them? They are a solid GPL supporter, they don't play games like Lindows does with EULAs on GPL software. We have no reason to believe that they will do what they said, use these patents to protect open source, not hinder it.
They are not distributing the MP3 code because it opens them up to potential lawsuits. They are selling the code, along with distributing it freely, so Frauenwhosit just might have a problem with that, and decide a 200 million dollar bank account like Red Hat has, is a juicy target.
Re:Red Hat and software patents (Score:2)
1) They're applying for patents. Buying a handgun (copyright) for personal self-defense is reasonable. Buying a thermonuclear device (patent) for personal self-defense is not.
2) They have already stated that only GPL software will have a free ride, regardless of the software freedom other licenses provide. These patents allow them, should they choose to excercise their legal rights, to extort royalty fees out of every other distribution, since every distribution includes non-GPL software such as XFree86, Apache, etc. To reiterate, no one needs a thermonuclear device for personal self-defense.
Re:Red Hat and software patents (Score:2)
You obviously have no understanding of what patents are, or how they're different from copyrights. Put simply, you cannot both copyright and patent the same thing. Copyrights apply to certain things, and patents apply to other, different things. There's no overlap.
Before you speak out against something, you should learn about it. It helps cut down on looking like an idiot.
Re:Red Hat and software patents (Score:2)
Re:Red Hat and software patents (Score:2)
Hand-guns are not very useful if all your opponents have thermonuclear weapons. Anyway, I will always prefer being hit by a patent lawsuit to being hit by a thermonuclear weapon.
Sakila (Score:4, Funny)
BSD has the BSD Daemon (sometimes known as Beastie, the daemon story is pretty long and I'm not going to type it here)
GNU has a Gnu (Well they share the same name so it was a fitting animal)
So umm why does MySQL have a dolphin? Named Sakila?
Re:Sakila (Score:2)
Because if you pronounce SQL without saying each individual letter, it sounds like dolphin talk.
Re:Sakila (Score:5, Funny)
(1) Start with the letters S, Q, L.
(2) Add arbitrary vowels between them to make it a three syllable word: Sa-Qi-La.
(3) Observe that people will pronouce the middle term "Chi" or "Qui" or something like that.
(4) Change Q to K. Reflect on how the KDE project will be happy about this (Symbolic Kuery Language), and also, how it sounds like a Latin crossover star [shakira.com]. Be pleased.
(5) Think of how cool the name Squall would have been. Masculine, sea-related, implies a disruptive yet powerful force, has S, Q, and L in it...
(6) Sigh.
Phoenix Screenshots... (Score:3, Informative)
I have posted several screenshots on my site:
0.1 screenshots are here: [phatvibez.net]
0.2 screenshots are here: [phatvibez.net]
Re:Phoenix Screenshots... (Score:2)
https works great, cookies work, bookmarks work. And it renders stuff great. And it's FAST! Who woulda thunk it?
sakila? (Score:4, Funny)
seriously. WHAT THE FUCK.
the dolphin's name is SQUEAL. EVERYONE thinks it should be SQUEAL. i am starting my own fork of mysql starting today, and the ONLY thing different is that the dolphin is named SQUEAL!
on that note:
ARE YOU A PHP DEVELOPER? WORK WITH ME AND MAKE MILLIONS!
Web Developer II [sst.com]
Re:sakila? (Score:2)
Education required: BA or BS.
Yes, I have a bad attitude and I am educated in the fine art of bull shitting.
-- iCEBaLM
African flavor? More likely the Arabian Peninsula (Score:2)
Nice to see that Slashdot got a reply. (Score:2)
Finally, I ended up forgetting about it. All the better. The name that they chose was equally forgettable. A "global" name probably means one that isn't trademarked that you're likely to forget in 5 minutes unless you're bombarded with heavy advertising and brand building.
So what did the dolphin namer win, anyhow?
Re:Nice to see that Slashdot got a reply. (Score:2)
A free copy of MySQL, obviously!
Concurrent use of Mozilla and Phoenix (Score:2)
Re:Concurrent use of Mozilla and Phoenix (Score:3, Informative)
Sakila! (Score:5, Funny)
overkill (Score:2)
It would be a different story if redhat received notice from Thomson multimedia, requesting that the package be removed. Since Thomson seems to be fine with Linux distros including mp3 player capabilities, why remove it?
xmms mp3 workaround (Score:4, Informative)
I didn't go back to the xmms site, I just used the Red Hat xmms RPMs which were included in the final beta called (null). These are xmms-1.2.7-14.mp3 and xmms-skins-1.2.7-14.mp3. I figure I don't need a lot of updates to a basic file player, and I prefer Red Hat authored RPMs for a Red Hat system.
Yanking MP3 support is unfortunate but not worth crying about. If you like MP3s, you probably can handle the hunt for the appropriate files to get your fix. I only use MP3s because so few hardware solutions support OGG or other formats yet. I'd love it if my SliMP3 [slimdevices.com] supported OGG too, but for now it does a great job of making a household jukebox. If I adopt a similar OGG solution, I'll just re-rip the CDs.
ballmer (Score:2)
too bad this wasn't around when the All Your Base craze got started
Phoenix......I'm back! (Score:3, Insightful)
So far Phoenix has yet to crash, is "popup" free, fast and everything I wanted Mozilla to be.
Re:Phoenix......I'm back! (Score:2)
Sort of like a man swimming in a sewer claiming that he feels dirty because he's got a turd in his hand isn't it?
Blinken Ads (Score:2)
er, wait... We already have those.
Linux 1394 Works (Score:2, Insightful)
style video editing under Windows. I've fiddled with every hardware configuration and used every capture program under the sun and I still can't capture more than a few minutes of video without loosing frames. I read the various forums occasionally and it seems to me that a Ouija board has more relevant things to say about video editing.
It's not your motherboard. It's not software X. It's all Microsoft. I dual booted Red Hat (so my other box is Debian, I was lazy) and lo and behold I can capture for HOURS and nary a dropped frame. When it did drop a frame, dvgrab politely told me why. This stuff works. Too bad I can't edit under Linux yet. When Cinelerra has the stability and feature set of something like Sonic Foundry's Video Vegas, desktop video will finally stop being an aggravating trip through the worst that personal computing has to offer.
By the way, if you are a Windows user frustrated with your editing app crashing, get Video Vegas. Despite the crazy name it has plenty of professional features and it's rock solid. Unlike Premiere, which I can crash just by blowing on the case gently, Vegas lets me get through hours of footage with no back talk.
Re:it's spelled "losing" not "loosing"... (Score:2)
Sakila? (Score:2)
Eric Cartman: "Yeah, smart on rye bread with some mayonaise."
Can't get Cinelerra to work (Score:2)
I tried using mencoder to convert to another format, but mencoder complains about 'illegal instruction' for some reason.
Anybody have any useful suggestions ? How can I convert the files ?
Sakila - how to pronounce (Score:2)
Is There A Tool For These Names? (Score:3, Funny)
Sakila. Avaya. Verizon. Aquent (used to be MacTemps). Akamai.
Oh sure, they always say it comes from someplace. Akamai, for example, is supposed to mean something in Hawaiian. I forget what. It doesn't really matter because all these names sound the same. I think there is a secret Perl script somewhere that they aren't telling us about.
I think it has two basic algorithms. One of them takes a regular word and changes the spelling according to an algorithm I've yet to decipher. The other, simpler algorithm uses the following syllables:
av, ev, iv, al, el, il, ul, ti, te, vi, va, vey, ty, tra, tri (perhaps others) and strings them together randomly.
Try it. It's easy:
Aviva. Eltiva. Altria. Ultera. Tyvela.
Thank-you.
By reading this post, and using the information contained herein, you consent to pay an outrageous consulting fee to me for naming your company. Make checks payable to Steven Marthouse, 5308 Oldcastle Ln., Springfield VA 22151.
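The imagined "secret Perl script" is easy enough to sketch. Here is a purely illustrative version (written in JavaScript rather than Perl; the function name and three-syllable default are inventions for this sketch):

```javascript
// The syllable list from the comment above, strung together randomly.
const syllables = ["av", "ev", "iv", "al", "el", "il", "ul",
                   "ti", "te", "vi", "va", "vey", "ty", "tra", "tri"];

// Generate a capitalized brand-style name from n random syllables.
function brandName(n = 3) {
  let name = "";
  for (let i = 0; i < n; i++) {
    name += syllables[Math.floor(Math.random() * syllables.length)];
  }
  return name[0].toUpperCase() + name.slice(1);
}

console.log(brandName()); // prints a random three-syllable name, e.g. "Altiva"
```

Run it a few times and you get exactly the sort of output the comment lists: Aviva, Eltiva, Tyvela, and so on.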
Re:Is There A Tool For These Names? (Score:3, Funny)
For an extra $50,000, I'll type your new name into Google, and advise you of how many hits come back. If there are fewer than 50 hits, I'll research them and check to make sure that it's nothing anybody would care about.
Just by typing random names based on those syllables (and a few I left out, like "a" by itself), I had no trouble getting Google search results with fewer than 10 hits in some cases. An interesting side note--most of the hits came from character names used in online RPGs and/or Anime series. Is it possible that these corporate consultants are just geeks with a sense of humor?
Why Doesn't Red Hat.... (Score:2)
This way, they don't violate the patents (instead redirecting the download to xmms.org, which doesn't seem to mind distributing it), while still making it relatively simple and automatic for new users and others who then wouldn't have to figure out what's going on.
See 'ya in court! (Score:2)
You'll be sued now, for sure.
RadialContext for Phoenix (Score:2)
I just put a package for RadialContext for Phoenix on the usual downloads page.
Huh? (Score:2)
Uhh, we do? Could somebody explain why please? I've heard it's more popular in the States than elsewhere, but I can count on one hand how many times I've seen (or even heard of) people editing their home movies on their computer: none.
Apple seem to make a big deal of this as well. Is this some kind of craze that never reached Europe, much like text messaging/sms never made the crossover to the US? Or is it just the latest round of tech industry hype, not actually backed up by substance?
From the Phoenix FAQ (Score:2)
:-)
Phoenix i686 on K6 (Score:2)
Re:wow, didn't know it had that (Score:4, Funny)
Re:MySQL new version (Score:3, Informative)
Re:MySQL new version (Score:2)
Re:Those rpm "--force" tricks... (Score:2)
While it's good to have a reminder every now and then of why I switched to Debian, every time I use it is a reminder of why I stay.
Opera needs --force and works fine (Score:2)
There are exceptions to every blanket statement, I guess. The Opera 6.x RPMs need --force on my system because I'm missing a Tk library which the RPM requires (for what I don't know). I don't want to install said libraries, so I use --force when installing Opera via RPM. Everything's fine.
-B
Ember Timer Leaks: The Bad Apples in Your Test Infrastructure
January 3, 2018
We follow the 3x3 system: the notion that we should be able to ship code to production three times a day, with no more than three hours between releases, so that our members experience the latest and greatest of our platform. With 3x3, we're shipping more code than ever to our members, which in turn reduces the amount of time we have to manually test our releases. In order for us to confidently ship to production three times a day, we need to be able to rely heavily on the health of our automated tests, as well as on our test infrastructure as a whole.
SPAs and test (in)stability
LinkedIn.com is built as a SPA, and if your site is built using Ember, you’re in the same boat. Building LinkedIn as a SPA offers many benefits to our members, such as faster page loads between routes and fewer round trips between the client and the server. On the other hand, developing SPAs comes with unique challenges, such as managing memory and asynchrony. In a traditional non-SPA, the browser gives us a clean slate on each page reload. We do not have this luxury in a SPA environment. If care is not taken when writing a SPA, it’s easy to end up with asynchronous code, such as xhr requests and setTimeouts executing after the components that initiated these calls have been destroyed. This can lead to nasty side effects, a poor user experience, and test instability. More on this later.
Testing asynchronous code in Ember: A crash course
Let’s talk about the wait helper (recently renamed to settled), Ember’s solution to testing asynchronous code. The wait helper, at the most basic level, is a utility function that returns a promise that will resolve when all asynchrony in the application has been completed. In Ember, an asynchronous timer is created with a call to Ember.run.* methods (e.g., Ember.run.later). Each of these methods call setTimeout under the hood, hence the name "asynchronous timer." Because the wait helper pauses the execution of the test runner while at least one async timer exists, it’s a great way to ensure that all async code has been completed prior to running your assertions.
Is using the wait helper causing test timeouts? You may have a leak.
This section refers to "leaking timers": timers that are set up at some point during the application's lifecycle but not torn down when the application is destroyed.
At LinkedIn, there was a point in time when a lot of our tests were timing out, and we weren’t sure what the cause was. We eventually noticed that many of the tests timing out were using the wait helper, which pauses the test runner until all asynchrony has finished. Had the same set of tests been timing out consistently, we could have safely assumed that the application code being run by the tests in question was likely leaking async timers. Unfortunately, though, different tests were failing between multiple executions of our test runner, which led us to believe that we likely had timers leaking between tests. From here, we needed to come up with a way to a) prove that we had async timers leaking in our test suite; and b) locate the source of said leaks in order to remove them from our application code.
Identifying leaking timers in Ember
Within the Ember.run namespace, Ember exposes a handy method called hasScheduledTimers, which returns a boolean (true if there are any running async timers). Combining hasScheduledTimers with QUnit's testDone method makes for a convenient way to confirm async leaks in tests, as well as to pinpoint the source of the leaks.
Tracking down an async leak
Congrats, you've detected a timer leak in your application and you're now ready to clean it up. Let's walk through what a leak might look like, and how to track it down. Consider a component that sets up a timer to run after five seconds, but never makes a call to cancel the timer when the component is destroyed.

Such a component results in a leak if it is destroyed before five seconds have passed, since five seconds is the delay used in the run.later call. Let's create an integration test around this component, along with a check for leaks in the QUnit.testDone hook. Note: for illustration purposes, we're hooking into QUnit from within our test file; in a typical Ember app, this would be done from within tests/test-helpers.js.
With console.log calls added to the testDone hook, each leaked timer shows up in the message labeled "Existing timers" as a pair of items: the first item in the array represents the time at which Ember will execute the timer's callback, while the second item in the array is the callback itself.
By drilling down into the [[scopes]][0].method property, we’re able to get a reference to the function definition of the timer’s callback, which enables us to make the changes necessary for cleaning up the leak. In this case, myRunLater is the name of the callback function that the timer calls when it finishes executing.
Patching a leaky timer
We’ve figured out where the leak is coming from, so now we need to patch the leak. Any timers set up during the lifecycle of a component need to be cleaned up when the component is torn down. To accomplish this, let’s modify our previous example by making a call to Ember.run.cancel within the willDestroy hook.
Lots of timers? Enter ember-lifeline.
Remembering to clean up all of your async timers can be a challenging task, especially when working in a large application split amongst lots of engineers. Luckily for us, ember-lifeline takes care of this hassle by providing a wrapper API around run.later, run.debounce, and run.throttle. Ember-lifeline keeps a reference to each running async timer and cancels them from within the willDestroy hook.
Conclusion
SPAs are powerful, but they come with the added responsibility of managing asynchronous requests throughout the lifecycle of the application. As engineers, it’s important that we take this added responsibility seriously so that we can confidently ship our best product to our members.
Acknowledgements
Big shout out to Steve Calvert, Robert Jackson, Scott Khamphoune, and Kris Selden for their immense help in ridding our codebase of these tricky leaks.
https://engineering.linkedin.com/blog/2018/01/ember-timer-leaks
Acme::Tools - Lots of more or less useful subs lumped together and exported into your namespace
 use Acme::Tools;
 print sum(1,2,3);                    # 6
 print avg(2,3,4,6);                  # 3.75
 my @list = minus(\@listA, \@listB);  # set operations
 my @list = union(\@listA, \@listB);  # set operations
 print length(gzip("abc" x 1000));    # far less than 3000
 writefile("/dir/filename",$string);  # convenient
 my $s=readfile("/dir/filename");     # also convenient
 print "yes!" if between($pi,3,4);
 print percentile(0.05, @numbers);
 my @even = range(1000,2000,2);       # even numbers between 1000 and 2000
 my @odd  = range(1001,2001,2);
 my $dice = random(1,6);
 my $color = random(['red','green','blue','yellow','orange']);
 ...and so on.
About 120 more or less useful perl subroutines lumped together and exported into your namespace.
Subs created and collected since the mid-90s.
sudo cpan Acme::Tools
or maybe better:
sudo apt-get install cpanminus make # for Ubuntu 12.04 sudo cpanm Acme::Tools
Almost every sub, about 90 of them.
Beware of namespace pollution. But what did you expect from an Acme module?
See "code2num"
num2code() converts numbers (integers) from the normal decimal system to some arbitrary other number system. That can be binary (2), oct (8), hex (16) or others.
Example:
print num2code(255,2,"0123456789ABCDEF"); # prints FF print num2code(14,2,"0123456789ABCDEF"); # prints 0E
...because 255 is converted to hex FF (base 16, which is length("0123456789ABCDEF")), giving 2 digits from 0-9 or characters A-F. And 14 is converted to 0E, with a leading 0 because of the second argument 2.
Example:
print num2code(1234,16,"01")
Prints the 16 binary digits 0000010011010010 which is 1234 converted to binary zeros and ones.
To convert back:
print code2num("0000010011010010","01"); #prints 1234
num2code() can be used to compress numeric IDs to something shorter:
$chars='0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz-_'; $code=num2code("241274432",5,$chars);
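For readers outside Perl, here is an illustrative Python sketch of the same base-conversion idea. The function names mirror the module's, but this is not the module's code:

```python
def num2code(num, length, digits):
    # Repeatedly divide by the base (the length of the digit alphabet),
    # mapping each remainder to a digit, then left-pad with the zero-digit.
    base = len(digits)
    out = ""
    while num > 0:
        num, r = divmod(num, base)
        out = digits[r] + out
    return out.rjust(length, digits[0])

def code2num(code, digits):
    # The inverse: positional notation over an arbitrary digit alphabet.
    base = len(digits)
    num = 0
    for ch in code:
        num = num * base + digits.index(ch)
    return num

print(num2code(255, 2, "0123456789ABCDEF"))  # FF
print(num2code(14, 2, "0123456789ABCDEF"))   # 0E
print(num2code(1234, 16, "01"))              # 0000010011010010
print(code2num("0000010011010010", "01"))    # 1234
```

The ID-shortening trick above is the same idea with a 64-character alphabet, giving roughly 6 bits per character.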
"The Euclidean algorithm (also called Euclid's algorithm) is an algorithm to determine the greatest common divisor (gcd) of two integers. It is one of the oldest algorithms known, since it appeared in the classic Euclid's Elements around 300 BC. The algorithm does not require factoring."
Input: two or more positive numbers (integers, without decimals that is)
Output: an integer
Example:
 print gcd(12, 8);      # prints 4
 print gcd(12, 8, 16);  # prints 4, since 4 is the largest integer that divides all three

(gcd() accepts two or more numbers.)
Implementation:
sub gcd { my($a,$b,@r)=@_; @r ? gcd($a,gcd($b,@r)) : $b==0 ? $a : gcd($b, $a % $b) }
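The recursive Perl sub above translates almost line for line; here is an illustrative Python sketch (not the module's code):

```python
def gcd(a, b, *rest):
    # Same shape as the Perl sub: fold extra arguments in recursively,
    # then apply Euclid's algorithm on the final pair.
    if rest:
        return gcd(a, gcd(b, *rest))
    return a if b == 0 else gcd(b, a % b)

print(gcd(12, 8))        # 4
print(gcd(45, 120, 75))  # 15
```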
lcm() finds the Least Common Multiple of two or more numbers (integers).
Input: two or more positive numbers (integers)
Output: an integer number
Example:
2/21 + 1/6 = 4/42 + 7/42 = 11/42
Where 42 = lcm(21,6).
Example:
print lcm(45,120,75); # prints 1800
Because the factors are:
45 = 2^0 * 3^2 * 5^1 120 = 2^3 * 3^1 * 5^1 75 = 2^0 * 3^1 * 5^2
Take the biggest power of each prime number (2, 3 and 5 here), which is 2^3, 3^2 and 5^2. Multiplied together this is 8 * 9 * 25 = 1800.
 sub lcm { my($a,$b,@r)=@_; @r ? lcm($a,lcm($b,@r)) : $a*$b/gcd($a,$b) }
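The lcm() recursion can be sketched the same way in Python, leaning on the identity lcm(a,b) = a*b / gcd(a,b) (illustrative only, not the module's code):

```python
from math import gcd

def lcm(a, b, *rest):
    # Fold extra arguments in recursively; pairwise lcm via the gcd identity.
    if rest:
        return lcm(a, lcm(b, *rest))
    return a * b // gcd(a, b)

print(lcm(45, 120, 75))  # 1800, matching the factor walkthrough above
print(lcm(21, 6))        # 42, the common denominator in the fraction example
```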
resolve() finds a root of a function numerically, for example:

 resolve(sub{ my $x=shift; $x**2 - 4*$x - 21 }, undef, 1.9);

conv() converts between units of measurement: lengths (mm, cm, foot, ...), volumes (liter, gallon_uk, pint_us, tablespoon, teaspoon, ...), energy (erg, joule, kWh, therm, ...), speeds (m/s, km/h, mph, knots, mach, ...) and number systems (hex, bin, oct, roman, dozen, gross, great_gross, ...). Currency rates are automatically updated from the net if more than 24 hours have passed since the last update (on linux/cygwin).

bytes_readable() turns a byte count into a human-readable string such as "1000 B", and roman numbers are supported as well ("MCMLXXI" is 1971).
Input: the four decimal numbers latitude1, longitude1, latitude2, longitude2

Output: the air distance in meters from point1 to point2.

Helpers such as bigscale() are just convenient shorthands for using Math::BigInt->new(), Math::BigFloat->new() and Math::BigRat->new(), preferably with GMP for faster calculations.
Returns input string as uppercase.
Can be used if perl's built-in uc() for some reason does not convert æøå and other letters outside a-z.

 æøåäëïöüÿâêîôûãõàèìòùáéíóúýñð => ÆØÅÄËÏÖÜŸÂÊÎÔÛÃÕÀÈÌÒÙÁÉÍÓÚÝÑÐ
See also
perldoc -f uc and
perldoc -f lc
Same as "upper", only lower...
"Pads" a string to the given length by adding one or more spaces at the end (right, rpad) or at the start (left, lpad).
Input: A string (i.e. a name). And an optional x (see example 2)
Output: A list of this string's trigrams (see examples)
Example 1:
print join ", ", trigram("Kjetil Skotheim");
Prints:
Kje, jet, eti, til, il , l S, Sk, Sko, kot, oth, the, hei, eim
Example 2:
Default is 3, but here 4 is used instead in the second optional input argument:
print join ", ", trigram("Kjetil Skotheim", 4);
And this prints:
Kjet, jeti, etil, til , il S, l Sk, Sko, Skot, koth, othe, thei, heim
trigram() was created for "fuzzy" name searching. If you have a database of many names, addresses, phone numbers, customer numbers etc., you can use trigram() to search among all of those at the same time; useful when the search form has just one general input field.

Store all of the trigrams of the trigram-indexed input fields coupled with each person. When you search, take each trigram of your query string and add up the list of people that have that trigram. The search result should then be sorted so that the persons with the most hits are listed first. Both the query strings and the indexed database fields should have a space added first and last before trigram()-ing them.

This search algorithm is not included here yet...
trigram() should perhaps have been named ngram for obvious reasons.
Same as trigram (except there is no default width). Works also with arrayref instead of string.
Example:
sliding( ["Reven","rasker","over","isen"], 2 )
Result:
( ['Reven','rasker'], ['rasker','over'], ['over','isen'] )

A related helper splits a string into its individual characters, so that "Tittentei" becomes ('T','i','t','t','e','n','t','e','i').
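Both helpers are plain sliding windows; here is an illustrative Python sketch (character n-grams and a window over a list; not the module's code):

```python
def trigram(s, width=3):
    # All contiguous substrings of the given width (character n-grams).
    return [s[i:i + width] for i in range(len(s) - width + 1)]

def sliding(items, width):
    # The same idea over a list: every window of the given width.
    return [items[i:i + width] for i in range(len(items) - width + 1)]

print(", ".join(trigram("Kjetil Skotheim")))
print(sliding(["Reven", "rasker", "over", "isen"], 2))
```

As the docs note, "trigram" is just the width-3 special case of an n-gram.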
Returns the largest number in a list. Undef is ignored.
@heights=(123,90,134,undef,132); $highest = max(@heights); # 134
Just as "min", except for strings.

 print min(2,7,10);        # 2
 print mins("2","7","10"); # 10
 print mins(2,7,10);       # 10

Just as "max", except for strings.

 print max(2,7,10);        # 10
 print maxs("2","7","10"); # 7
 print maxs(2,7,10);       # 7
Adds one or more element to a numerically sorted array and keeps it sorted.
 pushsort @a, 13;                      # this...
 push @a, 13; @a = sort {$a<=>$b} @a;  # ...is the same as this, but the former is faster if @a is large

binsearch() searches a numerically sorted array for an element and returns its position. An optional argument decides whether a result that is not found should return undef or a fractional position, and an optional code-ref argument alters the way binsearch compares two elements.

binsearchstr() is the same as binsearch() except that the array is sorted alphanumerically (cmp) instead of numerically (<=>) and the searched element is a string, not a number. See "binsearch".
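Python's standard bisect module offers the same two operations; this sketch shows the keep-sorted insert and the binary-search lookup (illustrative, not the module's code):

```python
import bisect

a = [1, 4, 6, 7, 9]

# pushsort-style insert: keep the list sorted as new elements arrive.
bisect.insort(a, 5)
print(a)  # [1, 4, 5, 6, 7, 9]

# binsearch-style lookup: position of an element in the sorted list.
i = bisect.bisect_left(a, 7)
found = i < len(a) and a[i] == 7
print(i, found)  # 4 True
```

As with pushsort, inserting into an already-sorted list is much cheaper than re-sorting the whole list after every push.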
Return true if the input array is alphanumerically sorted.
 @a=(1..10);      print "array is sorted" if sortedstr @a; # false
 @a=("01".."10"); print "array is sorted" if sortedstr @a; # true
Returns the sum of a list of numbers. Undef is ignored.
 print sum(1,3,undef,8); # 12
 print sum(1..1000);     # 500500
 print sum(undef);       # undef
Returns the geometric average (a.k.a geometric mean) of a list of numbers.
print geomavg(10,100,1000,10000,100000); # 1000 print 0+ (10*100*1000*10000*100000) ** (1/5); # 1000 same thing print exp(avg(map log($_),10,100,1000,10000,100000)); # 1000 same thing, this is how geomavg() works internally
Returns the harmonic average (a.k.a. harmonic mean) of a list of numbers.
print harmonicavg(10,11,12); # 3 / ( 1/10 + 1/11 + 1/12) = 10.939226519337
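Both means are easy to sketch in Python for comparison: geomavg via the mean of the logs, exactly as described above, and harmonicavg as n over the sum of reciprocals (illustrative, not the module's code):

```python
from math import exp, log

def avg(nums):
    return sum(nums) / len(nums)

def geomavg(nums):
    # Geometric mean: exp of the arithmetic mean of the logs.
    return exp(avg([log(x) for x in nums]))

def harmonicavg(nums):
    # Harmonic mean: n divided by the sum of the reciprocals.
    return len(nums) / sum(1 / x for x in nums)

print(round(geomavg([10, 100, 1000, 10000, 100000])))  # 1000
print(harmonicavg([10, 11, 12]))                       # 10.9392265193...
```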
variance = ( sum (x[i]-Average)**2)/(n-1)
Standard_Deviation = sqrt(variance)
Standard deviation (stddev) is a measurement of the width of a normal distribution where one stddev on each side of the mean covers 68% and two stddevs 95%. Normal distributions are sometimes called Gauss curves or Bell shapes.
Returns the median value of a list of numbers. The list does not have to be sorted.
Example 1, list having an odd number of numbers:
print median(1, 100, 101); # 100
100 is the middlemost number after sorting.
Example 2, an even number of numbers:
print median(1005, 100, 101, 99); # 100.5
100.5 is the average of the two middlemost numbers.
Returns one or more percentiles of a list of numbers.
Percentile 50 is the same as the median, percentile 25 is the first quartile, 75 is the third quartile.
Input:
First argument is your wanted percentile, or a reference to a list of percentiles you want from the dataset.
If the first argument to percentile() is a scalar, this percentile is returned.
If the first argument is a reference to an array, then all those percentiles are returned as an array.
Second, third, fourth and so on argument are the numbers from which you want to find the percentile(s).
Examples:
This finds the 50-percentile (the median) to the four numbers 1, 2, 3 and 4:
print "Median = " . percentile(50, 1,2,3,4); # 2.5
This:
@data=(11, 5, 3, 5, 7, 3, 1, 17, 4, 2, 6, 4, 12, 9, 0, 5); @p = map percentile($_,@data), (25, 50, 75);
Is the same as this:
@p = percentile([25, 50, 75], @data);
But the latter is faster, especially if @data is large since it sorts the numbers only once internally.
Example:
Data: 1, 4, 6, 7, 8, 9, 22, 24, 39, 49, 555, 992
Average (or mean) is 143
Median is 15.5 (the average of 9 and 22, which both lie equally in the middle)

The 25-percentile is 6.25, which is between 6 and 7, but closer to 6.

The 75-percentile is 46.5, which is between 39 and 49, but closer to 49.
Linear interpolation is used to find the 25- and 75-percentile and any other x-percentile which doesn't fall exactly on one of the numbers in the set.
Interpolation:
As you saw, 6.25 is closer to 6 than to 7 because 25% along the set of the twelve numbers is closer to the third number (6) than to the fourth (7). The median (50-percentile) is also really interpolated, but it is always in the middle of the two center numbers if there is an even count of numbers.
However, there is two methods of interpolation:
Example, we have only three numbers: 5, 6 and 7.
Method 1: The most common is to say that 5 and 7 lie on the 25- and 75-percentile. This method is used in Acme::Tools.

Method 2: In Oracle databases the least and greatest numbers always lie on the 0- and 100-percentile.

The larger the data sets, the less difference there is between the two methods.
Extrapolation:
In method one, when you want a percentile outside of any possible interpolation, you use the smallest and second smallest to extrapolate from. For instance in the data set
5, 6, 7, if you want an x-percentile of x < 25, this is below 5.
If you feel tempted to go below 0 or above 100,
percentile() will die (or croak to be more precise)
Another method could be to use "soft curves" instead of "straight lines" in interpolation. Maybe B-splines or Bezier curves. This is not used here.
For large sets of data, Hoare's selection algorithm would be faster than the simple straightforward implementation used in percentile() here, since Hoare's algorithm does not sort all the numbers fully.
Differences between the two main methods described above:
 Data: 1, 4, 6, 7, 8, 9, 22, 24, 39, 49, 555, 992

 Percentile    Method 1                    Method 2
               (Acme::Tools::percentile    (Oracle)
                and others)
 -----------   -------------------------   ---------
 0             -2                          1
 1             -1.61                       1.33
 25            6.25                        6.75
 50 (median)   15.5                        15.5
 75            46.5                        41.5
 99            1372.19                     943.93
 100           1429                        992
Found like this:
perl -MAcme::Tools -le 'print for percentile([0,1,25,50,75,99,100], 1,4,6,7,8,9,22,24,39,49,555,992)'
And like this in Oracle-databases:
 create table tmp (n number);
 insert into tmp values (1);
 insert into tmp values (4);
 insert into tmp values (6);
 insert into tmp values (7);
 insert into tmp values (8);
 insert into tmp values (9);
 insert into tmp values (22);
 insert into tmp values (24);
 insert into tmp values (39);
 insert into tmp values (49);
 insert into tmp values (555);
 insert into tmp values (992);
 select percentile_cont(0.00) within group(order by n) per0,
        percentile_cont(0.01) within group(order by n) per1,
        percentile_cont(0.25) within group(order by n) per25,
        percentile_cont(0.50) within group(order by n) per50,
        percentile_cont(0.75) within group(order by n) per75,
        percentile_cont(0.99) within group(order by n) per99,
        percentile_cont(1.00) within group(order by n) per100
 from tmp;
(Oracle also provides a similar function:
percentile_disc where disc is short for discrete, meaning no interpolation is taking place. Instead the closest number from the data set is picked.)
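A Python sketch of the method-1 behavior described above: linear interpolation between neighbours, and linear extrapolation beyond the smallest and largest values. The rank formula p*(n+1)/100 is an assumption chosen to reproduce the method-1 column of the table; this is not the module's code:

```python
def percentile(p, nums):
    # Method-1 percentile: for n=3, the smallest and largest values sit on
    # the 25- and 75-percentile, i.e. rank = p*(n+1)/100 (1-based).
    s = sorted(nums)
    n = len(s)
    idx = p * (n + 1) / 100 - 1            # fractional 0-based position
    if idx < 0:                            # extrapolate below the minimum
        return s[0] + idx * (s[1] - s[0])
    if idx > n - 1:                        # extrapolate above the maximum
        return s[-1] + (idx - (n - 1)) * (s[-1] - s[-2])
    lo, frac = int(idx), idx - int(idx)
    return s[lo] + frac * (s[min(lo + 1, n - 1)] - s[lo])

data = [1, 4, 6, 7, 8, 9, 22, 24, 39, 49, 555, 992]
for p in (0, 1, 25, 50, 75, 99, 100):
    print(p, percentile(p, data))
```

Run against the twelve-number data set above, this reproduces the Method 1 column: -2, -1.61, 6.25, 15.5, 46.5, 1372.19 and 1429.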
Mixes (shuffles) an array into random order: in-place if given an array reference, returning the mixed list if given an array.
mix() could also have been named
shuffle(), as in shuffling a deck of cards.
Example:
This:
print mix("a".."z"),"\n" for 1..3;
...could write something like:
trgoykzfqsduphlbcmxejivnwa qycatilmpgxbhrdezfwsovujkn ytogrjialbewcpvndhkxfzqsmu
Input:
This:
@a=mix(@a) is the same as:
mix(\@a).
Note: if the input array coincidentally has exactly one element some of the time (but more at other times), and that element is an array-ref, mix() will treat it as the in-place form and you will probably not get the expected result.
To check distribution:
perl -MAcme::Tools -le 'print mix("a".."z") for 1..26000'|cut -c1|sort|uniq -c|sort -n
The letters a-z should occur around 1000 times each.
Shuffles a deck of cards: (s=spades, h=hearts, c=clubs, d=diamonds)
perl -MAcme::Tools -le '@cards=map join("",@$_),cart([qw/s h c d/],[2..10,qw/J Q K A/]); print join " ",mix(@cards)'
(Uses "cart", which is not a typo, see further down here)
Note:
List::Util::shuffle() is approximately four times faster. Both respect the Perl built-in srand().
The no value function (or null value function) nvl() takes two or more arguments. (Oracle's nvl function takes just two.)
Returns the value of the first input argument with length() > 0.
Return undef if there is no such input argument.
In perl 5.10 and perl 6 this will most often be easier with the // operator, although nvl() and // treat empty strings "" differently: nvl() considers empty strings and undef the same.
Synonym for replace().
Return the string in the first input argument, but where pairs of search-replace strings (or rather regexes) has been run.
Works as
replace() in Oracle, or rather regexp_replace() in Oracle 10 and onward. Except that this
replace() accepts more than three arguments.
Examples:
 print replace("water","ater","ine");  # Turns water into wine
 print replace("water","ater");        # w
 print replace("water","at","eath");   # weather
 print replace("water","wa","ju",
               "te","ic",
               "x","y",                # No x is found, no y is returned
               'r$',"e");              # Turns water into juice. 'r$' says that the r it wants
                                       # to change should be the last letter. This reveals that
                                       # the second, fourth, sixth and so on argument is really
                                       # a regex, not a normal string. So use \ (or \\ inside "")
                                       # to protect the special characters of regexes. You
                                       # probably also should write qr/regexp/ instead of
                                       # 'regexp' if you make use of regexps here, just to make
                                       # it more clear that these are really regexps, not strings.
 print replace('JACK and JUE','J','BL'); # prints BLACK and BLUE
 print replace('JACK and JUE','J');      # prints ACK and UE
 print replace("abc","a","b","b","c");   # prints ccc (not bcc)
If the first argument is a reference to a scalar variable, that variable is changed "in place".
Example:
my $str="test"; replace(\$str,'e','ee','s','S'); print $str; # prints teeSt.
More examples:
 my $a=123;
 print decode($a, 123=>3, 214=>7, $a);              # 3, note that => is a synonym for , (comma) in perl
 print decode($a, 122=>3, 214=>7, $a);              # prints 123
 print decode($a, 123.0 =>3, 214=>7);               # prints 3
 print decode($a, '123.0'=>3, 214=>7);              # prints nothing (undef), no last-argument default value here
 print decode_num($a, 121=>3, 221=>7, '123.0','b'); # prints b
Sort of:
decode($string, %conversion, $default);
The last argument is returned as a default if none of the keys in the keys/value-pairs matched.
A more perl-ish and often faster way of doing the same:
{123=>3, 214=>7}->{$a} || $a # (beware of 0)
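The same Oracle-style lookup can be sketched in Python with a dict; the pairs become a mapping and an odd trailing argument becomes the default (illustrative only, not the module's code):

```python
def decode(value, *args):
    # Oracle-style decode: pairs of (match, result), with an optional
    # trailing default returned when nothing matches.
    pairs, default = args, None
    if len(args) % 2:                 # odd count: last item is the default
        pairs, default = args[:-1], args[-1]
    mapping = dict(zip(pairs[::2], pairs[1::2]))
    return mapping.get(value, default)

print(decode(123, 123, 3, 214, 7, 123))  # 3
print(decode(123, 122, 3, 214, 7, 123))  # 123 (the default)
print(decode(123, 122, 3, 214, 7))       # None (no default given)
```

Like the dict-literal one-liner above, this inherits Python's usual equality semantics rather than Perl's string/number distinction.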
Input: Three arguments.
Returns: Something true if the first argument is numerically between the two next.
Returns the values of the input list, sorted alphanumerically, but only one of each value. This is the same as "uniq", except uniq does not sort the returned list.
Example:
 print join(", ", distinct(4,9,3,4,"abc",3,"abc"));    # 3, 4, 9, abc
 print join(", ", distinct(4,9,30,4,"abc",30,"abc"));  # 30, 4, 9, abc   (note: alphanumeric sort)
Returns 1 (true) if first argument is in the list of the remaining arguments. Uses the perl-operator
eq.
Otherwise it returns 0 (false).
print in( 5, 1,2,3,4,6); # 0 print in( 4, 1,2,3,4,6); # 1 print in( 'a', 'A','B','C','aa'); # 0 print in( 'a', 'A','B','C','a'); # 1
I guess in perl 5.10 or perl 6 you would use the smartmatch operator ~~ instead.
Input: Two arrayrefs. (Two lists, that is)
Output: An array containing all elements from both input lists, but no element more than once even if it occurs twice or more in the input.
Example, prints 1,2,3,4:
perl -MAcme::Tools -le 'print join ",", union([1,2,3],[2,3,3,4,4])' # 1,2,3,4
Input: Two arrayrefs.
Output: An array containing all elements in the first input array but not in the second.
Example:
perl -MAcme::Tools -le 'print join " ", minus( ["five", "FIVE", 1, 2, 3.0, 4], [4, 3, "FIVE"] )'
Output is
five 1 2.
Input: Two arrayrefs
Output: An array containing all elements which exists in both input arrays.
Example:
perl -MAcme::Tools -le 'print join" ", intersect( ["five", 1, 2, 3.0, 4], [4, 2+1, "five"] )' # 4 3 five
Output:
4 3 five
Input: Two arrayrefs
Output: An array containing all elements member of just one of the input arrays (not both).
Example:
perl -MAcme::Tools -le ' print join " ", not_intersect( ["five", 1, 2, 3.0, 4], [4, 2+1, "five"] )'
The output is
1 2.
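The four set operations, together with an order-preserving uniq (see "uniq" below), can be sketched in Python. Note one difference flagged as an assumption: the Perl subs compare elements as strings, while this sketch uses ordinary Python equality:

```python
def uniq(items):
    # Keep the first occurrence of each element, preserving order.
    seen = set()
    return [x for x in items if not (x in seen or seen.add(x))]

def union(a, b):
    return uniq(a + b)

def minus(a, b):
    s = set(b)
    return uniq(x for x in a if x not in s)

def intersect(a, b):
    s = set(b)
    return uniq(x for x in a if x in s)

def not_intersect(a, b):
    # Elements in exactly one of the two lists (symmetric difference).
    return minus(a, b) + minus(b, a)

print(union([1, 2, 3], [2, 3, 3, 4, 4]))       # [1, 2, 3, 4]
print(minus(["five", 1, 2, 3, 4], [4, 3]))     # ['five', 1, 2]
print(intersect([1, 2, 3, 4], [4, 3, 5]))      # [3, 4]
print(not_intersect([1, 2, 3, 4], [4, 3, 5]))  # [1, 2, 5]
```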
Input: An array of strings (or numbers)
Output: The same list in the same order, except that elements which already occurred earlier in the list are removed.
Same as "distinct" but distinct sorts the returned list, uniq does not.
Example:
 my @t=(7,2,3,3,4,2,1,4,5,3,"x","xx","x",02,"07");
 print join " ", uniq @t;   # prints 7 2 3 4 1 5 x xx 07

In perl version 5.20+ subhashes (hash slices returning keys as well as values) are built in like this:

 %scandinavia = %population{'Norway','Sweden','Denmark'};
Input: a reference to a hash of hashes
Output: a hash like the input-hash, but matrix transposed (kind of). Think of it as if X and Y has swapped places.
%h = ( 1 => {a=>33,b=>55}, 2 => {a=>11,b=>22}, 3 => {a=>88,b=>99} ); print serialize({hashtrans(\%h)},'v');
Gives:
%v=( 'a'=>{'1'=>'33','2'=>'11','3'=>'88'}, 'b'=>{'1'=>'55','2'=>'22','3'=>'99'} );
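The same transposition on a dict of dicts, sketched in Python (illustrative only, not the module's code):

```python
def hashtrans(h):
    # Swap the inner and outer keys of a dict-of-dicts:
    # out[y][x] = h[x][y] for every inner pair.
    out = {}
    for x, inner in h.items():
        for y, val in inner.items():
            out.setdefault(y, {})[x] = val
    return out

h = {1: {"a": 33, "b": 55}, 2: {"a": 11, "b": 22}, 3: {"a": 88, "b": 99}}
print(hashtrans(h))
# {'a': {1: 33, 2: 11, 3: 88}, 'b': {1: 55, 2: 22, 3: 99}}
```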
"zipb64", "unzipb64", "zipbin", "unzipbin", "gzip", and "gunzip" compress and uncompress strings to save space on disk, in memory, in a database, or in network transfer. Trades time for space. (Beware of wormholes)
Compresses the input (text or binary) and returns a base64-encoded string of the compressed binary data. No known limit on input length, several MB has been tested, as long as you've got the RAM...
Input: One or two strings.
First argument: The string to be compressed.
Second argument is optional: A dictionary string.
Output: a base64-kodet string of the compressed input.
The use of an optional dictionary string will result in an even further compressed output if the dictionary string is somewhat similar to the string that is compressed (the data in the first argument).
If a number of relatively similar strings are to be compressed, i.e. automatic email responses to some action by a user, it will pay off to choose one of them as a dictionary string and store it as such. (You will also use the same dictionary string when decompressing using "unzipb64".)
The returned string is base64 encoded. That is, the output is 33% larger than it has to be. The advantage is that this string more easily can be stored in a database (without the hassles of CLOB/BLOB) or perhaps easier transfer in http POST requests (it still needs some url-encoding, normally). See "zipbin" and "unzipbin" for the same without base 64 encoding.
Example 1, normal compression without dictionary:
$txt = "Test av komprimering, hva skjer? " x 10; # ten copies of this norwegian string, $txt is now 330 bytes (or chars rather...) print length($txt)," bytes input!\n"; # prints 330 $zip = zipb64($txt); # compresses print length($zip)," bytes output!\n"; # prints 65 print $zip; # prints the base64 string ("noise") $output=unzipb64($zip); # decompresses print "Hurra\n" if $output eq $txt; # prints Hurra if everything went well print length($output),"\n"; # prints 330
Example 2, same compression, now with dictionary:
$txt = "Test av komprimering, hva skjer? " x 10; # Same original string as above $dict = "Testing av kompresjon, hva vil skje?"; # dictionary with certain similarities # of the text to be compressed $zip2 = zipb64($txt,$dict); # compressing with $dict as dictionary print length($zip2)," bytes output!\n"; # prints 49, which is less than 65 in ex. 1 above $output=unzipb64($zip2,$dict); # uses $dict in the decompressions too print "Hurra\n" if $output eq $txt; # prints Hurra if everything went well
Example 3, dictionary = string to be compressed: (out of curiosity)
$txt = "Test av komprimering, hva skjer? " x 10; # Same original string as above $zip3 = zipb64($txt,$txt); # hmm print length($zip3)," bytes output!\n"; # prints 25 print "Hurra\n" if unzipb64($zip3,$txt) eq $txt; # hipp hipp ...
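The dictionary trick above is the preset-dictionary feature of zlib itself, so it can be sketched with Python's zlib module and its zdict parameter. This is not the module's code, and the exact byte counts will differ from the numbers above:

```python
import zlib

txt = ("Test av komprimering, hva skjer? " * 10).encode()
dictionary = "Testing av kompresjon, hva vil skje?".encode()

plain = zlib.compress(txt)             # no preset dictionary

co = zlib.compressobj(zdict=dictionary)
with_dict = co.compress(txt) + co.flush()

# Decompression must be given the same dictionary.
de = zlib.decompressobj(zdict=dictionary)
roundtrip = de.decompress(with_dict) + de.flush()

print(len(txt), len(plain), len(with_dict))
print(roundtrip == txt)  # True
```

As in example 2 above, the dictionary helps most when the data resembles it; losing the dictionary means losing the ability to decompress.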
zipb64() and zipbin() are really just wrappers around Compress::Zlib and its deflate()/inflate() & co.
zipbin() does the same as
zipb64() except that zipbin() does not base64 encode the result. Returns binary data.
See "zip" for documentation.
Input:
First argument: A string made by "zipb64"
Second argument: (optional) a dictionary string which where used in "zipb64".
Output: The original string (be it text or binary).
unzipbin() does the same as "unzip" except that
unzipbin() wants a pure binary compressed string as input, not base64.
See "unzipb64" for documentation.
Input: A string you want to compress. Text or binary.
Output: The binary compressed representation of that input string.
gzip() is really the same as
Compress:Zlib::memGzip() except that
gzip() just returns the input-string if for some reason Compress::Zlib could not be
required. Not installed or not found. (Compress::Zlib is a built in module in newer perl versions).
gzip() uses the same compression algorithm as the well known GNU program gzip found in most unix/linux/cygwin distros. Except
gzip() does this in-memory. (Both using the C-library
zlib).
Input: A binary compressed string. I.e. something returned from
gzip() earlier or read from a
.gz file.
Output: The original larger non-compressed string. Text or binary.
bzip2() and
bunzip2() works just as
gzip() and
gunzip(), but use another compression algorithm. This is usually better but slower than the
gzip-algorithm. Especially in the compression. Decompression speed is less different.
See also
man bzip2,
man bunzip2 and Compress::Bzip2
Decompressed something compressed by bzip2() or the data from a
.bz2 file. See "bzip2".
Input: an IP-number
Output: either a hostname of the form machine.sld.tld, or an empty string if the DNS lookup didn't find anything.
Example:
perl -MAcme::Tools -le 'print ipaddr("129.240.8.200")' # prints
Uses perls
gethostbyaddr internally.
ipaddr() memoizes the results internally (using the %Acme::Tools::IPADDR_memo hash) so only the first lookup on a particular IP number might take some time.

Some DNS lookups can take several seconds, though most are done in a fraction of a second. Due to this slowness, medium to high traffic web servers should probably turn off hostname lookups in their logs and just log IP numbers.

The environment variables QUERY_STRING, REQUEST_METHOD and CONTENT_LENGTH are typically set by a web server following the CGI standard (which Apache and most other servers can do) or by Apache in mod_perl, and are used as the missing input argument. Although you are probably better off using CGI, or
$R->args() or
$R->content() in mod_perl.
Output:
webparams() returns a hash of the key/value pairs in the input argument. Url-decoded.
If an input string is given, webparams() parses that instead of the environment. For example, a CGI script (made executable with chmod +x /.../cgi-bin/script) can use webparams() to read a name parameter from the URL and print My name is HAL, or My name is Bond, James Bond, to the web page.
Input: a string
Output: the same string URL encoded so it can be sent in URLs or POST requests.
In URLs (web addresses) certain characters are illegal. For instance space and newline. And certain other chars have special meaning, such as
+,
%,
=,
?,
&.
These illegal and special chars need to be encoded to be sent in URLs. This is done by sending them as % followed by two hex digits. All chars can be URL encoded this way, but it's necessary only for some.
Example:
$search="Østdal, Åge"; my $url="" . urlenc($search); print $url;
Prints
Example, this returns ' ø', that is a space and ø:

 urldec('+%C3%B8')
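The same encoding and decoding, sketched with Python's standard library; quote_plus and unquote_plus handle the space-as-+ convention shown above (illustrative, not the module's code):

```python
from urllib.parse import quote_plus, unquote_plus

s = "Østdal, Åge"
enc = quote_plus(s)          # Ø and Å become their UTF-8 %-escapes,
print(enc)                   # the comma becomes %2C, the space becomes +
print(unquote_plus(enc))     # Østdal, Åge
```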
ht2t is short for html-table to table.
This sub extracts an html <table> and returns its <tr>s and <td>s as an array of arrayrefs, and strips away any html inside the <td>s as well.
my @table = ht2t($html,'some string occurring before the <table> you want');
Input: One or two arguments.
First argument: the html where a
<table> is to be found and converted.
Second argument: (optional) If the html contains more than one
<table>, and you do not want the first one, applying a second argument is a way of telling
ht2t which to capture: the one with this word or string occurring before it.
Output: An array of arrayrefs.
ht2t() is a quick and dirty way of scraping (or harvesting as it is also called) data from a web page. Look to HTML::Parse to do this more accurately.
Example:
 use Acme::Tools;
 use LWP::Simple;
Justification:
Perl needs three or four operations to make a file out of a string:
open my $FILE, '>', $filename or die $!; print $FILE $text; close($FILE);
This is way simpler:
writefile($filename,$text);
Sub writefile opens the file in binary mode (binmode()) and has two usage modes:
Input: Two arguments
First argument is the filename. If the file exists, it's overwritten. If the file cannot be opened for writing, a die (a croak really) happens.
Second input argument is either a string or a reference to an array of strings; in the latter case a \n is automatically appended to each element.
Alternatively, you can write several files at once.
Example, this:
writefile('file1.txt','The text....tjo'); writefile('file2.txt','The text....hip'); writefile('file3.txt','The text....and hop');
...is the same as this:
writefile([ ['file1.txt','The text....tjo'], ['file2.txt','The text....hip'], ['file3.txt','The text....and hop'], ]);
Output: Nothing (for the time being).
die()s (croak($!), really) if something goes wrong.
Just as with "writefile" you can read in a whole file in one operation with readfile(). Instead of:
open my $FILE,'<', $filename or die $!; my $data = join"",<$FILE>; close($FILE);
This is simpler:
my $data = readfile($filename);
More examples:
Reading the content of the file to a scalar variable: (Any content in
$data will be overwritten)
my $data; readfile('filename.txt',\$data);
Reading the lines of a file into an array:
my @lines; readfile('filnavn.txt',\@lines); for(@lines){ ... }
Note: Chomp is done on each line. That is, any newlines (\n) will be removed. If @lines is non-empty, its existing content will be lost.
Sub readfile is context aware. If an array is expected it returns an array of the lines without a trailing \n. The last example can be rewritten:
for(readfile('filnavn.txt')){ ... }
With two input arguments, nothing (undef) is returned from readfile().
Does chmod + utime + chown on one or more files.

Returns the number of files on which those operations were successful.

Mode, uid, gid, atime and mtime are set from the array ref in the first argument, which references an array exactly like one returned from perl's internal stat($filename) function.
Example:
my @stat=stat($filenameA); chall( \@stat, $filenameB, $filenameC, ... );
Input: One or two arguments.
Works like perl's mkdir() except that makedir() will create necessary parent directories if they don't exist.
First input argument: A directory name (absolute, starting with /, or relative).
Second input argument: (optional) permission bits, using the normal 0777^umask() as the default if no second input argument is provided.
Example:
makedir("dirB/dirC")
...will create directory dirB if it does not already exist, to be able to create dirC inside dirB.
Returns true on success, otherwise false.
makedir() memoizes directories it has checked for existence before (trading memory for speed). Thus directories removed while the script is running are not discovered by makedir().
See also perldoc -f mkdir and man umask.
Input: a filename.
Output: a string of 32 hexadecimal chars from 0-9 or a-f.
Example, the md5sum gnu/linux command without options could be implemented like this:
#!/usr/bin/perl use Acme::Tools; print eval{ md5sum($_)." $_\n" } || $@ for @ARGV;
This sub requires Digest::MD5, which has been a core perl module since version 5.?.?. It does not slurp the files or spawn new processes.
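The same chunked (non-slurping) approach looks like this in Python. A sketch for comparison, not the module's code:

```python
import hashlib

def md5sum(filename):
    # Read the file in chunks rather than slurping it,
    # and return the 32 hex-digit md5 of its content.
    h = hashlib.md5()
    with open(filename, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()
```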
(The source here contains a large commented-out sub timestr, documented in Norwegian. It converts an epoch number or a YYYYMMDD-HH24:MI:SS time string into a formatted date/clock string, using Oracle-style format codes such as YYYY, MM, DD, HH24, HH12, MI, SS, week number, and month and day names in both Norwegian and English, and it ends with a list of remaining TODOs. Being commented out, it is not part of the module's interface.)
Input: A year (a four digit number)
Output: array of two numbers: day and month of Easter Sunday that year. Month 3 means March and 4 means April.
sub easter { use integer;my$Y=shift;my$C=$Y/100;my$L=($C-$C/4-($C-($C-17)/25)/3+$Y%19*19+15)%30; (($L-=$L>28||($L>27?1-(21-$Y%19)/11:0))-=($Y+$Y/4+$L+2-$C+$C/4)%7)<4?($L+28,3):($L-3,4) }
...is a "golfed" version of Oudin's algorithm (1940).

Valid for any Gregorian year. Dates repeat themselves after 70499183 lunations = 2081882250 days = ca 5699845 years.
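Ungolfed, the same method can be written out step by step. This is a Python transcription of Oudin's algorithm for comparison:

```python
def easter(year):
    # Oudin's algorithm (1940) for Easter Sunday in the Gregorian calendar.
    g = year % 19                      # golden number minus one
    c = year // 100
    h = (c - c // 4 - (8 * c + 13) // 25 + 19 * g + 15) % 30
    i = h - (h // 28) * (1 - (29 // (h + 1)) * ((21 - g) // 11))
    j = (year + year // 4 + i + 2 - c + c // 4) % 7
    l = i - j
    month = 3 + (l + 40) // 44         # 3 = March, 4 = April
    day = l + 28 - 31 * (month // 4)
    return day, month
```

easter(2024) returns (31, 3), that is March 31.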
No input arguments.

Returns the same number as perl's time() except with decimals (fractions of a second; _fp as in floating point number).
print time_fp(),"\n"; print time(),"\n";
Could write:
1116776232.38632
...if that is the time now.
Or just:
1116776232
...from perl's internal time() if Time::HiRes isn't installed and available.
sleep_fp() works like the built-in sleep(), but accepts fractional seconds:

sleep_fp(0.02);
Estimated time of arrival (ETA).
for(@files){
  ...do work on file...
  my $eta = eta( ++$i, 0+@files ); # files done so far, total number of files
  print "" . localtime($eta);
}

..DOC MISSING..
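The doc is missing, but the presumable semantics can be sketched: given how many items are done and the total, extrapolate the finish time from the elapsed time. This is a hypothetical reimplementation; the start argument is assumed here and taken as the time.time() recorded before the loop:

```python
import time

def eta(done, total, start):
    # remaining time = elapsed * (items left / items done),
    # so the estimated time of arrival is now + remaining.
    elapsed = time.time() - start
    return time.time() + elapsed * (total - done) / done
```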
...NOT YET.
Input: A year. A four digit number.
Output: True (1) or false (0) depending on whether the year is a leap year or not. (Uses the current calendar even for periods before it was in use.)
print join(", ",grep leapyear($_), 1900..2014)."\n";
Prints: (note, 1900 is not a leap year, but 2000 is)
1904, 1908, 1912, 1916, 1920, 1924, 1928, 1932, 1936, 1940, 1944, 1948, 1952, 1956, 1960, 1964, 1968, 1972, 1976, 1980, 1984, 1988, 1992, 1996, 2000, 2004, 2008, 2012
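The rule behind it, as a small Python sketch:

```python
def leapyear(year):
    # Every 4th year is a leap year, except whole centuries,
    # unless the year is divisible by 400 (so 1900 is not, 2000 is).
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
```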
Input: An array of values to be tested against for existence.
Output: A reference to a regular expression, that is a qr//. The regex sets $1 if it matches.
Example:
my @list=qw/ABc XY DEF DEFG XYZ/; my $filter=qrlist("ABC","DEF","XY."); # makes the regex qr/^(\QABC\E|\QDEF\E|\QXY.\E)$/ my @filtered= grep { $_ =~ $filter } @list; # returns DEF only; XYZ does not match because the . char is taken literally
Note: Filtering with hash lookups is WAY faster.
Source:
sub qrlist (@) { my $str=join"|",map quotemeta, @_; qr/^($str)$/ }
Perhaps easier to use than Term::ANSIColor ?
Input: One argument. A string where the char ¤ has special meaning and is replaced by color codings depending on the letter following the ¤.
Output: The same string, but with ¤letter replaced by ANSI color codes respected by many types of terminal windows (xterm, telnet, ssh, rlog, vt100, cygwin, rxvt and such).
Codes for ansicolor():
¤r red ¤g green ¤b blue ¤y yellow ¤m magenta ¤B bold ¤u underline ¤c clear ¤¤ reset, quits and returns to default text color.
Example:
print ansicolor("This is maybe ¤ggreen¤¤?");
Prints This is maybe green? where the word green is shown in green.
If Term::ANSIColor is not installed or not found, the input string is returned with every ¤ and its following code letter removed. (That is: ansicolor is safe to use even if Term::ANSIColor is not installed; you just don't get the colors.)
See also Term::ANSIColor.
Checks if a Credit Card Number (CCN) has correct control digits according to the LUHN algorithm from 1960. This method of control digits is used by MasterCard, Visa, American Express, Discover, Diners Club / Carte Blanche, JCB and others.
Input:
A credit card number. Can contain non-digits, but they are removed internally before checking.
Output:
Something true or false.
Or more accurately:
Returns
undef (false) if the input argument is missing digits.

Returns 0 (zero, which is false) if the digits are not correct according to the LUHN algorithm.
Returns 1 or the name of a credit card company (true either way) if the last digit is an ok control digit for this ccn.
The name of the credit card company is returned like this (without the ' characters):

Returns (wo '')                 Starts on              Number of digits
------------------------------  ---------------------  ----------------
'MasterCard'                    51-55                  16
'Visa'                          4                      13 or 16
'American Express'              34 or 37               15
'Discover'                      6011                   16
'Diners Club / Carte Blanche'   300-305, 36 or 38      14
'JCB'                           3                      16
'JCB'                           2131 or 1800           15
And should perhaps have had:
'enRoute'                       2014 or 2149           15
...but that card uses either another control algorithm or no control digits at all. So enRoute is never returned here.
If the control digits are valid, but the input does not match anything in the column starts on, 1 is returned.
(This is also the same control digit mechanism used in Norwegian KID numbers on payment bills)
The first digit in a credit card number is supposed to tell what "industry" the card is meant for:
MII Digit Value   Issuer Category
---------------   ----------------------------------------------------
0                 ISO/TC 68 and other industry assignments
1                 Airlines
2                 Airlines and other industry assignments
3                 Travel and entertainment
4                 Banking and financial
5                 Banking and financial
6                 Merchandizing and banking
7                 Petroleum
8                 Telecommunications and other industry assignments
9                 National assignment
...although this has no meaning to Acme::Tools::ccn_ok().
The first six digits are the Issuer Identifier, that is the bank (probably). The rest is the account number, except the last digit, which is the control digit. The max length of credit card numbers is 19 digits.
Checks if a Norwegian KID number has an ok control digit.
To check if a customer has typed the number correctly.
This uses the LUHN algorithm (also known as mod-10) from 1960 which is also used internationally in control digits for credit card numbers, and Canadian social security ID numbers as well.
The algorithm is described in Phrack (47-8), a long-time hacker online publication.
Input: A KID number. Must consist of digits 0-9 only, otherwise a die (croak) happens.
Output:
- Returns undef if the input argument is missing.
- Returns 0 if the control digit (the last digit) does not satisfy the LUHN/mod-10 algorithm.
- Returns 1 if ok
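The shared LUHN/mod-10 check behind both ccn_ok() and kid_ok() can be sketched in Python. Illustrative only, not the module's code:

```python
def luhn_ok(number):
    # Strip non-digits, then, from the right, double every second digit,
    # subtract 9 from results above 9, and sum everything.
    digits = [int(ch) for ch in str(number) if ch.isdigit()]
    if not digits:
        return None                 # no digits: nothing to check
    total = 0
    for pos, d in enumerate(reversed(digits)):
        if pos % 2 == 1:            # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0          # valid if the sum ends in 0
```

luhn_ok("4111 1111 1111 1111") is True for that well-known Visa test number.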
Input:

One, two or three arguments.
How many ways (permutations) can six people be placed around a table:
If one person: one If two persons: two (they can swap places) If three persons: six If four persons: 24 If five persons: 120 If six persons: 720
The formula is x! where the postfix unary operator !, also known as factorial, is defined as x! = x * (x-1) * (x-2) * ... * 1. Example: 5! = 5 * 4 * 3 * 2 * 1 = 120.

Run this to see the 100 first n!:
perl -MAcme::Tools -le'$i=big(1);print "$_! = ",$i*=$_ for 1..100'
1! = 1
2! = 2
3! = 6
4! = 24
5! = 120
6! = 720
7! = 5040
8! = 40320
9! = 362880
10! = 3628800
.
.
.

Running permutations() on n elements, the given number of times, takes about that many seconds:

n   times    seconds
--  -------  -------
2   100000   0.32
3   10000    0.09
4   10000    0.33
5   1000     0.18
6   100      0.27
7   10       0.21
8   1        0.17
9   1        1.63
10  1        17.00
If the first argument is a coderef, that sub will be called for each permutation and the return from those calls with be the real return from
permutations(). For example this:
print for permutations(sub{join"",@_},1..3);
...will print the same as:
print for map join("",@$_), permutations(1..3);
...but the first of those two uses less RAM if 3 had been, say, 9. Change the 3 to 10 and many computers won't have enough memory for the latter.
The examples prints:
123 132 213 231 312 321
If you just want to, say, calculate something on each permutation, but are not interested in the list of them, you simply don't take the return value. That is:

my $ant; permutations(sub{$ant++ if $_[-1]>=$_[0]*2},1..9);
...is the same as:
$$_[-1]>=$$_[0]*2 and $ant++ for permutations(1..9);
...but the first uses next to no memory compared to the latter. They have about the same speed. (The examples just count the permutations where the last number is at least twice as large as the first.)
permutations() was created to find all combinations of a person's name. This is useful in "fuzzy" name searches with String::Similarity if you cannot be certain which are the first, middle and last names. With foreign or unfamiliar names that can be difficult to know.
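In Python the same two styles, materializing the full list versus streaming and just counting, map onto itertools.permutations:

```python
from itertools import permutations

# All orderings of 1..3, joined to strings, like the map/join example:
perms = ["".join(map(str, p)) for p in permutations([1, 2, 3])]

# Streaming count, like the coderef variant: permutations of 1..9
# whose last element is at least twice the first, without keeping the list.
count = sum(1 for p in permutations(range(1, 10)) if p[-1] >= 2 * p[0])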
Cartesian product
Easy usage:
Input: two or more arrayrefs with accordingly x, y, z and so on number of elements.
Output: An array of x * y * z number of arrayrefs. The arrays being the cartesian product of the input arrays.
It can be useful to think of this as joins in SQL. In
select statements with more tables behind
from, but without any
where condition to join the tables.
Advanced usage, with condition(s):
Input:
- Either two or more arrayrefs with x, y, z and so on number of elements.
- Or coderefs to subs containing condition checks. Somewhat like
where conditions in SQL.
Output: An array of x * y * z number of arrayrefs (the cartesian product) minus the ones that did not fulfill the condition(s).
This of is as joins with one or more where conditions as coderefs.
The coderef input arguments can be placed last or among the array refs to save both runtime and memory if the conditions depend on arrays further back.
Examples, this:
for(cart(\@a1,\@a2,\@a3)){ my($a1,$a2,$a3) = @$_; print "$a1,$a2,$a3\n"; }
Prints the same as this:
for my $a1 (@a1){ for my $a2 (@a2){ for my $a3 (@a3){ print "$a1,$a2,$a3\n"; } } }
And this: (with a condition: the sum of the first two should be dividable with 3)
for( cart( \@a1, \@a2, sub{sum(@$_)%3==0}, \@a3 ) ) { my($a1,$a2,$a3)=@$_; print "$a1,$a2,$a3\n"; }
Prints the same as this:
for my $a1 (@a1){ for my $a2 (@a2){ next if 0==($a1+$a2)%3; for my $a3 (@a3){ print "$a1,$a2,$a3\n"; } } }
Examples, from the tests:
my @a1 = (1,2); my @a2 = (10,20,30); my @a3 = (100,200,300,400); my $s = join"", map "*".join(",",@$_), cart(\@a1,\@a2,\@a3); ok( $s eq "*1,10,100*1,10,200*1,10,300*1,10,400*1,20,100*1,20,200" ."*1,20,300*1,20,400*1,30,100*1,30,200*1,30,300*1,30,400" ."*2,10,100*2,10,200*2,10,300*2,10,400*2,20,100*2,20,200" ."*2,20,300*2,20,400*2,30,100*2,30,200*2,30,300*2,30,400"); $s=join"",map "*".join(",",@$_), cart(\@a1,\@a2,\@a3,sub{sum(@$_)%3==0}); ok( $s eq "*1,10,100*1,10,400*1,20,300*1,30,200*2,10,300*2,20,200*2,30,100*2,30,400");
Hash-mode returns hashrefs instead of arrayrefs:
@cards=cart( #100 decks of 52 cards deck => [1..100], value => [qw/2 3 4 5 6 7 8 9 10 J Q K A/], col => [qw/heart diamond club star/], ); for my $card ( mix(@cards) ) { print "From deck number $$card{deck} we got $$card{value} $$card{col}\n"; }} @_) / @_ }
Resembles the pivot table function in Excel.
pivot() is used to spread out a slim and long table to a visually improved layout.
For instance spreading out the results of
group by-selects from SQL:
pivot( arrayref, columnname1, columnname2, ...) pivot( ref_to_array_of_arrayrefs, @list_of_names_to_down_fields )
The first argument is a ref to a two dimensional table.
The rest of the arguments is a list which also signals the number of columns from left in each row that is ending up to the left of the data table, the rest ends up at the top and the last element of each row ends up as data.
top1 top1 top1 top1 left1 left2 left3 top2 top2 top2 top2 ----- ----- ----- ---- ---- ---- ---- data data data data data data data data data data data data
Example:
my @table=( ["1997",", "Height", "Winter",171], ["1998","Per", "Weight", "Winter",74], ["1998","Per", "Height", "Winter",183], ["1998","Hilde","Weight", "Winter",62], ["1998","Hilde","Height", "Winter",168], ["1998","Tone", "Weight", "Winter",71], );
.
my @reportA=pivot(\@table,"Year","Name"); print "\n\nReport A\n\n".tablestring(\@reportA);
Will print:
Report A Year Name Height Height Weight Weight Summer Winter Summer Winter ---- ----- ------ ------ ------ ------ 1997 Gerd 170 158 66 64 1997 Hilde 168 164 62 61 1997 Per 182 180 75 73 1997 Tone 70 69 1998 Gerd 171 171 64 64 1998 Hilde 168 168 62 62 1998 Per 182 183 76 74 1998 Tone 70 71
.
my @reportB=pivot([map{$_=[@$_[0,3,2,1,4]]}(@t=@table)],"Year","Season"); print "\n\nReport B\n\n".tablestring(\@reportB);
Will print:
Report B Year Season Height Height Height Weight Weight Weight Weight Gerd Hilde Per Gerd Hilde Per Tone ---- ------ ------ ------ ----- ----- ------ ------ ------ 1997 Summer 170 168 182 66 62 75 70 1997 Winter 158 164 180 64 61 73 69 1998 Summer 171 168 182 64 62 76 70 1998 Winter 171 168 183 64 62 74 71
.
my @reportC=pivot([map{$_=[@$_[1,2,0,3,4]]}(@t=@table)],"Name","Attributt"); print "\n\nReport C\n\n".tablestring(\@reportC);
Will print:
Report C Name Attributt 1997 1997 1998 1998 Summer Winter Summer Winter ----- --------- ------ ------ ------ ------ Gerd Height 170 158 171 171 Gerd Weight 66 64 64 64 Hilde Height 168 164 168 168 Hilde Weight 62 61 62 62 Per Height 182 180 182 183 Per Weight 75 73 76 74 Tone Weight 70 69 70 71
.
my @reportD=pivot([map{$_=[@$_[1,2,0,3,4]]}(@t=@table)],"Name"); print "\n\nReport D\n\n".tablestring(\@reportD);
Will print:
Report D Name Height Height Height Height Weight Weight Weight Weight 1997 1997 1998 1998 1997 1997 1998 1998 Summer Winter Summer Winter Summer Winter Summer Winter ----- ------ ------ ------ ------ ------ ------ ------ ------ Gerd 170 158 171 171 66 64 64 64 Hilde 168 164 168 168 62 61 62 62 Per 182 180 182 183 75 73 76 74 Tone 70 69 70 71
Options:
Options to sort differently and show sums and percents are available. (...MORE DOC ON THAT LATER...)
See also Data::Pivot
Input: a reference to an array of arrayrefs -- a two dimensional table of, rows containing multi-lined cells gets an empty line before and after the row to separate it more clearly.
Returns a data structure as a string. See also
Data::Dumper (serialize was created long time ago before Data::Dumper appeared on CPAN, before CPAN even...)
Input: One to four arguments.
First argument: A reference to the structure you want.
Second argument: (optional) The name the structure will get in the output string. If second argument is missing or is undef or '', it will get no name in the output.
Third argument: (optional) The string that is returned is also put into a created file with the name given in this argument. Putting a
> char in from of the filename will append that file instead. Use
'' or
undef to not write to a file if you want to use a fourth argument.
Fourth argument: (optional) A number signalling the depth on which newlines is used in the output. The default is infinite (some big number) so no extra newlines are output.
Output: A string containing the perl-code definition that makes that data structure. The input reference (first input argument) can be to an array, hash or a string. Those can contain other refs and strings in a deep data structure.
Limitations:
- Code refs are not handled (just returns
sub{die()})
- Regex, class refs and circular recursive structures are also not handled.
Examples:
$a = 'test'; @b = (1,2,3); %c = (1=>2, 2=>3, 3=>5, 4=>7, 5=>11); %d = (1=>2, 2=>3, 3=>\5, 4=>7, 5=>11, 6=>[13,17,19,{1,2,3,'asdf\'\\\''}],7=>'x'); print serialize(\$a,'a'); print serialize(\@b,'tab'); print serialize(\%c,'c'); print serialize(\%d,'d'); print serialize(\("test'n roll",'brb "brb"')); print serialize(\%d,'d',undef,1);
Prints accordingly:
$a='test'; @tab=('1','2','3'); %c=('1','2','2','3','3','5','4','7','5','11'); %d=('1'=>'2','2'=>'3','3'=>\'5','4'=>'7','5'=>'11','6'=>['13','17','19',{'1'=>'2','3'=>'asdf\'\\\''}]); ('test\'n roll','brb "brb"'); %d=('1'=>'2', '2'=>'3', '3'=>\'5', '4'=>'7', '5'=>'11', '6'=>['13','17','19',{'1'=>'2','3'=>'asdf\'\\\''}], '7'=>'x');
Areas of use:
- Debugging (first and foremost)
- Storing arrays and hashes and data structures of those on file, database or sending them over the net
- eval earlier stored string to get back the data structure
Be aware of the security implications of
evaling a perl code string stored somewhere that unauthorized users can change them! You are probably better of using YAML::Syck or Storable without enabling the CODE-options if you have such security issues. More on decompiling Perl-code: Storable or B::Deparse.
Debug-serialize, dumping data structures for you to look at.
Same as
serialize() but the output is given a newline every 80th character. (Every 80th or whatever
$Acme::Tools::Dserialize_width contains)
Call instead of
system if you want
die (Carp::croak) when something fails.
sub sys($){my$s=shift;system($s)==0 or croak"ERROR, sys($s) ($!) ($?)"}
Returns true or false (actually 1 or 0) depending on whether the current sub has been called by itself or not.
sub xyz { xyz() if not recursed; }
Input: one or two arguments
First argument: a string, source code of the brainfuck language. String containing the eight charachters + - < > [ ] . , Every other char is ignored silently.
Second argument: if the source code contains commas (,) the second argument is the input characters in a string.
Output: The resulting output from the program.
Example:
print brainfuck(<<""); #prints "Hallo Verden!\n" ++++++++++[>+++++++>++++++++++>+++>+<<<<-]>++.>---.+++++++++++..+++.>++.<<++++++++++++++ .>----------.+++++++++++++.--------------.+.+++++++++.>+.>.
See
Just as "brainfuck" but instead it return the perl code to which the brainfuck code is translated. Just
eval() this perl code to run.
Example:
print brainfuck;
Just as "brainfuck2perl" but optimizes the perl code. The same example as above with brainfuck2perl_optimized returns this equivalent but shorter perl code:
$b[++$c]+=8;while($b[$c]){$b[--$c]+=8;--$b[++$c]}$b[--$c]+=8;out;$b[++$c]+=6; while($b[$c]){$b[--$c]+=6;--$b[++$c]}$b[--$c]-=3;out;$o; 91230 bytes if you accept that you can only check the data structure for existence of a string and accept false positives with an error rate of 0.03 (that is three do not currently support
counting_bits => 3 so 4 and 8, which should be avoided, and Acme::Tools::bfadd don't check for that).
bfdelete croaks on deletion of a non-existing key
Deletes from a counting bloom filter:
bfdelete($bf, @keys); bfdelete($bf, \@keys);
Returns
$bf after deletion.
Croaks (dies) on deleting a non-existing key or deleting from an previouly overflown counter in a counting bloom filter. btw is not very deep, two levels at most)
This:
my $bfc = bfclone($bf);
Works just as:
use Storable; my $bfc=Storable::dclone($bf);
use Acme::Tools; my $bf=new Acme::Tools::BloomFilter(0.1,1000); # the same as bfinit, see bfinit above at once (speedup). Using different salts to the key on each md5 results in different hash functions.
Digest::SHA512 would have been even better since it returns more bits, if it werent for the fact that it's much)
Md5 seems to be an ok choice both for speed and avoiding collitions due to skewed data keys.
See also Scaleable Bloom Filters: (not implemented in Acme::Tools)
...and perhaps
Release history-2014, Kjetil Skotheim
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
|
http://search.cpan.org/~kjetil/Acme-Tools/Tools.pm
|
CC-MAIN-2014-52
|
refinedweb
| 9,912
| 65.32
|
Setup PyCharm
Read :- https:/
Attempting to setup PyCharm
Project :-
popup("hello world")
Checked
Preferences >
Build,
Console >
Python Console
Result
/Users/
Traceback (most recent call last):
File "/Users/
popup("Hello World")
NameError: name 'popup' is not defined
Process finished with exit code 255
Changed Code to:
import org.sikuli.
from sikuli import *
popup("hello world")
Result
/Users/
Traceback (most recent call last):
File "/Users/
import org.sikuli.
ImportError: No module named sikuli
Process finished with exit code 255
searched my machine
SikulixForJython does not exist as a file
searched for TEXT my machine
SikulixForJython found in
/Applications/
/Applications/
Question information
- Language:
- English Edit question
- Status:
- Answered
- For:
- Sikuli Edit question
- Assignee:
- No assignee Edit question
- Last query:
- 2017-12-04
- Last reply:
- 2017-12-04
So got hello world to work but failing miserably in getting an existing sikuli script into and working
Created a blank project called "Sailing"
Made sure that Environment variables : CLASSPATH=
and Python interpretor : Jython 2.7
In Finder added an existing Sikuli Script in the "Sailing" Folder
in PyCharm
added at the start of the Sikuli Script :-
import org.sikuli.
from sikuli import *
popup("hello world")
exit(0)
Just to make sure that it ignore the remaining code
Check interpreter and CLASSPATH again
same error
conclusion:-
I am doing something wrong - Hello World works fine - Sailing fails
Will spend more time tomorrow trying to find out where I am failing
There are 2 different scenarios, that have to be setup separately:
-- 1: running the Python console from inside an existing project
-- 2: running a script in an existing project using the Run feature backed by Configurations
for 1:
the approach in comment #1 works for using the console
for 2:
for anything else but the console you have to use a so-called Configuration:
Menu Run -> Edit Configurations
There is a set of default configurations; you might add the environment variable CLASSPATH to the relevant ones, so that configurations you create later already have this preset.
... and simply saying Run for some script will auto-create a suitable Configuration, that you can edit later to meet your requirements.
Hope it helps
Not certain what my outcome is
Have created a New Project from Scratch
see below google sheet -
Tab "New Project"
then added an existing Sikuli Script
Tab "Add Existing"
https:/
Can you give me comments on the results
Looks ok.
The problem:
There do not seem to be obvious rules about where and when "default" settings are inherited.
In fact, it looks like they are only "copied" once at creation time or at first use somewhere else, if at all.
So to be on the safe side, one should always check the project settings for new projects after creation and do the same for the run configurations (I always set them up manually and do not rely on the automatics - see comment #3)
There is another point not yet mentioned: code completion with regard to SikuliX features.
With the settings so far, SikuliX methods should be marked as errors (not found).
This can be repaired by adding the folder reference <sikulix-
As mentioned: working on the docs for PyCharm.
Please come back with any insights, that should be doc'ed
After having made some more experiments, things turn out to be more complex than expected.
PyCharm is principally usable in the above mentioned scenarios.
But a rather annoying caveat is that neither the editor nor the console supports code completion for anything residing inside jars or Java classes - hence the complete Java API is not visible to the editor's code completion (which I remember is true for Eclipse PyDev too).
I played around a little with IntelliJ IDEA with the Python plugin: there you get at least access to the class names and method names inside the jar and Java classes when editing, but not to the parameter lists (hence still not optimal).
So for me finally IntelliJ IDEA is the better solution, since I use it for Java programming too.
Nevertheless I will complete the docs for PyCharm in this sense.
-- 1. thanks for the pointer to the question, since I am just on the way to add a chapter about PyCharm to the docs
-- 2. To open a Python console, you must be in an existing project (just a skeleton without any modules is sufficient).
As correctly mentioned in the question, a pointer to sikulixapi.jar must be in the CLASSPATH environment variable.
It makes sense to add it as a standard default to the global preferences (only accessible from the welcome dialog with no project open).
The problem: existing projects will not inherit this setting, when you change it globally. For those you have to add it manually to the Preferences while having the project open. Only projects created after the global Preferences change will inherit the global default settings. I did not find something like a "project settings refresh" feature.
So I setup the global preferences for the console according to the question and created a new project without any code.
Running the Python console from this project works:
from org.sikuli.
from sikuli import *
popup("hello")
https://answers.launchpad.net/sikuli/+question/661362
public class Solution {
    public void nextPermutation(int[] nums) {
        int i = nums.length - 2;
        while (i >= 0 && nums[i + 1] <= nums[i]) {
            i--;
        }
        if (i >= 0) {
            int j = nums.length - 1;
            while (j >= 0 && nums[j] <= nums[i]) {
                j--;
            }
            swap(nums, i, j);
        }
        reverse(nums, i + 1);
    }

    private void reverse(int[] nums, int start) {
        int i = start;
        int j = nums.length - 1;
        while (i < j) {
            swap(nums, i, j);
            i++;
            j--;
        }
    }

    private void swap(int[] nums, int i, int j) {
        int temp = nums[i];
        nums[i] = nums[j];
        nums[j] = temp;
    }
}
We iterate through nums from right to left to find the first index i where the descending order breaks, i.e. the first position with nums[i] < nums[i + 1]. Then we scan nums from right to left down to index i + 1 to find a number that is greater than nums[i] and swap it with nums[i]. Finally, we reverse nums[i + 1] ... nums[nums.length - 1].
By doing so, we can guarantee that:
- The next permutation is always greater or equal to the current permutation (we assume the numbers in the current permutation are not sorted in descending order).
- There does not exist a permutation that is greater than the current permutation and smaller than the next permutation generated by the above code.
- If the numbers in the current permutation are already sorted in descending order (i.e. greatest possible value), the next permutation has the smallest value.
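The algorithm can be exercised with a small standalone driver; the class below (the name Main is just for this demo) repeats the same logic as the Solution above so it compiles on its own:

```java
import java.util.Arrays;

public class Main {
    // Same algorithm as the Solution class above.
    static void nextPermutation(int[] nums) {
        int i = nums.length - 2;
        while (i >= 0 && nums[i + 1] <= nums[i]) i--;   // find the first descent from the right
        if (i >= 0) {
            int j = nums.length - 1;
            while (j >= 0 && nums[j] <= nums[i]) j--;   // rightmost element greater than nums[i]
            int t = nums[i]; nums[i] = nums[j]; nums[j] = t;
        }
        // Reverse the suffix so it becomes the smallest possible arrangement.
        for (int l = i + 1, r = nums.length - 1; l < r; l++, r--) {
            int t = nums[l]; nums[l] = nums[r]; nums[r] = t;
        }
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3};
        nextPermutation(a);
        System.out.println(Arrays.toString(a)); // [1, 3, 2]

        int[] b = {3, 2, 1}; // already the greatest permutation
        nextPermutation(b);
        System.out.println(Arrays.toString(b)); // [1, 2, 3]
    }
}
```

Note how [3, 2, 1] wraps around to [1, 2, 3], matching the third guarantee in the list above.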
Time Complexity:
O(n)
Extra Space:
O(1)
https://dev.to/algobot76/leetcode-31-next-permutation-1f5o
Are Ports on Your Computers Listening?
Several of the applications I work with on a daily basis use TCP or UDP to communicate with each other. One application is listening on a specific port, waiting to receive data from the sending application. Occasionally the receiving application is not there, ready for the data. The problem could be a power outage, the network may be unavailable, or there could be a *gasp* bug that has caused the application not to be available. It would be nice to know this before the sending application attempts to make a connection so that the problem can be rectified before it impacts that system.
It is not difficult for you to write an application that scans the ports of interest and monitors their status. Below is a PortScanner class that you could use as a skeleton for monitoring your computers. The Scan method takes the IP address and port range to scan and builds a list of ports within the range that are active and inactive. A socket is used to attempt to make a TCP connection to each port in the range. If the connection succeeds the port is added to the active port list otherwise the port is added to the inactive port list. The list of ports is accessed through the ActivePorts and InactivePorts member variables.
using System.Collections.Generic;
using System.Net.Sockets;

class PortScanner
{
    public List<int> ActivePorts = new List<int>();
    public List<int> InactivePorts = new List<int>();

    public PortScanner()
    {
    }

    public void Scan(string IP, int StartPort, int EndPort)
    {
        Socket Sock = null;
        for (int Port = StartPort; Port <= EndPort; Port++)
        {
            try
            {
                // Create a new socket and attempt to connect to the ip/port
                Sock = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
                Sock.Connect(IP, Port);
                // Connection succeeded, add port to list of active ports
                ActivePorts.Add(Port);
            }
            catch (SocketException)
            {
                // Connection failed, add port to list of inactive ports
                InactivePorts.Add(Port);
            }
            finally
            {
                // Gracefully close down the socket
                if (Sock != null)
                {
                    if (Sock.Connected) Sock.Disconnect(false);
                    Sock.Close();
                }
            }
        }
    }
}
Using PortScanner is straightforward. Create an instance of the PortScanner class and call the Scan method with the IP address and port range you need to monitor. The code below does just that and then displays the list of active and inactive ports on the console.
// Create a new PortScanner
PortScanner PS = new PortScanner();

// Scan for open ports
PS.Scan("10.1.1.12", 5995, 5999);

// Write out the list of active/inactive ports
Console.WriteLine("Port Scanner Results:");
Console.WriteLine("  Active Ports: ");
foreach (int Port in PS.ActivePorts)
    Console.WriteLine("    " + Port.ToString());
Console.WriteLine("  Inactive Ports: ");
foreach (int Port in PS.InactivePorts)
    Console.WriteLine("    " + Port.ToString());
Here is the console output for this example.
Port Scanner Results:
  Active Ports:
    5999
  Inactive Ports:
    5995
    5996
    5997
    5998
I said the PortScanner class could be used as a skeleton for your own monitor for good reason. I've stripped away parts of the class I use to keep the example simple and short. The ActivePorts and InactivePorts members should really be private. They could then be exposed through properties that return a read-only version of the list so that they can't be changed by the application using the PortScanner class. The Scan method only uses a TCP connection to test the port. Depending upon your application you may need to use a different protocol. Additions could be made to the Scan method to determine the reason the connection failed. You also need to decide what to do with the information you now have about the ports. I'll leave these items as exercises for you to complete to make this fit your needs.
I also recommend that you don't just start running this against all the servers in your organization. First, scanning a wide range of ports is not a very fast process. Second, you may set off alarms if your servers are monitoring for port scanners in order to prevent unwanted access.
http://www.codeguru.com/csharp/.net/net_framework/systemnamespace/article.php/c16059/Are-Ports-on-Your-Computers-Listening.htm
Change of jsessionid after login (singhakanksha, Jun 13, 2011 7:36 AM)
Hi,
I have an issue related to the change of JSESSIONID on login.
I am working on an ATG ecommerce application, where I am using the jboss-eap-4.2 server and ATG 9.1. We have a critical security issue which says that after login, the session id does not change. As this could lead to a man-in-the-middle attack, we need to change the session id after every login.
In our login page, we have a cookie for "remember me" functionality that stores only the username (and not the password). I could see that the cookie stores the JSESSIONID, which does not change after login. As a security fix, I have marked the cookie as "secure". But this still does not solve our problem.
How can we make sure that the session id is being changed or a new session is created and all data related to the previous session( session attributes and cookies) is copied over to the new one?
I am new to the Jboss community. Kindly let me know if any other information is also required.
Thanks in Advance
Regards
Akanksha
1. Re: Change of jsessionid after login (quincyleung, Jul 12, 2011 11:36 PM, in response to singhakanksha)
Hi Akanksha,
I am experiencing the same problem.
It happens when a user logs out and logs back in using the same browser window.
I tried to google it a bit and tried the following, but neither works:
1) setting -Dorg.apache.catalina.connector.Request.SESSION_ID_CHECK=true
2) removing jsessionid cookies when invalidate the session during logout
I am using jboss 4.2.3 here.
Hope someone can lend us a hand.
2. Re: Change of jsessionid after login (Shantanu Upadhyaya, Jan 10, 2012 5:07 PM, in response to quincyleung)
Did anyone find a solution to this ? I'm using jboss 4.2.3 as well.
When user goes to the welcome page ( not logged in yet ), a session id is created.
When user enters the credentials, and successfully logs in, the same session id is used.
I don't see this problem in Tomcat.
org.apache.catalina.connector.Request.SESSION_ID_CHECK does not work.
3. Re: Change of jsessionid after login (Jean-Frederic Clere, Jan 11, 2012 3:26 AM, in response to Shantanu Upadhyaya)
emptySessionPath in the Connector?
4. Re: Change of jsessionid after login (Shantanu Upadhyaya, Jan 11, 2012 11:26 AM, in response to Jean-Frederic Clere)
I read the documentation on SESSION_ID_CHECK and emptySessionPath. Can you please explain how these solve the problem ? I guess these fixes have helped some and not worked for others.
5. Re: Change of jsessionid after login (Jean-Frederic Clere, Jan 12, 2012 3:20 AM, in response to Shantanu Upadhyaya)
SESSION_ID_CHECK allows a session id to be reused if it is already used in any application of the container.
emptySessionPath sets the cookie path to / so the cookie is shared between the webapps. (if you are using a portal you may need it).
6. Re: Change of jsessionid after login (Shantanu Upadhyaya, Jan 13, 2012 3:34 PM, in response to Jean-Frederic Clere)
I have tomcat 6.0.35 in local dev env and jboss 4.2.3 on unix.
This problem only happens on JBoss. If I make the Tomcat within JBoss the same as my standalone local Tomcat, it should work. Sounds simple.
Tomcat, by default shows the path in the http headers. Therefore, I set emptySessionPath="false" in \server\default\deploy\jboss-web.deployer\server.xml
Now I see the path. Great! But that doesn't fix the problem.
Is it possible that changeSessionIdOnAuthentication is the culprit ? If so, where do I set this in Jboss ?
7. Re: Change of jsessionid after login (Shantanu Upadhyaya, Jan 13, 2012 4:46 PM, in response to quincyleung)
Quincy, were you able to resolve the session id fixation problem? If so, can you post it here. Thanks.
8. Re: Change of jsessionid after login (Shantanu Upadhyaya, Jan 17, 2012 10:10 AM, in response to Shantanu Upadhyaya)
Since I'm using j_security_check authentication...how can the session be invalidated when using Container managed authentication ? My login module is in a common jar and I cannot modify that code. I would need some kind of a login pre processor to do this.
9. Re: Change of jsessionid after login (Al Lim, Jan 30, 2012 4:14 PM, in response to singhakanksha)
I think I have a simple solution: invalidate the session and delete the cookie named jsessionid!
-- login page code:
a) Call the invalidate function for the httpsession
b) Tell client to delete the cookie named jsessionid
-- User example
1. User goes to webpage, which is http, gets a session and jsessionid
2. User goes to login page, which is HTTPS
3. The login page invalidates the session and deletes the jsessionid cookie for good measure
4. Whatever page the user goes to after the login page, is issued a new jsessionid cookie
Seems to work after some initial testing, but probably will have to experiment more. The one thing I need to verify is that the first jsessionid is indeed purged from the session id list.
10. Re: Change of jsessionid after login (Endrigo Antonini, Aug 6, 2012 4:56 PM, in response to singhakanksha)
I know it has been a long time since this post, but I'm having the same problem!
But I'm using JBoss 7.x.x.
Is there any way to regen the sessionId to the user?
I'm using a custom login module.
11. Re: Change of jsessionid after login (greco, Sep 4, 2012 4:03 PM, in response to Endrigo Antonini)
I'm trying to find an answer as well. I don't understand why they rejected your ticket to begin with, it seemed valid on all points.
12. Re: Change of jsessionid after login (Endrigo Antonini, Sep 5, 2012 8:40 AM, in response to greco)
13. Re: Change of jsessionid after login (greco, Sep 7, 2012 1:49 PM, in response to Endrigo Antonini)
I figured it out!! You need to write a custom FormAuthenticator that sets the change session id on authentication to true before the call to authenticate and add it as a valve in your jboss-web.xml.
Here's what I did:
Write a custom FormAuthenticator
import java.io.IOException;

import javax.servlet.http.HttpServletResponse;

import org.apache.catalina.authenticator.FormAuthenticator;
import org.apache.catalina.connector.Request;
import org.apache.catalina.deploy.LoginConfig;

public class MyAuthenticator extends FormAuthenticator {
    @Override
    public boolean authenticate(final Request request, final HttpServletResponse response,
                                final LoginConfig config) throws IOException {
        setChangeSessionIdOnAuthentication(true);
        return super.authenticate(request, response, config);
    }
}
Add the valve config to your jboss-web.xml
<jboss-web>
    <context-root>/<!-- your app context --></context-root>
    <security-domain><!-- your domain --></security-domain>
    <valve>
        <class-name>com.domain.path.to.your.MyAuthenticator</class-name>
    </valve>
</jboss-web>
If you are using maven make sure you use the correct version of the catalina libraries. Add this to your pom.xml
<dependency>
    <groupId>org.apache.tomcat</groupId>
    <artifactId>tomcat-catalina</artifactId>
    <version>7.0.27</version>
    <scope>provided</scope>
</dependency>
That's all I had to do on AS7 (7.0.2). On 7.1.1 I believe it's the same approach, but instead of extending the FormAuthenticator from the catalina jar you need to work with the org.jboss.as.web.security.ExtendedFormAuthenticator.
The session is now changed prior to authentication and session fixation is no longer an issue.
14. Re: Change of jsessionid after login (Endrigo Antonini, Sep 10, 2012 7:55 AM, in response to singhakanksha)
Thanks Greco!
I'll try this solution!!
https://developer.jboss.org/thread/167949
package URL::Social::Twitter;

use Moose;
use namespace::autoclean;

extends 'URL::Social::BASE';

=head1 NAME

URL::Social::Twitter - Interface to the Twitter API.

=head1 DESCRIPTION

Do not use this module directly. Access it from L<URL::Social> instead;

    use URL::Social;

    my $social = URL::Social->new(
        url => '...',
    );

    print $social->twitter->share_count . "\n";

=head1 METHODS

=cut

has 'share_count' => ( isa => 'Maybe[Int]', is => 'ro', lazy_build => 1 );

=head2 share_count

Returns the number of times the URL in question has been shared/tweeted.

Returns undef if it fails to retrieve the data from Twitter.

=cut

sub _build_share_count {
    my $self = shift;

    my $url = '' . $self->url;

    if ( my $share_count = $self->get_url_json($url)->{count} ) {
        return $share_count || 0;
    }
}
https://web-stage.metacpan.org/release/TOREAU/URL-Social-0.07/source/lib/URL/Social/Twitter.pm
I'm currently working on a personal architecture project. Now I have some stuff created and want to get some information out of the model.
I know there is the "Arch Schedule" tool available. But I think it is not flexible enough to get every sort of information one could dream of out of the model. As I work with SQL (Structured Query Language) a lot during work, I decided to give it a try and create a SQL module for FreeCAD.
After a few hours of playing around with it I have something to show you. It is by far not finished, but I think it is still usable (at least in the Python console).
The workbench can be found at.
For help requests or Bugs use this Topic:
For new Features post to this Topic:
EDIT: The following is a bit outdated. See the up to date documentation () for a list of all available features.
There is no documentation available right now. But i will give you a short introduction here. I think this should be enough to get you going
The workbench has no GUI tools yet. So everything works via the python console right now.
1. At first you might want to open a document that has some objects in it. E.g. the attached one.
2. Go to the Python console and create a new SQL parser. You can reuse this parser to parse multiple SQL statements.
Code: Select all
from sql import freecad_sql_parser

sql_parser = freecad_sql_parser.newParser()
3. Now you can use the parser to parse a statement and select something from the document.
Code: Select all
# Simply select all objects from the document
select_all = sql_parser.parse('Select * from document')
select_all.execute()
[[<Sketcher::SketchObject>], [<Part::PartFeature>], [<Sketcher::SketchObject>], [<Part::PartFeature>], [<Sketcher::SketchObject>], [<Part::PartFeature>], [<Part::PartFeature>], [<Part::PartFeature>], [<Part::PartFeature>]]
4. You can select only some properties of the object if you want.
Code: Select all
select_label = sql_parser.parse('Select Label from document')
select_label.execute()
[['WallTrace'], ['Wall'], ['WallTrace001'], ['Wall001'], ['WallTrace002'], ['Wall002'], ['Structure'], ['Structure001'], ['Structure002']]
5. You can also limit the objects in a "Where" clause. The where clause supports AND, OR, and even brackets should work right now. To compare properties you can use "=", "!=", ">", "<", ">=", "<=".
Code: Select all
select_something = sql_parser.parse("Select Label from document Where IfcRole = 'Column' OR Label = 'Wall'")
select_something.execute()
[['Wall'], ['Structure'], ['Structure001'], ['Structure002']]
6. You can even select multiple properties at once.
Code: Select all
select_name_area_columns = sql_parser.parse("SELECT Label, VerticalArea From document Where IfcRole = 'Column'")
select_name_area_columns.execute()
[['Structure', 400000 mm^2], ['Structure001', 400000 mm^2], ['Structure002', 4e+06 mm^2]]
7. You can also use functions (Count, Sum, Min, Max).
Code: Select all
select_number_of_objects = sql_parser.parse('Select count(*) From document')
select_number_of_objects.execute()
[[9]]
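To make the selection semantics concrete outside FreeCAD, here is a plain-Python sketch of what such a Where clause evaluates to. The object data below is made up to mirror the example document, and this is not the workbench's actual implementation — just an illustration of the filtering idea:

```python
# Hypothetical stand-ins for FreeCAD document objects: plain dicts with
# a couple of the properties used in the examples above.
objects = [
    {"Label": "Wall",         "IfcRole": "Wall"},
    {"Label": "Structure",    "IfcRole": "Column"},
    {"Label": "Structure001", "IfcRole": "Column"},
]

# Equivalent of: Select Label from document Where IfcRole = 'Column' OR Label = 'Wall'
result = [[o["Label"]] for o in objects
          if o["IfcRole"] == "Column" or o["Label"] == "Wall"]
print(result)  # [['Wall'], ['Structure'], ['Structure001']]

# Equivalent of: Select count(*) From document
print([[len(objects)]])  # [[3]]
```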
https://forum.freecadweb.org/viewtopic.php?f=9&t=33403
With LINQ, it's now really easy to design your database in an interactive tool like SQL Server Management Studio, drag your tables into a DBML in Visual Studio and then get to work on all the classes and relationships that have been created for you.
This works great and ensures that there is 'one fact in one place' concerning how data is persisted - you don't need to maintain a data layer AND a database and struggle to keep them in sync. But when it comes to metadata about the columns in your database, up to now, you've had to maintain that information in two (or more) places. The length of a text field in your UI for example should be limited to the length of the column that will store it in the database.
Today, you probably have the length defined in your database and you have the length defined in your UI. You might also have the length defined in some code that truncates data when storing it. Change it in the database and you have to go change it everywhere else.
This brief article shows you how to get column metadata from the properties on LINQ objects, allowing you to have a single master (the database) that defines the allowed length of every property. Now your UI, your business layer, your data layer and your database can all be in sync all the time.
Auto-truncating data is rarely the right thing to do; normally you would only use the first of the two methods presented here to get the length limit and then pass it up through your business layer to your UI so the UI can validate the user's input. Auto-truncate might be used during some batch input process where there is, say, a field that is OK to truncate either with or without a warning to the user, like, say, a comments field.
Note also that this article isn't prescribing any particular system design, it's meant as an illustration as to how to get to the column metadata; it's up to you to decide how to use it. In an advanced, distributed system where the UI isn't talking directly to LINQ objects, this code might find use in the hands of your testers who can automate the generation of max-length and max-length+1 inputs to ensure that max-length data can pass through all layers of the system and that max-length+1 data is properly rejected in your validation code and in your business layer.
Add these two static methods to your Utilities assembly:
/// <summary>
/// Gets the length limit for a given field on a LINQ object ... or zero if not known
/// </summary>
/// <remarks>
/// You can use the results from this method to dynamically
/// set the allowed length of an INPUT on your web page to
/// exactly the same length as the length of the database column.
/// Change the database and the UI changes just by
/// updating your DBML and recompiling.
/// </remarks>
public static int GetLengthLimit(object obj, string field)
{
    int dblenint = 0; // default value = we can't determine the length
    Type type = obj.GetType();
    PropertyInfo prop = type.GetProperty(field);

    // Find the Linq 'Column' attribute
    // e.g. [Column(Storage="_FileName", DbType="NChar(256) NOT NULL", CanBeNull=false)]
    object[] info = prop.GetCustomAttributes(typeof(ColumnAttribute), true);

    // Assume there is just one
    if (info.Length == 1)
    {
        ColumnAttribute ca = (ColumnAttribute)info[0];
        string dbtype = ca.DbType;
        if (dbtype.StartsWith("NChar") || dbtype.StartsWith("NVarChar"))
        {
            int index1 = dbtype.IndexOf("(");
            int index2 = dbtype.IndexOf(")");
            string dblen = dbtype.Substring(index1 + 1, index2 - index1 - 1);
            int.TryParse(dblen, out dblenint);
        }
    }
    return dblenint;
}

/// <summary>
/// If you don't care about truncating data that you are setting on a LINQ object,
/// use something like this ...
/// </summary>
public static void SetAutoTruncate(object obj, string field, string value)
{
    int len = GetLengthLimit(obj, field);
    if (len == 0) throw new ApplicationException("Field '" + field +
        "' does not have length metadata");
    Type type = obj.GetType();
    PropertyInfo prop = type.GetProperty(field);
    if (value.Length > len)
        prop.SetValue(obj, value.Substring(0, len), null);
    else
        prop.SetValue(obj, value, null);
}
Using them is easy. Suppose you have an instance 'customer' of LINQ type called 'Customer' and you want to get the length of the 'Name' field:
int len = GetLengthLimit(customer, "Name");
You would probably implement this at the lowest level in your solution and then provide methods to pass the length metadata up through your business logic to your UI. LINQ's partial classes might be the right place to implement this. You might, for example, add an int NameLength property to complement your Name property.
SetAutoTruncate(song, "Comments", "Really long comments about the song that someone else put in to the song metadata but which you really don't care about");
It's trivial to add a cache allowing you to go from <Type + Field Name> to <length> without having to reflect on the Type every single time, but as always, optimization like that is nearly always best left until you need it.
Don't forget to include the appropriate using statements:
using System;
using System.Reflection;
using System.Data.Linq;
using System.Data.Linq.Mapping;
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
http://www.codeproject.com/Articles/27392/Using-the-LINQ-ColumnAttribute-to-Get-Field-Length?PageFlow=FixedWidth
I recently launched a rewrite of my brother's guitar-teaching business website, cgguitar.co.uk. During the rewrite I followed some guiding principles which I believe are best practices when building any website:
- Use the right tool for the job. You don't need a metric ton of JavaScript for most websites' minimum user experiences.
- Provide a great baseline experience with No JavaScript whatsoever.
- Limit the number of calls to external services to keep the page load fast.
In this post I'll describe my approach to getting embedded YouTube playlist content into the website at build time, reducing the client-side calls to YouTube to only the embedded video and thumbnails, with no calls out to the YouTube Data API. In addition to this, I'll show you how you can keep the site up to date with easy-to-configure cron jobs (scheduled builds).
The feature I built, and will explain here, is an embedded YouTube playlist component which fetches all the data and stats for YouTube playlists at build time and renders their video metadata/thumbnails directly into the HTML. You can check out the feature live over at.
The problem with client side
Calling out to external APIs/services from your client-side JavaScript can introduce many problems, to name a few:
Security - if you want to hide your token or keep it secure you either have to:
- Ensure your token only works on your website's domain, but this doesn't stop people using it from outside of a web browser.
- Add some complex proxy set up where you hide the token on a server you manage, requires having a server or proxy configuration.
Rate limiting/charges - most APIs have limits to the number of API calls you can make, or will start charging you for usage:
- Your website content doesn't scale, each visitor would be using your token to call the external services for every visit.
- You could end up incurring accidental charges!
JavaScript needed - In order to show the data you want to show to the user, you need to serve JavaScript to your users:
- Depending on Network speed or the amount of JavaScript on the page the user has to wait for the JS to download before seeing any content.
- A user may choose to disable JavaScript.
- JavaScript may fail to load entirely, rendering a useless experience to users.
Moving your calls to external APIs to build time
This approach is not a silver bullet; not every feature can support it, e.g. if you want to work with user-submitted content.
However, if all you are showing is content that changes infrequently, moving the data fetching to build time can be a really great solution.
The static site I built for my brother's business uses Eleventy, a fantastic static site generator.
I wrote about getting started with 11ty in How I got started with 11ty.
The next section will assume some knowledge about 11ty, or static site generators in general.
11ty has a plugin called @11ty/eleventy-cache-assets which you can use to fetch any data you like.
const Cache = require("@11ty/eleventy-cache-assets");

module.exports = async function() {
  let url = "";
  /* This returns a promise */
  return Cache(url, {
    duration: "1d", // save for 1 day
    type: "json"    // we'll parse JSON for you
  });
};
The awesome thing about this plugin is that once the data is fetched it is cached so future local builds do not have to re-fetch data, meaning your builds can remain lightning fast which is a common characteristic of any 11ty project.
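The fetch-once-then-reuse behaviour can be pictured with a tiny in-memory sketch. This is only an illustration of the caching idea — the real plugin persists responses to a `.cache` folder on disk and handles durations, response types and errors for you; all names here are hypothetical:

```javascript
// Illustration only: a minimal in-memory version of the fetch-and-cache idea.
const cache = new Map();

async function cachedFetch(url, fetcher, ttlMs) {
  const hit = cache.get(url);
  if (hit && Date.now() - hit.fetchedAt < ttlMs) {
    return hit.data; // still fresh: no network call
  }
  const data = await fetcher(url); // in practice, a real HTTP GET
  cache.set(url, { data, fetchedAt: Date.now() });
  return data;
}

// Usage sketch: the second call within the TTL never hits the network.
(async () => {
  let networkCalls = 0;
  const fetcher = async () => { networkCalls++; return { items: [] }; };
  await cachedFetch("https://example.com/feed.json", fetcher, 86400000);
  await cachedFetch("https://example.com/feed.json", fetcher, 86400000);
  console.log(networkCalls); // 1
})();
```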
Embedding YouTube playlists at build time
For my feature I decided I wanted to be able to pick and choose which YouTube playlists that I wanted to show in the website, it is however possible to fetch all YouTube playlists for an account too. I wanted to be able to choose so that I could add, order and describe new playlists in my CMS (Netlify CMS).
The playlists in the website are defined as markdown in the code, in a folder named playlists; Netlify CMS is configured to read these files, e.g.:
---
title: Beginner guitar lessons
name: beginner-guitar-lessons
id: PLA0cAQ-2uoeoJoFfUz9oq9qhmlnsjFRhU
---
The first step to getting my playlists into 11ty is to define them as a collection; to do this, inside of the
src/playlists folder I create a playlists.json.
{ "tags": ["playlist"], "permalink": false }
This creates an 11ty collection of all of the playlists, with their "id", "name" and "description".
Inside of my videos page I can then work with this collection in my Nunjucks template:
{%- if collections.playlists %}
  {%- asyncEach playlist in collections.playlists | fetchYouTubePlaylists %}
    {%- include 'partials/video-playlist.njk' %}
  {%- endeach %}
{%- endif %}
If you are unfamiliar with template languages in 11ty you can read about them over here.
I'll show what
partials/video-playlist.njk is later on in the article.
fetchYouTubePlaylists is where the magic happens and where we will start to use
@11ty/eleventy-cache-assets.
This is an 11ty filter which is defined in my
.eleventy.js config file.
eleventyConfig.addNunjucksAsyncFilter("fetchYouTubePlaylists", async (playlists, callback) => {
  const data = await getPlaylists(playlists);
  callback(null, data);
});
Let's dive a layer deeper:
getPlaylists makes a call to
getPlaylistItem, which is where I'm actually doing the data caching.
module.exports.getPlaylists = async (playlists) => {
  if (!playlists) {
    return [];
  }
  const lists = await Promise.all(playlists.map(async ({ id, title, description }) => {
    const content = await getPlaylistItem(id);
    return {
      title,
      id,
      description,
      link: `${id}`,
      ...(content || {}),
    };
  }));
  return lists;
};
This function is looping through all of my playlists and fetching the items (videos) in that playlist. It is also adding the name, description and direct link to YouTube for the whole playlist.
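The merging step can be sketched in isolation. Here the YouTube fetch is stubbed out with a fake (fakeFetchPlaylistItem is an illustrative stand-in, not a real API), so only the shape of the merge is shown:

```javascript
// Sketch of the playlist-merging step, with the YouTube fetch stubbed out.
// `fakeFetchPlaylistItem` is an illustrative stand-in for the real API call.
const fakeFetchPlaylistItem = async (id) => ({ videos: [`video-of-${id}`] });

const getPlaylists = async (playlists) => {
  if (!playlists) {
    return [];
  }
  // Fetch every playlist's items in parallel, then merge in the
  // metadata (title, description) that came from the markdown files.
  return Promise.all(playlists.map(async ({ id, title, description }) => {
    const content = await fakeFetchPlaylistItem(id);
    return { title, id, description, ...(content || {}) };
  }));
};

// Demo: one playlist defined in "markdown", enriched with fetched videos.
getPlaylists([{ id: "abc", title: "Lessons", description: "Beginner" }])
  .then((lists) => console.log(JSON.stringify(lists)));
```

Promise.all matters here: each playlist's fetch starts immediately rather than waiting for the previous one, which keeps the build fast even with several playlists.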
Now for getPlaylistItem:
const getPlaylistItem = async (playlistId) => {
  const apiUrl = '';
  const maxResults = 20;
  const order = 'viewCount';
  const url = `${apiUrl}?key=${apiKey}&part=${encodeURIComponent('snippet,contentDetails')}&type=video%2C%20playlist&maxResults=${maxResults}&playlistId=${playlistId}&order=${order}`;

  console.log(`Fetching YouTube videos for playlist: ${playlistId}`);

  const videos = await Cache(url, {
    duration: "1d", // 1 day
    type: "json"    // also supports "text" or "buffer"
  });

  const videoIds = videos.items.map(({ contentDetails }) => contentDetails.videoId);
  const metaInfo = await fetchMetaInfo(videoIds);

  return {
    videos: await Promise.all(videos.items.map(async ({ snippet, contentDetails }) => {
      const hqThumbnail = snippet.thumbnails.maxres || snippet.thumbnails.high || snippet.thumbnails.medium || snippet.thumbnails.default;
      const smallThumbnail = snippet.thumbnails.medium || snippet.thumbnails.default;
      const defaultThumbnail = snippet.thumbnails.high;
      return {
        hqThumbnail,
        smallThumbnail,
        defaultThumbnail,
        channelTitle: snippet.channelTitle,
        channelId: snippet.channelId,
        title: snippet.title,
        id: contentDetails.videoId,
        ...(metaInfo[contentDetails.videoId] || {}),
      };
    })),
    hasMore: Boolean(videos.nextPageToken)
  };
};
The first few things this function does are:
- Set the base URL for the YouTube API
- Set the max number of items in a playlist to return on a page
- Pass in the API key and build up the URL in accordance with the API docs
You will want to store your API key as an environment variable, e.g. const apiKey = process.env.YT_API_KEY;. For production you can add this environment variable wherever you choose to build/host the site, e.g. on Netlify.
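As a side note, the query string can also be assembled with Node's built-in URLSearchParams rather than manual string concatenation, which handles the encoding for you. This is a sketch: buildPlaylistUrl is a name I've made up, and the endpoint shown is the YouTube Data API v3 playlistItems endpoint:

```javascript
// Build the playlistItems query string with URLSearchParams instead of
// hand-encoding. `buildPlaylistUrl` is an illustrative helper name.
const apiUrl = "https://www.googleapis.com/youtube/v3/playlistItems";
const apiKey = process.env.YT_API_KEY || "placeholder-key"; // placeholder fallback

const buildPlaylistUrl = (playlistId, maxResults = 20) => {
  const params = new URLSearchParams({
    key: apiKey,
    part: "snippet,contentDetails", // URLSearchParams encodes the comma for us
    maxResults: String(maxResults),
    playlistId,
    order: "viewCount",
  });
  return `${apiUrl}?${params.toString()}`;
};

console.log(buildPlaylistUrl("PLA0cAQ"));
```

Every value is percent-encoded automatically, so there is no need to sprinkle encodeURIComponent calls through the template string.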
Next up, it fetches some extra metadata. fetchMetaInfo fetches things like view count and likes. This is another API call that we would be concerned about if this were client side, but since it's build time, who cares! The implementation is available on Github.
Finally, I'm looping through all the data and returning an array of videos for each playlist, plus a hasMore flag if the playlist has more than the 20 items shown. In my HTML, when I see this flag I add a link out to YouTube to watch the full playlist.
The above code is a modified version of the original, where I'm doing a few extra things; you can check out the full version on Github.
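The hasMore logic relies on the API returning a nextPageToken whenever more pages exist. It can be sketched on its own with a hand-made stand-in response (summarisePlaylistPage is an illustrative name, not from the original code):

```javascript
// Sketch of the "has more" logic: the API returns at most `maxResults`
// items per page, plus a nextPageToken when further pages exist.
// `summarisePlaylistPage` is an illustrative helper; `page` is fake data.
const summarisePlaylistPage = (response) => ({
  videoIds: response.items.map(({ contentDetails }) => contentDetails.videoId),
  // A present nextPageToken means the playlist spills past this page,
  // so the UI can link out to YouTube for the full list.
  hasMore: Boolean(response.nextPageToken),
});

const page = {
  items: [
    { contentDetails: { videoId: "a1" } },
    { contentDetails: { videoId: "b2" } },
  ],
  nextPageToken: "CAUQAA",
};
console.log(summarisePlaylistPage(page));
```

Boolean(undefined) is false, so a response with no nextPageToken naturally yields hasMore: false without any extra branching.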
Progressive Enhancement
Now I have the website fetching the external data, let's see how I could approach displaying the content in the HTML.
When designing a dynamic experience, it's a good idea to think about the minimal experience you can provide without JavaScript, and build from there.
You could start out very simply and just render a link <a> to the YouTube videos; perhaps the thumbnail could open a tab to YouTube. This needs no JS at all, and is what I did:
{%- if playlist -%}
{%- set firstVideo = playlist.videos[0] -%}
{%- set description = playlist.description or (playlist.templateContent | safe) %}
<youtube-playlist>
  <div class="fallback" slot="fallback">
    <div class="img-btn-wrapper">
      <img decoding="async" loading="lazy"
        width="{{firstVideo.hqThumbnailWidth}}"
        height="{{firstVideo.hqThumbnailHeight}}"
        src="{{firstVideo.hqThumbnailUrl}}" />
    </div>
  </div>
</youtube-playlist>
{%- endif -%}
You will see that I'm wrapping the whole code in a youtube-playlist Custom Element. When the component loads without JavaScript it is just a link out to YouTube, which is then upgraded to a full playlist experience. The upgrade also disables the default "link" behaviour.
I'm not going to go into the implementation of my Web Component in this post, but you can check out the source code on Github. The general idea is to consume <li> list items as child content inside of my <youtube-playlist> and, when JavaScript loads, move this content into the Shadow DOM and make it look pretty/interactive.
Here is my full Nunjucks template for my html:
{%- if playlist -%}
{%- set firstVideo = playlist.videos[0] -%}
{%- set description = playlist.description or (playlist.templateContent | safe) %}
<youtube-playlist>
  <a slot="heading" href="#{{playlist.title | slug }}"><h2>{{playlist.title | safe}}</h2></a>
  <p slot="description">{{description}}</p>
  <div class="fallback" slot="fallback">
    <div class="img-btn-wrapper">
      <img decoding="async" loading="lazy"
        width="{{firstVideo.hqThumbnailWidth}}"
        height="{{firstVideo.hqThumbnailHeight}}"
        src="{{firstVideo.hqThumbnailUrl}}" />
      <svg style="pointer-events:none;" class="playbtn" xmlns="" viewBox="0 0 32 32">
        <g transform="translate(-339 -150.484)">
          <path fill="var(--White, #fff)" d="M-1978.639,24.261h0a1.555,1.555,0,0,1-1.555-1.551V9.291a1.555,1.555,0,0,1,1.555-1.551,1.527,1.527,0,0,1,.748.2l11.355,6.9a1.538,1.538,0,0,1,.793,1.362,1.526,1.526,0,0,1-.793,1.348l-11.355,6.516A1.52,1.52,0,0,1-1978.639,24.261Z" transform="translate(2329 150.484)"/>
          <path fill="var(--Primary, #000)" d="M16.563.563a16,16,0,1,0,16,16A16,16,0,0,0,16.563.563Zm7.465,17.548L12.672,24.627a1.551,1.551,0,0,1-2.3-1.355V9.853a1.552,1.552,0,0,1,2.3-1.355l11.355,6.9A1.553,1.553,0,0,1,24.027,18.111Z" transform="translate(338.438 149.922)"/>
        </g>
      </svg>
    </div>
  </div>
</youtube-playlist>
{%- endif -%}
Using Web Components like this is a perfect way of enhancing a base HTML experience with limited JavaScript.
Periodically building your website
In order to keep the YouTube playlists up to date I want to be able to build the website every day on schedule.
There are many options when it comes to periodically building a website, I wrote about my approach to doing this in: Scheduling builds on Netlify. In brief, I opted to use Circle CI to call my Netlify build hook every day at 3 PM. I tried Github Actions but there is a major limitation to using an Action for this use case, which I go into in the linked article.
Summary
I hope this article was helpful and you can see some of the advantages to moving dynamic content that changes infrequently to be rendered at build time.
If you want to read more of my work, please follow me on Twitter @griffadev, or get me a coffee if you feel like it ☕.
Subject: [boost] [process] Formal Review
From: Nat Goodspeed (nat_at_[hidden])
Date: 2016-11-05 19:44:27
I would like to surface a number of issues, large and small.
You are clearly proud to support a number of different syntactic ways
to express the same semantic operation. To me this is a minor
negative. While in theory it sounds nice to be able to write Process
consumer code any way I want, in a production environment I spend more
time reading and maintaining other people's code than I do writing
brand-new code. Since each person contributing code to our (large)
code base will select his/her own preferred Process style, I will be
repeatedly confused as I encounter each new usage.
I admire the support for synchronous pipe I/O, while remaining
skeptical of its practical utility. Synchronous I/O with a child
process presents many legitimate opportunities for deadlock: traps for
the unwary. I would be content with a combination of:
* async I/O on pipes (yes, using Asio)
* system() for truly simple calls
* something analogous to Python's subprocess.Popen.communicate(): pass
an optional string for the child's stdin, retrieve a string (or, say,
a stringstream) for each of its stdout and stderr.
The example under I/O pipes the stdout from 'nm' to the stdin of
'c++filt'. But the example code seems completely unaware that c++filt
could be delayed for any of a number of reasons. It assumes that as
soon as nm terminates, c++filt must have produced all the output it
ever will. I worry about the Process implementation being confused
about such issues.
I'm dubious about the native_environment / environment dichotomy. As
others have questioned, why isn't 'environment' a typedef for a
map<string, string> (or unordered_map)?
I understand that you desire to avoid copying the native process
environment into such a map until necessary, but to me that suggests
something like an environment_view (analogous to string_view) that can
perform read-only operations on either back-end implementation.
Operations involving splitting and joining on ':' or ';' should be
defined purely in terms of strings and ranges of strings. They should
not be conflated with environment-map support.
The documentation so consistently uses literal ';' as the PATH
separator that I worry the code won't correctly process standard Posix
PATH strings.
At this moment in history, an example showing integration of Process
with Boost.Fiber seems as important as examples involving coroutines.
Why is the native_handle_t typedef in the boost::this_process namespace?
While in full context it makes sense to speak of "assigning" an
individual process to a process group, the method name assign() has
conventional connotations. Use add() instead.
There's a Note that says: "If a default-constructed group is used
before being used in a process launch, the behaviour is undefined." I
assume you mean "destroyed before being used," but this is a concern.
If a program has already instantiated a process group, but for any
reason decides not to (or fails to) launch any more child processes,
does that put the entire parent process at risk? What remedial action
can it take? Move the group object to the heap somewhere and let it
leak? Spawn a bogus child process for the purpose of defusing the
ticking process group instance?
If you're going to reify process group at all, you should wrap more
logic around it to give it well-defined cross-platform behavior. And
it should definitely be legal to create and destroy one without
associating any child process with it.
Given support elsewhere for splitting/joining PATH strings, the
string_type path parameter to search_path() feels oddly low-level.
Maybe accept a range of strings?
> At a minimum, kindly state:
> - Whether you believe the library should be accepted into Boost
> * Conditions for acceptance
YES, IF the present Boost.Process preserves the ability of its
predecessor to extend it without modifying it. I'm sorry, thorough
reading of the documentation plus some browsing through the code
leaves me doubtful.
Examples of such extensions:
* With Boost.Process 0.5 I can use Boost.TypeErasure to pass an
"initializer" which is a runtime collection of other initializers.
Such a feature in Process 0.6 would support people's requests to be
able to assemble an arbitrary configuration with conditional elements
and use it repeatedly.
* I can create an initializer (actually one for each of Posix and
Windows, presenting the same API, so portable code can use either
transparently) to implicitly forcibly terminate the child process
being launched when the parent process eventually terminates in
whatever way.
Please understand that I am not asking for the above features to be
absorbed into the Process library: I am asking for a Process library
extensible enough that I can implement such things with consumer code,
without having to modify the library. Perhaps Process 0.6 already is!
It's just hard for me tell.
Another feature should, in my opinion, be incorporated into the library:
*.
If the library doesn't natively support that, I *must* be able to
pass a custom initializer with deep access to the relevant control
blocks to set it up. This is a showstopper, as in "you can't remove
that file because, unbeknownst to you, some other process has it
open."
> - Your knowledge of the problem domain
I have hand-written process-management code in C++ several times
before -- each with multiple implementations to support multiple
platforms. I have tested the previous candidate Boost.Process 0.5 with
a number of programs exercising a number of features.
> You are strongly encouraged to also provide additional information:
> - What is your evaluation of the library's:
> * Design
Design notes are at the top of this mail.
> * Implementation
I have only glanced over parts of the implementation. It seems
somewhat more obscure than the corresponding pieces of Process 0.5,
which is why I couldn't quickly satisfy myself as to the library's
extensibility.
> * Documentation
Others have noted that the documentation is very sketchy in places. I
too wish for more explanation.
Much of the time I spent on this review was reading through the
documentation. Apologies if I have misunderstood the library's
capabilities.
> * Usefulness
I have felt for years now that it is essential to get a child-process
management library into Boost. It grieves me to have to write
platform-specific API calls into different applications.
> - Did you attempt to use the library?
I did not, sorry, ran out of time -- as you can infer from this review
arriving at the end of the final day!
> - How much effort did you put into your evaluation of the review?
Most of my time was spent reading the documentation. I looked through
a couple of the implementation files for further information.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
Introduction
Hi everyone, and welcome to this spur of the moment tutorial!
Aim of this tutorial
The aim of this tutorial is to provide a general pattern for separating asynchronous and multithreaded work from the UI. The general idea is to separate the logical work out into a separate class, instantiate that class (initialising it with any state data required by the work operation), and then get the result and update the GUI via a callback, in a nice, decoupled manner. The examples use the Windows Forms technology.
Required Knowledge
Before you begin this tutorial, you should have a solid base knowledge of all of C#'s fundamental concepts, and have at least a general knowledge of the different APIs available for performing multithreaded and asynchronous operations in C#. The examples are kept very simple, but basic knowledge is still required.
Why I decided to write this tutorial
So, what has brought this on? Well, I responded to this thread made in the C# forum today about the standard cross thread communication issues that we have all faced when using multithreading in a GUI environment. For example, we have all seen code like this to marshal updates onto the UI thread, I am sure:
this.txtMyTextBox.Invoke(new MethodInvoker(() => this.txtMyTextBox.Text = "Updated Text!"));
Anyway, tlhIn`toq then made a very good point about how it is usually better to decouple the operation running on a background thread from the UI, so I'd like to provide a few examples.

The gist of the point was that rather than have the background thread update the UI directly using Invoke(), the class doing the work should notify the UI (via an event or callback) when it has a result. The UI is left with its single responsibility - updating the UI - and the class encapsulating the background operation is left with its single responsibility - performing the operation.
Although, even with events, you do still have to be aware of the cross thread communication issue, of course. If you simply raise an event (in the standard, everyday way) from the background thread, and have the form handle the event, the handler method will be run on that background thread, thus meaning you would still have to invoke UI updates onto the main thread.
This wouldn't be the end of the world, as you have still effectively decoupled the logical operation from the UI, but we would like to avoid even having to use Invoke() if possible, thus keeping the UI code as focused and simple as possible
Examples
So, let's move onto the examples, staring with the events technique, and moving onto new techniques that have become available in .NET 4.0...
All the examples revolve around executing some arbitrary work, on a background thread, of which eventually completes at which point the GUI is alerted, and the GUI is allowed to update itself, without needing to use Invoke(). They move progressively up from traditional techniques to newer techniques using features new to .NET 4.0.
I am going to encapsulate the work in a class called WorkItem (you could name it something more specific to your application; for example, you could have a class called PrimeNumberCalculator, with a Calculate() method to calculate the numbers and a PrimesCalculated event raised when the calculation is complete), and the work is always going to return a string as a result (just for example's sake - you can change this, or make it more general by returning a generic Result object, perhaps).
So, a button called btnStart is going to start the work going, and a textbox called txtResult is going to be updated with the result.
Example 1: Raising an event to update the UI - The More Traditional Approach
This concept is very closely related to the Event Based Asynchronous pattern, and, thus, in many cases, you may be able to use built in classes that already implement that pattern (or perhaps sub class those classes, and add custom functionality).
The BackgroundWorker class is a commonly used example of this pattern, of which is used to perform compute bound operations on a background thread. Here is a tutorial detailing how to use that class.
That class uses the thread pool (by calling the BeginInvoke() method on a WorkerThreadStartDelegate instance).
This technique is still very worthwhile to know how to implement yourself though. For example, if you don't want to use the thread pool, you may want to create your own, dedicated class. Or, you may just want to make your own class with domain specific events and classes, as it fits better with your scenario/program.
Plus, it doesn't hurt to get an idea of how classes like the BackgroundWorker work internally (at a general level), and it demonstrates the point tlhIn`toq was making.
Let's start with the basic EventArg subclass that will hold the result of our operation (it only has one constructor for this example, you can add more).
//event args containing the result of the WorkItem's work
public class WorkItemCompletedEventArgs : EventArgs
{
    public string Result { get; set; }

    public WorkItemCompletedEventArgs(string result)
    {
        this.Result = result;
    }
}
Pretty straight forward. Its just a class to hold the result our work operation produces.
Next, the WorkItem class, of which will actually do our work.
Note: if you wanted to pass arguments (state data) for use in the operation, you could store those arguments in properties of this class, passing them in via the WorkItem constructor. You could then use those properties in the operation performed by PerformWork().
public class WorkItem
{
    //async operation representing the work item
    private AsyncOperation op;

    //event handler to be run when work has completed with a result
    public event EventHandler<WorkItemCompletedEventArgs> Completed;

    public void DoWork()
    {
        //get new async op object ***from current synchronisation context***
        //which is the caller's sync context (i.e. the form's)
        this.op = AsyncOperationManager.CreateOperation(null);

        //queue work so a thread from the thread pool can pick it
        //up and execute it
        ThreadPool.QueueUserWorkItem((o) => this.PerformWork());
    }

    private void PerformWork()
    {
        //do work here...
        //The work could use passed state data
        //held in properties of this class,
        //if we needed to pass in data from the UI
        Thread.Sleep(5000);

        //once completed, call the post completed method, passing in the result
        this.PostCompleted("Update with result!");
    }

    private void PostCompleted(string result)
    {
        //complete the async operation, calling OnCompleted and passing in the result.
        //The lambda passed into this method is invoked on the synchronisation context
        //the async operation was created on (i.e. the form's)
        op.PostOperationCompleted((o) => this.OnCompleted(new WorkItemCompletedEventArgs(o.ToString())), result);
    }

    protected virtual void OnCompleted(WorkItemCompletedEventArgs e)
    {
        //raise the Completed event ***on the form's synchronisation context***
        EventHandler<WorkItemCompletedEventArgs> temp = this.Completed;
        if (temp != null)
        {
            temp.Invoke(this, e);
        }
    }
}
Now, here is the form's code:
public partial class Form1 : Form
{
    public string Result
    {
        get { return this.txtResult.Text; }
        set { this.txtResult.Text = value; }
    }

    public Form1()
    {
        InitializeComponent();
    }

    private void btnStart_Click(object sender, EventArgs e)
    {
        //start work
        this.StartBackgroundWork();
    }

    private void StartBackgroundWork()
    {
        //create new work item.
        //We could pass any state data to use in the
        //operation into the constructor here. We'd have
        //to write the constructor first though, obviously ;)
        WorkItem item = new WorkItem();

        //subscribe to be notified when the result is ready
        item.Completed += item_Completed;

        //start work going from the form
        item.DoWork();
    }

    //handler method to run when work has completed
    private void item_Completed(object sender, WorkItemCompletedEventArgs e)
    {
        //GUI is free to update itself
        this.Result = e.Result;

        WorkItem item = null;
        if ((item = sender as WorkItem) != null)
        {
            //deregister event handler
            item.Completed -= item_Completed;
        }
    }
}
Right, so let's go through what happens when the user presses the start button...
1) StartBackgroundWork() on the form is called.
2) StartBackgroundWork() creates a new WorkItem instance, and subscribes to its Completed event. So, when that WorkItem raises the Completed event, the item_Completed method will be called. Finally, it sets the work going by calling DoWork().
Note how the form has no idea how the work is done. All it knows is that the work will be completed with a result, sometime in the future.
3) DoWork() first captures the current synchronisation context and encapsulates it in an AsyncOperation object. As DoWork() was called from the form, this means it captures the context of the UI thread. This AsyncOperation object, quite simply, represents our operation! Finally, DoWork() queues PerformWork() to run on a background thread from the thread pool.
4) PerformWork() simulates genuine work by sleeping the thread, and then calls PostCompleted(), passing in the result of the work (which is a hardcoded string in this example).
5) PostCompleted() calls PostOperationCompleted() on the AsyncOperation we created in DoWork(). This calls the lambda expression specified in the first argument (passing the result string to that lambda via the second argument) on the synchronisation context we captured when we created the AsyncOperation object. Thus, we are now back on the UI thread, so when the lambda expression calls OnCompleted(), it runs on the UI thread. We also create a new WorkItemCompletedEventArgs instance to wrap our string result.
6) OnCompleted() then raises our Completed event (passing to it the WorkItemCompletedEventArgs instance containing our result), which calls item_Completed on the UI thread (remember, the form registered to be notified when the Completed event was raised).
7) As we are on the UI thread, item_Completed is free to update the UI without Invoke(). The final thing it does is unsubscribe from the Completed event.
The beauty of that is that the UI knows nothing of how the WorkItem does its work, so it can just concentrate on handling the UI. Further, the WorkItem knows nothing of the UI, and so can be reused with an infinite number of different projects.
That is generally how all the event based asynchronous classes work (WebClient, BackgroundWorker etc), and is a nice, clear pattern to use to produce well designed, asynchronous software.
Example 2: Using new .NET 4.0 Task to perform operation on thread pool thread
I mentioned that the AsyncOperation represents our asynchronous operation. However, .NET 4.0 provides an optimised, easy to use class that abstracts the idea of an asynchronous operation further. It is called the Task class, and is the central part of a new API called the Task Parallel Library (TPL).
We can implement the above pattern in a similar way, without events (albeit using the same general concept of callbacks). However, Tasks provide a number of beneficial, easy to use features that stand it apart from BackgroundWorker class, and the event based pattern in general. I wrote a basic introductory tutorial here.
Let me demonstrate by converting the above example to use the TPL:
//notice, this class is greatly simplified using Tasks
public class WorkItem
{
    public Task<string> DoWork()
    {
        //create a task, which runs our work on a thread pool thread
        return Task.Factory.StartNew<string>(this.PerformWork);
    }

    private string PerformWork()
    {
        Thread.Sleep(5000); //do work here...

        //return result of work
        return "Update with result!";
    }
}
Notice how much simpler our WorkItem class is now.
Here is our form class:
public partial class Form1 : Form
{
    public string Result
    {
        get { return this.txtResult.Text; }
        set { this.txtResult.Text = value; }
    }

    public Form1()
    {
        InitializeComponent();
    }

    private void btnStart_Click(object sender, EventArgs e)
    {
        //start work going, and register a callback using ContinueWith() that is called
        //when the work completes, which updates the UI with the result
        this.StartBackgroundWork()
            .ContinueWith((t) => this.Result = t.Result,
                TaskScheduler.FromCurrentSynchronizationContext());
    }

    private Task<string> StartBackgroundWork()
    {
        //create a new work item, start the work and return
        //the task representing the asynchronous work item
        return new WorkItem().DoWork();
    }
}
Right, let's go through that now, starting at the WorkItem class this time...
DoWork() starts a Task (which returns a string), which grabs a background thread from the thread pool. That thread then runs PerformWork(), which does the work and returns the resulting string, as before.

Notice, however, that DoWork() returns the Task<string> to the caller. So, we are returning our abstract asynchronous operation to the caller. This operation may (if no exceptions occur, etc.) produce a result in the future, and we want the UI to update itself with that result.
Well, the Task<string> representing the operation is returned to the UI, so it has access to it, but how do we get the result?
Easy! We register a callback that is run when the operation completes (this is essentially what we were doing with the Completed event in the event based example).
We do this by calling ContinueWith() on the Task<string>. The lambda passed to ContinueWith() will be called when the Task<string> completes, and that completed task will be passed to it, so we can get its result.
In that lambda, we update the UI with that result. However, notice this line:
TaskScheduler.FromCurrentSynchronizationContext();
Remember how we used an AsyncOperation object to capture the UI's synchronisation context so we could update the UI from the UI thread? Well, that is what this line is doing. It is saying, 'run this callback (represented by the lambda passed as the first argument to ContinueWith()) on the current synchronisation context.' The current context is the UI thread, so that callback (and thus the code to update our textbox) is run on the UI thread, avoiding the need for Invoke().
So, the Task class provides a potentially easier alternative to using standard events. Plus, the Task class has been optimised to work with the thread pool (which will be more efficient than firing up a dedicated thread using the Thread class, as you can reuse previous threads that are sitting in the thread pool, instead of creating a brand new thread every time (which is expensive)).
In fact, the TPL (Task Parallel Library), of which Tasks are central to, is currently the only API that makes use of certain optimisations to the thread pool that the CLR team made. It offers more features the the BackgroundWorker class, and the event based pattern in general.
Example 3: Wrapping a none Task implementation in a Task
The Task API (TPL (Task Parallel Library)) is all well and good. However, sometimes, the thread pool (of which Tasks use) cannot really be used for your operations.
For example, if you have a loop with many iterations (1000+), and each iteration fires up a Task that may take a while to complete, you run the risk of starving the thread pool and running out of threads. (I think the pool currently has a default maximum, in a 32-bit process, of around 2000 threads - this is always subject to change by Microsoft. You can change that limit, but it isn't usually a great idea to do so.)
If you have a very long running task, it is potentially better, and more efficient to start a brand new, dedicated thread, rather than tie up a thread pool thread. However, you then lose the benefit of having the 'fluffy', simple to use Task object to work with.
So, what to do?
Well, you can use the Thread class to start work items going and do the work, but wrap the implementation in a Task, so that callers can work with a Task, thus getting the practical benefits of using the Thread class to run a long running piece of computationally intensive work, but getting the ease of use of the TPL API also!
To do this, here is the updated WorkItem class:
public class WorkItem
{
    //this produces a task for us
    private TaskCompletionSource<string> completionSource;

    public Task<string> DoWork()
    {
        //create a new source of tasks
        this.completionSource = new TaskCompletionSource<string>();

        //start work going using the ***Thread class***...
        new Thread(this.PerformWork).Start();

        //...however, return a task from the source to the caller
        //so they get to work with the easy to use Task.
        //We are providing a Task facade around the operation
        //running on the dedicated thread
        return this.completionSource.Task;
    }

    private void PerformWork()
    {
        Thread.Sleep(5000); //do work here...

        //set the result of the Task here, which completes the task
        //and thus schedules any callbacks the caller registered
        //with ContinueWith() to run
        this.completionSource.SetResult("Update with result!");
    }
}
The Form class stays exactly as it was in the previous (Task) example, as we are still returning a Task<string> to the UI.
So, what are we doing here?
Well, when DoWork() is called, a new TaskCompletionSource<string> object is created. This object has the capability to produce a Task<string> object for us, on demand.
So, we start the work going on a background thread using the Thread class (nothing to do with the Task class), and we then grab a Task<string> from the TaskCompletionSource<string> object, and return that to the caller (the UI).
Therefore, the UI still has its Task<string> representing the operation, and it can register callbacks using ContinueWith() and do everything that the Task<T> class allows!
What about when the work is finished though? Even though the caller has a Task<string>, we aren't actually using a Task<string> to do our work. We are using the Thread class! So, how do we signal that the work has completed, allowing any ContinueWith() callbacks the calling Form class has registered to run?
Simple! We call SetResult() (passing in the hardcoded string result) on the TaskCompletionSource<string> object that produced the Task<string>. That transitions the Task<string> that the UI is working with into a 'completed' state, and schedules any registered callbacks to run!
I find this quite spectacular actually. The Task<T> class provides a very easy to use interface for us developers to interact with. Now, with TaskCompletionSource<string>, we can wrap any operation we want in a Task!
We can change the WorkItem class to use the Thread class behind the scenes, instead of the ThreadPool class, and the Form class would never know, as it would still be getting its Task<string> so it would be happy. Hell, we can even change the WorkItem class to use the BackgroundWorker class behind the scenes, or we could even run the work synchronously, with no extra threads at all, if we really wanted!
Quick note on I/O vs Compute Bound Operations
The BackgroundWorker class, and the Thread and ThreadPool classes, are for compute bound operations. So, what if you have an I/O bound operation (reading from a file, for example)?
Well, firstly, note that running I/O bound operations in a dedicated background thread is wasteful, as the executing thread just sits blocked, doing nothing, while the I/O subsystem completes the operation.
Because of this potential inefficiency, .NET provides methods that perform asynchronous I/O. This is asynchrony without the use of a dedicated thread using I/O completion ports.
The key example of these methods is the APM API. It uses pairs of methods that take the following pattern - BeginXXX()/EndXXX() - to perform asynchronous I/O in the most efficient way possible (using I/O completion ports).
How can we use these with our Task's though? Well, we can use the technique shown in the previous example to wrap the APM pattern in a Task facade. This means we gain the efficiency benefits of asynchronous I/O for our I/O bound operation, but maintain many of the benefits of the TPL.
(Note: For a little more info on why you shouldn't use a dedicated thread for I/O bound operations, see here.)
Happily, the TaskFactory<T> and TaskFactory classes already have methods built in for this I/O bound scenario. The FromAsync() methods of those classes allow you to wrap I/O bound operations implementing the BeginXXX()/EndXXX() APM pattern in a Task facade automatically for you.
FromAsync() uses TaskCompletionSource<T> behind the scenes to wrap the APM implementation in a Task object.
An introduction to the FromAsync() method is given here, and see here also.
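As an illustrative sketch (not from the original post; the file name, buffer size and the blocking .Result call at the end are arbitrary demonstration choices), FromAsync() can wrap FileStream's BeginRead()/EndRead() APM pair in a Task<int> like this:

```csharp
using System;
using System.IO;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

class FromAsyncDemo
{
    static void Main()
    {
        File.WriteAllText("data.txt", "hello from async I/O");

        //FileOptions.Asynchronous requests true asynchronous (overlapped) I/O
        using (var stream = new FileStream(
            "data.txt", FileMode.Open, FileAccess.Read,
            FileShare.Read, 4096, FileOptions.Asynchronous))
        {
            var buffer = new byte[1024];

            //Wrap the BeginRead/EndRead APM pair in a Task<int> facade.
            //Behind the scenes this uses TaskCompletionSource<int>.
            Task<int> readTask = Task<int>.Factory.FromAsync(
                stream.BeginRead, stream.EndRead, buffer, 0, buffer.Length, null);

            //We now have all the usual Task goodness (ContinueWith() etc.)
            //over an efficient asynchronous I/O operation. Blocking on
            //.Result here is just to keep the demo simple.
            int bytesRead = readTask.Result;
            Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, bytesRead));
        }
    }
}
```

In real code you would register a ContinueWith() callback on readTask rather than block on .Result, exactly as in the earlier examples.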
Anyway, the point to take from this is that we can put a Task facade around any operation (I/O bound or compute bound), and this provides unbelievable flexibility, as it means you can use tasks for ANY arbitrary operation, and I believe this is what you should now do (if using .NET 4.0) when looking to introduce asynchrony into your applications!
Example 4: New Async CTP
I couldn't talk about this topic without briefly mentioning the new Async CTP (note that it is still only a CTP at the moment, so no guarantees are made with it, and you have to download the .dll to use it).
If (or rather, when) this pattern does become a mainstream part of the language, it's going to make things like this even more awesome! It is based heavily on the TPL, and so any Task implementation can be converted to use this pattern.
For example, both the ordinary Task implementation and the 'Task Wrapper' implementation (shown in example 2 and 3 respectively above) can be simplified further (and converted to the new async pattern) just by changing the btnStart_Click method handler to this:
private async void btnStart_Click(object sender, EventArgs e) { this.Result = await this.StartBackgroundWork(); }
No (visible) callbacks are now needed in your code, and no changes were made to any of the other code from example 2 or 3! I simply inserted two strategically placed contextual keywords into the above method, and removed the ContinueWith() call.
That is the beauty of this pattern. You can hardly tell the difference in the calling code between synchronous and asynchronous code any more! The calling code is almost completely unaware that it is calling a method asynchronously!
In short, Tasks are the way .NET asynchrony is going, and I think you should first turn to them when looking to perform any asynchronous operation; compute bound (using standard tasks to grab a thread pool thread to run the work on) OR I/O bound (wrap the asynchronous I/O methods that exist in the framework in a Task facade). Actually, this is particularly relevant for the APM pattern, which typically does not produce very readable code. The Task wrapper adds readability and ease of use.
Conclusion
So there you go, 4 different ways to achieve better designed, loosely coupled asynchronous code going forward (all demonstrating the same general pattern), and there is no Invoke() anywhere in sight!
The examples given are in the simplest form, but demonstrate the general idea on which you can adapt to your application. There is no real notion of error/exception handling etc.
So to summarize, generally (there will naturally be exceptions), I would use example 1's event based technique (potentially using the built in BackgroundWorker for compute bound operations, depending on the scenario) if I wasn't using .NET 4.0 (or greater). If I had .NET 4.0 (or greater) at my disposal, (at the moment) I would use the technique of creating tasks used in example 2 for compute bound operations, and example 3's technique for I/O bound operations. If and when the async CTP becomes a mainstream part of the language, I would use that for pretty much everything!
Tasks are the way we should generally be looking as C# developers!
The idea of code separation demonstrated in this tutorial is a slight specialisation of the more general concept - You should always try to separate logic out of the GUI class(es).
The ideas above give you possible ways to do so in a multithreaded environment.
Further, you can change the UI (add new controls, take controls away, rename controls, even switch to a different UI technology altogether (switch from WinForms to WPF, for example)), and it won't affect the class containing the operation logic in any way!
We have also moved towards another important design goal: separating the code that performs the work from the code that starts up the threads. I have done this by making a dedicated method that performs purely sequential work (PerformWork()), and then used another method to call that method from a separate thread. Thus, we can perform the work both sequentially and in a multithreaded manner very easily.
This is a great advantage of the Task implementation (and thus the async implementation too) (example 2 and 4 particularly). You can write completely sequential methods, that can be executed as such, and add threading support as a separate, independent feature. This greatly aids testing, amongst other things.
Thanks for reading!
P.s. Just a little heads up. I'm in the middle of writing a Threading tutorial for the C# Learning Series. In that, I will go into the technical details of the different APIs available for multithreading and asynchrony, for both compute bound and I/O bound operations.
http://www.dreamincode.net/forums/topic/246911-c%23-multi-threading-in-a-gui-environment/page__pid__1470367__st__0&#entry1470367
Feature:Pragmas
As interop is never 100%, we need a uniform indicator for implementation-specific language extensions. The XQuery approach can be reused.
Feature description
We propose an XQuery-like feature for pragmas in SPARQL. The syntax may be (* qname ... *) and would be allowed in places where it
- Affects the whole query, as in the preface with namespace declarations
- Affects a triple pattern
- Affects a group pattern or subquery. What comes after the qname depends on the qname.
An implementation would signal an error if the qname were unknown to it. Thus a query could assert that it requires certain functionality. Note that not all functionality is designated by special syntax; for example, run time inference does not have a corresponding syntactic construct, but still a query might state that it expects a certain inference to be made.
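To make the proposed syntax concrete, here is a purely hypothetical sketch (the pragma qname and its meaning are invented for illustration) of a query-level pragma:

```sparql
PREFIX ex: <http://example.org/pragmas#>
# Hypothetical pragma: assert that the endpoint must apply RDFS inference,
# signalling an error if it does not understand ex:requireInference
(* ex:requireInference "rdfs" *)
SELECT ?agent WHERE { ?agent a ex:Agent }
```

An endpoint that does not recognize ex:requireInference would reject the query, which is exactly the behaviour the proposal asks for.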
https://www.w3.org/2009/sparql/wiki/Feature:Pragmas
Asyncore-based asynchronous task queue for Plone
Project description
collective.taskqueue
collective.taskqueue enables asynchronous tasks in Plone add-ons by providing a small framework for asynchronously queueing requests to ZPublisher. With this approach, asynchronous tasks are just normal calls to normally registered browser views (or other traversable callables), and they are authenticated using PAS.
Queue a task:

from collective.taskqueue import taskqueue
task_id = taskqueue.add('/Plone/path/to/view')  # path to a registered view; illustrative
taskqueue.add returns a uuid-like id for the task, which can be used e.g. to track the task status later. The task id is later provided as the X-Task-Id header in the queued request. You can get it in your task view with self.request.getHeader('X-Task-Id').
Taskqueue API has been inspired by Google AppEngine Task Queue API.
Introspecting queues
As a minimalistic asynchronous framework for Plone, collective.taskqueue does not provide any user interface for observing or introspecting queues. Yet, from trusted Python, it is possible to look up the current length of a named queue (the name of the default queue is “default”):
from zope.component import getUtility
from collective.taskqueue.interfaces import ITaskQueue

len(getUtility(ITaskQueue, name='default'))
1.0 (2020-02-10)
- Add support for Plone 5.2 [gforcada]
- Fix to use @implementer decorator [petschki]
0.8.2 (2017-01-03)
- Fix issue where got bool header value caused error on task creationg [datakurre]
0.8.1 (2017-01-02)
- Fix issue where task queue request with method POST created from GET method failed because of empty payload in the original request [datakurre]
0.8.0 (2015-12-13)
- Add support for Plone 5 [datakurre]
- Fix issue where additional params could not be appended for url with query string [datakurre]
0.7.1 (2015-01-26)
- Fix problems with conflicting transactions: only enqueue tasks when transaction is successfully finished. [jone]
0.7.0 (2014-12-29)
- Replace NoRollbackSavepoint with rollback ‘supporting’ DummySavepoint [datakurre]
0.6.2 (2014-12-19)
- Add minimal savepoint-support with NoRollbackSavepoint [datakurre]
0.6.1 (2014-08-05)
- Fix issue where language bindings are not set for task queue requests, because the request is not HTTPRequest, but only inherits it [datakurre]
0.6.0 (2014-05-19)
- Add taskqueue.add to return task id, which later matches request.getHeader(‘X-Task-Id’) [datakurre]
0.5.1 (2014-05-14)
- Fix issue where concurrent task counter mutex was not released due to uncaught exception [datakurre]
- Fix issue where a socket in asyncore.map was closed during asyncore.poll [datakurre]
0.5.0 (2014-04-03)
- Fix threading and execution order related issue where currently active Redis tasks were requeued (and processed more than once) [datakurre]
- Add ‘X-Task-Id’-header to help keeping track of tasks n consuming views [datakurre]
0.4.4 (2013-11-25)
- Fix regression where redis+msgpack where accidentally always required [#7] [datakurre]
- Update docs [Dylan Jay]
- Fix default for ‘unix_socket_path’ [fixes #8] [Dylan Jay]
0.4.3 (2013-11-15)
- Update README [datakurre]
0.4.2 (2013-11-15)
- Updated README [datakurre]
0.4.1 (2013-11-14)
- Updated README [datakurre]
0.4.0 (2013-11-14)
- Refactor configuration by replacing explicit utilities and <product-configuration/> with <taskqueue/>-component [datakurre]
0.3.1 (2013-11-13)
- Enhance acceptance testing support with the first acceptance tests [datakurre]
0.3.0 (2013-11-10)
- Fix TaskQueueServer to re-connect to Redis after Redis restart [datakurre]
- Fix to ping Redis on Zope start only in development mode [datakurre]
- Add optional Task Queue PAS plugin to independently authenticate queued tasks as their creator [datakurre]
0.2.2 (2013-11-09)
- Fix to flush Redis pub-notifications only when queue has been emptied to ensure that all messages will be processed [datakurre]
0.2.1 (2013-11-09)
- Fix taskqueue socket to be not readable by default [datakurre]
0.2.0 (2013-11-09)
- Enhance Redis-integration to connect redis notification pubsub-socket directly to asyncore on instant message handling [datakurre]
- Fix to require redis >= 2.4.10 [fixes #2] [datakurre]
- Fix to not start with clear error when clearly intending to use RedisTaskQueues without redis-dependencies. Also crash when cannot connect to Redis. [fixes #1] [datakurre]
0.1.0 (2013-11-03)
- First release for experimental use.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/collective.taskqueue/
Created on 2019-07-01 06:47 by christian.heimes, last changed 2019-07-05 09:49 by vstinner. This issue is now closed.
inet_aton accepts trailing characters after a valid IP. This, in combination with its use inside ssl.match_hostname, allows the following code to work when it should fail:
import ssl
cert = {'subjectAltName': (('IP Address', '1.1.1.1'),)}
ssl.match_hostname(cert, '1.1.1.1 ; this should not work but does')
The bug was initially found by Dominik Czarnota and reported by Paul Kehrer.
The issue was introduced in commit aef1283ba428e33397d87cee3c54a5110861552d / bpo-32819. Only 3.7 and newer are affected. It's a potential security bug, although of low severity. For one, Python 3.7 and newer no longer use ssl.match_hostname() to verify hostnames and IP addresses of a certificate; matching is performed by OpenSSL.
> It's a potential security bug although low severity.
What is the worst that can happen with this issue?
Either the client doesn't validate the cert at all, and so this issue has no impact, or the client validates the cert and trusts the CA, but the host isn't fully validated... Ok, but could someone abuse "1.1.1.1 ; this should not work but does"? Does a web browser accept such a hostname? Or can it be used to inject SQL or a shell command, for example?
Ping. At the moment, this is the only release blocker preventing the release of 3.7.4rc2.
As far as I know you can't request a hostname with spaces in it (which seems to be a precondition to trigger this bug) so I think an attacker cannot even create a malicious CA that would be mistakenly accepted by match_hostname.
Riccardo, the issue is about parsing the user supplied hostname/ipaddress, not the IPAddress field of the certificate. X.509 certs store IP addresses as fixed-size binary data, 4 bytes for IPv4 or 16 bytes for IPv6. There can't be any additional payload.
The bug is in the code that parses the user supplied "hostname" parameter to ssl.match_hostname(cert, hostname). The bug allows an attacker to pass an IPv4 address with additional content and ssl.match_hostname() ignores this additional content. This example should fail, but does not fail with an exception:
>>> import ssl
>>> cert = {'subjectAltName': [('IP Address', '127.0.0.1 additional payload')]}
>>> ssl.match_hostname(cert, '127.0.0.1')
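As an illustrative sketch (not part of the issue itself), a strict quad-dotted check can be expressed with socket.inet_pton(), which rejects trailing characters, unlike inet_aton() on some libc versions:

```python
import socket

def is_quad_dotted_ipv4(addr):
    """Strict IPv4 check: inet_pton rejects trailing characters,
    unlike inet_aton on some libc versions."""
    try:
        socket.inet_pton(socket.AF_INET, addr)
        return True
    except OSError:
        return False

print(is_quad_dotted_ipv4('127.0.0.1'))                     # True
print(is_quad_dotted_ipv4('127.0.0.1 additional payload'))  # False
```

The actual fix in ssl.match_hostname() adds an equivalent requirement that the supplied address be exactly a quad-dotted IPv4 string.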
FTR 3.8b2 is also waiting for this fix due to the expert's (that's you, Christian!) priority setting.
New changeset 477b1b25768945621d466a8b3f0739297a842439 by Miss Islington (bot) (Christian Heimes) in branch 'master':
bpo-37463: match_hostname requires quad-dotted IPv4 (GH-14499)
New changeset 3cba3d3c55f230a59174a0dfcafb1d4685269e60 by Miss Islington (bot) in branch '3.8':
bpo-37463: match_hostname requires quad-dotted IPv4 (GH-14499)
New changeset 024ea2170b7c1652a62cc7458e736c63d4970eb1 by Miss Islington (bot) in branch '3.7':
bpo-37463: match_hostname requires quad-dotted IPv4 (GH-14499)
Ned, Łukasz, thanks for your patience.
New changeset 070fae6d0ff49e63bfd5f2bdc66f8eb1df3b6557 by Ned Deily (Christian Heimes) in branch '3.7':
bpo-37463: match_hostname requires quad-dotted IPv4 (GH-14499)
> inet_aton accepts trailing characters after a valid IP.
There is a little bit of confusion between getaddrinfo() and inet_aton() here (the linked report is about getaddrinfo()). getaddrinfo() has been fixed upstream.
But glibc devs don't want to fix inet_aton() to keep the backward compatibility ("for historic reasons"): more info in bpo-37495 "socket.inet_aton parsing issue on some libc versions".
This issue is about ssl.match_hostname() which uses internally socket.inet_aton(). ssl.match_hostname() has been fixed to implement further checks to workaround inet_aton() behavior (ignore extra string after a whitespace).
I also removed inet_aton() from the title of this issue to reduce confusion ;-)
https://bugs.python.org/issue37463
Rich Megginson wrote:

On 02/26/2014 08:53 AM, Petr Viktorin wrote:
> On 02/26/2014 04:45 PM, Rich Megginson wrote:
>> I'm working on adding support for freeipa DNS to openstack designate (DNSaaS). I am assuming I need to use RPC (XML? JSON? REST?) to communicate with freeipa. Is there documentation about how to construct and send RPC messages?
> The JSON-RPC and XML-RPC API is still not "officially supported" (read: documented), though it's extremely unlikely to change. If you need an example, run any ipa command with -vv, this will print out the request & response. API.txt in the source tree lists all the commands and params. This blog post still applies (but be sure to read the update about --cacert).

Next question is - how does one do the equivalent of the curl command in python code?
Here is a pretty stripped-down way to add a user. Other commands are similar, you just may care more about the output:
from ipalib import api
from ipalib import errors

api.bootstrap(context='cli')
api.finalize()
api.Backend.xmlclient.connect()

try:
    api.Command['user_add'](u'testuser', givenname=u'Test',
                            sn=u'User', loginshell=u'/bin/sh')
except errors.DuplicateEntry:
    print "user already exists"
else:
    print "User added"
https://www.mail-archive.com/freeipa-devel@redhat.com/msg19437.html
I have a Python script that reads a file (typically from optical media) marking the unreadable sectors, to allow a re-attempt to read said unreadable sectors on a different optical reader.
I discovered that my script does not work with block devices (e.g. /dev/sr0), in order to create a copy of the contained ISO9660/UDF filesystem, because os.stat().st_size is zero. The algorithm currently needs to know the filesize in advance; I can change that, but the issue (of knowing the block device size) remains, and it's not answered here, so I open this question.
I am aware of the following two related SO questions:
- Determine the size of a block device (/proc/partitions, ioctl through ctypes)
- how to check file size in python? (about non-special files)
Therefore, I'm asking: in Python, how can I get the file size of a block device file?
1:

The "most clean" (i.e. not dependent on external volumes and most reusable) Python quick fix I've reached is to open the device file and seek to the end, returning the file offset:

import os

def get_file_size(filename):
    "Get the file size by seeking at end"
    fd = os.open(filename, os.O_RDONLY)
    try:
        return os.lseek(fd, 0, os.SEEK_END)
    finally:
        os.close(fd)
2:

Linux-specific ioctl-based solution. Other unixes will have different values for req, buf, fmt of course.
import fcntl
import struct

device_path = '/dev/sr0'
req = 0x80081272  # BLKGETSIZE64, result is bytes as unsigned 64-bit integer (uint64)
buf = ' ' * 8
fmt = 'L'

with open(device_path) as dev:
    buf = fcntl.ioctl(dev.fileno(), req, buf)
bytes = struct.unpack(fmt, buf)[0]
print device_path, 'is around', bytes / (1024 ** 2), 'megabytes'
3:
Trying to adapt from another answer:

I don't have a suitable computer at hand to test this. I'd be curious to know if it works :).
import fcntl

c = 0x00001260  # check man ioctl_list, BLKGETSIZE
f = open('/dev/sr0', 'r')  # 'ro' is not a valid mode; read-only is 'r'
s = fcntl.ioctl(f, c)
print s
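As a sanity check that needs no optical drive, the seek-to-end approach from the first answer also works on regular files; here is a Python 3 sketch (the temporary file is just for demonstration):

```python
import os
import tempfile

def device_size(path):
    """Return the size in bytes of a file or block device by
    seeking to the end of its file descriptor."""
    fd = os.open(path, os.O_RDONLY)
    try:
        return os.lseek(fd, 0, os.SEEK_END)
    finally:
        os.close(fd)

# Demonstrate on a regular file; on Linux the same call
# also reports the size of a block device such as /dev/sr0.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b'x' * 4096)

print(device_size(tmp.name))  # 4096
```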
http://media4u.ir/?t=1344&e=1586501488&ref=back40news&z=Query+size+of+block+device+file+in+Python
Hi. I want to solve this problem using a Stack and a LinkedList (I know it can be solved with a different approach) and I've debugged my code using Eclipse. The error I get is when the input is {1,2,3} and should return {1,3,2} -- it says my algorithm returns {1,2,3} instead.
The main idea of my algorithm is to use these data structures' methods to easily retrieve the data I store in them in the order requested. I use a double length so I can use Math.ceil() to store the data in the correct order. If length is 3, Math.ceil(3/2) = 2, thus, it stores 1 and 2 in the LinkedList and 3 in the stack.
If the length > 0, the first element would be from the LinkedList; therefore, the last part of the algorithm selects an item from the stack (if not empty) and then from the list (if not empty). This seems correct when debugged in my development environment, but not when submitted.
I would appreciate some feedback and comments.
Cheers.
import java.util.*;

public class Solution {
    public void reorderList(ListNode head) {
        Stack<Integer> S = new Stack<Integer>();
        LinkedList<Integer> L = new LinkedList<Integer>();
        double length = 0;
        boolean empty = true;
        ListNode temp = head;
        while (temp != null) {
            length++;
            temp = temp.next;
        }
        if (length > 0) empty = false;
        temp = head;
        if (!empty) {
            for (int i = 0; i < (int) Math.ceil(length / 2); i++) {
                L.addLast(temp.val);
                temp = temp.next;
            }
            for (int i = (int) Math.ceil(length / 2); i < length; i++) {
                S.push(temp.val);
                temp = temp.next;
            }
        }
        if (length > 0) {
            head = new ListNode(L.removeFirst());
            length--;
        }
        temp = head;
        while (length > 0) {
            if (!S.empty()) {
                temp.next = new ListNode(S.pop());
                length--;
                temp = temp.next;
                if (!L.isEmpty()) {
                    temp.next = new ListNode(L.remove());
                    length--;
                    temp = temp.next;
                }
            } else if (!L.isEmpty()) {
                temp.next = new ListNode(L.remove());
                length--;
                temp = temp.next;
                if (!S.empty()) {
                    temp.next = new ListNode(S.pop());
                    length--;
                    temp = temp.next;
                }
            }
        }
    }
}
Looks like your code does manage to reorder the linked list, but it is not done in-place (i.e. you have created new copies of the nodes). Your code won't pass if the test code is written this way:
head = ...;          // Create a linked list
temp = head;         // Create an alias
reorderList(temp);   // The object that temp refers to is changed
assert(compare(head, test_head)); // But head still refers to the original linked list, which is unaltered
I don't know how Leetcode exactly codes the testing process, but it seems that it is something similar to what I described, since the following function:
public void reorderList(ListNode head) {
    head = null;
    return;
}
would also report
Input: {1,2,3} Output: {1,2,3} Expected: {1,3,2}
So try to code up something that does modify the original linked list, or better yet, without the need of any other extra memory space such as a stack.
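For comparison, a self-contained sketch (with its own minimal ListNode class, since LeetCode normally supplies one) of the usual in-place approach: find the middle, reverse the second half, then interleave the two halves. It mutates the original nodes and needs no extra stack or list:

```java
public class Reorder {
    static class ListNode {
        int val;
        ListNode next;
        ListNode(int v) { val = v; }
    }

    // Reorder e.g. {1,2,3,4} into {1,4,2,3} in place.
    static void reorderList(ListNode head) {
        if (head == null || head.next == null) return;

        // 1. Find the middle (slow ends at the last node of the first half).
        ListNode slow = head, fast = head;
        while (fast.next != null && fast.next.next != null) {
            slow = slow.next;
            fast = fast.next.next;
        }

        // 2. Detach and reverse the second half.
        ListNode second = slow.next;
        slow.next = null;
        ListNode prev = null;
        while (second != null) {
            ListNode nxt = second.next;
            second.next = prev;
            prev = second;
            second = nxt;
        }

        // 3. Interleave the first half with the reversed second half.
        ListNode first = head;
        while (prev != null) {
            ListNode n1 = first.next, n2 = prev.next;
            first.next = prev;
            prev.next = n1;
            first = n1;
            prev = n2;
        }
    }

    public static void main(String[] args) {
        ListNode head = new ListNode(1);
        head.next = new ListNode(2);
        head.next.next = new ListNode(3);
        reorderList(head);
        StringBuilder sb = new StringBuilder();
        for (ListNode n = head; n != null; n = n.next) sb.append(n.val);
        System.out.println(sb);  // prints 132
    }
}
```

Because the nodes themselves are relinked, the caller's head reference still sees the reordered list, which is exactly what the test harness checks.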
https://discuss.leetcode.com/topic/1990/error-in-my-implementation
This post is going to be a tad different and longer than what you are used to but I promise, it's going to be an interesting one. We are going to build a serverless React + GraphQL Web app with Aws amplify and AppSync.
What is Aws AppSync?
Aws AppSync helps us create a serverless backend for Android or IOS or Web apps.
It integrates with Amazon DynamoDB, Elasticsearch, Cognito, and Lambda, enabling you to create sophisticated applications, with virtually unlimited throughput and storage, that scale according to your business needs.
AppSync also enables real-time subscriptions as well as offline access to app data.
When an offline device reconnects, AppSync syncs only the updates that occurred while the device was offline, and not the entire database.
How does AppSync Works?
We'll create our GraphQL schema by using the AppSync visual editor or the Amplify cli. Once that's done, AppSync takes care of everything else, like provisioning DynamoDB resources and creating resolver functions for our schema.
Getting Started with the Amplify Framework
First, we need to install the Amplify command line tool which is used to used to create and maintain serverless backends on AWS.
Run the below command to install the
aws-amplify.
npm install -g @aws-amplify/cli
Mac users need to use
sudo before
npm.
Once you have successfully installed it, you need to configure your AWS account by running the following command.
amplify configure
Watch this video to configure your cli with your Aws account.
Create React App
Use create-react-app to create the react app:

npx create-react-app awsgraphql-react

The above command will download the required files into the "awsgraphql-react" folder to start the react app. Run cd awsgraphql-react to change the working directory.
Adding GraphQL Backend
Run the following command to initialize the new amplify project.
amplify init
It prompts with different questions like choosing your favorite code editor and type of app you are building.
Now open your project folder in your code editor; you will see an amplify folder and a .amplifyrc file added to your react app.
Once you have successfully initialized the amplify project, it's time to add an AppSync GraphQL API to our project by running the following command.
amplify add api
This command will prompt with two options, REST or GraphQL; choose GraphQL.
? Please select from one of the below-mentioned services (Use arrow keys) ❯ GraphQL REST
Name your GraphQL endpoint and choose the authorization type API key.
? Please select from one of the below mentioned services GraphQL ? Provide API name: awsgraphqlreact ? Choose an authorization type for the API (Use arrow keys) ❯ API key Amazon Cognito User Pool
Now you need to select the following options.
? Do you have an annotated GraphQL schema? No ? Do you want a guided schema creation? Yes ? What best describes your project: Single object with fields (e.g., “Todo” with ID, name, description) ? Do you want to edit the schema now? Yes
Let's edit our schema before pushing it to AWS. Open your graphql schema, which is located at amplify/backend/api/awsgraphqlreact/schema.graphql.
Remove everything and add the schema below.
type Post @model {
  id: ID!
  title: String!
  body: String!
  createdAt: String!
}
This is a Post object type with four fields: ID, title, body and createdAt.

@model: This is a model directive which tells the amplify cli to store the annotated type in a DynamoDB table.
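For intuition, the @model directive makes the amplify cli generate queries, mutations and subscriptions for the type under src/graphql/; the generated create mutation looks roughly like this (the exact shape may vary between amplify cli versions):

```graphql
mutation CreatePost($input: CreatePostInput!) {
  createPost(input: $input) {
    id
    title
    body
    createdAt
  }
}
```

These generated operations are what we will import from ./graphql/queries and ./graphql/mutations later in this post.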
Now run the below command to update your backend schema.
amplify push
This command will prompt with the following questions; choose yes for every question.
| Category | Resource name   | Operation | Provider plugin   |
| -------- | --------------- | --------- | ----------------- |
| Api      | awsgraphqlreact | Create    | awscloudformation |

? Are you sure you want to continue? Yes
GraphQL schema compiled successfully.
Edit your schema at /Users/saigowtham/Desktop/awsgraphql-react/amplify/backend/api/awsgraphqlreact/schema.graphql
If you open your aws console, you can see a complete schema file with queries, mutations and resolver functions, created by the aws-amplify cli using our Post object type.
Connecting GraphQL Api to React
Now we are connecting our GraphQL backend with the react app. For this, we first need to install the following packages.
npm install aws-appsync graphql-tag react-apollo
Once they are successfully installed, open your index.js file in your react app and add the below configuration.
import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import App from './App';
import gql from 'graphql-tag';
import AWSAppSyncClient, { AUTH_TYPE } from 'aws-appsync';
import aws_config from './aws-exports';

const client = new AWSAppSyncClient({
  url: aws_config.aws_appsync_graphqlEndpoint,
  region: aws_config.aws_appsync_region,
  auth: {
    type: AUTH_TYPE.API_KEY,
    apiKey: aws_config.aws_appsync_apiKey,
  },
});

ReactDOM.render(<App />, document.getElementById('root'));
After that, we import the AWSAppSyncClient constructor and AUTH_TYPE from the aws-appsync package, and aws_config from the ./aws-exports file, which is created automatically by the amplify cli.

Next, we instantiate the new AWSAppSyncClient client by passing in the aws_config values.
Running the first query
In GraphQL, a 'query' is used to fetch data from the graphql endpoint.
import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import App from './App';
import gql from 'graphql-tag';
import AWSAppSyncClient, { AUTH_TYPE } from 'aws-appsync';
import aws_config from './aws-exports';
import { listPosts } from './graphql/queries';

const client = new AWSAppSyncClient({
  url: aws_config.aws_appsync_graphqlEndpoint,
  region: aws_config.aws_appsync_region,
  auth: {
    type: AUTH_TYPE.API_KEY,
    apiKey: aws_config.aws_appsync_apiKey,
  },
});

client.query({ query: gql(listPosts) }).then(({ data }) => {
  console.log(data);
});

ReactDOM.render(<App />, document.getElementById('root'));
In the code above, we invoke the client.query method by passing the listPosts query, which is generated automatically by aws-amplify based on our graphql endpoint.
You'll find the data of this query logged inside your browser console.
Since we don't have any data in our dynamodb table yet, we got 0 items, which is what we should expect.
Let's use react-apollo to run the queries and mutations from the UI.
index.js
import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import App from './App';
import AWSAppSyncClient, { AUTH_TYPE } from 'aws-appsync';
import aws_config from './aws-exports';
import { ApolloProvider } from 'react-apollo';

const client = new AWSAppSyncClient({
  url: aws_config.aws_appsync_graphqlEndpoint,
  region: aws_config.aws_appsync_region,
  auth: {
    type: AUTH_TYPE.API_KEY,
    apiKey: aws_config.aws_appsync_apiKey,
  },
});

ReactDOM.render(
  <ApolloProvider client={client}>
    <App />
  </ApolloProvider>,
  document.getElementById('root')
);
Next we import the ApolloProvider component from react-apollo and wrap our App component with it, passing the client so that we can access the client anywhere in our react app.
Creating a Post
We need to create a new component called CreatePost in the createPost.js file, which helps us run the Mutation and add data to our backend.
createPost.js
import React from "react";
import { Mutation } from "react-apollo";
import { createPost } from "./graphql/mutations";
import gql from "graphql-tag";

class CreatePost extends React.Component {
  handleSubmit = (e, createPost) => {
    e.preventDefault();
    createPost({
      variables: {
        input: {
          title: this.title.value,
          body: this.body.value,
          createdAt: new Date().toISOString()
        }
      }
    }).then(res => {
      this.title.value = "";
      this.body.value = "";
    });
  };

  render() {
    return (
      <div>
        <h1>Create post</h1>
        <Mutation mutation={gql(createPost)}>
          {(createPost, { data, loading, error }) => {
            return (
              <div>
                <form
                  className="add-post"
                  onSubmit={e => this.handleSubmit(e, createPost)}
                >
                  <input
                    type="text"
                    placeholder="Title"
                    ref={node => (this.title = node)}
                    required
                  />
                  <textarea
                    rows="3"
                    cols="40"
                    placeholder="Body"
                    ref={node => (this.body = node)}
                    required
                  />
                  <button>{loading ? "Yes boss..." : "Create Post"}</button>
                </form>
                {error && <p>{error.message}</p>}
              </div>
            );
          }}
        </Mutation>
      </div>
    );
  }
}

export default CreatePost;
In CreatePost we have imported the Mutation component from react-apollo and gql from graphql-tag. Then the createPost mutation is imported from the ./graphql/mutations file.
The createPost mutation takes three dynamic arguments, which are title, body and createdAt.
title: The title of our post.
body: The body of our post.
createdAt: Post creation time and date.
In your App.js, import the CreatePost component.
App.js
import React, { Component } from 'react';
import CreatePost from './createPost';

class App extends Component {
  render() {
    return (
      <div className="App">
        <CreatePost />
      </div>
    );
  }
}

export default App;
Let's test our createPost component by creating our first post.
Open your aws-console to see your data is stored inside the DynamoDB table.
Fetching the Data
Currently, we are not rendering any data on the UI, so let's send a query to the GraphQL endpoint so that we can see the newly created posts.
We'll need to create two new components.
post.js
import React from 'react';

class Post extends React.Component {
  componentDidMount() {
    this.props.subscribeToMore();
  }

  render() {
    const items = this.props.data.listPosts.items;
    return items.map((post) => {
      return (
        <div key={post.id}>
          <h1>{post.title}</h1>
          <p>{post.body}</p>
          <time dateTime={post.createdAt}>
            {new Date(post.createdAt).toDateString()}
          </time>
          <br />
        </div>
      )
    })
  }
}

export default Post;
displayPosts.js
import React from 'react'
import { Query } from 'react-apollo'
import { listPosts } from './graphql/queries';
import { onCreatePost } from './graphql/subscriptions'
import gql from 'graphql-tag';
import Post from './post'

class DisplayPosts extends React.Component {
  subscribeNewPosts = (subscribeToMore) => {
    return subscribeToMore({
      document: gql(onCreatePost),
      updateQuery: (prev, { subscriptionData }) => {
        if (!subscriptionData.data) return prev;
        const newPostData = subscriptionData.data.onCreatePost;
        return Object.assign({}, prev, {
          listPosts: {
            ...prev.listPosts,
            items: [...prev.listPosts.items, newPostData]
          }
        })
      }
    })
  }

  render() {
    return (
      <div className="posts">
        <Query query={gql(listPosts)}>
          {({ loading, data, error, subscribeToMore }) => {
            if (loading) return <p>loading...</p>
            if (error) return <p>{error.message}</p>
            return (
              <Post
                data={data}
                subscribeToMore={() => this.subscribeNewPosts(subscribeToMore)}
              />
            )
          }}
        </Query>
      </div>
    )
  }
}

export default DisplayPosts;
In the
DisplayPosts component, we query the list of posts and also enable real-time subscriptions so that newly created posts are rendered as soon as they arrive.
Inside the Query component, we access the
subscribeNewPosts method.
subscribeToMore: it is invoked whenever the Post component is mounted to the DOM and listens for new posts added to our GraphQL API.
updateQuery: the updateQuery function is used to merge the previously cached data with the incoming subscription data.
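To make the merge step easier to follow, here is a hypothetical standalone version of the logic inside updateQuery, written as a pure function (the names prev and newPostData mirror the snippet; this is a sketch, not part of the Apollo API):

```javascript
// Merge the previously cached query result with a post received
// from the subscription, without mutating the cached object.
function mergeNewPost(prev, newPostData) {
  return Object.assign({}, prev, {
    listPosts: {
      ...prev.listPosts,
      items: [...prev.listPosts.items, newPostData],
    },
  });
}
```

Because the cache compares objects by identity, the function returns a fresh object instead of pushing into the existing items array.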
Update your
App.js file by importing the
DisplayPostscomponent.
App.js
import React, { Component } from 'react';
import CreatePost from './createPost';
import DisplayPosts from './displayPosts';

class App extends Component {
  render() {
    return (
      <div className="App">
        <CreatePost />
        <DisplayPosts />
      </div>
    );
  }
}

export default App;
Let's test our
DisplayPosts component by creating new posts.
In the above image, we tested it by opening two new browser windows.
Edit Post
Let's create the
EditPost component which helps us to edit the previously created post.
editPost.js
import React from "react";
import { updatePost } from "./graphql/mutations";
import { Mutation } from "react-apollo";
import gql from "graphql-tag";

class EditPost extends React.Component {
  state = {
    show: false,
    postData: {
      title: this.props.title,
      body: this.props.body
    }
  };

  handleModal = () => {
    this.setState({ show: !this.state.show });
    document.body.scrollTop = 0;
    document.documentElement.scrollTop = 0;
  };

  handleSubmit = (e, updatePost) => {
    e.preventDefault();
    updatePost({
      variables: {
        input: {
          id: this.props.id,
          title: this.state.postData.title,
          body: this.state.postData.body
        }
      }
    }).then(res => this.handleModal());
  };

  handleTitle = e => {
    this.setState({
      postData: { ...this.state.postData, title: e.target.value }
    });
  };

  handleBody = e => {
    this.setState({
      postData: { ...this.state.postData, body: e.target.value }
    });
  };

  render() {
    return (
      <>
        {this.state.show && (
          <div className="modal">
            <button className="close" onClick={this.handleModal}>
              X
            </button>
            <Mutation mutation={gql(updatePost)}>
              {updatePost => {
                return (
                  <form
                    className="add-post"
                    onSubmit={e => this.handleSubmit(e, updatePost)}
                  >
                    <input
                      type="text"
                      required
                      value={this.state.postData.title}
                      onChange={this.handleTitle}
                    />
                    <textarea
                      rows="3"
                      cols="40"
                      required
                      value={this.state.postData.body}
                      onChange={this.handleBody}
                    />
                    <button>Update Post</button>
                  </form>
                );
              }}
            </Mutation>
          </div>
        )}
        <button onClick={this.handleModal}>Edit</button>
      </>
    );
  }
}

export default EditPost;
In
EditPost we are going to import the
Mutation component,
updatePost mutation and
gql tag then we use the Mutation component by passing the
mutation prop.
In the Mutation component, we need to pass the function as children because it is using the render props pattern.
The first parameter of the function is the
mutation function so that we passed this function as an argument to the
handleSubmit method and invoked with the updated post
title and
body.
Open your
post.js file and add the
EditPost component.
post.js
import React from 'react';
import EditPost from './editPost'

class Post extends React.Component {
  componentDidMount() {
    this.props.subscribeToMore();
  }

  render() {
    const items = this.props.data.listPosts.items;
    return items.map((post) => {
      return (
        <div key={post.id}>
          <h1>{post.title}</h1>
          <p>{post.body}</p>
          <time dateTime={post.createdAt}>
            {new Date(post.createdAt).toDateString()}
          </time>
          <br />
          <EditPost {...post} />
        </div>
      )
    })
  }
}

export default Post;
Let's test our EditPost component by editing any previously created post.
DeletePost
Now we are implementing
DeletePost component with Optimistic UI.
What is Optimistic UI?
For example, if we delete a post, it takes time to get the response from the server, and only then can we update the UI. With optimistic UI, we update the UI immediately with the expected result, and once the response from the server arrives we replace the optimistic result with the actual server result.
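The idea can be sketched independently of Apollo. In this hypothetical, minimal version (the function names are ours, not part of AppSync or Apollo), the instant local update is separated from the later reconciliation with the server:

```javascript
// Apply the expected result to the local list immediately,
// before the server has responded.
function applyOptimisticDelete(posts, id) {
  return posts.filter(post => post.id !== id);
}

// When the server finally responds, keep the optimistic result on
// success, or roll back to the previous list on failure.
function reconcile(previous, optimistic, serverOk) {
  return serverOk ? optimistic : previous;
}
```

Apollo's optimisticResponse/update pair automates exactly this keep-or-roll-back bookkeeping for the cache.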
Create a new file called
deletePost.js.
deletePost.js
import React, { Component } from 'react'
import { Mutation } from 'react-apollo';
import { deletePost } from './graphql/mutations';
import gql from 'graphql-tag';
import { listPosts } from './graphql/queries';

class DeletePost extends Component {
  handleDelete = (deletePost) => {
    deletePost({
      variables: {
        input: {
          id: this.props.id
        }
      },
      optimisticResponse: () => ({
        deletePost: {
          // This type must match the return type of the query below (listPosts)
          __typename: 'ModelPostConnection',
          id: this.props.id,
          title: this.props.title,
          body: this.props.body,
          createdAt: this.props.createdAt
        }
      }),
      update: (cache, { data: { deletePost } }) => {
        const query = gql(listPosts);
        // Read the query from the cache
        const data = cache.readQuery({ query });
        // Add the updated posts list to the cache copy
        data.listPosts.items = [
          ...data.listPosts.items.filter(item => item.id !== this.props.id)
        ];
        // Overwrite the cache with the new results
        cache.writeQuery({ query, data });
      }
    })
  }

  render() {
    return (
      <Mutation mutation={gql(deletePost)}>
        {(deletePost, { loading, error }) => {
          return (
            <button onClick={() => this.handleDelete(deletePost)}>
              Delete Post
            </button>
          )
        }}
      </Mutation>
    )
  }
}

export default DeletePost;
In the optimisticResponse function we return exactly the shape of the deleted post's data, including __typename: 'ModelPostConnection', and then in update we remove the deleted post from the cache.
Update your
post.js file by adding
DeletePost component.
post.js
import React from 'react';
import EditPost from './editPost'
import DeletePost from './deletePost'

class Post extends React.Component {
  componentDidMount() {
    this.props.subscribeToMore();
  }

  render() {
    const items = this.props.data.listPosts.items;
    return items.map((post) => {
      return (
        <div key={post.id}>
          <h1>{post.title}</h1>
          <p>{post.body}</p>
          <time dateTime={post.createdAt}>
            {new Date(post.createdAt).toDateString()}
          </time>
          <br />
          <EditPost {...post} />
          <DeletePost {...post} />
        </div>
      )
    })
  }
}

export default Post;
In the above, we tested it in offline mode: the UI is updated instantly through the optimistic response, and once we get back online, AppSync sends a deletePost mutation to update our backend.
Hosting the React app
By using the Amplify CLI we can also host our React app in an AWS S3 bucket and serve it through CloudFront.
Open your terminal and run the following command.
amplify hosting add
I know, this was an extremely long post and I have to congratulate you for sticking with it. Since you took the time to read all of it I'd love to hear your thoughts. Please leave a comment letting me know what you liked or disliked about it.
Mad props to Sai for creating such a massive and comprehensive tutorial. We look forward to reading his next one. Check out his website here.
I've had this originally posted on the Dashbird blog and since it was so popular there I figured you guys might like it too.
Discussion (2)
Can AWS Amplify be used totally offline? I'm making an electron app with a free tier that will only store data locally and a premium tier that will sync with a cloud service for back-up and other goodies. But can't find if amplify and appsync can be used totally offline.
As far as I know, you can't (please let me know if I'm wrong). I recommend you take a look at the Booster Framework: booster.cloud/. It simplifies a lot of the development time, and it's getting close to finishing the local provider. I hope this helps in your case!
TornadoVM: Running your Java programs on heterogeneous hardware
Heterogeneous hardware is present in almost every computing system: our smartphones contain a Central Processing Unit (CPU), and a Graphics Processing Unit (GPU) with multiple cores; our laptops contain, most likely, a multi-core CPU with an integrated GPU plus a dedicated GPU; data centers are adding Field Programmable Gate Arrays (FPGAs) attached to their systems to accelerate specialized tasks, while reducing energy consumption. Moreover, companies are implementing their own hardware for accelerating specialized programs. For instance, Google has developed a processor for faster processing of TensorFlow computation, called Tensor Processing Unit (TPU). This hardware specialization and the recent popularity of hardware accelerators is due to the end of Moore’s law, in which the number of transistors per processor does not double every 2 years with every new CPU generation anymore, due to physical constraints. Therefore, the way to obtain faster hardware for accelerating applications is through hardware specialization.
The main challenge of hardware specialization is programmability. Most likely, each heterogeneous hardware has its own programming model and its own parallel programming language. Standards such as OpenCL, SYCL, and map-reduce frameworks facilitate programming for new and/or parallel hardware. However, many of these parallel programming frameworks have been created for low-level programming languages such as Fortran, C, and C++.
Although these programming languages are still widely used, the reality is that industry and academia tend to use higher-level programming languages such as Java, Python, Ruby, R, and Javascript. Therefore, the question now is, how to use new heterogeneous hardware from those high-level programming languages?
There are currently two main solutions to this question: a) via external libraries, in which users might be limited to only a set of well-known functions; and b) via a wrapper that exposes low-level parallel hardware details into the high-level programs (e.g., JOCL is a wrapper to program OpenCL from Java in which developers need to know the OpenCL programming model, data management, thread scheduling, etc.). However, many potential users of these new parallel and heterogeneous hardware are not necessarily experts on parallel computing, and perhaps, a much easier solution is required.
In this article, we discuss TornadoVM, a plug-in to OpenJDK that allows developers to automatically and transparently run Java programs on heterogeneous hardware, without any required knowledge on parallel computing or heterogeneous programming models. TornadoVM currently supports hardware acceleration on multi-core CPUs, GPUs, and FPGAs and it is able to dynamically adapt its execution to the best target device by performing code migration between multiple devices (e.g., from a multi-core system to a GPU) at runtime. TornadoVM is a research project developed at the University of Manchester (UK) and it is fully open-source and available on Github. In this article, we present an overview of TornadoVM and how programmers can automatically accelerate a photography-filter on multi-core CPUs and GPUs.
How does TornadoVM work?
The general idea of TornadoVM is to write or modify as fewer lines of code as possible, and automatically execute that code on accelerators (e.g., on a GPU). TornadoVM transparently manages the execution, memory management, and synchronization, without specifying any details about the actual hardware to run on.
TornadoVM’s architecture is composed of a traditional layered architecture combined with a microkernel architecture, in which the core component is its runtime system. The following figure shows a high-level overview of all the TornadoVM components and how they interact with each other.
TornadoVM-API
At the top level, TornadoVM exposes an API to Java developers. This API allows users to identify which methods they want to accelerate by running them on heterogeneous hardware. One important aspect of this programming framework is that it does not automatically detect parallelism. Instead, it exploits parallelism at the task-level, in which each task corresponds to an existing Java method.
The TornadoVM-API can also create a group of tasks, called task-schedule. All tasks within the same task-schedule (all Java methods associated with the task-schedule) are compiled and executed on the same device (e.g., on the same GPU). By having multiple tasks (methods) as part of a task-schedule, TornadoVM can further optimize data movement between the main host (the CPU) and the target device (e.g., the GPU). This is due to non-shared memory between the host and the target devices. Therefore, we need to copy the data from the CPU’s main memory to the accelerator’s memory (typically via a PCIe bus). These data transfers are indeed very expensive and can hurt the end-to-end performance of our applications. Therefore, by creating a group of tasks, data movement can be further optimized if TornadoVM detects that some data can stay on the target device, without the need of synchronizing with the host side for every kernel (Java method) that is executed.
The following code snippet shows an example of how to program a typical map-reduce computation by using TornadoVM. The class Sample contains three methods: one method that performs the vector addition (map); another method that computes the reduction (reduce), and the last one that creates the task-schedule and executes it (compute). The methods to be accelerated are the method map and reduce. Note that the user augments the sequential code with annotations such as @Parallel and @Reduce that are used as a hint to the TornadoVM compiler to parallelize the code. The last method (compute), creates an instance of the task-schedule Java class and specifies which methods to accelerate. We will go into the details of the API with a full example in the next section.
public class Sample {

    public static void map(float[] a, float[] b, float[] c) {
        for (@Parallel int i = 0; i < c.length; i++) {
            c[i] = a[i] + b[i];
        }
    }

    public static void reduce(float[] input, @Reduce float[] out) {
        for (@Parallel int i = 0; i < input.length; i++) {
            out[0] += input[i];
        }
    }

    public void compute(float[] a, float[] b, float[] c, float[] output) {
        TaskSchedule ts = new TaskSchedule("s0")
            .task("map", Sample::map, a, b, c)
            .task("reduce", Sample::reduce, c, output)
            .streamOut(output)
            .execute();
    }
}
TornadoVM Runtime
The TornadoVM runtime layer is split between two subcomponents: a task-optimizer and a bytecode generator. The task optimizer takes all tasks within the task-schedules and analyzes data dependencies amongst them (dataflow runtime analysis). The goal of this, as we have mentioned before, is to optimize data movement across tasks.
Once the TornadoVM runtime system optimizes the data transfers, it then generates internal TornadoVM-specific bytecodes. These bytecodes are not visible to the developers and their role is to orchestrate the execution on heterogeneous devices. We will show an example of the internal TornadoVM bytecodes in the next block.
Execution-engine
Once the TornadoVM bytecodes have been generated, the execution engine executes them in a bytecode interpreter. The bytecodes are simple instructions that can be reordered internally to perform optimizations – for example, to overlap computation with communication.
The following code snippet shows a list of generated bytecodes for the map-reduce example shown in the previous code snippet. Every task-schedule is enclosed between BEGIN-END bytecodes. The number that follows each bytecode is the device in which all tasks within a task-schedule will execute on. However, the device can be changed at any point during runtime. Recall that we are running two tasks in this particular task-schedule (a map-method and a reduce-method). For each method (or task), TornadoVM needs to pre-allocate the data and to perform the corresponding data transfers. Therefore, TornadoVM executes COPY_IN, which will allocate and copy data for the read-only data (such as arrays a and b from the example), and allocate the space on the device buffer for the output (write-only) variables by calling the ALLOC bytecode. All bytecodes have their bytecode-index (bi) that other bytecodes can refer to. For example, since the execution of many of the bytecodes is non-blocking, TornadoVM adds a barrier by running the ADD_DEP bytecode and a list of bytecode-indexes to wait for.
Then, to run the kernel (Java method), TornadoVM executes the bytecode LAUNCH. The first time this bytecode is executed, TornadoVM will compile the referenced method (in our example are the methods called map and reduce) from Java bytecode to OpenCL C. Since the compiler is, in fact, a source to source (Java bytecode to OpenCL C), another compiler is needed. The latter compiler is part of the driver of each target device (e.g., the GPU driver for NVIDIA, or the Intel driver for an Intel FPGA) that will compile the OpenCL C to binary. TornadoVM then stores the final binary in its code cache. If the task-schedule is reused and executed again, TornadoVM will obtain the optimized binary from the code-cache saving the time of re-compilation. Once all tasks are executed, TornadoVM copies the final result into the host memory by running the COPY_OUT_BLOCK bytecode.
BEGIN <0>
COPY_IN <0, bi1, a>
COPY_IN <0, bi2, b>
ALLOC <0, bi3, c>
ADD_DEP <0, b1, b2, b3>
LAUNCH <0, bi4, @map, a, b, c>
ALLOC <0, bi5, output>
ADD_DEP <0, b4, b5>
LAUNCH <0, bi7, @reduce, c, output>
COPY_OUT_BLOCK <0, bi8, output>
END <0>
The following figure shows a high-level representation of how TornadoVM executes and compiles the code from Java to OpenCL. The JIT compiler is an extension of the Graal JIT compiler for OpenCL developed at the University of Manchester. Internally, the JIT compiler builds a control flow graph (CFG) and a data flow graph (DFG) for the input program that are optimized during different tiers of compilation. In the TornadoVM JIT compiler currently exist three tiers of optimization: a) architecture-independent optimizations (HIR), such as loop unrolling, constant propagation, parallel loop exploration or parallel pattern detection; b) memory optimizations, such as alignment in the MIR, and c) architecture-dependent optimizations. Once the code is optimized, TornadoVM traverses the optimized graph and generates OpenCL C code, as shown on the right side of the following figure.
Additionally, the execution engine automatically handles memory and keeps consistency between the device buffers (allocated on the target device), and the host buffers (allocated on the Java heap). Since compilation and execution are automatically managed by the TornadoVM, end-users of TornadoVM do not have to worry about the internal details.
Testing TornadoVM
This section shows some examples of how to program and run TornadoVM. We show, as an example, a simple program of how to transform an input coloured JPEG image to a grayscale image. Then we show how to run it for different devices and measure its performance. All examples presented in this article are available online on Github.
Grayscale transformation Java code
The Java method that transforms a color JPEG image into grayscale is the following:
class Image {
    private static void grayScale(int[] image, final int w, final int s) {
        for (int i = 0; i < w; i++) {
            for (int j = 0; j < s; j++) {
                int rgb = image[i * s + j];
                int alpha = (rgb >> 24) & 0xff;
                int red = (rgb >> 16) & 0xff;
                int green = (rgb >> 8) & 0xff;
                int blue = rgb & 0xff;
                // combine the channels into a single grey value
                int grey = (red + green + blue) / 3;
                image[i * s + j] = (alpha << 24) | (grey << 16) | (grey << 8) | grey;
            }
        }
    }
}
For every pixel in the image, the alpha, red, green and blue channels are obtained. Then they are combined into a single value to form the corresponding grey pixel, which is finally stored back into the image array of pixels.
Since this algorithm can be executed in parallel, it is an ideal candidate for hardware acceleration with TornadoVM. To program the same algorithm with TornadoVM, we first use the @Parallel annotation to annotate the loops that can potentially run in parallel. TornadoVM will inspect the loops and analyze whether there are data dependencies between iterations. If there are none, TornadoVM will specialize the code to use 2D indexing in OpenCL. For this example, the code looks as follows:
class Image {
    private static void grayScale(int[] image, final int w, final int s) {
        for (@Parallel int i = 0; i < w; i++) {
            for (@Parallel int j = 0; j < s; j++) {
                int rgb = image[i * s + j];
                int alpha = (rgb >> 24) & 0xff;
                int red = (rgb >> 16) & 0xff;
                int green = (rgb >> 8) & 0xff;
                int blue = rgb & 0xff;
                // combine the channels into a single grey value
                int grey = (red + green + blue) / 3;
                image[i * s + j] = (alpha << 24) | (grey << 16) | (grey << 8) | grey;
            }
        }
    }
}
Note that we introduce @Parallel for the two loops. After this, we need to instruct TornadoVM to accelerate this method. To do so, we create a task-schedule as follows:
TaskSchedule ts = new TaskSchedule("s0")
    .streamIn(imageRGB)
    .task("t0", Image::grayScale, imageRGB, w, s)
    .streamOut(imageRGB);

// Execute the task-schedule (blocking call)
ts.execute();
The task-schedule is an object that describes all tasks to be accelerated. At first we pass a name to identify the task-schedule (“s0” in our case, but it could be any name). Then, we define which Java arrays we want to stream to the input tasks. This call indicates to the TornadoVM that we want to copy the contents of the array every time we invoke the execute method. Otherwise, if no variables are specified in the streamIn, TornadoVM will create a cached read-only copy for all variables needed for the tasks’ execution.
The next call is the task invocation. We can create as many tasks as we want within the same task-schedule. As we described in the previous section, each task references an existing Java method. The arguments to the task are as follows: first we pass a name (in our case we name it “t0”, but it could be any other name); then we pass either a lambda expression or a reference to a Java method. In our case we pass the method grayScale from the Java class Image. Finally, we pass all parameters to the method, as any other method call.
After that, we need to indicate to TornadoVM which variables we want to synchronize again with the host (main CPU). In our case we want the same input JPEG image to be updated with the accelerated grayscale one. This call, internally, will force a data movement in OpenCL, from device to host, and copy the data from the device’s global memory to the Java heap that resides in the host’s memory. These four lines only declare the tasks and variables to be used. However, nothing is executed until the programmer invokes the execute method.
Once we have created the program, we compile it with standard javac. The TornadoVM SDK (once TornadoVM is installed on the machine) provides utility commands that wrap javac with all classpaths and libraries already set:
$ javac.py Image.java
At runtime, we use the tornado command, which is, in fact, an alias to java with all classpaths and flags required to run TornadoVM over the Java Virtual Machine (JVM). But, before running with TornadoVM, let’s check which parallel and heterogeneous hardware are available in our machine. We can query this by using the following command from the TornadoVM SDK:
$ tornadoDeviceInfo
Number of Tornado drivers: 1
Total number of devices  : 4
Tornado device=0:0  NVIDIA CUDA -- GeForce GTX 1050
Tornado device=0:1  Intel(R) OpenCL -- Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz
Tornado device=0:2  AMD Accelerated Parallel Processing -- Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz
Tornado device=0:3  Intel(R) OpenCL HD Graphics -- Intel(R) Gen9 HD Graphics NEO
On this laptop, we have an NVIDIA 1050 GPU, an Intel CPU and the Intel Integrated Graphics (Dell XPS 15’’, 2018). As shown, all of these devices are OpenCL compatible and all drivers are already installed. Therefore, TornadoVM can consider all these devices available for execution. Note that, on this laptop, we have two devices targeting the Intel CPUs, one using the Intel OpenCL driver and another one using the AMD OpenCL driver for CPU. If no device is specified, TornadoVM will use the default one (0:0). To run our application with TornadoVM, we simply type:
$ tornado Image
In order to discover on which device our program is running, we can query basic debug information through TornadoVM by using the `–debug` flag as follows:
$ tornado --debug Image
task info: s0.t0
    platform          : NVIDIA CUDA
    device            : GeForce GTX 1050 CL_DEVICE_TYPE_GPU (available)
    dims              : 2
    global work offset: [0, 0]
    global work size  : [3456, 4608]
    local work size   : [864, 768]
This means that we used an NVIDIA 1050 GPU (the one available in our laptop) to run this Java program. What happened underneath is that TornadoVM compiled the Java method grayScale into OpenCL at runtime, and run it with the available OpenCL supported device. In this case, on the NVIDIA GPU. Additional information from the debug mode includes how many threads were used to run as well as their block size (local work size). This is automatically decided by the TornadoVM runtime and it depends on the input size of the application. In our case, we used an input image of 3456×4608 pixels.
So far we managed to get a Java program running automatically and transparently on a GPU. This is great, but what about performance? We are using an Intel i7-7700HQ CPU on our testbed laptop. The time it takes to run the sequential code with this input image is 1.32 seconds. On the GTX 1050 NVIDIA GPU it takes 0.017 seconds. That is 81x faster at processing the same image.
We can also change the device to run at runtime by passing the flag -D<task-scheduleName>.<taskName>.device=0:X.
For example, the following code snippet shows how to run TornadoVM to use the Intel Integrated GPU:
$ tornado --debug -Ds0.t0.device=0:3 Image
task info: s0.t0
    platform          : Intel(R) OpenCL HD Graphics
    device            : Intel(R) Gen9 HD Graphics NEO CL_DEVICE_TYPE_GPU (available)
    dims              : 2
    global work offset: [0, 0]
    global work size  : [3456, 4608]
    local work size   : [216, 256]
By running on all devices, we get the following speedup graph. The first bar shows the baseline (Java sequential code with no acceleration), which is 1. The second bar shows the speedup of TornadoVM against the baseline when running on a multi-core (4-core) CPU. The last bars correspond to the speedup on an integrated GPU and a dedicated GPU. By running this application with TornadoVM, we can get up to 81x performance improvement (NVIDIA GPU) over the sequential Java version and up to 62x by running on the Intel integrated graphics card. Notice that on a multi-core configuration, TornadoVM is superlinear (27x on a 4-core CPU). This is because the generated OpenCL C code can exploit the vector instructions on the CPU, such as the AVX and SSE registers available per core.
Use cases
The previous section showed an example of a simple application, in which a quite common photography filter is accelerated. However, TornadoVM's functionality extends beyond simple programs. For example, TornadoVM can currently accelerate machine learning and deep learning applications, computer vision, physics simulations and financial applications.
SLAM Applications
TornadoVM has been used to accelerate a complex computer vision application (Kinect Fusion) on GPUs, written in pure Java and containing around 7k lines of Java code. This application records a room with the Microsoft Kinect camera, and the goal is to perform its 3D space reconstruction in real time. In order to achieve real-time performance, the room must be rendered at a minimum of 30 frames per second (fps). The original Java version achieves 1.7 fps, while the TornadoVM version running on a GTX 1050 NVIDIA GPU achieves up to 90 fps. The TornadoVM version of the Kinect Fusion application is open-sourced and available on Github.
Machine Learning for the UK National Health Service (NHS)
Exus Ltd. is a company based in London which is currently improving the UK NHS system by providing predictions of patients' hospital readmissions. To do so, Exus has been correlating patients' data containing their profiles, characteristics and medical conditions. The algorithm used for prediction is a typical logistic regression with millions of elements in its data sets. So far, Exus has accelerated the training phase of the algorithm via TornadoVM for 100K patients, from 70 seconds (the pure Java application) down to only 7 seconds (a 10x performance improvement). Furthermore, they have demonstrated that, with a dataset of 2 million patients, the execution with TornadoVM improves by 14x.
Physics Simulation
We have also experimented with synthetic benchmarks and computations commonly used for physics simulation and signal processing, such as NBody and DFT. In these cases we have seen speedups of up to 4500x using an NVIDIA GP100 GPU (Pascal microarchitecture) and up to 240x using an Intel Nallatech 385a FPGA. These types of applications are computationally intensive, and the bottleneck is the kernel processing time. Thus, having a powerful parallel device specialized for these types of computation helps to increase the overall performance.
Present and future of TornadoVM
TornadoVM is currently a research project at the University of Manchester. Besides, TornadoVM is part of the European Horizon 2020 E2Data project in which TornadoVM is being integrated with Apache Flink (a Java framework for batch and stream data processing) to accelerate typical map-reduce operations on heterogeneous and distributed-memory clusters.
TornadoVM currently supports compilation and execution on a wide variety of devices, including Intel and AMD CPUs, NVIDIA and AMD GPUs, and Intel FPGAs. We have ongoing work to also support Xilinx FPGAs; with this option, we aim to cover all current offerings of cloud providers. Additionally, we are integrating more compiler and runtime optimizations, such as the use of device memory tiers and virtual shared memory, to reduce the total execution time and increase the overall performance.
Summary
In this article, we discussed TornadoVM, a plug-in for OpenJDK for accelerating Java programs on heterogeneous devices. At first, we described how TornadoVM can compile and execute code on heterogeneous hardware such as a GPU. Then we presented an example for programming and running TornadoVM on different devices, including a multi-core CPU, an integrated GPU and a dedicated NVIDIA GPU. Finally, we showed that, with TornadoVM, developers can achieve high-performance while keeping their applications totally hardware agnostic. We believe that TornadoVM offers an interesting approach in which the code to be added is easy to read and maintain, and at the same time, it can offer high-performance if parallel hardware is available in a system.
More information regarding the technical aspects of TornadoVM can be found below:
References:
Juan Fumero and Christos Kotselidis. 2018. Using compiler snippets to exploit parallelism on heterogeneous hardware: a Java reduction case study. In Proceedings of the 10th ACM SIGPLAN International Workshop on Virtual Machines and Intermediate Languages (VMIL 2018). ACM, New York, NY, USA, 16-25. DOI:
James Clarkson, Juan Fumero, Michail Papadimitriou, Foivos S. Zakkak, Maria Xekalaki, Christos Kotselidis, and Mikel Luján. 2018. Exploiting high-performance heterogeneous hardware for Java programs using Graal. In Proceedings of the 15th International Conference on Managed Languages & Runtimes (ManLang ’18). ACM, New York, NY, USA, Article 4, 13 pages. DOI:
TornadoVM with Juan Fumero.
Christos Kotselidis, James Clarkson, Andrey Rodchenko, Andy Nisbet, John Mawer, and Mikel Luján. 2017. Heterogeneous Managed Runtime Systems: A Computer Vision Case Study. SIGPLAN Not. 52, 7 (April 2017), 74-82. DOI:
Acknowledgments
This work is partially supported by the European Union’s Horizon 2020 E2Data 780245 and ACTiCLOUD 732366 grants. Special thanks to Gerald Mema from Exus for reporting on the NHS use case.
Send color data to sensors
Hi Guys,
This is one to think of and it would be nice to have input from the users.
At this moment there is no V_* available (as far as I have understood it) to hold a color string/code. I have seen code passing by sending three vars (VAR_1 to VAR_3) with percentages.
I'm getting questions whether it is possible to use the color picker to send colors to nodes. So here is the question:
I can send one of the following three vars to nodes (without the ()-characters of course):
RGB as hex (000000 to FFFFFF),
RGB as int (0-255,0-255,0-255),
or HSL as floats (0-360, 0-100,0-100).
It would not matter which V_TYPE you use, but the data format it carries does. Because I would like to support this, what would be the optimal data format you guys prefer?
Maybe a candidate for a new data / variable type?
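For illustration, the three candidate formats can be converted into one another on the node side. Below is a minimal sketch, in plain portable C++ rather than a full Arduino sketch, of turning the HSL float triple (0-360, 0-100, 0-100) into the 0-255 RGB bytes that would go over the air. The helper name is hypothetical and not part of any MySensors API.

```cpp
#include <stdint.h>
#include <math.h>

// Hypothetical helper: convert one HSL triple (H in 0-360, S and L in
// 0-100, as floats per the proposal above) into 0-255 RGB bytes, so a
// controller could accept HSL input yet still send the compact RGB form.
static void hslToRgb(float h, float s, float l,
                     uint8_t *r, uint8_t *g, uint8_t *b) {
    s /= 100.0f;
    l /= 100.0f;
    float c  = (1.0f - fabsf(2.0f * l - 1.0f)) * s;        // chroma
    float hp = h / 60.0f;                                   // hue sector 0..6
    float x  = c * (1.0f - fabsf(fmodf(hp, 2.0f) - 1.0f));
    float r1 = 0, g1 = 0, b1 = 0;
    if      (hp < 1) { r1 = c; g1 = x; }
    else if (hp < 2) { r1 = x; g1 = c; }
    else if (hp < 3) { g1 = c; b1 = x; }
    else if (hp < 4) { g1 = x; b1 = c; }
    else if (hp < 5) { r1 = x; b1 = c; }
    else             { r1 = c; b1 = x; }
    float m = l - c / 2.0f;                                 // lightness offset
    *r = (uint8_t)lroundf((r1 + m) * 255.0f);
    *g = (uint8_t)lroundf((g1 + m) * 255.0f);
    *b = (uint8_t)lroundf((b1 + m) * 255.0f);
}
```

For example, HSL(0, 100, 50) comes out as pure red (255, 0, 0). This also shows why RGB (hex or int) is the cheaper wire format: HSL needs float math on the node, while RGB bytes can be used directly in analogWrite.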
@tbowmo
That would be handy for creators. PiDome does not care about the V_* types, only the datatype it contains, which is defined in the server's device editor.
nice idea
an (unsigned) long would allow us to bit shift the three values easily and allow an extra byte for something else (e.g. LED ID, state, etc.). We could add a function to decode in the library.
I worked on (a while back) a concept of "broadcasting" a number that each node could use to show a 'system status' with one RGB led (i.e. alarm armed/unarmed, doors locked, unlocked). I like the idea of having a "transmit to all listening" function.
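The packed-long idea above can be sketched in a few lines: the three colour bytes plus one spare byte (e.g. an LED id) in a single unsigned 32-bit value. The function names here are illustrative only, not part of the MySensors library.

```cpp
#include <stdint.h>

// Pack one spare byte (e.g. LED id or state) plus R, G, B into a single
// unsigned 32-bit value, as suggested in the post above.
static uint32_t packRgb(uint8_t id, uint8_t r, uint8_t g, uint8_t b) {
    return ((uint32_t)id << 24) | ((uint32_t)r << 16) |
           ((uint32_t)g << 8)  |  (uint32_t)b;
}

// Reverse operation on the receiving node: shift and mask each byte out.
static void unpackRgb(uint32_t v, uint8_t *id,
                      uint8_t *r, uint8_t *g, uint8_t *b) {
    *id = (v >> 24) & 0xFF;
    *r  = (v >> 16) & 0xFF;
    *g  = (v >> 8)  & 0xFF;
    *b  =  v        & 0xFF;
}
```

On an 8-bit Arduino an `unsigned long` is exactly 32 bits, so this layout fits the radio payload without any extra encoding.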
@BulldogLowell
That would of course also be possible. Using this approach, users would need some documentation for the bit-shifting code on the MySensors node side, because once implemented it would become a global message format for MySensors nodes.
P.S. wouldn't an int be enough?
[EDIT]Scrap my P.S.[/EDIT]
@BulldogLowell
Was thinking 32bit, sorry....
yup, you gotta think small!
@BulldogLowell
Yup, I'm spoiled.....
I will take this option into consideration. It does depend on what users would prefer the most.
In the next version, RGB values will be transmitted as a 3-byte binary value over the air.
On the serial line it will probably be split into decimal json like: {red:<0-255>;green:<0-255>;blue:<0-255>}.
If you want to add this to 1.4.1 I would suggest sending data as an RGB hex string (to make it survive over the serial interface).
To keep the over-the-air payload as small as possible, this would mean sending the data as: "000000" to "ffffff" (so no "#" char).
This also has my personal preference, but I want to know how the community would like to handle it on the node side.
Using something like:
String hexstring = "FF3Fa0";
long number = strtol(hexstring.c_str(), NULL, 16);  // use long: a 24-bit colour overflows a 16-bit Arduino int
int r = number >> 16;
int g = number >> 8 & 0xFF;
int b = number & 0xFF;
@hek
Understood, I meant how they want to receive the data. @BulldogLowell would like to receive a long instead of a hex string (if I understood it correctly).
That's the part i'm interested in.
I agree on the 6-character hex text value. Who will implement this in 1.4.2?
I have built an RGB NeoPixel LED strip actuator and I would like to control the colours with V_RGB.
Thanks in advance.
@arendst
Together with a user, this is implemented in the current PiDome version available on the build server. He has posted a little tutorial on how to do this. So if you are a PiDome user:
It is only available in the serial version for testing purposes.
If all goes well, it will be extended to the MQTT version.
[EDIT]It sends hex values which can be extracted as posted above[/EDIT]
@John
Thanks for the response. I use Domoticz as controller which just started to support MySensors. It works great for V_Light and V_Dimmer but lacks RGB color support as the MySensors library lacks color support too.
@hek
Why not update the MySensors library with S_Color and V_RGB. That way Domoticz and other controllers can support color natively.
Why not update the MySensors library with S_Color and V_RGB. That way Domoticz and other controllers can support color natively.
Wanted to add this in the next major release. But if this drags out (time-wise) I might add it to the next minor as well.
Hi guys,
Any news about that ?
Is it now possible to use domoticz's rgb module to control an rgb led strip through an arduino mysensor node ?
Thanks for your help.
up !
Could anyone help me to configure domoticz & mysensors in order to control analog RGB led strip ?
@davy39
I just made an RGB controller for Domoticz. So if it's not too late...
#include <MySensor.h>
#include <SPI.h>

#define RED_PIN 3
#define GREEN_PIN 5
#define BLUE_PIN 6
#define NODE_ID 2
#define CHILD_ID 0
#define SKETCH_NAME "RGB_STRIP"
#define SKETCH_VERSION "1.0.0"
#define NODE_REPEAT false

MySensor gw;
long RGB_values[3] = {0, 0, 0};
float dimmer;

void setup()
{
  pinMode(RED_PIN, OUTPUT);
  pinMode(GREEN_PIN, OUTPUT);
  pinMode(BLUE_PIN, OUTPUT);
  gw.begin(incomingMessage, NODE_ID, NODE_REPEAT);
  gw.sendSketchInfo(SKETCH_NAME, SKETCH_VERSION);
  gw.present(CHILD_ID, S_RGB_LIGHT, "RGB Strip", false);
  gw.request(CHILD_ID, V_RGB);
}

void loop()
{
  gw.process();
}

void incomingMessage(const MyMessage &message)
{
  if (message.type == V_RGB) {
    // Decode the 6-character hex string into the three colour bytes
    String hexstring = message.getString();
    long number = (long) strtol(&hexstring[0], NULL, 16);
    RGB_values[0] = number >> 16;
    RGB_values[1] = number >> 8 & 0xFF;
    RGB_values[2] = number & 0xFF;
  }
  if (message.type == V_DIMMER) {
    dimmer = message.getInt();
    analogWrite(RED_PIN, int(RGB_values[0] * (dimmer / 100)));
    analogWrite(GREEN_PIN, int(RGB_values[1] * (dimmer / 100)));
    analogWrite(BLUE_PIN, int(RGB_values[2] * (dimmer / 100)));
  }
  if (message.type == V_LIGHT) {
    if (message.getInt() == 0) {
      digitalWrite(RED_PIN, 0);
      digitalWrite(GREEN_PIN, 0);
      digitalWrite(BLUE_PIN, 0);
    }
    if (message.getInt() == 1) {
      analogWrite(RED_PIN, int(RGB_values[0] * (dimmer / 100)));
      analogWrite(GREEN_PIN, int(RGB_values[1] * (dimmer / 100)));
      analogWrite(BLUE_PIN, int(RGB_values[2] * (dimmer / 100)));
    }
  }
}
up !
Could anyone help me to configure domoticz & mysensors in order to control analog RGB led strip ?
Is this option working?
Or perhaps a RGBW strip...
History of Wikipedia
From Wikipedia, the free encyclopedia
Wikipedia is an online encyclopedia that can be edited by anyone and that aims to provide free encyclopedic information to its readers. The pioneering concept and technology of the wiki come from Ward Cunningham; the concept of a free online encyclopedia comes from Richard Stallman. It was formally launched on 15 January 2001. Initially it was created as a complement and 'feeder' to the expert-written English-language encyclopedia project 'Nupedia', in order to provide an additional source of draft articles and ideas. It quickly overtook Nupedia, growing to become a large global project, and originating a wide range of additional reference projects. Today Wikipedia includes several million freely usable articles and pages in hundreds of languages worldwide, and content from millions of contributors.
History overview
Background
The concept of gathering all of the world's knowledge in a single place goes back to the ancient Library of Alexandria and Pergamon, but the modern concept of a general purpose, widely distributed, printed encyclopedia dates from shortly before Denis Diderot and the 18th century encyclopedists. The idea of using automated machinery beyond the printing press to build a more useful encyclopedia can be traced to librarian Charles Ammi Cutter's article "The Buffalo Public Library in 1983" (Library Journal, 1883, p. 211–217), Paul Otlet's book Traité de documentation (1934; Otlet also founded the Mundaneum institution, 1910), H. G. Wells' book of essays World Brain (1938) and Vannevar Bush's future vision of the microfilm based Memex in As We May Think (1945). Another milestone was Ted Nelson's Project Xanadu in 1973.
With the development of the web, many people attempted to develop Internet encyclopedia projects. One little-acknowledged predecessor was the Interpedia (initiated in 1993). Free software exponent Richard Stallman described the usefulness of a "Free Universal Encyclopedia and Learning Resource" in 1999.[1] The GNUPedia project went online, competing with Nupedia,[2] but today the FSF encourages people "to visit and contribute to [Wikipedia]".[3]
Formulation of the concept
Wikipedia was initially conceived as a feeder project for Nupedia, an earlier (now defunct) project to produce a free online encyclopedia, founded by Bomis, a web-advertising-selling firm owned by Jimmy Wales, Tim Shell and Michael Davis.[4][5][6][7] Under Nupedia's expert-driven review process, the writing of content was extremely slow, with only 12 articles written during the first year.[6]
Wales and Sanger discussed various ways to create content more rapidly.[5] The idea of a wiki-based complement originated from a conversation between Larry Sanger and Ben Kovitz.[8][9][10] Ben Kovitz was a computer programmer and regular on Ward Cunningham's revolutionary wiki "the WikiWikiWeb". He explained to Sanger what wikis were, at that time a difficult concept to understand, over a dinner on 2 January 2001.[8][9][10][11] Wales first stated, in October 2001, that "Larry had the idea to use Wiki software",[12] though he later claimed in December 2005 that Jeremy Rosenfeld, a Bomis employee, introduced him to the concept.[13][14][15]
Wales set one up and put it online on 10 January 2001.[16]
Founding of Wikipedia
Wikipedia was formally launched on 15 January 2001.
The bandwidth and server (located in San Diego) used for these projects were donated by Bomis. Many current and past Bomis employees have contributed some content to the encyclopedia: notably Tim Shell, co-founder and current CEO of Bomis, and programmer Jason Richey.
The first edits ever made on Wikipedia are believed to be test edits by Wales.[citation needed] However, the oldest article still preserved is the article UuU, created on 16 January 2001, at 21:08 UTC.[17]
The project received many new participants after being mentioned three times on the Slashdot website,[citation needed] with two minor mentions in March 2001.[18][19] It then received a prominent pointer to a story on the community-edited technologies and culture website Kuro5hin on 25 July.[20][21]
The project passed 1,000 articles around 12 February 2001, and 10,000 articles around 7 September. In the first year of its existence, over 20,000 encyclopedia entries were created—a rate of over 1,500 articles per month. On 30 August 2002, the article count reached 40,000. The rate of growth has more or less steadily increased since the inception of the project, except for a few software- and hardware-induced slow-downs.[dubious ]
Namespaces and internationalization
Early in Wikipedia's development, it began to expand internationally, with the creation of new namespaces, each with a distinct set of usernames. The first domain created for a non-English Wikipedia was deutsche.wikipedia.com (created on 16 March 2001, 01:38 UTC),[22] followed after a few hours by Catalan.wikipedia.com (at 13:07 UTC).[23] The Japanese Wikipedia, started as nihongo.wikipedia.com, was created around that period,[24][25] and initially used only Romanized Japanese. For about two months Catalan was the one with the most articles in a non-English language,[26][27] although statistics of that early period are imprecise.[28] The French Wikipedia was created on or around 11 May 2001,[29] in a wave of new language versions that also included Chinese, Dutch, Esperanto, Hebrew, Italian, Portuguese, Russian, Spanish, and Swedish.[30] These languages were soon joined by Arabic[31] and Hungarian.[32][33] In September 2001, an announcement pledged commitment to the multilingual provision of Wikipedia.[34][35]
In January 2002, 90% of all Wikipedia articles were in English. By January 2004, less than 50% were English, and this internationalization has continued to increase. As of 2007, around 75% of all Wikipedia articles are contained within non-English Wikipedia versions.
Development
In March 2002, following the withdrawal of funding by Bomis, Larry Sanger left both Nupedia and Wikipedia. Initially amicable, by 2004 differences between Sanger and Wales had driven a wedge between them, centering upon Sanger's criticism of Wikipedia's approach, his role in Wikipedia's success, and their views on how best to manage open encyclopedias (see Early roles of Wales and Sanger). Both still supported the open-collaboration concept, but the two differed on how best to handle disruptive editors, specific roles for experts, and the best way to guide the project to success.
Wales was a believer in communal governance and "hands off" executive management.[citation needed][36]
Organization
The Wikipedia project has grown rapidly in the course of its life, at several levels. Individual wikis have grown organically through the addition of new articles, new wikis have been added in English and non-English languages, and entire new projects replicating these growth methods in other related areas (news, quotations, reference books and so on) have been founded as well.
Correspondingly, Wikipedia's organization has grown, with the creation of the Wikimedia Foundation to act as an umbrella body and the growth of software and policies to address the needs of the editorial community. These are documented below.
Historical overview by year
- Articles summarizing each year are held within the Wikipedia project namespace and are linked to below. Additional resources for research are available within the Wikipedia records and archives, and are listed at the end of this article.
2000
The Nupedia project is started with Larry Sanger running the daily operations and formulating many of the initial policies.
2001
The Wikipedia.com and Wikipedia.org domain names are registered on 12 January 2001[37] and 13 January 2001,[38] respectively, with the latter being brought online on 13 January 2001, according to Alexa; project formally opens 15 January ('Wikipedia Day'); the first international Wikipedias are created (March-May: French, German, Catalan, Swedish); "Neutral point of view" (NPOV) policy is formally formulated; first slashdotter wave arrives 26 July. The first media report about Wikipedia appears in August 2001 coincidentally by the newspaper Wales on Sunday.[39] The September 11, 2001 attacks spur the appearance of breaking news stories on the homepage, as well as information boxes linking related articles.[40]
2002
Year 2002 sees: the end of funding from Bomis and the departure of Larry Sanger; the forking of the Spanish Wikipedia to establish the Enciclopedia Libre; and the creation of the first portable Mediawiki software (went live 25 January)[dubious ]. Bots are introduced, Jimmy Wales confirms Wikipedia would never run commercial advertising, and the first sister project (Wiktionary) and first formal Manual of Style are launched. A separate board of directors to supervise the project is proposed and initially discussed at Meta-Wikipedia.
2003
Mathematical formulae using TeX are introduced; English Wikipedia passes 100,000 articles (the next largest, German, passes 10,000); the Wikimedia Foundation is established; Wikipedia adopts its jigsaw world logo; and the first Wikipedian social meeting is organized. The basic principles of Wikipedia's Arbitration system and committee (known colloquially as "Arbcom") are developed mostly by Florence Devouard, Fred Bauder and other key early Wikipedians.
2004
The worldwide Wikipedia article pool continues to grow rapidly, doubling in size in 12 months, from under 500,000 articles to over 1 million (English Wikipedia was just less than half of these) in over 100 languages. The server farms are moved from California to Florida; Categories and CSS style configuration sheets are introduced; and the first attempt to block Wikipedia occurs (China, June 2004, duration 2 weeks). Formal election of a board and ArbCom begin - Devouard is the only person elected who was instrumental in ArbCom[citation needed]. She and others begin to criticize balance and focus problems and lead efforts to fill in articles in neglected areas. The first formal projects are proposed to deliberately balance content and seek out systemic bias arising from Wikipedia's community structure.
2005
Multilingual and subject portals are established; the first quarter's formal fundraiser raises almost US $ 100,000 for system upgrades to handle growing demand; Wikipedia becomes the most popular reference website on the Internet according to Hitwise; China again blocks Wikipedia (October); English Wikipedia passes 750,000 articles. The first Wikipedia scandal occurs, when a well known figure is found to have a vandalized biography which had gone unnoticed for months (the "Seigenthaler incident"). In the wake of this and other concerns,[41] the first policy and system changes specifically designed to counter this form of abuse are established. These include a new Checkuser privilege policy update (checkuser is a Mediawiki tool that assists in sock puppetry investigations), a new feature called semi-protection, a more strict policy on biographies of living people and tagging of such articles for stricter review, and restriction of new article creation to registered users only.
2006
English Wikipedia gains its 1½ millionth article; the first approved Wikipedia article selection is made freely available to download; "Wikipedia" becomes registered as a trademark of the Wikimedia Foundation. The congressional aides biography scandals come to public attention: multiple incidents in which congressional staffers and a campaign manager are caught trying to covertly alter Wikipedia biographies; the campaign manager resigns. The semi-protection feature proves more popular than anticipated, with over 1,000 pages semi-protected at any given time; Wikipedia is rated as one of the top 2006 global brands.[42]
2007
Wikipedia continues to grow, with some 5 million registered editor accounts;[43] the combined Wikipedias in all languages together contain 1.74 billion words in 7.5 million articles in approximately 250 languages;[44] the English Wikipedia gains a steady 1,700 articles a day,[45] with the wikipedia.org domain name ranked at around the 10th busiest on the Internet (see Wikipedia Statistics); Wikipedia continues to garner visibility in the press and to slowly but steadily gain traction as a tertiary source.[46]
2008
2009
On March 20, 2009, English Wikipedia reached 2,800,000 articles. The site also reached 2.9 million articles on June 4, 2009. Three million articles should be reached on or around August 17. There are 2,931,970 English articles as of 4 July 2009.
The Arbitration Committee of the Wikipedia internet encyclopedia decided in May 2009 to restrict access to its site from Church of Scientology IP addresses, to prevent self-serving edits by Scientologists.[47][48][49] A "host of anti-Scientologist editors" were topic-banned as well.[48][49] The committee concluded that both sides had "gamed policy" and resorted to "battlefield tactics", with articles on living persons being the "worst casualties".[48]
History by subject area
Hardware and software
- The software that runs Wikipedia, and the hardware, server farms and other systems upon which Wikipedia relies.
- In January 2001, Wikipedia ran on UseModWiki, written in Perl by Clifford Adams. The server has run on Linux to this day, although the original text was stored in files rather than in a database. Articles were named with the CamelCase convention.
- In January 2002, "Phase II" of the wiki software powering Wikipedia was introduced, replacing the older UseModWiki. Written specifically for the project by Magnus Manske, it included a PHP wiki engine.
- In July 2002, a major rewrite of the software powering Wikipedia went live; dubbed "Phase III", it replaced the older "Phase II" version, and became MediaWiki. It was written by Lee Daniel Crocker in response to the increasing demands of the growing project.
- In October 2002, a bot run by Derek Ramsey began mass-creating stub articles on United States towns from census data; these bot-generated articles had considerable initial uniformity and writing style (for example, see this version of an original bot-generated town article, and compare to current version).
- In January 2003, support for mathematical formulas in TeX was added. The code was contributed by Tomasz Wegrzanowski.
- 9 June 2003 - ISBNs in articles now link to Special:Booksources, which fetches its contents from the user-editable page Wikipedia:Book sources. Before this, ISBN link targets were coded into the software and new ones were suggested on the Wikipedia:ISBN page. See the edit that changed this.
- After 6 December 2003, various system messages shown to Wikipedia users were no longer hard coded, allowing Wikipedia administrators to modify certain parts of MediaWiki's interface, such as the message shown to blocked users.
- On 12 February 2004, server operations were moved from San Diego, California to Tampa, Florida.[50]
- On 29 May 2004, all the various websites were updated to a new version of the MediaWiki software.
- On 30 May 2004, the first instances of "categorization" entries appeared.[51]
- After 3 June, administrators could edit the style of the interface by changing the CSS in the monobook stylesheet at MediaWiki:Monobook.css.
- Also on 30 May 2004, with MediaWiki 1.3, the Template namespace was created, allowing transclusion of standard texts.[52]
- On 7 June 2005 at 3:00AM Eastern Standard Time the bulk of the Wikimedia servers were moved to a new facility across the street. All Wikimedia projects were down during this time.
Look and feel
- The external face of Wikipedia, its look and feel, and the Wikipedia branding, as presented to users
- On 4 April 2002 Brilliant Prose, since renamed to Featured Articles,[53] was moved to the Wikipedia Namespace from the article namespace.
- Around 15 October 2003, the current Wikipedia logo was installed. The logo concept was selected by a voting process,[54] which was followed by a revision process to select the best variant. The final selection was created by David Friedland (who edits wikipedia under the username "nohat") based on a logo design and concept created by Paul Stansifer.
- On 22 February 2004 DYK made its first Main Page appearance.
- On 23 February 2004 a coordinated new look for the Main Page appeared at 19:46 UTC. Hand-chosen entries for the Daily Featured Article, Anniversaries, In the News, and Did You Know rounded out the new look.
- On 10 January 2005, the multilingual portal was set up, replacing a redirect to the English-language Wikipedia.
- On 5 February 2005, the Portal:Biology was created, first "portal" on the English Wikipedia.[55] However, the concept was pioneered on the German Wikipedia where Portal:Recht (law studies) was set up in October 2003.[56]
- On 16 July 2005, the English Wikipedia began the practice of including the day's "featured pictures" on the Main Page.
- On 19 March 2006, following a vote, the Main Page of the English language Wikipedia featured its first redesign in nearly two years.
Internal structures
- Landmarks in the Wikipedia community, and the development of its organization, internal structures, and policies.
- April 2001, Wales formally defines the "neutral point of view",[57] Wikipedia's core non-negotiable editorial policy,[58] a reformulation of the "Lack of Bias" policy outlined by Sanger for Nupedia[59] in spring or summer 2000, which covered many of the same core principles.[60]
- In September 2001, collaboration by subject matter in WikiProjects is introduced.[61]
- In February 2002, concerns over the risk of future censorship and commercialization by Bomis Inc (Wikipedia's original host) combined with a lack of guarantee this would not happen, led most participants of the Spanish Wikipedia to break away and establish it independently as the Enciclopedia Libre.[62] Following clarification of Wikipedia's status and non-commercial nature later that year, re-merger talks between Enciclopedia Libre and the re-founded Spanish Wikipedia occasionally took place in 2002 and 2003, but no conclusion was reached. As of July 2007, the two continue to coexist as substantial Spanish language reference sources, with around 36,700 articles (EL) and 248,800 articles (Sp.W)[63] respectively.
- Also in 2002, policy and style issues were clarified with the creation of the Manual of Style, along with a number of other policies and guidelines.[64]
- November 2002 - new mailing lists for WikiEN and Announce are set up, as well as other language mailing lists (e.g. Polish), to reduce the volume of traffic on mailing lists.[6]
- In July 2003, the rule against editing your autobiography is introduced.[65]
- From 10 July to 30 August 2004 two links formerly on the Main Page were replaced by links to overviews. On 27 August 2004 the Community Portal was started,[66] to serve as a focus for community efforts. These were previously accomplished on an informal basis, by individual queries of the Recent Changes, in wiki style, as ad-hoc collaborations between like-minded editors.
- During September to December 2005 following the Seigenthaler controversy and other similar concerns,[41] several anti-abuse features and policies were added to Wikipedia. These were:
- The policy for "Checkuser" (a MediaWiki extension to assist detection of abuse via internet sock-puppetry) was established in November 2005;[67] previously the tool had been viewed more as a system utility, as a result of which there had been no need for a policy covering its use on a more routine basis.[68]
- Creation of new pages on the English Wikipedia was restricted to editors who had created a user account.[69]
- The introduction and rapid adoption of the policy Wikipedia:Biographies of living people, giving a far tighter quality control and fact-check system to biographical articles related to living people.
- The "semi-protection" function and policy,[70] allowing pages to be protected so that only those with an account could edit.
- In May 2006, a new "oversight" feature was introduced on the English Wikipedia, allowing a handful of highly trusted users to permanently erase page revisions containing copyright infringements or libelous or personal information from a page's history. Previous to this, page version deletion was laborious, and also deleted versions remained visible to other administrators and could be un-deleted by them.
- On 1 January 2007, the subcommunity named Esperanza was disbanded by communal consent. Esperanza had begun as an effort to promote "wikilove" and a social support network, but had developed its own subculture and private structures.[71][72] Its disbanding was described as the painful but necessary remedy for a project that had allowed editors to "see themselves as Esperanzans first and foremost".[72] A number of Esperanza's subprojects were integrated back into Wikipedia as free-standing projects, but most of them are now inactive. When the group was founded in September 2005, there had been concerns expressed that it would eventually be condemned as such.[73]
- In April 2007 the results of 4 months policy review by a working group of several hundred editors seeking to merge the core Wikipedia policies into one core policy (See: Wikipedia:Attribution) was polled for community support. The proposal did not gain consensus; a significant view became evident that the existing structure of three strong focused policies covering the respective areas of policy, was frequently seen as more helpful to quality control than one more general merged proposal.
The Wikimedia Foundation and legal structures
- Legal and organizational structure of the Wikimedia Foundation, its executive, and its activities as a foundation.
- In August 2002, shortly after Jimmy Wales announced that he would never run commercial advertisements on Wikipedia, the URL of Wikipedia was changed from wikipedia.com to wikipedia.org (see: .com and .org).
- On 20 June 2003, the Wikimedia Foundation was founded.
- Communications committee was formed in January 2006 to handle media inquiries and emails received for the foundation and Wikipedia via the newly implemented OTRS (a ticket handling system).
- Angela Beesley and Florence Nibart-Devouard were elected to the Board of Trustees of the Wikimedia Foundation. During this time, Angela was active in editing content and setting policy, such as privacy policy, within the Foundation.[74]
- On 10 January 2006, Wikipedia became a registered trademark of Wikimedia Foundation.[75]
- In July 2006, Angela Beesley resigned from the board of the Wikimedia Foundation.[76]
- In June 2006, Brad Patrick was hired to be the first executive director of the Foundation. He resigned in January 2007, and was later replaced by Sue Gardner (June 2007).
- In October 2006, Florence Nibart-Devouard became chair of the board of Wikimedia Foundation.
Projects and landmarks
- Sister projects, and landmarks related to articles, user base, and other statistics.
- 16 January 2001, the first recorded edit of Wikipedia at UuU, although it is suspected there were earlier edits.
- In December 2002, the first sister project, Wiktionary, was created; aiming to produce a dictionary and thesaurus of the words in all languages. It uses the same software as Wikipedia.
- On 22 January 2003, the English Wikipedia was again slashdotted after having reached the 100,000 article milestone with the Hastings, New Zealand article. Two days later, the German language Wikipedia, the largest non-English version, passed the 10,000 article mark.
- On 20 June 2003, the same day that the Wikimedia Foundation was founded, "Wikiquote" was created. A month later, "Wikibooks" was launched. "Wikisource" was set up towards the end of the year.
- In January 2004, Wikipedia passed the 200,000 article milestone in English with the article on Neil Warnock, and reached 450,000 articles for both English and non-English wikis. The next month, the combined article count of the English and non-English wikis reached 500,000.
- On 20 April 2004, the article count of the English wiki reached 250,000.
- On 7 July 2004, the article count of the English wiki reached 300,000.
- On 20 September 2004, Wikipedia reached one million articles in over 105 languages, and received a flurry of related attention in the press.[77] The one millionth article was published in the Hebrew language Wikipedia, and discusses the flag of Kazakhstan.
- On 20 November 2004, the article count of the English Wikipedia reached 400,000.
- On 18 March 2005, Wikipedia passed the 500,000 article milestone in English, with Involuntary settlements in the Soviet Union being announced in a press release as the landmark article.[78]
- In May 2005, Wikipedia became the most popular reference website on the Internet according to traffic monitoring company Hitwise, relegating Dictionary.com to second place.
- On 29 September 2005, the English Wikipedia passed the 750,000 article mark.
- On 1 March 2006, the English language Wikipedia passed the 1,000,000 article mark, with Jordanhill railway station being announced on the Main Page as the milestone article[79]
- On 8 June 2006, the English language Wikipedia passed the 1,000 featured article mark, with Iranian peoples.[80]
- On 15 August 2006 the Wikimedia Foundation launches Wikiversity.[81]
- On 24 November 2006, the English language Wikipedia passed the 1,500,000 article mark, with Kanab ambersnail being announced on the Main Page as the milestone article.[79]
- On 4 April 2007, the first CD selection in English was published as a free download (see 2006 Wikipedia CD Selection).[82]
- On 9 September 2007, the English language Wikipedia passed the 2,000,000 article mark. El Hormiguero, an article which covers a Spanish TV comedy show, is accepted by consensus as the 2,000,000th article.
- On 12 August 2008, the English language Wikipedia passed the 2,500,000 article mark.
Funding
- One of the first[citation needed] fundraisers was held from 18 February 2005 to 1 March 2005, raising $94,000, which was $21,000 more than expected.[83]
- On 6 January 2006, the Q4 2005 fundraiser concluded, raising a total of just over $390,000.[84]
- In June 2007 it was announced that the German Wikipedia will be receiving state funding.[85]
External impact
- In 2007, Wikipedia is deemed fit to be used as a major source by the UK Intellectual Property Office in the Formula One trademark case ruling.[86]
- Over time Wikipedia gained recognition among other traditional media as a "key source" for some current news events, such as the 2004 Indian Ocean earthquake and related tsunami, the biographies of 2008 presidential election candidates,[87] and the 2007 Virginia Tech massacre. The latter article was accessed 750,000 times in two days, with newspapers published local to the shootings adding that "Wikipedia has emerged as the clearinghouse for detailed information on the event."[88]
- On 21 February 2007, Noam Cohen of The New York Times published "A History Department Bans Citing Wikipedia as a Research Source".
- On 27 February 2007, an article in The Harvard Crimson newspaper reported that some of the professors at Harvard University do include Wikipedia in their syllabi, but that there is a split in their perception of using Wikipedia.[89]
Effect of biographical articles
- November 2005: The Seigenthaler controversy. Someone, who later admitted that he wanted to make a joke, wrote in the article that journalist John Seigenthaler had been involved in the 1963 Kennedy assassination.
- December 2006: German comedian "Atze Schröder", who does not want his real name published, sued Arne Klempert, secretary of Wikimedia Deutschland, over the Wikipedia article. The artist later withdrew his complaint but wanted his attorney's costs to be paid by Klempert; the court decided that the artist had to cover those costs himself.[90]
- 16 February 2007: Turkish historian Taner Akçam was briefly detained upon arrival at Montréal-Pierre Elliott Trudeau International Airport because of false information on his biography that he was a terrorist.[91][92]
- September 2008: Changes or "manipulations" at the Sarah Palin article in English Wikipedia have been noticed by the media.
- November 2008: Germany's Left Party politician Lutz Heilmann believed that some remarks in "his" article damaged his reputation. He obtained a court order forcing Wikimedia Deutschland to stop linking from its page to the German Wikipedia at de.wikipedia.org. The result was huge national support for Wikipedia, more donations to Wikimedia Deutschland, and a rise from several dozen daily page views of "Lutz Heilmann" to half a million over the following two days; after a couple of days Heilmann asked the court to withdraw the order.
- December 2008: Wikimedia Nederland, the Dutch chapter, won a preliminary injunction. An entrepreneur whose article linked him with the criminal Willem Holleeder wanted the article deleted. The judge in Utrecht did not side with him, accepting the chapter's argument that it has no influence on the content of the Dutch Wikipedia.[93]
Controversies
- January 2005: The fake charity QuakeAID, in the month following the 2004 Indian Ocean earthquake, attempted to promote itself on its Wikipedia page.
- October 2005: Alan Mcilwraith was exposed as a fake war hero with a Wikipedia page.
- November 2005: The Seigenthaler controversy caused Brian Chase to resign from his employment, after his identity was ascertained by Daniel Brandt of Wikipedia Watch. Following this, the scientific journal Nature undertook a peer reviewed study to test articles in Wikipedia against their equivalents in Encyclopædia Britannica, and concluded they are comparable in terms of accuracy.[94][95] Britannica rejected their methodology and their conclusion.[96] Nature refused to make any apologies, asserting instead the reliability of its study and a rejection of the criticisms.[97] (For studies like this, see Reliability of Wikipedia. For traffic impact see Wikipedia history in images)
- Early-to-mid 2006: The congressional aides biography scandals came to public attention, in which several political aides were caught trying to influence the Wikipedia biographies of several politicians to remove undesirable information (including pejorative statements quoted, or broken campaign promises), add favorable information or "glowing" tributes, or replace the article in part or whole by staff authored biographies. The staff of at least five politicians were implicated: Marty Meehan, Norm Coleman, Conrad Burns, Joe Biden, Gil Gutknecht.[98] In a separate but similar incident the campaign manager for Cathy Cox, Morton Brilliant, resigned after being found to have added negative information to the Wikipedia entries of political opponents.[99] Following media publicity, the incidents tapered off around August 2006.
- July 2006: Joshua Gardner was exposed as a fake Duke of Cleveland with a Wikipedia page.
- January 2007: English-language Wikipedians in Qatar were briefly blocked from editing, following a spate of vandalism, by an administrator who did not realize that the country's internet traffic is routed through a single IP address. Multiple media sources promptly declared that Wikipedia was banning Qatar from the site.[100]
- On 23 January 2007, a Microsoft employee offered to pay Rick Jelliffe to review and change certain Wikipedia articles regarding an open-source document standard which was rival to a Microsoft format.[101]
- In February 2007, The New Yorker magazine issued a rare editorial correction that a prominent English Wikipedia editor and administrator known as "Essjay", had invented a persona using fictitious credentials.[102][103] The editor, Ryan Jordan, became a Wikia employee in January 2007 and divulged his real name; this was noticed by Daniel Brandt of Wikipedia Watch, and communicated to the original article author. (See: Essjay controversy)
- February 2007: Fuzzy Zoeller sued a Miami firm because defamatory information was added to his Wikipedia biography in an anonymous edit that came from their network.
- 16 February 2007: Turkish historian Taner Akçam was briefly detained upon arrival at a Canadian airport because of false information on his biography indicating that he was a terrorist.
- In June 2007, an anonymous user posted hoax information that, by coincidence, foreshadowed the Chris Benoit murder-suicide, hours before the bodies were found by investigators. The discovery of the edit attracted widespread media attention and was first covered in sister site Wikinews.
- In October 2007, in their obituaries of recently-deceased TV theme composer Ronnie Hazlehurst, many British media organisations reported that he had co-written the S Club 7 song "Reach". In fact, he hadn't, and it was discovered that this information had been sourced from a hoax edit to Hazlehurst's Wikipedia article.[104]
- In February 2007,[105] Barbara Bauer, a literary agent, sued Wikimedia for defamation and causing harm to her business, the Barbara Bauer Literary Agency.[106] In Bauer v. Glatzer, Bauer claimed that information on Wikipedia critical of her abilities as a literary agent caused this harm. The Electronic Frontier Foundation defended Wikipedia[107] and moved to dismiss the case on May 2, 2008.[108] The case against the Wikimedia Foundation was dismissed on 1 July 2008.[109]
Notable forks and derivatives
See Wikipedia:Mirrors and forks for a partial list of Wikipedia mirrors and forks. No list of sites utilizing the software is maintained, although a significant number of sites do. The most notable fork is Larry Sanger's Citizendium, which has an expert-led, top-down culture, the absence of which in Wikipedia Sanger views as a major concern.[110] (see also Nupedia).
Publication on other media
The German Wikipedia was the first to be partly published using media other than the internet, including releases on CD in November 2004.[111][112] Originally, Directmedia also announced plans to print the German Wikipedia in its entirety, in 100 volumes of 800 pages each. Publication was due to begin in October 2006 and finish in 2010. In March 2006, however, this project was called off.[113]
In September 2008, Bertelsmann published a 1,000-page volume with a selection of popular German Wikipedia articles. Bertelsmann voluntarily paid 1 euro per copy sold to Wikimedia Deutschland.[114]
The first CD version containing a selection of articles from the English Wikipedia was published in April 2006 by SOS Children as the 2006 Wikipedia CD Selection.[115][116][117][118]
Lawsuits
In limited ways, the Wikimedia Foundation is protected by Section 230 of the Communications Decency Act. In the defamation action Bauer et al. v. Glatzer et al., it was held that Wikimedia had no case to answer due to the provisions of this section.[119] A similar law in France caused a lawsuit to be dismissed in October 2007.[120]
Other notable occurrences
Early roles of Wales and Sanger
Both Wales and Sanger played important roles in the early stages of Wikipedia. Sanger initially brought the wiki concept to Wales and suggested it be applied to Nupedia; after some initial skepticism, Wales agreed to try it.[9][6] Wales stated in October 2001 that "Larry had the idea to use Wiki software."[12] Sanger coined the portmanteau "Wikipedia" as the project name.[6] In review, Larry Sanger conceived of a wiki-based encyclopedia as a strategic solution to Nupedia's inefficiency problems.[121] In terms of project roles, Sanger spearheaded and pursued the project as its leader in its first year, and did most of the early work in formulating policies (including "Ignore all rules"[122] and "Neutral point of view"[123]) and building up the community.[121] Upon his departure in March 2002, Sanger emphasized that the main issues were purely the cessation of Bomis' funding for his role, which was not viable part-time, and his changing personal priorities;[7] by 2004, however, the two had drifted apart and Sanger became more critical. Two weeks after the launch of Citizendium, Sanger criticized Wikipedia, describing it as "broken beyond repair."[124]
Wales claims to be the founder of Wikipedia;[125] however, as explained by Brian Bergstein of the Associated Press, "Sanger has long been cited as a co-founder."[121][126] Wales later disputed this, stating, "He used to work for me [...] I don't agree with calling him a co-founder, but he likes the title."[127] There is no evidence from before January 2004 of Wales disputing Sanger's status as co-founder;[128] indeed, Wales identified himself as "co-founder" as late as August 2002.[129][121][126][130][131]
Blocking of Wikipedia
Wikipedia has been blocked on some occasions by national authorities. To date these have related to the People's Republic of China, Iran, Tunisia, Uzbekistan and Syria.
Mainland China (multiple occasions):
- June 2004: Access to the Chinese Wikipedia from Beijing blocked on the fifteenth anniversary of the Tiananmen Square protests of 1989. Possibly related to this, on May 31 an article from the IDG News Service was published, discussing the Chinese Wikipedia's treatment of the protests.[132]
- September 2004: A second and less serious outage occurred.[citation needed]
- October 2005 to around mid October 2006: For the first few days the English Wikipedia seems to have been unblocked in most provinces in China, while users were still unable to access the Chinese version in certain provinces, varying by ISP. By November, both versions seemed to be accessible in all provinces and by all ISPs. The end of the block coincided with the Chinese Wikipedia's 100,000th article milestone.[133][134][135]
The first block had an effect on the vitality of Chinese Wikipedia, which suffered sharp dips in various indicators such as the number of new users, the number of new articles, and the number of edits. In some cases, it took anywhere from six to twelve months in order to recover to the levels of May 2004.
On 31 July 2008, Beijing unblocked several previously censored websites, including the BBC Chinese site.[136]
Syria
Access to the Arabic Wikipedia was blocked between 30 April 2008 and 13 February 2009 (other languages remained accessible).
Tunisia
The Wikimedia website was blocked for a few days in Tunisia (23 November 2006 to 27 November 2006).
United Kingdom
On 5 December 2008, users in the United Kingdom were affected by a block of a page (Virgin Killer) and associated picture (Image:Virgin Killer.jpg), following a claim that the image was "potentially illegal" under the Protection of Children Act 1978. An estimated 95% of British users were affected by the block, which was put in place on the recommendation of the Internet Watch Foundation.[137] The IWF's recommendation was rescinded on 9 December 2008.[138]
Uzbekistan
Access to Uzbek Wikipedia was blocked in Uzbekistan on 10 January 2008;[139] the block was lifted 5 March 2008. This was the second time Wikipedia had been blocked in Uzbekistan; the first case was in 2007.
See also
References
- ^ "The Free Universal Encyclopedia and Learning Resource".
- ^ [1]
- ^ "The Free Encyclopedia Project".
- ^ Poe, Marshall (September 2006). "The Hive". The Atlantic Monthly. Retrieved on 25 March 2007.
- ^ a b Sidener, Jonathan (6 December 2004). "Everyone's Encyclopedia". The San Diego Union-Tribune. Retrieved on 25 March 2007.
- ^ a b c d "The Early History of Nupedia and Wikipedia: A Memoir - Part I" and "Part II", Slashdot, April 2005. Retrieved on 25 March 2007.
- ^ a b My resignation: Larry Sanger (meta.wikimedia.com)
- ^ a b "Ben Kovitz". WikiWikiWeb. Retrieved on 25 March 2007.
- ^ a b c Moody, Glyn (13 July 2006). "This time, it'll be a Wikipedia written by experts". The Guardian. Retrieved on 25 March 2007.
- ^ a b Sidener, Jonathan (23 September 2006). "Wikipedia co-founder looks to add accountability, end anarchy". The San Diego Union-Tribune. Retrieved on 25 March 2007. "The origins of Wikipedia date to 2000, when Sanger was finishing his doctoral thesis in philosophy and had an idea for a Web site."
- ^ Poe, Marshall (September 2006). "The Hive". The Atlantic Monthly. p. 3. Retrieved on 25 March 2007.
- ^ a b Wales, Jimmy (30 October 2001). "LinkBacks?" (Email). wikipedia-l archives (Bomis). Retrieved on 25 March 2007.
- ^ "Assignment Zero First Take: Wiki Innovators Rethink Openness". Wired News. 3 May 2007. Retrieved on 1 November 2007. Wired.com states: "Wales offered the following on-the-record comment in an e-mail to NewAssignment.net editor [and NYU Professor] Jay Rosen ...'Larry Sanger was my employee working under my direct supervision during the entire process of launching Wikipedia. He was not the originator of the proposal to use a wiki for the encyclopedia project -- that was Jeremy Rosenfeld'."
- ^ Rogers Cadenhead. "Wikipedia Founder Looks Out for Number 1". Retrieved on 15 October 2006.
- ^ Also stated on Wikipedia on 2 December 2005 (permanent reference).
- ^ Larry Sanger (10 January 2001). "Let's make a wiki". Nupedia mailing list.
- ^ "Wikipedia:Wikipedia's oldest articles", Wikipedia. Retrieved on 30 January 2007.
- ^ Nupedia and Project Gutenberg Directors Answer 5 March 2001
- ^ Everything2 Hits One Million Nodes 29 March 2001
- ^ Britannica or Nupedia? The Future of Free Encyclopedias 25 July 2001
- ^ "Fact driven? Collegial? This site wants you", New York Times, 20 September 2001
- ^ Alternative language wikipedias
- ^ History of the Catalan Homepage
- ^ The Wayback Machine: An early Japanese Wikipedia HomePage (revision #3), dated 20 March 2001 23:00. Accessed 4 November 2008.
- ^ An Internet Archive's snapshot of English Wikipedia HomePage, dated 30 March 2001, showing links to the three first sister projects, "Deutsch (German)", "Catalan", and "Nihongo (Japanese)".
- ^ Multilingual monthly statistics
- ^ First edition in the Catalan Wikipedia
- ^ This table, for instance, misses Japanese and German articles such as this one and this one, both dated 6 April 2001.
- ^ The Documentation on the French Wikipedia mentions the date of 23 March 2001, but this date is not supported by Wikipedia snapshots on the Internet Archive, nor by Jason Richney's letter, which was dated 11 May 2001 (see below).
- ^ Letter of Jason Richey to wikipedia-l mailing list 11 May 2001
- ^ HomePage from the Internet Archive
- ^ Wikipedia:Announcements May 2001
- ^ International_Wikipedia
- ^ Wikipedia: Announcements 2001
- ^ International wikipedias statistics
- ^ Anderson, Nate (25 February 2007). "Citizendium: building a better Wikipedia". Ars Technica. Retrieved on 25 March 2007.
- ^ Network Solutions (2007) WHOIS domain registration information results for wikipedia.com from Network Solutions Accessed 27 July 2007.
- ^ Network Solutions (2007) WHOIS domain registration information results for wikipedia.org from Network Solutions Accessed 27 July 2007.
- ^ Wales on Sunday (26 August 2001) Knowledge at your fingertips. Game On : Internet Chat.(writing, "Both Encarta and Britannica are official publications with well-deserved reputations. But there are other options, such as the homemade encyclopaedias. One is Wikipedia (www. wikipedia. com) which uses clever software to build an encyclopaedia from scratch. Wiki is software installed on a web server that allows anyone to edit any of the pages. At the Wikipedia, anyone can write about any subject they know about. The idea is that over time, enough experts will offer their knowledge for free and build up the world's ultimate hand-built database of knowledge. The disadvantage is that it's still an ongoing project. So far about 8,000 articles have been written and the editors are aiming for 100,000.")
- ^ October, 2001 snapshot of the homepage shows the "Breaking News" header up top as well as the September 11, 2001 block of articles under "Current events"; the 9/11 page shows the activist nature of the page, as well as the large number of subtopics created to cover the event.
- ^ a b WP:BLP started 17 December 2005 with narrative "I started this due to the Daniel Brandt situation". [2]
- ^ Similar Search Results: Google Wins 29 January 2007
- ^ See the special page: Special:Statistics: 5,078,036 registered user accounts as at 13 August 2007, excluding anonymous editors who have not created accounts.
- ^ Source: Wikipedia:Size comparisons as at 13 August 2007
- ^ From around Q3 2006 Wikipedia's growth rate has been approximately linear, source: Wikipedia:Statistics - new article count by month 2006-2007.
- ^ E.g., cases such as Crystal Gail Mangum and Daniel Brandt.
- ^ Telegraph 30 May 2009 20:30: Church of Scientology members banned from editing Wikipedia
- ^ a b c Shea, Danny (29 May 2009). "Wikipedia Bans Scientology From Site". The Huffington Post. Retrieved on 29 May 2009.
- ^ a b Metz, Cade (29 May 2009). "Wikipedia bans Church of Scientology". The Register. Retrieved on 29 May 2009.
- ^ "Server swapping soon". Retrieved on 10 February 2007.
- ^ "Wikipedia:Categorization", Wikipedia. Retrieved on 30 January 2007.
- ^ "Wikipedia:Template namespace", Wikipedia. Retrieved on 17 September 2007.
- ^ "Wikipedia:Featured articles", Wikipedia. Retrieved on 30 January 2007.
- ^ "International logo vote/Finalists". Meta-Wiki. Wikimedia. Retrieved on 8 July 2006.
- ^ "Portal:Biology", English Wikipedia. Retrieved on 31 January 2007.
- ^ Portals on German Wikipedia ordered by date of creation.
- ^ NeutralPointOfView
- ^ "A few things are absolute and non-negotiable, though. NPOV for example." in statement by Jimbo Wales in November 2003 and, in this thread reconfirmed by Jimbo Wales in April 2006 in the context of lawsuits.
- ^ Nupedia.com editorial policy guidelines. Version 3.31 (16 November 2000). Retrieved 7 September 2007.
- ^ "Nupedia articles are, in terms of their content, to be unbiased. There may be respectable reference works that permit authors to take recognizable stands on controversial issues, but this is not one of them ... "On every issue ... is it very difficult or impossible for the reader to determine what the view is to which the author adheres?" ... for each controversial view discussed, the author of an article (at a bare minimum) mention various opposing views that are taken seriously by any significant minority of experts (or concerned parties) on the subject ... In a final version of the article, every party to the controversy in question must be able to judge that its views have been fairly presented, or as fairly as is possible in a context in which other, opposing views must also be presented as fairly as possible." [3]
- ^
- ^ 'Why we are here and not in Wikipedia (in Spanish, under GFDL)
- ^
- ^ First substantial edit to Wikipedia:Manual of Style, Wikipedia (23 August 2002). Retrieved on 30 January 2007.
- ^
- ^ "Wikipedia:Community Portal", Wikipedia. Retrieved on 30 January 2007.
- ^ "CheckUser policy", Meta-Wiki. Retrieved on 2007-01-25. Checkuser function had previously existed, but was known as Espionage -- for example, in the Arbitration Committee case of JarlaxleArtemis.
- ^ Checkuser proposal
- ^ "Page creation restrictions", Wikipedia Signpost / English Wikipedia. Retrieved on 31 January 2007.
- ^ "Semi-protection policy", Wikipedia Signpost / English Wikipedia. Retrieved on 30 January 2007.
- ^ Esperanza organization disbanded after deletion discussion 2 January 2007
- ^ a b
- ^ New group aims to promote Wiki-Love 19 September 2005
- ^ Riehle, Dirk. "How and Why Wikipedia Works: An Interview with Angela Beesley, Elisabeth Bauer, and Kizu Naoko", 2006.
- ^ "Wikipedia:Wikipedia Signpost/2006-01-16/Trademark registered". Wikipedia. 16 January 2006. Retrieved on 14 January 2007.
- ^ "Angela Beesley resigns from Wikimedia Foundation board", Wikimedia Foundation press release, 7 July 2006.
- ^ One million Wikipedia articles
- ^ Wikipedia Publishes 500,000th English Article
- ^ a b While this article was announced as the milestone on the Main Page, multiple articles qualified due to the continuous creation and deletion of pages on the site.
- ^ Wikimedia Foundation: English Wikipedia Announces Thousandth Featured Article
- ^ Welcome speech, Jimbo Wales, Wikimania 2006 (audio)
- ^ A Schools Global Citizen Resource from SOS Children
- ^ "Fund drives/2005/Q1", Wikimedia Foundation. Retrieved on 25 January 2007.
- ^ "Fund drives/2005/Q4", Wikimedia Foundation. Retrieved on 25 January 2007.
- ^ German Wikipedia receives state funding 26 June 2007
- ^ In deciding the trademark of F1 racing, the websites of news organisations. [Formula One's lawyer] did not express any concerns about the Wikipedia evidence [presented by the plaintiff]. I consider that the evidence from Wikipedia can be taken at face value."
- ^."
- ^ Source: Wikipedia emerges as key source for Virginia Tech shootings - cyberjournalist.net citing the New York Times [4], stating: "Even The Roanoke Times, which is published near Blacksburg, Va., where the university is located, noted on Thursday that Wikipedia 'has emerged as the clearinghouse for detailed information on the event'."
- ^ Child, Maxwell L.,"Professors Split on Wiki Debate", The Harvard Crimson, by: Maxwell L. Child, Monday, 26 February 2007.
- ^ "Atze muss zahlen", Klempert's blog "recent changes", 27 June 2007.
- ^ "Caught in the deadly web of the internet", Robert Fisk, The Independent, 21 April 2007. Retrieved 24 July 2007.
- ^ "A question of authority", by Paul Jay, 22 June 2007, CBC News. Retrieved 24 July 2007.
- ^ News release of Vereniging Wikimedia Nederland, retrieved 10 December 2008.
- ^ Internet encyclopaedias go head to head
- ^ The (Nature) peer review
- ^ Britannica: Fatally Flawed. Refuting the recent study on encyclopedic accuracy by the journal Nature (PDF)
- ^ Nature's responses to Encyclopaedia Britannica, Nature (23 March 2006). Retrieved on 25 January 2007.
- ^ See for example: this article on the scandal. The activities documented were:
- ^ Information included the mention of an opponent's son's arrest in a fatal drunk driving accident, and the allegation of questionable business practices of another [5]. See article Morton Brilliant for detailed citations.
- ^ "Wikipedia Founder Refutes Claims That It Banned Qatar" by Thomas Claburn, InformationWeek, 2 January 2007
- ^ Bergstein, Brian (23 January 2007). "Microsoft offers cash for Wikipedia edit". MSNBC. Retrieved on 1 February 2007.
- ^ Schiff, Stacy (24 July 2006). "Annals of Information: Know It All: Can Wikipedia conquer expertise?". The New Yorker. Retrieved on 16 April 2007.
- ^ Finkelstein, Seth (8 March 2007). "Read me first". Technology. The Guardian. Retrieved on 16 April 2007.
- ^ Braindead obituarists hoaxed by Wikipedia Andrew Orlowski, The Register, 3 October 2007
- ^ Docket number L-001169-07 in Monmouth Court, New Jersey. Records may be searched here.
- ^ Bauer v. Wikimedia et al. | Electronic Frontier Foundation
- ^ EFF and Sheppard Mullin Defend Wikipedia in Defamation Case | Electronic Frontier Foundation
- ^
- ^
- ^ Wikipedia founder forks Wikipedia 18 September 2006
- ^ "Wikipedia, Die freie Enzyklopädie" (in German). Retrieved on 25 April 2007.
- ^ "Neue Wikipedia-DVD im Handel und zum Download" (in German). Retrieved on 25 April 2007.
- ^ "Wikipedia wird noch nicht gedruckt" (in German). Retrieved on 25 April 2007.
- ^ Titelinformationen, Bertelsmann site. Retrieved 7 October 2008.
- ^ "SOS Children releases 2006 Wikipedia CD Selection". SOS Children. 4 June 2006. Retrieved on 25 April 2007.
- ^ "Wikipedia 0.5 available on a CD-ROM". April 2007. Retrieved on 25 April 2007.
- ^ "Wikipedia maakt cd voor internetlozen" (in Dutch). tweakers.net. 25 April. Retrieved on 25 April 2007.
- ^ "Encyclopodia – the encyclopedia on your iPod". Sourceforge. Encyclopodia site. Retrieved on 25 April 2007.
- ^ Judge tosses Matawan literary agent's defamation lawsuit against Wikipedia - Asbury Park Press
- ^ Wikipedia:Wikipedia Signpost/2007-11-05/French lawsuit 5 November 2007
- ^ a b c d Bergstein, Brian (25 March 2007). "Sanger says he co-started Wikipedia". msnbc.com (Associated Press). Retrieved on 28 March 2007.
- ^ "Rules To Consider". Ignore all rules (Internet Archive). Retrieved on 25 March 2007.
- ^ Schiff, Stacy (24 July 2006). "Know It All". Can Wikipedia conquer expertise? (The New Yorker). Retrieved on 25 March 2007.
- ^ Thomson, Iain (13 April 2007). "Wikipedia 'broken beyond repair' says co-founder". Information World Review. Retrieved on 15 April 2007.
- ^ Mitchell, Dan (24 December 2005). "Insider Editing at Wikipedia". The New York Times. Retrieved on 25 March 2007.
- ^ a b Peter Meyers (20 September 2001). "Fact-Driven? Collegial? This Site Wants You". The New York Times. Retrieved on 18 April 2007.
- ^ James Niccolai, Wikipedia taking on the vandals in Germany, PC Advisor, 26 September 2006.
- ^ Bishop, Todd. (January 26, 2004) Seattle Post-Intelligencer. Microsoft Notebook: Wiki pioneer planted the seed and watched it grow. Section: Business; Page D1.
- ^."
- ^ Heim, Judy (4 September 2001). "Free the Encyclopedias!". Technology Review. Retrieved on 25 March 2007.
- ^ Sanger, Larry. "My role in Wikipedia (links)". larrysanger.org (Larry Sanger). Retrieved on 25 March 2007.
- ^ Chinese Build Free Net Encyclopedia
- ^ Chart: Wikipedia access in China
- ^ Chinese Wikipedia now fully unblocked?
- ^ Friend in high place unblocks Wikipedia, Fortune Magazine
- ^ "Beijing unblocks BBC Chinese site", BBC, 31 July 2008
- ^ Satter, Raphael G. (7 December 2008). "Wikipedia article blocked in U.K. for nude photo of a girl". Associated Press. Retrieved on 7 December 2008.
- ^ "IWF statement regarding Wikipedia webpage". Internet Watch Foundation. 9 December 2008. Retrieved on 10 December 2008.
- ^ Oʻzbekcha wikipedia yana yopildimi?(Uzbek)
External links
Wikipedia records and archives
- Wikipedia's project files contain a large quantity of reference and archive material. Useful resources on Wikipedia history within Wikipedia are:
- Historical summaries
- Category:Wikipedia years - historical events by year
Wikipedia:Wikipedia's oldest articles
History of Wikipedia - from the Wikipedia:Meta
Wikipedia:Historic debates
Wikipedia:Wikipedia records
meta:Wikimedia News - news and milestones index from all Wikipedias
Wikipedia:History of Wikipedia bots
- Size and statistics
- stats.wikimedia.org - the Wikimedia Foundation's main interface for all project statistics, including the various and combined Wikipedias.
Wikipedia:Milestones
Wikipedia:Statistics
Wikipedia:Size of Wikipedia
- Discussion and debate archives
- Other
- Wikipedia:CamelCase and Wikipedia
Nostalgia Wikipedia - a snapshot of Wikipedia from 20 December 2001, running the current version of MediaWiki for security reasons but using a skin that looks like the software of the time.
Larry Sanger about the origins of Wikipedia
Wikipedia:Volunteer Fire Department - handling of major editorial influx. Disbanded when no longer needed (2004)
Wikipedia:Magnus Manske Day - mediawiki software goes live into production
- "Truth in Numbers: The Wikipedia Story", a 2007 documentary.
Third party
- The Free Universal Encyclopedia and Learning Resource — Free Software Foundation endorsement of Nupedia (later updated to include Wikipedia) 1999.
- Even older Wikipedia snapshot - 28 February 2001
- Early Wikipedia snapshot - 30 March 2001
- New York Times on Wikipedia, September 2001
- Larry Sanger, The Early History of Nupedia and Wikipedia: A Memoir and Part II Slashdot (18 April 2005 - 19 April 2005)
- Giles, Jim, Internet encyclopaedias go head to head, Nature comparison between Wikipedia and Britannica, 14 December 2005
- Fatally Flawed: Refuting the recent study on encyclopedic accuracy by the journal Nature, Encyclopedia Britannica Inc., March 2006
- Nature's responses to Encyclopaedia Britannica, Nature, 23 March 2006
Hi all,
I am using the hardware ASRC on an i.MX6UL Freescale board through Linux ioctl driver calls. There are two threads in my application: a main thread and an ASRC thread. In the ASRC thread, I call the ASRC convert ioctl. In general, once I have data I call the ioctl; otherwise the ASRC thread goes into a conditional wait. The code inside the ioctl is supposed to be lightweight, so that the main thread can carry on with other tasks while the hardware ASRC is converting the data. But I see serious loading, and my main thread runs slowly. When the ioctl call in the ASRC thread is switched off, the main thread runs at the faster rate that is needed even in the presence of the ioctl call. Could anyone please help me understand this behavior of the ioctl call? Thanks
Hi,
Actually, there is a unit test for the ASRC driver in your target image root fs; the path is "/unit_tests/mxc_asrc_test.out". You can run it:
$ . /unit_tests/autorun-asrc.sh
Run this to check whether the driver is available. Otherwise, it is an issue in your threads.
Hi Yifang Guo,
Thanks for your suggestions. The ASRC works fine when tested independently; there is no issue with quality. My goal is to use it as a parallel resource, so I am calling it from a separate thread. I assume the ASRC convert ioctl should not take many cycles from the core after it is called, because I want to use the core for other tasks running in parallel through threading. Please correct me if I am mistaken. Thanks.
Hi ,
The ASRC driver implements CONVERT in a hardware module (the Asynchronous Sample Rate Converter); it does not spend ARM core execution cycles while converting the sample rate, so you can use the ARM core for other tasks simultaneously.
Try calling ioctl ASRC_CONVERT periodically in your thread, e.g.:
while (1) {
    ioctl(fd_asrc, ASRC_CONVERT, &buf_info);
    msleep(100); // escape time from this thread; then the CPU can run other threads.
}
Attached "mxc_asrc_test" source code for reference.
Hi Yifang,
Which header file has this function msleep(). Is this compulsory to keep?
#include <unistd.h>
Note that msleep() is a kernel-space function; in a user-space application, use usleep(100 * 1000) or nanosleep() from <unistd.h>/<time.h> instead. Sleeping yields this thread so the CPU can run other threads.
Hi jimmychan,
Thanks for your reply..
The ioctl call goes as follows:
err = ioctl(fd_asrc, ASRC_CONVERT, &buf_info);
fd_asrc is the id of the hardware asrc
ASRC_CONVERT is the command
struct asrc_convert_buffer buf_info has the details of input and output addresses with the lengths as well
This call is used in the mxc_asrc driver code. I look forward to your views and suggestions. Thanks.
Could you tell me more details? Which driver are you talking about? Which ioctl function did you call that had serious loading?
I presented in my last post "Calendar and Time Zone in C++20: Calendar Dates" the new calendar-related data types. Today, I go one step further and interact with them.
Assume you have a calendar date such as year(2100)/2/29. Your first question may be: Is this date valid?
The various calendar types in C++20 have a function ok. This function returns true if the date is valid.
// leapYear.cpp
#include <iostream>
#include "date.h"
int main() {
std::cout << std::boolalpha << std::endl;
using namespace date;
std::cout << "Valid days" << std::endl; // (1)
day day31(31);
day day32 = day31 + days(1);
std::cout << " day31: " << day31 << "; ";
std::cout << " day31.ok(): " << day31.ok() << std::endl;
std::cout << " day32: " << day32 << "; ";
std::cout << "day32.ok(): " << day32.ok() << std::endl;
std::cout << std::endl;
std::cout << "Valid months" << std::endl; // (2)
month month1(1);
month month0(0);
std::cout << " month1: " << month1 << "; ";
std::cout << " month1.ok(): " << month1.ok() << std::endl;
std::cout << " month0: " << month0 << "; ";
std::cout << "month0.ok(): " << month0.ok() << std::endl;
std::cout << std::endl;
std::cout << "Valid years" << std::endl; // (3)
year year2020(2020);
year year32768(-32768);
std::cout << " year2020: " << year2020 << "; ";
std::cout << " year2020.ok(): " << year2020.ok() << std::endl;
std::cout << " year32768: " << year32768 << "; ";
std::cout << "year32768.ok(): " << year32768.ok() << std::endl;
std::cout << std::endl;
std::cout << "Leap Years" << std::endl; // (4)
constexpr auto leapYear2016{year(2016)/2/29};
constexpr auto leapYear2020{year(2020)/2/29};
constexpr auto leapYear2024{year(2024)/2/29};
std::cout << " leapYear2016.ok(): " << leapYear2016.ok() << std::endl;
std::cout << " leapYear2020.ok(): " << leapYear2020.ok() << std::endl;
std::cout << " leapYear2024.ok(): " << leapYear2024.ok() << std::endl;
std::cout << std::endl;
std::cout << "No Leap Years" << std::endl; // (5)
constexpr auto leapYear2100{year(2100)/2/29};
constexpr auto leapYear2200{year(2200)/2/29};
constexpr auto leapYear2300{year(2300)/2/29};
std::cout << " leapYear2100.ok(): " << leapYear2100.ok() << std::endl;
std::cout << " leapYear2200.ok(): " << leapYear2200.ok() << std::endl;
std::cout << " leapYear2300.ok(): " << leapYear2300.ok() << std::endl;
std::cout << std::endl;
std::cout << "Leap Years" << std::endl; // (6)
constexpr auto leapYear2000{year(2000)/2/29};
constexpr auto leapYear2400{year(2400)/2/29};
constexpr auto leapYear2800{year(2800)/2/29};
std::cout << " leapYear2000.ok(): " << leapYear2000.ok() << std::endl;
std::cout << " leapYear2400.ok(): " << leapYear2400.ok() << std::endl;
std::cout << " leapYear2800.ok(): " << leapYear2800.ok() << std::endl;
std::cout << std::endl;
}
In the program, I check whether a given day (line 1), a given month (line 2), or a given year (line 3) is valid. The range of a day is [1, 31], of a month [1, 12], and of a year [-32767, 32767]. Consequently, the ok call on the corresponding out-of-range values returns false. Two facts are interesting in the output. First, if a value is not valid, the output displays "is not a valid day", "is not a valid month", or "is not a valid year". Second, month values are displayed in their string representation.
You can apply the ok-call on a calendar date. Now it's quite easy to check if a specific calendar date is a leap day and, therefore, the corresponding year a leap year. In the worldwide used Gregorian calendar, the following rules apply:
- Each year that is exactly divisible by 4 is a leap year,
- except for years that are exactly divisible by 100,
- unless they are also exactly divisible by 400.
Too complicated? The program leapYear.cpp exemplifies these rules.
The extended chrono library makes it quite comfortable to ask for the time duration between calendar dates.
Without further ado, the following program queries a few calendar dates.
// queryCalendarDates.cpp
#include "date.h"
#include <iostream>
int main() {
using namespace date;
std::cout << std::endl;
auto now = std::chrono::system_clock::now(); // (1)
std::cout << "The current time is: " << now << " UTC\n";
std::cout << "The current date is: " << floor<days>(now) << std::endl;
std::cout << "The current date is: " << year_month_day{floor<days>(now)} << std::endl;
std::cout << "The current date is: " << year_month_weekday{floor<days>(now)} << std::endl;
std::cout << std::endl;
auto currentDate = year_month_day(floor<days>(now)); // (2)
auto currentYear = currentDate.year();
std::cout << "The current year is " << currentYear << '\n';
auto currentMonth = currentDate.month();
std::cout << "The current month is " << currentMonth << '\n';
auto currentDay = currentDate.day();
std::cout << "The current day is " << currentDay << '\n';
std::cout << std::endl;
// (3)
auto hAfter = floor<std::chrono::hours>(now) - sys_days(January/1/currentYear);
std::cout << "It has been " << hAfter << " since New Year!\n";
auto nextYear = currentDate.year() + years(1); // (4)
auto nextNewYear = sys_days(January/1/nextYear);
auto hBefore = sys_days(January/1/nextYear) - floor<std::chrono::hours>(now);
std::cout << "It is " << hBefore << " before New Year!\n";
std::cout << std::endl;
// (5)
std::cout << "It has been " << floor<days>(hAfter) << " since New Year!\n";
std::cout << "It is " << floor<days>(hBefore) << " before New Year!\n";
std::cout << std::endl;
}
With the C++20 extension, you can directly display a time point, such as now (line 1). std::chrono::floor converts the time point to a day (std::chrono::sys_days). This value can be used to initialize the calendar type std::chrono::year_month_day. Finally, when I put the value into a std::chrono::year_month_weekday calendar type, I get the answer that this specific day is the 3rd Tuesday in October.
Of course, I can also ask a calendar date for its components, such as the current year, month, or day (line 2).
Line (3) is the most interesting one. When I subtract the first of January of the current year from the current date in hours resolution, I get the hours since New Year. Conversely, when I subtract the current date in hours resolution from the first of January of the next year (line 4), I get the hours until New Year. If you don't like the hours resolution, line 5 displays the values in days resolution.
I want to know the weekdays of my birthdays.
Query Weekdays
Thanks to the extended chrono library, it is quite easy to get the weekday of a given calendar date.
// weekdaysOfBirthdays.cpp
#include <cstdlib>
#include <iostream>
#include "date.h"
int main() {
std::cout << std::endl;
using namespace date;
int y;
int m;
int d;
std::cout << "Year: "; // (1)
std::cin >> y;
std::cout << "Month: ";
std::cin >> m;
std::cout << "Day: ";
std::cin >> d;
std::cout << std::endl;
auto birthday = year(y)/month(m)/day(d); // (2)
if (not birthday.ok()) { // (3)
std::cout << birthday << std::endl;
std::exit(EXIT_FAILURE);
}
std::cout << "Birthday: " << birthday << std::endl;
auto birthdayWeekday = year_month_weekday(birthday); // (4)
std::cout << "Weekday of birthday: " << birthdayWeekday.weekday() << std::endl;
auto currentDate = year_month_day(floor<days>(std::chrono::system_clock::now()));
auto currentYear = currentDate.year();
auto age = (int)currentDate.year() - (int)birthday.year(); // (5)
std::cout << "Your age: " << age << std::endl;
std::cout << std::endl;
std::cout << "Weekdays for your next 10 birthdays" << std::endl; // (6)
for (int i = 1, newYear = (int)currentYear; i <= 10; ++i ) {
std::cout << " Age " << ++age << std::endl;
auto newBirthday = year(++newYear)/month(m)/day(d);
std::cout << " Birthday: " << newBirthday << std::endl;
std::cout << " Weekday of birthday: "
<< year_month_weekday(newBirthday).weekday() << std::endl;
}
std::cout << std::endl;
}
First, the program asks you for the year, month, and day of your birthday (line 1). Based on the input, a calendar date is created (line 2) and checked for validity (line 3). To display the weekday of your birthday, I use the calendar date to fill the calendar type std::chrono::year_month_weekday (line 4). To get the int representation of the calendar type year, I have to convert it to int (line 5). Now I can display your age. Finally, the for loop displays, for each of your next ten birthdays (line 6), the following information: your age, the calendar date, and the weekday. I only have to increment the age and newYear variables.
Here is a run of the program with my birthday.
One important component in my posts on the extended chrono library is still missing: time zones.
Suppose we have a sorted array that has been rotated at some pivot unknown beforehand. We have to find the minimum element in that array. So if the array is [4,5,5,5,6,8,2,3,4], the minimum element is 2.
To solve this, we will follow these steps −
Define one method called search(); this takes arr, low, and high
if low = high, then return arr[low]
mid := low + (high - low) / 2
ans := inf
if arr[low] < arr[mid], then ans := min of arr[low] and search(arr, mid, high)
otherwise when arr[high] > arr[mid], then ans := min of arr[mid] and search(arr, low, mid)
otherwise when arr[low] = arr[mid], then ans := min of arr[low] and search(arr, low + 1, high)
otherwise when arr[high] = arr[mid], then ans := min of arr[high] and search(arr, low, high - 1)
return ans
From the main method, call search(nums, 0, size of nums - 1)
Let us see the following implementation to get better understanding −
#include <bits/stdc++.h>
using namespace std;
class Solution {
public:
   int search(vector<int>& arr, int low, int high){
      if(low == high){
         return arr[low];
      }
      int mid = low + (high - low) / 2;
      int ans = INT_MAX;
      if(arr[low] < arr[mid]){
         ans = min(arr[low], search(arr, mid, high));
      } else if(arr[high] > arr[mid]){
         ans = min(arr[mid], search(arr, low, mid));
      } else if(arr[low] == arr[mid]){
         ans = min(arr[low], search(arr, low + 1, high));
      } else if(arr[high] == arr[mid]){
         ans = min(arr[high], search(arr, low, high - 1));
      }
      return ans;
   }
   int findMin(vector<int>& nums) {
      return search(nums, 0, nums.size() - 1);
   }
};
int main(){
   Solution ob;
   vector<int> v = {4,5,5,5,6,8,2,3,4};
   cout << (ob.findMin(v));
}
Input: [4,5,5,5,6,8,2,3,4]
Output: 2
"Data is king", wrote a famous author to start explaining LINQ. As MSDN suggests, to explain LINQ in simple terms: "LINQ is a general purpose query facility (or feature) to query data. Not just relational data or XML, but all sources of informational data." Over the past few years, we have had several LINQ flavors: LINQ to SQL, LINQ to XML, LINQ to CSV files, LINQ to Text files, etc. All of these target a specific medium (or form) of data. The goal of LINQ is to provide a common simple interface to query data.
Today, data comes in many different formats through a variety of different channels, such as databases, XML, raw text, binary, and RSS feeds over TCP, UDP, HTTP, FTP, etc. 'LINQ to www' is LINQ to query data from complying (REST-like) web sites. To give an example: let's say your favorite car listing website displays thousands of pages of car listings. Using the LINQ2www tool, you can query that web site to get the desired information. For example, you can query cars priced greater than $15,000 but less than $30,000. This means you do not need to browse manually through hundreds of continuous web pages to extract the list of cars that qualify your interest ($15,000 to $30,000). All you need to do is write an appropriate LINQ query to extract this information. The same principle can be applied to pages of financial data, corporate accounts data, a web telephone directory, and what not.
It is best to read the article in an orderly fashion. However, not everyone is in the same situation. So, here is a brief guideline to get what you are looking for quickly:
Prerequisite: To run the demo, you need .NET framework 3.5 installed in your machine. If it is not already installed, you may download it for free from here:
The web spider is a sample WPF application that uses the LINQ2www assembly. This application crawls through the Who's who @ CodeProject, page by page. To see a demo of this application, download the demo project from the first line of this article - run the application. Press the 'Go' button. Give it 2 to 3 minutes to see the flow of data from the CodeProject server to your computer's 3D bar chart. You can observe a continuous update in the status bar at the bottom of the WPF application.
'REpresentational State Transfer' is a style of software architecture for systems such as the World Wide Web. All credits go to the doctoral dissertation of Roy Fielding (Reference 1). LINQ2www is a tool capable of querying REST based (complying) web sites. For example, consider the first page of 'Who's who @ CodeProject':.
Now to get the second page, all we need to do is remove the page number 1 to replace it with page number 2. I.e., pgnum=1 should be replaced with pgnum=2 to get to the next page and so on... This is a RESTful web interface.
Using the above web link, we can browse up to 1000 members from page 1 to page 1000. Now, consider your requirement is to just get the list of Gold members from these 1000 web pages. Manually browsing (traversing) through all the 1000 pages is tiresome and error prone. Writing a small program to automatically perform this operation is a good idea. Generalizing such a program so that it can solve a similar problem can be considered as the next step. How about generalizing to this extent: all you have to do is just write 'two lines of code' to get all gold members from 1000 web pages? This is the power of LINQ. We are specializing LINQ to achieve our 'Two line code' goal, which is called LINQ2www - LINQ 2 World Wide Web. One more important thing to be mentioned here is, we do not check or need a 100% REST compliance web site. LINQ2www can work with REST like web links. The basic requirement for LINQ2www to work is all the pages need to be linked to each other through an href link. Even if it does not comply with REST in other areas, LINQ2www will work.
Understanding how LINQ works is necessary to specialize it to our needs. Let us start from a very simple example. Consider we have a pipe that carries some liquid. We have Green, Red, and Blue color liquids mixed, flowing in this pipe. Someone sends this mixed liquid through this pipe continuously. All you know from your side is, if you open the faucet (tap), you will get this mixed liquid flowing. Consider, you just need the Green color liquid. You prepare a filter that will filter just the Green color liquid. Now when we fix this filter in the faucet (tap), only the Green liquid will start flowing though the pipe. This filter is LINQ and the mixed liquid is raw data. LINQ helps you to extract information from a raw flow of data. Based on the liquid type, you will need a different filter. Similarly, based on the data variety, you will need a different LINQ. That is how we have LINQ 2 SQL, LINQ to XML etc.
Let us do a small code example. Consider we have a list of names. All we need is names that start with 'S'. Let us try to write the one line LINQ code to get what we want.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace TestLinq4
{
class Program
{
static void Main(string[] args)
{
List<string> listOfNames = new List<string>()
{
"Nina", "Kyle", "Steven",
"Joe", "Neal", "Sanjay", "Elijah",
"Steen", "Stech", "Donn",
"Thomas", "Peter", "Steinberg"
};
var Result = from Name in listOfNames where Name.StartsWith("S") select Name;
foreach (string NameReturned in Result)
{
System.Console.WriteLine(NameReturned);
}
}
}
}
Here, all we do is just write a simple LINQ statement to get a query result. When we enumerate the Result using foreach, the query actually gets executed. The data gets filtered by our where condition. Then, if it passes the where condition, it is selected and returned to the NameReturned variable in the foreach.
Result
foreach
where
NameReturned
Let's take a closer look. Several questions may arise when seeing the above code for the first time. One important question may be: where is the where method? And where is the select method, and what's going on? Where is an extension method on the IEnumerable interface. This is a special kind of method called an 'extension method'. As the name suggests, you can extend a class (even a sealed one) by writing static methods. Let us see how to write one soon. Where is an important method, as this is our filter.
First, let us see what an extension method is and how to write one. Then, we will override (specialize) the default where method in the above sample to add an additional feature to it.
Consider a well known .NET class - String. As we know, String is a sealed class. This means, we cannot specialize (derive from) the String class. But we can extend the String class by adding an Extension Method. This extension method will appear as if it is an exposed method of the String class. Let us write one now.
public static class StringExtension
{
public static bool CompareCaseInsensitive(this string strSource, string strTarget)
{
if (string.Compare(strSource, strTarget, true) == 0)
{
return true;
}
return false;
}
}
Note that the extension method is defined inside a static class StringExtension. Also note that the static extension method's first parameter starts with the this keyword, i.e., it is extending the String class. The picture below shows the intellisense when we have the above extension method defined.
After learning about extension methods, it is not difficult to figure out that where is an extension method for the IEnumerable interface. Now, let us see how our LINQ statement is converted, which improves our understanding:
var Result = from Name in listOfNames where Name.StartsWith("S") select Name;
is converted to:
var Result = listOfNames.Where(delegate(string item)
{ return item.StartsWith("S");} ).Select(delegate(string item){ return item; } );
After getting enough background, let us see the challenges in writing LINQ to www (World Wide Web).
Let us see how to use the code. This will help us to understand how some of the challenges are met.
Let us take CodeProject's Who's who web link for our example. Consider our goal is to fetch different types of membership statuses available in CodeProject. The code to perform this query is as below:
Linq2www linq2wwwUrl = new Linq2www("" +
"Membership/Profiles.aspx?mgtid=1&%3bmgm=True" +
"&ml_ob=MemberId&mgm=False&pgnum=1",
"?" +
"mgtid=1&%3bmgm=True&ml_ob=MemberId&mgm=False&pgnum=");
int CancelId = from webItem in linq2wwwUrl where webItem.GetMatchDyn(
@"class=""Member(?<name>.*?)"">(\k<name>)",
this.CallThisMethod) select webItem;
The first line constructor takes two arguments. The first argument is the starting web address link. The next is an optional parameter that tells that when you fetch the first page, search for the next page link which will look like this parameter (i.e., the second parameter).
The next line is a little different LINQ statement compared to regular ones. As you might have assumed, we are using a Regular Expression to extract the desired information from this web site. Another difference is, we pass in a callback method to call with the updates. This means, while traversing the web pages like a web spider/crawler, if it finds anything that matches this Regular Expression, it makes a call to the callback method that is supplied (CallThisMethod). This method will be called continuously until all the pages and their links are visited. However, the user can cancel this query anytime using the return value integer CancelId. The code for doing this is:
linq2wwwUrl.CancelUpdate(cancel1);
So now, we know how challenges (1), (2), and (3) mentioned in the previous section are resolved. We are using a callback method to update the query caller. This will enable the user to cancel the update whenever it is necessary. All we need out of this LINQ query is filtered information, which is received asynchronously. As we do not have any standard language to query HTML data, we use a Regular Expression, which is a powerful tool to query any raw, text like data.
Why is this LINQ a different flavor ? The LINQ2www is a different flavor as we do two unconventional things in this LINQ, which is explained in this section.
The where method is the only method we override in LINQ2www. The purpose of the where method is a little unconventional here. The where method just sets the condition and the callback. The unconventional part is: it is not enumerating through the data. This is because, as you know, complete data is not available at the time of the where method's invocation. However, it sets the necessary information in the class so that the callback will start getting the filtered information. The implementation of the where method is as follows:
public static class LinqExtnsnOvrride
{
    public static int Where<linq2www>(
        this IEnumerable<linq2www> enumLinq2www, Func<linq2www, int> predicate)
    {
        // Reset the enumerator to the first item in the collection.
        IEnumerator<linq2www> enumLinq2 = enumLinq2www.GetEnumerator();
        enumLinq2.MoveNext();
        linq2www Item = enumLinq2.Current;
        return predicate(Item); // To set the Condition and Callback
    }
}
The next unconventional thing the where method does is return an integer. This is the ID that needs to be passed in to stop getting updates in the callback method. The first and second lines reset the enumerator to the first item in the collection. The last line calls the predicate, thereby setting the Regular Expression filter and the callback function.
Let us put together all that we walked through to explain the LINQ2www class. Let us follow the top-down approach.
Linq2www linq2wwwUrl = new Linq2www(WebLink, webLinkTemplate);
When the LINQ2www constructor is called, we create a background thread. The job of this thread is to get the content of the web link. We store it in a multimap (Reference 3). Multimap is a sophisticated dictionary-like collection class that can store key-value pairs. First, we store the weblink and its content in the multimap. Next, we parse (traverse through the contents of) the weblink to find the connecting next page. If the optional second parameter is provided, we use that as a template to find the next page; otherwise, we use the web link to create a connecting template. Once we find the next page, we again store the link and contents in the multimap, then parse through the contents to find the next link. We do this until we are exhausted, with no more *new* links to browse.
Shown below is the second line in our sample explained above. This line actually fetches useful filtered information for us from the whole data. As you will notice, we are passing two parameters to GetMatchDyn. The first parameter is the Regular expression - the filter. The second parameter is the callback. This callback receives the filtered information continuously.
int CancelId = from webItem in linq2wwwUrl where
webItem.GetMatchDyn(MyRegularExpression, this.CallThisMethod) select webItem;
Let us see how we do it. In the previous section, we saw the where method override. Our overridden where is called when the above line is executed. As we saw in the above section, the where method calls GetMatchDyn. This means the where method calls the delegate, which in turn calls GetMatchDyn. GetMatchDyn creates a thread. This thread reads the data stored in the multimap, moving (enumerating) item by item to read each weblink's data. It filters the data using the Regular Expression passed to GetMatchDyn by the caller. Once the Regular Expression matches, it calls the callback method passed in by the user. Remember, this is the second parameter to the GetMatchDyn method.
The above picture explains what we described in this section.
Last but not least, we should provide a method to cancel the LINQ call. As we saw before, since this is a continuous update from HTTP, it can be really time consuming. The user should be able to cancel the update anytime. This can be performed easily by using the return value (integer) from the LINQ call we made. The line below does this:
linq2wwwUrl.CancelUpdate(CancelId);
This simple method is defined as below:
public bool CancelUpdate(int ThreadId)
{
bool retVal = false;
Regex regDet = threadDetails.GetFirstItem(ThreadId);
if (regDet != null)
{
retVal = threadDetails.Remove(ThreadId);
}
lock (this) Monitor.PulseAll(this);
return retVal;
}
All we do is remove the Regular Expression that we stored when the GetMatchDyn method was called previously. Then we trigger the thread created by the GetMatchDyn method. When this thread tries to read the Regular Expression it is tied to, it will get a null value, because we just removed it before triggering the thread. The thread created by the GetMatchDyn method will then close down gracefully, and the callback will stop receiving any more updates.
I would like to hear your feedback. Please leave a detailed message.
Walkthrough: Authoring a Composite Control with Visual C#
In the New Project dialog for Visual C# projects, select the Windows Forms Control Library project template and name the project ctlClockLib. Later steps reference the component as ctlClockLib.ctlClock.
In Solution Explorer, right-click UserControl1.cs, and then click Rename. Change the file name to ctlClock.cs. Right-click ctlClock.cs, and then click View Designer.
In the Toolbox, expand the Common Controls node, and then double-click Label.
A Label control named label1 is added to your control. Next, add a Timer component and set its Enabled property to true.
The Interval property controls the frequency with which the Timer component ticks. Each time timer1 ticks, it runs the code in the timer1_Tick event. The interval represents the number of milliseconds between ticks.
In the Code Editor, add the following property code:
[C#]
private Color colFColor;
private Color colBColor;

public Color ClockForeColor
{
    get { return colFColor; }
    set
    {
        colFColor = value;
        lblDisplay.ForeColor = colFColor;
    }
}

public Color ClockBackColor
{
    get { return colBColor; }
    set
    {
        colBColor = value;
        lblDisplay.BackColor = colBColor;
    }
}
The preceding code makes two custom properties, ClockForeColor and ClockBackColor, available to subsequent users of this control. The get and set statements provide for storage and retrieval of the property value, as well as code to implement functionality appropriate to the property.
On the File menu, click Save All to save the project.
Controls are not stand-alone applications; they must be hosted in a container. To extend ctlClock, add a new Inherited User Control to the project, name it ctlAlarmClock.cs, and then click Add.
The Inheritance Picker dialog box appears.
Under Component Name, double-click ctlClock.
In the Code Editor, locate the public class statement. Note that your control inherits from ctlClockLib.ctlClock. Beneath the opening brace ({) statement, type the following code.
[C#]
private DateTime dteAlarmTime;
private bool blnAlarmSet;
// These properties will be declared as public to allow future
// developers to access them.
public DateTime AlarmTime
{
    get { return dteAlarmTime; }
    set { dteAlarmTime = value; }
}
public bool AlarmSet
{
    get { return blnAlarmSet; }
    set { blnAlarmSet = value; }
}
In the previous procedures, you added properties and a control that will enable alarm functionality in your composite control. In this procedure, you will add code to compare the current time to the alarm time and, if they are the same, flash an alarm. In the Code Editor, locate the private bool blnAlarmSet; statement. Immediately beneath it, add the following statement.
[C#]
private bool blnColorTicker = false;
In the Code Editor, locate the closing brace (}) at the end of the class. Just before the brace, add the following code.
[C#]
protected override void timer1_Tick(object sender, System.EventArgs e)
{
    // Calls the timer1_Tick method of ctlClock.
    base.timer1_Tick(sender, e);
    // Checks to see if the alarm is set.
    if (AlarmSet == false)
        return;
    else
    {
        // If the date, hour, and minute of the alarm time are the same as
        // the current time, flash an alarm.
        if (AlarmTime.Date == DateTime.Now.Date
            && AlarmTime.Hour == DateTime.Now.Hour
            && AlarmTime.Minute == DateTime.Now.Minute)
        {
            // Makes the alarm label visible and toggles its color.
            lblAlarm.Visible = true;
            if (blnColorTicker == false)
            {
                lblAlarm.BackColor = Color.Red;
                blnColorTicker = true;
            }
            else
            {
                lblAlarm.BackColor = Color.Blue;
                blnColorTicker = false;
            }
        }
        else
        {
            // Once the alarm has sounded for a minute, the label is made
            // invisible again.
            lblAlarm.Visible = false;
        }
    }
}
The addition of this code accomplishes several tasks. The override statement directs the control to use this method in place of the method that was inherited from the base control. When this method is called, it calls the method it overrides by invoking the base.timer1_Tick statement, ensuring that all of the functionality incorporated in the original control is reproduced in this control. It then runs additional code to incorporate the alarm functionality: a flashing label control will appear when the alarm goes off. To test the alarm, handle the ValueChanged event of the test form's DateTimePicker in void dtpTest_ValueChanged.
Modify the code so that it resembles the following.
[C#]
private void dtpTest_ValueChanged(object sender, System.EventArgs e)
{
    // Sets the alarm to the time selected in the DateTimePicker.
    ctlAlarmClock1.AlarmTime = dtpTest.Value;
    ctlAlarmClock1.AlarmSet = true;
}
In Solution Explorer, right-click Test, and then click Set as StartUp Project.
On the Debug menu, click Start Debugging.
The test program starts. Note that the current time is updated in the ctlAlarmClock control. When the alarm time is reached, lblAlarm will flash.
Turn off the alarm by clicking btnAlarmOff.
I've just started programming in C#; it's a simple Forms application with a single form.
I have a class written in a separate .cs file in its own namespace.
The rest of the code is in the Form1.cs file.
I am able to create an instance of the my class and use it in Form1.
My Problems...
- 1. Form1 contains an array of type char which I need to use in the object instance.
- 2. I need to access the text field of the form from within a member function of the object instance.
Not sure which code sample to give you here...
I would appreciate any help provided,
thanks:)
RSA_private_encrypt, RSA_public_decrypt - low level signature operations

#include <openssl/rsa.h>

int RSA_private_encrypt(int flen, unsigned char *from, unsigned char *to, RSA *rsa, int padding);
int RSA_public_decrypt(int flen, unsigned char *from, unsigned char *to, RSA *rsa, int padding);
RSA_private_encrypt() returns the size of the signature (i.e., RSA_size(rsa)). RSA_public_decrypt() returns the size of the recovered message digest.
On error, -1 is returned; the error codes can be obtained by ERR_get_error(3).
ERR_get_error(3), RSA_sign(3), RSA_verify(3)
Licensed under the OpenSSL license (the "License"). You may not use this file except in compliance with the License. You can obtain a copy in the file LICENSE in the source distribution or at <>.
https://www.zanteres.com/manpages/RSA_private_encrypt.3ssl.html
Xiao Chen updated HDFS-11410:
-----------------------------
Attachment: HDFS-11410.01.patch
Simple patch 1 attached. Trivial changes.
Pinged one of the initial jiras HDFS-6301, no response so far. Very likely just a bug, or
accident as Andrew said.
> Use the cache when edit logging XAttrOps
> ----------------------------------------
>
> Key: HDFS-11410
> URL:
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Affects Versions: 2.5.0
> Reporter: Xiao Chen
> Assignee: Xiao Chen
> Attachments: HDFS-11410.01.patch
>
>
> [~andrew.wang] recently had a comment on HDFS-10899:
> {quote}
> Looks like we aren't using the op cache in FSEditLog SetXAttrOp / RemoveXAttrOp. I think
this is accidental, could you do some research? Particularly since we'll be doing a lot of
SetXAttrOps, avoiding all that object allocation would be nice. This could be a separate JIRA.
> {quote}
> i.e.
> {code}
> static SetXAttrOp getInstance() {
> return new SetXAttrOp();
> }
> {code}
> v.s.
> {code}
> static AddOp getInstance(OpInstanceCache cache) {
> return (AddOp) cache.get(OP_ADD);
> }
> {code}
> Seems we should fix these non-caching usages.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org
http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-issues/201702.mbox/%3CJIRA.13042757.1487034982000.72380.1487057981717@Atlassian.JIRA%3E
Hi,
I have started working on Rust bindings for Bokeh. I have been able to import Bokeh models into Rust by parsing the json outputs from spec.py, but I need help to understand how to serialize the models into a json output that is compatible with BokehJS.
Thanks
Hi,
[Topic moved to Development category]
Hi @Rodolphe_Conan that’s an exciting endeavor, I personally would love to see Rust bindings for Bokeh. However, the question above is a bit too broad and vague to really offer any concrete answers. Can you narrow it down to a couple of specific questions related to specific steps you want to accomplish? That could help get you started.
Hi, thanks for your response, ok I am going to try to be more specific, now that I do have all Bokeh models in Rust, I wonder how to put them together and produce an html file like in the following example
from bokeh.plotting import figure, show, output_file
output_file("/tmp/out.html")
p = figure()
p.circle([1, 2, 3], [4, 5, 6])
show(p)
@Rodolphe_Conan I am afraid I still consider the question way too broad (it’s really just a re-statement of your first post). I don’t know what you already know and what you don’t, so trying to answer the question as-is risks me wasting a huge amount of effort only to explain things that you might already know. We need to proceed step by step, from small, atomic, narrow questions about specific technical points with well-defined answers.
- “What are the different BokehJS APIs for rendering Bokeh content?”
- “How do those APIs differ?”
- “What is the smallest unit of serialization?”
- “Should default values be included in serialized JSON?”
- “What is the top-level key structure of a Document”
Questions like that are things I can answer.
Ok I see, so let's start with these 2 questions:
- “What is the smallest unit of serialization?”
- “What is the top-level key structure of a Document”
Thank you
What is the smallest unit of serialization

A Document is the smallest meaningful unit of serialization. Although individual Bokeh models have some JSON representation, that representation may also contain references to other models, so it is not in general meaningful to “serialize” a single Bokeh model in isolation. When you add a model as a “root” of a Document you also need to traverse any and all other Bokeh models reachable from that root, and include them in the Document references. Here’s an example of adding a bare plot as a root to a Document:
In [16]: p = figure() In [17]: doc = Document() In [18]: doc.add_root(p) In [19]: doc.to_json()
The resulting output is
{ "roots": { "references": [ { "attributes": { "axis": { "id": "1013" }, "ticker": null }, "id": "1016", "type": "Grid" }, { "attributes": { "renderers": [] }, "id": "1005", "type": "DataRange1d" }, { "attributes": {}, "id": "1014", "type": "BasicTicker" }, { "attributes": { "renderers": [] }, "id": "1007", "type": "DataRange1d" }, { "attributes": { "formatter": { "id": "1040" }, "ticker": { "id": "1018" } }, "id": "1017", "type": "LinearAxis" }, { "attributes": {}, "id": "1040", "type": "BasicTickFormatter" }, { "attributes": { "active_drag": "auto", "active_inspect": "auto", "active_multi": null, "active_scroll": "auto", "active_tap": "auto", "tools": [ { "id": "1021" }, { "id": "1022" }, { "id": "1023" }, { "id": "1024" }, { "id": "1025" }, { "id": "1026" } ] }, "id": "1028", "type": "Toolbar" }, { "attributes": {}, "id": "1022", "type": "WheelZoomTool" }, { "attributes": { "bottom_units": "screen", "fill_alpha": 0.5, "fill_color": "lightgrey", "left_units": "screen", "level": "overlay", "line_alpha": 1, "line_color": "black", "line_dash": [ 4, 4 ], "line_width": 2, "right_units": "screen", "top_units": "screen" }, "id": "1027", "type": "BoxAnnotation" }, { "attributes": { "below": [ { "id": "1013" } ], "center": [ { "id": "1016" }, { "id": "1020" } ], "left": [ { "id": "1017" } ], "title": { "id": "1035" }, "toolbar": { "id": "1028" }, "x_range": { "id": "1005" }, "x_scale": { "id": "1009" }, "y_range": { "id": "1007" }, "y_scale": { "id": "1011" } }, "id": "1004", "subtype": "Figure", "type": "Plot" }, { "attributes": { "overlay": { "id": "1027" } }, "id": "1023", "type": "BoxZoomTool" }, { "attributes": {}, "id": "1009", "type": "LinearScale" }, { "attributes": {}, "id": "1025", "type": "ResetTool" }, { "attributes": { "formatter": { "id": "1038" }, "ticker": { "id": "1014" } }, "id": "1013", "type": "LinearAxis" }, { "attributes": {}, "id": "1024", "type": "SaveTool" }, { "attributes": {}, "id": "1011", "type": "LinearScale" }, { "attributes": { "text": "" 
}, "id": "1035", "type": "Title" }, { "attributes": {}, "id": "1021", "type": "PanTool" }, { "attributes": {}, "id": "1026", "type": "HelpTool" }, { "attributes": {}, "id": "1018", "type": "BasicTicker" }, { "attributes": {}, "id": "1038", "type": "BasicTickFormatter" }, { "attributes": { "axis": { "id": "1017" }, "dimension": 1, "ticker": null }, "id": "1020", "type": "Grid" } ], "root_ids": [ "1004" ] }, "title": "Bokeh Application", "version": "2.3.0dev5-6-g8c193aa5b" }
What is the top-level key structure of a Document
Referring to the above:
{ "roots": { "references": [<all the serialized models>], "root_ids": ["<some model id>"] }, "title": "Some title for the document", "version": "<bokeh version that created it>" }
The references contains a JSON representation of every Bokeh model reachable from the root, where each looks like:
{ "attributes": { "axis": { "id": "1017" }, "dimension": 1, "ticker": null }, "id": "1020", "type": "Grid" }
The above is a JSON repr for an object of type Grid, with id "1020". Every model needs an id value that is unique within the document. Bokeh increments a simple counter by default but it can be anything (e.g. UUIDs if you want). The attributes contain all property values that differ from the property's default value, i.e. all values that have been explicitly set. In this case it has a dimension value of 1, a ticker value of null (None, from Python) and, for the axis, a reference to another model with id "1017". These values were modified by the figure function when it assembled all the pieces of the figure.
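Putting the pieces together: a binding author can assemble this structure with nothing but a JSON library. Here is a minimal plain-Python sketch (stdlib only, bypassing Bokeh entirely) that builds the top-level Document payload in the shape described above — the model types, ids, and attribute values are illustrative:

```python
import json

def make_ref(id_):
    """A reference to another model is just its id."""
    return {"id": id_}

def serialize_document(models, root_ids, title, version):
    """Assemble the top-level Document structure described above.

    `models` is a list of (id, type, attributes) triples covering every
    model reachable from the roots; `attributes` holds only the values
    that differ from that model's defaults.
    """
    references = [
        {"attributes": attrs, "id": id_, "type": type_}
        for (id_, type_, attrs) in models
    ]
    return {
        "roots": {"references": references, "root_ids": root_ids},
        "title": title,
        "version": version,
    }

# A Grid ("1020") that references a LinearAxis ("1017"), mirroring the
# example repr shown above.
models = [
    ("1017", "LinearAxis", {}),
    ("1020", "Grid", {"axis": make_ref("1017"), "dimension": 1, "ticker": None}),
]
doc = serialize_document(models, root_ids=["1020"],
                         title="Bokeh Application", version="2.3.0")
print(json.dumps(doc, sort_keys=True))
```

The same shape falls out of any language: serialize each reachable model as `{attributes, id, type}`, collect them under `roots.references`, and list the root ids separately.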
Thanks, that’s extremely helpful, another question: what is the template for the html output file?
That template is here:
But I am not sure how useful it will be in isolation (also: ignore the comments at the top that appear badly out of date). I think you’ll want to trace code paths starting here at the file_html function, which is the basic function to take Bokeh models and embed them as standalone content in that template:
Thanks, I am generating the bindings from the output of the spec.py script, but it seems some attributes are missing, like the plot attributes in the models Grid or LinearAxis for example — can you confirm?
I’m not sure I understand the question. If you are asking if Grid and LinearAxis should have a .plot property that is a back reference to the plot they are on, the answer is no:
In [5]: p.axis[0].plot --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-5-26506ad36d50> in <module> ----> 1 p.axis[0].plot AttributeError: 'LinearAxis' object has no attribute 'plot'
Alternatively, if you are asking if plots should have properties for grids and axes, the answer is also no. There is a list of renderers that may (or may not) contain axis renderers or grid renderers. The plot.axis is a Python convenience that just returns the list of renderers that happen to be Axis (if any); it is not a Bokeh model property.
https://discourse.bokeh.org/t/rust-bindings-for-bokeh/6701
Python, IronPython and all that...
Contents
Introduction
Central to Resolver is the concept that your spreadsheets are turned into Python code ('generated code') and then executed in order to generate the results.
This means that any 'user code' you add to a spreadsheet is a central part of your spreadsheet, not a bolt-on extra.
It has two important consequences:
- It is possible to use Python and .NET libraries within your spreadsheets. These are both very powerful, and anything your computer can do can be done from inside a spreadsheet.
- Your spreadsheets can be exported as code, and then integrated with other parts of your IT system.
If you want to jump straight into a simple example of creating spreadsheets with user code, then turn to Silly Colors. This uses parts of the .NET framework and the Python standard library to create a multi-coloured spreadsheet. If you would rather find out what these things are before you start using them, then read on...
Python
Writing user code means writing Python code. Fortunately Python is very easy to learn. You can start with very simple loops and functions, and gradually expand your knowledge. It is a fully fledged object oriented language though, with many large systems written entirely in Python.
For useful resources on getting started with Python, see the Useful Links page.
Python is a mature Open Source language that has been around since 1990. It started off life in the Unix world, but is also widely used on Windows.
Python gets used for an enormous range of different tasks, including:
- Games development (Civilization IV uses a lot of Python),
- Web applications (YouTube and Reddit are almost entirely written in Python)
- System administration (several Linux distributions make heavy use of Python)
- Embedded systems (including the One Laptop Per Child project)
- Science (particularly bio-informatics and language analysis)
- Engineering tasks (Seagate automate their hard drive testing with Python)
- GIS services (Python is the scripting languages for several commercial GIS systems)
- Computer graphics and animation (Sony Imageworks script their entire image processing pipeline with Python)
- Writing desktop applications (Bittorrent is one of the most famous Python desktop applications)
The Python 'philosophy' emphasizes clarity and readability of code, whilst maximising flexibility for the programmer.
IronPython
IronPython is a Microsoft implementation of the Python programming language, to run on the .NET framework. It was originally developed by Jim Hugunin (who also created Jython, Python for the Java Virtual Machine) as an experiment. It has since been taken on by Microsoft, and turned into a fully fledged member of the .NET family.
Resolver is written in IronPython, and any user code you write is IronPython (which is really just the Microsoft flavour of Python). IronPython code can take advantage of many of the great number of Python libraries that there are available. You can also use the .NET framework directly within IronPython code.
The Python Standard Library
The most important Python libraries you can use are what is known as the 'Python Standard Library'. This is a wide set of modules for tackling common programming tasks, and has earned Python the reputation of coming 'batteries included'.
Resolver includes a copy of the Python Standard Library, so you can import and use these functions and classes within user code.
Not all of the standard library works with IronPython: some of it relies on C code (whereas IronPython is written in C#) or uses platform-dependent features that aren't available on IronPython. However, the IronPython team have put a great deal of effort into making sure that as much of the standard library as possible does work.
Where there are bits that don't work, there may be a wrapper around a .NET library to provide an identical equivalent and there will certainly be a .NET library that does a similar job anyway.
The .NET Framework
IronPython runs on top of the .NET framework. The .NET framework is a programming 'system' created by Microsoft for writing everything from desktop applications and programmers' libraries to web applications and web services. It is the most common framework for writing modern business applications. The user interface for Resolver is written using the Windows Forms library from .NET.
The .NET framework consists of two parts:
- The CLR: The Common Language Runtime, which actually runs your programmes and includes a fast JIT (Just in Time Compiler), built in security features and a lot more besides.
- The Framework Classes: Like the Python Standard Library, the .NET framework comes with its own large assortment of tools for tackling programming tasks.
As you can use the .NET classes easily within IronPython code, IronPython is doubly blessed with libraries and modules to use. The IronPython Cookbook contains many good examples of using .NET classes from IronPython.
Resolver Library
Of course, in order to write user code that does something useful, you need to be able to manipulate the spreadsheet. The way Resolver does this is by giving you access to the spreadsheet and its components as 'objects' that you can do things with.
All the Resolver spreadsheet objects are defined in Python files, contained in the Library directory in your Resolver installation. The usual place for this is C:\Program Files\Resolver\Library, unless you have installed Resolver somewhere odd! We refer to these files as 'the Library code'.
The major spreadsheet objects defined in the Library code are:
Workbook
This represents the whole spreadsheet, and as well as giving you access to all the worksheets it provides methods for adding new ones.
Worksheet
This represents individual worksheets in your spreadsheet. It gives you access to the rows, columns and cells that it contains, and allows you to set attributes ('traits') for the whole worksheet.
Cell
This represents individual cells in a spreadsheet. You can ask cells for their value, or set the value, and also set all the traits (like BackColor, Bold and friends).
Row
This represents a single row in a worksheet. It provides access to all the cells in the row, as well as allowing you to set attributes for the whole row.
Column
See Row, but substitute the word column instead...
CellRange
Cell ranges provide a view into a specified (rectangular) area of a worksheet. They allow you to perform operations with just a part of a worksheet, including formatting them. They are a nice way of working with (including presenting) tables of data that are smaller than a whole worksheet.
If you are curious, you can look at the source code for these files to get an idea of how things work. However, if you change anything and it breaks then you are on your own [1]!
There is lots more in the library of course, including useful functions, image worksheets (worksheets that display an image instead of containing cells) and so on. The other articles and examples on this site will hopefully give you ideas about some of the things you can do with them.
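To make the object hierarchy above concrete, here is a purely illustrative toy model in plain Python. This is not the actual Library code — the class and attribute names are loosely modelled on the descriptions above, and the real Resolver objects differ — but it shows the style of user code the article is describing:

```python
class Cell:
    """Toy stand-in for Resolver's Cell: a value plus formatting traits."""
    def __init__(self):
        self.Value = None
        self.BackColor = None
        self.Bold = False

class Worksheet:
    """Toy stand-in for a Worksheet: cells addressed by (column, row)."""
    def __init__(self, name):
        self.Name = name
        self._cells = {}

    def Cell(self, col, row):
        # Create cells lazily, as spreadsheets effectively do.
        return self._cells.setdefault((col, row), Cell())

class Workbook:
    """Toy stand-in for a Workbook: a collection of worksheets."""
    def __init__(self):
        self.Worksheets = []

    def AddWorksheet(self, name):
        ws = Worksheet(name)
        self.Worksheets.append(ws)
        return ws

# "User code" in the style the article describes: fill a column and
# bold the header cell.
workbook = Workbook()
sheet = workbook.AddWorksheet("Sheet1")
for row in range(1, 4):
    cell = sheet.Cell(1, row)
    cell.Value = row * 10
    cell.Bold = (row == 1)

print([sheet.Cell(1, r).Value for r in range(1, 4)])  # → [10, 20, 30]
```

The point is the shape of the API: you walk from workbook to worksheet to cell, then read or set values and traits directly on the objects.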
Developing Your Own Spreadsheet Code
As you create functions and classes for use with your spreadsheets, it can be unwieldy to include the code within each spreadsheet. Things are much more manageable if they live in separate Python files (modules).
Resolver adds the directory containing the current spreadsheet to the 'path'. This means that it can import from modules contained in the same directory as the spreadsheet, which makes it easier to distribute spreadsheets with the Python files that they depend on.
If you create modules that you use in several spreadsheets, the Library directory is currently the best place to put them. That means that you can import from them into your spreadsheets, with code like the following:
from Library.MyModule import MyFunction, MyClass
Eventually we will provide a user directory for storing your Resolver modules (at which point I have to remember to update this document).
Last edited Fri Feb 15 13:45:04 2008.
http://www.resolverhacks.net/user_code_and_python.html
Pippo - Micro Java Web Framework
After a period of research and study of other micro frameworks (java and non java - javascript, python) I created Pippo. It's instructive to refresh your knowledge with HTTP/HTTPS concepts and to be cutting edge (Servlet 3.x) :) Also this framework has an educational role for my younger colleagues.
Trying to create an open source micro Java web framework that is as simple to learn as it is to use was the first thing taken into account when Pippo began its journey.
From the beginning I wanted this to be understandable and hackable. This means that you need to read very little (some example code) to produce something useful, and you can contribute to the framework in a short time. You are a busy developer; you want to accomplish a task, not read books.
I am for transparency. I don’t want to hide the Request-Response nature of the HTTP protocol and to add unnecessary layers that complicate things. We play with solid concepts like: Application, Request, Response, Route, Router, RouteHandler, RouteContext, RouteDispatcher.
Pippo can be used in small and medium applications and also in applications based on micro services architecture.
I believe in simplicity and I tried to develop this framework with these words on my mind.
The core is small (around 100k, with a single tiny dependency slf4j-api) and we intend to keep it as small/simple as possible and to push new functionalities in pippo modules and third-party repositories/modules. Still, it comes with many features and allows the addition of new features.
The framework comes with many useful modules (Spring, Guice, Metrics, Session cookie, Controller, Content type engines - Json,Xml, Yaml, Text) and many demo applications.
I think that Pippo does a great job in mixing server-generated pages with RESTful APIs. We have a demo that shows Pippo's AngularJS integration.
Enough with the introduction. It’s time for “show me the code”.
See below the classic “Hello World” in Pippo using the embedded web server:
public class HelloWorld { public static void main(String[] args) { Pippo pippo = new Pippo(); pippo.getApplication().GET("/", (routeContext) -> routeContext.send("Hello World!")); pippo.start(); } }
I will show you a more complex example. We split our application in two parts for a better readability.
First we must create a BasicApplication (extends Application) and add some routes:
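The BasicApplication listing appears to have been mangled during extraction (only a trailing fragment survives). Based on Pippo's public API, such a class looks roughly like the sketch below — the route paths and messages are illustrative, not the article's originals:

```java
public class BasicApplication extends Application {

    @Override
    protected void onInit() {
        // send 'Hello' as response for GET /
        GET("/", routeContext -> routeContext.send("Hello"));

        // bind a path parameter and echo it back
        GET("/hello/{name}", routeContext -> {
            String name = routeContext.getParameter("name").toString();
            routeContext.send("Hello " + name + "!");
        });
    }

}
```

Routes are registered inside onInit(), and each handler receives the RouteContext from which it can read parameters and write the response.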
public class Contact { private int id; private String name; private String phone; private String address; // getters and setters }
public class BasicDemo { public static void main(String[] args) { Pippo pippo = new Pippo(new BasicApplication()); pippo.start(); } }
https://dzone.com/articles/pippo-micro-java-web-framework
6.3. Optimisation (code improvement)¶
The -O* options specify convenient “packages” of optimisation flags; the -f* options described later on specify individual optimisations to be turned on/off; the -m* options specify machine-specific optimisations to be turned on/off.
Most of these options are boolean and have options to turn them both “on” and “off” (beginning with the prefix no-). For instance, while -fspecialise enables specialisation, -fno-specialise disables it. When multiple flags for the same option appear in the command-line they are evaluated from left to right. For instance, -fno-specialise -fspecialise will enable specialisation.
It is important to note that the -O* flags are roughly equivalent to combinations of -f* flags. For this reason, the effect of the -O* and -f* flags is dependent upon the order in which they occur on the command line.
For instance, take the example of -fno-specialise -O1. Despite the -fno-specialise appearing in the command line, specialisation will still be enabled. This is the case as -O1 implies -fspecialise, overriding the previous flag. By contrast, -O1 -fno-specialise will compile without specialisation, as one would expect.
6.3.1. -O*: convenient “packages” of optimisation flags¶

No -O*-type option specified: this is taken to mean “Please compile quickly; I’m not over-bothered about compiled-code quality.” So, for example: ghc -c Foo.hs
Allow constant folding in case expressions that scrutinise some primops: For example,
case x `minusWord#` 10## of 10## -> e1 20## -> e2 v -> e3
Is transformed to,
case x of 20## -> e1 30## -> e2 _ -> let v = x `minusWord#` 10## in e3
This optimisation moves let bindings closer to their use site. The benefit here is that this may avoid unnecessary allocation if the branch the let is now on is never executed. It also enables other optimisation passes to work more effectively as they have more information locally.
This optimisation isn’t always beneficial though (so GHC applies some heuristics to decide when to apply it). The details get complicated but a simple example is that it is often beneficial to move let bindings outwards so that multiple let bindings can be grouped into a larger single let binding, effectively batching their allocation and helping the garbage collector and allocator.
-ffull-laziness¶

Although GHC’s full-laziness optimisation does enable some transformations which would be performed by a fully lazy implementation (such as extracting repeated computations from loops), these transformations are not applied consistently, so don’t rely on them.
-fmax-inline-alloc-size=⟨n⟩¶

Set the maximum size of inline array allocations to n bytes. GHC will allocate non-pinned arrays of statically known size in the current nursery block if they’re no bigger than n bytes, ignoring GC overheap. This value should be quite a bit smaller than the block size (typically: 4096).
-fomit-yields¶
Tells GHC to omit heap checks when no allocation is being performed. While this improves binary sizes by about 5%, it also means that threads run in tight non-allocating loops will not get preempted in a timely fashion. If it is important to always be able to interrupt such threads, you should turn this optimization off. Consider also recompiling all libraries with this optimization turned off, if you need to guarantee interruptibility.
-fregs-graph¶

Only applies in combination with the native code generator. Use the graph colouring register allocator for register allocation in the native code generator. By default, GHC uses a simpler, faster linear register allocator. The downside being that the linear register allocator usually generates worse code.
Note that the graph colouring allocator is a bit experimental and may fail when faced with code with high register pressure #8657.
GHC’s optimiser can diverge if you write rewrite rules (Rewrite rules) that don’t terminate, or (less satisfactorily) if you code up recursion through data types (Bugs in GHC).
-fspec-constr¶

Turn on call-pattern specialisation; see Call-pattern specialisation for Haskell programs.
This optimisation specializes recursive functions according to their argument “shapes”. This is best explained by example so consider:
last :: [a] -> a last [] = error "last" last (x : []) = x last (x : xs) = last xs
In this code, once we pass the initial check for an empty list we know that in the recursive case this pattern match is redundant. As such, -fspec-constr will transform the above code to:
last :: [a] -> a last [] = error "last" last (x : xs) = last' x xs where last' x [] = x last' x (y : ys) = last' y ys
As well as avoiding unnecessary pattern matching, this also helps avoid unnecessary allocation. It applies when an argument is strict in the recursive call to itself but not on the initial entry. A strict recursive branch of the function is created, similar to the above example.
It is also possible for library writers to instruct GHC to perform call-pattern specialisation extremely aggressively. This is necessary for some highly optimized libraries, where we may want to specialize regardless of the number of specialisations, or the size of the code. As an example, consider a simplified use-case from the vector library:
import GHC.Types (SPEC(..)) foldl :: (a -> b -> a) -> a -> Stream b -> a {-# INLINE foldl #-} foldl f z (Stream step s _) = foldl_loop SPEC z s where foldl_loop !sPEC z s = case step s of Yield x s' -> foldl_loop sPEC (f z x) s' Skip -> foldl_loop sPEC z s' Done -> z
Here, after GHC inlines the body of foldl to a call site, it will perform call-pattern specialisation very aggressively on foldl_loop due to the use of SPEC in the argument of the loop body.
-fspec-constr-keen¶

If this flag is on, call-pattern specialisation will specialise a call (f (Just x)) with an explicit constructor argument, even if the argument is not scrutinised in the body of the function. This is sometimes beneficial; e.g. the argument might be given to some other function that can itself be specialised.
-flate-specialise¶
-fsolve-constant-dicts¶
When solving constraints, try to eagerly solve super classes using available dictionaries.
For example:
class M a b where m :: a -> b type C a b = (Num a, M a b) f :: C Int b => b -> Int -> Int f _ x = x + 1
The body of f requires a Num Int instance. We could solve this constraint from the context because we have C Int b and that provides us a solution for Num Int. However, we can often produce much better code by directly solving for an available Num Int dictionary we might have at hand. This removes potentially many layers of indirection and crucially allows other optimisations to fire as the dictionary will be statically known and selector functions can be inlined.
The optimisation also works for GADTs which bind dictionaries. If we statically know which class dictionary we need then we will solve it directly rather than indirectly using the one passed in at run time.
-fstatic-argument-transformation¶
Turn on the static argument transformation, which turns a recursive function into a non-recursive one with a local recursive loop. See Chapter 7 of Andre Santos’s PhD thesis.
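As an illustration (this example is not taken from the GHC manual), the transformation applied to a simple map-like function looks roughly like this — the static argument f is no longer passed on every recursive call:

```haskell
-- Before: 'f' never changes, yet it is passed on every recursive call.
myMap :: (a -> b) -> [a] -> [b]
myMap f []     = []
myMap f (x:xs) = f x : myMap f xs

-- After the static argument transformation: a non-recursive wrapper
-- around a local recursive loop that captures 'f' from its environment.
myMap' :: (a -> b) -> [a] -> [b]
myMap' f = go
  where
    go []     = []
    go (x:xs) = f x : go xs
```

With the static argument captured in the closure, the loop carries fewer arguments and the wrapper becomes small enough to inline at call sites.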
-fstg
Switch on the strictness analyser. The implementation is described in the paper Theory and Practice of Demand Analysis in Haskell.

-funbox-small-strict-fields¶

This option applies the UNPACK pragma (see UNPACK pragma) to every strict constructor field that fulfils the size restriction.
For example, the constructor fields in the following data types
data A = A !Int data B = B !A newtype C = C B data D = D !C
would all be represented by a single
Int#(see Unboxed types and primitive operations) value with
-funbox-small-strict-fields.
-funfolding-creation-threshold=⟨n⟩¶

Governs the maximum size that GHC will allow a function unfolding to be. (An unfolding has a “size” that reflects the cost in terms of “code bloat” of expanding (aka inlining) that unfolding at a call site. A bigger function would be assigned a bigger cost.)
Consequences:
-
-funfolding-use-threshold=⟨n⟩¶

This is the magic cut-off figure for unfolding (aka inlining): below this size, a function definition will be unfolded at the call-site, any bigger and it won’t. The size computed for a function depends on two things: the actual size of the expression minus any discounts that apply depending on the context into which the expression is to be inlined.
The difference between this and -funfolding-creation-threshold=⟨n⟩ is that this one determines if a function definition will be inlined at a call site. The other option determines if a function definition will be kept around at all for potential inlining.
-fworker-wrapper¶
Enable the worker-wrapper transformation after a strictness analysis pass. Implied by -O, and by -fstrictness. Disabled by -fno-strictness. Enabling -fworker-wrapper while strictness analysis is disabled (by -fno-strictness) has no effect.
https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/using-optimisation.html
Getopt::Modular - Modular access to Getopt::Long
version 0.13
Perhaps a little code snippet.
use Getopt::Modular; Getopt::Modular->acceptParam( foo => { default => 3, spec => '=s', validate => sub { 3 <= $_ && $_ <= determine_max_foo(); } } ); Getopt::Modular->parseArgs(); my $foo = Getopt::Modular->getOpt('foo');
There are a few goals in this module. The first is to find a way to allow a bunch of custom modules to specify what options they want to take. This allows you to reuse modules in multiple environments (applications) without having to repeat a bunch of code for handling silly things like the parameters themselves, their defaults, validation, etc. You also don't need to always be aware of what parameters a module may take if it merely grabs them from the global environment.
I find I'm reusing modules that should be user-configurable (via the commandline) in multiple applications a lot. By separating this out, I can just say "use Module;" and suddenly my new application takes all the parameters that Module requires (though I can modify this on a case-by- case basis in my application). This allows me to keep the information about a variable in close proximity to its use (i.e., the same file).
There is a lot of information here that otherwise would need to be handled with special code. This can greatly simplify things:
Because the same parameters are used in multiple applications with the same meaning, spelling, valid values, etc., it makes all your applications consistent and thus easy to learn together.
The online help is a big challenge in any application. This module will handle the help for your parameters by using what is provided to it from each module. Again, the help for a parameter will be the same in all your applications, maintaining consistency.
Further, the help will be right beside the parameter. No more looking through hundreds or thousands of lines of pod and code trying to match up parameters and help, wondering if you missed something. Now you only have to address about 5-10 lines of code at a time wondering if you missed something.
Defaults right beside the parameter. Again, you only need to address 5-10 lines of code to look for parameter and its default. They aren't separated any longer. Now, it's true that you don't necessarily need to have defaults far removed with Getopt::Long, but that really does depend on what you're doing.
Further, the defaults can be dynamic. That means you can put in a code reference to determine the default. Your default may depend on other parameters, or it may depend on external environment (Is the destination directory writable? What is the current hostname? What time is it?). You can grab your default from a webserver from another continent (not recommended). It doesn't matter. But you can have that code right there with the parameter, making it easy to compartmentalise.
You do not need to have dynamic defaults. Some would argue that dynamic defaults make applications more difficult for the user to know what will happen. Not only do I think that good dynamic defaults can help the application Do The Right Thing, but that the developer of the application should be able to choose, thus defaults can be dynamic, even if that is not necessarily useful to your application.
In one application, my goal was to minimise any requirement to pass in parameters, thus having defaults that made sense, but to Do The Right Thing, which was usually different between different environments. As one example, a flag to specify mod_perl vs FastCGI vs CGI could be:
'cgi-style' => {
    default => sub {
        if (detect_mod_perl()) {
            return 'mod_perl';
        }
        elsif (detect_fastcgi()) {
            return 'fastcgi';
        }
        else {
            return 'cgi';
        }
    },
},
This would Do The Right Thing, but you can override it during testing with a simple command line parameter.
Like everything above, the validation of a parameter is right beside the parameter, making it easy to address the entirety of a parameter all in a single screen (usually much less) of code.
Validation is also automatically run against both the default (same idea as having tests for your perl modules: sanity test that your default is valid) when no parameter is given, and any programmatic changes to a value. Without this, I was always forgetting to validate my option changes. This automates that.
All this, the power of Getopt::Long, and huge thanks from whomever inherits your code for keeping everything about --foo in a single place.
The downside is that you need to ensure all modules that may require commandline parameters are loaded before you actually parse the commandline. For me, this has meant that my test harness needs to either ask for the module to test via environment variable or needs to pre-parse the commandline (kind of defeating the purpose of the module). I've opted for checking for the module via $ENV{MODULE}, loading it, and then parsing the commandline.
Also, another downside is that parameters are not positional. That is, --foo 3 --bar 5 is the same as --bar 5 --foo 3. The vast majority of software seems to agree that these are the same.
As the module is intended to be used as a singleton (most of the time), and all methods are class (not object) methods, there really isn't much to import. However, typing out "Getopt::Modular->getOpt" all the time can be cumbersome. So a few pieces of syntactical sugar are provided. Note that as sugar can be bad for you, these are made optional.
By specifying -namespace => "GM", you can abbreviate all class calls from Getopt::Modular to simply GM. Another alternative is to simply create your own subclass of Getopt::Modular with a simple, short name, and use that.
This only has to be done once per application.
This will import getOpt as a simple function (not a class method) into your namespace. This can be done for any namespace that needs the getOpt function imported.
Arguably, more could be added. However, as most of the calls into this module will be getting (not setting, etc.), this is seen as the biggest sugar for least setup.
Construct a new options object. If you just need a single, global options object, you don't need to call this. By default, all methods can be called as package functions, automatically instantiating a default global object.
Takes as parameters all modes accepted by setMode, as well as a 'global' mode to specify that this newly-created options object should become the global object, even if a global object already exists.
Note that if no global object exists, the first call to new will create it.
Overridable method for initialisation. Called during object creation to allow default parameters to be set up prior to any other module adding parameters.
Default action is to call $self->setMode(@_), though normally you'd set any mode(s) in your own init anyway.
Sets option modes.
Currently, the only supported mode is strict.

In strict mode, no one is allowed to request an option that doesn't exist. This will catch typos. However, if you have options that may not exist in this particular program but that may get requested by one of your other modules, this may cause problems in that your code may die unexpectedly.
Since this is a key feature to this option approach, the default is not strict. If you always knew all your options up front, you could just define them and be done with it. But then you would likely be able to just go with Getopt::Long anyway.
Sets the names used by getHelp and getHelpRaw for boolean values. When your user checks the help for your application, we display the default or current values - but "0" and "1" don't make any sense for booleans for users. So we, by default, use "on" and "off". You can change this default. You can further override it on a parameter-by-parameter basis.
Pass in two strings: the off or false value, and the on or true value. (Mnemonic: index 0 is false, index 1 is true.)
Set up to accept parameters. All parameters will be passed to Getopt::Long for actual parsing.
e.g.,
Getopt::Modular->acceptParam(
    'fullname' => {
        aliases  => [ 'f', 'fn' ],
        spec     => '=s@',   # see Getopt::Long for argument specification
        help     => 'use fullname to do blah...',
        default  => 'baz',
        validate => sub {
            # verify that the value passed in is ok
        },
    },
);
You can pass in more than one parameter at a time.
Note that order matters. That is, the order that parameters are told to Getopt::Modular is the same order that parameters will be validated when accepted from the commandline, regardless of the order the user passes them in. If this is no good to you, then you may need to find another method of handling arguments. If one parameter depends on another, e.g., for the default or validation, be sure to use the module that declares that parameter prior to calling acceptParam, to ensure that the other parameter will be registered first and thus parsed/handled first.
The parameter name is given separately. Note that whatever this is will be the name used when you retrieve the option. I suggest you use the longest name here to keep the rest of your code readable, but you can use the shortest name or whatever you want.
Acceptable options are:
In Getopt::Long, these would be done like this:
'fullname|f|fn'
Here, we separate them out to make them easier to read. They are combined back into a single string for Getopt::Long. Optionally, you can simply provide 'fullname|f|fn' as the parameter name, and it will be split apart. In this case, the name used to retrieve the value will be the first string given.
This is the part that tells Getopt::Long what types to accept. This can be a quick check against what can be accepted (numeric, string, boolean) or may be more informative (such as taking a list). While this is mostly used to pass in to Getopt::Long, it is also used for context in the help option, or in returning options back to whoever needs them, such as knowing whether the given values can be a list, or if it's simply a boolean.
This is either a string, or a code ref that returns a string to display to the user for help. The reason why a code ref is allowed is in case the help string is dynamic, based on the parameters that are given. For example, you may want to provide different help for the current flag based on the value of some other flag.
If this is a code ref, it is not passed any parameters, and $_ is not set reliably.
This is an array reference with the two values of boolean you want to use. It overrides the global strings. e.g., [ qw(false true) ]. The unset value is first (mnemonic: index 0 is false, 1 is true).
These strings are only used if this option is a boolean, and only in the help output.
This is either a scalar, an array ref (if spec includes @), a hash ref (if spec includes %), or a code ref that returns the appropriate type. A code ref can provide the opportunity to change the default for a given parameter based on the values of other parameters. Note that you can only rely on the values of parameters that have already been validated, i.e., parameters that were given to acceptParam earlier than this one. That's because ones given later would not have had their values set from the command line yet.
This is checked/called only once, maximum, per process, as once the default is retrieved, it is stored as if it were set by the user via the command line. It may be called either as part of the help display, or it may be called the first time the code requests the value of this parameter. If the current code path does not check this value, the default will not be checked or called even if the parameter is not passed in on the command line.
If this is a code ref, it is not passed any parameters, and $_ is not set reliably.
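The "checked/called only once, maximum" behaviour above is essentially lazy memoisation of the default. As a hedged illustration only (Getopt::Modular is Perl; the class and method names below are invented for this sketch and are not part of the module), the same idea in Java looks like:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Minimal sketch of a lazily-evaluated, memoised default: the supplier
// runs at most once, and its result is then stored as if the user had
// set the value explicitly.
class LazyDefaults {
    private final Map<String, Supplier<Object>> defaults = new HashMap<>();
    private final Map<String, Object> values = new HashMap<>();

    void acceptParam(String name, Supplier<Object> defaultSupplier) {
        defaults.put(name, defaultSupplier);
    }

    Object getOpt(String name) {
        // computeIfAbsent invokes the default only on the first lookup;
        // later lookups return the cached value without re-running it.
        return values.computeIfAbsent(name, k -> defaults.get(k).get());
    }
}
```

A second call to getOpt returns the cached value without re-running the supplier, matching the "once, maximum, per process" rule described above.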
This is a code ref that validates the parameter being passed in against not only valid values, but the current state of the parameters. This includes validation of the default value.
You can use this callback to ensure that the current values are allowed given all the parameters validated so far. That is, you can call getOpt on any previous parameter to check that their values make sense given the current value. If they don't, simply die with the error message. Do not call exit, because this is called in an eval block for displaying help, and it's perfectly reasonable that a user requests help when some values are invalid.
The value(s) being validated are passed in via $_, which may be a reference if the type is an array or hash.
You may throw an exception in case of error, or you can simply return false and a generic exception will be thrown on your behalf. Obviously throwing your own exception with a useful error message for the user is the better choice.
If this key is not present, then anything Getopt::Long accepts (due to the specification) will be accepted as valid.
If the list of valid values is limited and finite, it may be easier to just specify them. Then Getopt::Modular can verify the value provided is in the list. It can also use the list in the help.
This parameter needs to be either an array ref, or a CODE ref that generates the list (lazy). Note that the CODE ref will only be called once, so don't count on it being dynamic, too.
If this is set to a true value, then during parameter validation, this option will always be set, either via the command line, or via checking/calling default (which will then be validated). The purpose of this is to ensure the validate code is called during the parsing of arguments even if the parameter was not passed in on the command line. If you have no default and your validate rejects an empty value, this can, in effect, make the parameter mandatory for the user.
If this is set to a true value, then getHelp and getHelpWrap, but not getHelpRaw, will not return this item in their output. Useful for debugging or other "internal" parameters.
Sometimes you may load a module that has a parameter, but in this particular case, you don't want the user to be able to specify it. Either you want the default to always be used, or you want to set it to something explicitly. You can set the parameter to be "un"accepted, thereby eliminating it from the list of options the user can pass in.
However, this will not remove it from the list that Getopt::Modular will recognise inside the code. That is, Getopt::Modular->getOpt() will still accept that parameter, and setOpt will still allow you to set it programmatically.
To re-accept an unaccepted parameter, simply call acceptParam, passing in the parameter name and an empty hash of options, and all the old values will be used.
Once all parameters have been accepted (and, possibly, unaccepted), you must call parseArgs to perform the actual parsing.
Optionally, if you pass in a hash ref, it will be populated with every parameter. This is intended to provide a stop-gap for migration from Getopt::Long::GetOptions wherein you can provide your options hash and use that directly.
GM->parseArgs(\my %opts);
The downside to this is that it will determine all values during parsing rather than deferring until the value is actually required. Most of the time, this will be okay, but if some defaults take a long time to resolve or validate, e.g., network activities such as looking up users via LDAP, requesting a value from a wiki page, or even just reading a file over NFS, sshfs, Samba, or similar, that time will be wasted if the value isn't actually required during this execution based on other parameters.
Retrieve the desired option. This will "set" any option that has not been retrieved before, and was not on the command line, by calling the default.
If you need to know the difference between an implicit default and an explicit default, you need to do that in your default code. That said, you should think twice about that: is it intuitive to the user that there should be a difference between "--foo 3" and not specifying --foo at all when the default is 3?
Programmatic changing of options. This should not be done until after the options have been parsed: defaults are set through the default flag, not by setting the option first.
Note that this will pass the value through the validation code, if any, so be sure you set the values to something that make sense. Will throw an exception if the value cannot be set, e.g., it is invalid.
This function will go through all the parameters and construct a list of hashes for constructing your own help. It's also the internal function used by getHelp to create its help screen.
Each hash has the following keys:
Array ref of parameter names. This is what the user passes in, e.g., "-f" or "--foo".
The string associated with the parameter (if this was a code ref, the code is called, and this is the return from there).
If there is a default (that doesn't die when validated), or if the value was already on the command line, that value. If the default does die, then this key will be absent (i.e., no default, or mandatory, or however you want to interpret this).
Returns a string representation of the above raw help. If you need to translate extra strings, an extra hash-ref of callbacks will be used. For example:
GM->getHelp({
    current_value => sub {
        lookup_string("Current value: '[_1]'", shift // '');
    },
    # only needed if you use the valid_values key at the moment, but
    # could be extended later.
    valid_values  => sub {
        lookup_string("Valid values: '[_1]'", join ',', @_);
    },
});
Callbacks:
Receives the current value (may be undef).
Receives all valid values.
Similar to getHelp, this uses Text::WrapI18N, if available, otherwise Text::Wrap, to automatically wrap the help text, making it easier to write.
Default screen width is 80 - you can pass in the columns if you prefer.
A second parameter is the same as getHelp above with callbacks for translations.
Examples:
print GM->getHelpWrap(70, { ... });  # specify cols and callbacks
print GM->getHelpWrap({ ... });      # implicit cols (80), explicit callbacks
print GM->getHelpWrap(70);           # implicit cols, default English text
print GM->getHelpWrap();             # implicit all
Various exceptions can be thrown, of either the Getopt::Modular::Exception or Getopt::Modular::Internal type. All exceptions have a "type" field which you can retrieve with the ->type method (see Exception::Class). This is intended to facilitate translations: rather than using the exception message contained in the object, you can substitute your own translated text.
Exception types:
Internal error: an option was used, for example as one of the aliases, that didn't resolve. I don't think this should happen.
Getopt::Long returned a failure. The warnings produced by Getopt::Long have been captured into the warnings of this exception ($e->warnings), but they are likely also English-only.
getOpt didn't get any parameters. Probably doesn't need translating unless you are doing something odd (but has a type so you can do something odd).
The valid_values key for an option wasn't either an array ref or a code ref.
Strict mode is on, and you asked getOpt for an option that G::M doesn't know about.
Called setOpt on an integer value (types +, i, or o), without giving an integer.
Called setOpt on a real value (type f), without giving a number.
The validation for this value failed. The option and value fields are filled in.
When calling setOpt, trying to set a value of the wrong type (a hash reference to a list, for example).
Darin McBride, <dmcbride at cpan.org>
Please report any bugs or feature requests to
bug-getopt-modular at rt.cpan.org, or through the web interface at. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes.
You can find documentation for this module with the perldoc command.
perldoc Getopt::Modular
You can also look for information at:
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
Darin McBride <dmcbride@cpan.org>
This software is copyright (c) 2014 by Darin McBride.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
This robot was a collaboration between my daughter and me.
The idea was she could use flowchart software to program it, however that never really happened.
The design was based on an earlier and slightly larger model I built.
I had assisted the local school in a challenge and felt the 'feelers' used for sensing a wall were too delicate; they bent if they hit too hard or got caught, so I worked on a bumper system.
Version 1.0 on the left and ver 0.9 HaloBOT on the right.
A second angled plate was fitted and the battery pack velcros to this.
As the drawing shows the shape of the bumper was made to help prevent it getting stuck.
HaloBOT was essentially Ver 2.0 using commonly available parts (board), motors and wheels.
It uses a castor underneath that glides on the surface to act as a nose wheel.
The design was refined by Hayley and once built she added the graphical touch to it.
HaloBOT was the name she gave it.
Bumper
The design of the front bumper is such that it can detect three different touches, using two switches.
This shows the bumper before the triangle shaped clearance holes were filled.
It also shows a chip in the corner which was 'removed' by some rounding.
Pressure on the left side operates the left bumper switch. Pressure on the right side operates the right bumper switch.
However pressure directly onto the front of the bumper will operate both switches.
This is achieved by making the mount float (the screws have a lock nut on the bottom) and by the 45-degree angled surface that the micro-switches contact.
The spring action inside the micro-switch provides the necessary centering action without any additional springs.
The use of acrylic also means it is non-marking (or hard to isolate what caused those marks), and the shape means it can push its way past if you get the programming wrong.
Plan
You should be able to enlarge this using the measurements as a guide, and print it full size to use as a template. Stick it down onto your material of choice, and cut out.
Wheels
Tamiya provided the wheels, and I discovered that these are one of the 'stickiest' off the shelf, meaning it has the best traction if your motors are up to it.
The Tamiya gearbox allows for multiple ratios and is cheap and robust, although not very good in dirty situations due to its open construction.
Update: my software code included this note: Tamiya Dual Gearbox at 114.7:1 ratio (Type C) and 56mm wheels.
The gearbox configuration changes depending on which ratio you choose, so multiple mounting holes were provided, however we've never changed it.
The gearboxes are easy to mount and together with the wheels make a very simple construction.
The power consumption is very light, and therefore the battery voltage remains constant which makes programming easier.
Some other bots have high speed and high battery discharge, so 0.25 seconds of motor at the start makes the bot travel much further than 0.25 seconds does after some time in use. These motors have no encoders, so there is no fixed correlation between time and distance travelled.
Chassis
The chassis of any robot is the backbone.
Because these tend to be a WIP (Work In Progress) you need to allow room for extras to be added, and mounting can be a problem.
The chassis was constructed using 5mm aluminium that was recycled (previous life was a blank panel in a rack) and most holes are drilled and tapped for M3. (use meths as the lubricant for the tap)
The intention was to enable it to be made using basic hand tools.
I've seen the odd robot using a very lightweight chassis which can flex and bend, or becomes too hard to add extras to. So while this could be considered over-engineered, this was never going to be the fastest or lightest bot anyway.
Software
The board is based on Picaxe 18 and was programmed in Basic.
The Picaxe software can be programmed using a flowchart, which removes the need to understand Basic.
It does provide an early and easier learning curve, which was the reason it was chosen (at the time).
HaloBOT is programmed to wait after it is turned on until a bumper switch is operated, then move forward until it detects a wall, responding to each contact with the aim of keeping it moving forward.
Since there is no vision and both motors were travelling forward, the angle of collision is not known, hence there are two options when it touches a wall :-
- stop the motor on that side and reverse the other motor, then move both motors forward.
- Stop both motors and reverse both, then make the motor on the side that touched go forward first, followed by the other motor.
As you can see there is a difference in the amount the bot clears the wall by, and how quickly forward progress is made.
Any time you reverse, you will lose time and distance travelled.
The next problem was being stuck in a corner.
The worst is striking the corner at 45 degs where you react to one side touching followed by the same reaction on the other side, and it repeats.
In this case we counted the contacts and then made the next touch add more time to the sequence to break the deadlock.
This shows in the video as a 'pause' to show it is thinking about the next move, and it carries on.
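The deadlock-breaking counter can be sketched in isolation. This is a hypothetical Java translation of just the counting logic from the PICAXE Basic listing further below (HaloBOT itself does not run Java; the class name and threshold constant are illustrative):

```java
// Sketch of the corner-escape heuristic: alternating left/right bumps
// grow a counter, while a repeated same-side bump resets it. Once the
// counter passes the threshold, we assume a ~45-degree corner and the
// next escape manoeuvre should be lengthened to break the deadlock.
class CornerDetector {
    private int lastSide = 0;     // 0 = none yet, 1 = left, 2 = right
    private int cornerCount = 0;
    private static final int THRESHOLD = 3;  // mirrors CornerCount > 3

    /** Registers a bump on the given side; returns true when the
     *  extended corner-escape move should be used. */
    boolean bump(int side) {
        if (side == lastSide) {
            cornerCount = 0;      // hit the same wall again: not a corner
        } else {
            cornerCount++;        // ping-ponging between sides
        }
        lastSide = side;
        return cornerCount > THRESHOLD;
    }
}
```

In the real robot the motor commands (stop, longer reverse, longer turn) run when this returns true; the "pause to think" seen in the video corresponds to that extended sequence.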
Enhancements
We added an IR sensor to the front in order to warn of impending wall and slow it down.
It was never programmed to be used, but means there is some protection from collision with objects.
You can detect the bumper switches at turn on, and jump to a sub routine in the software to do one of three operations.
One of these routines was to provide a method of equalizing the motors so it drove straight.
DC brushed motors can be run forward or backwards, and are never the same speed.
Manufacturing tolerances also means one will run slightly faster than another, so it's impossible to make these run straight.
Because the motors can be driven with PWM (Pulse Width Modulation), you can scale the speed of the faster motor by a factor of less than 1. This gets set in software, or is best stored in the EEPROM so it can be set once and reused.
It was tagged as a future enhancement.
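That future trim could be as simple as scaling the faster motor's duty cycle by the slower/faster speed ratio. A hypothetical sketch of the arithmetic only (not PICAXE firmware; names are made up):

```java
// Sketch: if this motor free-runs faster than the other, scale its PWM
// duty cycle by the speed ratio so both sides cover ground at the same
// rate; the slower motor is left at full duty.
class MotorTrim {
    static double trimmedDuty(double duty, double thisMotorSpeed, double otherMotorSpeed) {
        if (thisMotorSpeed <= otherMotorSpeed) {
            return duty;  // slower (or equal) motor needs no trimming
        }
        return duty * (otherMotorSpeed / thisMotorSpeed);
    }
}
```

For example, a motor that runs 5% faster would have its duty scaled by 100/105, so both wheels travel the same distance per unit time.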
Video
The video shows HaloBOT trying to escape the lid of an A4 paper box.
It keeps on going and never really understands that it can't escape, but does provide entertainment.
Hopefully someone else will enjoy making something similar.
If you do use my bumper design, it would be nice if you could acknowledge this blog.
Cheers
Mark
Edit 04/04/18: code and motor details added.
This was designed to run on a Picaxe 18 board with the L293D motor controller
'Robot Test Program For PICAXE18 microcontroller
'using CHI007
'Opposite motor is off during turns
'MCB May 2011
'Suited to HaloBOT

'output pin allocation (%76543210)
'76 = Right Motor (viewed from the rear)
'54 = Left Motor (viewed from the rear)
'3 = not used
'2 = LED B 'future
'1 = LED A 'future
'0 = piezo sounder 'future

'input pin allocation
'7 = Right Switch looking from the rear
'6 = Left Switch looking from the rear

'counter allocation
symbol RightCount = b0
symbol LeftCount = b1
symbol ForwardCount = b2
symbol CornerCount = b3
symbol BackCount = b4
symbol LastTurn = b5
symbol RightBump = pin7
symbol LeftBump = pin6

'start going forwards
'testing switches as you go

Waiting:
'In order to put the robot down, the motors should be off
'Press a bumper switch to start
if RightBump = 1 or LeftBump = 1 then check
goto waiting

check:
pause 100

main:
let pins = %10100000 'both motors forward
ForwardCount = ForwardCount + 1
if RightBump = 1 or LeftBump = 1 then
    pause 50 'time to settle and check if both touch
endif
'if RightBump = 1 and LeftBump = 1 then backturn
'if RightBump = 1 and CornerCount > 3 or LeftBump = 1 and CornerCount > 3 then Corner
if LeftBump = 1 then left
if RightBump = 1 then right
if ForwardCount = 200 then ResetCount
goto main

ResetCount:
'sound 0,(255,25)
ForwardCount = 0
RightCount = 0
LeftCount = 0
CornerCount = 0
LastTurn = 0
Goto Main

left:
'left switch hit
'so stop, both reverse, turn right
'LastTurn = 1
If LastTurn = 1 then
    CornerCount = 0
else
    CornerCount = CornerCount + 1
endif
LeftCount = LeftCount + 1
let pins = %00000000 'Stop
let pins = %01010000 'Reverse
'sound 0,(110,25)
pause 100
let pins = %00100000 'Left forward, Right off
pause 50
LastTurn = 1
ForwardCount = 0
'If LeftCount = RightCount + 3 or RightCount = LeftCount + 3 CornerExit
goto main

right:
'right switch hit
'so stop, both reverse, turn left
'LastTurn = 2
If LastTurn = 2 then
    CornerCount = 0
else
    CornerCount = CornerCount + 1
endif
RightCount = RightCount + 1
let pins = %00000000 'Stop
let pins = %01010000 'Reverse
'sound 0,(50,25)
pause 100
let pins = %10000000 'Right forward, Left off
pause 50
LastTurn = 2
ForwardCount = 0
goto main

backturn:
'both switches hit
'so stop, reverse, turn to one side
'LastTurn = 0
'BackTurn = BackTurn + 1
If RightCount > LeftCount then
    let pins = %00000000 'Stop
    let pins = %01010000 'Reverse
    sound 0,(100,10,110,10,110,10,110,10)
    pause 150 'larger reverse movement
    let pins = %10000000 'Right forward, Left off
    pause 100
else
    let pins = %00000000 'Stop
    let pins = %01010000 'Reverse
    sound 0,(100,10,110,10,110,10,110,10)
    pause 150 'larger reverse movement
    let pins = %00100000 'Left forward, Right off
    pause 100
endif
LastTurn = 0
ForwardCount = 0
'CornerCount = 0
goto main

Corner:
'we think we're stuck in a corner, therefore we need to add something to exit the sequence
'this would show as a high count of left/right backups without much forward.
'the solution is to decide which side has a greater hit rate, and move opposite.
'In this case I have tried to move back a larger amount, along with more turn.
sound 0,(255,10,255,10,255,10)
pause 100
'pause 500
If RightCount > LeftCount then
    let pins = %00000000 'Stop
    let pins = %01010000 'Reverse
    sound 0,(75,50)
    pause 200 'larger reverse movement
    let pins = %10000000 'Right forward, Left off
    pause 150
else
    let pins = %00000000 'Stop
    let pins = %01010000 'Reverse
    sound 0,(75,50)
    pause 200 'larger reverse movement
    let pins = %00100000 'Left forward, Right off
    pause 150
endif
'If CornerCount > 1 then
'    CornerCount = CornerCount - 1
'endif
CornerCount = 0
goto main
is it this

Yes, getResourceAsStream() would be the preferred way to go, to access a file that's on the classpath. Whether this .csv file *should* be on the classpath is another question...
Now how can I convert an InputStreamReader to a string?

Are you sure that you want the whole file as a String? I would guess that the code after that would need to split that string based on the newline character anyway so that you can process each individual line. It sounds like it would be wasted effort to concatenate the whole file into one string only to have it split up again.
I need to pass it inputReader as a string
String classpath = System.getProperty("\\MID.csv");

is it not the correct way to set the correct path......
public class AppTest {
    // public static final String path = "D:\\MyDev\\Consum\\identityy\\MID.csv";
    String classpath = System.getProperty("\\MID.csv");

    @Autowired
    private JobLauncherTestUtils jobLauncherTestUtils;

    @Test
    public void launchJob() throws Exception {
        Map<String, JobParameter> jobParameterMap = new HashMap<String, JobParameter>();
        jobParameterMap.put("inputFilePath", new JobParameter(path));
        JobParameters jobParameters = new JobParameters(jobParameterMap);
    }
}
is it not the correct way to set the correct path......

You don't need to set any path. All that's required is to do what is specified in my link about Eclipse.
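Tying the answers together: getResourceAsStream() hands back an InputStream, which can be wrapped in a BufferedReader and either processed line by line or, if you really do need one String, joined back up. A minimal sketch (ResourceText and readAll are made-up names, not from any library; the MID.csv resource name is the asker's):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.stream.Collectors;

class ResourceText {
    // Reads an InputStream (e.g. from getClass().getResourceAsStream("/MID.csv"))
    // into a single newline-joined String. For CSV processing it is usually
    // better to consume reader.lines() directly instead of joining.
    static String readAll(InputStream in) throws IOException {
        try (BufferedReader reader =
                 new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
            return reader.lines().collect(Collectors.joining("\n"));
        }
    }
}
```

Note that getResourceAsStream returns null when the resource is not on the classpath, so check for that before wrapping the stream.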
Right Access Permission
The first thing we will consider is providing the right access. Only the resource administrator should have owner access, developers should have read access, and applications can have contributor access. This way, data can only be deleted by the resource administrator or by a process, e.g. by Databricks or by Azure Data Factory pipelines.
Accidental Delete Protection
To avoid any accidental deletion we should always add a delete lock on our data lake.
If someone tries to delete it by mistake, they'll get a prompt to remove the lock first!
Delta Lake Time Travelling
Delta Lake time travelling is a great feature and should be used in case of any data corruption in the Delta Lake (e.g. by wrong data ingestion or faulty update procedure). Find below a short example:
import org.apache.spark.sql.SaveMode

// adding records for the first time
val studentDF = Seq(
  (1, "Prosenjit"),
  (2, "Abhijit"),
  (3, "Aadrika")
).toDF("id", "name")

studentDF.write.format("delta").mode("overwrite").save("/mnt/mydeltalake/Student")

// updating with a new record
val studentDF2 = Seq(
  (4, "Ananya")
).toDF("id", "name")

studentDF2.write.format("delta").mode("append").save("/mnt/mydeltalake/Student")

// creating an external table of type Delta for easy access
spark.sql("CREATE TABLE Student USING DELTA LOCATION '/mnt/mydeltalake/Student'")
Now, we have deleted a record.
spark.sql("DELETE FROM Student WHERE id = 1")

val studentDF3 = spark.sql("SELECT * FROM Student")
display(studentDF3)
We can retrieve the deleted records by simply travelling the time backwards and loading the right snapshots.
val historical_studentDF = spark.read.format("delta")
  .option("timestampAsOf", "2020-04-15 18:12:26")
  .load("/mnt/mydeltalake/Student")

display(historical_studentDF)

spark.sql("INSERT INTO Student SELECT * FROM Student TIMESTAMP AS OF \"2020-04-15 18:12:26\"")

val studentDF4 = spark.sql("SELECT * FROM Student")
display(studentDF4)
Restoring the records by time travelling can help when the data was deleted or updated by a Spark application.
But what will happen if someone, or some application, removes the underlying data files by mistake?
Delta Lake will not be able to track those changes, so it will not be able to recover the records! We can run FSCK REPAIR TABLE, but that will only repair the transaction log.
Azure Storage Blob Soft Delete Feature
Azure Storage supports the soft delete feature for blobs. Deleted blobs are retained for a configurable number of days. If our Delta Lake is created on Azure Storage Blob, we can avail of this feature.
Any deleted blob can be 'undeleted' very easily.
Once restored, we can query the Delta Lake table and it’ll return the records without any further repairing.
Azure Data Factory Periodic Backup
As the soft delete feature is yet to be supported for Azure Data Lake Storage Gen2 at the time of this writing (refer here for the list of features), we can implement an Azure Data Factory pipeline to copy the Delta Lake directories to another location, either in the same region or in a separate region.
Find below a simple ADF Copy Activity code. We should preserve the source hierarchy and source attributes.
{
"name": "Delta_Lake_Backup",
"properties": {
"activities": [
{
"name": "Delta Lake Backup",
"type": "Copy",
",
"copyBehavior": "PreserveHierarchy"
}
},
"enableStaging": false,
"preserve": [
"Attributes"
]
},
"inputs": [
{
"referenceName": "mydeltalake",
"type": "DatasetReference"
}
],
"outputs": [
{
"referenceName": "mydeltalakebackups",
"type": "DatasetReference"
}
]
}
],
"annotations": [
]
}
}
We can then connect to the copied snapshots, read the data and if required, we can track the changes using the transaction logs.
For any on-demand backup, we can try the Cloning feature of the Azure Storage Explorer.
Points to note:
- Retaining Delta Lake data by taking periodic snapshots will consume extra space. The amount of storage and cost for that will depend on our backup frequency, size of our Delta Lake and if we’re transferring the data into another region.
- If we backup our data into Azure Storage Blob, we can use Lifecycle Management to delete the data after retention days.
- Lifecycle Management for Azure Data Lake Storage Gen 2 is yet to be fully supported. Until then, we can use ADF Delete activity to clear the old snapshots.
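The Lifecycle Management option mentioned in the notes above can be expressed as a policy document. The sketch below is a hedged example; the rule name, the prefix, and the 30-day threshold are illustrative assumptions, not values from this article:

```json
{
  "rules": [
    {
      "name": "expire-delta-backups",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "mydeltalakebackups/" ]
        },
        "actions": {
          "baseBlob": {
            "delete": { "daysAfterModificationGreaterThan": 30 }
          }
        }
      }
    }
  ]
}
```

Applied to the storage account holding the backups, this would delete snapshot blobs 30 days after their last modification.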
Azure Disaster Recovery Feature
In case of any severe disaster, the whole region containing our Delta Lake will go down. If we set our replication as Geo-redundant storage (GRS) or Read-access geo-redundant storage (RA-GRS) and primary region suffers an outage, the secondary region will serve as a redundant source of our Delta Lake with some data loss (refer here to know more about Last Sync Time to estimate the amount of data loss).
The Delta Lake in the secondary region will not be accessible unless Microsoft declares a disaster and fails over to the secondary. So, we may want to create our own backup solution if the Azure-provided redundancy doesn't suit our purpose.
Points to note:
- In case we want to implement our own backup/redundancy solution by copying the Delta Lake data into another region, compare the solution cost (e.g. 2 LRS locations + ADF pipeline run time + approximate data transfer-out cost from primary region) with Azure Storage GRS/RA-GRS cost w.r.t the benefits.
- In case of outages, we may need to access the Delta Lake in our secondary region. Azure Databricks needs to be pre-configured as part of our disaster recovery readiness process. Refer here for the steps to follow.
Conclusion
We have seen a few steps to make our data lake more resilient with Databricks Delta Lake and some Azure features. We should select options based on our application's criticality and budget.
https://cprosenjit.medium.com/improving-resiliency-with-databricks-delta-lake-azure-d5ffa4a723a3
In my last post we learned the basics of Lightning components. Today we will continue with where we can use Lightning components in the UI.
We can use Lightning components in six places.
- Lightning App
- Salesforce 1
- Lightning Experience
- Visualforce
- Lightning Community
- Lightning Flow
We all know how to use a Lightning component in a Lightning App. Today we learn how to use Lightning components in all the remaining places.
Use Lightning Components in Salesforce1
For any component we wish to use in Salesforce1, we need to include the interface implements="force:appHostable" in the aura:component tag and save the changes.
<aura:component implements="force:appHostable">
The appHostable interface makes the component available as a custom tab.
Include your components in the Salesforce1 navigation menu by following these steps.
- Create a custom Lightning component tab for the component. From Setup, enter Tabs in the Quick Find box, then select Tabs.
Note: You must create a custom Lightning component tab before you can add your component to the Salesforce1 navigation menu. Accessing your Lightning component from the full Salesforce site is not supported.
- Add your Lightning component to the Salesforce1 navigation menu.
- From Setup, enter Navigation in the Quick Find box, then select Salesforce1 Navigation.
- Select the custom tab you just created and click Add.
- Sort items by selecting them and clicking Up or Down. In the navigation menu, items appear in the order you specify. The first item in the Selected list becomes your users’ Salesforce1 landing page.
- Check your output by going to the Salesforce1 mobile browser app. Your new menu item should appear in the navigation menu.
Use Lightning Components in Lightning Experience
In the components you wish to include in Lightning Experience, add implements=”force:appHostable” in the aura:component tag and save your changes.
<aura:component implements="force:appHostable">
Follow these steps to include your components in Lightning Experience as a tab and make them available to users in your organization.
- Create a custom tab for this component.
- From Setup, enter Tabs in the Quick Find box, then select Tabs.
- Click New in the Lightning Component Tabs related list.
- Select the Lightning component that you want to make available to users.
- Enter a label to display on the tab.
- Select the tab style and click Next.
- When prompted to add the tab to profiles, accept the default and click Save.
- Add your Lightning components to the App Launcher.
- From Setup, enter Apps in the Quick Find box, then select Apps.
- Click New. Select Custom app and then click Next.
- Enter Lightning for App Label and click Next.
- In the Available Tabs dropdown menu, select the Lightning Component tab you created and click the right arrow button to add it to the custom app.
- Click Next. Select the Visible checkbox to assign the app to profiles and then Save.
- Check your output by navigating to the App Launcher in Lightning Experience. Your custom app should appear in the App Launcher.
Click the custom app to see the components you added.
Use Lightning Components in Visualforce Pages
Add Lightning components to your Visualforce pages to combine features you’ve built using both solutions. Implement new functionality using Lightning components and then use it with existing Visualforce pages.
There are three steps to add Lightning components to a Visualforce page.
- Add the <apex:includeLightning /> component to your Visualforce page.
- Reference a Lightning app that declares your component dependencies with $Lightning.use().
- Write a function that creates the component on the page with $Lightning.createComponent().
Add <apex:includeLightning /> at the beginning of your page. This component loads the JavaScript file used by Lightning Components for Visualforce.
To use Lightning Components for Visualforce, define component dependencies by referencing a Lightning dependency app. This app is globally accessible and extends ltng:outApp. The app declares dependencies on any Lightning definitions (like components) that it uses. Here’s an example of a simple app called lcvfTest.app. The app uses the <aura:dependency> tag to indicate that it uses the standard Lightning component, lightning:button.
<aura:application access="GLOBAL" extends="ltng:outApp">
    <aura:dependency resource="lightning:button"/>
</aura:application>
To reference this app, use the following markup where theNamespace is the namespace prefix for the app. That is, either your org’s namespace, or the namespace of the managed package that provides the app.
`$Lightning.use(“theNamespace:lcvfTest”, function() {});`
If the app is defined in your org (that is, not in a managed package), you can use the default “c” namespace instead, as shown in the next example. If your org doesn’t have a namespace defined, you must use the default namespace.
Creating a Component on a Page
Finally, create your component on a page using $Lightning.createComponent(String type, Object attributes, String domLocator, function callback). This function is similar to $A.createComponent(), but includes an additional parameter, domLocator, which specifies the DOM element where you want the component inserted.
Let’s look at a sample Visualforce page that creates a lightning:button using the lcvfTest.app from the previous example.
<apex:page>
    <apex:includeLightning />

    <div id="lightning" />

    <script>
        $Lightning.use("c:lcvfTest", function() {
            $Lightning.createComponent("lightning:button",
                { label : "Press Me!" },
                "lightning",
                function(cmp) {
                    // do some stuff
                });
        });
    </script>
</apex:page>
This code creates a DOM element with the ID “lightning”, which is then referenced in the $Lightning.createComponent() method. This method creates a lightning:button that says “Press Me!”, and then executes the callback function.
Important: You can call $Lightning.use() multiple times on a page, but all calls must reference the same Lightning dependency app.
Did you like the post, or want to add anything? Let me know in the comments. Happy programming 🙂
https://newstechnologystuff.com/2016/08/21/use-lightning-components-in-visualforce/
Importing Brian¶
After installation, Brian is available in the brian2 package.
The following topics are not essential for beginners.
Precise control over importing¶
If you want to use a wildcard import from Brian, but don't want to import all the additional symbols provided by pylab, you can use:
from brian2.only import *
Note that whenever you use something different from the most general from brian2 import * statement, you should be aware that Brian overwrites some numpy functions with their unit-aware equivalents (see Units). If you combine multiple wildcard imports, the Brian import should therefore be the last import. Similarly, you should not import and call overwritten numpy functions directly, e.g. by using import numpy as np followed by np.sin, since this will not use the unit-aware versions. To make this easier, Brian *.
Dependency checks¶
Brian will check the dependency versions during import and raise an error for an outdated dependency. An outdated dependency does not necessarily mean that Brian cannot be run with it; it only means that Brian is untested on that version. If you want to force Brian to run despite the outdated dependency, set the core.outdated_dependency_error preference to False. Note that this cannot be done in a script, since you do not have access to the preferences before importing brian2. See Preferences for instructions on how to set preferences in a file.
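As a hedged illustration of such a preference file (the setting name is taken from the text above; treat the exact file location as something to look up in the Preferences documentation), the line would look like:

```ini
# in your Brian preference file
core.outdated_dependency_error = False
```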
http://brian2.readthedocs.io/en/stable/user/import.html
Opened 6 years ago
Closed 6 years ago
Last modified 6 years ago
#11420 closed defect (fixed)
HelloWorld sample plugin fails to load after failing to import cleandoc_
Description (last modified by )
Error logs are filled with the following:
2013-12-29 22:20:30,315 Trac[loader] ERROR: Failed to load plugin from /etc/trac/plugins.d/HelloWorld.py: Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/trac/loader.py", line 90, in _load_py_files
    module = imp.load_source(plugin_name, plugin_file)
  File "/etc/trac/plugins.d/HelloWorld.py", line 39, in <module>
    class HelloWorldMacro(WikiMacroBase):
  File "/etc/trac/plugins.d/HelloWorld.py", line 40, in HelloWorldMacro
    _description = cleandoc_(
NameError: name 'cleandoc_' is not defined
The problem seems to be solved by including
from trac.util.translation import cleandoc_
in HelloWorld.py.
Attachments (0)
Change History (5)
comment:1 by , 6 years ago
comment:2 by , 6 years ago
comment:3 by , 6 years ago
Oh, but there is no import of cleandoc_.
… as you mentioned! Sorry for not reading more carefully.
Last edited 6 years ago by (previous) (diff)
comment:4 by , 6 years ago
comment:5 by , 6 years ago
It looks like an issue with your Python search path. You should put HelloWorld.py in your environment plugins directory, or in a Python site-packages directory.
https://trac.edgewall.org/ticket/11420
A couple of weeks ago, I noticed an F1 live timing site with an easy-to-hit endpoint… here's the Mac command-line script I used to grab the timing info, once every five seconds or so…
mkdir f1_silverstone
i=1; sleep 900; while true ; do curl >> f1_silverstone/f1output_race_${i}.txt ; i=$((i+1)); sleep 5 ; done
Now I just need to think what I’m going to do with the data! Maybe an opportunity to revisit this thing and try out some realtime dashboard widget toys?
PS to get the timestamp of each file in python:
import os
os.path.getctime(filename)
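Building on that, a small self-contained sketch that orders the captured files chronologically by their timestamps (the directory and file names here are created on the fly purely for illustration):

```python
import os
import tempfile
import time

def files_by_ctime(dirname):
    """Return full paths of files in dirname, oldest first by ctime."""
    paths = [os.path.join(dirname, f) for f in os.listdir(dirname)]
    return sorted(paths, key=os.path.getctime)

# demo on a throwaway directory
d = tempfile.mkdtemp()
for name in ("f1output_race_1.txt", "f1output_race_2.txt"):
    with open(os.path.join(d, name), "w") as fh:
        fh.write("timing data\n")
    time.sleep(0.05)  # ensure distinct timestamps

ordered = files_by_ctime(d)
print([os.path.basename(p) for p in ordered])
# → ['f1output_race_1.txt', 'f1output_race_2.txt']
```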
https://blog.ouseful.info/2016/07/18/simple-live-timing-data-scraper/
Job Queue Batch
Project description
This addon adds a grouper for queue jobs.
It allows you to show your jobs in batched form, in order to understand the results better.
Example:
import logging

from odoo import models, fields, api
from odoo.addons.queue_job.job import job

_logger = logging.getLogger(__name__)


class MyModel(models.Model):
    _name = 'my.model'

    @api.multi
    @job
    def my_method(self, a, k=None):
        _logger.info('executed with a: %s and k: %s', a, k)


class MyOtherModel(models.Model):
    _name = 'my.other.model'

    @api.multi
    def button_do_stuff(self):
        batch = self.env['queue.job.batch'].get_new_batch('Group')
        for i in range(1, 100):
            self.env['my.model'].with_context(
                job_batch=batch
            ).with_delay().my_method('a', k=i)
        batch.enqueue()
In the snippet of code above, when we call button_do_stuff, 99 jobs capturing the method and its arguments will be postponed. Each will be executed as soon as the jobrunner has a free bucket, which can be instantaneous if no other job is running.
Once all the jobs have finished, the grouper will be marked as finished.
Table of contents
Usage
You can manage your batch jobs from the systray. A new button will be shown with your currently executing job batches and the recently finished ones.
You are welcome to contribute. To learn how please visit.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/odoo11-addon-queue-job-batch/
Python Programming, news on the Voidspace Python Projects and all things techie.
A Little Bit of Python: Episodes 7 and 8
Two more episodes of A Little Bit of Python have been posted. A Little Bit of Python is an occasional podcast on Python related topics with myself, Brett Cannon, Jesse Noller, Steve Holden and Andrew Kuchling.
A little bit of Python is now listed on iTunes!
- A Little Bit of Python on iTunes (Opens in iTunes and a web page)
Episode 7 is a discussion of Unladen Swallow, a branch of CPython that uses the LLVM (Low Level Virtual Machine) to provide a JIT (Just-In-Time compiler) for performance improvements in Python. This episode was recorded before PyCon. Since then a couple of decisions about Unladen Swallow have been made, that were still open questions when we recorded this episode:
PEP 3146 has been tentatively accepted.
With the proviso that it depends on startup time improvements, memory use reductions and further performance improvements (all discussed in the PEP), Unladen Swallow has been approved to merge with CPython. A new subversion branch, py3k-jit has been created for this purpose.
The version of Python that the merge targets is Python 3.3. As Python 3.2 will be out later this year it is realistically going to be about 2 years before Python with an Unladen Swallow JIT is released. This is plenty of time for the outstanding issues to be addressed and for users to test Unladen Swallow.
Episode 8 is an interview with Mark Shuttleworth recorded by Steve Holden at PyCon.
- Episode 7: A Little Bit of Python (mp3)
- Episode 7: A Little Bit of Python (m4a)
- Episode 8: A Little Bit of Python (mp3)
- Episode 8: A Little Bit of Python (m4a)
- A Little Bit of Python on iTunes
If you have feedback, insults or suggestions for new topics you can email us on: all@bitofpython.com.
Categories: Python, Fun Tags: podcast, bitofpython
Exception Handling Code for Python 2 and 3
The right way to maintain a library for both Python 2 and 3 is to run your tests on Python 2.6 with Python 3 warnings switched on. This doesn't mean that you have to make Python 2.6 your minimum supported version of Python, but it will warn you where you are doing things that either won't work or will have different behaviour in Python 3. For example:
$ python -3
Python 2.6.4 (r264:75821M, Oct 27 2009, 19:48:32)
[GCC 4.0.1 (Apple Inc. build 5493)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> 1 > 'a'
__main__:1: DeprecationWarning: comparing unequal types not supported in 3.x
False
>>>
Once you have done this you can use 2to3 to convert your codebase. Run your tests again and find any problems. The idea is to fix them in your Python 2 codebase (possibly resorting to compatibility layers with separate libraries for things like string / bytes IO) so that 2to3 is fully able to produce a working Python 3 version of your code. Using distribute you can even have 2to3 run automatically when your package is installed on Python 3.
That is the right way. For smaller modules it is possible, but sometimes not fun, to keep a single codebase that runs fine with both Python 2 and Python 3. The one module I maintain like this is discover, a backport of the new unittest test discovery [1]. There are various tricks to getting around the slightly different syntax and semantics between Python 2 and Python 3. One of these is handling exceptions.
For Python 2.5 and earlier you define your try..except blocks thusly:
try:
    do_something()
except AttributeError, e:
    handle_this(e)
except TypeError, e:
    handle_that(e)
else:
    finish()
For Python 3, and also Python 2.6 if you don't mind being incompatible with earlier versions of Python, you do:
try:
    do_something()
except AttributeError as e:
    handle_this(e)
except TypeError as e:
    handle_that(e)
else:
    finish()
So you can't write exception handling code that will work with both Python 2.5 and 3.X using these constructs. Instead you can use the following nasty trick:
import sys

try:
    do_something()
except:
    ExceptionClass, e = sys.exc_info()[:2]
    if ExceptionClass is AttributeError:
        handle_this(e)
    elif ExceptionClass is TypeError:
        handle_that(e)
    else:
        raise
else:
    finish()
Not very pretty, and don't forget to fix it as soon as you drop support for Python 2.5, but it works fine.
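A quick, runnable sanity check of the trick (the function name is made up for illustration); this runs unchanged on both old and new Pythons because it avoids both the comma and the as syntax:

```python
import sys

def safe_increment(x):
    try:
        return x + 1
    except:
        # pull the exception class and instance out of sys.exc_info()
        ExceptionClass, e = sys.exc_info()[:2]
        if ExceptionClass is TypeError:
            return 'cannot increment %r' % (x,)
        raise  # re-raise anything we don't handle

print(safe_increment(41))   # 42
print(safe_increment('a'))  # cannot increment 'a'
```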
Posted by Fuzzyman on 2010-03-21 01:27:33 | |
Categories: Python, Hacking Tags: python3, exceptions, compatibility
This work is licensed under a Creative Commons Attribution-Share Alike 2.0 License.
http://www.voidspace.org.uk/python/weblog/arch_d7_2010_03_20.shtml
This will be a short post on the most useful diagnostics you can inspect while running the Pdb debugger.
Quick overview
Pdb is the de facto debugger for Python scripts, with commands similar to GDB's. Many Python developers use Pdb to diagnose an abrupt exception, or to see the internal state of their variables while their scripts run.
The minimum code to debug a function:
>>> import pdb
>>> import mymodule
>>> pdb.run('mymodule.test()')
The functions or code you want to debug has to be already imported in a module.
Or use:
python3 -m pdb myscript.py
To arbitrarily break into the debugger, put this statement at the place in your script where you want it to break:
pdb.set_trace()
The following function enters the debugger using the last exception raised as the stack:
pdb.pm()
At any rate, when the debugger enters you will arrive at a command prompt that looks like (Pdb). Sometimes you will be at the first statement, and you either need to step into a function, step over one (next), or set a breakpoint and continue.
The help function
Use h or help to print a list of commands, or to print the documentation of a single command. Other basic commands are p to print a variable and pp to pretty-print it; locals() (a Python function) shows you the list of variables, and ll lists the source code of the entire function you're in.
Moving around functions
up and down move you up and down the call stack respectively, allowing you to view the variables in that local scope. where prints a traceback with an arrow pointing at the frame of the call stack you're currently in.
Debugging code blocks within a debugger
For this, you need to enter a recursive debugger. Using debug function_call(), you will get a clean state that lets you step through execution or set breakpoints, printing variables along the way.
Set a breakpoint
The easiest way is when the breakpoint is a line in the current file; then you can just type break LINE_NUMBER. It is also possible to break in a different file if you give it the absolute path, such as break /home/pythonuser/src/script.py:42.
To avoid the hassle of spelling out the full path each time, add the source folder to sys.path while you're in the debugger:

sys.path.append("/home/pythonuser/src/")
break script.py:42
Viewing traceback outside of a debugger
The last point I want to go over is how to get the traceback information without raising the exception again or entering a debugger. This can be done using the traceback module and the exception's __traceback__ member:
import traceback

traceback.print_tb(e.__traceback__)
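The same traceback object can also be rendered to a string with traceback.format_tb. A minimal runnable sketch that captures a traceback from a caught exception without re-raising it or entering Pdb:

```python
import traceback

def failing():
    return 1 / 0

try:
    failing()
except ZeroDivisionError as e:
    # format_tb returns a list of strings, one per stack frame
    tb_text = ''.join(traceback.format_tb(e.__traceback__))

print(tb_text)  # shows the frames that led to the ZeroDivisionError
```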
And we're done
I hope you learned something new from this post. If you see any errors, please let me know so I can correct them.
https://dev.to/zenulabidin/pdb-debugging-tips-2ki7
The forest contains only a single crazy ogre. It's not much of a challenge. Let's add a bunch of other enemies. How about Godzilla, Micro Godzilla, King Kong, Prince Kong, a Fearsome Dragon, a Hot Dragoness, a Drunk Dragon, a Killer Bunny, and a Wolf Pack for good measure. Simply passing the XML string that describes each enemy to the XML() function is enough to create an Element that contains sub-elements for each enemy. The wolf pack is a nested enemy that contains individual wolves.
# initializing from a string
enemies_xml = \
"""
<enemies>
<enemy name='Godzilla' life='8500' strength='200' special='stomping' />
<enemy name='King Kong' life='2500' strength='120' special='stomping' />
<enemy name='Micro Godzilla' life='0.5' strength='0.3' special='stomping' />
<enemy name='Prince Kong' life='300' strength='50' special='stomping' />
<enemy name='Fearsome Dragon' life='400' strength='80' special='Fire Breath' />
<enemy name='Hot Dragoness' life='400' strength='80' special='Fire Breath' />
<enemy name='Drunk Dragon' life='0.5' strength='0.3' special='Alcohol Breath' />
<enemy name='Killer Bunny' life='25' strength='15' />
<enemy name='Wolf pack'>
<enemy name="Wolf 1" life="10" strength='5' />
<enemy name="Wolf 2" life="10" strength='5' />
<enemy name="Wolf 3" life="10" strength='5' />
</enemy>
</enemies>
"""
# Creating an Element from a string
enemies = XML(enemies_xml)
Finding Your Way in the Forest
So, you have a nicely populated forest with lots of enemies, and the hero is ready to bravely enter it. The hero is powerful, courageous, and can dance like a ballerina. Unfortunately he is also a stompophobe. Stompophobes, as you very well know, are afraid to death of being stomped. This is a very rational attitude where the likes of Godzilla and King Kong walk the earth.
The hero naturally has access to our forest XML file, and he wishes to know about all the stompers in the area. ElementTree sports several flavors of finding stompers, such as find(), findall(), and findtext(). All these functions accept a parameter that can be either a tag name or a limited XPath expression. ElementTree supports a very basic subset of XPath: you can search for a specific tag among your direct children or in an entire tree, or you can start from a specific branch. For example, to find all the compound enemies in the forest, the following expression will do:
compound = enemies.findall('./enemy/enemy')
for e in compound:
print e.get('name')
Wolf 1
Wolf 2
Wolf 3
Here is his plan: scan the element tree recursively. For each element with attributes, create an 'attributes' tag, insert into it a sub-element for each attribute (the tag is the attribute name, the text is the value), and clear the original attributes. Note that I created the 'attributes' element using Element() and not SubElement(). This creates a standalone element that I later insert() as a sub-element explicitly. The reason I didn't use SubElement() is two-fold: I wanted to show you another way to add sub-elements, and I also wanted to make sure the 'attributes' sub-element would be the first sub-element. The SubElement() function always appends the new sub-element.
def attributes2elements(e):
    for child in list(e):
        attributes2elements(child)
    if e.attrib:
        # make sure that the attributes element is the first one
        attributes = Element('attributes')
        e.insert(0, attributes)
        for (name, value) in e.items():
            a = SubElement(attributes, name)
            a.text = value
        e.attrib = {}
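To see the transformation end to end, here is a self-contained runnable sketch; it restates the function together with the standard-library imports it needs, using list(e) in place of the older getchildren() (removed in Python 3.9):

```python
from xml.etree.ElementTree import Element, SubElement, XML, tostring

def attributes2elements(e):
    # recurse into children first, then lift this element's attributes
    for child in list(e):
        attributes2elements(child)
    if e.attrib:
        # make sure that the attributes element is the first one
        attributes = Element('attributes')
        e.insert(0, attributes)
        for name, value in e.items():
            a = SubElement(attributes, name)
            a.text = value
        e.attrib = {}

enemy = XML("<enemy name='Killer Bunny' life='25' />")
attributes2elements(enemy)
print(tostring(enemy).decode())
# → <enemy><attributes><name>Killer Bunny</name><life>25</life></attributes></enemy>
```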
Detecting Stompers
At this point I can invoke one of the find() functions to locate stompers. However, it is not very simple. Here's why.
stompers = [e for e in enemies.findall('.//special') if e.text == 'stomping']
for s in stompers:
print pretty_dump(s)
<special>stomping</special>
<special>stomping</special>
<special>stomping</special>
<special>stomping</special>
Elementary ElementTree
ElementTree is a fine piece of software that proves a friendly API can also be performant. ElementTree offers much more than shown here, including decent namespace support, fine-grained XML tree building, reading and writing files, etc. For performance buffs, cElementTree is a real boon. The official documentation is here:, but it is very weak. I recommend going to the source:. And be sure to keep your eye out for the many fine tutorials and articles by third-party developers.
http://www.devx.com/opensource/Article/33153/0/page/5
I have been working on a solution to mirror the PWD of multiple terminals. As part of that I need a way to execute commands on another tty/pts. A simple echo won't work, because echo writes to the output buffer, while I need to push these commands to the input buffer of the tty/pts. I found this forum post after some googling and adapted its code for my purpose.
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/ioctl.h>
#include <string.h>
#include <unistd.h>

void print_help(char *prog_name)
{
    printf("Usage: %s [-n] DEVNAME COMMAND\n", prog_name);
    printf("Usage: '-n' is an optional argument if you want to push a new line at the end of the text\n");
    printf("Usage: Will require 'sudo' to run if the executable is not setuid root\n");
    exit(1);
}

int main(int argc, char *argv[])
{
    char *cmd, *nl = "\n";
    int i, fd;
    int devno, commandno, newline;
    int mem_len;

    devno = 1;
    commandno = 2;
    newline = 0;

    if (argc < 3) {
        print_help(argv[0]);
    }
    if (argc > 3 && argv[1][0] == '-' && argv[1][1] == 'n') {
        devno = 2;
        commandno = 3;
        newline = 1;
    } else if (argc > 3 && argv[1][0] == '-' && argv[1][1] != 'n') {
        printf("Invalid Option\n");
        print_help(argv[0]);
    }

    fd = open(argv[devno], O_RDWR);
    if (fd == -1) {
        perror("open DEVICE");
        exit(1);
    }

    /* Build the command string from the remaining arguments. */
    mem_len = 0;
    for (i = commandno; i < argc; i++) {
        mem_len += strlen(argv[i]) + 2;
        if (i > commandno) {
            cmd = (char *)realloc((void *)cmd, mem_len);
        } else { /* i == commandno */
            cmd = (char *)malloc(mem_len);
            cmd[0] = '\0'; /* strcat needs an initialized buffer */
        }
        strcat(cmd, argv[i]);
        strcat(cmd, " ");
    }

    if (newline == 0)
        usleep(225000);

    /* TIOCSTI pushes one byte at a time into the terminal's input queue. */
    for (i = 0; cmd[i]; i++)
        ioctl(fd, TIOCSTI, cmd + i);
    if (newline == 1)
        ioctl(fd, TIOCSTI, nl);

    close(fd);
    free((void *)cmd);
    exit(0);
}
Copy the above code to a C file (e.g. ttyecho.c). Run the following command in the directory you created the C file in to compile the code.
make ttyecho
If you named the file abc.c, then the command would be make abc.
Copy this file to the bin directory under your home directory; in my case it is /home/pratik/bin. Create the directory if it doesn't exist. It's a good practice to keep all custom binaries/executables in this bin directory.
Start another terminal, or switch to any other open terminal that you wish to control, and execute the command tty. You can see a sample output below.
@~$ tty
/dev/pts/5
Now, to execute a command on /dev/pts/5, run the following command in the controlling/original terminal.
sudo ttyecho -n /dev/pts/5 ls
You will see that the ls command is executed in /dev/pts/5. The -n option makes ttyecho send a newline after the command, so that the command gets executed and not just inserted. This utility can in fact be used to send any data to other terminals. For example, you could open vim in /dev/pts/5 and then run the following command in the controlling terminal to cause vim to exit in /dev/pts/5.
sudo ttyecho -n /dev/pts/5 :q
To avoid using sudo all the time, so that the command is easily scriptable, change the owner/permissions of the executable using the following commands.
sudo chown root:root ttyecho sudo chmod u+s ttyecho
What we did was change the owner/group to root and set the setuid bit for the executable, which allows you to run the utility with root permissions.
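For scripting, the same ioctl is also reachable from Python via the fcntl module. Below is a hedged sketch, not a drop-in replacement for the C tool: TIOCSTI typically requires root (or the target to be your controlling terminal), and very recent kernels can disable it entirely. The device path in the usage comment is just an example.

```python
import fcntl
import termios

def push_to_tty(fd, text, newline=True):
    """Push text into a terminal's *input* queue, one byte at a time."""
    data = text.encode() + (b'\n' if newline else b'')
    for i in range(len(data)):
        # TIOCSTI takes a pointer to a single character
        fcntl.ioctl(fd, termios.TIOCSTI, data[i:i + 1])

# usage sketch (path is an example):
# with open('/dev/pts/5', 'w') as tty:
#     push_to_tty(tty.fileno(), 'ls')
```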
It is a very interesting tool.
What I like to see improved is, instead of the “-n” switch, a statement like:
ttyecho -e /dev/tty1 “ls -l\n”
Making it more behave like “echo”.
Works on Mac OSX too! At least with 10.9.5. Does need the sudo
Hello, a really great tool, works like a charm on a raspberrypi! many thanks!
is it possible to send keystrokes like shift+pgup, shift+pgdn to /dev/tty1?
A little late to the party… From everything else I’d been reading, the only way to do this was attach gdb to the terminal session, stop the job, somehow attach the now-stopped-job to another pty and start it again. (To state the obvious) This is much simpler. Thanks so much mate.
Great tool!!!.. Wish Linux could incorporate this into tool bin :-)!!.. This really helped to make my script crisp!!..thanks a lott!!!..
Great tool tnx a lot!!!
Cool thanks!
Though the make command would only work if there was a makefile (on Debian anyway), you could just use “gcc ttyecho.c”
Also, for me, putting the binary in ~/bin doesn’t work, I haven’t heard of that before. I normally put them in /usr/local/bin
Great job.
while true; do echo print this; sleep 5; done;
this command works standalone but when combined with ttyecho throws an error? Can anyone help me out with that?
ttyecho -n /dev/pts/2 while true; do echo print this; sleep 5; done; – GIVES ERROR
bash: syntax error near unexpected token `do’
use quotes to mark the list of commands as a single argument
ttyecho -n /dev/pts/2 “while true; do echo print this; sleep 5; done;”
Can this login to a getty process?
When I run it with the user name it appears to accept it and show up the request for the password, but then sending the password always returns “Login incorrect”.
Is this a security feature of Linux, or are the characters ttyecho sends not compatible with getty?
Linux password entry fields are secured and protected; this utility cannot enter them. Essentially, Linux binds the password entry field directly to the keyboard. So it only registers physical keyboard presses directly from the hardware.
However, there is another way, I think, to do what you want with the getty terminals. I, for example, use it to help manage my server. It runs non-graphical, and is set up close to my desk: close enough I can see the screen, too far to interact with without getting up. So, I use this program to interact with the physical terminal. The trick is that you have to allocate a new console and attach a login session. This can be achieved with the openvt command.
# openvt -ls
will create a new virtual terminal, attach an authenticated bash session, and make that terminal active on the display. You can use ‘chvt ‘ to switch to TTY. ‘deallocvt ‘ removes that TTY.
Not as straightforward as I would like, but it works.
You need to copy it into a secure path
# sudo cp ttyecho /sbin/
# sudo ttyecho -n /dev/pts/16 ls
Thanks a lot! Saved my day!
I had a disconnected SSH connection to a server, and in that SSH connection, the mail tool was running with an open overfilled mbox with more than 900.000 mails. Just opening this big mbox took quite a while. Killing the mail process and re-opening that mbox again would’ve been a pain in the a**.
As I only needed the last ~~ 100 of these mails, I used ttyecho to send “d 1-900000” and “q”, so the mail utility deleted most of the mails and finished nicely, leaving only a few hundred messages for later reading & filtering.
This worked perfectly, it took just a minute, and that includes the time needed to create ttyecho.c, copy&paste and compiling. 😀
thanks for the code!!
Wow! Like others said, this tool is truly a gem. So many people state that writing to the stdin of an unrelated process isn't possible…
Thanks to your neat util, I didn't have to restart a long-running process that had already finished but asked me to answer "1" 17272 times. I'm going to include it in all my installations from now on; it is so useful.
Thanks a lot.
Thanks! Ive been looking for this solution for a while.
Fantastic! I was poking around all day trying to figure out how to do with tee. Saved me a lot of frustration! Thank you kind sir.
Very useful program. Thank you very much
Great tool. I have been searching for a solution to this for a while and always running into dead ends. Nice clean solution to executing commands in other terminals. This is a real gem piece of code.
Thanks!
that tool just rocks dynamite!
I’ve written an initramfs generator () supporting dropbear (amongst others) to rescue LUKS systems remotly and I had a very hard time firing up the new init from a remote dropbear console (/dev/ttyp0).
Thanks to your tool and I can redirect my commands to /dev/console (as if it was local).
very very nice
Thanks for the fantastic tool!
I’m looking for the solution for days.
Learned from your code 🙂
http://www.humbug.in/2010/utility-to-send-commands-or-data-to-other-terminals-ttypts/
Avoid a page reload on the review request UI.
Review Request #3389 — Created Oct. 1, 2012 and discarded
The idea here is to have both discard and draft banners on the window so that by using jQuery we can hide/show the appropriate banner instead of fully re-loading the page.
Note: Testing pertains to banners only and was performed as follows:
- Analyse expected behaviour of a review request's state in demo.reviewboard.
- Reproduce the review request's state locally, making sure it behaves as expected (minus the page reload).

* Brand new, not yet published:
  - Discard-Review-Request button: draft-banner and "close" option in the nav bar hide. Discard-banner shows.
  - Discard-Review-Request link: draft-banner and "close" option in the nav bar hide. Discard-banner shows.
  - Publish button: draft-banner hides and review-request gets published as expected.
* Published and unchanged:
  - None of the banners are displayed, as expected.
* Published with a change causing a draft:
  - Draft-banner is displayed as expected.
  - "ok" button: saves description entered as expected.
  - "cancel" button: description entered is discarded as expected.
  - "publish changes" button: draft banner hides and request is published as expected.
  - "discard changes" button: draft banner hides and request is discarded as expected.
* Discarded:
  - Discard-banner is displayed as expected.
  - "ok" button: saves description entered as expected.
  - "cancel" button: description entered is discarded as expected.
* Reopened after discarded:
  - "reopen for review" button: discard-banner hides, draft-banner is displayed and the request becomes a draft.
* Submitted:
  - Submitted-banner is displayed as expected.
  - "ok" button: saves description entered as expected.
  - "cancel" button: description entered is discarded as expected.
* Reopened after submitted:
  - Submitted request was previously a draft: "reopen" button hides submitted-banner and displays draft-banner.
  - Submitted request was previously public: "reopen" button hides submitted-banner.
JZ
Can you combine these two lines?
Please combine these too.
This change isn't correct--we already were hiding it using CSS. We don't want it to be present in the case where the user isn't the owner/can-edit for the review.
You removed the spaces inside the {%. Please don't
Please put the spaces back before the else.
There's no reason for this change.
I think you may have found a bug in datastore.js - at this line: if (!options.success) is evaluating to true, even if a function is passed. We can fix that by using hasOwnProperty. So change that line to be: if (!options.hasOwnProperty("success")) { // ... } That makes the banner go away for me.
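To illustrate the reviewer's point (my own stand-alone snippet, not code from datastore.js): a plain truthiness test and hasOwnProperty disagree whenever the key is present but holds a falsy value, e.g. an explicitly-passed undefined callback:

```javascript
// options.success exists as a key, but its value is falsy.
const options = { success: undefined };

// Truthiness check: treats the key as missing.
const missingByTruthiness = !options.success;                 // true

// hasOwnProperty: correctly reports that the key was supplied.
const missingByOwnProperty =
    !Object.prototype.hasOwnProperty.call(options, 'success'); // false

console.log(missingByTruthiness, missingByOwnProperty); // true false
```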
This semicolon is breaking reviews.js - you need to remove it.
Change Summary:
- draft-banner and discard-banner now show/hide properly.
- Changed the template to address points raised by David.

Note: There's a server error message that appears and quickly goes away. It also disappears from the console, so I don't get to see what is causing it.
Hi Jesus, I'd like to see a complete, detailed, step-by-step set of tests in Testing Done that describes every permutation of states you're working with. We need to know what was tested, and that testing needs to occur with every draft, since this is something very core that we can't get wrong.
This isn't needed. Keep it as it was. You only really care about hasOwnProperty when you want to be sure that this specific base-level instance has it and not a prototype backing it.
Space before "{"
Should use gDraftBanner.
It'd be nice to have a variable at the top like gDraftBanner for this.
This seems wrong. Note that there's two conditions for showing this form: 1) It has a pending draft. 2) The user viewing it has permissions to edit this review request. #1 is dynamic. That state will change when there's an edit or when a review request is reopened from a discarded state. #2 is not dynamic. This will never change while the page is open. So that original check should remain, and if the user doesn't have permission to edit the review request, this form should never be in the HTML.
The first conditional is checking against 'P', so I don't see how the newly added check against 'D' can ever be false.
Change Summary:
Modifications:
- Addressed Christian's comments.
- "close" option in the nav bar now hides when Discard Review Request is clicked.

Current issues:
- Discard-banner: the draft shows two pencil icons instead of one. Also, the "reopen for review" button is broken; the console does not capture anything when the button is clicked.
I'd also like to see detailed testing showing things are working for when the review request is in the following states:
* Brand new, not yet published
* Published and unchanged
* Published with a change causing a draft
* Discarded
* Reopened after discarded
* Submitted
* Reopened after submitted
Change Summary:
- Hide editable fields when the user discards a review request.
- Review request status is updated accordingly.

Current issues:
- Testing-done/description fields are still editable when the request is discarded.
- Two pencil icons on discard-banner/submitted-banner.
- Can't figure out how to check if a review request is a draft using RB.ReviewRequest.
nits - space before the equals signs here.
The style is a bit off here - should look like:
gReviewRequest.publish({
    buttons: gDraftBannerButtons,
    success: function() {
        gDraftBanner.hide();
    }
});
Maybe I missed something, but why are we generating this updated timestamp on the client side, instead of reading it from the returned object once the API call is successful?
space before =
Spaces after the ifs, and on either side of the =='s, and before the open braces. Also, I think you can simplify your replacements here like this:
if (newState) {
    $("span.status").text(newState);
} else {
    $("span.status").text("updated");
}
In an effort to prepare for our eventual localization, we probably want to abstract the English text into some constants that can be easily flipped.
So, instead of hiding each editicon, how about shutting down their editable-fieldness? This would mean changing inlineEditor so that it can be disabled.
Same as above.
Same comments as above - only in this case, we'd be enabling the inlineEditor.
What is this element? I can't find any elements with this ID anywhere...
Instead of changing the ID and adding this, let's just keep the old ID.
Not entirely sure this is necessary - see below.
Why is this ID changing?
Please remove this extra line.
Hm, so I'm afraid of how fragile this structure is for l10n. Not sure how important that is at this stage... David or Christian?
This looks really, really wrong. That's three things named the same ID, with tiny variations. That's just plain confusing. If you really need these things to share these styles, you should use a class - not a series of IDs.
Style is a bit off here.
success: function() {
    gDraftBanner.hide();
}
I think I'd feel better if we didn't depend so much on whether the banners were visible, and just checked the review request object to see if it was in a state that allowed for editable fields.
This is really confusing - why are we still creating a client-side timestamp here? Why can't we retrieve this information from the review request instead?
This can be condensed to: $("span.status").text(newState ? newState : "updated");
Whoa - hold it...why are you passing "disable" as a string? I thought disable was a function that we'd call on inlineEditor widgets.
Same as above. Also, this function looks *extremely* similar to the one on line 2503. We might want to DRY* it up. * Don't repeat yourself.
I really don't think this is how we want to be enabling and disabling inlineEditors...
Why was this ID changed?
Change Summary:
- Original data from the review request is pulled when a draft review request is discarded/submitted.
- Avoid code repetition.

TODO:
- More thorough testing.
- A banner at the bottom of the page should appear when a review request is updated (discarded/changed/submitted etc).
Jesus - is this still a WIP patch, or can we start deeply reviewing it?
Swap these. var should go first. Blank line after the var.
loaded should be last. It signifies that we've finished loading.
I believe this can possibly fetch fields that aren't on the review request. I think what you want is .review-request .main
I'm certain this doesn't work. You're never terminating the publish() command. That makes me think that this code path was never tested.
There's no way any of this was ever tested. You're using Django template tags in here, but we never go through Django. That clearly can never work. What are you attempting to do here? Why can't we reuse the banner we'd otherwise have? That should just be in the HTML and shown.
Indentation problems.
Should be alphabetical.
Space after the ","
You have indentation problems here. We only call this once, so I don't think it's worth having a function for it. Just do this logic where it's needed.
Same comments here.
You can chain this:
$('time.timesince')
    .attr('datetime', gReviewRequest.last_updated)
    .timesince();
We're not actually closing it. I'd call this "onReviewRequestClosed" instead, since it's really the handler after we've closed.
Can you keep the queries grouped and the non-queries grouped?
No need for ".editable"
onReviewRequestReopened
What's the "updated" all about?
There can only be one element with a given ID. This query is not reliable. You may need to change button IDs to classes and make sure nothing breaks.
You're missing a ;
Same comment as above.
Space before "("
The ID is the most specific. No need to append the class names. Needs to wrap to < 80 chars. Do:
$('#submitted-description')
    .reviewCloseCommentEditor(...);
I think what you really want is: $('.review-request .editable').reviewRequestFieldEditor();
Keep the blank line.
You're defining this button ID multiple times, which won't work. You need to switch to classes and update call sites.
Levels of indentation look wrong. The last endif should only have one space, and the first here should have two. Make sure the template tags above only indent by 1 space each time and that they're all correct.
One space before "if"
https://reviews.reviewboard.org/r/3389/
|
On 10/10/2016 at 16:04, .. wrote:

>> +nf_conntrack_gc_max_evicts - INTEGER
>> +	The maximum number of entries to be evicted during a run of gc.
>> +	This sysctl is only writeable in the initial net namespace.
>
> Hmmm, do you have any advice on sizing this one?

In fact, no ;-) I really hesitate to expose the four values or just a subset. My goal was also to get feedback. I can remove this one.

> I think a better change might be (instead of adding this knob) to
> resched the gc worker for immediate re-execution in case the entire
> "budget" was used. What do you think?

Even if it's not directly related to my problem, I think it's a good idea.

> diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
> --- a/net/netfilter/nf_conntrack_core.c
> +++ b/net/netfilter/nf_conntrack_core.c
> @@ -983,7 +983,7 @@ static void gc_worker(struct work_struct *work)
>  		return;
>
>  	ratio = scanned ? expired_count * 100 / scanned : 0;
> -	if (ratio >= 90)
> +	if (ratio >= 90 || expired_count == GC_MAX_EVICTS)
>  		next_run = 0;
https://www.mail-archive.com/netdev@vger.kernel.org/msg132083.html
|
04 August 2010 10:08 [Source: ICIS news]
(adds possible restart date of cracker, with recasts throughout)
SINGAPORE (ICIS)--Energy giant Shell's new 800,000 tonne/year mixed-feed cracker in Singapore has suffered an unplanned disruption, according to the company.
“The flare system at Shell's Pulau Bukom manufacturing site was activated in the afternoon of 3 August 2010 due to an unplanned disruption to a process unit,” said a Shell spokesperson, in an e-mailed reply to ICIS.
“The rest of the operations [were] not affected,” said the spokesperson, without elaborating on the current status of the cracker.
Commissioned in March, the cracker - an integral part of Shell Eastern Petrochemicals Complex (SEPC) - could also produce 450,000 tonnes/year of propylene and 230,000 tonnes/year of benzene.
“The cracker tripped yesterday. Maybe it will go up in the next 24-48 hours,” said a market source.
Ethylene prices climbed $20/tonne (€15/tonne) to $870-900/tonne CFR (cost and freight).
But the price movement had more to do with strong crude and naphtha values than with the outage at Shell's cracker.
The new cracker has been running at around 80% of capacity since it was started up because of some unidentified issues, market sources said.
SEPC is Shell’s largest fully integrated refinery and petrochemicals hub.
The olefins and aromatics products from the cracker would be used primarily for Shell's downstream chemical plants in Singapore.
($1 = €0.76)
With additional reporting by Mahua Chakravarty, Aaron Cheong, Peh Soo Hwee and Pearl Bant
http://www.icis.com/Articles/2010/08/04/9381949/shell-singapores-800000-tonneyear-cracker-suffers.html
|
Continue and break statement
What is the difference between the continue and break statement in a Java program?
The continue statement restarts the current loop, whereas the break statement transfers control to the statement just outside the loop.
Here is an example of the break and continue statements.
Example -
public class ContinueBreak
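The example code above is cut off; a complete version of the ContinueBreak class, reconstructed as I'd assume it was meant (the exact loop bounds are my own choice), could be:

```java
public class ContinueBreak {
    // Builds a result string so the effect of continue/break is visible.
    static String demo() {
        StringBuilder sb = new StringBuilder();
        for (int i = 1; i <= 5; i++) {
            if (i % 2 == 0) continue;  // continue: restart the loop, skipping even i
            sb.append("odd:").append(i).append(' ');
        }
        for (int i = 1; i <= 5; i++) {
            if (i == 3) break;         // break: leave the loop entirely
            sb.append("count:").append(i).append(' ');
        }
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(demo());    // odd:1 odd:3 odd:5 count:1 count:2
    }
}
```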
break and continue
Hi, what is the difference between break and continue?
difference between break and continue
What is BREAK?
What is BREAK? What is BREAK?
Hi,
BREAK command clarify reports by suppressing repeated values, skipping lines & allowing for controlled break points.
Thanks
http://roseindia.net/discussion/22612-Java-Break-Statement.html
|
Posted Apr 13, 2009
By Garry Robinson
For Access 2007 thru to Access 2000
In this article, I will show you how to
read information from a PayPal Notification email by using Microsoft Access,
Outlook and VBA. If you are unsure what PayPal is, it is one of the biggest
payment systems on the Internet and it is owned by eBay, the online auction
king. Anyway, the reason that you would want to read a PayPal notification
email is that the alternative is to meticulously copy and paste all the
different parts of the email into your computer system. This is something
that is both tedious and error prone and that should be avoided.
When you receive a payment notification
from PayPal, it is going to look like the email in figure 1. The structure of
this email is HTML so it is a bit tougher to read than ordinary text emails as
it includes HTML tags along with the text. In this article, we will concentrate
on retrieving the information in the email that is easy to find in code. In
Figure 1, you will see the items that I will retrieve programmatically; these
are numbered 1 to 5. You will also see further down the email that there are
details on the Buyer that can include delivery address and other notes related
to the purchase. These are quite difficult to trace in the email and I am not
going to cover that in this article (because I handle that manually).
So in a nutshell we are going to read the
following elements from the email using VBA
1. The transaction ID
2. The amount
3. The currency
4. The Buyer's name
5. The email address
If we get all these correct, we will be far
less likely to send the order details to the wrong person and believe me,
getting these details and the email address wrong can lead to some messy
situations. In our case, as our prices rarely change, we use prices to identify
the product that the buyer has ordered.
Figure 1 Paypal Notification email with VBA items highlighted
To start the process, we need to get the
email into a separate Outlook folder. You can do this by either moving the
email manually and dropping it into the folder as I have done in Figure 2 or by
setting up an Outlook rule to do this for you as I have done in Figure 3. Note
that I like to name this folder with an underscore prefix ( _Orders) so that
the orders appear at the top of the Outlook folders
Figure 2 - Move the email to the _Orders folder manually
Figure 3 - An Outlook Rule to move the Paypal email to the _Orders folder
To communicate with Outlook, we use MAPI
and the Outlook namespace using code that looks like this:
.Folders("Personal Folders").Folders("_Orders")
We then find the text of the email using
code that looks like this:
Now I have discussed the way you manage
Outlook with Access in a previous article on my own website. I suggest that you
either download this article's sample database or head to this page to read
about the details.
To process the text of the email, that is
the vba string EmailContents, there are three functions that we use. The first
two, the INSTR and MID functions will be familiar to most VBA programmers. The
other is GETWORD, which I will explain later. If you look at Figure 1, you will
see the Transaction ID highlighted by the 1 icon. Now what we want to do is
search for the start of the Transaction ID in the EmailContents string. If you
look at Figure 4, you will find that the HTML tag makes it harder to find than
a straight text search.
Figure 4 - The HTML body text as viewed in the VBA Immediate Window
So now we locate the start of the actual
transaction id in the HTML string using the following lines of VBA because TxnID
occurs just before the Transaction ID.
ipos1 = InStr(emailContents, "TxnID")
This will return an integer number of
something like 232. We then do the same thing to locate Hello, which is right
after the end of the transaction ID. We now know the start and the end of the
string and with the use of some constants, we can lift the actual transaction
ID using the MID function. I highly recommend that you learn to use the
Immediate Window shown in Figure 4 so that you can find the exact values of
variables such as iPos1 and IPos2.
ipos1 = InStr(emailContents, "TxnID")
ipos2 = InStr(emailContents, "Hello")
ppTransactionID = Mid(emailContents, ipos1 + 6, ipos2 - 6 - ipos1)
Once you have mastered the art of pulling
text out of the Body text strings, you then need to tackle the payment entries
(number 2 and 3 in figure 1)
which I find using a function called GetWord. This function was released by
Microsoft as freeware back in the days of Access 97. In the following code, I identify the
start of the payment and then split the body string so that all the text before
the payment information is removed. The first two words in this shorter string
are the Amount and the Currency. Let's look at the code that makes this
possible.
ipos1 = InStr(emailContents, "You received a payment of ")
strEc = Mid(emailContents, ipos1 + Len("You received a payment of "), 200)
ppTotalAmount = GetWord(strEc, 1)
strCurrency = GetWord(strEc, 2)
I will bet, if you are still reading this article,
that you will be keen to look at these two functions in the demonstration
database. So open the database and do a global search in the VBA container for
these functions.
Function GetWord(StringReq, integerWordPosition)
and
Function CountWords(StringReq) As Integer
These handy functions allow you to find
words in sentences that are generated by computer software such as that used in
the PayPal email. I use these in other email reading programs in my business.
If you want to use the functions, import the module called StringParsing into
your database.
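For readers who want the same extraction logic outside Access, here is a rough Python sketch of the approach (my own translation of the VBA above — the marker string mirrors the article, but the sample email text is invented):

```python
def get_word(text, word_position):
    """1-based word lookup, analogous to the article's GetWord VBA function."""
    words = text.split()
    return words[word_position - 1] if 0 < word_position <= len(words) else ""

def extract_payment(email_contents):
    """Pull the amount and currency out of a PayPal-style notification body."""
    marker = "You received a payment of "
    start = email_contents.find(marker)
    if start < 0:
        return None, None
    # Mirror the VBA: take a 200-character window after the marker,
    # then read the first two words (amount, currency).
    tail = email_contents[start + len(marker):start + len(marker) + 200]
    return get_word(tail, 1), get_word(tail, 2)

# Invented sample text, shaped like the email body in the article.
sample = "Hello Garry, You received a payment of 99.00 AUD from a buyer."
amount, currency = extract_payment(sample)
print(amount, currency)  # 99.00 AUD
```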
When the first form is open and you press
the Extract button as in Figure 5, the software looks to see if there are any
emails in the Orders folder and then works out the Person and the Cost. It goes
without saying that you need at least one authentic PayPal order email to use
this software but you may be able to use the demo email in the download
database.
Figure 5 - The first of the forms to process your PayPal email
If you press Yes (see Figure 5), the ID,
Name, Person and Email address are copied into a new record in a table in the
database and a button appears on the form as shown in Figure 6. The email is
then moved to another folder using Outlook Automation VBA.
Figure 6 - The Processing form after the email has been read
Click on the Review Order button and you
can see the information that was read from the email in a form that shows all
the fields in the PaypalOrders table (see Figure 7). At this stage, you
will add or edit any other information relevant to the PayPal email and then
you can proceed with other activities such as emailing the order instructions
back to the buyer.
Figure 7 - The form that is used to add additional information from the PayPal email
In this article, I have structured my text
to inspire you to stop processing PayPal orders manually and to go hunting in
the download database for the code to process your orders. I also hope that the
reason that you have read this article is because you are getting lots of
orders and haven't got the time to process them in the old way.
http://www.databasejournal.com/features/msaccess/article.php/3813406/How-To-Read-A-PayPal-Notification-Email.htm
|
A wonderful place to stay. Still reasonably new, and only 10mins from Manhattan on the subway. They are only too pleased to pick you up from the subway on return (and drop you in). Picked us up in the middle of a snowstorm - awesome service, and a great relief at the time.
Really the only reason we didn't give excellent was because they couldn't provide us with plain tea bags, only herbal teas. Some ordinary breakfast teas would have been appreciated. Having said that, the little coffee maker was great - really easy and quick, with nice coffee.
Don't expect a beautiful view, it's really quite industrial, but a nice view of the bridge.
Surprisingly quiet considering how close it is to Manhattan.
I'd definitely return, and also recommend it to others.
https://www.tripadvisor.com/ShowUserReviews-g48080-d2464152-r155556751-Wyndham_Garden_Long_Island_City_Manhattan_View-Long_Island_City_Queens_New_York.html
|
> I much prefer Linus's suggestion of agreeing on the top level API. I
> would like to see disks, and removeable media having a single unified
> namespace and set of ioctls so that the different user-space programs
> don't need to worry about if they are dealing with a SCSI, PPA,
> ATAPI-ish, USB, 1394 or whatever comes next drive. Is that work? yes,
> but it's also about communication and keeping things in the appropriate
> place. Let me hide the horrible things ide-floppy has to do from
> user-space, and if that means I/we have to completely re-write the
> ioctls etc so be it.

I totally agree. Why pick an arbitrary interface and call it the 'standard'? You might as well define your own standard, which suits the needs of supporting all future interfaces (in the near future, anyway).

John.
|
http://lkml.org/lkml/2002/7/14/38
This lecture uses the DLE class to price payout streams that are linear functions of the economy’s state vector, as well as risk-free assets that pay out one unit of the first consumption good with certainty.
We assume basic knowledge of the class of economic environments that fall within the domain of the DLE class.
Many details about the basic environment are contained in the lecture Growth in Dynamic Linear Economies.
We’ll also need the following imports
import numpy as np
import matplotlib.pyplot as plt
from quantecon import LQ
from quantecon import DLE
%matplotlib inline
We use a linear-quadratic version of an economy that Lucas (1978) [Luc78] used to develop an equilibrium theory of asset prices:
Preferences

$$ -\frac{1}{2}\mathbb{E}\sum_{t=0}^\infty \beta^t[(c_t - b_t)^2 + l_t^2]|J_0 $$
$$ s_t = c_t $$
$$ b_t = U_bz_t $$

Technology

$$ c_t = d_{1t} $$
$$ k_t = \delta_k k_{t-1} + i_t $$
$$ g_t = \phi_1 i_t \, , \quad \phi_1 > 0 $$
$$ \left[ {\begin{array}{c} d_{1t} \\ 0 \end{array} } \right] = U_dz_t $$

with initial state

$$ x_0 = \left[ {\begin{array}{ccccc} 5 & 150 & 1 & 0 & 0 \end{array} } \right]' $$
Asset Pricing Equations
[HS13] show that the time $ t $ value of a permanent claim to a stream $ y_s = U_ax_s \, , s \geq t $ is:

$$ a_t = (x_t'\mu_ax_t + \sigma_a)/(\bar e _1M_cx_t) $$

with

$$ \mu_a = \sum_{\tau = 0}^\infty \beta^\tau(A^{o'})^\tau Z_a A^{o\tau} $$
$$ \sigma_a = \frac{\beta}{1-\beta} \text{trace} (Z_a \sum_{\tau = 0}^\infty \beta^\tau (A^{o})^\tau C C^{'} (A^{o'})^\tau) $$

where

$$ Z_a = U_a^{'}M_c $$
The use of $ \bar e _1 $ indicates that the first consumption good is the numeraire.
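The infinite sums defining $ \mu_a $ and $ \sigma_a $ need not be truncated: each is the fixed point of a discrete Lyapunov equation, which `scipy.linalg.solve_discrete_lyapunov` computes directly. A minimal sketch; the matrices `Ao`, `C` and `Za` below are small stand-ins chosen for illustration, not the ones from this economy:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

beta = 1 / 1.05

# Stand-in matrices (illustrative only, not those of the Lucas economy)
Ao = np.array([[0.9, 0.1],
               [0.0, 0.5]])
C = np.array([[1.0],
              [0.3]])
Za = np.eye(2)

# mu_a satisfies mu_a = Za + beta * Ao' mu_a Ao.
# solve_discrete_lyapunov solves X = A X A' + Q, so take A = sqrt(beta) * Ao'.
mu_a = solve_discrete_lyapunov(np.sqrt(beta) * Ao.T, Za)

# The inner sum in sigma_a satisfies S = C C' + beta * Ao S Ao'.
S = solve_discrete_lyapunov(np.sqrt(beta) * Ao, C @ C.T)
sigma_a = beta / (1 - beta) * np.trace(Za @ S)
```

Because the spectral radius of $ \sqrt{\beta} A^o $ is below one here, both fixed points exist and the Lyapunov solutions equal the infinite sums.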
gam = 0
γ = np.array([[gam], [0]])
ϕ_c = np.array([[1], [0]])
ϕ_g = np.array([[0], [1]])
ϕ_1 = 1e-4
ϕ_i = np.array([[0], [-ϕ_1]])
δ_k = np.array([[.95]])
θ_k = np.array([[1]])
β = np.array([[1 / 1.05]])
ud = np.array([[5, 1, 0],
               [0, 0, 0]])
a22 = np.array([[1, 0, 0],
                [0, 0.8, 0],
                [0, 0, 0.5]])
c2 = np.array([[0, 1, 0],
               [0, 0, 1]]).T
l_λ = np.array([[0]])
π_h = np.array([[1]])
δ_h = np.array([[.9]])
θ_h = np.array([[1]]) - δ_h
ub = np.array([[30, 0, 0]])
x0 = np.array([[5, 150, 1, 0, 0]]).T

# Assemble the economy; tech1 and pref1 are reused below
# (tuple orderings as in quantecon's DLE examples)
info1 = (a22, c2, ub, ud)
tech1 = (ϕ_c, ϕ_g, ϕ_i, γ, δ_k, θ_k)
pref1 = (β, l_λ, π_h, δ_h, θ_h)
econ1 = DLE(info1, tech1, pref1)
After specifying a “Pay” matrix, we simulate the economy.
The particular choice of "Pay" used below means that we are pricing a perpetual claim on the endowment process $ d_{1t} $.
econ1.compute_sequence(x0, ts_length=100, Pay=np.array([econ1.Sd[0, :]]))
The graph below plots the price of this claim over time:
### Fig 7.12.1 from p.147 of HS2013
plt.plot(econ1.Pay_Price, label='Price of Tree')
plt.legend()
plt.show()
The next plot displays the realized gross rate of return on this “Lucas tree” as well as on a risk-free one-period bond:
### Left panel of Fig 7.12.2 from p.148 of HS2013
plt.plot(econ1.Pay_Gross, label='Tree')
plt.plot(econ1.R1_Gross, label='Risk-Free')
plt.legend()
plt.show()
np.corrcoef(econ1.Pay_Gross[1:, 0], econ1.R1_Gross[1:, 0])
array([[ 1. , -0.42812698], [-0.42812698, 1. ]])
Above we have also calculated the correlation coefficient between these two returns.
To give an idea of how the term structure of interest rates moves in this economy, the next plot displays the net rates of return on one-period and five-period risk-free bonds:
### Right panel of Fig 7.12.2 from p.148 of HS2013
plt.plot(econ1.R1_Net, label='One-Period')
plt.plot(econ1.R5_Net, label='Five-Period')
plt.legend()
plt.show()
From the above plot, we can see the tendency of the term structure to slope up when rates are low and to slope down when rates are high.
Comparing it to the previous plot of the price of the “Lucas tree”, we can also see that net rates of return are low when the price of the tree is high, and vice versa.
We now plot the realized gross rate of return on a “Lucas tree” as well as on a risk-free one-period bond when the autoregressive parameter for the endowment process is reduced to 0.4:
a22_2 = np.array([[1, 0, 0],
                  [0, 0.4, 0],
                  [0, 0, 0.5]])
info2 = (a22_2, c2, ub, ud)
econ2 = DLE(info2, tech1, pref1)
econ2.compute_sequence(x0, ts_length=100, Pay=np.array([econ2.Sd[0, :]]))
### Left panel of Fig 7.12.3 from p.148 of HS2013
plt.plot(econ2.Pay_Gross, label='Tree')
plt.plot(econ2.R1_Gross, label='Risk-Free')
plt.legend()
plt.show()
np.corrcoef(econ2.Pay_Gross[1:, 0], econ2.R1_Gross[1:, 0])
array([[ 1. , -0.69994539], [-0.69994539, 1. ]])
The correlation between these two gross rates is now more negative.
Next, we again plot the net rates of return on one-period and five-period risk-free bonds:
### Right panel of Fig 7.12.3 from p.148 of HS2013
plt.plot(econ2.R1_Net, label='One-Period')
plt.plot(econ2.R5_Net, label='Five-Period')
plt.legend()
plt.show()
We can see that the tendency of the term structure to slope up when rates are low (and down when rates are high) has been accentuated relative to the first instance of our economy.
https://python-advanced.quantecon.org/lucas_asset_pricing_dles.html
I have a model that contains an integer field
class myModel(models.Model): number = models.IntegerField()
Whenever I display or input data to this model I want to do so in octal. When I populate my edit form I do this:
number = oct(numberObject.number).replace('0o', '')
When I go to the edit form it prepopulates with exactly what I want. But when I submit the form and do error checking to avoid having that number occur twice, I hit a problem: 'number' is in self.changed_data, so the validation reports that this number already exists (basically it's finding the record itself and calling it a duplicate).
I can't think of a way to figure out whether I'm trying to change the number to one that already exists or if I'm just submitting the number without changing it.
My form/validation code:
class NumberForm(ModelForm):
    number = forms.CharField(max_length=10)

    def clean_number(self):
        """Ensures the new Number is unique """
        enteredNumber = self.cleaned_data['number']
        changedFields = self.changed_data
        if Number.objects.filter(number__exact=int(enteredNumber, 8)):
            if 'number' in changedFields:
                raise forms.ValidationError("Error")
        return int(enteredNumber, 8)

    class Meta:
        model = Number
        fields = '__all__'
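As an aside, the octal round-trip the form performs can be sketched in isolation. Note that Python 3's octal prefix is '0o' (with a zero, not a capital O), so that is the string to strip for display; the helper names below are illustrative, not from the question:

```python
def to_octal_display(n):
    # oct(420) -> '0o644'; strip the '0o' prefix for display
    return oct(n).replace('0o', '')

def from_octal_display(s):
    # int(s, 8) parses the octal digits back into the stored integer
    return int(s, 8)

shown = to_octal_display(420)      # displayed in the edit form
stored = from_octal_display(shown)  # value written back to the model
```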
You may be editing an existing object, right? I would simply exclude it from the result set:
def clean_number(self):
    """Ensures the new Number is unique """
    enteredNumber = int(self.cleaned_data['number'], 8)
    queryset = Number.objects.filter(number=enteredNumber)
    if self.instance is not None and self.instance.pk is not None:
        queryset = queryset.exclude(pk=self.instance.pk)
    if queryset.exists():
        raise forms.ValidationError("Error")
    return enteredNumber
Using the .exists() method avoids loading the object from the database, should one exist.
By the way, this form won't ensure you cannot create duplicates. Two threads may run the validation code at the same time, accept the same value, then proceed to saving their respective object with that value. If you want to be sure you don't have duplicates, you must do so at the database level (by passing unique=True to the field on the model).
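The database-level guarantee recommended here can be sketched with the standard library's sqlite3, standing in for the UNIQUE constraint that unique=True makes Django add to the column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mymodel (number INTEGER UNIQUE)")
conn.execute("INSERT INTO mymodel (number) VALUES (420)")

# A second insert of the same value is rejected by the database itself,
# regardless of what any application-level validation concluded.
try:
    conn.execute("INSERT INTO mymodel (number) VALUES (420)")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```

Two racing requests may both pass form validation, but only one insert can succeed; the loser gets an IntegrityError to handle.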
https://databasefaq.com/index.php/answer/7314/django-django-form-output-error-checking-issue
On Wed, Nov 24, 2004 at 11:01:13AM +0900, Horms wrote:
> On Mon, Nov 22, 2004 at 09:19:22AM -0800, Nishanth Aravamudan wrote:
> > On Mon, Nov 22, 2004 at 11:48:05AM +0900, Horms wrote:
> > > On Tue, Nov 16, 2004 at 05:30:59PM -0800, Nishanth Aravamudan wrote:
> > > > On Mon, Nov 01, 2004 at 12:07:49PM -0800, Nishanth Aravamudan wrote:
> > > > > Description: Adds ssleep_interruptible() to allow longer delays to
> > > > > occur in TASK_INTERRUPTIBLE, similarly to ssleep(). To be consistent
> > > > > with msleep_interruptible(), ssleep_interruptible() returns the
> > > > > remaining time left in the delay in terms of seconds. This required
> > > > > dividing the return value of msleep_interruptible() by 1000, thus a
> > > > > cast to (unsigned long) to prevent any floating point issues.
> > > > >
> > > > > Signed-off-by: Nishanth Aravamudan <nacc@xxxxxxxxxx>
> > > > >
> > > > > --- 2.6.10-rc1-vanilla/include/linux/delay.h	2004-10-30 15:34:03.000000000 -0700
> > > > > +++ 2.6.10-rc1/include/linux/delay.h	2004-11-01 12:06:11.000000000 -0800
> > > > > @@ -46,4 +46,9 @@ static inline void ssleep(unsigned int s
> > > > >  	msleep(seconds * 1000);
> > > > >  }
> > > > >
> > > > > +static inline unsigned long ssleep_interruptible(unsigned int seconds)
> > > > > +{
> > > > > +	return (unsigned long)(msleep_interruptible(seconds * 1000) / 1000);
> > > > > +}
> > > > > +
> > > > >  #endif /* defined(_LINUX_DELAY_H) */
> > > >
> > > > After a discussion on IRC, I believe it is pretty clear that this
> > > > function has serious issues. Mainly, that if I request a delay of 1
> > > > second, but msleep_interruptible() returns after 1 millisecond, then
> > > > ssleep_interruptible() will return 0, claiming the entire delay was
> > > > used (due to rounding).
> > > >
> > > > Perhaps we should just be satisfied with milliseconds being the grossest
> > > > (in contrast to fine) measure of time, at least in terms of
> > > > interruptible delays. ssleep() is unaffected by this problem, of course.
> > > >
> > > > Please revert this patch, if applied, as well as any of the other
> > > > patches I sent using ssleep_interruptible() [only a handful].
> > >
> > > Would making sure that the time slept was always rounded up to
> > > the nearest second resolve this problem. I believe that rounding
> > > up is a common approach to resolving this type of problem when
> > > changing clock resolution.
> > >
> > > I am thinking of something like this.
> > >
> > > ===== include/linux/delay.h 1.6 vs edited =====
> > > --- 1.6/include/linux/delay.h	2004-09-03 18:08:32 +09:00
> > > +++ edited/include/linux/delay.h	2004-11-22 11:47:03 +09:00
> > > @@ -46,4 +46,10 @@ static inline void ssleep(unsigned int s
> > >  	msleep(seconds * 1000);
> > >  }
> > >
> > > +static inline unsigned long ssleep_interruptible(unsigned int seconds)
> > > +{
> > > +	return (unsigned long)((msleep_interruptible(seconds * 1000) + 999) / 1000);
> >
> > This is a good idea, but I have two issues:
> >
> > 1) A major reason for having msecs_to_jiffies() and co. is to avoid
> > having to do this type of conversion ourselves. A weak argument,
> > admittedly, but just something to keep in mind.
> >
> > 2) This still has a serious logical flaw: If I request 1 second of
> > sleep, and I don't sleep the entire time, then it is now guaranteed that
> > I will think I did not sleep at all (ie. ssleep_interruptible() will
> > return 1). That's just another version of the original issue.
> >
> > I just don't think it's useful to have this coarse of granularity, at
> > least in terms of interruptible sleep.
>
> If it is unacceptable to either underestimate or overestimate the
> duration of a sleep to the nearest second (the unit of granularity of
> the sleep in this case) then I agree.
This is kind of my position. Overestimating leads to the potential, if a
loop is used by the caller, of never leaving the loop, e.g.
timeout = 1;
while (timeout) {
timeout = ssleep_interruptible(timeout);
}
Underestimating leads to leaving the loop too early, because the caller
thinks a full second has expired and thus that a signal was *not* received
in one *full* second, typically leading to an error condition.
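The two rounding choices being debated are easy to reproduce outside the kernel. A small sketch in Python, with the millisecond remainder standing in for msleep_interruptible()'s return value:

```python
def secs_floor(ms_left):
    # The original patch: floor division. 1 ms remaining reports 0 s,
    # so a caller may conclude the full delay elapsed (no signal).
    return ms_left // 1000

def secs_ceil(ms_left):
    # Horms's proposal: round up via (x + 999) // 1000. 1 ms remaining
    # reports 1 s, so "while timeout: timeout = sleep(timeout)" can spin.
    return (ms_left + 999) // 1000
```

Neither mapping is faithful at second granularity, which is why the thread concludes that callers who care should just use msleep_interruptible() directly.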
> That is unless you want to request
> a sleep in seconds but have the duration returned in milliseconds. But
> if that is the case then it is probably more sensible to just use
> msleep_interruptible() and be done with it.
Exactly, I think that an API which has a parameter in seconds and a
return value in milliseconds is pretty bad. Makes things very confusing
and really msleep_interruptible() is the same, just a difference of
parameter units, then.
-Nish
http://oss.sgi.com/projects/netdev/archive/2004-11/msg01948.html
User talk:Spang/archive19
From Uncyclopedia, the content-free encyclopedia
arc2.0
And I see you've archived our lovely arc! /cry!
I go away for a couple weeks and you've erased all memory of me!
Well except for changing your name on your userpage, of which I heartily approve! --Ceridwyn 00:54, 16 February 2007 (UTC)
- What are you talking about, the arc was never gone! It was merely hiding until you came back! It is by far the best-shaped conversation I've ever seen. Although I do like the coloured section header on this one!
- So were you off purveying fine cherries again? Or perhaps great apples this time? Lovely bananas? Can't have been nearly as fun as conversation-arc building, that's for sure! → Spang ☃☃☃ 01:52, 16 Feb 2007
- It warms the cockles of my heart to see the arc's return :) Where have I been? oh, here and there. If you must know, the expansion for WoW came out, so yeah...but I'm lvl 70 now, and all is right with the world again. I gave up the purveying of assorted high-quality fruit to study Lua programming for WoW for 2 wks, and am now resuming normal student life :)
- I am in agreement about the aesthetic value of the conversation, certainly the prettiest I've partaken in! And thank you ever so much for the delightful colouration on my user page! <3 --Ceridwyn 06:01, 16 February 2007 (UTC)
- You're welcome! Wait, are you trying to tell me there are things to do for fun that aren't uncyclopedia? I didn't know such a thing existed! Congratulations on becoming level 70, you must be proud!
- You're welcome for the colouration too, I'm sure you would have done the same for me, should the situation ever require it! → Spang ☃☃☃ 13:25, 17 Feb 2007
- I'm not certain fun comes into all the time, more compulsion these days :)
- I logged into IRC yesterday and was sad to hear I'd missed you by 20 mins :(
- You know of course I'd repay the favour, or have I already? I've lost track of our tit-for-tat edits of each others userpages, can't tell who owes who anymore!
- In fact if I was to be completely honest, I've forgotten which pink is the correct pink these days too, its been so long. IIRC i liked FF3399, but you were in favour of FF69somethingsomething? Perhaps you can shed some light on this :)
- Also one day I'll get around to making an all pink signature for myself. the blue name and black time/date mar the pristine loveliness of my paragraphs.
- Compulsion, fun, it's all the same, right?
- Aw, what are the chances? Though I'm usually on IRC at quite odd hours, and usually not at any regular times. So it's often quite difficult to catch me on it.
- I can't remember who owes who what as far as the userpage edits go. Basically now I would just go by editing whenever I feel like it! Much easier than keeping track of things. I think the correct pink os the one that looks the best. There may be a reference to Ceridwyn Pink™ above, but I'm going to use my powers of advanced laziness to not look that up. I remember mine was more of a hot pink. Or something. I'm not really sure, to be honest, I though you were the one to keep the pinks in check!
- Ah, a pink signature would be rather good! It would make your paragraphs that much more pink, which is always a good thing! If you find your trivial powers aren't quite enough and need help making the signature, just ask and I'll assist (if I'm not feeling too lazy, that is)!
- I was thinking of making my paragraphs some kind of blue to offset the pink, and make arc 2.0 even better looking than arc 1! That'd be awesome. → Spang ☃☃☃ 04:24, 18 Feb 2007
- Yes I think I was the self-appointed guardian of Pink, but I've caught your laziness. I'm sure you are quite right and that all we'd have to do is scroll up and the answers would be there, but meh :)
- I'm sure I'll figure something out in the way of a signature, although my earlier dabblings didn't acheive my secret and nefarious goals, so perhaps I just suck at this CSS stuffs.
- Blue would be nice, however then we'd really be falling into awful clichededness. Perhaps you could find a nice gender-ambiguous muave, or even puce? Perhaps a shade of olive? Olive is quite nice, although it might not go well with my pink. Better stick with puce. I think I shall change the name of this heading to arc2.0 Yes without a space. I like it that way --Ceridwyn 07:07, 18 February 2007 (UTC)
- Oh dear, I didn't realise my laziness was catching! I'd do something about that, but meh... :)
- If it's not above, then it'll be in the archive somewhere... but that clicking as well as scrolling and looking, so I'd probably only do that in the case of some kind of Pink-emergency.
- If I see any signature-dabbling, I'll be sure to lend a hand if I can. I sucked at the css stuffs before I came here, and now I'm like the best, ever. So I feel honour-bound to pass on The Knowledge whenever I can!
- Ah, I didn't consider the clichédness of the pink/blue combo, I was just thinking of using the blue in my sig. Can't be having with terrible clichés, though, so a different colour it shall be! I'm too tired for colour choosing right now, but later I shall peruse a big colour list to find the perfect one. I'll keep your recommendations in mind though!
- Hmmm, arc2.0 without a space? Is omitting spaces a hitherto undiscovered trivial power? Maybe even trifling? :o → Spang ☃☃☃ 04:36, 19 Feb 2007
- The blue in your sig IS quite lovely, and might perhaps dispell those pesky rumours of your being a girl :P
- No ommitting spaces, as far as I can tell, is not part of my trivial powers, more part of my fleeting whimsy.
- Bleuch Ethics class is getting in the way of my arcing =/ --Ceridwyn 21:45, 19 February 2007 (UTC)
- Damn, I thought that userbox I made would have convinced anyone beyond their gender-doubting. Damn you, false rumours!
- Hmmm, how about this colour? I don't know how well it goes with the pink, but it has the manliest name of any colour I could find: Burly Wood. I don't know if I'll stick with it, but it'll do for now.
- Perhaps ethics class can be useful to the arc: I wonder how ethical it would be to retroactively colour all my parts of this arc? I don't know if your class covers arc ethics specifically, but if it doesn't, it probably should. → Spang ☃☃☃ 09:05, 20 Feb 2007
- Burly Wood eh? Certainly sounds very manly, however sadly it is quite a hard colour to read =/ Perhaps a darker shade of it? Although bear in mind it is critical to avoid it looking like poo. On that note did you intend to make the pun about "sticking" with it? If not its mightily funny anyway.
- My opinion on retroactive arc colouring is that it would be unethical, although under Rawlsian justice theory, those sections of the arc have been under-priveleged or deprived, and should therefore have more given to them to make up for this injustice. Its a heavily debated area of retroactive arc colouration ethics.
- Much to my dismay (and yours also I'm certain) my Ethics class is shamefully neglectful of recent retroactive arc colouring theory, almost conspiratorially so I might add. My trival powers fail me as to the correct spelling of the act of being privy to a conspiracy. :( --Ceridwyn 00:21, 21 February 2007 (UTC)
- Ok, today's manly-sounding colour is "golden rod". *snicker* It's quite similar to the last one, and maybe easier to read. Though I don't know how well it achieves the "not look like poo" goal. I think it's just enough gold in it to save it. Don't know if I'll keep this one either. I just get distracted by the ones with slightly funny names.
- I think it would probably be best just to leave the old ones as they are. It's like history. If I was to start retroactively colouring my arcs, it might start me on a slippery slope, and before I know it everything I ever did would be variously coloured with colours with vaguely amusing names! And that just wouldn't do.
- I think it is quite unethical that ethics classes aren't teaching people what they need to know about the modern-day issue of arc-colouring. What kind of world would they have us live in, one without arcs and colours? That's be a terrible place to live indeed. It's obviously a conspiracy - perhaps there ought to be a secret undercover investigation into this... → Spang ☃☃☃ 01:09, 22 Feb 2007
- Lol@golden rod, just where is this master list of colours you peruse? I'm not sure which is easier to read tbh, but its probably closer to baby lamb poo yellow than traditional brown, so you are probably safe at the mo.
- I definately agree on the anti-retroactive arc colouring stance you are taking, more power to you!
- Uggh I started writing this ages ago lol, got distracted. Thats enough from me for one arc segment! --Ceridwyn 10:12, 23 February 2007 (UTC)
- Aha, so you want to know the location of my super-secret ultimate list of all HTML colour names, eh? The place where I derive all my (admittedly inferior) arc colouring prowess? Ok. Awesome colour list here!. Today's pick is light sea green. I think it compliments the pink nicely, though it may be getting into the clichédness we were avoiding just a little. Ah well. I had considered "snow" colour at one point, but decided against it. :)
- At least it's definitely not poo coloured in any way! Which is always important.
- Hmmm, not very many section to go till we're back at the margin... I wonder who is going to win? I have absolutely no idea who it could be! → Spang ☃☃☃ 06:34, 24 Feb 2007
- Light seagreen is quite nice actually, and definately well clear of looking like poo! I think the argument that its actually a green, not blue would negate the cliché aspect easily.
- As for who is going to win, I suspect I know at this point, although I wonder if the decision to turn back was entirely random or is this somehow rigged? /suspicious
- What will the winner receive this time?
- Also, this raises an interesting question that I hadn't thought of last arc, once we reach the margin and arc back out again, are we locked into turning at the same point as the last arc, for symmetry's sake, or is each arc another race back to the margin? --Ceridwyn 06:49, 25 February 2007 (UTC)
- Well seeing as light sea green is ok, I'll stick with it for now. Also, I'm too lazy to find another good one right now :)
- Well, not so much rigged, as come to the conclusion that as long as each person indents or undents by one each time, the winner will always be the same, no matter where it turns back. That's the science of conversation arcs for you. I suppose one could use html/css to do a half indent, but it just wouldn't be quite the same.
- Hmmm, I'm not sure what the prize should be... Perhaps whoever is going to lose should think something up soon!
- Hmm, I'd say that one should be free to indent or undent at will. Others may feel that a more evenly structured approach would be better, but I say they're living in the past. The past! This is the future, there's no place in these arcs for rules any more. You could even start indenting from the right, or go crazy and indent both sides at once! The possibilities are endless!* → Spang ☃☃☃ 07:10, 26 Feb 2007
- *not actually endless
Woohoo! I win! Lets play again! The fun is trying to win when you play ArcConversation™ by Milton Bradley
I think I've figured out the formula for winning, the person who starts the undenting will lose. Always. Although I think this needs stringent testing. Also in the interests of aesthetics, I think if arc2.0 is ever left to die, we cannot let this happen until we reach the lefthand margin again. For example here would be an ok place, but another complete arc first would be preferable.
Not that I think we *should* let it die, its clearly superior to arc1.0 and I accept full responsibility for the expiry of that one, although I have seen the future, and its arc3.0, better, brighter and more colourful! Why we didn't start colouring it earlier I shall never fathom, and although we decided retroactive arc recolouring was inherently evil and wrong, it does somewhat offend my aesthetic sensibilities.
Anyway, I digress, the point here is 'I win', and I want a purty award, akin to the Squid Award, but much more topical to THIS arc (lets look to the present and future not the past after all!) with which to furnish the dusty recesses of my userpage, perhaps something in a tasteful light seagreen? --Ceridwyn 22:26, 28 February 2007 (UTC)
Congratulations! Well done on being first back to the margin, you deserved it. I'm considering another rainbow award, but that might be a bit excessive.
Actually, I think that whoever starts the arc always finishes it. If you think of the person who started it as always having an even number of indents, and the other person having an odd number, you can see that it's a foregone conclusion right from the start. Unless each person uses a different number of indents. Like that game with the conting numbers, that I can't really remember right now.
It's a shame that colouring the old sections would be wrong, but we can look back on them and be thankful we live in more colourful times. Of course it shouldn't be left to die, but I am very intrigued by this arc3.0 you speak of! It sounds brilliant. There was me thinking that this arc was pretty bright and colourful, I can only wonder at the wonders arc3.0 will hold!
For today's arc colour I was going to go a bit darker with normal sea green, but thought it wasn't bright enough. So I went with one slightly brighter than light sea green, medium turquoise. I still like light sea green the best out of my colours here, but I feel I should mix it up a bit. Though most of the colours I find either don't go well with pink, are too blue, or are too poo-coloured. It makes choosing colours quite difficult on occasion.
Anyway, here's to arc2.0, and the future of all arcs! → Spang ☃☃☃ 07:02, 1 Mar 2007
- Thank you very much for the award, however it is a bitter sweet triumph because, and I'm almost afraid to utter the words aloud(or typed?) but you forgot to indent! :O
- The shock! The horror! I'm nearly speechless! arc2.0 is ruined!!! /sob--Ceridwyn 07:54, 1 March 2007 (UTC)
- Rofl, I'm not sure I can handle such blatant bending of unwritten arc rules! I'm from the conformist school personally :P And no colour either?! You really are a renegade of arcness aren't you! As for the colour choices, I can see how you've been put in a difficult position, but as they say, um...necessity is the mother of creative arc colouring? >.> Excuse me I think I life the over on.../scuttles away --Ceridwyn 08:07, 1 March 2007 (UTC)
- I'm believer in free arcs, man. Rules are so... not rad. To be honest, I was in such a rush to correct the arc to its former glory that I completely forgot to colour it! Though I also believe that sections of colour should be treated as equally as sections of non-colour. Doesn't mean I'll stop using colours intentionally though! → Spang ☃☃☃ 08:17, 1 Mar 2007
- Yeah, I was also so tired I didn't see that till the second time I read it. I sure it could work as an actual sentence somewhere...
- I'm not sure there's much interesting going on now... people making articles, me deleting them, same old same old! Maybe it's time to start making your own loop. I know your forté is arcs, but that might be pretty interesting. → Spang ☃☃☃ 23:43, 2 Mar 2007
- I do apologise friend, its been a busy wee while and your last comment threw me for such a loop (oh yes I'm terribly funny!) that I honestly didn't know how to reply!
- Loops!? Not arcs?! The possibilities may in fact be endless, if in fact one could figure out where to start! I think thats always my problem, I see the possibilities of the great ball of wool of life but can never unravel it enough to find the beginning with which to knit the great jersey of experience!
- Excuse the rambling, I've just spent the last hour secreted away in my hallway closet looking through boxes of old sketch books and I feel a bit peculiar!I think I left my brain somewhere near number 5 Memory Lane. You know, the lovely little bungalow on the corner?
- Ooooooh I see where the whole loop thing came from now! Very tricksy of you Glittersprinkles!
- Wow, my web design tutor would be having absolute conniptions at the deprecated tags I use here!
- Anyway Mr. Pinchy dislikes loops and I am bound to his will just now! Good day! --Ceridwyn 00:59, 9 March 2007 (UTC)
- Ha ha! I see your joke! Very droll, very droll. :)
- Loops... now those are like the future of arcs, maybe one day, we can craft a looped conversation. It would be awesome! But in the mean time, arcs are quite awesome enough, especially this one, which is veritable the very best of all the arcs.
- Are you sure looking through these sketch books is entirely safe? You never know, they could be cursed (with two syllables). Ahh, number 5 Memory Lane. I have such fond memories of there. Well, I would, if I hadn't taken a detour through amnesia avenue.
- I think I'd be doing your tutor proud - no deprecated HTML tags here! CSS all the way! Even though it does take longer to type it out this way...
- Gasp! Dislikes loops? He must be loopy! → Spang ☃☃☃ 02:47, 9 Mar 2007
- I'd use CSS but then it wouldnt be true to the essence of #FF3399 =/
- This arc is definately the pinnacle of all arcdom as we know it, I'm not sure about loops, I think its beyond my trivial powers.
- Speaking of which I had a wee chuckle to myself in the video shop the other day because one movie (Clerks 2 IIRC) had the line "With no power, comes no responsibility" on it.
- Lol at amnesia avenue.
- Mr. Pinchy can be quite crabby about things like loops. (teehee!)
- So has your flatmate surfaced yet or esconced safely in his room still. What is a sconce anyway? --Ceridwyn 05:34, 10 March 2007 (UTC)
- Hmmm, I think a colour is the same regardless of where it comes from. But maybe that's just the filthy hippy in me talking again.
- I don't think we have the advanced technology required for loops yet. But one day in the future... it shall be a reality.
- Heh, I think someone's been stealing our sayings! (Never mind that it probably came out before our trivial version)
- I've not seen my flatmate yet, but that's mostly because I'm not staying at my flat this week. But he is still replying to his phone, so not all is lost!
- According to google, a sconce is a wall-mounted light fixture, very fashionable around the late 17th century. So I'm not sure how exactly that applies in this situation. Perhaps if he were staying in his room in the manner of a wall-mounted lamp. Which might be fun to see. → Spang ☃☃☃ 12:23, 10 Mar 2007
- Yes but the essence of Ceridwyn Pink TM is that its a hex colour. To go around using some weird RGB colour would just be wrong wrong wrong! They may look identical to the untrained eye, but I would know, and quite frankly the knowledge would keep me awake at night. In fact, I'm not sure I could live with myself!
- Loop technology is certainly a field we should be investigating. I shall watch its developments with great interest! Hopefully there will be some really gripping conferences and workshops on it somewhere exotic that I can be paid to attend by uncyc. In the interests of developing this new technology of course!
- We should definately sue Jay & Silent Bob for stealing our completely original and creative saying!!! How dare they!!!
- Its when they stop answering the phone that you need to worry. Unless its a highly important raid, in which case its FINE if ppl ignore the phone in order to not interrupt their WoWing. Yeah... >.>
- I had a feeling you'd bring up wall-fixtures. I suppose that makes sense. --Ceridwyn
- Ah yes, but the wonders of CSS allow hex colours to be used too! Magic! Like the lovely random hex colour picked randomly from template:random colour. But I won't stop you using Ceridwyn Pink™ in any way and in any tags you please!
- I think if some kind of loop technology could be devised, here should be one of the very first places for it to be implemented. I too shall be on the lookout for developments. Though I think that attending conferences in exotic locations would interfere with my laziness, unfortunately. Though travelling the world in search of the elusive loop could be pretty fun too!
- You had a feeling? So does that mean that you're psychic or that I'm predictable? Or both? I sense that you don't find the subject of wall-fixtures particularly thrilling.
- This just in: a sconce could also be a kind of defensive fort, hence, ensconced. It can also mean sense or wit. It's really quite a versatile word. Shame I can't use any of my sconce to make up something clever about it.
- Yeah, sometimes real life just gets in the way of important gaming.
- Yeah, I'll make sure he doesn't lose himself in the game too much, unless he working on leveling up the character I told him to make, in which case I'll lock him in his room till he's suitably awesome. → Spang ☃☃☃ 03:45, 14 Mar 2007
- See, I like the fact that my deprecated tagginess is at least consistent. Whereas you sir have so far alternated btwn font and div tags. Your hippy freeloving tag swinging ways bah! But at least you don't oppress my oldschool ways :) Not sure I approve of this new colour, or are you trying to follow your latest vandal's advice?
- Certainly if loop technology breakthroughs are to be had, they will happen right here at Arc Technology Labs.
- I suspect it might be a combination of both predictability on your part (not a bad thing) and a touch of psychicness on my part, possibly.
- I wouldn't say I don't find them thrilling, just that I was aware that that was really one meaning of the term when I asked what it was anyway, so your initial findings weren't particularly surprising. However these recent unearthings are quite interesting, as much as sconces can be. Really if I were honest though I'd have to admit I find scones more interesting. At least you can eat them.
- What character did you tell him to make? It better be a warlock! XD You may have to make use of some kind of feeding tube to provide his RL body with nutrition as the road to Suitable Awesomeness is a long one, fraught with danger! --Ceridwyn 04:53, 14 March 2007 (UTC)
- Well maybe it's better to say inconsistent - it keeps your opponent on their toes. Or something. Yeah, I didn't even preview that colour, just copied the first thing that came up. I probably would have chosen a different one if I'd previewed, but then it wouldn't be so random. Back to the tried and tested for now though.
- I don't often take advice from my vandals, but I'm still hopeful that one day one will actually vandalise in a nice or useful way. Though they'd be hard pressed to better my userpage in its current form! Speaking of which, I feel it's time a added or changed something, but I don't know what yet.
- Hmmm, well I'm working on the predictability with my random colours and inconsistency, but that psychicness is quite a talent!
- Ah yes, scones. Those are pretty awesome. And so much better than a sconce could ever be, I'm sure. Aw, now I have a craving for scones, damn you!
- I'm not sure, I only specified the name, and that it had to be awesome. Obviously it won't be as awesome as if I had created it myself, but I have faith that it'll be awesome enough. I will suggest warlock, though I may hold out on deciding class until I'm 100% that there won't be a god class in the future. You never know! → Spang ☃☃☃ 05:36, 14 Mar 2007
- Well, I've graciously decided to let you win this one, but I'm not certain I'll be able to top your Triumphant Arc award! That reminds me I still have to get that put on my user page :D
- It might be nice to see something new on your userpage, though anything radical and you'd have to preserve the original somewhere.
- I'm interested to know what name you suggested for him! And as for God class, I'm pretty certain warlock IS it. Although having said that, I'm a bit bored with mine atm, been having a lot of fun on my priest alt. Perhaps its the constant awesomeness, it can get tiring you know.
- Sorry this is short and much delayed, you know how I am, all hiss & roar for a while then I just disappear :P --Ceridwyn 21:26, 25 March 2007 (UTC)
- By my calculations, I may still lose :S Though if you think I am going to win, you have plenty of time to think up an award!
- I'll take that hint and add all your awards to your page somewhere ;)
- Oh no, nothing radical, just minor tweaks here and there to keep it up to date! I don't think I could start again with something different now.
- The name I suggested was "Mentaldor". It's a bit of an in-joke. I'm not sure the Warlock is enough for me. I mean, spells are all well and good, but I don't think I'd settle for anything less than absolute omnipotence from my character! Though I find constant awesomeness a real problem in real life too, it does get a bit repetitive. → Spang ☃☃☃ 16:32, 26 Mar 2007
- Well my formula states that you should win, I guess this is the test! Still no award ideas yet =/ Perhaps something with Dylan Moran standing beside an arc wearing a seagreen shirt holding a sign saying "#FF3399" but that might be tricky to arrange. Was he supposed to be also near a squid fuelled fire? I forget.
- Clearly one of those "you had to be there" in-jokes ;P So do you know what class he's playing yet? I'm personally taking a wee break atm, too many assignments and too addicted. Need some balance. --Ceridwyn 22:13, 29 March 2007 (UTC)
- Indeed, time will tell!
- That would surely be quite an award to behold! Now I surely will be disappointed by anything less that that, should I win!
- He's playing a dark elf druid or priest or something. He can turn into a bear and a cat though, and is level 22. That's about as much as I know. → Spang ☃☃☃ 02:04, 30 Mar 2007
- My absence is really just a form of procrastination because I have no idea what to do for an award! Truly >.> Ok not really. I'm just an incredibly slack and terrible person.
- Ah night elf druid. Can't say I've ever played once past lvl 12, I hope he enjoys it! I just never found them that appealing. Has he convinced you to start playing yet? --Ceridwyn 23:41, 8 April 2007 (UTC)
- It doesn't make you a terrible person! Well, it does, but I'm too nice to tell you. I've also heard that, in the same way as those tribes that believe taking someone's picture steals their soul, editing my talk page also steals your soul. So maybe it's a good thing.
- Ah, that must be it. He hasn't convinced me yet, as my laptop at my flat is too old to run it in the first place, and playing it seems to take up too much time that I don't have! That valuable time could be spent sleeping, or sitting around doing nothing, and those things are very important to maintain a healthily lazy lifestyle.
- Oh, and I still think you're going to win again :P Better start thinking up another award... → Spang ☃☃☃ 04:59, 10 Apr 2007
- Well I can't be having my soul stolen now...wait how many edits does it take for it to be stolen completely? I think I'm doomed. Surely no-one else has made quite as many edits as I have. Oh noes =/
- Probably just as well that your laptop can't run it. Best to stay clear of temptation. WoW is really horribly addictive and so far apparently humanly impossible to quit. I don't know a single person who has quit for good yet in the 2 yrs I've been playing. They always come back. Myself included, even when you don't really want to. So I suggest not even trying it. Its worse than crack cocaine. Not that I've tried crack but I can imagine...
- As for winning, well historical evidence supports my theory. If you aren't too lazy, scroll up and see what I mean. The person who wins is never the person who decides to undent first. In arc1.0 I turned first and you won, in the first loop of arc2.0 you turned and I won... Something to think on. Argg! I'm hastening the process by editing further! I only have 7 more edits in which to come up with an award! /panic!! --Ceridwyn 22:30, 10 April 2007 (UTC)
- I can't say for sure... but my soul seems fine, and I'm sure I've edited it a lot too. It wouldn't be that bad though, I imagine it would be on a par with being kidnapped and taken to a magical pixie land full of ice cream and happiness.
- Steer clear of temptation I shall then! Unless I find myself with absolutely nothing to do for the next year or so, in which case it might be a possibility. That or crack.
- Well then only time will tell! Remember the sooner you edit, the sooner it'll be over. And if you never edit it'll never be over. Though it might never be over anyway. I'm sure there's something Zen in there somewhere. → Spang ☃☃☃ 05:17, 11 Apr 2007
- This arc phenomenon is almost worthy of its own webpage IMO. I've been perusing blogs and twitter and such and I sincerely feel there is a place in the greater WWW for beautifully coloured arc conversations.
- EEP! Its occurred to me, and I may have counted wrong, but I almost wonder if you might be right about the results of this arc! How can it be so wiley and unpredictable?!
- A magical pixie land full of ice cream and happiness 'does' sound rather nice. Ok I'll continue to edit if thats the only consequences.
- By all accounts (which I totally made up right then) crack is better for you in the long run.
- Those are indeed wise words. I shall meditate on them at length.
- Why does the signature have to be two hyphens then four tildes? That seems slightly ridiculous. Perhaps its nearly time for me to work on building my signature?
- So I don't usually edit my posts, but Rcmurphy showed me this just before on irc and it raised some interesting questions. Should we allow others to join in the arc so long as they are aware of the few basic rules? It would certainly make for more interesting margin-races. Which got me thinking about arc3.0 I have some ideas about it, and I think we should really start discussing it, for its surely the future of arc technology. --Ceridwyn 00:15, 12 April 2007 (UTC)
- Indeed, why stop at a webpage? How about a whole shrine to the wonders of the arc! Although nothing else could come close to its actual awesomeness.
- Haha! You see my logic now! Oh crap, that means I should be thinking up an award now... damn.
- I'll combine the reply to the next two lines into one by saying you should meditate on crack. I don't see what the harm could be.
- There is a box in the preferences menu that lets you choose what you want to show as your signature. There's a few help pages lying around that'll tell you how to make an awesome signature, not sure where they are though. And I could do a thing that changes the two dashes into something else, like one em dash, or nothing if you like. The wonders of technology!
- As I said in irc, rcmurphy is evil and has jealously had it in for this arc since it started! But if people want to join the arc, I can't really stop them as long as they know how to do it properly. And use colours that actually work. Part of me favours a private type arc, but I think there might at some point be a need for a multi-user arc. I have no idea if the multi-user version would be even more awesome or a horrible failure... too many cooks and all that! It is not a decision to be taken lightly! → Spang ☃☃☃ 05:19, 13 Apr 2007
- So close to the margin now, but I'm still uncertain, however I think you're going to need to get working on that award soon :P
- I'm not sure the world of shrines is quite ready for the arc phenomenon yet. But its own website perhaps.
- Hmm well all this signature business sounds suspiciously too much like work to me. So meh, prolly wont bother ;P
- But if you can make a spiffy thing to do stuff easier that sounds good! XD
- Clearly the arc's allure has caught rcmurphy in its spell and he is smitten, sadly this is inevitable because the arc's power is strong.
- I think your idea of one private arc and one multi-user arc is a good one. Clearly ppl would have to register and pass some sort of Arc Licensing Test before they can edit it, and the test program would teach them about the proper ways to indent, colour choices etc. It may fail horribly but such is the beauty of the interwebs! It would certainly be an interesting experiment. Clearly it wouldn't be a full release number of the arc, like this one or arc3.0 but it could be arc3.1 or possibly even arc2.2.0?
- Certainly something we should consider. Maybe it could have a sandbox section so they can't mess it up too bad to start with. And obviously we'd be there to guide and help them discover the wonders of arcdom.
- Surely there would be room for something like this here on uncyc? The tools are all here for us to control it what with you being an admin and all... --Ceridwyn 07:55, 13 April 2007 (UTC)
- I think it's practically certain, barring any catastrophic acts of nature, that you shall be first back to the margin again! Better get thinking.
- The sig thing is easy once you get a hang of it! I shall look around for the thing that'll change your two dashes to something else. Any preference? Long single dash? Nothing?
- It's true, perhaps one day rc may be able to add to the arc, if he can learn to control his impulses to write in huge letters with the wrong indentation.
- A licence test for adding to the arc is a great idea! It's the perfect way to ensure the arc's greatness for future times to come.
- I'm certainly interested in your ideas for arc3, I can hardly even fathom anything past 2, so I can't wait to see what the future of arcs holds. Does it involve loops in any way or is that still way in the future?
- If there was to be a room it'd probably have to be in someone's subpage or talk page or something. The main forum may seem like a good idea at first, but the masses aren't ready for the wonders of the arc yet, it could confuse them and they just wouldn't know what to do, and it'd all be ruined!
- Apologies for taking so long to write this, was distracted by other things that wanted my attention the whole time! → Spang ☃☃☃ 03:05, 14 Apr 2007
- Only one undent to go! This is like arc-Christmas! Yipee!!
- I always like when the number of colons is few so I don't have to laboriously c&p them on each line. It breaks the flow of my ranting, which is never a good thing.
- Does this dash changing thing remove the need for the tildes also? I just find having to type 6 characters a bit well, OTT. I know I can push the sig button, but I prefer to type things by hand, being the budding code monkey that I am. I don't really have any strong preferences at this stage as to what characters I'd prefer to type, just LESS.
- Well your nom for UotM looks like its going well, naturally I put my vote in, once I'd 'heard' about it that is. How come you never mentioned it?
- You must marvel at Rc's enthusiasm though. His efforts put me in mind of an exuberant but wayward youth, trying to roll with the big kids. At least he did attempt to use colour, albeit garish and well, overlooking the fact that it didn't work. I think you are correct, in time he may show great potential. As we discussed yesterday arc3.1 may just offer him the place to flourish, after he graduates from the test and of course, arc3.0.1 aka the beginner's slope.
- Yes I certainly didn't intend to unleash it on the main namespace, as you have mentioned the world isn't ready for arc3.0 yet. I'd offer my subpages but lets face it, you're far more popular (and clearly world famous, barring the ignorance of my friend) so it would get more exposure here. Perhaps if it took off then one day it could migrate to a place of its own.
- Don't feel the need to apologise, I just get awful demanding when I'm going insane from boredom. I need more projects to busy myself with since I quit WoW. I'm like the workoholic who goes into retirement but doesn't know what to do with themselves so they meddle.
- Well here they go again, those cursed dashes and tildes >.< --Ceridwyn 00:20, 15 April 2007 (UTC)
- Wow, feels weird doing only one indent! I liked typing lots of colons, I had got really good at getting just the right number in one go too!
- Yeah, I almost fixed it and it worked for a while, but now it doesn't :( Oh well. In a while I'll get the code to make your timestamp pink and only have to use ~~~ to insert your sig. Hooray!
- You never asked! Well that and I tend not to whore myself for awards I'm up for (mostly because I'm never up for any awards)
- Rc's got a way to go, and despite his denials on irc, it's obvious he wants to join in. I don't see how anyone could not want to join in! In time he and others will have the tools and training necessary to join in, and everyone will be happier for it.
- Well I don't really mind where it's at either, but it'll have to start off small first anyway, so anywhere will do really.
- Nah, I like to apologise often just in case. As I always say, you're better sorry than unsafe.
- Oh crap, only one line to go before I have to think of a new award :s → Spang ☃☃☃ 05:33, 15 Apr 2007
Drumroll please! dnl-dnl-dnl-dnl-dnl-dnl!DN!BOOM!-CCCSSHH!
Oh yeah, thats right, two in a row baby. I'd like to thank my parents and my manager and all the wonderful ppl in #uncyclopedia who distracted me while I should have been doing assignments, and of course, the person without whom this would not have been possible Rcmu- *cough* I mean, Spang!
Well, I'm a bit overcome really, I didn't prepare a speech or anything *unfolds giant piece of paper* this is just such a surprise! It was such an honour just to be nominated...yeah ok I'm done :)
WOOOHOOOOO!!! I WIN I WIN I WIN!!!! ok now I really am done. Promise. Ok just one more,
YIPEEEEE!!!!!!!!!!!!!!!!
/me waits for her award :) -- . 06:08, 15 April 2007 (UTC)
- Hooray for you! You win an award! Ignore the fact that it over-relies on a single pun! Hooray! → Spang ☃☃☃ 04:46, 16 Apr 2007
- Yay! Thats a gorgeous award! Thank you!!!! :D I must get those moved to my user page sometime :) I'm considering breaking the VoNSE in order to revamp it but tbh that sounds like an awful amount of effort required!
- Squeee!! that wee penguie is so cute XD Oh and I thought of a wee update for your user page. Lemme know what you think >:D
- Steelblue huh? That is possibly one of the more readable colours you've chosen thus far in arc2.0 :)
- I'm still super excited about my signature. I must go around posting on the forums more so everyone notices >:D And I figured out how to put the (UTC) back in but I rather think I won't. Its not necessary.
- That reminds me, I'll have to add that award to mah sig too!
- Well I spose I'd better hop to and do some work on these dratted DFDs :*( -- 09:42, 16 April 2007
- You are most welcome indeed! Yes, the penguin is rather cute, although being as manly as I am, I much prefer fast cars and explosions and stuff.
- I do agree steel blue is a rather good colour. I may use it more often, clichés be damned! Yeah I find the small timestamps much better. The normal ones just seem too large and too long in comparison.
- Yes, I keep meaning to add the awards to your page, but I always seem to forget! I'll remember to do it one of these days. But any major reworking of your userpage will probably require you to do it, or give detailed instructions for how you want it to look. I think my original page was the best though, obviously!
- What's a DFD? Is that the report thing? Well good luck with it, whatever it is. → Spang ☃☃☃ 05:53, 17 Apr 2007
- Ohh I see we went with a div this time. fancy ;P I'd try CSS if I weren't so lazy, I've been learning a bit of the basics.
- Of course, cars and explosions are far more manly than cute penguins, or KITTENS for that matter.
- Sorry, you didn't honestly think I'd get through a whole edit without mentioning the tiny terror now ruling my life?!
- I habs moar pics I'll have to show you sometime XD Its good fun putting captions on them to create our own cat pics. I've emailed them with our versions :) Guna be famous XD
- Thank you again for adding all my lovely awards to my page :) Its nice to have it updated!
- Your original page? You mean the one with just the randomly coloured ellipses? Yeah that was pretty good but it is nice having a bit more info on it ;P One day I'd like it to be all funny and look like something else but for now I'm happy with the basics.
- The whole assignments thing has slowed down even worse now that this little kitten refuses to be quiet unless he's curled up on me sleeping, preferably being petted the whole time too. Makes it rather difficult to try and type a lot, but if he's gonna take up permanent residence on me he'd better get used to typing in a hurry. I do a lot of it. -- 01:35, 19 April 2007
- Yes indeed, divs and CSS are the wave of the future! font tags are so yesterday.
- I'll look forward to seeing those pictures! Although good luck getting famous with cat pictures, I could hardly get a quarter way through before skipping to the end! There must be millions of them.
- You're welcome, I'm surprised I actually ever got round to doing it, it only took me almost a month from first "getting the hint".
- What can I say, I like simplicity. And you know what they say, brevity is wit. You could change the direction of your assignment to "how can I make my uncyclopedia user page look the best". A worthy topic if ever there was one!
- Kittens are fun though. You could build him some sort of hammock you can attach to yourself. I was always a cat person, I just don't get dogs. We've been meaning to get a cat/kitten for a while, but never got round to actually getting one. If you ever get bored of it, just get it to swim over here and I'll have it! → Spang ☃☃☃ 05:15, 20 Apr 2007
- Woohoo, we've cracked the 45 kilobyte mark. And thus, our arc becomes doubly-goaled!
- I agree regarding CSS however that doesn't change the fact that I am, as I'm sure you can sympathise, horribly lazy. Til I experiment with hex colours in a non-deprecated tag I'm just sticking with reliable ole 'font'.
- No word yet on the cat pics front, they have yet to respond to my email or update their site with what will obviously be the foremost cat pictures ever seen. A little kitten waits.
- There are a few pics on my blog if you still have the link, and oodles more (generally depicting some cute act by the kitten, but not all necessarily that good in the photographic department) on my photobucket which I can link to ya.
- A hammock idea had occurred to me also, however she's getting past the "I want to spend every moment attached to you in some way" and is now progressing to "I want to scratch and bite you whenever I'm not sleeping, eating or chewing cables" so perhaps its best we leave that idea in the probably-not basket.
- As for swimming to Scotland, she *is* rather comfortable in the bath, so if I do ever get tired of her (probably related to the whole scratching/biting passtime I'd imagine) I'll be sure to point her vaguely northwards. And sort of 'around' the whole Europe thing.
- Anyhoo the assignments are calling! Adieu -- 04:18, 23 April 2007
- Yeah, it won't be long now before people start complaining that I should archive my talk page! If it were up to me completely, I would never archive any of it.
- I did see the cat pictures from your blog thing, it's very cute :) Makes me want my own kitten. You may link your photobucket if you like, pictures of cute kittens never get old!
- Cool, let me know if you do start her swimming, I'll be sure to look out for her.
- Oh and I apologise for almost stealing your colour here, I was experimenting with colours and thought this one was too "fab" to pass up! → Spang ☃☃☃ 12:27, 23 Apr 2007
- Wow that is indeed a "fab" colour!
- I can understand their cries for archiving when I try to scroll all the way to the bottom of this arc, but I can hardly complain since arcing is our wont. On the other hand I suppose since its not their wont then perhaps they have a right to complain.
- Anyway, we could probably arc elsewhere, but lets leave that for arc3.0. Its nice to have the arc here so everyone can see it ^_^
- I'll have to clean up my photobucket first before I link it, but I will soon I promise XD
- And I'll be sure to let you know if/when I send the kitten your way!
- In the meantime perhaps though you might want to look into obtaining a kitten of your own. I dont see me wanting to part with her anytime soon :P Then we can have "cute-offs" and pit them against each other in a battle of cuteness!
- Check me out being all CSSy and stuffs XD 10:25, 23 April 2007
- Wow, check out the divs on that! Welcome to the future!
- Well in most cases people have a right to complain about stuff they don't like, unless you're trying to complain about me, in which case they are automatically wrong.
- Here's just fine for now. Not just because of the "you have new messages" box. Well, ok, it is just that. But my talk page wouldn't be the same without it.
- One day, I shall obtain a kitten of my own. And it will be the best kitten there ever was, and ever will be. No offence to your Jezebel, but my kitten's awesomeness would be off the scale. → Spang ☃☃☃ 07:00, 26 Apr 2007
- Indeed, what a knockout pair of divs. Phwoar! I almost never see the you have new messages box, it takes me by surprise when I do. Its true though, what would User_talk:Spang be without arc2.0? Well about 49 kilobytes smaller. Buh-dum-boom-chh!
- So is my sig really that bad? :( I thought the all pink really gave stealing Ghelae's sig a whole new dimension ^_^
- I'd like to see this imaginary kitten of yours try and beat mine, she's getting pretty darned cute these days. XD
- Anyway, I'm off to eat cake! (its mah birfday :D ) Byee! 02:50, 28 April 2007
- HAPPY BIRTHDAY!!! Sorry I didn't reply sooner, I was going to, but slept instead. Did you get nice presents?
- Indeed, them there's some divs to brag about! Ok I'll stop now.
- Well I'll be sure to leave you messages on your talk page every so often then. But that may lead to an unmanageable thread of conversation. Not that I can't multitask, it just involves the exact amount of effort that my sophisticated laziness seems to want to avoid.
- Well obviously the pinkness is good as always, but I've always found that style too... loud for my liking. You can see it from webpages three miles away! But I can't stop you if that's what you want!
- Well I think my imaginary kitten has the advantage as it isn't bound by the rules of reality!
- Well enjoy... I mean, I hope you enjoyed your cake! → Spang ☃☃☃ 06:21, 30 Apr 2007
- THANK YOU!!! :) Fair enough, sleep is very important!
- You don't have to leave me messages, I generally check my watchlist more than my talk page so its fine :P
- It is loud yes, but I felt it was time to be a bit more visible and active around uncyc, so I think its fitting. Its less jarring on the eyes than some sigs though. Its just a phase really :) I'm sure it will pass!
- Hmmm good point. Not being bound by the rules of reality MAY give your imaginary kitten some advantage, but does he/she/it have badges created in its likeness? I think not.
- Indeed, sleep is of the utmost importance.
- Yeah, watchlists are cool, but you don't get a fancy box telling you when it's changed! I suppose it might be more exciting if it happens less often though.
- Well I suppose it isn't the worst sig, maybe it's just because I prefer subtlety over loud brightness (except when it comes to arcs) :P Though speaking of sigs, I see you forgot to put yours in the last message... tut tut! Though I suppose any message in that pink doesn't need to be signed really :)
- Aw damn, actual badges are way cooler than imaginary badges :( Do you wear those around the place? Heheheh. That's cool.
- This is also a pretty cool colour. Heheheheh :) → Spang ☃☃☃ 03:46, 02 May 2007
- Ahem whoops, I wrote that one in a rush IIRC, I was on my way to a job interview. To which I was subsequently late, because of that post. However I got the job nonetheless, which goes to show, you can post in your arc and eat... erm yeah.
- Yeah I do wear them around, although I forgot to put them on today because they are still in my scanner >.>
- That is a pretty cool colour indeed. Its no Ceridwyn Pink TM but its pretty cool.
- I can't think of much to say right now, I'm sitting in Accounting class feeling my brain literally liquifying.
- 52 kilobytes now ^_^
- Edit because I just can't wait til my next turn to tell you this, Dylan Moran is doing a show here in town in a fortnight! Looks like I'll see him before you! I'll have the sign etc ready for the grand photo taking! 04:02, 02 May 2007
- Congratulations! I'm willing to stake my reputation on the fact that it was because of the arc you got the job. Ignore the fact that I don't actually have a valid reputation to stake.
- I think you may have missed the point of my colour choice... it's more to do with the hex code for it than the actual colour. :) Like this similar one.
- Heh, it's growing all the time, and one day, the arcs will account for most of the content on uncyclopedia's servers! That'll be a good day.
- Heh, cool, you should have a great time, he's very funny! Turns out he actually lives in Edinburgh though, so maybe I could see him before you if I find out where he lives and stalk him. Probably too lazy to do that though :) → Spang ☃☃☃ 07:30, 04 May 2007
- Its most definitely because of the arc that I got the job :P
- Heh I hadn't looked at the hex codes, but thats pretty funny! XD How long did those take you to figure out?
- I look forward to that day for certain. T'will be a marvel to behold.
- I doubt you'll see him before me though, the show is in 2 wks, so most likely he's touring around other places one would imagine. And yeah, you are much too lazy to do that just to beat me at seeing a man from opposite sides of the world. I doubt theres even an award for that. Perhaps we should make one. Especially if he ISN'T touring, because then we have fairly even chances of winning! I hope it will be a good show, I'm sure it will be. So 'cited!!! XD
- Oh, did you see my spleen?! O_O I was most surprised by it! Surely my trivial contributions don't warrant that! 08:35, 04 May 2007
- Hopefully it'll help me on that front now too, seeing as I'm looking for a job now too. I forgot to put my arc skills on my CV though, so I may have ruined my chances already. :(
- Didn't take any time at all, I just had to think up words that had letters 1 - F in, no problem!
- Yes, I saw your spleen, it's icky. Oh, you mean the award? Yeah, saw that too. Congratulations! Well I'm sure you deserve it. With the non-voting awards, basically all you have to do is have someone think you deserve it and give it to you, and hey presto, award! Maybe it's because these days everyone's just used to giving out their own awards, people don't give awards as often as they used to. → Spang ☃☃☃ 12:51, 06 May 2007
- Never fear, arc skills are plain to see for the trained interviewer. If the job is right, they'll be able to spot your vast arc experience a mile away. /crosses fingers for you, best of luck!
- Well it was very clever, it never occurred to me. Of course, I don't have to think up new and exciting colours each time so perhaps I'm not pushing my creative envelope far enough.
- Eeep you saw my real spleen! Thats scary and creepy in many ways :P I'm sure Olipro would read something into that. Curse him. Anyway, yeah I know that its just handed out but I was surprised anyone noticed me. I guess the giant pink boxes are a bit of a giveaway. I find them useful to ensure that I've voted on everything. I just blur my eyes and scroll quickly down the VFD/VFH page and make sure I see the pink boxes, the voices say that I'm allowed to sleep.
- I suppose I'll take your pointing out the irony of whoring something on my own user talk page as an invitation to whore it up a bit here, at least its somewhere else. Though I'm not certain people take the time to read all of arc2.0 these days. Anyway: User:Ceridwyn/Proofreading Service User:Ceridwyn/Proofreading Service User:Ceridwyn/Proofreading Service. So far business is quiet but going well. Two customers that appear to be satisfied which is nice :D
- Well, should really do some work! 11:11, 06 May 2007
- That's good then. I knew the arc would come in handy for something other than making my talk page look cool!
- Yes, it is rather difficult thinking up new colours that are also good! I've resisted resorting to using {{random colour}} so far. I think this was in fact the hardest colour to come up with yet! It took ages! I might be lying!
- Well then it might be appropriate, giant pink boxes gets you a giant purple blob in return. I'm sure it makes some kind of sense, somewhere.
- Feel free to whore it all you like, but remember to always be safe when whoring. I probably don't need it myself, as my spelling and grammar are naturally impeccable, aside from punctuating lists properly. But that's an iffy area anyway :)
- When I get round to writing more articles (I have a couple of ideas floating around) I'll be sure to drop it in. Though you may well spell check it anyway, you seem to have done most of my other articles :) → Spang ☃☃☃ 02:55, 09 May 2007
- Wow, the page now suggests that it might be helpful to archive the arc. It's under attack from all sides!!! Vigilante arc justice shall be doled out!
- I dunno, I think we might be entering a whole new era in arc colouration...
- Did you do the spelling error on purpose then? Because if not, it's very hilarious. I've resisted editing it so that you can see the error of your ways. Well ok, I have corrected it twice thus far, then managed to undo it before posting.
- Heh I think I've only edited the water powered bus of yours that I know of anyway, I tend to only encounter pages if they are on VFH these days and those certainly keep me out of trouble. The amount of articles on there with spelling errors gives me the chills frankly. And you are generally very impeccable with your spelling so it wouldn't give me a lot to do! Aside from that one typo in the above post...
- I had an idea for an article today but I'm not sure I'd have the expertise to write it yet, but if done correctly it would be just darn hilarious. Doubt I'd have time though, writing Kralnor took me 3 solid days 9-5 to complete and I just don't have days off like that anymore.
- Speaking of big pink blobs, you haven't mentioned my more elegant new signature! How rude! Especially after you and Famine guilted me into changing it. At least he complimented the new one, albeit backhandedly. ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 03:30, 09 May 2007
- Gasp! That's not Ceridwyn Pink™! What kind of madness is this?!
- The arc shall only be archived when it is finished... and it'll never be finished! Haha!
- You'll be kicking yourself then, because there were actually two spelling errors there. And a grammar error! None of them intentional :( I've corrected them because I cannot live with the shame. I guess I just wasn't paying too much attention when I typed those out...
- Well the best thing to do is to just sit on your ideas until they're reasy to be written down, and it'll be easy. Well, that's what I've been saying, and I haven't written anything in forever, so maybe it's not the best advice.
- Yes, your elegant new signature is far better on my eyes than the last one. Bravo! It bears a few style similarities to mine, which is always a plus! Simple elegance is far better than giant boxes. And it's still noticeable, so it's all good! → Spang ☃☃☃ 04:17, 09 May 2007
- No it's not Ceridwyn Pink™, I thought it might be funny to see what will happen eventually if we continue in this vein...
- Although I suppose it will take a LOOOONG time to show up...still, we'll know!
- Curses, I did only see one >.< And I see one in what you just wrote also...
- I disagree, if I don't get the idea down on "paper" while it's still exciting, I'll never do it. It might not even be funny. But it was at the time, we were talking in class about how to Feng Shui your computer. Including the system unit physically as well as the desktop and such. We thought it was quite hilarious and I do love a nice HowTo.
- Thank you :) I initially only tweaked it as a demonstration but after that I found I was rather fond of the thing. ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 04:57, 09 May 2007
It'll take quite a while before anything noticeable occurs, I think! I think the servers would fall apart before we saw a difference in it! Assuming we type very, very slowly.
Heh, would I be allowed to add "any comments I make" to your proofreading thing? It would save me a lot of trouble! Not that I ever make mistakes or anything.
Heheh, a computer Feng Shui sounds good :) There's probably a whole lot you can do with that too, could be a lot of fun!
Yes, that sig is indeed the better of the two! Just my opinion though :)
And check this out - I've undented... but only half! Haha. Bet you weren't expecting that! Yeah, each colon is two ems of indent, so I decided to only undent one, for a change. I thought it might be interesting. → Spang ☃☃☃ 05:42, 11 May 2007
- Aww this takes away all the fun colon adding :(
- We'll just see how long it takes I suppose. Perhaps we should co-ordinate a rainbow coloured portion one day. Hmmm.
- I think I can see a slight colour difference but I might just be imagining things!
- As for adding "any comments you make" to my proofreading service, probably not. Primarily because I thought we had a Non-Retrospective-Editing agreement? That and I can't be bothered following you around checking everything you write. Especially because you never make any mistakes :P
- Well we thought so too re:computer Feng Shui. I guess one day I'll get started on that.
- Thank you for the sig compliment :P
- I find the colon adding a bit annoying really, the new way only involved making a number one or two less. But we can do it this way too... but my way is the wave of the future, I'll tell you! One day everyone'll be using it!
- A rainbow portion would be cool! I'm not sure how long I can keep up this pink, there's almost too much pink... almost.
- I think the Non-Retrospective-Editing agreement only applies to the arc, I'm fine with any other comments being edited. I don't think I need any more proofreading on my comments, they're all obviously perfect as they are.
- computer feng shui is coming along nicely now! Maybe just getting the idea down is the key then. That's one less excuse for me to not write anything! One day there will be no excuses left and I'll just have to sit down and write. :/ → Spang ☃☃☃ 05:28, 12 May 2007
- If you can teach me how much to change the numbers by, then we could change; however, it will make arc3.0 less accessible to others...
- I don't believe there's such a thing as "too much pink."
- I think having the time on your hands is also helpful, there's so many other little things to do around the place (hundreds more for admins I imagine) that get in the way, you almost have to wait till all those have been done and you're really bored. At least, that's what happened for me. Or have something else even less exciting you should be doing and use writing the article as a way of procrastinating doing that.
- Can't think of anything else to write, too tired. Dylan Moran was hilarious and wonderful and well worth the money :) You'll have to make an effort to go see him live sometime! ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 11:59, 12 May 2007
It's actually quite easy. Each indent is worth 2 "em"s. So if, like above, there's 18 colons/indents, that's 36 ems. So the next one for me is 34 ems. Simple! Though I'm sure anyone else could just use colons if they wanted.
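(Purely for illustration, the two-ems-per-colon arithmetic described above can be sketched in a couple of lines; the function name here is made up.)

```python
# Each wiki colon indent is worth 2 ems, per the convention described above.
def colons_to_ems(colons):
    """Convert a count of colon indents into the equivalent margin in ems."""
    return 2 * colons

print(colons_to_ems(18))      # 18 colons of indent -> 36 ems
print(colons_to_ems(18) - 2)  # undenting by one colon -> 34 ems
```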
Indeed, I take it back, pink is great in all quantities. Though I may have to change colour sometime... I'm used to colour hopping, I don't think I'm ready to settle down to one colour just yet! :)
Nah, I'm sure I do a lot less than people think I do, I just do it in a way that makes it seem like it's a lot of work. I actually do practically nothing. In fact, it's mostly deleting stuff, so that probably counts as negative. But yeah, eventually I'll get it done. Maybe I'll start writing it in order to put off archiving this page.
Ah, I had forgotten Dylan Moran was so soon! Glad you enjoyed it, he's great. I've already seen him live, remember? ;) But yeah if I see he's doing a show here soon I'll definitely go and see him again. I can't remember what crazy thing you were going to get him to do, but did you manage it? → Spang ☃☃☃ 08:04, 14 May 2007
Woohoo welcome to the FF3380's! /me blows party noise maker thingamyjig. Wow I haven't replied here in a while, whoops. Kinda been busy with schoolwork. And it's only gonna get worse, 5 more weeks of the semester. Then it's all coding, all day baby! Can't wait. And the test went really well, so you must have been sending some good telepathic help! Thanks! Doesn't appear anyone has noticed the switcheroo yet ;) I can't remember the exact details either, and it was you who was meant to do it. I think it involved a squid, a seagreen shirt, a placard with "#FF3399" written on it and the backdrop was the Arc D'Triomphe but I'm sure I forgot something. And no, I was with friends who would have thought I was a tad weird. Not to mention Dylan Moran himself. :P ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 07:15, 20 May 2007
Yay! However... Sorry about changing the colour, but I felt I needed to change from pink. Not that pink isn't awesome or anything, but I felt the time was right for me to move on, colour-wise. My apologies I couldn't keep this incremental pink run going.
Yeah, I know what you mean. I've hardly any time these days, in between sleeping, eating, drinking and watching TV. Life is tough.
Well done on your test! I'm glad my mental projection of the correct solution was of use. I'll be sure to help on the next one too. And in return, seeing as you're an accountant and everything, you can mentally send me money. That's a totally fair trade.
Indeed! I made it blend into the background colour better, and someone questioned what it was, but apart from that I'd say nobody has noticed! Haha!
They would only think it was weird because they don't understand! Heh, just finished watching my way through 3 series of Black Books, it's such an awesome show! Dylan Moran is great. → Spang ☃☃☃ 02:51, 21 May 2007
Ok, today's arc entry is quite offtopic, I have some questions:
- With the {{poll}}, if voters are not logged in but are all on the same IP, does it allow each of them to vote, or will the second, third and so on voters get "You have already voted on this poll"?
- If so, is there any easy way around this?
- If 20 or so people all edit a subpage of my user area, what's the best way to let people know it's not vandalism
Reason I ask, is I have to do a presentation on something to do with the internet for tech and I thought, hey why not do wikis, because I have to do a lame activity with the audience so I figured having things on a subpage they can edit would be easy, however I can't sit there and wait while the whole class signs up for a username, so they would all be editing as IPs, and because it's all originating from the poly we all have the same IP... Reason I wanted to do the poll was because it was fairly simple, not requiring anyone to get into the coding.
Thoughts? ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 08:54, 22 May 2007
Edit: Also can you please please please help me do the image replacement thingy of the logo, either to one like yours (I tried copying yours but it wouldn't cover the Uncyc bit properly :() or replacing the pic entirely. Can't seem to figure out how to do it! I'll be forever in your debt! ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 12:15, 23 May 2007
I think I've answered all these questions already :D Man, I'm good!
Oh wait, the vandalism one I didn't. Well, the best thing would be to be on IRC at the time, but that may not be possible in a presentation. Probably the best thing to do is just watch for people reverting it and just revert it back with a note in the edit summary saying it's ok.
I think it'd be quite cool to do a presentation on wikis. Tie it into Time's person of the year being "you" and stuff because the best websites these days are pretty much run by their users, and wikis are a prime example of that. Or something. *reminds self that it's not his assignment, and doesn't actually need to do any work for it* Aaanyway.
I think the new style of indenting is coming along well. It now comes more naturally than typing millions of colons anyway! → Spang ☃☃☃ 04:09, 23 May 2007
Yeah IRC isn't an option, nor can I really sit there and watch for reverts during the presentation, so I think I'll just put up a post on the forums letting everyone know when it's gonna happen (my time) and again just before it starts telling ppl not to worry about a mass of anon IPs editing my userpages. Then I can just QVFD the lot afterwards :) Is it possible to get temporary protection on the userpages that form my presentation once I've finished editing them? I'd hate to advance to page number 3 to find all it says on the datashow is "This sucks!!!" because some wise guy in the class thought it would be funny. :( Hmmm, that Time person of the year thing isn't a bad idea actually! Thanks for the tip! ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 05:03, 23 May 2007
A reply! Finally! Woo!
Well I'm glad you did so well and I could help out! I'm sure the telepathic pass vibes I was sending helped too :)
You should get some kind of award for getting some actual real life benefit out of uncyc! → Spang ☃☃☃ 05:57, 29 May 2007
So, your telepathic pass vibes are 2 for 2 now, I just got my marks back from that web development test, 95% /dance! Third highest mark in the whole class! Waaheey! So I'd highly recommend your telepathy vibes to anyone now! Spang-Vibes™ have changed my life! Well not much else to add at the moment, since we've probably covered all this IRC mostly :P ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 01:47, 31 May 2007
Awesome, well done! I don't think I congratulated you for this one on irc, too busy contents-hating :P But only 95%? Damn, I'll need to perfect my technique still.
Spang-Vibes™? That's definitely something I need to start offering to people. :D
Yeah, talking on irc ruins all the conversation that could go in the arc! Perhaps I should stop using it and concentrate here. Though that would lead to an even bigger talk page. I'm only 12 sections away from having 100 sections! And it's larger than any article page on uncyc! And if arc2.0 was an article by itself, it would be the 19th biggest article on uncyc! Woo! → Spang ☃☃☃ 05:13, 01 Jun 2007
Heh I liked your colour, it's actually a very nice grey as well as being amusing! Decided to try something else, though I doubt I'll change from Ceridwyn Pink™, it's just a fad...
<whine> Spangleeeeessss, I'm booooorreeeddd. Accounting class is sooo boring. Wish IRC worked in this part of the tech :( Even talking about Bananadine is better than this. </whine>
Sorry, I thought I had replied to this already, but I just remembered we discussed the stats in IRC and I never got around to replying. Whoops! I do intend to make you another long talk page award sometime. Perhaps when it gets to 100 sections. Those are indeed thrilling statistics though. Did you uncover any new stats? I think arc2.0 should be an article by itself sometime. Perhaps if you do ever archive this, and I'm not suggesting you should, it could be moved to its own page. ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0
Heheh, I see what you did there :P This colour's not bad. Or maybe it is. I don't remember.
I'm sorry, I'll work on strengthening my telepathic powers so we'll be able to conduct a conversation using only the power of my mind. So there'll be no need for IRC, and it may also mean the men in black with sunglasses and earpieces who've been following me won't be able to listen in on the conversation.
Yeah, I know what you mean, I do that all the time. Any award is a good thing! I'll never say no to one :) No new stats though, though to be honest I didn't really look any further than I did before.
If I do archive this page, I'll either give the arcs their own special pages or just leave them on here. Though leaving them on here would make the archiving not very good, this arc accounts for almost half of the size of the page by itself! I think it wouldn't be quite the same on another page, but it might be necessary eventually :( → Spang ☃☃☃ 06:58, 07 Jun 2007
Hmmm this one might be pushing the whole "color" thing a bit far...
- Speaking of IRC, it's really quiet and boring right now :P Regarding the men in black, don't think they can't hear your thoughts. Haven't you watched Heroes???
- Well, we're getting back towards the margin now; I think this method should in theory make it easier to pick a winner, but I'm too bad at maths and too lazy to try and work it out.
I really should be working =/ Too many assignments /sigh ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 07:23, 07 June 2007
Oh this is too far! Sort of.
If it's quiet and boring I'll have to get on there to liven it up! Although I am about to sleep, so I could only stay on for a short time.
Don't worry about the men in black, I'll also learn how to encrypt my thoughts so they can't read them. It can't be that difficult.
I think I know who it's going to be... again :P Unless someone happened to undent by an irregular amount, which might make things a little more interesting :D → Spang ☃☃☃ 07:55, 07 Jun 2007
You're only prolonging the inevitable with the sneaky undents :P That is all! ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 09:29, 07 June 2007
Ew C007IE5! Hee hee.
I can't believe I haven't replied here for like 12 days! I'm so ashamed. I'll try to do better in the future!
If I really wanted to prolong the inevitable, I'd start undenting one pixel at a time :D → Spang ☃☃☃ 03:41, 20 Jun 2007
That's ok, my last edit didn't really leave much to reply to, I'll admit, so I'll forgive you this once!
Phew I've been really busy the last couple weeks anyway, lots of assignments and I sort of fell off the no-WoW wagon which was kinda inevitable I guess. Just went away for the weekend too and came back to find my hard drive was dead, so yeah, haven't been around the Uncyc halls much of late.
Anyway to undent one pixel at a time wouldn't you have to know how many pixels an em is?
I guess being as awesome as you are you probably know that already though.
My CSS/XHTML skills have come along in leaps and bounds in the last few weeks because I've just finished the huge webpage assignment I had. It's like old hat to me now /flex. Once my computer is back from the shop I'll have to pop onto IRC, holidays atm, not much else to do though may have two job leads to follow up! ^_^ ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 04:54, 26 June 2007
Indeed, once I'd resolved to write a reply, I had to work to make the reply worth writing!
Yeah, I'm pretty busy too. Though busy doing stuff which involves the computer, so I still find myself on uncyc a lot of the time! Completely unintentionally of course... I just black out and find myself ICUing pages, no idea how I got there. Pretty weird.
Well... one em is just the height of the current font, so it differs depending on what you're viewing the page with. So it probably wouldn't work too well. You could use ens, which are half the size of ems. I don't think it really makes enough difference to warrant further research though! Ems are all good so far. Remember when people still used colons to indent? Ha ha ha!
I'm glad your skillz have come along, I look forward to being astounded by your knowledge any time soon! You can code up something magic for the arc, perhaps, or something.
Oh and I would have nominated you this month for UotM, but you haven't been around much this month :P Thought you'd have a better chance if I saved it for next month or later :) → Spang ☃☃☃ 04:31, 01 Jul 2007
- Oh noes! You archived! Eeeep! Tis truly a sad day, but an inevitable one I suppose. It greatly amuses me that it's still over the suggested limit and it's only just begun! Muhuhahahaha!!!
I daresay my skills shall never amaze you. Amuse perhaps, but not amaze, however they amaze me in terms of how far I've come!
One day I shall have to pick your brains on the topic of display differences btwn IE & FF. Perhaps. Perhaps I don't really care enough.
Do not worry on the topic of UotM (though I believe we discussed this in IRC). I'm thoroughly dismayed to be out in the lead despite all my best protests. :(
VOTE SbU people!!!! DO ITTT!!!!!
Well, best get back to work!
Drat, forgot to sign it. This keyboard has no tilde :( Having to use the button sucks. ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 12:31, 06 July 2007
- Indeed I did indeed archive indeed. It was coming for a while, but I felt that 111 sections was a good point to archive. And then more people made sections, but whatever.
Yes, you shall have to show me some more of what you can do. The turtles site was pretty good! Not to mention the enthralling subject matter. There's quite a few differences between IE and FF, but I'm not exactly an expert on that. You're welcome to quiz me on anything I do know though!
And congratulations again on your feature! May it be the first of many! We'll get our collaboration featured one day, mark my words! That may involve actually working on it though :s ... all in good time! → Spang ☃☃☃ 03:48, 11 Jul 2007
Hmmm you never saw the turtle site though? :P Thanks for the congratulations, I'm really really stoked. I always thought that article might have been almost feature worthy but to actually have someone nom it and it win, that was awesomely awesome! Sorry my reply is so slow, I've been playing WoW again, and working for the last 2 wks, so my Uncyc time has been sorely depleted. Though lately when I have checked on things all I seem to see is pointless bickering and it makes me sad and I just leave again. :( I hope you are enjoying the balmy Scottish summer, its bloody freezing over here! :P~ ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 12:42, 26 July 2007
D'oh, forgot about this yet again!
Yeah I saw the turtle site, I think... I remember some kind of turtle site anyway. Or maybe it was just screenshots of it. But that counts!
Shame you finally gave in to the WoW urges, but it was probably going to happen eventually. As long as you WoW responsibly, you should be fine!
Oh yes, the Scottish summer is going great, we even had a few days where it didn't rain! That was pretty good. I can't wait for some global warming to brighten the place up a bit! → Spang ☃☃☃ 01:06, 26 Jul 2007
I wondered if you'd forgotten me again *sniff* Oh yeah, I showed you the mock ups before I started! See my blog for more about the whole WoW thing. Eeeep evil colour!!! /scared Did you see my new article idea? Lemme know what you think re:idea. User:Ceridwyn/Lolcats
:D ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 02:12, 26 July 2007
I'd nevar forget! Though I am pretty lazy, it might seem like I've forgotten sometimes, but there's always the intention somewhere in the back of my mind!
Ah, I tried to check your blog after that last comment actually, but my internet was being bad and wouldn't do what it was told. I wonder who the mystery commenter could be? ;) It's not really a bad thing that you started again, it was just a little good-natured teasing :)
Aha, the article's looking good! Maybe see if you can work lolcode in there somewhere? A code by which the lolcat debaters agreed to conduct themselves by? I dunno. → Spang ☃☃☃ 03:29, 26 Jul 2007
Nice indent :P I'm sure User:Tom Mayfair would approve. He seems to have a fascination with the stuff.
Heh I had a feeling that Anonymous was you :P I've subsequently written a blog post thanks to you! I hope you're happy!
Thanks, MO has improved it tenfold already, but that LOLCODE idea is SHARP! You are always full of such clever ideas! ^_^ I like how you always see angles like that on things I'm working on, that totally work in with it but I'd never have thought of it! ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 03:48, 26 July 2007
Heheh. How about this for a mathsy indent, and relevant colour!
Heheh. Yes, very happy! :P
Yeah, well it would be a lot more helpful if I could actually do something about what I think up! Whenever I get to actually writing it down, I usually just draw a blank. That's why my article-writing progress is so painfully slow. But I will get there one day! I think the article should do well, I hadn't realised uncyclopedia was lacking an article on lolcats! If I'm struck with any more inspiration I'll be sure to let you know or add a little :) → Spang ☃☃☃ 09:15, 26 Jul 2007
- Oooooh very clever :P I enjoyed that immensely, though I'll confess I had to google it. But I immediately appreciated it once I twigged. I'm sure you'll have no trouble with this one then. AND semi-relevant colour (if you use your right-brain). But let's not carry it on because this one was a stretch as it was, and maths definitely isn't my forte.
- Yes I was rather surprised it didn't have an article already too. The idea popped into my head whilst walking to tech at the ungodly hour of 7.45am yesterday. It's amazing that my brain was functioning at all, let alone having good ideas!
Now if I could just focus on this assignment long enough to finish it I could get back to my article >.< Please feel free to add to it if the muse strikes! ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 02:22, 27 July 2007
WOO! Back at the margin!!!! And it only took 3 and a half months! Hooray!
- Took a while to get that indent, but I got it as I was googling! Still not getting the colour though... apart from it being the most difficult-to-read colour yet!
Heheh, yeah I find the best article ideas always come when my brain is only half switched on...
Congratulations! Your award will be arriving soon, I sent it via registered post so you can expect that any year now.
Yeah I thought perhaps the colour might be a bit farfetched. If you look at the Wikipedia entry you might get where I got the idea from...
3 and a half months? Really? I guess you haven't won the margin race in a lot longer than that too. I call haxx, all those tricksey indents and stuff. A girl knew where she was with colon indenting, you've gone and confused the whole issue :P ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 11:57, 28 July 2007
Awesome, I can't wait! I'm hoping for the biggest ice cream in the world. Not that I'm getting my hopes up too high or anything...
Oh I think I see now. Maybe. I was never actually that good at maths! I think I'll just stick to good old colour names for now.
Well, last time I got back to the margin first was just under 8 months ago! It's been a long time coming! But you have to admit that golden ratio indents are a lot more fun than normal colon indents! → Spang ☃☃☃ 12:22, 29 Jul 2007
And what, pray tell, am I meant to be able to reply to in that stupendous 2 line post, other than to rage at your will-he, nill-he indenting! I was trying to establish some sort of standard indent amount, lest we ruin the beauteous curves of the arc. ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 01:33, 09 August 2007
Well... there's more than what I had to reply to... though I did get a most fantastic award! Speaking of which, I just found time to update my awards page! It's finally up to date and vaguely sorted in some kind of order! Woo!
Sorry about the indenting, it's just hard to accept that those crazy days, where we would indent without rhyme or reason as we felt was right, are gone. Good times. Though I guess it's probably time that order and consistency had their turn again. It does tend to make better arcs I suppose, and that's the most important thing!
So how's New Zealand these days? All's good here, the festival's on now, so there's lots to do and see! It's awesome here at this time. → Spang ☃☃☃ 09:36, 10 Aug 2007
Those were indeed heady days of indenting however we felt like indenting, but those days are gone. Move with the times :P Because arc3.0 is going to require some pretty strict indenting rules!
- Sorry it's been so long updating this, kinda haven't been around here much for a while :) Hope you are well! ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 03:23, 06 September 2007
Oh my! I have replied! I am fine, still doing whatever and stuff, you know. I'm looking for a flatmate at the moment, tis stressful getting all the stuff organised!
How are you? How's your course going? Still WoWing? → Spang ☃☃☃ 08:54, 16 Sep 2007
Hope the search for the flatmate is not so stressful now, have you found someone yet? Did the wowplayer dissolve into his office-chair? I'm ok, course is very stressful and the internet credit costs are owning me atm, so can't really do much on Uncyc, not to mention I have less easy, boring classes atm too :( That and WoWing again, but I'm cutting back atm, new flat ftw! XD We has sunshine and outside and stuffs XD How have things around here been coping without me? ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 07:14, 24 September 2007
Yeah, got a new flatmate now! He's pretty cool. And no, the WoW one is still here, the other one moved out! But he doesn't play WoW much any more anyway. Now I just need to find a job in order to pay for my internet costs! Do you mean you got a new flat? Cool! Is it a nice place? Damn you and your sun and shine! We have rain now. Actually, I quite like the rain, but warmth is good too. Well, things have been far less colourful without you for one! Except one guy down at the bottom of this page that likes random colours too. Random colours are always good. Other than that, I think uncyc is coping... just. → Spang ☃☃☃ 12:53, 25 Sep 2007
Yay for new flatmates! Yes we got a new flat! It's very nice, very sunny and has a bath and an outside (of sorts)! Sorry bout the sunny again, but I bet your rain is all cool and erm...Scottish? I picture red-haired women frolicking in the misty highlands, since I was only 3 when I actually went to Scotland :P I just can't find the time for Uncyc atm :( My new timekilling obsession atm is Facebook. Don't suppose you've succumbed to its lure yet? Glad to hear you are all struggling by without me however :P Don't cry yourselves to sleep TOO much :)
<3 ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 08:22, 10 October 2007
Yes yes, you have lots of sun and I don't, I get it! It sounds good though. Actually, I just got back from holiday in Barcelona, so had lots of sunny sun there. Also a spot of torrential rain and thunder storms in the evenings, but sun is the main thing. True, if the rain here is anything, it's Scottish. And cold. And you remember correctly, as that's all we do here. In between the maidens' frolicking, we go out and toss cabers and fight for freedom and stuff. It's jolly good fun. I am actually on facebook, but not very much. I can never be bothered finding and friending the people I know, so I just have the people who made me join and my mum as friends. These things are far too much effort for my liking! Yes, we're getting by, but I'm sure there are hundreds of spelling mistakes and incorrectly punctuated lists accumulating for you to fix when you find the time! → Spang ☃☃☃ 17:59, 12 Oct 2007
Barcelona?! Lucky! Sounds divine! That's the suckiest bit about NZ, it's so damn far from anywhere that travel is too bloody expensive for the likes of me. Ah the caber tossing. I can just see you out there beside some picturesquely misty loch in your kilt and sporran ;P Hmm how to find you on facebook, there doesn't appear to be any "Spangs" :P (Well except this chick and a few other Nordic sounding crazy cats. They might be fun friends to make. Anyway...) and I daresay a search by your actual name would return a few too many results to be viable :P Perhaps you could add me :P Yes it does rather sound like the proofreading is starting to back up a tad. Now if only I could find the motivation to come back and do some. Maybe if someone wrote an in-game Uncyc mod that would help. I could be proofreading during raids. I'm sure my guild won't mind. ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 10:21, 20 October 2007
Aha! Finally replying! I had to choose whether to reply here or on facebook, I chose here cos I was here anyway and thought I might as well. I'll might do a facebook one later too, if I feel up to the challenge. Though the lack of colours is mighty disappointing! I'm sure there must be some kind of upgraded wall that does colours, there's at least a million different poke applications, somebody must have thought up a better wall.
I actually have a reason for not replying so soon this time... I have hardly any time these days, because I got a job! Woohoo! I could tell you what it is, but I'd have to kill you. Maybe I'll leave that for facebook where there's less spies who I'm convinced are out to get me. Who else would they be out to get, eh?!
Here's to having even less time to do stuff! And money! No longer will I have to forage for dried squid! Despite never having achieved a squid award of my own.
Perhaps you just need to imagine proofreading as a kind of raid on badly spelled articles. Small and common mistakes are easy to spot and kill, but complex fragmented sentences with bad grammar are like the bosses, you need to find their weak points to defeat them. You could totally market that game to people and make millions! Assuming the one person who would want to play it was also willing to pay millions for it. → Spang ☃☃☃ 08:19, 05 Nov 2007
Yeah I still think you ought to write the arc app for facebook :P Apparently people don't seem to realise the arc is still alive and kicking! How rude! We should tally how many months total this one has been going for. I suck at maths and numbers and such so I nominate you for this prestigious job.
You do realise that if you don't reply to me soon enough I actually can tell your mum on you now :P
Yay for jobs! Go on you can trust me with the top secret information of what your new job is ;P But yes perhaps on facebook. My job is apparently offering me more hours tomorrow so yay for that too! Jobs all round! What do you mean never having achieved a squid award of your own? The squid award was MADE for you!!!
Lol your proofreading/raid analogy was tres cute. Especially the last line! XD However you neglected to bring the phat lewtz into the equation. Wheres the fun in raiding apostrophes and list punctuation if I don't get any sick purples from it?
In other wikinews I fought bravely to defend the right of my most revered shan'do, The Great and Mystic Lord Kralnor (may Blizzard bless him with many epics) to be in the main namespace. And with it, my article on him. There were tears and tantrums but I came through in the end after hours pleading his case! They have a policy on player character profiles being relegated to their server subspace but clearly he surpasses a mere "player character," no, he is a Legend. ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 10:28, 07 November 2007
I seem to have left the arc for altogether too long again! But don't tell my mum on me! But I see it's almost at 100kb, just by itself, which will truly be a great milestone to behold! It has just slowed somewhat, it is still as great as it ever was, if not greater.
Yeah, my job isn't actually interesting. Just working in a shop. It's pretty good though! I've been terribly lazy with facebook too, it just seems like a lot of effort. Maybe when I'm less lazy I'll actually do more with it.
And damn, I can't believe I forgot the Squid award was mine! I'm terribly ashamed now. It's been too long. Maybe I should study old sections of the arc more often. Like arc history. I expect that'll become a real class in the future, when there's many tomes of the arc to be studies and wondered at.
Maybe you just have to imagine your own sick purples. But then if you can imagine your own, there's no real need to actually do anything to get them, and then there's no point. Ah well.
I'm glad you're taking your wiki skills to other wikis and showing them who's boss! Clearly, the article can't be relegated to a server subspace, it is far more important than that! Maybe there should be an award for having an article on the most different wikis at once. A million bonus points if you can get wikipedia to take it! → Spang ☃☃☃ 22:22, 20 Nov 2007
Heh its ok, we're allowed to be arc-lazy from time to time :) But I won't make any promises not to tell your mum :P
Yes we do draw near to the elusive 100kb mark. How will we celebrate this momentous occasion? And the arc is definitely always improving. With each new addition we raise it to new levels of awesomeness. Recent reports suggest that this might now be in the Top 10 Arcs of the Century!
I've probably already been enrolled in arc history for next semester I don't doubt. At least that promises to be more interesting the other classes. If I go back that is, working is certainly more fun in the wallet department.
I'm sure that Wikipedia's request for me to post my article there will be in the Inbox any day now. Kralnor doesn't get out of bed for less than a million a day now. Million hits that is...
Hope work is treating you well! Be sure to stop by Facebook sometime, I've sent you a million or three invitations to all sorts of fun things, still no arc application though =S ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 06:22, 04 December 2007
Oh my, I had completely forgotten it was my turn to reply to the arc! D: I'm getting bad at paying attention to the arc in my old age! I'm so ashamed.
I think this should take us over 100kb... a momentous occasion! Cause for celebration indeed. Forget top 10 arcs of the century, it must now be the best arc in the world... probably. Not that it can't be even better...
Work is finished now, was only an xmas temp thing, so back to looking for a job now! Maybe I could get a job as a professional arc person. That'd be good. As long as I remembered to actually pay attention to it!
Yeah, I have a million facebook requests for stuff, I've no idea what each one does! So I just left them where they are. I might venture into the requests one day to see what they all are. I only hope that when we're attacked by zombies in real life, "ignore" is one of the options they give! Never really got into facebook that much though. It's still too much effort to find and add everyone I know!
And Arc 2.0's 1st birthday is coming up next month! It's grown so much, I'm so proud! That should be a joyous occasion. I'll put a note in my diary and everything. → Spang ☃☃☃ 22:10, 21 Jan 2008
Hooray! You replied! Why only last night I was pre-emptively mourning the arc's demise. Without you its just a pale echo of itself.
Indeed, the page is now, happily, 100 kilobytes long, or so it informs me. I dunno bout best arc in the world though. The Arc de Triomphe and Noah's Ark probably still top this one, if only for the recognition factor. Our arc has not yet gained the popularity in other parts of the world which it has here. Why just yesterday I saw a documentary about African tribesman who live so deep in the jungle that news of arc2.0 had not yet reached them. Also it does have a certain lack of tangibility. I suppose at the moment Noah's Ark does as well, but just as soon as they find it it will be a lot more tangible. Of course I'm making the assumption that tangibility is a criteria for Best Arcness. Perhaps the judges will be pleasantly surprised by this new, more modern approach to arcing? We can only hope I suppose.
Aww well at least you had some work, and at least now we get you back XD I'm working full time at the moment too and I must say, having some of that folding stuff is quite nice for a change. Figuratively speaking at least. We don't really use the actual folding stuff here. I know chip and pin was only just taking off in England when I was there in 05, and I assume its similar in Scotland, but we've had the equivalent for at least 12 years here, so cash is a bit obsolete.
Yeah I hear you on the facebook thing. You aren't the first of my friends to bemoan the "million requests" issue this week. Perhaps I ought to stop sending so many ;P I have of late anyway, I just log in to clear my own :) Anyway it's just nice to hear back from you *somewhere!*
Wow, a whole year? How momentous. I'll have to bake a cake. Do we have to get it presents?
/impromptu-nice-to-hear-from-you-eHug <3 ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 02:36, 22 January 2008
No, the arc isn't dead yet! It is getting a bit slow in its old age, but I think it has quite a way to go yet. Bring on 200kb!
Well perhaps some people haven't fully embraced just how good the arc is. True, those other arcs are pretty good, but they've have like a million years to perfect their arcness, at this rate arc2.0 will soon overtake them all! And then even African tribesmen will know about the arc!
Yeah, I'm starting to wonder what I did to keep myself occupied before getting a job, I'm so bored all the time now! In between the very intensive job hunting I'm doing, of course. And we've had credit cards etc for ages, but cash is just a lot easier sometimes!
Yeah, I might put more effort into facebook sometime! Just as soon as effort resources in my brain become available. Mostly when I'm online my brain is just switched off, and replying to the arc or sorting out all those invitations is something I think probably needs me to pay more attention to, so I put it off till the next time. And then do the same the next time. And so on. I'll try to do better! If I haven't done much interesting for a while, giving me a nudge wouldn't go amiss!
I know, I can't even remember 12 months ago! An arc cake would be awesome. It'd have every colour imaginable in it! Presents might be going a bit far... maybe the arc itself could win an award for once! → Spang ☃☃☃ 09:04, 24 Jan 2008
104 kb down, 96 to go!
Indeed, I guess our arc has come a long way in such a relatively short space of time compared to them so give it another hundred or so years and it will have overtaken them as the most popular arc of all time!
Nah I didn't mean credit cards, I meant debit cards but oh well XD Cash is NEVER easier, dunno bout Scots money but English monies is very heavy! =S I had to walk to the supermarket with bags of baked beans tins full of change to cash it in and it was about 5 quid. Ok thats an exaggeration but you get my meaning :P
Just clear them all and start from fresh on facebook imo. I promise not to spam anymore >.> I'd rather have friends who reply than rack up mega zombie points spamming people who refuse to speak to log in =S
I dunno, I think there were a few colours you really wouldn't want in a cake used here. I can't think of any examples atm but there were definitely some un-flavoursome ones.
Hmm we could make an award. It would have to be a collaborative work. Our last one went down so well and has left a real lasting mark on the site! XD ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 01:35, 27 January 2008
Make that 94! This is pretty exciting!
Paper isn't heavy! Well ok, you wouldn't pay for your shopping in coins, though it might be quite amusing. Interestingly, you can only use a certain amount of one kind of coin before it's not legal tender any more. But I wouldn't use my debit card to pay for a bus, or some tic-tacs or something! Some places charge you if you pay with your card for something that's under £5. Mostly small shops though, if at all.
Yeah, I think clearing the requests is the only way! Some look ok, but I do like my profile to stay relatively clutter-free, despite there only being like 2 people who'll actually look at it :) I did like the last.fm app though. Though now people can see when I'm online and just hiding by when I play music :) Soon there'll be no escape!
I dunno. As long as they're only colours, I'd be pretty adventurous with any strange coloured cake. Strange flavours is definitely something to be avoided though! I remember having a pink with green spots coloured cake before, which was pretty awesome. I've no idea how my sister managed to bake green spots into a cake, but it was pretty impressive.
Awards are always a good thing. I'm not sure what kind of awards would be appropriate to give to the arc itself. We'll need a brainstorming session or something, where we'll have a flip chart with diagrams on and throw around buzzwords, like they do on TV. That'd be so cool! → Spang ☃☃☃ 22:30, 12 Feb 2008
107!!
Ok, I know we had a no-retrospective colouring rule, but I felt this situation called for it! Can't believe you didn't notice what I was going for there.
Don't worry, you might just get to be orange yet...
Only two sleeps (by uncyc time) to go arc2.0! This is a big milestone in the arc revolution really. I think perhaps it might even call for a tickertape parade. With a marching band and elephants and Miss Arc smiling and waving from a fancy convertible. Of course that might take a bit of organising and we only have a day or so. Maybe we'll just throw some confetti then?
Wow a pink and green spotted cake does sound impressive, provided of course that the green spots were intentional... She didn't bake it a couple weeks in advance did she?
I think its your turn to find the next web 2.0 for us to haunt. Do forums count? I have a great fondness for forums. My friend and I have a forum all to ourselves to spam our inane chatter on. Only two other human beings dare to go near it.
I like the brainstorming session idea. I vote we do it a la Fran from Black Books.
"Is this.../draws a circle on the flip chart...the best we can be?!.../draws a big dynamic arrow coming out of the circle...Are we...or are we not....AN ARC?!" ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 10:58, 14 February 2008
Another fundamental arc rule broken in order to wish the arc a very happy belated 1st birthday!
/throws streamers and does 3 cheers
Hip Hip Hooray! x3 ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 08:58, 17 February 2008
Woohoo! Happy birthday arc! Well, yesterday, but that's a minor point. I've already let off a party popper or two in its honour. Pretty crazy stuff!
This is getting pretty crazy what with all the unwritten rule breaking going on! It's like the arc is old enough now to start rebelling against the rules. All for a good cause though!
Yeah, I thought you might be going for some rainbow colouring, but you mixed up violet and indigo! So I thought I'd go out of order too. More noticeably though.
As fun as web 2.0 things are, they're quite a lot of effort to keep track of! I'll keep a look out for the next one though.
Happy birthday again, arc! → Spang ☃☃☃ 21:32, 17 Feb 2008
Wow, my apologies, I've been terribly remiss. Almost 2 whole months have passed since the last edit and I'm only now just getting into gear to write something!
Life is too busy!
However it is with some excitement that I can announce the end of the rainbow theme. You may now resume selecting your own arc colours as and how you see fit. :)
And yes I know I mixed up violet and indigo but I sort of hoped you wouldn't notice ;P
Seems your look out was working as I think I still owe you a reply elsewhere also! But we mustn't let the arc slip, the work we are doing here to forge ahead in this new and exciting field of arcing is too crucial to the cause!
Hope all is well in your neck of the woods :) ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 07:16, 08 April 2008
Woo, a reply! That's the longest gap in replying to the arc, like, ever! It's probably my fault though, leaving many so messages in different places last time. I'll stick to one at a time from now on I think.
And yeah, life always seems to get in the way of doing stuff. Damn life! I want to do stuff instead!
I notice everything! Though it probably doesn't matter much anyway. Fun fact: most people can't actually tell the difference between indigo and violet in a real rainbow, but we traditionally use 7 colours because Isaac Newton was into numbers and liked the number 7, so when he named the colours of the rainbow he decided there would be 7 of them, and added indigo to make up the numbers.
Yep, all's good here! It'll be summer soon, hurrah! → Spang ☃☃☃ 03:02, 09 Apr 2008
Yes sorry, it was such a long gap. It's usually you who does that, not me. And I quite like finding messages all over the place. It's like little notes in your lunchbox!
That being said, your last twitter was like a month ago!
I try and find a balance between doing stuff and life, often even doing stuff whilst living. But then I'm quite skilled. I wouldn't recommend trying this at home.
It appears you do indeed notice everything! I can only tell the difference in that they look the opposite of what makes sense to me, I think violet is darker than indigo. That is a fun fact though. Indigo has to be the coolest colour name though. It's like the bottled essence of hippies. Even Puscifer agrees with me.
Enjoy your summer, winter beginneth here and I couldn't be happier. April is one of my fav months and not just because my birthday's in it. And my wedding anniversary. I also just like coz the leaves turn and it starts getting colder too :) ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 07:58, 14 April 2008
Well here I am back to leaving ages between replying! I'm just not very good at this whole "communication" thing in general, I think.
I actually sent a twitter update by text not so long ago that never showed up on the website. Odd. I've updated it now!
Yeah, April's pretty good here too, it's just starting to get warm enough to walk round without a warm coat on. Bring on summer! Leaves turning in April? You and your crazy backwards seasons! → Spang ☃☃☃ 23:57, 25 Apr 2008
Perhaps it is you northerners who are backwards!
Thank you for the birthday wishes :) Facebook is handy like that with the reminders and what not! I haven't got round to replying to everyone there because quite frankly there's too many.
I see some other uncycers have finally tracked you down on there too ;P
Why is the edit page complaining about a dollar sign?
I don't have a lot else to say at the moment, it's cold and I'm sick of studying. General autumnal grumps all round! ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 08:20, 07 May 2008
Well, we were here first! Probably!
Heh, yeah it's good for that! Yeah I finally gave in and let people know when they ask me if I'm on facebook. What's the worst that could happen?
Dunno, I find the best answer is to just blame wikia. For everything. Ever. They're always to blame for stuff like that.
Damn Autumn and Winter, I for one am glad they're gone for another half year! You should move to the other side of the world every 6 months and just live in summer constantly! → Spang ☃☃☃ 01:24, 08 May 2008
Well, if we are going that far back, technically I'm on the same side of the argument as you. Heck I don't even have to go further back than 2 generations!
Well, technically the worst that could happen is that someone stalks you and kills you after tracking you down from your details. Or just steals your identity... But yeah, I'm sure that won't happen! >.>
I'm quite upset, your Mum figured out how to reply to ppl on twitter before me. QQ! But I got it sorted now!
Personally I love autumn and winter, I love the cold and all the scarves, gloves, earmuffs, woolly jerseys etc. Permanent summer would be awful! I was just in a grump when I wrote that. Can't even remember what it was about! ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 07:40, 23 May 2008
That's true. But I think that mean I win! At whatever it was.
Well at least having a stalker might be quite interesting, I've found the internet in general to be quite boring lately. Ah well.
Heheh. I'm still not using twitter much, except to get updates from muse on my mobile now, haha! I don't think I do enough interesting things or have enough interested people for it to be of much use.
Yeah, winter and summer are both pretty good at the time, you're always glad to get into them :) I don't like autumn so much though, it gets cold and wet, but there's never any snow, and never any interesting rain. Just cold. Which sucks. Spring isn't much better, but at least it's getting warmer then! → Spang ☃☃☃ 18:25, 05 Jun 2008
So I took a bit of an arc-break as I'm sure you noticed, but I can't just let it fade into obscurity. Vivant l'arc!
Not that I have that much to say really. You know my big news, and thank you for your congratulations, I gave up trying to reply to them individually pretty quickly.
Btw your South Park avatar suits you, especially the background :P I started to build one then I couldn't work out how to delete bits when I accidentally added them, and then I tried the Delete button, hoping it might allow me to choose what to delete, which it didn't, so I gave up.
Thanks for your help on the JoCo wiki, I haven't gotten very far through my to-do list lately as I've been a bit preoccupied with RL.
Well that's all I got really for today, just wanted to give the arc CPR. :) ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 01:47, 28 July 2008
Heh, though the arc my get slower with age, it still keeps on going! :D And I notice firefox has now stopped spellchecking the arc because it's too long. Maybe that should be submitted as a bug - "Doesn't spell check the arc. Severity: critical"
Yeah I can imagine there'd be a lot of congratulations to get through :) Must be very exciting times for you!
Funny you should say that, because I made exactly the same mistake when making mine the first time! I gave up and made that one about a week later. It's just a matter of finding a time when you're bored enough. Which happens to me a lot, funnily enough.
And yeah, I don't seem to have done much at JoCopedia except a couple of templates... maybe I'm just a one-wiki kinda guy! Though it does email me when my talk page there is updated, so if I am ever needed, I'll not be too far away :) → Spang ☃☃☃ 18:23, 28 Jul 2008
I don't let firefox push me around, I can't stand it nagging at me and telling me I'm spelling it wrong, because so often on the internet I'm /intentionally/ spelling stuff wrong. As is my wont. However that is crazy that it's just given up. Maybe it sensed my spelling-fu was strong and decided it wasn't needed?
Yes very exciting times full of many firsts, with many more to come! Love the colour btw ;)
I haven't found myself to be bored enough yet, but that's more to do with the fact that I've found other things to do with my time instead than with me not actually being bored.
That's ok, it's a quiet little place and I know how to use the SpangSignal™ if you are needed! That's nice that you've decided on a life of wiki-monogamy though! ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 03:06, 01 August 2008
Well that was a long gap! Your CPR should probably be directed at my procrastination, not the arc itself. But as long as it's still going, it's all good!
I've been busy in the meanwhile though! Got a new job, new flatmate, lots of fun at the festival. It's all good!
Been keeping up with your "progress" on facebook, not long now till you'll probably have no time for the arc at all! ;_; Though it probably won't be much different to my laziness! → Spang ☃☃☃ 21:45, 25 Sep 2008
Hi! I don't know why exactly but I felt like coming back here today! Thought I'd best apply the defibrillator to the poor neglected arc! Lots seems to have changed around these here parts, though some things never change. Your talk page is still a bit short for instance and MO is ignoring my talk page like usual. Did Famine ever come back? I wonder if it's possible to exist around here and not actually DO anything. Also, how do I edit my watchlist now? It's full of rubbish and I don't want to have to unwatch each page manually! There used to be tickboxes somewhere... I think my sig could use a revamp. What's in vogue these days? ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 07:56, 18 May 2009
It's because you feel the calling of the arc! You can stay away for a while, but never forever! Hard to believe this conversation is over 2 years old :)
My talk page is the longest in the land, but still not long enough! One day it'll be able to go round the whole world, or wiki, whichever is biggest. Or smallest. Famine never came back that we know of... And it's totally possible to exist around here and do nothing! I think the difficult thing is to exist around here and actually do something.
Also I fixed your watchlist thing. You had some CSS hiding the bar the watchlist editing links appear on for some reason. Olipro added them 2 years ago. Almost to the day! Who knows why!
As for sigs, I'm pretty sure random colours are definitely in. That and purple ponies. Anything else isn't important. → Spang ☃☃☃ 21:49, 19 May 2009
We have to talk,
about the article Ecole Centrale Paris you just deleted (twice I think) for vanity reasons. I just arrived here in Uncyclopedia, but the reason why the article was removed seems a bit obscure to me, since it passes the test of vanity relatively easily: in France the ECP is a very famous college as can be inferred from the great number of related hits and the article in several languages. Plus the text that had been written by that colleague of mine was funny (at least I thought it was). So I hope you won't mind if I recreate the article.
Cheers
Régis B. 10:10, 5 July 2007 (UTC)
- Alright then. I actually deleted it because "ecole" means school, and I usually automatically delete anything with "school" in the title. But if it's a famous college, it's usually OK. I've restored it for you, but it still needs to be a lot longer (at least a few more paragraphs/sections) before it's out of the firing line ;) → Spang ☃☃☃ 02:21, 05 Jul 2007
Archiving
I never thought it would happen. Never in a million:35, 5 July 2007 (UTC)
- Heheh. Notice though, despite that archiving, it is still giving the "this page should be archived" message. Ha ha ha! → Spang ☃☃☃ 02:41, 05 Jul 2007
- Hah, I didn't notice that. I guess that message will never go away then, unless something unforseen happens to the arcs. Let's hope:59, 5 July 2007 (UTC)
deleting my article
Excuse me, why have you deleted my article on the Breton language?
- It was deleted because it was one sentence long. Hundreds of other one-sentence articles are deleted every day, so don't take it personally! I've restored it and tagged it as being a short article, so you have a week to make it proper length. Just a few sections should do it, just so it's article-sized, as one-sentence articles are discouraged here. The counter goes from a week from whenever you edit it last, so really as long as you're still editing it it won't be deleted. And you can remove the tag yourself when it's a good length. Enjoy! → Spang ☃☃☃ 04:35, 05 Jul 2007
VFH
OK, I see your change (regular TOC, that is). This is great for load times (kills regular VFH by far), but kinda makes the page ugly. In fact, it somewhat takes away from one of the old purposes of the paradigm change (ie: less cluttered look). Is there a way to do a DPL version sans all the extras, so it looks more like it did of late? Like, all I want is a list of articles (score is unnecessary as well). If you've already tried this, then fine, because I've been around in less capacity lately (I'm back in full force now). In any case, I don't like that there are two tables of contents.
If you can sort through that mess of sentence-building to get my point, please respond.-Sir Ljlego, GUN VFH FIYC WotM SG WHotM PWotM AotM EGAEDM ANotM + (Talk) 14:37, 7 July 2007 (UTC)
- Yeah, me and Mitch were talking about it on IRC, and decided that having two DPLs was just making the page have massive load times, especially at busy times - even at its most cut down, similar to what you suggested, the page was taking up to a minute or more to load compared to current VFH's 20 or so seconds. And if you think about it, a DPL list of all the articles on VFH would look just like... a TOC.
- I was basically aiming to have the full VFH page as similar to current VFH page as possible, to keep the people who liked it that way happy.
- The two TOCs thing comes from the limitation that headers on the page and headers put in by the DPL are separate, and as such you can't get them all in the same TOC. And we need the TOC for the articles, and it's better if there's one for the main headers too, especially if the nominate box is going to be at the bottom (though I think it looks better above the articles on the full page, and below on the summary page, as there's minimal scrolling required to nominate something). Though you could make a box with wikilinks to the respective sections, it'd still be a TOC, just look different, though you wouldn't get the show/hide TOC bug. → Spang ☃☃☃ 02:51, 08 Jul 2007
- I totally agree with your decision. I wasn't there for that (probably working...blech), but I'm sure you guys were right. What we maybe could do is spread the TOC out (ie: make it a wikitable with more than one article per row). Because with the nominate box now, etc etc, it's getting rather long again. But if the load times are down now (which I think they are), then that's really the best thing we could do.-Sir Ljlego, GUN VFH FIYC WotM SG WHotM PWotM AotM EGAEDM ANotM + (Talk) 14:59, 8 July 2008 (UTC)
Beautiful, beautiful work. Absolutely great. Perhaps we should start on VFD now. Or you can with the code that I sort of helped with a little. It'll be a new age!-Sir Ljlego, GUN VFH FIYC WotM SG WHotM PWotM AotM EGAEDM ANotM + (Talk) 15:30, 9 July 2007 (UTC)
- Yup, I think the VFH project has been a resounding success! Great job team! And yeah, VFD will be done all in good time, don't you worry... → Spang ☃☃☃ 03:58, 09 Jul 2007
- If you insist.../me ceases to worry. And, also, when you do the VFD thing, if you need my help, just ask. I doubt you will, but in any case, I hope something you learned (or altered to your liking) here has inspired you. I'm off to not give a shit.-Sir Ljlego, GUN VFH FIYC WotM SG WHotM PWotM AotM EGAEDM ANotM + (Talk) 21:13, 9 July 2007 (UTC)
Front Page
Hello can you please make the title Front Page (עמוד ראשי) disappear? no one seems to know how to do it. thanks. Kakun 18:26, 7 July 2007 (UTC)
- Well I can't do it myself, you have to be an admin to do it. It's much easier to do on eincyclopedia than here, just add the following code to here, on a new line.
body.page-עמוד_ראשי h1.firstHeading { display: none !important; }
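The same rule generalises to any page title: MediaWiki exposes the title as a body class of the form page-<Title_with_underscores>, and the automatically generated heading carries the class firstHeading. A minimal sketch for an English-language page, assuming the default skin's class names ("Front Page" here is just an example title, not a real target page):

```css
/* Hide the auto-generated first heading on the page titled "Front Page".
   MediaWiki adds a body class of the form page-<Title_with_underscores>,
   and the generated <h1> carries the class "firstHeading". */
body.page-Front_Page h1.firstHeading {
    display: none !important;
}
```

Because the body class is unique per page, any page-specific styling can be scoped the same way.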
Spangsignal!
Forum:Something_strange_logging_on... Sir Modusoperandi Boinc! 19:47, 8 July 2007 (UTC)
- I think I got there just in the nick of time. And this spandex one-piece suit makes me look awfully dashing, don't you think? → Spang ☃☃☃ 09:39, 08 Jul 2007
- It wouldn't be the same without the Spangmobile. Also...
For reacting so quickly to the Spangsignal. Sir Modusoperandi Boinc! 23:14, 8 July 2007 (UTC)
- You're welcome, citizen! → Spang ☃☃☃ 11:16, 08 Jul 2007
- Can I be your sidekick? I've already got the MOmobile...it's turbocharged. Sir Modusoperandi Boinc! 23:28, 8 July 2007 (UTC)
- Yeah, totally! Ohh, does it have a CD player too? We can ride around Uncyc City and fight wiki-crime and valiantly save noobs in distress! I'll need to work on some epic one-liners for when we take down the bad guys first though. → Spang ☃☃☃ 11:56, 08 Jul 2007
- CD changer, baby (no Red Hot Chili Peppers). You're on your own if we meet the Crufter, though ("Edit me this, dumbassterly duo..."). His gang, the IPs, spook me a little. Sir Modusoperandi Boinc! 00:06, 9 July 2007 (UTC)
- Never fear, my butler Jeeves is right now working on creating the very best fight scenes, plot twists, character developments, witty repartees and subtle innuendo for us to whip out and take down the bad guy with. I already have one that'll totally kill, if only I can manage to take him out with a breeze block of some kind. → Spang ☃☃☃ 01:12, 09 Jul 2007
- Cool, all I can do is go on IRC and report vandals...*Logon!* "Holy vandal, Spangman!". Ooo! Can we be the Ban Patrol? Sir Modusoperandi Boinc! 01:29, 9 July 2007 (UTC)
- Awesome! I think that would literally be the Most Awesome Thing Ever. I'll get started on a theme song now. → Spang ☃☃☃ 02:10, 09 Jul 2007
VFS
Should be open now, eh? You voted for, so I figured I'd let ya know.-Sir Ljlego, GUN VFH FIYC WotM SG WHotM PWotM AotM EGAEDM ANotM + (Talk) 01:09, 10 July 2007 (UTC)
- Not quite yet... midnight on the 11th is when it goes. Patience! → Spang ☃☃☃ 04:26, 10 Jul 2007
Since this time of the month is nomination only, can I delete the vote made after the person was nominated? -- 04:27, 11 July 2007 (UTC)
- No, it's not nomination only, but it's recommended that you wait to see all the nominees before voting. → Spang ☃☃☃ 04:42, 11 Jul 2007
O_O!
I was just as disgruntled as anyone else by Deathwatch2006, but 2 years? That seems rather harsh, even for a dumbass like that.-Sir Ljlego, GUN VFH FIYC WotM SG WHotM PWotM AotM EGAEDM ANotM + (Talk) 15:46, 16 July 2007 (UTC)
- It would have been infinite, but I thought he might like to come back when he's better. Would you have preferred infinite? As far as I could see, from everything I've seen of him on and off uncyclopedia, he revels in causing trouble with other people, and I'm not going to let him try to do that here. He needs help of a kind that uncyclopedia can't provide. → Spang ☃☃☃ 04:40, 16 Jul 2007
- Well, as I don't really have a stake in this, I just felt it necessary to tell you that I was amazed at the harshness of the ban. Didn't know you had it in you.-Sir Ljlego, GUN VFH FIYC WotM SG WHotM PWotM AotM EGAEDM ANotM + (Talk) 16:43, 16 July 2007 (UTC)
- I ban people infinitely all the time - surely that's more harsh? → Spang ☃☃☃ 04:45, 16 Jul 2007
- I just didn't know that side of you. Honestly, the only logs I saw of yours were the ones where you blocked Splaka for arbitrary amounts of time.-Sir Ljlego, GUN VFH FIYC WotM SG WHotM PWotM AotM EGAEDM ANotM + (Talk) 16:46, 16 July 2007 (UTC)
- There you go, all my blocks. You think 2 years is harsh? Check out the 15 year block on July 2nd. → Spang ☃☃☃ 04:51, 16 Jul 2007
- /me reads... \o/! You're the one who blocked that retard IP that I ranted to twice! You're more my hero than you were before.-Sir Ljlego, GUN VFH FIYC WotM SG WHotM PWotM AotM EGAEDM ANotM + (Talk) 16:59, 16 July 2007 (UTC)
New ICU thingumy
Hiya Spang - saw your new ICU thing, and while I'm not exactly opposed, I have to say I use ICU without a sub or fix occasionally - mainly when I put my reasons in the edit summary. Any chance of at least getting a sub=nothing I can use to do this without looking amateurish in front of the n00bs? --Whhhy?Whut?How? *Back from the dead* 10:41, 17 July 2007 (UTC)
- Ok, now you can use a nofix=y to suppress the message. It doesn't matter what the "y" part is, as long as it's not whitespace. Though I think it's still better to use a sub or fix message, as a lot of n00bs won't know to look in the history for messages or reasons. Also, it makes it easier for the person checking the expired ICUs to know what the original reason for tagging was at a glance. → Spang ☃☃☃ 04:35, 17 Jul 2007
- Yeah, maybe you're right. Still, it's good to have the option. --Whhhy?Whut?How? *Back from the dead* 08:50, 20 July 2007 (UTC)
Sig
Hey Spang, you mind giving me a hand with ForestAngel's sig? The gif violates the 12px rule but it's nearly invisible under 20... Thanks! ~ 11:17, 17 July 2007 (UTC)
- Well at 20px it only makes the whole line 1px higher, so I wouldn't worry about it too much. → Spang ☃☃☃ 04:53, 17 Jul 2007
OH NOES! IT THE EVIL EYE OF FAMINE! WE'RE ALL GONNA DIE! WE'RE ALL GONNA DIE! ~ 21:35, 18 July 2007 (UTC)
69.105.96.36
That page you deleted was actually revenge on his vandalism. He was really annoying me so I did that in revenge. Do you have any other concerns over that other than the fact that it isn't necessary? Could I at least put {{Indefblockeduser2}} on his user page?:02, 20 July 2007 (UTC)
- Revenge on vandalism is hardly better than the vandalism itself. If someone on the internet is really annoying you, either get off the internet to cool down and think about it rationally, or discuss it with someone else. Revenge is never a good thing.
- Often, the only reason a vandal vandalises is that they want attention, or to provoke a reaction. By doing that kind of thing, you're giving them exactly the reaction they want. If a user gets blocked indefinitely, the best possible thing for you to do is to ignore them, and forget they ever existed. Deprive them of attention, and they have no reason to come back. Start fights with them, and they'll want to come back even more to finish it off.
- As for the blocked user template, why do you want to put it on? What would it achieve? Who's going to care that that user was banned infinitely? Nobody's ever going to care about that particular user any more, same as goes for 99.9% of all the other indefinitely blocked users. And anyone who does care knows how to check the block log.
- So to recap: If someone is annoying you, talk to someone else or report them to ban patrol, and then forget about them. If someone gets banned infinitely, just pretend like they never existed. → Spang ☃☃☃ 05:45, 20 Jul 2007
- I understand now. This guy was already infi-banned so I don't need to report him to Ban Patrol or anything like that. It's just when I logged in this morning, it said I had messages. I checked, and it was the same as yesterday. As I checked the history to see if anyone had made spelling changes, I saw 69.105.96.36 vandalising my talk page. Other established users were making fun of him on his talk page and I felt that I wanted to join too. So, I created that profile on his user page to join the revenge. So I guess that there really isn't a need to take revenge on users or put that template on the talk page, as most people Don't Care. Thanks for the advice.-:05, 20 July 2007 (UTC)
I SEE ALL!!
I saw your new addition to the autowatch js. Very cool for slow net days (like right now, as it happens) - I thought for a split-second that my computer had gained sentience and was actually being helpful all by itself...! --Whhhy?Whut?How? *Back from the dead* 21:24, 23 July 2007 (UTC)
- Ahah, I often wonder if people notice these little changes ;) If you have any suggestions for any others, feel free to suggest away!
- I'm currently updating my autodelete script to be even more awesome, which should be useful to you soon, if my evil plan comes together... → Spang ☃☃☃ 11:39, 23 Jul 2007
- So it's your evil plan? I thought this whole thing was a giant conspiracy, but if it's just one person then that's totally alright... :-) --Whhhy?Whut?How? *Back from the dead* 20:17, 24 July 2007 (UTC)
- I'm sorry for interrupting your nefarious plans, but what does this new JS do? I'm just too tired to read the code... :) ~ 21:20, 24 July 2007 (UTC)
- Yep, moved my lazy arse and just realized what it actually does. I'd give you a pint of tea, but Mhaille got my last one yesterday. ~ 21:23, 24 July 2007 (UTC)
As Braydie seems to be busy
I thought to ask you about the FFS and what it means, as I don't know what it is. I also asked Braydie, but as he seems to be busy I thought to ask you as well. Richardson j 01:48, 24 July 2007 (UTC)
- FFS is just a joke, don't worry about it. You can't actually nominate someone for a ban. Nothing will happen, whatever happens on the page. You can remove the template from your userpage if you like, it's not official in any way. And if anyone complains, you can refer them to me. → Spang ☃☃☃ 03:47, 24 Jul 2007
HP spoilers
Hmm, I just noticed (after I RVed) that you were the one who removed the HP spoilers. You said to ask you. So, let's ask. Why? Within half an hour of me posting them the first time, someone went to my talk page and wrote, "LOLOLOLOLOLOLOL". That's probably the biggest laugh the HP article has gotten yet. Also, most of the comments on the talk page seem to be for "keep". Besides, is this anti-spoiler policy going to be applied equally throughout uncyclopedia? Because if so, you've got a *lot* of work ahead of you. -- Rei 17:20, 24 July 2007 (UTC)
- There was a large discussion on irc about this, and everyone agreed that there should be no spoilers for anything less than a year old, in order to give people who really don't want it spoiled enough time to see it for themselves. And the general feeling in the forums where I've seen it is that spoilers are a bad thing.
- There are a couple of main issues about spoilers which were discussed:
- A spoiler joke is inherently entirely on the reader, for the benefit of the person doing the spoiling. The humour benefit of having it there is nothing compared to how disappointed someone could be if they get directed to the page and have something they could have been looking forward to for years spoiled. To me, doing that is very much being a dick, which as we all know is not uncyclopedic.
- Fake and mixed up spoilers are funnier. Which sounds more uncyclopedic? Snape kills Dumbledore, or Voldemort is Luke/Harry's father, Dumbledore was a ghost the whole time, or something similar? And better yet, the latter form doesn't actually spoil anyone but people who know the respective plots already.
- The issue is not entirely with the spoilers themselves, it's the manner in which they are presented, i.e. those who come to the page and don't want to know the spoilers have no way of not seeing them, as they're right at the top there. Compare that to the page on Lost, for example, which generally always has information from the very latest episodes, but it is all used for or turned into actual jokes in the article after the spoiler warning. The difference being that one form tries its hardest to spoil something for someone who doesn't want it to be spoiled (going back to being a dick), while the other uses the potentially spoiling information to make actual jokes, and gives anyone who doesn't want to be spoiled a chance to leave unspoiled.
- Also the guy who said LOLOLOL had edited the harry potter article in the days before the book came out, so it's likely he knew the spoilers already.
- That's my reasoning for removing the spoilers, and will likely be sticking to it. I'm already quite close to locking the article, and if the spoiler jokes keep getting added, I will. → Spang ☃☃☃ 07:22, 24 Jul 2007
- Actually, I posted the spoiler as soon as Slashdot broke the story about the spoilers coming out, while news of them was still proliferating. When I broke it, there were only a handful of blogs that had the spoilers posted. If the poster had known the spoilers, they were very fast at reading them.
- As for who laughs, I had the ending of Cowboy Bebop spoiled for me by the Cowboy Bebop article. So? I cracked up. It was hilarious, and I smacked myself for going to an article on a series I hadn't seen the end of. Sure, the joke was on me, but if you can't take a joke on yourself, you're a humorless drone. ;) That's what "roasts" are all about. In fact, it was the Cowboy Bebop article that inspired the original "Snape Kills Dumbledore" header on the HP article that ended up morphing into the new header (which you removed). Lots of articles have spoilers right up front, and the spoilers are the humor of the article.
- Fake/mixed up spoilers: Your examples violate Uncyclopedia:How To Be Funny And Not Just Stupid. Random nonsense is not humor. It's boring and tedious reading a bunch of made-up garbage that has no rhyme or reason to it. Spoilers, however, do create genuine laughs -- both laughing at and laughing with. You may consider this to be offset by the humorless drones who get angry by having something spoiled (while, for some crazy reason, not expecting to find spoilers on a humor site), but you really can't deny that many people get a hefty amount of genuine amusement from it. It is not only for the benefit of the ones who posted/edited the spoiler, but also the readers who already know the spoiler and get to giggle at the thought of people seeing the spoiler who didn't mean to, for those who didn't know the end but have a sense of humor about it all, and so on.
- Lastly, I'm not too fond of your chosen process here: taking it to IRC, discuss the issue with a bunch of people who probably have had little involvement in the HP article, and then come in and overrule the general wishes of the people on the talk page who *have* been editors to the article. That's poor form. The two main editors on the HP article in the past year have been Ace Attacker and myself (both supporters of the spoiler). Where have you been during the daily HP vandalism? What irony it is that I've been having to put up with, and to correct, vandalism to HP for over a year (on the order of "Harry has gay sex with Ron while Hermoime watches"), over and over, without any protection for the article. Probably due to all of the preteen fans, HP gets more vandalism than George W. Bush. And now protection is proposed, but only for the purpose of keeping one of the first laughter-inducing things to happen to the article out. If you actually cared about this article, why weren't you helping keep that garbage out? Why leave it to people whose opinions you clearly didn't care about when making your decision? -- Rei 23:18, 24 July 2007 (UTC)
- Actually you can't always expect someone to laugh off being spoiled, and they shouldn't be expected to. If you insult someone in good humour, or even in bad humour, then it's only words. But to spoil a plot for someone is to take away something they can never get back. When you visited the Cowboy Bebop article, had you at the time been waiting over a year to see it, only to have it spoiled before it was even possible to watch it? That's not something you have a right to expect people to laugh off. And speaking of which, the cowboy bebop article is entirely based on that spoiler, and that one joke gets old by the first few sentences.
- As you were looking for spoilers the moment they came out, I expect you're not the kind of person that puts much value in a good storyline, and so maybe can't appreciate the disappointment of having it spoiled against your will. Or were you looking for the spoilers specifically so you could spoil it for other people?
- My examples violate HTBFANJS? I disagree strongly. Everyone I've discussed it with except you thinks mixing the spoilers up with other well-known spoilers is a great idea, and very much in the spirit of HTBFANJS. And is quite the opposite of being random nonsense. Think of it as a parody of spoilers in general and the kind of person who likes to spoil things for people. As opposed to the sledgehammer to the face tactic of "ha ha I ruined a book for you" (a type of (so-called) humour discouraged by HTBFANJS in fact), it takes well-known spoilers and mixes them up so at first they think they've been spoiled, but then recognise it as being from something else. Hilarity ensues. Or if they don't recognise it, either they think they've been spoiled and you get your laugh, but then they are amused when they find out it wasn't actually spoiled at all, or they just think it's not a real spoiler and they think "Oh ha ha, that uncyclopedia and its absence of facts, whatever next?".
- In particular, I don't understand your objection to fake spoilers, for things that don't actually happen. I mean, if the person hasn't read it, you get all the satisfaction of someone complaining that they've had the book spoiled, and the person appreciates the joke with a delay of however long it takes them to find out it's not real. The only thing you're missing out on is actually having spoiled it for someone. That's pretty dickish if you ask me.
- I see you use "truth is funnier than fiction" as an argument on the talk page. Actually, there's a caveat to that: only when the truth is funny. And even then, it in no way means that you should put the actual truth there as the ultimate in humour.
- As for coming in and changing what the main author thinks; I do that nearly every single time I delete an article. Of course the author thinks it's funny; he wrote it, and he thinks he's funny. But that doesn't mean it is funny, no matter how much time he spent on it, and how little time I spent on it. The amount of time I've spent on the article has no relevance. But that doesn't mean I don't care about your opinions; the world isn't black and white like that. Your looking after of George W. Bush is good, the removal of vandalism from harry potter is good, the addition of the spoilers is bad. That's the way it goes. → Spang ☃☃☃ 02:21, 25 Jul 2007
- "When you visited the Cowboy Bebop article, had you at the time been waiting over a year to see it, only to have it spoiled before it was even possible to watch it?" -- I had been watching the series as it unfolded on Cartoon Network/Adult Swim, so yes, I was emotionally invested in the plot. I still smacked myself for being dumb enough to go to an article on Uncyclopedia for a series I hadn't finished watching. The joke was on me, and I laughed -- probably a lot more than I would have had the article just been your typical uncyclopedia fare.
- "As you were looking for spoilers the moment they came out, I expect you're not the kind of person that puts much value in a good storyline" -- Now where did that come from? I love a good storyline. I added those spoilers simply to replace the *old* spoilers in the old header, as soon as I found out that the new spoilers were out.
- "And is quite the opposite of being random nonsense." -- And how is that? You're making up a bunch of random stuff and saying that they happened, with no rhyme or reason to it. How is that not random nonsense? That's the very definition of random nonsense. You could say "Harry kills Dumbledore and marries McGonagall" or "the last Horcrux is buried in the Louvre" or "Neville's skill comes from an experimental drug, which is slowly regressing and making him even more inept" (and in fact, you *do* have it say all of them), and it wouldn't make a whit of difference. It's interchangable, random nonsense.
- "a type of (so-called) humour discouraged by HTBFANJS in fact" -- And what rule are you referring to on that? It says no such thing. On the other hand, it most definitely is humor, as it has made multiple people laugh. Probably a lot more than the rest of the article ever has.
- "I particular, I don't understand your objection to fake spoilers" -- They're random nonsense. Transparently random nonsense. HTBFANJS is there for a reason, Spang.
- "But that doesn't mean it is funny, no matter how much time he spent on it" -- That might be true in this case if it was only my opinion that it was funny. But it's not. You're overruling what multiple users, including the main contributors to the article and a majority on the talk page, have supported -- all based on the opinions of people who haven't been involved in the article. Can you see why this might be just a wee cause for offense to me? -- Rei 15:28, 25 July 2007 (UTC)
- Just to give you an idea of what Ace and I have been dealing with, I stopped cleaning up the article as soon as you RVed. Now the article begins with, "Harry Potter, dear readers, is a twat. Seven books were tasty by J.K Rowling." You're taking actions that are driving off vandalism-preventing editors from the article. Possibly the site as a whole. -- Rei 15:35, 25 July 2007 (UTC)
- And now it doesn't begin with that, it's been cleaned up already. If the article's that bad, a better use of time might be making the article better, rather than focusing on one line at the top.
- Looks like we're just going to have to agree to disagree then. And to be honest, I've seen more people against the spoilers than I've seen for them. → Spang ☃☃☃ 12:31, 26 Jul 2007
- And now it says "Dumbledore made an exception on account of frequent sexual escapades with older men, according to my fanfic her skill at baking Nuclear Muffins." And it's said that for the past two days.
- You're not just saying we must agree to disagree; you're saying that we must agree to do it the way you want it and disagree over whether that's right.
- Lastly, let's count. Against spoilers on the talk page: You, Kalir, and (just recently) SirCS1987, all of whom just showed up and hadn't been doing a damned thing to keep vandalism out before (in fact, Kalir was the one who added the nuclear muffins/sexual escapades comment). Supporting: User:165.21.154.17, myself, Ace Attacker, and Penfish, the middle two being the prime vandalism removers for the article. The tally was 2:4 Keep, but now is 3:4 Keep.
- Look, you have every right to say, "I don't give a damn what you or the majority of people who worked on this article think; I'm showing up out of the blue, and this is how it's going to be. Deal with it." Likewise, I have every right to say, "Who gives a damn that I was potatochopper of the month, that I made four featured images, three featured articles, and that I'm the sole vandalism preventer for over fifty articles and a contributor to vandalism prevention on many more. If you're going to treat contributors this way, I'm out of here." -- Rei 21:38, 26 July 2007 (UTC)
- Hmm, if Rei needs more convincing, let me just repeat what I said on the talk page:
- I like this new fake spoiler template. I hated the old one on this page for three reasons:
- 1. The spoiler template joke is getting old. REALLY old.
- 2. The original spoiler template (see Template:Spoiler) is the best spoiler template. Every spoiler template created since has been stupid, unfunny, boring and annoying.
- 3. Templates that spoil a whole book/movie simply aren't funny, just annoying. The funniest thing about Template:Spoiler is that it takes completely random spoilers that are mostly totally irrelevant to the actual article.
- So don't put it back, m'k?
- Also, to quote Rei, "but you really can't deny that many people get a hefty amount of genuine amusement from it." I bet most of these people you refer to are IPs/n00bs who think ED is the pinnacle of humour. We do not aim to please these people, that is ED's job.:15, 25 July 2007 (UTC)
- K, since I'm sticking my nose in, I figured I'd chuck in my 2c here also.
I'll keep it short and sweet. Yesterday on IRC I saw someone blatantly ruin the ending of Harry Potter 7 without any warning or provocation. One user who hadn't yet read the book was very upset about this, and understandably so. If I had not just days earlier finished it myself I too would have been very upset, having eagerly awaited this book for years. Think about other people's feelings before you put the Pursuit of Humour above all else. And those fake spoilers are not, as you put it, random nonsense; instead they blatantly refer to quite different and famous stories, The Da Vinci Code and Flowers for Algernon to name just two (and at a stretch, Hamlet). That is funny because immediately the reader knows (or should know) that it's a fake spoiler and appreciates the effort taken to not just do an actual spoiler. The humour is that it's an obvious fake spoiler, and thus, not a spoiler at all! I appreciate that you and others have been working hard to keep the article free of vandalism, but such is wikis, I'm afraid. It does not give you ownership of the article, especially not over admins and well-respected and established users. ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 02:39, 27 July 2007
- Well said! And for the record, I would have kicked whoever did that from the room. Though as I said before, something like that can never be undone, so it's worse than just insulting someone, which is why spoilers are bad. → Spang ☃☃☃ 06:36, 27 Jul 2007
- This seems to be slightly old, but just for the record, I completely:46, 30 July 2007 (UTC)
Spang, could you help me again?
Hey, Alksub won't stop ragging on my article. Could you like, make a tab or something that says "This article has Spang's seal of approval"? That would be awesome! Thanks! 08:25, 26 July 2007 (UTC)
- Well, Mordillo already took the ICU off, so it's ok for now. Perhaps you should put a notice at the top, saying that you need to have heard the audio to appreciate it, and make a more noticeable link to the audio. Then people who don't get it would be sure to listen to the audio before making their judgement. Dunno about a "spang approves" template... but I suppose there might be a use for some kind of "you have to be familiar with the subject to get this article" template. → Spang ☃☃☃ 09:47, 26 Jul 2007
- Hate to stick my nose in unbidden, however "You had to be there" articles kinda get my goat. In my opinion a good article should be amusing and able to be appreciated even by those who HAVEN'T read the book, seen the movie, bought the audio-book and own the collector's edition figurines in their original packaging, with an extra mylar layer around the outside to protect it from the drool of other aficionados. Because to me that kinda whiffs a little of elitism, which we all know is a Bad Thing™. I prefer an article which instructs and enlightens. Imagine if Wikipedia articles didn't actually explain what the thing the article is about was and instead only spoke to those who already know... ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 02:29, 27 July 2007
- Ah, but this one has a link to the "required listening" only one click away. I think it's ok here, as it only takes one click for the reader to become acquainted with what they need to have listened to to understand, should they wish to. And aside from that, I don't think that an article should be marked down because it's obscure and some people don't get it. On the contrary, it's probably even more funny to the people who do get the article that there's an article on it at all, balancing it out. As long as it's not vanity. It's unlikely that anyone will get all 23,000 articles on the site, so at the same time they shouldn't have trouble finding something they do get! → Spang ☃☃☃ 06:32, 27 Jul 2007
Cute pic!
Wow! Who knew you were so cute and adorable? I love the hat, do you have the matching cymbols? XD ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 05:00, 28 July 2007
- I've always been cute and adorable! Just in a very subtle way. I am indeed a master at the cymbals, and bouncing. And at the typewriter. → Spang ☃☃☃ 05:10, 28 Jul 2007
- Oh don't get me wrong, I've always known it, just hadn't seen photographic evidence ;) ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 07:10, 28 July 2007
BARHAH!
Add to userpage. --Lt. High Gen. Grue The Few The Proud, The Marines 00:38, 29 July 2007 (UTC)
- I shall find a way to fit it in nicely. Perhaps make it userbox stylee. → Spang ☃☃☃ 05:04, 30 Jul 2007
Or'se'la
It appears to me that, lacking the sense of unity that might inform them of their insignificance, these Gue’la have come to think that they might own the stars themselves, even the spaces in between them. Only by our presence, I think, might we now convince them otherwise. Yeah, I read your user-name policy and it's Gay 04:52, 30 July 2007 (UTC)
Uhhhh...
How did you know I was sympathetic towards starving children? --Lt. High Gen. Grue The Few The Proud, The Marines 17:20, 30 July 2007 (UTC)
- Score! I mean... my starving children will appreciate this very much. Yeees. → Spang ☃☃☃ 05:16, 31 Jul 2007
Problem with SpangSock!'s QVFD script
Since I assume that you are the owner of SpangSock!, I'd like to make a request. Because of a paranoid security feature, some browsers won't complete the XMLHttpRequest unless the URL points to uncyclopedia.org. I found the bug in the function QVFD in User:SpangSock!/uncyclopedia.js. Because this security feature isn't in IE's XMLHttpRequest(), it seems to work for some users in IE. --Starnestommy (Talk • Contribs • FFS • WP) 17:26, 30 July 2007 (UTC)
- Well, for a start... The script isn't SpangSock!'s, it's Villahj Ideeut's. The correct version is at User:Villahj Ideeut/QVFD.js. I've fixed the bug there, so just use that one instead. That account is just for testing stuff when I can't be bothered blanking my own js, anything that appears in there is probably just a test version of something that's already somewhere else. → Spang ☃☃☃ 06:14, 31 Jul 2007
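The bug and fix discussed above come down to the browser's same-origin policy: most browsers refuse an XMLHttpRequest to an absolute URL on a different host, while a relative URL always stays on the wiki's own host. A hedged sketch of that idea follows; the function name and exact handling are mine, not the actual code in Villahj Ideeut's QVFD.js.

```javascript
// Sketch: rewrite an absolute URL into a relative path when it
// targets the wiki's own host, so XMLHttpRequest stays same-origin.
// A URL on a different host is refused instead of requested.
function toSameOriginPath(url, wikiHost) {
  var m = url.match(/^https?:\/\/([^\/]+)(\/.*)?$/);
  if (!m) return url;                        // already relative: fine as-is
  if (m[1] === wikiHost) return m[2] || '/'; // strip scheme and host
  return null;                               // different host: refuse
}
```

Feeding the result of a helper like this to XMLHttpRequest would sidestep the "works only when the URL points to uncyclopedia.org" failure mode described above.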
Unbooks Jr - The Boy Who Could Nor Breathe
Huffed because it was "not original"? Do you mean it sucked or that it wasn't my own story?
- Dunno, it would help if you linked the deleted article... I can't find anything with the name "Unbooks Jr - The Boy Who Could Nor Breathe". If it was huffed as not original, it probably looked a lot like it was copied and pasted from somewhere else and not formatted wiki-style. If it was me, I could have made a mistake, but I can't check unless I know what the exact title was. → Spang ☃☃☃ 06:18, 31 Jul 2007
Hey Splangy!
A long time ago, when I was a mere n00b, I wrote an article on Maya Angelou. Then, I left for a number of months, during which time I think ZB huffered it. I just realized this today, and after checking the logs, the time frame fits. If I recall, it was pretty blatant idiocy and probably deserved to be banished. However, for old time's sake, would it be possible to resurrect the text of that page? I'll house it in my userspace if need be; I just want to see what the hell I wrote. Thanks bud! --THINKER 20:10, 1 August 2007 (UTC)
- Ok, I restored it for you. You can move it to userspace if you want, but I doubt anyone'll notice if you work on it there, or even if you don't. → Spang ☃☃☃ 12:24, 02 Aug 2007
Ban Alksub!
Whenever I put up something to be reviewed about Dirty Potter, he comes and immediately subjectively trashes it. He seems to stalk the article, and I don't know how! Please banninate him! He keeps calling the article "fan fiction"! Well, I guess if you want to get TECHNICAL, it would be. But it's just messing around with an audio editor and the article is supposed to summarize the audio! Please help! - 14:26, 2 August 2007 (UTC)
- Spang, will you protect Dirty Potter? Pleeeeeease? 06:52, 3 August 2007 (UTC)
Search Suggest
- I'm not sure where we voted for this, but it's actually getting in the way of me doing stuff. My browser has this handy thing where it remembers what I type into things, thus meaning that I don't have to remember what the exact title of that template that I'm currently looking for is. Search suggest prevents my browser from telling me. So, either give me the magical ignore code, or just remove it. -- Brigadier General Sir Zombiebaron 18:40, 3 August 2007 (UTC)
- The ignore code is in the description on common js; if you'd looked a little closer at the description you'd have seen it. Add disableSearchSuggest = true; to your user js to disable it. I might add a link to instructions to disable it at the bottom of the suggestions. Feel free to start a vote on it. → Spang ☃☃☃ 06:58, 03 Aug 2007
- Psh. Once I add that code, I'll probably forget this whole thing. Thanks. -- Brigadier General Sir Zombiebaron 19:05, 3 August 2007 (UTC)
- Also, it didn't work. -- Brigadier General Sir Zombiebaron 19:15, 3 August 2007 (UTC)
- Whoops, I'll see what's doing that, and possibly add the disable link at the same time. → Spang ☃☃☃ 08:07, 03 Aug 2007
- Ah, it was a typo, it should work now. → Spang ☃☃☃ 08:23, 03 Aug 2007
- Thanks, eh? -- Brigadier General Sir Zombiebaron 21:23, 3 August 2007 (UTC)
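The disableSearchSuggest opt-out discussed above is an instance of a common user-JS pattern: the gadget checks a well-known global before initialising, and a user who sets that global in their own user js is skipped. A minimal sketch of the pattern; everything here except the flag name is a stand-in, not the real search-suggest code.

```javascript
// Sketch: a gadget that honours a user-set kill switch.
// A user puts `disableSearchSuggest = true;` in their user js;
// the gadget checks the global before doing anything.
var initialised = false;

function initSearchSuggest(global) {
  if (global.disableSearchSuggest) return false; // user opted out
  initialised = true; // stand-in for attaching the suggest box
  return true;
}
```

The nice property of this pattern is that the kill switch is just an ordinary global, so opting out takes one line of user js and no changes to the gadget itself.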
Thanks :)
I just wanted to say thanks for adding a disable to your new predictive search thing. its a great). 22:28, 3 August 2007 (UTC)
- No problem, but I'll probably end up having to disable it for everyone too. Ah well! → Spang ☃☃☃ 11:17, 03 Aug 2007
About that little thing we discussed on IRC...
I thought perhaps you should read this. ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 05:25, 05 August 2007
- You, Spang, are awesome. It's unfortunate that we lost such a promising young writer (^_^). In any case, thanks for banning her.-Sir Ljlego, GUN VFH FIYC WotM SG WHotM PWotM AotM EGAEDM ANotM + (Talk) 14:09, 5 August 2007 (UTC)
Tabs
Some of my tabs aren't working. Could you fix that? If you need help, here is my Javascript. -- 18:32, 5 August 2007 (UTC)
Actually, I think we might have a JS crisis again; all sorts of gadgets aren't working at the moment, including some tabs, quickedit and rollback. Are we having a wiki version problem again? ~
23:18, 5 August 2007 (UTC)
- Yeah, it's because the skin has changed slightly, and yeah I can probably fix it. I know why the tabs aren't working, have an idea as to why quickedit isn't working, but don't know anything about the rollback script, so no promises there. → Spang ☃☃☃ 05:36, 06 Aug 2007
- Is it fixed now? If not, could you paste in any errors showing up in the error console (if you're in ff) → Spang ☃☃☃ 07:28, 08 Aug 2007
- Well, first of all mighty thanks for your help :) most of it is working, including the quickedit function. The only thing which wasn't resolved was the rollback function I stole from Oli. And I keep getting a weird tag over and over again. It's recreated every time...~
16:53, 8 August 2007 (UTC)
- Got one relevant error -
Error: fcktest is not defined Source File: Line: 120
That error line looks quite naughty actually, something that Oli would write himself :) ~
16:56, 8 August 2007 (UTC)
- Yup, that's olipro's code, not covered by your Spang JS Warranty™. Ask olipro to fix it. As for the extra tab, try clearing your cookies. → Spang ☃☃☃ 08:34, 08 Aug 2007
edit follower?
it looks like i have one follower.
the FSM
- I always take a bit of time out every day to pay my respects to The Noodly Master. Ramen! → Spang ☃☃☃ 08:36, 08 Aug 2007
edit More QVFD problems
For some reason, User:Villahj Ideeut/QVFD.js isn't working (again...). This diff seems to work.) 20:30, August 10, 2007
- It should work now. And that old revision shouldn't work. It could work all the time if someone could convince wikia to validate their HTML before adding anything to uncyclopedia. → Spang ☃☃☃ 08:49, 10 Aug 2007
edit Weee!
Bugs! Woo! The {{title}} template isn't working. This makes me sad. The much maligned "username" one (<insert name here>) is not usernaming, as well. Also, the non-existent {{Cabbage}} template is broken again. Woo! Sir Modusoperandi Boinc! 10:06, 14 August 2007 (UTC)
- Those are because there's an error in some of the js that wikia have put into the skin. Unfortunately, it's in the parts that I can't change. If I could it'd have been fixed yesterday, but wikia do seem to like their bugs. I mean, this is the kind of error that would show up as soon as the script is run the first time, so I've absolutely no idea how it could happen, unless it wasn't tested at all before they added it sitewide... and people wonder why we get these bugs so often.
- Oh, and I fixed the cabbage template. → Spang ☃☃☃ 12:03, 14 Aug 2007
- Oh...my...god. How long did that take to make? Excuse my 80'sness, but the cabbage template is totally wicked. This means, of course, that the next update will break it. Pity. Sir Modusoperandi Boinc! 12:12, 14 August 2007 (UTC)
- Well, I spent 600 hours in paint, meticulously colouring every pixel, only to discover that this site lets you do it automatically. Ah well.
- But if you think that's impressive, for something similar that is actually a million times more wicked, see this poster of the Godfather, made using the entire script of said film. Check out the other posters too. Pretty durn impressive! → Spang ☃☃☃ 07:11, 15 Aug 2007
- It's like magic. Sir Modusoperandi Boinc! 07:30, 15 August 2007 (UTC)
edit Gay-Z
Hi I am new here, and wanted to know if this article was OK: Gay-Z. If its not, please let me know that you've huffed it. Thanks. Fuck Gay-Z 07:32, 15 August 2007 (UTC)
edit N00b-licious question
Good morning/afternoon/evening/night/no preference (delete as appropriate) Spang. Quick question or so: Eugene_Worley - this was on VFD on Aug 3rd, and huffed by your good self on Aug 4th. It was then re-created in what appears on first glance to be an identical form on Aug 9th by an IP. I spotted this and was going to pop the IP up for ban patrol, when I noticed Cs1987 had ICU'd it. I checked in with Cs, and he'd forgotten it was on VFD. So, as a n00b concerned with not pissing off admins, the question is: do I proceed with the Ban Patrol nom, wait for the results of the ICU, Pick up the phone booth and aisle, or something else? Tatty byes --Sir Under User (Hi, How Are You?) VFH KUN 08:19, 15 August 2007 (UTC)
- Dunno, it might be an actual attempt to rewrite the deleted article into a better form, but it might just be a recreation. Best thing to do is ask him on his talk page if he's planning on making it better; if yes, give it the ICU's week; if no or no reply, delete it and warn him; and if he does it again, block. Easy! → Spang ☃☃☃ 05:36, 21 Aug 2007
- Eh, someone's tidied up the formatting, but it still looks like a cyberbullying page. Ah well, it's not been touched recently and the ICU is almost up. If it survives that, I'll VFD it or at least park it in the Poopsmith's lounge. Thanks for the response anyway. May you be blessed with cake in the near future. --Sir Under User (Hi, How Are You?) VFH KUN 11:53, 21 August 2007 (UTC)
- I didn't actually check it before, but it did look like a lot like vanity. But it has somehow disappeared now. Oh well! Oh, and I'm not really a fan of the vanity fix for ICU, nor the {{vanity2}} that came before it... if it's vanity, the article's bad to the core and can't be fixed, so just use qvfd. If there's a hope it can be made not-vanity, ICU it, but otherwise it's just giving it another week to live. → Spang ☃☃☃ 12:17, 21 Aug 2007
edit Helpin' With Sig
22:46 August 16, 2007
edit Thanks
User:Yellow 130/Yellow award Here's an award for unbanning me! Now to be useful! User:Yellow 130/sig 05:20, 21 August 2007 (UTC)
edit Dearest Spangle G. Glittersprinkles
When you add a VFH page to the feature queue, please add the page to the appropriate Top 3 of <month> page. This will save a bunch of time at the end of the month (as no one will have to do it all at once). Clothing, as usual, is optional. Sir Modusoperandi Boinc! 07:45, 21 August 2007 (UTC)
- Ok, that sounds like more work, so instead I've set up this page: User:Spang/test1. You may move it somewhere more permanent if you like. It shows you the code that will be needed to dump all the articles featured in any particular month into a top10 voting page in one go. Well done for volunteering for the task, I'm proud. You'll need to wait until all the articles of the month have been featured or queued, but clothing is still optional. → Spang ☃☃☃ 01:43, 21 Aug 2007
- So, what you're saying is that I should be doing what I've been doing, basically the same way that I've been doing it? Sir Modusoperandi Boinc! 14:41, 21 August 2007 (UTC)
- Oh, wait...I hit edit. Sneaky. You're all full of code, aren't you? <*poke*> Sir Modusoperandi Boinc! 14:42, 21 August 2007 (UTC)
edit I was wondering...
Seeing as you're good with CSS, I was wondering if you could answer me a question: is it possible to create a Div layer for use on Uncyc with an image as the background instead of a colour? I've tried using my limited skills but I can't get anything, so I thought I'd ask an expert. Thanks in advance :) --) 23:12, 21 August 2007 (UTC)
- It's possible to do if you directly position an image under where you want it to be, but other than that it's not really possible. Mediawiki blocks CSS background images on divs for some reason. → Spang ☃☃☃ 09:45, 22 Aug 2007
- Damn. Okay, never mind, thanks) 13:32, 22 August 2007 (UTC)
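The direct-positioning workaround Spang mentions would look roughly like this — a hypothetical sketch, not actual Uncyc markup; the image is layered under the div's content with absolute positioning:

```html
<!-- Sketch: the image is absolutely positioned to fill the div, and the
     text sits on top of it, faking a background image. "Example.jpg" is
     a placeholder filename. -->
<div style="position: relative; width: 300px; height: 100px;">
  <img src="Example.jpg" style="position: absolute; top: 0; left: 0;
       width: 100%; height: 100%; z-index: -1;" />
  <p>Content that appears over the image</p>
</div>
```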
edit IRC
Now. --:06, 22 August 2007 (UTC)
edit Your yearly message from Splarka
Even though I don't hate you and FU Spang is a stupid meme...
- FU SPANG
I was supposed to chan it 50 times, but then I'd get banned for spamming on a sysop's talk page. --Lt. High Gen. Grue The Few The Proud, The Marines 21:50, 22 August 2007 (UTC)
edit Missing person report .
Well, as you and Braydie appear to be friends (could be wrong), I was wondering if you know what happened to him, as he appears to be missing. Richardson j 22:18, 24 August 2007 (UTC)
- He's probably just still taking a break from uncyc. These new admins never do stand up to the test of time. → Spang ☃☃☃ 01:37, 25 Aug 2007
- At least I've got you as my backup administrator Richardson j 00:13, 26 August 2007 (UTC)
edit Long Live the Health Meter!
Thank you so much for the template! We've taken a system that (so they tell me) works simply and looks complicated and replaced it with a system that looks simple and probably has more lines of code than the flight control computer for Apollo 11. I'm reasonably sure that's progress. -Tritefantastic 00:18, 25 August 2007 (UTC)
- Minor note: Gerrycheevers asked for a clarification that the health meter is for removing articles, and that overall votes are used for approving them. I suggest this. -Tritefantastic 01:50, 25 August 2007 (UTC)
- You're welcome, and it's not actually that complicated! Yes, the health meter is for removing only, and the articles will still be featured by score alone, and age of nomination in a tie. → Spang ☃☃☃ 01:16, 25 Aug 2007
edit UnCameron footnote problems
Hello. For some reason the footnotes have disappeared form UnCameron. You seem to know your stuff and I was hoping you could help figure out why this happens. The problem seems to be something to do with the poll on the page, as the footnotes return if that isn't there. Could it be something to do with the skin upgrade? -- 15Mickey20 (talk to Mickey) 12:31, 28 August 2007 (UTC)
- Yep, it looks like the poll is causing the problem. The <choose> extension was doing that before too, so I don't know if it's a problem with the poll or with the footnote extension itself. It's nothing I can fix though, so best thing to do is to ask Sannse to tell the technical team about it, and hope they fix it. In the mean time, you could either copy and paste the footnotes in manually and keep the poll, or remove the poll, though that might cause the votes to reset. Or just wait till somebody fixes the problem, but there's no telling when that would be! → Spang ☃☃☃ 01:33, 28 Aug 2007
- Ok. I'll ask Sannse about it. Thanks for the help. -- 15Mickey20 (talk to Mickey) 13:52, 28 August 2007 (UTC)
edit Dearest Spang
Someone found the top of August before August was done. As it was the not-your-code version, it's only got 30 days' worth of pages on it. I went on IRC and got Flammable to protect it. When the 31st/Aug gets featured, can you unlock it and add the last day? Or just paste your code over the existing bits (as people voted before they were supposed to). Or, something else entirely. Sir Modusoperandi Boinc! 06:34, 30 August 2007 (UTC)
- Ok, I queued the article for the 31st and added them all to the page, and set it to expire on the 1st September. I also set up September's page's protection to expire on the 30th September, so there shouldn't be that problem next month. → Spang ☃☃☃ 05:24, 30 Aug 2007
- Can we have a new problem next month? Aw, c'mon! You promised! Sir Modusoperandi Boinc! 18:17, 30 August 2007 (UTC)
- Sure thing. Next month we'll be working on world peace. The month after, how to eat biscuits without getting crumbs everywhere. Unless you have more challenging problems to solve. → Spang ☃☃☃ 10:29, 01 Sep 2007
- This G-spot is problematic, I hear. Been looking for it myself for a while. That hunt at the DMV was a bust. Pretty much ruined the ladies' pictures for their drivers licenses. Live and learn, I guess. Sir Modusoperandi Boinc! 13:37, 1 September 2007 (UTC)
edit Cool
as a matter of fact, that's exactly enough features queued for me. however, i'm not exactly sure what you mean by "I'm not substing and fixing it for you. When you do, don't dare screw up any of the code!" but if it's something i need to attend to, please instruct me as to my further duties! -- 06:49, 3 September 2007 (UTC)
- It means that I assume you're going to copy the template in and mix up the words so that they're like the VFH template before and the rest of the article. Don't screw up the code means that there's parts of the FA template that are important for feature stuff, so when you change the words, only change text you can see, and nothing else! And be sure to leave all the links as they are. Other than that, the world is your oyster! → Spang ☃☃☃ 07:04, 03 Sep 2007
edit Images
Is it possible to restore an old image that has been reuploaded? Some douche overwrote an image in UnNews:Russia's communist resistance sends bear army to retake Romania, four US tourists left dead. The pic was Image:Bear.jpg. If possible, could you please restore some older version of that pic so that it looks like just a:31, Sep 4, 2007
- I did it for you. Ah, the wonders of the (rev) button! – Sir Skullthumper, MD (criticize • writings • SU&W) 20:16 Sep 04, 2007
- Beat me to it! You won't be so lucky next time! → Spang ☃☃☃ 08:39, 04 Sep 2007
- Really? Wow, I didn't know you could revert images. That's the last time I try to edit uncyc on my friend's computer... Thanks for the:02, Sep 4, 2007
edit Nooooo!!!!!!!!
No! I said ban, not block, ban! BAN! Ban me Talky Talky 00:27, 6 September 2007 (UTC)
edit Signatures
I was just wondering, how do you make the timestamps on your signature automatically <small>? I was looking into doing that for my signature-:50, 9 September 2007 (UTC)
- Go to your preferences and add after your sig template:
<small>{{subst:#time:h:i, d F Y}}</small>. And then you have to remember to only sign with 3 tildes (~~~) or you'll get two timestamps. Simple! → Spang ☃☃☃ 02:48, 09 Sep 2007
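For anyone curious what that parser function actually produces: `#time` uses PHP-style format characters, so `h:i, d F Y` comes out like `02:48, 09 September 2007`. Here's a JavaScript re-implementation, purely for illustration — the real expansion happens server-side in MediaWiki when the sig is substituted:

```javascript
// Sketch of what {{subst:#time:h:i, d F Y}} expands to, done in plain js.
function mwTime(date) {
  const pad = n => String(n).padStart(2, '0');
  const months = ['January', 'February', 'March', 'April', 'May', 'June',
                  'July', 'August', 'September', 'October', 'November',
                  'December'];
  const h12 = date.getUTCHours() % 12 || 12;        // "h": 12-hour, padded
  return pad(h12) + ':' + pad(date.getUTCMinutes()) // "i": minutes
       + ', ' + pad(date.getUTCDate())              // "d": day of month
       + ' ' + months[date.getUTCMonth()]           // "F": full month name
       + ' ' + date.getUTCFullYear();               // "Y": 4-digit year
}
```

So `mwTime(new Date(Date.UTC(2007, 8, 9, 2, 48)))` gives `02:48, 09 September 2007`, matching the sig timestamps on this page.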
- Done, thanks!!! --User:Manforman/sig202:54, 09 September 2007
edit Coping with loss.
Well, Braydie's gone and I'm grieving at the moment at the loss.
He was the one I got advice from.
Now he's gone, I'm lost without him.
I'm just not sure why I'm talking to you.
Maybe to let you know I will need advice from you from time to time.
Maybe this is a
misstake mistake.
Richardson j 22:53, 11 September 2007 (UTC)
- Don't worry, people leave all the time. If you need to know anything, just ask at the village dump, or in the help forum. That's what they're for! → Spang ☃☃☃ 12:38, 13 Sep 2007
edit You are hereby deemed "Competent in the line of duty"
-- 21:53, 12 September 2007 (UTC)
edit VFP
I've knacked up a proposal for VFP and the <DPL> tags aren't working. Do you know how to fix the tags?-:08, 12 September 2007 (UTC)
- Well, probably because just changing all occurrences of "VFH" in the code to "VFP" is unlikely to make it work somewhere else! It's probably not a good idea to try and do something like that unless you understand all the code in the first place. The DPL extension is one of the most complicated ones to use, so it'd be quite complicated to tell you exactly why it's going wrong without having to explain a lot about how it works.
- I also don't know if having subpages for VFP is a good idea; I mean, there's only ever 3 or 4 pictures on there at once, so it seems hardly worth the bother of making the subpages thing work. And the process of featuring pictures is already one of the most complicated things on uncyc, and it probably wouldn't help that either.
- And on top of that, VFP has changed so much recently, I don't know if people would go for another change.
- If you still want to try and convince people it's a good idea, you're welcome to, but you'll probably need to learn how to use the DPL extension first, if it's still to be formatted like VFP, and not VFH. Which is no small task! → Spang ☃☃☃ 12:38, 13 Sep 2007
- Well, I guess it isn't necessary to pursue it than. I didn't realize those DPL tags and the subpages were so complicated. --Sir Manforman 01:18, 13 September 2007 (UTC)
edit Conservation Week 2007
Well, Conservation Week is over, so the template can be removed off the main page. Thanks. ~ Jacques Pirat, Esq. Converse : Benefactions : U.w.p. 15/09/2007 @ 04:32
edit Thanks on forum, Template Speed
I know I just do templates, but I just did a major revision on one of my articles! Anyway, thank you for the answer to my question. At least YOU and that other guy weren't huge jackasses about it. • <-> 20:18, 16 September 2007 (UTC)
- Heh, you're welcome. On any topic on templates, you generally have to put up with half the people making no sense, and the other half telling you you shouldn't be doing it. But if you're lucky one or two people might actually answer your question. ;) If you need to know anything else, you might as well just ask here instead of the dump, as I'm likely to be the one that answers your question there anyway! → Spang ☃☃☃ 10:23, 17 Sep 2007
At last, someone who understands my obsession!!!! -- • <-> 16:14, 17 September 2007 (UTC)
edit Hey...
Your TaggedNewPages.js told me that wikia's bad html is screwing it up. It also suggested that I report it here, so that's what I'm doing. Your .js is quite an intelligent creature. I think I'll call him Teddy.:12, 17 September 2007 (UTC)
- Yeah, I figured it saying that was better than it saying it had tagged them when it hadn't. There's no point fixing it right now, because it's the spotlight code that's screwing it up, and wikia keep changing it but never fixing it. So I'll wait till wikia stops messing around with the code for it, at which point it should either work, or be easy to work around. → Spang ☃☃☃ 02:51, 17 Sep 2007
edit Regarding {{VFH}}
Yeah, I know you've done quite your due diligence on anything involving the Votes for Highlight page and probably wish that it never came within 50 miles of your presence, but I have a question (I'd do the coding and whatnot, it's not too difficult). In addition to the featurecode on Template:Featuredarticle/queue, could there perhaps be a link to automatically add the Featurecode on Template:VFH? It should be relatively easy, but I figured I'd ask your opinion, since you're rather the architect of the whole thing.-Sir Ljlego, GUN VFH FIYC WotM SG WHotM PWotM AotM EGAEDM ANotM + (Talk) 23:27, 20 September 2007 (UTC)
- Well, first of all, why? It'd be more trouble to do it that way than with how it's done now. Second, it used to do this by showing the current date, but I took it out, because it's not so simple to get the date of the next feature slot. It would probably be more trouble than it's worth, as the template that gets that date is made specifically to be subst-only, and wouldn't work unless {{VFH}} was substed also. Third, featuring needs to be done in a specific order, with placing the {{FA}} feature code done last, so putting the feature code there would be too tempting for people who don't know how the system works to place it in, thinking they are helping, and mess up some feature stuff. → Spang ☃☃☃ 08:24, 21 Sep 2007
- First of all, I didn't know about that handy {{tl}} template. I'll keep it in mind, thanks. Next, I was simply considering the cool and sexy admins who add the feature. They tend to like the easiest possible scenario, and one click is always easier than....more than one. In any case, it's a superficial request that I just figured out upon completing my newest article and saying "Hey, I should do something actually useful." But no matter.-Sir Ljlego, GUN VFH FIYC WotM SG WHotM PWotM AotM EGAEDM ANotM + (Talk) 22:49, 21 September 2007 (UTC)
- Yeah, I understand, but it's actually easier the way it is now. Don't let it put you off suggesting further improvements though! → Spang ☃☃☃ 02:42, 22 Sep 2007
edit I normally find the "on this day" to be ****
But whoever did September 26, Commemoration of the Brick Wall Day, must be thanked, & I don't know who I should thank
- According to the history, the person you should be thanking is User:TheRealSheep, though they haven't made any edits since about a year ago. → Spang ☃☃☃ 02:16, 27 Sep 2007
edit Heeeelllllll ellllllllllppppppp
I did an experiment in my sandbox here and for some strange reason I'm unable to get rid of my edits (most probably at my end). If you could do me a favor and clear out everything underneath the ==Sandbox here== header (just ignore the do not edit warning), thanks. And I may even consider you as a replacement for Braydie. Richardson j 02:55, 27 September 2007 (UTC)
edit I need your help
Okay once again i'm trying to sort my sig out (Again) and well basically i'm basing it on Famine's sig. Now I want the shortened timestamp the most (21/08/07 16:24) but I also would like the sig to expand using this image ( (Talk) 22:10, 28 September 2007 (UTC)
My sig code
<small><small><span class="nounderlinelink plainlinks" style="font-family: Garamond;">[[User:Jocke_Pirat/TheRegister|<span style="color:#666666">Senator, </span>]][[Forum:Join the Grue Army! (Yes, again.)|<span style="color:#666666">Marshal</span>]] <span style="color:#666666">&</span> [[User:Jocke Pirat/UNSOC|<span style="color:#666666">Minister</span>]] [[User:Bonner|<span style="color:#666666">Bonner</span>]] [[Image:Is_loading_3.gif|12px|Wait...]] [[User Talk:Bonner|<span style="color:#666666">Speak!</span>]]</span></small></small>
Ok, to add the small timestamp, you just need to go to your preferences, change the nickname box to read
{{nosubst|user:bonner/Sig}} {{subst:#time: d/m/y h:i}}, and then remember to only ever sign your posts with three (3) tildes, not four, or you'll end up with two timestamps.
To make your sig expandable, the code is
<span class="sigexpand">image here <span class="sighidden">hidden popout stuff here</span></span>
You should probably be able to work it out from that, if not I could help you more. → Spang ☃☃☃ 01:05, 29 Sep 2007
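For the curious, classes like `sigexpand`/`sighidden` are usually backed by a hover-reveal pair of rules in the site CSS — this is a guess at the pattern, not a copy of the actual stylesheet:

```css
/* Hidden popout, revealed on hover — the usual pattern for expandable sigs. */
.sigexpand .sighidden {
  display: none;            /* popout hidden by default */
}
.sigexpand:hover .sighidden {
  display: inline;          /* shown while the sig is hovered */
}
```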
- Bugger me spang that's bloody fantastic :D Thanks so) 29/09/07 08:41
edit Template:Vfd
Hi Spang it's Cajek, ideas for VFD template to make the VFD easier:
- Someone should add Special:whatlinkshere/page to it for research purposes.
- Also a link to the history of the page, because people forget.
- There should probably be a link to a gallery of all the images that are used on that page, for QVFD in case it IS deleted.
- The VFD should sound a little less like there's no hope: "This page does not fit in Uncyclopedia, or is not funny with little chance for redemption" should be changed to "This page may not fit in Uncyclopedia, or may not be funny with little chance for redemption"
What do you think? -- • <-> 18:52, 29 September 2007 (UTC)
- Well the history and "what links here" links are already on every page, and adding them to the template would be a bit redundant. And I don't think there's a way to list all the images used on a page, and definitely not if it's been deleted. I changed the wording to what you suggested though. → Spang ☃☃☃ 04:56, 30 Sep 2007
edit Prepare for the thanking of your life!
Prepared yet? TOO LATE!
Thanks for voting for the page with the ridiculously long:20, Oct 3
edit Oh noes!
The {{title}} template is bust! Again... Still... Sir Modusoperandi Boinc! 02:25, 4 October 2007 (UTC)
- It's working for me... Must have been fixed already. Or maybe I'm just that good. → Spang ☃☃☃ 04:15, 04 Oct 2007
- Odd. Is it IE, then? UnNews:Famous person does something should read UnEntertainment Tonight:Famous person does something. Sir Modusoperandi Boinc! 05:02, 4 October 2007 (UTC)
- It works for:09, 4 October 2007 (UTC)
- Works for me in IE too. Are you sure javascript isn't turned off on your computer? → Spang ☃☃☃ 07:52, 04 Oct 2007
- Javascript? Computer? It doesn't work here, I'll check a different computer when I get home...not that I'm at work (/me looks over shoulder) Sir Modusoperandi Boinc! 09:53, 4 October 2007 (UTC)
- Ah. It works on both IE and FF at home, which makes it a work thing. Did I say "work"? I meant "coffee shop" or "something else"... Sir Modusoperandi Boinc! 01:01, 8 October 2007 (UTC)
edit Once Again
I am in need of your assistance...
(Sorry) :S
I was wondering how you replaced the Uncyc logo on your userspace with something else. I asked on IRC and was told to ask an admin... and you seem to be the best at this sort of thing. I have the image where it should be on my userspace but it appears under the Uncyc logo. 6, 17:03
- I didn't replace the logo, it's just a positioned picture. It just happens to be the same colour as the text, so it just looks like it goes over it, and actually goes under it like any other picture will do. → Spang ☃☃☃ 23:55, 06 Oct 2007
edit Javascripts
I was wondering, could you help me with my javascript. I needed it so when you revert an edit, the text in the edit summary will come up as
Reverted edit(s) of $1 (talk) to last version by $2, and when you add a maintenance template (e.g. {{Expand}} and {{VFD}}) to an article, there will be an automatic edit summary,:06, 6 October 2007 (UTC)
- I fixed your js, you hadn't copied the functions properly. They should both do what you asked anyway. → Spang ☃☃☃ 00:23, 07 Oct 2007
- Thanks. I was wondering if (1), when I click the VFD button, the VFD template would appear at the top instead of the bottom, (2), if the added edit buttons would only appear when I'm editing an article in the mainspace or UnNews space,:40, 7 October 2007 (UTC)
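The `$1`/`$2` summary template discussed above is typically filled in by a simple placeholder substitution in the user js — a hedged sketch with a hypothetical helper name, not Spang's actual code:

```javascript
// Fill "$1", "$2", ... placeholders in an edit-summary template with the
// corresponding values (e.g. the reverted user and the previous editor).
// Unmatched placeholders are left alone.
function fillSummary(template, values) {
  return template.replace(/\$(\d+)/g, (match, n) => {
    const v = values[Number(n) - 1];
    return v === undefined ? match : v;
  });
}
```

For example, `fillSummary('Reverted edit(s) of $1 (talk) to last version by $2', ['SomeVandal', 'Spang'])` yields the finished summary with both names dropped in.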
edit Your IRC Javascript Help
edit Undo an image update .
Suspect image
It appears someone uploaded this image in place of an old one. As an administrator, could you revert this image to the image of a cute anime-styled girl, as this new image is completely unrelated. From your friend of many ? years Richardson j 22:35, 11 October 2007 (UTC)
- Don't worry.
- Somebody already fixed it.
edit Heeeeeelp Meeeee...
Someone on IRC told me you know how I can get my Unicode fonts upgraded. Is this true? --SPY Administrator (Complain|I rock|In memoriam) HMRFRA
WH 22:10, 17 October 2007 (UTC)
- What do you mean, upgraded? You can download a unicode font, but you can't upgrade a font. Also, change your username. → Spang ☃☃☃ 11:24, 18 Oct 2007
- He asked on IRC because he can't see Tom's commie symbol (☭). I thought that he was using an older IE, as it displays as a box on it, but he's tried both the newest IE and FF. I told him to ask you because you're the Carnac the Magnificent of such nerdy things. Also, this isn't my talkpage. Sir Modusoperandi Boinc! 11:55, 18 October 2007 (UTC)
- In that case, he needs to install a font that has the unicode symbol he wants to display. Unfortunately, fonts with all the unicode characters (there's currently 96,382 of them) are hard to come by if you haven't got Windows XP/Vista or Microsoft Office installed. Follow the instructions here to install it if you've got Office installed; otherwise this page lists many fonts that support the misc. symbols, which is probably what you're looking for. → Spang ☃☃☃ 12:15, 18 Oct 2007
edit Thingy!
Hey there, Spang. I gave up on the cow thing, so here's this instead.
hooray!)}" >philanthropist!! 02:26, 21 October 2007 (UTC)
edit ABUSE!
You abused my user page! :)
-:52, 23 October 2007 (UTC)
- Like z0mg! :O -:58, 23 October 2007 (UTC)
edit DPL
Thanks for getting my DPL to work. Nothing like some good, useless, yet shiny functionality on the ol' userpage. I'm really starting to dread "upgrades" around here... – Sir Skullthumper, MD (criticize • writings • SU&W) 23:24 Oct 24, 2007
- You're welcome. I do like the ol' functionality too, there's nothing like putting hundreds of hours work in for 10 seconds saved in the future! → Spang ☃☃☃ 23:29, 24 Oct 2007
- Not at all. It serves a function, which it does rather well. The function just happens to be useless, and takes more effort to implement than it does to manually perform it. But that's most of technology anyhow. I'm being technological! – Sir Skullthumper, MD (criticize • writings • SU&W) 16:06 Oct 27, 2007
edit Rick Roll
Okay, originally I had pasted a bunch of Rick Astley gifs, wrote down the lyrics to Never Gonna Give You Up, and said "FU Spang! You will not censor me!" But I have learned in the past that aggravating people does not solve your problem. So I will ask you nicely. Can you please restore my Rick Roll page? I barely even use it anymore. It was EnzoAquarius that would paste his version in forums, and Manforman would spam my page here and there. Once again, could you please restore it? It has sentimental:57, 27 October 2007 (UTC)
- Actually, I haven't spammed the template for 2 weeks, it was Capercorn who spammed the template 2 days ago-:50, 27 October 2007 (UTC)
Spang, I have a complaint for you as well regarding the Rickroll template. Why didn't you delete it and list it on UN:PT instead? :) —Hinoa talk.kun 16:01, 27 October 2007 (UTC)
edit 2 Things
First, it appears Braydie's back, & second, it takes 10 hours to download your talk page, & that's on broadband. Could you archive some of your talk page please, so those dialup fools don't have to wait an eternity for it to load up.
- Nah, I like it when it takes a long time... if you know what I mean. Page loading times are my thing, I mean. Yeah. → Spang ☃☃☃ 08:08, 05 Nov 2007
edit HEY!
HELP! --Andorin Kato 06:56, 5 November 2007 (UTC)
Please kill this user, revert his edits, and delete the pages he moved: Special:Contributions/Grawp_the_Jolly_Giant --AAA! (AAAA) 06:56, 5 November 2007 (UTC)
edit Signature Help
Hello, can you code me a siggy with my name in bright red with 12px font and a little fire right next to it please? Thanks, -Razorflame 18:48, 8 November 2007 (UTC)
- Mayyybe... if I get bored enough. But it's more fun to learn how to do it yourself, right? Right! → Spang ☃☃☃ 18:04, 14 Nov 2007
edit RE: that IP Newpages forum
/me locks the door. I hope you enjoy it in there, voting.....for the rest of your life! Muahahahahaha! Sorry to bring my idiocy here, but the forum is locked. :( It makes me a saaaaaad p 13
- You can't blame me, I didn't protect it! I didn't even realise it was protected when I added the comment. But at least now only the cool people can contribute to the discussion. That's the best way to gain consensus, right? → Spang ☃☃☃ 20:10, 13 Nov 2007
- Yeah, only the cool people, now we can-- wait... I'm not an admin...are you saying I'm not cool?!? I'll have you know, sir, that my mom tells me I'm cool almost every night. Usually right after she tells me what a huge mistake I was. Ah, good, Nov 13
edit UnNews
Hey Spang, something very weird happened on the UnNews main page. The first two featured articles had something wrong with the pic presentation within the template (the images aren't showing, just the image text). When I switched to IE from FF, you don't even get to see the page. Can you take a look? ~
21:04, 17 November 2007 (UTC)
- I fixed this ages ago, just forgot to say here. In case you were still wondering or something. → Spang ☃☃☃ 21:44, 28 Nov 2007
edit Kitten Huffing
Why did you un-nominate it? --—The preceding unsigned comment was added by Benno briton (talk • contribs)
- You can see the reasons for removed nominations by looking at Uncyclopedia:VFH/Failed (the "recently failed" link in the nav bar at the top of VFH). In this case, I removed it because Kitten Huffing has already been featured 3 times. → Spang ☃☃☃ 21:43, 28 Nov 2007
3 times on what? --—The preceding unsigned comment was added by Benno briton (talk • contribs)
- Uhh... 3 times on uncyclopedia. You do know you're on uncyclopedia, right? And you do know what VFH is for, right? You nominated the article to be featured on the front page. It's already been on there 3 times. Perhaps you should also read the Beginner's guide. → Spang ☃☃☃ 21:50, 28 Nov 2007
edit thanks spang
kbai --Charitwo 03:00, 29 November 2007 (UTC)
edit Yoinxx News!
Hello, Yoinxxer Sp:22, 29 November 2007 (UTC
edit Article of da year 2007
Do you know when voting will start?
- The voting will start on January 1st, 2008, or as soon after that as someone can be bothered starting the voting. → Spang ☃☃☃ 20:18, 29 Nov 2007
edit My IP edits
Before I created this username, I made many edits under the IP address User:68.111.167.39, and I would like to merge my anonymous contributions with my logged-in ones. Is there any way I could do that? --YeOldeLuke 23:57, 29 November 2007 (UTC)
- Nope. Well, you could try and get the wikia techs to go into the database and change the values themselves manually, but there's a less than 0 chance of anyone doing that. The best thing to do would to create a userpage for the IP and just redirect it to yours or put a note on it saying it's you. → Spang ☃☃☃ 01:57, 30 Nov 2007
- Can I still use the anonymous edits to help my edit count reach 500? I want to adopt somebody, but I need 500 edits first. --YeOldeLuke 20:29, 2 December 2007 (UTC)
Oh, noes: part XIVRX
Trying to be a good cowboy, after noticing that Uncyclopedia:VFH/queue has nothing for today, I attempted to featurate (or featurificate) Jaws did WTC (the high score on VFH). As per Forum:Feature_queue, I did step one, refreshed, step two, step three...and it says ...This will give you a link to add the article to the queue. Click it and follow the instructions in there. Save.. There's no "save". The page is all "This page has been protected from editing, because it is included in the following pages, which are protected with the "cascading" option turned on...", and I'm all "No way!" and it's all "Way!". The code works, I just can't save it (well, I did save it here User:Modusoperandi/Jaws did WTC), and did put the templatey thingy on the Jaws/WTC page. What did I do wrong? Did I not turn it off and on again enough? Why is Template:Featuredarticle/queue protected from my sweaty hands? Sir Modusoperandi Boinc! 09:03, 30 November 2007 (UTC)
- Well, thanks for picking up where the admins failed, but the featured article page is protected because that's what goes on the front page. If there was something between "be an admin to edit this" and "wait 3 days to edit this", we'd probably use that, but it's best it be protected from evil. Plus there's lots of codey stuff in there that about 2 people understand, and is best not messed with. So yeah, you need to be an admin to feature. Another couple of days and you might have been fine! Was there no admins in irc? Not even a wikia helper/janitor that could be convinced? Must have been a slow night. I'll just skip today's feature and move it to December 1st for featuring. Just another one of the arguments to have more admins! → Spang ☃☃☃ 20:41, 30 Nov 2007
- There was nobody. Just me and a tumbleweed. I named it Darlene. We're getting married in the Spring. Sir Modusoperandi Boinc! 21:48, 30 November 2007 (UTC)
UNLIMITED POWER
~ 16:43, 3 December 2007 (UTC)
FU SPANG
*Flips the bird* EugeneKay wuz here (whine thank) 00:00, 4 December 2007 (UTC)
Just checking
Hey Spang! I sent you an e-mail via Uncyc the other day - did you ever get that? I'm not sure how important it is, though I suppose it seemed important at the time... c • > • cunwapquc? 14:56, 6 December 2007 (UTC)
- Ah, so you did! I don't check that email address too closely these days, so must have missed it. I shall reply forthwith! → Spang ☃☃☃ 19:39, 06 Dec 2007
Javascript troubles...
Hey Spang, I recently decided to implement some javascript to enhance my uncyc experience, and after deciding to steal all of U.U.'s js (sorry, I'll give it back, I swear!), I noticed that I had to edit the autowelcome script, so that I could display my own, personalized, extra shiny welcome message. My question to you, Spang, (I know you're out there!) is why isn't the bit of code I whipped up (as in "stole from UU again") welcoming users with my version of the welcome template (which I stole from HerrDoktor)? -:00, Dec 6
- Should be fixed now. Oh, and that looks more like Bonner's js than Under user's, so it probably came from there. And Bonner's JS is mostly mine originally. Damn javascript thieves! → Spang ☃☃☃ 21:40, 06 Dec 2007
- I AM NOT A THIEF!! Well, actually, yeah, I kinda am. Oh:42, Dec 6
- Just got a chance to test it. Thankies for all the help, Sp:23, Dec 6
QA Alert!
Hellooo I'd like to post a QA issue. I removed an article from VFH today, and posted out a long and tiresome remark that explains why I am such a tiresome person. That long remark completely fucked up the recently failed queue; when I reverted and reposted it in a short version, it was ok. Now do with me as you will....:) ~ 16:13, 7 December 2007 (UTC)
- Had a look, but no idea what happened. Could have been a stray comma or something odd somewhere. Wiki fairies, probably. → Spang ☃☃☃ 23:31, 14 Dec 2007
Complimentary award
User:Uncyclopedian/challengefail
I win in some way, because I did that picture you used in that template. So there! → Spang ☃☃☃ 23:30, 14 Dec 2007
Hello, me again
Would it be possible to implement a similar auto update system on the UnNews main page, like the one we have on the main page? ~ 11:49, 14 December 2007 (UTC)
- Possibly, but would it be necessary? VFH articles don't have any specific time they have to be on the page by, but news articles are supposed to highlight the most interesting "headline" story at the time that people will know about and want to read about. If you see what I mean. Having a VFH-style system would probably involve a voting process which would slow it down, a queue for which we could never plan ahead for (because you can't predict the news), and each highlight would only be there for a limited time, no matter how long it was actually in the news for. I think the current system is far easier and far more useful for news stuff. → Spang ☃☃☃ 23:41, 14 Dec 2007
- Thing is, it's entirely dependent on Zim, and if he's away nothing moves for days. I wasn't thinking about a vote, more like an auto movement once the lead article has been updated. ~ Mordillo where is my FUCK? 23:47, 14 December 2007 (UTC)
- Nobody's stopping other people doing what zim's doing. If he's not doing it fast enough, feel free to jump in if it needs changing. → Spang ☃☃☃ 00:07, 15 Dec 2007
- Take over Zim? He's the dirt under my rollers! :) ~ Mordillo where is my FUCK? 00:09, 15 December 2007 (UTC)
Xmas!
-- Sir Codeine K·H·P·B·M·N·C·U·Bu. · (Harangue) 21:43, 16 December 2007 (UTC):08, Dec 17
--:20, 24 December 2007 (UTC)
--YeOldeLuke 08:00, 26 December 2007 (UTC)
Template:Spoiler
Could you remove the Everyone Dies™ message from Template:Spoiler? It's outside the option tags and it kinda annoys me. Also remove the message about how someone is quite possibly about to lock the template, because, um, it's already locked. --L 10:09, 20 January 2008 (UTC)
Top 10 articles
Hello master Spang. Since we'll start featuring the top ten articles soon, do you have any smart way to work the queue? The only thing I thought about was just playing manually with the featurequeue template... ~ Mordillo where is my FUCK? 22:35, 29 January 2008 (UTC)
- Featuring them in the normal way should work fine, unless you want to feature them all at once. In which case just replace the {{subst:nextfeatureslot}} part with the actual date you want it featured, and it should be fine. And don't try to feature anything the normal way until there's less than 3 articles still queued. Also, the whole martial law VFH thing is quite unnecessary, having been around for 2 of these before, there's nothing to worry about there. → Spang ☃☃☃ 06:12, 30 Jan 2008
- Thanks! And for the martial law, it's not because of that, it's because I just keep seeing articles stacking and stacking and failing with 4 votes while the top articles get a max of 12 and 13. It worked, more or less, last time - hopefully it will again. ~ Mordillo where is my FUCK? 07:39, 30 January 2008 (UTC)
- BTW, does this mean that I can build up a queue for 10 days? Or work with a three day bloc? And do I need to update the FA template on the articles themselves? Sorry for being a pain.... ~ Mordillo where is my FUCK? 15:15, 30 January 2008 (UTC)
- Yup, it's just that {{nextfeatureslot}}, which automagically puts in the next available feature date in lots of places, won't go further than 5 days in the future, so you'll need to enter the dates yourself instead of the template where it appears. In 31 January 2007 format. The articles don't need an FA template, because they all already have one. They should get a 2007Top10 template though, in the style of {{2005Top10}} and {{2006Top10}}. → Spang ☃☃☃ 20:13, 30 Jan 2008
- All right! I'll be sure to contact you when I fuck everything up! :) ~ Mordillo where is my FUCK? 22:15, 30 January 2008 (UTC)
- Me again, I saw the change you made for the featurearticle/include - and unfortunately we can't use it due to tied articles. However, is it possible to make a template that I can put a value in beforehand, and that will change automatically when it's 0000GMT? ~ Mordillo where is my FUCK? 12:10, 3 February 2008 (UTC)
- Yeah, didn't count on that. The ways you could do that would be either: {{#ifeq: {{CURRENTDAY}} | Wednesday | 8 | 9 }} or something, but you'd still have to edit it each day, or, use a switch to put all the numbers in beforehand, including ties, like {{#switch: {{CURRENTDAY}} | Monday = 9 | Tuesday = 8 (tied) | Wednesday = 8 (tied) }} which is probably better. Although that example won't work because the days would be the date numbers, not names. Not too hard to work out though.
- Also, {{nextfeatureslot}} has stopped working because it needed FAs to be placed to link to the right place so it knew what was featured then. So now {{2007Top10}} has a parameter for the date it was featured (in the form {{2007Top10|1 February 2008}}), and that should fix {{nextfeatureslot}}. → Spang ☃☃☃ 12:31, 03 Feb 2008
- I was wondering about the queue, it started behaving all weird... As for the switch command, I just need to create it in my space and link to it from the featurearticle/include headline? ~ Mordillo where is my FUCK? 12:38, 3 February 2008 (UTC)
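A concrete version of the switch idea discussed above, keyed on day-of-month numbers (which is what {{CURRENTDAY}} actually returns) rather than day names, might look like this — the dates and feature numbers here are made-up placeholders, not real queue values:

```wikitext
{{#switch: {{CURRENTDAY}}
 | 4 = 9
 | 5 = 8 (tied)
 | 6 = 8 (tied)
 | #default = 9
}}
```

Unlike the {{#ifeq:}} version, a switch like this can hold a whole run of days in advance, so nobody has to edit the template each day.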
Hey
Archive! See this page please User:Severian/sandtable--:09, 1 February 2008 (UTC)
Tables?
Felicitations and solicitations!
Some kindly old fellow has directed me to you, as I would like a bit of help with formatting. My article needs better tables, as at the moment, they are dog ugly. If you could sort of direct me in the right direction, that would be brilliant.
Ciao --DyslexicRetard 02:56, 7 February 2008 (UTC)
- Replied on your talk page, because you're probably not watching this any more! → Spang ☃☃☃ 01:15, 15 Feb 2008
Can you
Come over to IRC for a:15, 7 February 2008 (UTC)
Tanks
NINJASTAR
- No need to thank me kid, that's just what I do. *gazes into middle distance, walks into sunset* → Spang ☃☃☃ 03:07, 11 Feb 2008
User:Mr C. Norris
Hey, saw yew were talking on that dude's talk page. I've seen him doing things to warrant banning but he's sort of passed under the radar. Just thought I'd give yew the heads up, he's pissing me off :P --User:Fag/sig2 16:21, 15 February 2008 (UTC)
Given name
Spang, why did you delete given name? - 04:55, 23 February 2008 (UTC)
- Because it seemed to be just a list of names, and names aren't generally funny per se. Also, it was a list. Lists are discouraged unless they have actual content to go with them. I could restore it to your user space to work on if you'd like? → Spang ☃☃☃ 12:50, 28 Feb 2008
America
Hellair! How do I queue it for a feature on the 4th of July? Does it need to be on the queue for the next four months? Thanks! ~ 22:26, 28 February 2008 (UTC)
- Just make sure you replace {{nextfeatureslot}} with "4 July 2008". And yes, it'd have to sit on there for 4 months. You could put a box round it reminding people not to remove it yet. Or just feature now, the featured revision would be 4 months out of date by the time it was featured anyway. Could always do a reskin on July 4 and just have it as the featured article again as part of the reskin? Or whatever. → Spang ☃☃☃ 13:56, 29 Feb 2008
Thankings
Hey Spang, just wanted to say thanks for the nom+votes for me on VFS. I appreciate it very much, and I'll try my best to be an all round good admin. Obviously. :27, Mar 1
And just to show how much I really:50, Mar 1
Late Thanks
• • • Necropaxx (T) {~} 01:01, 2 March 2008 (UTC)
Deleting a page I made
Apparently I made a page with the title Otter, and someone huffed it because the only text it contained was "YOUR MOOTHER". Was that true? Because when I made it, it did NOT say that. You probably won't remember, but I'm just asking. --Beachpenguin 02:46, 2 March 2008 (UTC)
- That version was deleted before you made your version. Your one was deleted because it was tagged as {{ICU}}. I've restored it for you, feel free to edit. → Spang ☃☃☃ 02:52, 02 Mar 2008
- Oh, it was dumb. You can leave it deleted. --Beachpenguin 02:58, 2 March 2008 (UTC)
Ban needed!
[1] needs a ban ASAP. Ban him and I shall grant you 1 (one) pie of a flavor of your choosing. -:54, Mar. 2, 2008
- Way ahead of you! Well, only a little, but it still counts! → Spang ☃☃☃ 05:57, 02 Mar 2008
- And I would've beaten you if it weren't for this page's meddling size! -:58, Mar. 2, 2008
Archive Your Talk Page Please
IT IS REALLY BIG --CharitwoTalk 06:02, 2 March 2008 (UTC)
- I know! High 5!! → Spang ☃☃☃ 06:07, 02 Mar 2008
- Grr... Either make it smaller or I will use my shrink ray to gradually reduce the size of your genitalia until they no longer:09, Mar. 2, 2008
- It's also making firefox go so slow that it's about to crash and my computer is practically grinding to a halt. -:13, Mar. 2, 2008
- Use section editing? → Spang ☃☃☃ 06:31, 02 Mar 2008
- That doesn't help with actually loading the page itself and viewing diffs. --CharitwoTalk 06:42, 2 March 2008 (UTC)
- Use &diffonly=y on diffs. I'll archive it when I'm good and ready! → Spang ☃☃☃ 07:06, 02 Mar 2008
LOLZORZ for Gobshite
You know you want to. -- §. | WotM | PLS | T | C | A 22:04, 2 March 2008 (UTC)
John Cage/Kamelopedia
I'm glad you like my little work (I like your article as well) ... feel free to take what you need (sorry for my terrible English) --WiMu 19:28, 3 March 2008 (UTC)
Canadians video
Hey, wanted to invite you to check out the video I started to Modusoperandi's audio "Canadians". You're welcome to comment and help me finish it. IdoSet 13:19, 4 March 2008 (UTC)
JS
Many thanks for your comments and help with my .js I have it working now, and it's much faster. I don't understand why my cache is not working correctly on Firefox, as IE is fine (though the js mostly does not work). I still think it's a good idea for people to paste code together into one lump even if speed is not an issue, but I guess not doing so allows others to automatically get bug fixes and the like. I'm going to be doing some other new .js stuff in the near future, so hopefully you will not mind me picking your brains now and again if I get stuck with something. Thanks again for the:09, Mar 5
Video extension
Hi, it was taken off after Sannse wrote, for some reason (or Unreason) that there was a consensus for it to be disabled. I know "true" and "Untrue" are pretty delicate words, so I'll just say this: IT SUCKS!!! I thought we agreed it was worth a shot, and that you liked the canadians video. Were you lying all along? Seriously now: enough people said it was worth a shot. How come a wikia staff member can come and disable it, 10 days after the discussion was over, just like that? Where is this world coming to? IdoSet 08:15, 11 March 2008 (UTC)
- Well I thought it was worth a shot, and had no part in disabling it. It does suck. I was trying to convince people it's a good idea! Some people just won't be convinced though :(
- And it's a well known fact that wikia is evil, but they can only go on what the majority of people say to try and keep most people happy, unfortunately. → Spang ☃☃☃ 13:50, 11 Mar 2008
About the rating system
Do you think you could incorporate a setting for disabling ratings? -) 22:44, Mar. 11, 2008
Sir
Please ban this dude --CharitwoTalk 02:05, 17 March 2008 (UTC)
Give me some credit
Spang, I think I'm the last person around here you can call trigger happy. Have you noticed the RC with this username? If he edits a lot it would become impossible to read. And when you follow the damn RC all day long, it becomes a nuisance. Cheers. ~ 16:32, 17 March 2008 (UTC)
YOU BASTARD!
YOU TOOK AWAY MY 666!!!
FOR THIS YOU SHALL PAY!
MWAHAHAHA!!!! 14:25, Mar 18
Help with wikicode
Hello; I'm a Sysop of Wikichan, and recently we've been bombarded with several vandal attacks. Anyways, I've talked to Thekilerfroggy, and he suggested I ask both you and Algorithm for help with editing the code to allow for Sysops to use "Semi-protection". Bot 22:55, 21 March 2008 (UTC)
- ...among other helpful things you could think of. -- 23:09, 21 March 2008 (UTC)
- What we call semi-protection is the same as your "no anonymous editors" option with a few settings changed. You need whoever owns the webserver itself (wikisysop, I'm guessing) to make some changes if you want it to be any use. This manual page on preventing access lists lots of methods you could use to stop vandal attacks, but all of them require the owner of the server to do them.
- Some useful changes would be - changing $wgAutoConfirmAge to something more than 0. What that'll do is make sure an account has been registered for a certain amount of time before it can edit a "no anonymous edits" protected page. Here, it's 3 days, and is usually enough to stop most determined vandals. You can also edit the group permissions for the entire wiki, so after registering an account, they'd need to wait that amount of time before editing anything. You could also set it to require a confirmed email address to be able to edit. A more extreme method would be to have only sysops be able to create accounts for people who ask for them. If none of that made sense, or the owner needs help doing them, I'd be happy to explain better.
- When you're dealing with very determined vandals like this, there's not a lot a sysop can do except revert and block them when they appear. More sysops would help, so get wikisysop to make you a bureaucrat if he's not around that much. But it wouldn't fix the problem. → Spang ☃☃☃ 04:36, 22 Mar 2008
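A sketch of what the changes suggested above might look like in the wiki's LocalSettings.php — the values here are illustrative only (the 3-day figure mirrors the one mentioned above), and as noted, only whoever runs the server can apply them:

```php
<?php
// LocalSettings.php additions — example values only

// Accounts must be this old (in seconds) to count as
// "autoconfirmed", i.e. to edit semi-protected pages
$wgAutoConfirmAge = 3 * 86400; // 3 days

// Stricter option: take editing away from brand-new accounts
// entirely, and give it back once they're autoconfirmed
$wgGroupPermissions['user']['edit'] = false;
$wgGroupPermissions['autoconfirmed']['edit'] = true;

// Require a confirmed email address before editing anything
$wgEmailConfirmToEdit = true;

// Extreme option: only sysops can create new accounts
$wgGroupPermissions['*']['createaccount'] = false;
$wgGroupPermissions['sysop']['createaccount'] = true;
```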
Okay Spang, found you.
I saw you in the shoutbox. So now I just have to know, how do I change my skin so that I can use that thing, and the other widgets too? The best I can do right now is add &useskin=quartz to the end of a URL. – Sir Skullthumper, MD (criticize • writings • SU&W) 04:44 Mar 27, 2008
- Yay! Someone else found the shout box!
- To see the widgets, you need to use a skin that has them enabled, which at the moment is only quartz and monaco. Put useskin=monaco on the end to see that (it's a lot nicer than quartz). But as far as I know, you can't save those skins as they're not in the options, because they used to break uncyc. They seem to work now though. There might be a way to hack it to be able to save them, I'll have a look in a bit. → Spang ☃☃☃ 03:00, 28 Mar 2008
You liked my Main Page, didn't you?
DIDN'T YOU! --Lt. High Gen. Grue The Few The Proud, The Marines 03:15, 1 April 2008 (UTC)
Spang. Things were said. FU SPANGs were said, specifically. And I just wanted to say that I'm sorry again, and here's a carrot cake as an apology:
APRIL FOOLS I ATE IT.
But I'm still sorry. -- §. | WotM | PLS | T | C | A 03:58, 1 April 2008 (UTC)
I didn't get the chance to bitch about it on IRC, so I'll bitch in your talk page instead. The community voted to unprotect the main page. Generally, when the community votes to do something, that's what the site does, even if it does happen to annoy one admin. --THE 22:57, 2 April 2008 (UTC)
- Oh, I just realized you probably only protected it because someone moved it and screwed up the site. If that's the case, then I'm sorry for whining and completely misunderstanding. --THE 23:08, 2 April 2008 (UTC)
- Bit of both, to be honest. Someone moved it somewhere else, and while the real Main Page was somewhere else, people created a new one in its place, and somehow one of those revisions got corrupted so it couldn't be edited or even deleted. I figured out how to get rid of it (which involved protecting it, ironically) and then moved the real main page back and protected it to stop it being moved again, and to make sure it was actually still working. But then couldn't be bothered unprotecting it again, and went to do something else. But STM's main page was way better anyway. → Spang ☃☃☃ 01:42, 08 Apr 2008
Arc-hiving
I know this may seem like sacrilege to suggest, but the arc is a big boy now, all grown up at the wise old age of 1. Most web-content his age is moving out of home and getting its own place, maybe it's time we at least let him move into the sleep-out? *sniffle* Our little boy is growing up so fast *lump-in-throat* ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 10:03, 08 April 2008
- I dunno... I think we can agree the arc can't be compared to any other normal web content! It would just feel strange to not have the arc on my talk page! → Spang ☃☃☃ 03:10, 09 Apr 2008
Template:ICU
Thanks for getting rid of that extra code. I hope everything else is all sound 'n' stuff.-~~ Sir Ljlego, GUN [talk] 00:21, 9 April 2008 (UTC)
- No prob. Everything else seems fine, although probably a bit unnecessary. I mean, there's nothing stopping someone leaving a fix message that says "check the talk page", and then you avoid it linking to a talk page that has unrelated stuff on it. Because a new page will have a talk created for it way more often than someone will actually take the time to write a full talk page explanation of why they're ICUing it. And, even if someone was going to do that, it'd probably be easier to go ahead and fix the article themselves. → Spang ☃☃☃ 03:16, 09 Apr 2008
- I made that talkpage thing in response to the new trend of really really good Pee Reviews. Since the people sometimes make more effort to review an article than the author did to make it, I knew that there were some people for whom the feature would be helpful.-~~ Sir Ljlego, GUN [talk] 19:46, 10 April 2008 (UTC)
CVP
Why not use it? ~ 19:04, 10 April 2008 (UTC)
- Because it leaves blue links where it's linked other places, which among other things can confuse interwiki bots who think it's a legitimate article and link it from other uncyclopedias. And it's probably nicer to follow a red link and be told you need to ask to create it, than follow a blue link and be redirected to that instead of the article you were expecting. Probably. Also, Protected titles is much neater. Use that instead. → Spang ☃☃☃ 19:16, 10 Apr 2008
I need a title removed
Can you please remove the title from Make Money Fast, it works so much better without the title. Thanks! --User:The Improver/sig05:31, Apr 23 2008 (UTC)
DPL pruning
I was thinking about just how many things run on DPL, and how many things probably could work similarly without the DPL. The one thing I keep coming back to is the Feature Queue. I know you worked really hard on it, and I know you had to use substantial amounts of magic, but the feature queue isn't serving much purpose right now, as the queue often only goes one day ahead, and it could just as easily be done manually. I'm not trying to burst on your bubble or nothing, but if we need to get rid of complex DPL, then that's the place to start. If it still acts like a problem, we could always change the UnNews Main Page to forums as opposed to DPL, although it would suck majorly. If you need any help, I've learned substantially since our last endeavor, and would gladly assist.~~ Sir Ljlego, GUN [talk] 01:39, 25 April 2008 (UTC)
- Well the thing is, it can't just as easily be done manually. We'd have to go back to updating the front page template at exactly the same time each night, which I'm sure nobody wants to do. Also, the featured article DPL query is cached and presumably accessed all the time, so it only actually needs to do the query once a day. Depending on how the caching is set up, it might do more, but it's a lot less work than, say VFP. Which uses a lot of complicated DPL, including random parts, so it can't be cached. To be honest, I think VFP and the featured picture template is the most resource intensive of all the DPLs, then probably VFH, and then the featured article system and then the Best Of page, which I'm about to fix.
- So in my infinite wisdom, the feature system is fine as it is, but if anything needs to not use DPL to save resources, it's the featured picture system. Though having said that, the way it was done before with very large choose tags and hundreds of templates, changing back to the old system probably wouldn't do that much good. And I am so not changing everything back to the old system, cos it'd take forever to do. → Spang ☃☃☃ 21:43, 25 Apr 2008
- Wikipedia uses a very simple, non-DPL system to update its featured articles, and there's no limit to the queue. Why don't we use that? – Sir Skullthumper, MD (criticize • writings • SU&W) 17:04 Apr 26, 2008
- Ours would need to be significantly more complicated than that, as wikipedia will always without fail have a featured article queued up. We won't. And I honestly don't think that the feature article DPLs are the ones causing problems. It's most likely to be VFP, if anything, so spending time knocking up a new featured article system wouldn't do much good unless VFP is scaled back first. → Spang ☃☃☃ 00:20, 27 Apr 2008
Fnoodle
Fnoodle has fallen over. Could you restart him? • <May 01, 2008 [13:13]>
- Hey! That's a lie! Fnoodle didn't stop at all! ...I thought talking on its talkpage interrupts it... • <May 01, 2008 [13:16]>
- Well first of all, no, because the bot is running on Dr. Skullthumper's computer and I'd have to go there to restart it, which I don't really have time for. And second, no, because it hasn't actually fallen over. He's still going strong! (but yeah, you knew that already, damn edit conflicts) → Spang ☃☃☃ 13:19, 01 May 2008
- Are you saying that... nothing can stop it??? • <May 01, 2008 [13:25]>
- Well, it could be banned, but where's the fun in that! It'll be interesting if it makes it through Vandalism/example though. → Spang ☃☃☃ 13:29, 01 May 2008
Radiohead
Dear Spang,
Thank you for referring to the Radiohead article as “awesome”. I am a little ticked off that somebody who didn't “get it” had the nerve to call it random and try to delete it. You know what would stop all these people from trying to “fix” the garble? A successful or even quasi-successful VFH. Somebody oughta do that. cough. nudge. I would have nominated it myself a while ago, but modesty (and Uncyclopedia policy) prevents me from doing so.
Once again, thanks for the vote of confidence! YouFang 17:33, 4 May 2008 (UTC)
- You'd be surprised at how often someone doesn't get an article, and their first reaction is to nominate it for deletion. It's hard to strike a balance to make a page good for people who know the subject and people who don't, but as far as I'm concerned, making an article funny to people unfamiliar with the subject is just a bonus.
- And y'know, nobody really cares about the self-nom rule thing, it's really only to stop people spending 5 minutes on a page then VFH'ing it :) → Spang ☃☃☃ 22:44, 04 May 2008
- Awesome. I've just made a self-nomination with your blessing. We'll see how it fares. YouFang 18:07, 5 May 2008 (UTC)
I knew that headline will catch your attention
Thanks for that! :) ~ 14:12, 5 May 2008 (UTC)
Internet Exploder sucks
But you already knew that. When I view my userpage in Internet Explorer, and click "show" on the awards section, the images fly all over the place. Is there any way to fix it? Or should the IE users just suffer? – Sir Skullthumper, MD (criticize • writings • SU&W) 17:08 May 07, 2008
- Works fine for me in IE6 and 7. Maybe your IE is just broken. More than usual. → Spang ☃☃☃ 17:23, 07 May 2008
- It's only after the "overflow: auto;" section is displayed. And I just found the magical fix. The page describes both problem and solution - just to show you I wasn't hallucinating. This time. – Sir Skullthumper, MD (criticize • writings • SU&W) 17:25 May 07, 2008
- Didn't even apply the fix yet (I only applied the fix to the UnSignpost, where there was also a problem, and it works now!). Go figure. It could be some JS is broken on this computer too, considering I'm in a tech-paranoid school which blocks and mangles pages. – Sir Skullthumper, MD (criticize • writings • SU&W) 17:35 May 07, 2008
How do I make a new page?
Hey...I'm new to Uncyclopedia and want to know how to make a new page.
That is all.
Answer my question as soon as possible.
Then go back to eating bacon or getting slain by Chuck Norris or whatever it is you do.
Ninjas? 17:28, 7 May 2008 (UTC)
- Go to the page you want to create, edit it, then click save. The page is created! To get to a non-existing page, you can type your title into the search box and click go. If the page already exists, it'll take you there, and if it doesn't it'll show you search results. At the top of the search results page there'll be a red link to the title you put in. Click it and start editing! → Spang ☃☃☃ 17:36, 07 May 2008
- Or, use Special:Createpage, which is awesomer. Also, Spang, archive your %$#!ing talkpage already. – Sir Skullthumper, MD (criticize • writings • SU&W) 17:38 May 07, 2008
- Oh yeah, that too. It's been updated since the last time I looked at that. Possibly.
- And I'll archive my talk page when Forum:Count to a million gets to a million. Haha! → Spang ☃☃☃ 17:47, 07 May 2008
Whoops, did I do that?
My sources indicate that I broke your page... Sorry bout that! ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 08:40, 07 May 2008
- You didn't, Wikia did. You know, I'm very disappointed that Spang wasn't on IRC to say FU WIKIA. – Sir Skullthumper, MD (criticize • writings • SU&W) 20:42 May 07, 2008
- Oh, awesome for me then! Also, be nice and stop demanding he archive it please. The more you demand with #@$#*ing symbols the less likely he is to do it, and I can tell you, he's already fairly disinclined to archive! Just let the page run its natural course. It really is worth the loading time. ;P ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 08:52, 07 May 2008
- Is this nicer? ARCHIVE YOUR F♥CKING TALK PAGE, SPANG ...please? – Sir Skullthumper, MD (criticize • writings • SU&W) 20:55 May 07, 2008
- I think that is a rhetorical question. Imo a user's talk page, just like their userpage, is sacred. It's theirs to do with as they wish, and to archive as they wish. I'm gonna leave it at that. ~ Dame Ceridwyn ~ talk DUN VoNSE arc2.0 09:25, 07 May 2008
Lolwut?
What's with the "no choose tags in sigs" thing? I don't want to make a rash revert, so I figured I'd get the whole story first. Did this decision take place somewhere and I missed the memo? I don't want to make too big of a scene, I don't use that sig anymore (although I may in the future) but I still want to know what's up. ~Minitrue Sir SysRq! Talk! Sex! =/ GUN • WotM • RotM • AotM • VFH • SK • PEEING • HP • BFF @ 21:10 May 7
- The key part was "for now". Choose tags are kinda broken at the moment, and they mess up talk pages where they're used. I'd seen that sig of yours in a lot of places screwing up the formatting, so just simplified it for now. Feel free to put it back when the choose extension is fixed, but until then it's just going to be really annoying on talk pages, so it's best to leave it like that until then. And if you see talk pages where the formatting is all broken, chances are it's someone else's sig that is the same, so at least you know it's not yours that's breaking things now. → Spang ☃☃☃ 01:34, 08 May 2008
One year ago today (sort of)
I first posted this here. Funny how things change, innit? Sir Modusoperandi Boinc! 21:23, 7 May 2008 (UTC)
- A year, for serious? Man. Maaaaan. Well, here's to the next year! → Spang ☃☃☃ 01:52, 08 May 2008
- I know, right?. You know, this reminds of that time I posted on Spang's talkpage. Sir Modusoperandi Boinc! 04:01, 8 May 2008 (UTC)
A Semi-Formal Petition to Reduce The Length Of your Enormous Talk Page So That We Can All Be Happy and Dial-Up Users Can Access This Page Without Having To Wait 50,000 Years For it To Load
Myself along with several users are heavily upset by the ridiculous length of your talk page. It's so big that I cannot load it sometimes and when it does load my browser gets so slow that I have to kill it by force to be able to use anything else on my computer. This forum with an extremely long and impossible to memorize name has our complaints and a list of our supporters on it. We strongly recommend that you listen to this complaint and reduce the length of your page so that we can reduce the length of our arguing. Should you choose not to accept, we shall declare war on you, this talk page, your family, and you. Sincerely, -:03, May. 8, 2008
- P.S.: FU Spang.
- You can never load it sometimes? Whoa. There's some grammar to get your head round. And yes, I had noticed that. And... no. → Spang ☃☃☃ 02:12, 08 May 2008
- My internet usually works fine for everything except this page. I'm also not the only person against the huge size of it. Also, I have made a few grammatical corrections to the request above. -:16, May. 8, 2008
- My internet only works fine on this page. So there! Sir Modusoperandi Boinc! 04:04, 8 May 2008 (UTC)
- Don't archive this page, it's more colorful than any other talk page here!--Witt, UNion Entertain me* 05:44, 9 May 2008 (UTC)
@ Spang's talk page:
~Minitrue Sir SysRq! Talk! Sex! =/ GUN • WotM • RotM • AotM • VFH • SK • PEEING • HP • BFF @ 15:55 May 9
Question about spelling
(Based on an inquiry at User talk:Dr. Skullthumper#Question regarding Fnoodle on Game Boy) Why is "Gameboy" considered a misspelling? I am fully aware that it is technically incorrect, but aren't there some conceivable circumstances where writing it that way could be intentional? (Admittedly, the Game Boy article is not such a case.) Should a bot really be dealing with edge cases like this? --Pentium5dot1 19:21, 13 May 2008 (UTC)
- If it's technically incorrect, where would there be a legitimate use for it? There have also been cases where it's corrected an intentional misspelling or two (like teh), but it's not hard to just change it back in these cases, especially compared to the benefit of correcting just about every common typo on the site. And it's only doing this once. → Spang ☃☃☃ 11:11, 14 May 2008
- In response to your first sentence: I was thinking that we could use "Game Boy" to mean the game console and "Gameboy" to mean something analogous to Playboy. I have nothing else to complain about, so I rest my case. --Pentium5dot1 03:05, 17 May 2008 (UTC)
- Haha, you got a Fnoodle question and I didn't! – Sir Skullthumper, MD (criticize • writings • SU&W) 03:07 May 17, 2008
@ Length
Case in hand, your talkpage is more than twice as long as long article. You may want to consider the possibility of archiving. =D VGD >=3 20:15, 19 May 2008 (UTC)
Linux
You reverted a revert I did on the Linux article, with the summary 'Check closer next time'. I checked closer but still don't see what makes the new revision better. I'm not questioning your revert, I was just wondering what I missed. May 23, 18:00
- It was legitimate cleaning up, not vandalism. The prose is better and it looks less ugly. I meant if you look past the drop in bytes, you can see it's mostly restructuring and quote removal, not just taking random stuff out. And if we reverted every time someone cleaned up an article that needed it just because it ended up with less bytes on it... → Spang ☃☃☃ 18:08, 23 May 2008
The Rating System
Hello Spang!
I am JuninhoJuninho, a friend from the Brazilian Uncyclopedia (Desciclopédia).
I'm very interested in the rating system that I heard you implemented on Uncyclopedia, and I wish we could use it at Desciclopédia.
Is that a MediaWiki extension? How could we implement it at our uncyclopedia?
Waiting for your response.
Thank you very much! JuninhoJuninho 01:04, 24 May 2008 (UTC)
- Hi. The rating system itself is actually an extension made by wikia for their "monaco" skin, and the part I wrote was the javascript that lets you use it on monobook type skins. But unfortunately wikia haven't released their code for it yet, so only wikia wikis can use it right now. At the last mention, they said they might release it when it was ready, but I don't know when that'll be. You could always ask them to release it and see what they say! → Spang ☃☃☃ 15:49, 24 May 2008
Eurovision
I was just wondering if you could protect Eurovision Song Contest for a few days. With it being featured and the real thing being last night, it's getting a fair bit of unwanted attention. Thanks. -- 15Mickey20 (talk to Mickey) 11:45, 25 May 2008 (UTC)
- Nevermind. Sannse did it. -- 15Mickey20 (talk to Mickey) 14:05, 25 May 2008 (UTC)
A CSS request/question thing...
...hey Spang! I ventured onto your long talk page and wasted a good minute or so of my life loading it, cause I have a little request/question for you? Would it be okay for you to insert one line of css code into MediaWiki:uncyclopedia.css or MediaWiki:common.css? All it does is essentially create a class for creating hover links...wait, have a look for yourself:
.hoverchange a:hover{ background:#9CBBC6; border: 1.5px solid black; padding: 3px ! important; }
Is this allowed? Is it okay? I dunno... - [17:10 28 May] Sir FSt. Don Pleb Yettie (talk) QotF BFF NotM RotM UNPotM UGotM CUN PEE SR UnProvise
- You can always click the "leave a new message" link before it's finished loading, you know.
- What would you need that class for? It's kinda ugly and there's no way to use a different style without adding more classes... → Spang ☃☃☃ 19:18, 28 May 2008
- It's ugeley? Oh I didn't test it to see what it would be like. Like this Oh yeah, that's pretty ugly. Ok, what about this:
.hoverchange a:hover{ background: #CCE7F1; padding: 3px; border: 1.5px solid #9CBBC6 ! important; }
- That's slightly less ugly... What do you think? Oh and I'm going to use this class (if you're kind enough to add it for me :-) ) in menu header things (I'm creating something called UnProvise with Orian57). - [20:05 28 May] Sir FSt. Don Pleb Yettie (talk) QotF BFF NotM RotM UNPotM UGotM CUN PEE SR UnProvise
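For anyone reading this later, here's the hover class from this thread as a self-contained sketch. The .hoverchange name and the colours are just the ones proposed above; whether it ever actually made it into MediaWiki:Common.css isn't recorded here, so treat this as illustrative only:

```css
/* Highlight links on hover, but only inside elements marked class="hoverchange".
   The descendant selector (.hoverchange a) keeps every other link on the page
   unaffected by this rule. */
.hoverchange a:hover {
    background: #CCE7F1;         /* pale blue fill behind the link text */
    padding: 3px;                /* widen the highlight box slightly */
    border: 1.5px solid #9CBBC6; /* slightly darker blue outline */
}
```

Once a rule like that is in one of the site-wide css files, it would be used from wiki markup by wrapping links in something like <div class="hoverchange">[[UnProvise]]</div>.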
Mordillo sent me here...
I'm hoping that you are an admin. I could not get any help for my issue concerning an article I'd really, really, really like to have in my userspace here and since no one seems to know how to fix it, I'm turning to you - hoping and praying that you can help. You see, I want it to do something like this and I'd like to be able to put my own font on the text (it'll only be one font throughout), and I'd like the title centered and the rest of the text to be just the way it'd normally be - from left to right. If the whole thing has to be centered to get the effect, then that's fine, but I want the background and it needs to be infinite because there is more text to be added than there is at the moment. I wrote the Phantom parody some years ago and I'd really like to host it here on Uncyc. Is there anything that can be done? — Mgr. Nacky (talk) 00:26, 4 June 2008 (UTC)
- I've added the background to the page for you. Let me know if you want it to have a particular font too. Only thing is it'll stop working if you move the page, and I expect you'll want to move it at some point, so just let me know when you're going to move it. → Spang ☃☃☃ 10:58, 04 Jun 2008
- I can't see the maroon background anymore...and no, I wasn't going to move it as it's not an article but just a parody. I know everyone is against me using a textured background, but I say give it a chance first. I did upload the file for it already. Now I can't find it again as it was edited out. :( — Mgr. Nacky (talk) 11:59, 4 June 2008 (UTC)
- Okay I found it, it's here: — Mgr. Nacky (talk) 12:23, 4 June 2008 (UTC)
- I know, it should already be there, but you might have to refresh... try pressing shift+refresh or ctrl+refresh or something. → Spang ☃☃☃ 20:18, 04 Jun 2008
- I'm getting it now...THANKS! Hey can you put a font of Times New Roman on there? Something that looks more "Phantom-ish of the Opera-ish"... — Mgr. Nacky (talk) 03:51, 5 June 2008 (UTC)
You're Spang. I'm sure you can help.
I want Rafael Nadal to have a black background and need to do something with mediawiki reskinning. I've made this and I think it needs to be added to MediaWiki:Skin/Rafael_Nadal.css or something. I have no idea what I'm doing. All I've done so far is copy the css code for emopedia and change the logo. -- 15Mickey20 (talk to Mickey) 10:22, 12 June 2008 (UTC)
- Done! Eventually... → Spang ☃☃☃ 19:12, 16 Jun 2008
- That looks amazing. Thank you so much! -- 15Mickey20 (talk to Mickey) 15:59, 17 June 2008 (UTC)
Prophet?
Yeah, so I sees me a thing on IRC on one of my incredibly rare forays there, and suddenly all I can think is "for soon the cold of night will fall, summoned by your own hand". Now why would that be, I wonder? --SirU.U.Esq. VFH | GUN | Natter | Uh oh | Pee 15:56, Jun 16
- Um... uhhhh... suuuuure. O_o *backs away slowly* → Spang ☃☃☃ 19:04, 16 Jun 2008
- Er, there was this line from Queen's "The Prophet's Song" on there, and I thought it had been set by you. That's basically the next line, I think. I don't spend much time on IRC, now you know why! ;-) --SirU.U.Esq. VFH | GUN | Natter | Uh oh | Pee 19:11, Jun 16
splendiferous5 has a problem
I seem to be completely inept when it comes to even beginning to comprehend this whole signature/userpage code/cool little box thing, which is hard for me, as I happen to be an otherwise pretty intelligent guy. PLEASE END MY EMASCULATION! And get back to me when you can.
- You should read the bottom section here, first. Actually, that should pretty much cover it. Hope you get over your emasculation soon! → Spang ☃☃☃ 06:36, 30 Jun 2008
{{Given name}}
Well, it's OK to use lots o' names. --Penis · Talk · Contributions 06:21, 30 June 2008 (UTC)
- Pommie bastard, leave it. I like it lots. It's not bad. --Penis · Talk · Contributions 06:38, 30 June 2008 (UTC)
- So do I. --MegaPleb • Dexter111344 • Complain here 06:41, 30 June 2008 (UTC)
I must be drunk
Because I saw your name appearing in pink!? And with a bot flag?! ~
07:46, 2 July 2008 (UTC)
- You must be drunk! → Spang ☃☃☃ 01:07, 03 Jul 2008
- I must be. I also must be stoned because I saw someone called RC Sucks in recent changes. Can I get a new toy too? I played with it yesterday and couldn't get it to work. ~
07:46, 3 July 2008 (UTC)
Your sig - My sig
My sig is shitty and fucked up. It has the "expand" spans and I don't know what to do, because the date always ends up like you see it below and wtf. You DO SOMETHING ABOUT IT or DIE! --Sir General Minister G5 FIYC UPotM [Y] #21 F@H KUN 12:54, 7 July 2008 (UTC)
and when I said crap
I didn't mean you, I meant threatening users. ~
22:18, 8 July 2008 (UTC)
- And I'll say again I'm pretty sure he wasn't seriously threatening you. Did you read the last bit? (It was rather good and imagine my disappointment, upon showing it to my friend, who has quite a bit more technical know-how than me to tart it up, to a decent level, and it not existing.) He also said "biatch". Nobody who says that is being serious, it's physically impossible. If the guy wants his page back, saying something like "not until you ask nicely" doesn't really help anyone. And remember who has op seniority, biatch! → Spang ☃☃☃ 22:30, 08 Jul 2008
- Well, your majesty, you made me laugh, which is always a good thing. That and your gay sig. Biatch my ass :) ~
22:32, 8 July 2008 (UTC)
Excuse me, but
¶¶¶$¶¶¶¶¶¶¶¶¶¶¶¶¶øø¶¶¶¶¶¶¶¶¶$$¶¶¶ ¶¶¶$¶¶¶¶¶¶¶o´´´´´´´´´´´7¶¶¶¶¶¶$$¶ $$¶¶¶¶$¶¶¶´´´111111111´´´´$¶¶¶¶$¶ ¶¶¶¶¶$¶¶ø´´11111111111111´´¶¶¶¶$¶ ¶¶¶¶¶¶¶ø´´1111111111111111´´´¶¶$¶ ¶¶¶¶¶¶¶´´1111111111111111171´¶¶¶¶ ¶¶$¶¶¶´´11111111111111111111´1¶¶$ ¶$¶¶$´111´´1´´´11´´´11111111´1¶¶$ $¶¶¶1´77ø¶¶´ø¶¶¶ø¶¶¢11111111´7¶¶¶ ¶¶7´´´´´´´´¶´´´´´´´1¶7´11111´¶¶$¶ ¶¶´´¶´´´´´¶´´´´´´´´´´¶71111´´¶¶¶¶ ¶¶´´´´´´´´ø´´´ø¶´´´´´oo´´´´´¶¶$¶¶ ¶¶¶1ooøø71¶´´´´´´´´´´¶1´´¶7¢¶¶$¶¶ ¶¶¶´´1117´1¶´´´´´´´´$ø´´¶1¢¶¶¶$¶¶ ¶¶¶1´´´´´´´´7¶øoø¢¶¶7´´¶´´¶¶¶¶¶¶¶ ¶¶$1¶¶¶¶¶¶$¶ø´´´1´´´1171´$¶¶¶$$¶¶ ¶¶´1111111111¶¶¶7´´111´¶¢7¶¶¶$¶¶¶ ¶117777777777111ø¶1´11´1´¶´$¶$¶¶¶ ¶´7777777777777711¶´´11$´´´¶¶$¶¶¶ $´17777777777777711¶´111o¶¶¶$¶¶¶¶ ¶7´´´1111111117777´¶´11´´¶$$¶¶¶¶¶ ¶¶¶¶¢øø¶¶¶¶ø¶¢7777´¶´111´¶$¶¶¶¶¶¶ ¶¶¶¶¶´´´11111o77711¶´111´¶¶¶¶¶¶¶¶ ¶¶¶¶¶¶¶´´17777111ø¶´´11´´¶¶¶¶¶¶¶¶ ¶¶¶¶$¶¶¶¶ø7117¢¶¶o´´´´´7¶¶´$¶¶¶¶¶ ¶¶¶$¶¶¢´o´ø¶$¢1´´1¶¶¶¶¶o´´´´¶¶$¶¶ ¶¶¶¶¶´o´ø´´´´ø¶7¶7´´´´´´´´´´7¶$¶
Thanks, I guess! → Spang ☃☃☃ 21:27, 13 Jul 2008
Doctor Spang
The VFH/VFHS DPL broke, so it seems. It's not being updated and new noms/old noms are not being added/removed. Please help/advise/ignore/abort/retry Y/N? ~
08:46, 17 July 2008 (UTC)
- Seems to be working to me! → Spang ☃☃☃ 09:18, 17 Jul 2008
- O_O You know what they say about the "technician's syndrome"? As soon as you call the expert everything works?! Well...there you go...~
09:21, 17 July 2008 (UTC)
- It's possible that wikia changed it to use cached results by default, and it just happened to un-cache as I looked at it... I've changed it so it should always be current unless DPL actually breaks. → Spang ☃☃☃ 09:26, 17 Jul 2008
- It's back again, especially this entry. I cleared my cache several times, but it's still there. apparently MrN had the same issue, look at the logs for that entry. ~
09:28, 17 July 2008 (UTC)
- Yeah, I'm seeing the same problems. This is still showing (on both VFH and VFHS), despite being closed, while this, which I created today, isn't showing on either. (Note: this is not whoring, just giving an example!) Odd. -:37, Jul 17
- And now, in the time it took to save this change to your talk page, it's behaving itself again. Madness, I tells you,:39, Jul 17
- And now it's not showing again, and neither's MrN's Goldfish nom, and this is still showing and TKF's just tried to close it a second time. Aaaaah! Broken wikis! :47, Jul 17
- Broken wikia more like. I remember someone mentioning that wikia was doing something with having some DPL specific caching servers or something. This may be the result of that. Go and complain to them about it. Though it seems as though it shows old results only sometimes, it's possible there's more than one DPL caching server, and one of them isn't updating its cache for some reason. Either way, complain to wikia about it. And then tell them they suck, they love that. → Spang ☃☃☃ 13:24, 17 Jul 2008
- Gotcha. -, Jul 17
PRESS
hey there spang, this nosy reporter was wondering if you have any comment on the count to a million project, specifically on the subject of it being an elaborate attempt to test the wiki's restraints / illustrate how easily amused the average uncyclopedian is / obtain my social security number. -- 19:10, 21 July 2008 (UTC)
- Well every man wants to leave some kind of legacy behind. One day, our children's children's children will be counting to a million, and I'll be smiling up at them, knowing that I've left the world something great. Same goes for last person to edit wins. That, or the entire thing is just an urban myth, and doesn't really exist. → Spang ☃☃☃ 20:08, 21 Jul 2008
Your user page...
Fucking
Epic
WIN! - 03:39, 22 July 2008 (UTC)
I've got a question
I've heard that you are the guy to go to if you want code things. Now, with that said, is there some code that you know of that will make the Uncyclopedia emblem go away up in the left-hend corner? I would like to remove it on my userpage. ~ Mgr.ReadMeSoon!? 23:49, 12 August 2008 (UTC)
- It's unfortunately not possible without changing the sitewide css, and we don't really do that for userpages. Maybe one day I'll change it some way so it's possible to place things over it like you're trying to do on your page. I tried already, but then it was impossible to place things under the logo, which the featured template uses. Seems we can't have it both ways! → Spang ☃☃☃ 00:23, 24 Jul 2008
- Not really, it involves changing the whole site so either everything goes on top, or nothing does. And there's already stuff that goes under it. I'll let you know if I think of a way to do it. → Spang ☃☃☃ 16:05, 25 Jul 2008
- Ok, I added in a thing that lets you hide the logo so you can put something else in its place. Use the {{nologo}} template anywhere on the page and it should disappear like magic! → Spang ☃☃☃ 15:53, 26 Jul 2008
So...
Did you consider asking why Hinoa blocked half of Italy? You know, maybe asking Hinoa or something radical? Or even consulting with some other admins... I know that's a wild concept and all... -- sannse (talk) 09:12, 28 July 2008 (UTC)
- Considering Hinoa was asked about it over a week ago, and never replied, I didn't think there'd be much point. And sure, if the bans had been made last week, I'd have asked someone, but the bans were nearly a year ago, so thought it was unlikely that the vandal(s) that was presumably the cause of blocking half a country would have given up by now. And as you're someone who dislikes infinite bans on IPs, on account of their probably eventually affecting the wrong people, I would have thought you'd think infinitely hard blocking 5 and a half million IPs (which definitely was affecting the wrong people) would be a bad thing. If there's a reason they should still be there, it's not a problem to put them back again, soft block preferably. → Spang ☃☃☃ 18:58, 28 Jul 2008
Your Top3 automatificator is about one year old! Hurrah!
I know this because it has started listing features from one year ago. Do you have any idea how to make it not do that thing that I mentioned just now? Sir Modusoperandi Boinc! 23:42, 31 July 2008 (UTC)
- Haha! Well there's future planning for you. It should work now. When 2009 comes just change the 2008 to 2009. → Spang ☃☃☃ 00:57, 01 Aug 2008
- What do we do when the calendar runs out of years? Sir Modusoperandi Boinc! 05:11, 1 August 2008 (UTC)
Pee Review Stuff
Hi Spang. :) I was wondering if there was any chance you could take a look at this. I'm hoping it's an improvement to the Pee Review process which takes on board the best of what was said in that forum.... As you said, all we need to do is change some text on the page, and get people to start to follow it... Users can submit by putting the "(quick)" thing after the articles name, so the guys who normally give long reviews can bash something out faster, and notice the (quick) when they look at Uncyclopedia:Pee Review/Current Pees. The wording about adding brief comments without using the table is important. That means that lots of people can add brief comments until such time as someone does a full review using the table (which will remove it from the queue). It means that people don't have to bother to read the guidelines if they just want to make some brief comments...
Led is happy for me to copy this into the actual page now, and Mhaille is cool with it. I just wondered if you had any thoughts before I do. Basically the problem being that when users put a (quick)-'for quick pee' or (resubmit)-'to get a second review' after the name they need to change the link on the generated page. I'm not that great with wiki hacking, so I wondered if you had a solution to this? Is there a way to get it to automatically link to the correct page? Could we use two different boxes, or maybe have a "tick box" or something which people can use to indicate that they want a (quick) review? Not sure how this would work with (resubmit) as obviously there could be 3,4 or multiple review requests (requiring different pages). We are trying to avoid reverting reviews unless they are done in bad faith.
Also, as you probably know (I noticed you edited the template), PEEING have stopped (or should have stopped) using those "Pee Reserved" templates except when actually doing the review immediately. Not sure how we started leaving em on for days, that was not the intention...
Anyway, I hope you like what I'm trying to do here... Obviously go ahead and copy my page over to the actual page if you think you have a solution. You will need to un-comment the categories at the bottom. Cheers. :-):44, Aug 1
- Ok, I changed it so that it'll remove things in brackets from the title, unless the brackets are part of the title. So adding (quick) or (resubmit) or whatever to the end works well. It might also be possible to have a template that shows up when (quick) is in the title, so people who don't notice that or don't know what that means can be enlightened. And possibly automatically add a Quick review category.
- It might also be an idea to add a "quick review" or just "comments" header to the standard review page for quick comments and suggestions, to show that quick reviews are welcome even for articles that don't specifically request them. That also won't take it out of the queue to be properly reviewed. If you're happy as it is though, feel free to copy it over. → Spang ☃☃☃ 00:44, 02 Aug 2008
- Awesome. That appears to work rather well. I will make a few more changes to the "guidelines" to try to encourage some more people to make brief comments, and we can see if we actually get an improvement in how things work. I did notice that UN:PRG appears to be appearing in Uncyclopedia:Pee Review/Reviewed Pee I think because there are two Pee Tables on there which includes the page in the reviewed category. We had been stopping this from happening by putting a <option><choose> block around the tables, but that looks like it's not working anymore... I wondered if you had any idea why:07, Aug 4
- Could be because the choose extension has been upgraded to parse the contents of options, which is why we can now have choose tags within choose tags, and categories work inside them too. You could fix it by adding a notcategory=Pee_review_stuff or something line to the DPL (or forum, if it uses that), and then add that category to the pages you don't want appearing in the list. → Spang ☃☃☃ 23:27, 04 Aug 2008
- Cool. Turned out there was a "notcategory=Uncyclopedia" in the DPL already, so I just added UN:PRG to that category.:48, Aug 4
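For reference, the exclusion trick Spang describes looks roughly like this in DynamicPageList syntax. The category names here are only illustrative ("Uncyclopedia" is the notcategory mentioned in the thread; the listing category and the other parameters on the real page will differ):

```
<DPL>
category    = Pee Review
notcategory = Uncyclopedia
</DPL>
```

Any page that is also in the notcategory category gets filtered out of the listing, which is why adding UN:PRG to an excluded category was enough to stop it showing up in the reviewed list.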
Ratingsystem
Hello Spang,
This is Sergeant Pepper from the Dutch Oncyclopedia. Lately when I was clicking random page here on Uncyc I noticed that a rating system had been added. I've been asking around on IRC and by a short detour I ended up asking you about it. See, the thing is that we're interested in using it on our wiki. Importing it in MediaWiki:Common.js did nothing (as expected). So I would like to ask you if you're willing to help us get it to work. If so (or if you don't feel like doing it) could you please leave a message at Vsotvep's discussion page (an admin who's helping me with this matter). We would really appreciate it if you want to help us with this.
Hope to hear from you soon, greetings, Sgt pepper 23:17, 23 August 2008 (UTC) ()
I'm sorry for the language, it's getting rather late here in Holland at the moment.
- Hi. The rating system thing here uses wikia's rating system, so it wouldn't work on your wiki, as it's not hosted by wikia. Though trust me, it's hardly worth moving to wikia for. Your best bet would be to find a rating system extension for mediawiki that someone else has already written if one exists, or write one yourself, and get carlb to install it. And then adapt my code to work with it, if it needs it. The one here is lacking a good few features, such as getting the total number of votes, or getting lists of top rated articles, which I can't do anything about. So finding one that does those things would be better anyway. → Spang ☃☃☃ 06:25, 30 Aug 2008
- Okay, thanks for the reply. We'll be moving to our own server soon. Adapting your code sounds like the best thing to do next to finding an existing one. Again, thanks for the reply. Greetings Sgt pepper 11:48, 30 August 2008 (UTC)
?
? -:12, 30 August 2008 (UTC)
- Well basically the secret is that I made you break your ultimatum two more times. But only two because you stopped playing along. I'll have to try harder next time. → Spang ☃☃☃ 06:17, 30 Aug 2008
- DAMMIT! /me fell for it big time D: Someone here just got owned -, 30 August 2008 (UTC)
Template
So yeah, y'know the VFH template? I was just wondering: it picks up the page name automatically to link to the VFH nom and stuff. Is there a way to add the ability to specify a different page name for the VFH nom if it's a re-nom (of a quasi, for instance)? Just occurred to me, so may well have been discussed before, but two seconds' intensive searching revealed nothing and I'm too lazy to look everywhere, so I figured I'd just:53, Sep 2
- Use the nompage parameter on the VFH template. Like {{VFH|Article (2nd try)}} or whatever. → Spang ☃☃☃ 08:05, 02 Sep 2008
- Ah. I see. Sort of tried that, but got confused when it didn't work in a preview. Works fine. I know nothing about anything. Good job I don't work with computers for a living or anything, isn't it?:17, Sep 2
FU...
... From the Philippines! —rc (t) 09:17, 11 September 2008 (UTC)
- FU RC... from space! (The space in my head) I'm surprised the Philippines has enough internet to load this page, actually. → Spang ☃☃☃ 09:33, 11 Sep 2008
- Yeah, I just have to tolerate slow/broken connections and tiny sweaty internet cafes where locals have no problem blatantly peering over your shoulder at your personal emails and such.
- Okay, see you again in another month maybe. Give my love to whoever's left at this joint. —rc (t) 09:56, 11 September 2008 (UTC)
categories
hey spang, is there a way to add a category to a page without the category box appearing at the bottom? i ask because the same articles (An article that contains nothing but a full stop, International Page Blanking Day, etc.) tend to clog up the uncategorized pages. thanks! 19:45, 11 September 2008 (UTC)
- I left those out, it was my understanding that re-skins or pages that were novelty rather than articles were not categorized... — Sir Sycamore (talk) 19:49, 11 September 2008 (UTC)
- right, i'm just looking for a way to put these pages in, say, Category:Uncyclopedia In-Jokes, without mucking up the page itself with a category box. 20:14, 11 September 2008 (UTC)
- No, because putting anything else on those pages, even hidden, would defeat the joke of them. And I doubt they're "clogging up" the maintenance pages. Maintenance pages like that are never going to be empty really, so you might as well have some you expect to be on there. People were the same with the full stop article always being at the top of short pages list, but they got used to it eventually. → Spang ☃☃☃ 21:05, 11 Sep 2008
- okay, that's what i expected, but i just wanted to double-check. most of those pages are protected anyway, i was thinking more along the lines of Phishing. and i guess you're right about the 'clogging up' bit, there's only ever about 100 pages anyway and it's easy to pick out the new additions. thanks. 21:20, 11 September 2008 (UTC)
Thanks!!
Hey! I'm pretty sure we have talked before but your page isn't on my watch list so I'll take this opportunity to give you my formal greetings. Hi, I'm Orian, pleased to meet you. Also, I've been asking everyone, what was it you liked about the article? I've never seen something get so many votes and no againsts and am just wondering what I did right. Thanks 03:12 13 September 2008
Is TKF being a Numpty?
I noticed you unblocked a few people who TKF banned, but his non-descript ban reasons don't really say much.
I'm not intervening or anything, just curious. --) 03:34, 24 September 2008 (UTC)
- TKF likes to ban people who leave "I'm leaving" notes on their userpage. I'm not sure of his reasoning, but I don't think it's entirely constructive. Or maybe it is. Who knows! → Spang ☃☃☃ 13:34, 24 Sep 2008
Forum:Last person to edit wins Dearchival
Well, I am noting this on your talk page that you dearchived Forum:Last person to edit wins. I archived it because it was over 50 KB long and to try to save you guys some bandwidth. Well, based on looking at your talk page here, your POV seems to be that you love long pages, so can you go ahead and huff Forum:Last person to edit wins/archive1. My only other option is revert war and I don't want to go there again. ----Pleb- Sawblade5 [block me!] ( yell | FAQ | I did this ) 04:55, 29 September 2008 (UTC)
- Making that page harder for other people to edit is not really a problem :P → Spang ☃☃☃ 04:58, 29 Sep 2008
Whoa Black Betty, spang-a-lang
Whoa Black Betty, spang-a-lang
Black Betty had a child, spang-a-lang
The darn thing gone wild, spang-a-lang
She said she's beating up her mind, spang-a-lang
The darn thing is blind, spang-a-lang
I said oh Black Betty, spang-a-lang
Whoa Black Betty, spang-a-lang
--Bill Hart 16:38, 30 September 2008 (UTC)
Thanks for feature update
Thanks for updating the featured version of the Marijuana article to include my last edit. Though the change I made was small and probably not that significant, I do appreciate that you took the time to include it. :37, 30 October 2008 (UTC)
Feature queue
Thanks for the tip. – Sir Skullthumper, MD (criticize • writings • SU&W) 01:36 Nov 01, 2008
Gadget thingy
Hi Spang. I keep forgetting to leave this message... the Gadgets Extension you asked for some time ago is now live on Uncyc. Regards -- sannse (talk) 14:25, 13 November 2008 (UTC)
Sitenotice?
Hey Spang, why'd you get rid of that banner just now? Wikipedia still has their thingy up, so I kinda feel like we should have something at the top of Uncyc,:56, Nov 25
- Thought wikipedia had taken theirs down because I forgot that I blocked their sitenotice before. Feel free to put it back up. Though it is pretty ugly. → Spang ☃☃☃ 01:38, 26 Nov 2008
UnNews main page
Something very bizarre is happening there. Can you take a look? Thanks. ~
11:55, 1 December 2008 (UTC)
- I took a look, but I don't see it... → Spang ☃☃☃ 02:18, 03 Dec 2008
- It's the Spang syndrome. Post anything on Spang's talk page, and the thing sorts itself all by its own, out of sheer fright. ~ Mordillo where is my FUCK? 08:31, 3 December 2008 (UTC)
have a ninjastar
Happy Hanukkah
Merry Christmas :33, Dec 21
Happy Holidays from all of us at SysRq Waste Disposal and Grinder Co.
Here's Your Christmas Tree
----Pleb- Sawblade5 [block me!] ( yell | FAQ | I did this ) 08:47, 25 December 2008 (UTC)
About Sex and The Intercourse Article you redirected
Hi, I hope you can reconsider the redirect because they have their own satirical meanings. I spent a good 2 days doing them both up as such. - Red1 05:29, 31 December 2008 (UTC)
- Oh sorry.. it is still there! I mixed up mine with the leading caps one. Mine is sexual intercourse. So i wonder if we can redirect the diff caps to this? - Red1 05:37, 31 December 2008 (UTC)
Spangalicious
Hya Spang, can you check that {{2008Top10}} is OK code wise? Also, if you can take a look on the top 2007, something funny happened with those templates. Thanks. ~
14:29, 20 January 2009 (UTC)
- Looks ok to me, I think... And I saw the other one not working, but now it is. I blame wikia. → Spang ☃☃☃ 02:14, 25 Jan 2009
Queue
Hya,
I think there's something wrong with the queue or else I can't purge my cache :). The queue itself looks fine, but the updated article isn't showing on the main page. Can you take a look? ~
12:13, 4 February 2009 (UTC)
- Probably because you numbered the Top 10 list wrongly :P. → Spang ☃☃☃ 03:57, 05 Feb 2009
VFD for Template:Pokemonwarning
I have been watching the UN:VFD for Template:Pokemonwarning and it appears there is a majority vote for deletion. They mention your edits on the VFD, so I would suggest you cast a vote (while you still can) on the deletion of the template there. Otherwise I'm afraid we'll lose one of the good In Joke Templates forever, in the recent rash of Template VFDs. --Pleb- Sawblade5 [block me!] ( yell | FAQ | I did this ) 06:06, 8 February 2009 (UTC)
- Dunno if I cared about it that much, just noticed that most people were voting to delete based on a version that had the relevant jokes removed and random gifs added. I restored it to your userspace anyway, continue using it where you see fit :) → Spang ☃☃☃ 21:20, 08 Feb 2009
New Spang Meme
Inside. ~
21:11, 9 February 2009 (UTC)
- I'm not so sure about that one. Also, the main page is correct - that article was featured one year ago (as part of the top 10), and also on the 6th of November 2007. It'll do the same for all the other top 10 features last year. I could probably fix it, but I'm not sure I care enough :) → Spang ☃☃☃ 22:03, 09 Feb 2009
Erk, something seems to be happening that I don't think should be happening
Hi Spang - the "create new VFH" and "create new pee review" boxes don't seem to be working - they're importing the text of the articles entered instead of the appropriate template tables. And it seems like when saved, the changes are made to the article proper as well. This lacks goodness - any ideas what might have caused:25, Feb 24
- Wikia suck, is the answer you are looking for. They seem to have updated the inputbox extension to one that doesn't support the prefix= parameter for "create" type inputboxes. Complain to them about it. Might have another look later. → Spang ☃☃☃ 14:20, 24 Feb 2009
- Rightyho,:23, Feb 24
- Yeah, I went and complained about it, but it was already fixed, so now I look like a fool. A FOOL! Thanks for nothing! → Spang ☃☃☃ 23:06, 24 Feb 2009
- Thanks Spang, much:17, Feb 25
- Looking like a fool for complaining to Wikia? How is that even possible? -Sockpuppet of an unregistered user 09:22, 25 February 2009 (UTC)
edit Private Eye
Thank you so kindly for setting the table for the article. Now I also have a reference point should I decide to get ambitious again and I can dood it myself!--01:14, 2 March 2009 (UTC)
edit Animal "Catch All" categories
I've been doing some bot categorization to form some "catch all" categories. For example Category:Mammal Images CA is a catch all category, while Category:Mammal Images is a [edit: "whatnot/presort" category]. I think keeping the whatnot separate from the pre-sort was a good idea, the mammal images has over 2000 images, overwhelming the nonsubcategorized images.
Also, having the separate catchall category makes it easier to spot images that need proper subcating. --Mnb'z 08:09, 6 March 2009 (UTC)
edit The last thank you is never the least
The epic would still be stranded on VFH if it weren't for SPANG! A double thank-you is also in order for tabling the article. I'm already using your handy guide to write the next epic. It's still in raw construction mode but the research is done. Might you be available for questions should I venture outside my box of knowledge again? (Shhhh, your here to thank Spang, not mooch). Thanks so much for your help/vote!-- 12:23, 8 March 2009 (UTC)
edit Purim
OK, so you're not Jewish. Who cares? This is a great excuse to get drunk.
edit Undeletion Request
Could you please hack up Image:Sexy girls.jpg. Yes, I know Socky likes to spam it. However, I think the dead links look worse. --Mnb'z 07:06, 12 March 2009 (UTC)
Hey look, I'm sorry for maybe using it a little bit too much. I wont spam it again, but please restore the image. -Sockpuppet of an unregistered user 15:59, 12 March 2009 (UTC)
edit Hello
I've reuploaded the image, but before you freak out. Please, I promise not to spam the image anymore. If you want, I'll even change the image in my welcome messages. -Sockpuppet of an unregistered user 17:10, 12 March 2009 (UTC)
- I hates spam, I do. I'll probably do the same the next time I see a Rouge the Bat image. And maybe I'll protect it next time too. → Spang ☃☃☃ 22:50, 12 Mar 2009
edit Help! Spang support required!
Please? :-) It would appear that someone (probably me) has broken things such that Template:Recently featured appears to not be appearing with the correct appearance. I noticed this yesterday, and figured that maybe things were lagging a bit, but it's still not changed. UnBooks:Travels Through the Tropics was the most recent when I wrote this. Maybe it's this new cache thing and we need to put a dummy parameter somewhere? Tis probably my general incompetence I'm sure. Anyway, I had a look, and have not been able to figure it out, so I was hoping you might be able to take a closer:49, Mar 14
- Was probably the template caching thing. Should be fixed now. → Spang ☃☃☃ 06:48, 14 Mar 2009
- Maybe, but it's still not changing for me on the actual template itself. Maybe it will later:52, Mar 14
- Turns out it was (also) a combination of a side effect of switching back to featuring every day, and a bug in the code which was sorting the articles by feature date alphabetically (1st, 10th, 2nd, etc). It's actually fixed now. → Spang ☃☃☃ 23:22, 14 Mar 2009
- Spang support successful. Again. Thanks, Mar 14
edit I saw you!
Yea, it's me again, begging for help... Looking at the Main page it appears that there is a problem with Template:FA.dpl.default ? That's the thing which shows previously featured articles right? Anyway, as you can see it's just a red link. Sorry to pester again, I will learn this stuff (I watch what you do to fix things) but I have no idea. Maybe there was no old feature? I can see [[2]] but I admit to having no idea what the default thing does. Hopefully you have a few mins...:36, Mar 19
- Thanks, fixed it. The default thing comes up when the DPL query can't find the {{FA}} template on the page. In this case because someone had replaced it with a custom version. I've created the FA.dpl.default and made it so that if there's no FA it just guesses the feature date and ignores the featured revision, which is what it needs the FA template for. Could also have it just not show up if there's no FA template, if that might be better. → Spang ☃☃☃ 01:39, 19 Mar 2009
- Cool. I was watching your edits and I'm starting to learn... Yea, I'm not sure about what's best if there was no FA. Did I mention that you rock? I'm sure I did somewhere. :-) 19
edit Thanks!
Also do you ever archive? )}" > 12:43 26 March 2009
edit I was...
Starting to think that the joke was on me! I will fix the formatting, so it looks better. Thanks for doing whatever you did to fix, Apr 1
- It helps to know how it actually all works, I guess. → Spang ☃☃☃ 02:12, 01 Apr 2009
- Hmm, it would appear that this and this got caught up in the flack when sorting out:59, Apr 1
edit Oh the spectacle
Whenever an internet 10 car pile up occurs, you can't help but read and be entertained while a handful of people take it all deadly serious. Whenever it happens I'm always reminded of a line I once heard in a British documentary. I forget what the gist of the program was but they were filming scenes on a fairly large tourist boat heading up the Amazon. A section of the river is a known crocodile hangout and the ship's crew are tossing them butcher scraps - causing a feeding frenzy of crocs in the murky brown water. The narrator talks briefly about this "environmentally unfriendly" spectacle and ends with the line "....but the tourists love it and the birds pick up what's left". I always remember that when I see any form of social train wreck.--10:00, 2 April 2009 (UTC)
- It's like reality TV, except it's not on TV. And may or may not be real. → Spang ☃☃☃ 10:04, 02 Apr 2009
edit Oh Spang!
I have an HTML-over-my-head situation. I'm going to attempt to fix Uncyclopedia:Store but one item I'm unqualified to correct is the pages w/shopping cart. The Uncyc nav buttons are being pushed to the bottom of the article and creating the familiar massive white space of poor formatting. If you can fix one, I can probably go do the rest after seeing what you alter. Pretty please?-- 00:39, 5 April 2009 (UTC)
- That happens when there's an unmatched tag somewhere. In this case the div around all the "buy it now" buttons wasn't closed. Fix like so. Though that whole store thing is pretty useless anyway. → Spang ☃☃☃ 21:47, 06 Apr 2009
- I think I'm getting smarter. I read your post and saw "tag problem" with div and, without looking at your helpful cheat sheet, went in and figured it out fairly quick. Tanks! Seriously though, I'm going to make this sucker look good. It's great HTML experience.--12:48, 7 April 2009 (UTC)
edit Just noticed
You have a tiny little notice up top that says to not leave voting thx templates here......only just noticed. It looks like a lot of other people didn't see it either! Well, if you want to look briefly at naked barbies for you vote, you may do so here.-- 12:27, 10 April 2009 (UTC)
edit Dearest Spangles:
This isn't really important or relevant to the wiki at all, but I've got a bit of formatting mystery on my hands that I'd like solved. It's this template, you see. People around here, they call it {{QVFD checked}}, or, {{QVFDc}} for short.
It's supposed to be in fucking monospace font, but here (where here = Firefox 3.0.8 on Mac OS X) it appears horribly misaligned. How the devil is this possible in monospace font? – Sir Skullthumper, MD (criticize • writings • SU&W) 20:55 Apr 08, 2009
- It's probably your browser settings. Go to tools > options > content > fonts & colours > advanced, and make sure the font listed for "monospace" is actually a monospaced font. It sounds like it would be obvious to actually use a monospace font there, but sometimes firefox forgets, or the system thinks it knows better. If it's not that, then I'm not sure what else it could be - it looks fine to me.
- Also, a handy tip: use browserpool to check to see what websites look like on different systems/browsers with minimum fuss. Use the free test version, which says it gives you 10 tries, but it nicely resets that number every time you quit and restart the program. → Spang ☃☃☃ 20:33, 09 Apr 2009
- Thanks muchly for the explanation & the link; it seems that the font is indeed monospace, but it might be that the ASCII the template uses is somehow different... or maybe the spacing is... or... hell, I don't know, it's firefox on a Mac, something's bound to go wrong. I'll check out that browserpool thing, as it looks mad useful. – Sir Skullthumper, MD (criticize • writings • SU&W) 05:53 Apr 17, 2009:58, 17 April 2009 (UTC):03, 17 April 2009 (UTC)
- So much that I had to do it:13, 17 April 2009 (UTC)
Thanks I guess. Though you should probably look into using memes correctly :P → Spang ☃☃☃ 22:17, 29 Apr 2009
edit Hello Spang
I've been noticing how some VFH nominations fail to appear on VFH, or at least don't appear right away. I suspect that it has something to do with having more than 20 nominations at the same time. Could you look into this:14, 29 April 2009 (UTC)
- Which ones? Probably a caching thing. Try purging the page. → Spang ☃☃☃ 21:16, 29 Apr 2009
- That's what I tried, repeatedly. It did appear eventually, but not because I purged it, it just seems to take some time. Maybe that DPL thing is just a bit slow at:55, 29 April 2009 (UTC)
- Failed ones aren't disappearing now, either (there's one there that was failed 2.5 hours ago). It's like the world has gone crazy or sumpin'. Sir Modusoperandi Boinc! 22:36, 30 April 2009 (UTC)
- I think it's probably some caching thing at wikia. Always blame wikia. → Spang ☃☃☃ 02:32, 01 May 2009
- Ah. Sir Modusoperandi Boinc! 02:56, 1 May 2009 (UTC)
edit While I'm here...
edit We need SPANG SUPPORT!
Master Spang, it seems that the watch/unwatch gimick thingy stopped working after the latest mediawiki update. Can you take a look? ~
16:57, 3 May 2009 (UTC)
- What's not working about it? → Spang ☃☃☃ 21:02, 04 May 2009
- It doesn't allow me to change status, it's just stuck.~
21:06, 4 May 2009 (UTC)
- It's working for me... what browser are you using? Any messages in the error log? → Spang ☃☃☃ 21:26, 04 May 2009
- FF 3.0.10. Didn't find any specific messages to that one on the error console, only a couple regarding Oli's auto QVFD one. ~
10:01, 5 May 2009 (UTC)
- Ah, but if one script is broken, it can sometimes break all the rest. Try taking it out and see if it works. → Spang ☃☃☃ 21:50, 05 May 2009
- Ah! Got this very scary error now - Error: uncaught exception: [Exception... "Access to restricted URI denied" code: "1012" nsresult: "0x805303f4 (NS_ERROR_DOM_BAD_URI)" location: " Line: 1"] ~
22:00, 5 May 2009 (UTC)
- I just realised it was because you had my autowatch sript in there too. I took it out, because mediawiki does that anyway now. I didn't realise anybody was still using it :P → Spang ☃☃☃ 22:05, 05 May 2009
- Horray! success! While you're at it, is that the reason that I only see rollback when I'm actually in the history of the article but not in recent changes? ~
22:18, 5 May 2009 (UTC)
edit well hello
Hello. I was told that you might help me here. Right here. I am planning to rewrite Thom Yorke in an apparent style of his own attitude. As there is in the Wiki article, I wish for an infobox of which to play with in the article. The kind of coding is thus:
{{Infobox musical artist | Name = Thom Yorke | Img = Thom Yorke.jpg | Img_capt = Thom Yorke in concert at [[Brixton]]. | Img_size = | Background = solo_singer | Birth_name = Thomas Edward Yorke | Born = {{birth date and age|1968|10|7|df=y}}<br>[[Wellingborough]], [[Northamptonshire]] | Instrument = [[Vocals]]<br>[[Guitar]]<br>[[Piano]]<br>[[Keyboards]]<br>[[Percussion]]<br>[[Bass guitar]] | Genre = [[Alternative rock]]<br>[[Electronic music|Electronic]] | Occupation = [[Musician]] | Years_active = 1991—present | Label = [[XL Records|XL]] | Associated_acts = [[Radiohead]], [[UNKLE]], [[Björk]] }}
But there seems to be no kind of infobox coding to be found on Uncyclopedia. Would you help me, please? --
-kun "whisper sweet nothings into thine ear..." 14:49, 10 May 2009 (UTC)
- Steal that infobox template off wikipedia? Or just copy and adapt one of the infoboxes we do have. Also, I kinda like the Thom Yorke article. Articles "in the style of the subject" are quite overdone, and usually way too obvious... → Spang ☃☃☃ 01:14, 11 May 2009
- well, fine. i wanted an excuse not to contribute anything here anyway. bye. --
-kun "whisper sweet nothings into thine ear..." 10:28, 11 May 2009 (UTC)
- Aw, there's still plenty other things for you to do! If you really want to do it, go for it. Maybe write it as an autobiography and have it as a separate article? Best of both worlds! → Spang ☃☃☃ 14:12, 11 May 2009
- no, no, no, no, no, no, no, no, no. i've already stated what i'm going to do and my depression protocol insists that i stick by it because people in this condition are literally impossible to move from their argument and they'll go on and on and on and on about how they are right in order to impress their despondence all the more. you know what i'm going to do, miss? i'm going to write the article in my userspace, deny anyone from moving it, and put in very clear letters at the top of the talk page, when complete, a very large rude word followed by your name. i'm the generation's streamliner and i'm going to be here all night. --
-kun "whisper sweet nothings into thine ear..." 22:14, 11 May 2009 (UTC)
- If I may butt in, but I have done quite a bit of work on some infobox templates. Check out the one on smartass. I can adapt it for you... Let me 14:57, 11 May 2009
- thank you message sent (where applicable). --
-kun "whisper sweet nothings into thine ear..." 22:14, 11 May 2009 (UTC)
I got Template:Infobox musical artist to the right specifications, more or less. Could you please look at it and tell me why it's not wrapping the way wiki tables normally do? --Pleb SYNDROME CUN medicate (butt poop!!!!) 05:08, 12 May 2009 (UTC)
- Nevermind. Sorted. Have fun never archiving your talk page. --Pleb SYNDROME CUN medicate (butt poop!!!!) 05:18, 12 May 2009 (UTC)
edit {{Title}} is officially borked
Anything you can do about it? ~
21:26, 13 May 2009 (UTC)
- Working for me. Any specific pages it's broken on? → Spang ☃☃☃ 01:04, 14 May 2009
edit Auto-Generated Categories?
It appears that Category:Pages with too many expensive parser function calls Category:Pages where template include size is exceeded are auto-generating. The first one only contains Vandalism/example on wheels!/Archive 07, and the 2nd Captain Oblivious & Making up Oscar Wilde quotes, both {{Q}} -spam articles. Does that have something to do with the wiki upgrade.? --Mnb'z 03:03, 18 May 2009 (UTC)
edit Show/Hide
Socky told me you were the one to answer my question. Do you know if it's possible to change the words on the [show] [hide] buttons, or make an expand thing with chosen words or something along those lines?)}" > 16:44,18May,2009
Maybe it'll help if I'm more specific. I'm gonna write an article on Homestar Runner. I wanna give the option to [view intro], causing the intro to become visible, or [come on in], causing it to stay hidden. Go to if you are unfamiliar with:31,19May,2009
Please respond as soon as possible. I really want to get a start on this article and this advice would be a big help.:58,20May,2009
- Sorry, not possible. If you want, you can learn Javascript and change the function that creates the show/hide links to allow custom text in a safe way. Other than that, there's no way to do what you want. → Spang ☃☃☃ 03:01, 20 May 2009
edit Thanks & questions
Thanks for support, first of all. Then:
- randomness seems to defeat benefit 1
- of course it's all the same to me who edits the portals but there could be SOME supervising, so that at least accidentally typo-ridden stuff is left out. OK, I'm the father of the bastard, so I guess I should take care of policing myself. People usually listen to what they're told. Sometimes. Seldom? Come on, it's gotta be more often than "never!"
- one banner? please heggsbl:57, 19 May 2009 (UTC)
- Right, now I got you about the "one banner" thingy. Would you only leave the "All"-category as it is? I thought it would be necessary to leave the original pages of the categories as they are, and links to them as well. Or is it enough to have the link to the original category page on the portal page? --:12, 19 May 2009 (UTC)
- I meant as in a random selection of articles from a certain category, or ones that have been hand picked, so there's no need to update too often. Also, I don't think they need any more supervision than a normal wiki page. No page need an "owner", just as many people interested in editing it as possible. It's what the talk page is for.
- And if there was, say, a science portal, there would be no need for the science category link, so it could be swapped right out, as a category link isn't that useful. And then maybe on the science portal banner or whatever there could be a link to the category for "all science articles", next to some categories for more specific science subjects. If you want to make one, don't wait for someone to say it's ok, just go for it! Copy the equivalent wikipedia portal (there's thousands to choose from) and go from there. As long as you focus on content, and not the colours of the boxes and/or the layout and stuff like that. → Spang ☃☃☃ 03:08, 20 May 2009
- basically I agree
- the portals I tried won't work straight off here because they have templates and stuff that don't exist here. But OK, I'll find a right one sooner or later. I'm not actually waiting for someone to tell me I can make one, just general lack of time and knowledge.
- "No need for category link" bit is clear
- I cannot help but want them to be hand-picked. That's my main reason for suggesting the whole thing. There could also be some guideline, like "if you promote something here, run a spellcheck at least".
- with an owner or without, it's for you in the Cabal to decide. If it goes without, there could be some rule like I suggested in the first place: don't change more often than every two days, once a week, or 15:52, 20 May 2009 (UTC)
- Got it - I'll have to swipe the templates as well. Only took me about two days to understand that. I'm already smarter than, er... dumb? -- 19:08, 20 May 2009 (UTC)
- Yup, if these portals work out, we'll probably need the templates anyway. Also, I only suggested the random thing in the case where there isn't anyone who wants to become an editor of the portal. Obviously someone picking the articles is better. And yeah, though "run a spell check at least" would go for any article. What I'm trying to say there is that all you need is common sense - no need to have a designated editor or protecting the pages or anything like that. And there's no need for voting for what goes on the portal. Common sense should be enough. If something goes wrong, it can always be reverted! Also, there is no Cabal. → Spang ☃☃☃ 21:34, 20 May 2009
- "Should" being the catch. But OK, I agree. I tried making sense of the Wikipedia portals but it's too much for a beginner, and there are too many links upon links upon links. But I'm starting with Mordillo's user page copy:23, 21 May 2009 (UTC)
edit You should archive the current version of Last person to edit wins and start a new one
If it gets any larger, it could lag a Cray supercomputer. User:SPARTAN-984/sig 17:26, 24 May 2009 (UTC)
- Coincidentally, the same goes for your talkpage, Sp:02, 25 May 2009 (UTC)
- Either way, I figure not archiving keep out the weak ones. This way you need to have a real need to leave a comment if you have to wait till it's all loaded to do it. And nobody ever said you couldn't blank last edit, the winning edit just has to be the last one. To that specific page. So archiving wouldn't work. That's what a history is for, surely. → Spang ☃☃☃ 01:08, 25 May 2009
- "a real need to leave a comment" That sounds kinda pathetic. , 25 May 2009 (UTC)
- You look kinda pathetic. → Spang ☃☃☃ 01:16, 25 May 2009
- What the:19, 25 May 2009 (UTC)
- What the YOUR MOM! → Spang ☃☃☃ 01:30, 25 May 2009
- FU Spang! Why do you always involve my mom in this sorta thing? Keep her out:40, 25 May 2009 (UTC)
- I bet a future user will name himself FUSpang. XD. User:SPARTAN-984/sig 22:30, 25 May 2009 (UTC)
edit Science portal next to ready
In a couple of days it should be ready to run, just needs some polishing, better (and more) quotes and one idea more. If you come up with something feel free to help: User:Multiliteralist/Science portal. --:38, 24 May 2009 (UTC)
- Awesome. If it looks mostly finished, I'll add it to the main page. As soon as I can be bothered. → Spang ☃☃☃ 01:18, 25 May 2009
- Doesn't yet, I'll let you know. It has clear emptiness on it:41, 25 May 2009 (UTC)
- The Game Portal is also almost done (or good enough). Although it could stand some cosmetic fixes, and I'm not sure if the cabal approves of the "Promotion zone" setup, i.e. its a free-for-all right now. --Mnb'z 05:20, 25 May 2009 (UTC)
- It's a bit overwhelming in my view, I'd remove some content (not the sections, but the amount of articles you have on each sections). Also, the "featured X of the moment" sounds a bit like ED, if you ever seen their main page. How about just calling it - "Featured X"? ~
07:29, 25 May 2009 (UTC)
edit Any idea why VFH/Failed gave up after the 13st?
Yeah. That thing in the header. I probably should've put more of it down here. Sir Modusoperandi Boinc! 20:41, 25 May 2009 (UTC)
- I don't see anything wrong with it... → Spang ☃☃☃ 22:49, 25 May 2009
- You need to refresh, Mod:51, 25 May 2009 (UTC)
- It's fine. Now. It was wrong before. Don't make me dig out my WABAC machine! Sir Modusoperandi Boinc! 02:00, 26 May 2009 (UTC)
edit Thanks
...plus, any old excuse to make this talk page even longer... :) --T. (talk) 11:31, 28 May 2009 (UTC)
edit That thing in that place isn't working rightly
Your magic program is forgetting pages (see here). Have the walls of reality crumbled? Sir Modusoperandi Boinc! 06:24, 1 June 2009 (UTC)
- Almost. I think I fixed it just in time. I changed it about slightly, but you still do exactly the same thing to make it work.
- It was probably because the number of articles featured using the new system has gone over 500 (actually 615 now), so started ignoring some random articles. And just noticed we have 1,105 featured articles now. That's pretty impressive. → Spang ☃☃☃ 18:09, 01 Jun 2009
- "Impressive"? I think you mean "Worst". Sir Modusoperandi Boinc! 20:10, 1 June 2009 (UTC)
- Modus said "Worst", LOL. :28, 1 June 2009 (UTC)
edit Socky
Is fucking with You know what. Saberwolf116 14:06, 6 June 2009 (UTC)
- Nobody cares. → Spang ☃☃☃ 21:15, 06 Jun 2009
- Indeed. —Sir Socky
Mon - August 30, 2010
A little Python program to create to-dos in Things from a template file
I quite like the to-do list manager Things from the company Cultured Code. (Like plenty of other people, I expect, I'm waiting for repeating to-dos on the iPhone version, but even without that feature it's far the best one I've seen.) I also have a list of things to do in preparation for going on a trip that I've refined some over the years. (Make sure I have enough books, get any foreign currency I'll need, get a hostess present, etc.) Since some of the items on the list may require that I wait for something to be shipped or for other people to do things, I have various indications in the list of how long before I leave that I ought to begin each item.
I've sometimes thought that it would be convenient if Things could make use of that list without my having to go to the trouble of entering each item. That's the sort of thing that computers are meant to do, after all. I thought of emailing the folks at Cultured Code and suggesting a feature of that sort. And then I noticed that the Macintosh version of Things can be controlled by AppleScript. And there's a Python module that can send the necessary events (so I didn't have to figure out much about AppleScript). So I hacked up a little Python program to read my file and create the appropriate to-dos in Things.
It's unimaginatively called todosfromfile.py and it's licensed under the GPL. You can get it here.
The file it reads has two sorts of lines: "time-before" lines and to-do lines. The time-before lines specify how long in advance the subsequent to-dos should be due. So the file might look like this:
1 month
Make sure I have the clothes I'll need
2 weeks
Get books
Get hostess present
Foreign currency
1 week
Program phone with addresses
Make sure I have enough cash
2 days
Get weather forecast
1 day
Check in
On day
Pack laptop, chargers
Blank lines and lines beginning with a "#" are ignored.
The program creates a project in Things and adds the to-dos to it. It's run from the command-line and takes three arguments: the name of the template file (--file), the name to give the project (--name), and the date to count backward from (--date). The date needs to be in YYYY-MM-DD format. So I might run it as:
$ python todosfromfile.py --file trip.templ --name 'Trip to Maine' --date 2010-10-31
It requires the Appscript library and Python 2.7. It only needs Python 2.7 because I used the argparse module and that could easily be changed.
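To make the behavior concrete, here's a minimal sketch of how the time-before bookkeeping might work. This is my own reconstruction, not code from todosfromfile.py; the function name, the unit table, and the 30-day approximation of a month are all assumptions:

```python
import datetime
import re

# Rough day counts per unit; a "month" is crudely approximated as 30 days.
UNITS = {"day": 1, "days": 1, "week": 7, "weeks": 7, "month": 30, "months": 30}

def parse_template(lines, end_date):
    """Return a list of (due_date, title) pairs from template lines.

    A time-before line such as "2 weeks" sets the offset for the to-do
    lines that follow it; "On day" resets the offset to zero.  Blank
    lines and lines starting with "#" are ignored.
    """
    offset_days = 0
    todos = []
    for raw in lines:
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        match = re.match(r"(\d+)\s+(day|days|week|weeks|month|months)$", line)
        if match:
            offset_days = int(match.group(1)) * UNITS[match.group(2)]
        elif line.lower() == "on day":
            offset_days = 0
        else:
            due = end_date - datetime.timedelta(days=offset_days)
            todos.append((due, line))
    return todos
```

Feeding the trip template above through something like this with --date 2010-10-31 would put "Get books" on the 17th, "Check in" on the 30th, and so on; the actual program would then hand each pair to Things via Appscript.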
Posted at 07:04
Wed - April 21, 2010
Errno 35 in Python's socket.sendall() under OS X
A little while ago, on the python-help mailing list, a question came up that took a bit of work to find the answer to. Since that list's archives aren't public and Google doesn't seem to have indexed a page with a good discussion of the issue, I thought I'd post about it here (with the original poster's permission, of course).
The poster was using Python 2.6.2 under Mac OS X 10.6.2, with Python's ftplib module, to upload certain files. Some of them would fail consistently, with the same number of bytes transferred and with a traceback that ended with:
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/ftplib.py", line 452, in storbinary
    conn.sendall(buf)
  File "<string>", line 1, in sendall
error: [Errno 35] Resource temporarily unavailable
It's pretty clear from that that the OS was temporarily running out of network buffers. But why doesn't the socket's send() method just block until it completes?
The reason is that a socket timeout had been set. If you set a socket timeout in Python (whether through the socket module or something that uses the socket module), sockets are set to be non-blocking "under the covers". (That's pretty much the only way to implement that feature.) An awkward side-effect of doing that is that errors resulting from timeouts don't always look like what they are.
[Edited April 27, 2010; the earlier version of this post was based on an incomplete understanding of the problem.]
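One way to cope is to catch EAGAIN and wait for the socket to become writable before retrying the send. This is a sketch of my own (written for Python 3, where socket errors are OSError subclasses), not the poster's code:

```python
import errno
import select
import socket

def sendall_with_retry(sock, data, wait=5.0):
    """Send all of `data`, retrying when the kernel's buffers are full.

    On a non-blocking socket (which is what Python gives you under the
    covers once a timeout is set), send() can raise EAGAIN -- errno 35
    on OS X, "Resource temporarily unavailable" -- instead of blocking.
    Here we wait for writability with select() and then try again.
    """
    view = memoryview(data)
    sent = 0
    while sent < len(view):
        try:
            sent += sock.send(view[sent:])
        except OSError as exc:
            if exc.errno not in (errno.EAGAIN, errno.EWOULDBLOCK):
                raise
            # Buffer full: wait (up to `wait` seconds) for writability.
            _, writable, _ = select.select([], [sock], [], wait)
            if not writable:
                raise socket.timeout("not writable after %s seconds" % wait)
    return sent
```

The same idea would have applied in the poster's Python 2.6 case, though wrapping ftplib's own sendall call that way is harder; raising the socket timeout (or removing it) was the simpler fix there.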
Posted at 07:37
Thu - December 21, 2006
Magellan RoadMate 2200T GPS receiver review
The Magellan 2200T portable GPS unit is mostly unremarkable. And that's brilliant. But before I tell you why I think that, let me tell you a little about how I came to buy one because that may be relevant to knowing why I think what I do.
I'm a confirmed gadget dork. I own more than one iPod. When I saw a buddy's DS Lite, I knew that I had to have one. I have two sets of lightweight headphones for different purposes. I live in Minneapolis, but my cellphone was never sold by an American carrier. I once bought a Japanese-market laptop.
And I considered getting a portable GPS unit more than once. But looking at the products on the websites of the two main manufacturers, Garmin and Magellan, is an exercise in annoyance. It's true that, from a marketing perspective, it makes sense to "segment" a market by selling different products at different price points to people who are willing to pay different amounts of money. But these guys are nuts. As of this writing, counting just portable GPS units that are intended for use in cars, Magellan sells 11 of them and Garmin sells 21. Who wants to read through the marketing drivel for that many products in order to try to figure out which one you really want?
The companies further annoy potential customers by leaving things out of the box that you're going to want and so have to pay extra for. Such as maps in some cases. Or enough storage to hold the maps for the whole U.S. I can't think of a product category in which the manufacturers do a better job of alienating potential customers. They certainly put me off a few times.
But then I used a car GPS unit on unfamiliar roads in difficult driving conditions. I was traveling alone to visit a buddy who lives in a small town in Maine. Northwest Airlines cancelled the leg of my flight from Detroit to Portland, Maine a few days before my trip and so I had to fly into Manchester, New Hampshire. The drive from Manchester into Maine looked like it would take a little more than two hours and I had good maps and good directions. That shouldn't be a big deal. I've done any number of drives like that in the past without any trouble. But the route was reasonably complicated, my plane would be landing late in the afternoon, and the weather forecast for the area wasn't all that good. So I called up Hertz the day before I left and told them I wanted a car with a GPS. They said that that wouldn't be a problem.
Getting in my car in the Hertz lot in Manchester, I found that making their "NeverLost" GPS unit work was a snap. Which it would really have to be, since otherwise they would have a bunch of annoyed customers going back into airports demanding that someone show them how the thing works. Hertz and Magellan must have done a lot of usability testing to make the unit as easy to use as it is.
Following the unit's guidance, I made a wrong turn just out of the airport. That was because I had mistaken the scale of the map it was displaying. But even that turned out to be just fine because it calmly displayed "Recomputing route" and told me where to go from where I had gone to. From there to the end of the trip (and, indeed, the return to Manchester) I had not the slightest difficulty reading its map or following its spoken directions. The Hertz unit was mounted near the passenger's left knee. It might have annoyed a passenger slightly but it worked fine for me. It was a little like playing Mario Kart DS, with the moving map below the view forward.
And it was great that the NeverLost worked so well and was so easy to use because it rained the whole way and most of the trip was in the dark. The rain varied between heavy and ridiculously heavy for almost all of the trip. I had the car's windshield wipers at their highest setting pretty well all the time and I occasionally wished for a higher setting. People were driving 45 MPH on highways posted for 65 and some of the roads in my buddy's town were flooded.
In circumstances like those, turning on the car's dome light to read a map or directions would have been pretty unsafe. And the limited amount of information I would have managed to gather doing that under those circumstances combined with the very limited visibility outside would certainly have resulted in my making several wrong turns and often having a suspicion that I had gone wrong even when I hadn't.
But the NeverLost's bright moving-map and voice instructions took me directly to where I was going. The noise of the rain on the windshield was often loud enough that I was concerned that I wouldn't be able to hear the voice prompts, but at the loudest setting it shouted comfortably over the racket. I was thoroughly glad to be done driving when it said in its synthesized voice, "You have arrived", but I would have been there considerably later and considerably more tired without it. As it was, the trip was very annoying. Without the NeverLost, it would have been a nightmare.
When I returned the car at the Hertz facility in Manchester, I told the person who was taking down the car's mileage and printing my receipt that I didn't know just what they had charged me for the GPS unit but, whatever it was, that it had been worth ten times the price. And so I returned home willing to go to the trouble of finding which portable GPS would be best.
A bunch of research suggested that the 2200T would probably be best and I suspect that it is. Other than the NeverLost which is similar, the only other car GPS I've used is one that was built into a buddy's car and so I can't give any very useful comparisons, but I can tell you why I like the 2200T.
I like it chiefly because, like the NeverLost, it does its job in an unremarkable way. The thing is most valuable in difficult conditions and the last thing you need then is a confusing display or an elaborate user interface. On the 2200T, both are admirably simple and clear. A geek should need ten or fifteen minutes and a glance or two at the manual in order to get familiar with the unit. A more normal person might take a little longer, but shouldn't take very long.
(I may be able to save you even brief trips to the manual, which is currently only available as a PDF to download, by telling you that "Enhanced POIs" are locations you enter using Magellan's PC software and the "Trip Planner" is for multi-stop trips. Magellan's software is Windows-only but you don't need it to use the normal functions of the unit. Also, many areas of the moving map screen that aren't obviously buttons respond to tapping on them.)
To use the 2200T, you begin by picking a destination. Actually, you don't have to do that. It will be happy to show you a moving map of your vicinity at any of various scales. But the screen is sufficiently small that it can't show an area more than a few miles across in any significant detail, so it's not all that useful for orienting yourself in an unfamiliar area. The same would be true of any map that's slightly smaller than a file card.
So you begin by selecting a destination. That can be a street address, an intersection, a "point of interest", an address that you've previously entered in your addressbook, or a place picked off the map. You pick a place on the map by zooming out, dragging the map around with your finger, zooming back in again, ensuring that where you want to go is at the center of the map, and tapping there. It's not very convenient, but it works and you're unlikely to need to use it much. "Points of interest" is GPS-speak for a sort of telephone book of locations. If you want to go to the nearest gas station or a particular restaurant or shopping mall, you can query the unit's database and pick where you want to go. GPS manufacturers differentiate their units in part by how many zillions of POIs are in their databases. The 2200T has a database of 1.5m POIs, which is small by current standards. Still, it seems to have everything in it that I can think of and it's been useful in a couple of real-world situations. Naturally, a database like that will eventually become out of date, but I'm sure that Magellan will offer updates.
Once you've picked a destination, the unit will display "Calculating route" for a few seconds or a little longer and then will display your route as a magenta line on the moving map. Just start driving along the line. The unit will display the map with your route ahead of you (assuming that you've configured it to show ahead as up). You'll be able to see your route easily because the screen is a wonder. It is perfectly readable in everything from complete darkness to bright sunlight. The map changes scale automatically using what seems to be a pretty intelligent algorithm to determine how much detail you'll want at what time. You can also change the scale yourself.
The unit displays the name of the road you're on, the name of the road you'll turn onto next, the distance to the next turn, the distance remaining for the trip, and the sort of turn you'll make next. (You can vary that a little, but that's the essence of it.) If you've configured it to (and you should, it's useful) the display will change to a split-screen shortly before a turn. Half of the screen displays the moving map and the other half displays a 3-D rendering of the turn to be made. That may not sound especially useful. How interesting is a picture of a left turn? The answer is that it can help to know what to look for if the turn is something other than a plain 90-degree turn. A picture of a right turn at a shallow angle followed by a sharp left turn over a bridge across the road is quite helpful.
But much better than just a moving map, the unit gives you voice prompts. Its synthesized voice informs you about upcoming turns well in advance, repeats the information closer to the turn, and then plays a chime just before you'll be turning the steering wheel. The software also pronounces the name of the road you'll be turning onto. GPS geeks call that "text to speech". Inevitably, there are some slightly odd pronunciations, but I haven't yet run into one that was incomprehensible, and that includes Wayzata Blvd. The virtue of the voice prompts is of course that you don't have to look at the thing much. It may be desirable to glance at it occasionally, but you can take your eyes off the road a lot less often than you would if you were using an ordinary map or directions.
The 2200T has a couple of other small advantages: The battery life seems pretty good. Mine ran about eight hours from a full charge before switching itself off. I strongly suspect that battery life is affected by how much work the unit is doing and since I wasn't asking it to do much that was very hard during those eight hours, I'd suggest taking that as an upper limit. Still, I'm pretty impressed.
The unit feels solidly built and it appears to be sealed against weather pretty well. I wouldn't advise you to take it swimming with you, but I doubt that a little rain would harm it.
The downside to its being pretty well sealed is that the battery isn't user-replaceable. Making a watertight lid for a battery compartment would be hard. That means that when the battery no longer holds a useful charge, you'll need to send it in for service. But you'll probably run it mostly from its cigarette-lighter power cord and so a battery replacement may be pretty far in the future.
I don't have a lot to compare it to, but the unit's reception of GPS signals seems quite good. It works just fine in my apartment as long as I stay pretty close to the windows. In a car, the unit doesn't have to be stuck to the windshield to get a good signal. Oriented randomly in a bag in the back seat seems to work fine.
There are a few other compromises and imperfections.
There are a couple of small problems with the user interface. One is that there's no simple way to change the volume when you're looking at the map screen. There's a mute button that you can tap on that screen to silence the unit, and that's probably a useful thing. But the button should really pop up a volume control. If it starts raining hard enough to make a racket, you're not going to want to go digging through various screens looking for the volume control.
It would also be useful if there were an audio indication of when the split-screen view of the upcoming turn is available. Since the screen returns to the usual map display just before the turn, you pretty much have to guess when to look at it if the unit isn't mounted in your line of sight. We're not supposed to mount things to our windshields here in Minnesota.
There's another small imperfection that's a bit more subtle: When you've specified a route, you can simulate driving it. In a preferences page, you can tell the 2200T that you want it to offer you the option of simulating driving a route after it has calculated it. You can also specify some options for the simulation, such as doing the simulated driving faster than you'd actually drive the route. I can imagine that that's a potentially useful feature. I might like to preview an unusual or complicated route before driving it so as to see what I'd shortly be doing or to have a look at the route to decide if I liked the one that the unit had chosen. (Within limits, you can influence the route that the unit picks.)
But there is a minor problem with the feature. If you have the option to preview a route turned on, it gives you that option even when the unit computes a new route because you've departed from the original one. If you've departed from the route that the 2200T has chosen, it's probably because of bad traffic or to detour around a road that's closed. No one is going to want to preview the new route then. They're driving. It's a relatively minor thing to tap a No button to say that you don't want the preview, but it's a distraction. And the 2200T is good because it almost always doesn't distract you.
If your route has you continuing on the same road for more than a few minutes, the 2200T occasionally speaks, telling you that you'll be continuing on the road. That may not sound like a very useful feature. But the NeverLost didn't have it and, on a stretch of highway that I was on for about an hour, I wished that it would talk to me once in a while. Was the volume set loud enough that I could still hear it? Was it still working correctly? An occasional spoken status message would have spared me a few glances at the NeverLost and a few unnecessary clicks of the volume-up button. The 2200T says "Continue on the current road" every few miles and I think that's a small but valuable feature. It would be even better in my opinion if it incorporated some useful information into the message, such as the number of miles remaining on the current road.
The underlying database of road information that the 2200T uses is licensed from NAVTEQ. It seems that they don't have a lot of competition for the US and Canada. Garmin's GPSs also use their data and so do the map websites of Google, Yahoo, and MapQuest. The database is good but it's not perfect. For example, it doesn't know that in Minneapolis, where I live, 1st Avenue South is two-way between Franklin and 28th St.
The road database knows about the reversible lanes on I-394 near downtown Minneapolis, but it doesn't know all the ways they're connected to the rest of the highway. It correctly identified that it was traveling on them, but at the point where they were about to rejoin the rest of the highway going west, it requested a turn that, while safe, wasn't in the right direction. Proceeding in the right direction on the highway, the unit requested a u-turn, cautioning that it should be a safe and legal one. Still proceeding in the right direction, the unit gave a "You can't get there from here" message and I needed to re-enter the destination. (Happily, I was a passenger at the time and could do that without any fuss.)
In addition, here in Minneapolis, streets and avenues are generally labeled as being North or South or Northeast or Southeast, depending on their position with respect to the Mississippi River and Hennepin Avenue downtown. But we're a bit sloppy about whether we say "First Avenue South" or "South First Avenue". Also, some streets exist only in one of the four divisions. According to the post office, those streets shouldn't have a direction marking since they don't need to be distinguished from versions in other divisions.
Entering a street address in the 2200T is admirably simple and hard to do wrong. You begin by spelling the street name on an on-screen keyboard. Letters that can't come next disappear, making it easier to find the ones you want. When you've entered enough letters that there are only a few possibilities for what you're spelling, the unit shows you a list and you pick the street from it. Minneapolitans' casual attitudes to where they put their souths and norths are mirrored in the database. For example, in the database "S. Bryant Ave" has house numbers from 1600 to 3599 and "Bryant Ave South" has house numbers from 1600 to 9399. Holmes Avenue, which exists only in the south section of the city, is listed as "Holmes Ave", "Holmes Ave S.", and "South Holmes Ave". The house numbers listed for the first one don't actually exist.
None of those imperfections particularly surprises me. The 2200T contains a database of all the roads in the United States and Canada. In my experience, a database of that size pretty much can't be perfect. But it's worth remembering that when you're using the unit. Apart from the issue with the reversible lanes, none of the errors would cause any significant trouble, and the unit hadn't recommended taking the reversible lanes.
Once you've specified an address, you can satisfy yourself that you picked the place you meant by tapping a bulls-eye icon. That will show you a map centered on your destination. A text button saying something like, oh, "Show on Map" would have been a little more obvious, but once you know what the little bulls-eye does it's easy enough.
The routes that I've seen the 2200T choose have always been good but, in cases in which there are a large number of very similar routes, such as traveling diagonally through a grid of city streets, the routes it has picked haven't always been absolutely optimal. That's hardly surprising. It would require information about every stop sign and traffic light in the US and Canada in order to pick the optimal route from dozens or hundreds that are very similar in distance and time. The unit seems to have a preference for broad and one-way streets. That's probably a good preference to have since it's likely to be guiding you on unfamiliar roads.
When you're asking it to calculate a route, you can tell it to pick a route with a minimum use of freeways. That's often handy at rush hour around here. It will also notice that you're in slow traffic on a freeway and indicate that it's willing to find you a different route. There's also an optional traffic receiver (not available yet and probably requiring a paid subscription) which ought to enable it to choose better routes when traffic is heavy.
I mentioned earlier that GPS manufacturers often leave things that you're going to want out of the package so as to make additional sales from accessories. The 2200T is just missing a case and an AC adaptor. That's actually pretty good as these things go. And you may not really need an AC adaptor since you'll probably use the unit in a car almost all the time and it comes with a cigarette-lighter adaptor. Still, I knew that I'd want to learn how to use it at home rather than sitting in my car and I may possibly want to use it on my motorcycle, so I wanted an AC adaptor. At the time I ordered the unit, Magellan's website didn't show an AC adaptor as an accessory available for the 2200T. It did show them for other similar units and I supposed that they were the same. Still, I wasn't sure enough to place my order on the website, so I placed the order by telephone, specifying that I wanted an AC adaptor for the 2200T. A few days later the adaptor arrived, but without any prongs to plug it into the wall. Magellan sells their GPS units in the United States and also in Europe. Cleverly, they've designed a universal power supply that just needs to have prongs appropriate to the local outlets clipped to it. So I literally had an AC adaptor, just not a useful one. I called the same number and explained the situation and the nice person who took the call sent me the right prongs without charge. I think that's pretty good. Anyone can make a mistake, but Magellan fixed that one quickly and pretty painlessly.
Magellan still doesn't have a case available for the 2200T. I'm using this, which is a sort of fluffy padded napkin with the hook side of velcro (er, hook-and-loop fastening material) at the corners. You wrap it around something, forming a sort of envelope, and then open one corner to slip the thing out and back in. It works as advertised, but the result is a bit bulkier than I'd like.
The unit has an MP3 player and a picture viewer built in. Since I have no idea why anyone would want to use those features, I have nothing else to say about them.
I'm very impressed with the 2200T. It's not the sort of thing I need every day because I travel on familiar routes most of the time. But if it guides me on another trip or two like the one from New Hampshire into Maine and saves me a bit of fuss on an occasional trip to the wilds of St. Paul, it will have been worth the price.
Posted at 06:27
Sat - November 11, 2006
Inheritance vs. delegation in Python
In previous episodes of my tiny introduction to object-oriented programming in Python, we've had a look at what objects and classes are and at inheritance. Then we took a brief detour to look at closures, which are like objects in some ways. Today let's have a look at delegation.
We've previously seen that inheritance is a useful way of specializing or otherwise changing the behavior of a class that already exists. Another way of looking at that is that your class is getting some other class to do most of the work.
There's another way of getting another class to do most of the work that's sometimes useful. It involves explicitly handing particular method calls off to an instance of another class. That's such an obvious technique that it seems that it hardly deserves a name, but it's called delegation.
Let's say we wanted a dictionary that didn't raise a KeyError when a key wasn't found, but instead returned a default value of None. (You don't need to implement that since it's in the collections module as of Python 2.5, but it's a useful example.)
One way to implement that using inheritance is like this:
class defaultDict(dict):
    def __getitem__(self, key):
        if self.has_key(key):
            return self.get(key)
        else:
            return None # Default value
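As an aside, the standard library class mentioned above does the same job directly. Here's a quick sketch (in Python 3 syntax, unlike the Python 2 examples in this post) using collections.defaultdict:

```python
from collections import defaultdict

# defaultdict takes a factory function; `lambda: None` reproduces the
# behavior of the hand-rolled class above: missing keys yield None.
d = defaultdict(lambda: None)
d["a"] = 1
print(d["a"])        # 1
print(d["missing"])  # None
```

One difference worth knowing: defaultdict inserts the default value into the dictionary on a missing lookup, whereas the hand-rolled class leaves the dictionary unchanged.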
If, for some reason, you didn't want to use inheritance, you could give your defaultDict instances dicts of their own and call those dicts' methods when the same methods of your instances were called, making whatever changes you wanted. Here's one way to implement that:
class defaultDict:
    def __init__(self):
        self.d = {}
    def __setitem__(self, key, value):
        self.d[key] = value
    def __getitem__(self, key):
        if self.d.has_key(key):
            return self.d[key]
        else:
            return None # Default
(Here's a bit of terminology: Delegation is also sometimes called "containment". The inheritance relationship is sometimes called "is-a" or "ISA", as in "the defaultDict is a dict". The delegation relationship is sometimes called "has-a" or "HASA", as in "the defaultDict has a dict".)
You might reasonably ask why anyone would want to do that. It's more code (a lot more if you wanted all of a dict's methods to work). One reason you might have done it once upon a time is that you used not to be able to inherit from built-in types. But that has since been fixed in Python, though the UserDict and UserList modules that did the delegation so that you could inherit from them are still in the standard library.
The real reason that you'd want to use delegation is that the class you were designing has some resemblance to another class, but isn't really enough like it that you'd call it the same kind of thing. That's obviously not the case with our defaultDict. But take, for example, the Message class in Python's email module. You can index a Message object as though it were a dictionary to get at the message's headers, such as "Subject" and "From". Even if you didn't index an email message, it might well make sense to store the headers in something like a dictionary. But I don't think that anyone I know would call an email message a kind of dictionary. (The Message class doesn't actually use a dictionary, but that's an implementation detail that's not important for our purposes here.)
One reasonable way to choose between inheritance and delegation is to look at whether you want all of the other class's methods. While it makes sense to get and set an email message's headers, a dictionary's pop() method probably doesn't make sense for an email message and neither, really, do a few others. Similarly, a dict's len() is obvious enough, but finding the length of an email message isn't really related to the number of keys in a dictionary.
So if inheriting would mean that you would need to turn off some methods or implement some in a way that's not related to the parent class's implementation, you may be better off with delegation.
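To make the has-a idea concrete, here's a sketch (in Python 3 syntax; this is a hypothetical class for illustration, not the real email module code) of a message that delegates header storage to a dict while exposing only the operations that make sense for a message:

```python
class Message:
    """A message *has a* dict of headers, but doesn't claim to be one."""
    def __init__(self):
        self._headers = {}  # the delegation target
        self.body = ""

    def __setitem__(self, name, value):
        # Delegate storage to the internal dict.
        self._headers[name] = value

    def __getitem__(self, name):
        # A missing header returns None instead of raising KeyError.
        return self._headers.get(name)

msg = Message()
msg["Subject"] = "Hello"
print(msg["Subject"])    # Hello
print(msg["X-Missing"])  # None
```

Because the class doesn't inherit from dict, methods like pop() and len() simply aren't there, which matches the argument above: they wouldn't mean anything sensible for a message anyway.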
Posted at 07:43
Mon - September 18, 2006
Closures in Python
I've previously written two tiny and not at all exhaustive posts about objects in Python (1, 2). Here's a similar discussion of another programming topic: closures. I think that closures are a bit more obscure than objects. The examples here are in Python, but you can do pretty much the same thing in many other languages.
Closures fall out of two things that Python can do. The first is that in Python we can treat functions as though they were data. (That's approximately what's meant when some people say that functions in Python are "first-class objects".)
If I define a function:
>>> def f(x):
...     print x+1
I can assign it to a new name:
>>> g=f
And call the function with the new name:
>>> g(42)
43
I can put it in a list:
>>> l=[1,"a",f]
>>> l[2]
<function f at 0x4c670>
And call it from there:
>>> l[2](11)
12
It's not amazingly common, but it can occasionally be useful to put functions in various sorts of data structures. Perhaps a dictionary of functions indexed by the sort of data that they work on or produce.
(For example, I once had occasion to parse a calendar format. In calendar formats, repeating events (such as "2:00 pm on the last Thursday of the month") aren't recorded as dates and times since there could be an infinite number of instances of such an event. Instead, they're recorded as specifications for generating the actual instances. It was convenient for me to have the functions that would generate the next event in a dictionary that was indexed by the kind of repetition they did (daily, weekly, etc). But that's by the way for our purpose here.)
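That dictionary-of-functions pattern might look something like this (a sketch in Python 3 syntax with invented repetition rules; it's not the author's actual calendar code):

```python
import datetime

# Hypothetical "next occurrence" generators, one per kind of repetition.
def next_daily(when):
    return when + datetime.timedelta(days=1)

def next_weekly(when):
    return when + datetime.timedelta(weeks=1)

# A dictionary of functions, indexed by the kind of repetition they handle.
NEXT_OCCURRENCE = {
    "daily": next_daily,
    "weekly": next_weekly,
}

start = datetime.date(2006, 9, 18)
print(NEXT_OCCURRENCE["daily"](start))   # 2006-09-19
print(NEXT_OCCURRENCE["weekly"](start))  # 2006-09-25
```

The caller just looks up the right generator by name and calls it, without needing a chain of if/elif tests over the repetition kinds.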
The second thing that closures depend on is that Python programs can have nested functions. That is, functions defined inside other functions:
>>> def outer(x):
...     def inner(y):
...         return y+2
...     print inner(x)
...
>>> outer(10)
12
Nesting functions can be useful in making a program more readable. If a particular function can be made clearer by defining a helper function and the helper function is only going to be useful to the first function, it can make sense to define the helper inside the first function. That way someone who's reading your program and sees the helper doesn't need to wonder where it's going to be used.
Now, we can put those two facts together by having a function return a "customized" version of an inner function. For example:
>>> def outer(x):
...     def inner(y):
...         return x+y
...     return inner
...
>>> customInner=outer(2)
>>> customInner(3)
5
The trick that you want to notice in what's going on there is what happens to the value of x. The argument x is a local variable in outer() and the behavior of local variables isn't normally very exciting. But in this case, x is global to the function inner(). And since inner() uses the name, it doesn't go away when outer() exits. Instead inner() captures it or "closes over" it. You can call outer() as many times as you like and each value of x will be captured separately.
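Here's the same outer()/inner() pair as a plain script (in Python 3 syntax), showing that two calls to outer() capture their values of x independently:

```python
def outer(x):
    def inner(y):
        return x + y
    return inner

# Each call to outer() closes over its own x.
add2 = outer(2)
add10 = outer(10)
print(add2(3))    # 5
print(add10(3))   # 13
print(add2(100))  # 102 -- add2 still remembers x == 2
```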
The function that's returned is called a closure. The idea is potentially useful because we can specify part of the behavior of a function based on data at runtime.
At this stage you might say, "OK, I followed your tedious explanation, but what good is such a thing? Is it anything more than a curiosity for ordinary programming?" The answer to that is that it is occasionally useful when something, such as a library interface, requires a function and you want to specify more than a couple of them that are very similar.
Imagine that you're designing a GUI interface and you need six buttons that do similar things. Tkinter buttons take a function to call as an argument and it would be tedious to write six very similar functions. Instead you might do something like this:
from Tkinter import *

def makeButtonFunc(buttonName):
    def buttonFunc():
        print buttonName
    return buttonFunc

class mainWin:
    def __init__(self, root):
        self.root = root
        self.createWidgets()
        return None

    def createWidgets(self):
        for buttonName in ("A", "B", "C", "D", "E", "F"):
            b = Button(self.root, text=buttonName,
                       command=makeButtonFunc(buttonName))
            b.pack()
        return None

def main():
    root = Tk()
    mainWin(root)
    root.mainloop()
    return None

if __name__ == "__main__":
    main()
That's clearly better than writing six functions that are virtually identical.
There are lots of people who like using closures. I, personally, don't. To me, it feels like using a subtle trick and I prefer my programs to be as obvious as possible. In a similar situation, I'd use a Python object with a __call__() method. If a Python object has that method and it's called as though it were a function, that method is run. In a program I wrote, I'd probably replace makeButtonFunc() with something like:
class makeButtonFunc:
    def __init__(self, buttonName):
        self.buttonName = buttonName
    def __call__(self):
        print self.buttonName
Which would do the same thing. Of course, I'd give the class a different name.
Posted at 08:07
Sun - October 2, 2005
Creating iTunes playlists with Python
I've written a smallish Python program to create iTunes playlists according to particular rules that suit me. You're welcome to the program; you can distribute it under the GPL. But since the particular rules that suit me probably won't suit you exactly, it may not be of much use to you unless you want to hack on it a bit yourself.
There are surely bugs in it and legitimate sorts of data in the music library file that it reads that I haven't anticipated. When bugs have caused it to create a bad playlist file, iTunes has rejected the file. But maybe I've just been lucky and a bad playlist file could do something Very Bad. As with programs in general, use it at your own risk. It cheerfully assumes that your terminal wants text output in UTF-8. That may not matter if you don't have any non-ASCII data in your music library. Bug reports and patches are naturally very welcome.
The only non-standard module it requires is Fredrik Lundh's excellent ElementTree module for manipulating XML.
I wrote the program after going down to my local Apple store and having a look at the iPod nano. It's hard to pick one up and not want one. (Though it was immediately clear to me that the thing would scratch easily and therefore really needs a case.) But practicality stopped me from buying one then. I have a reasonably big music collection: one reason that I like my 60GB iPod so much is that it lets me store my 300-some CDs in the basement. Would an iPod that holds only 4GB be of any use?
I imagined that 4GB ought to hold most or all my current favorites, so I created a playlist with my current favorites on it. It fit comfortably in 4GB, but it turned out to be pretty tedious to listen to. Because they were my favorites, I'd listened to most of them recently. I wanted something I hadn't heard recently pretty often.
It would be possible to create a playlist by hand that contained a bunch of favorites and also some albums I hadn't listened to recently, but it would quickly become a nuisance to maintain. So I wondered if it was possible to have a program create a playlist automatically. And it is.
It turns out that iTunes saves an XML description of your music library in:
~/Music/iTunes/iTunes Music Library.xml
That's not iTunes's main data file. As far as I can tell, it's a version that's generated for other programs to read. Changing it doesn't seem to accomplish anything. But iTunes will import playlists that are written in XML. So a program can read the XML version of your music library and generate a playlist by applying whatever rules it likes. And you can then import the playlist.
As far as I'm aware, neither of those file formats is documented publicly. So what the program expects to read and decides to write are based on inspection and experiment. I've very likely missed a few cases. As I mentioned above, bug reports and patches are very welcome.
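For illustration, here's a sketch (in Python 3, using the ElementTree module that by now lives in the standard library as xml.etree.ElementTree) of reading the sort of plist-style XML the library file uses. The sample data and helper function are invented, since, as noted above, the real file's format is undocumented:

```python
import xml.etree.ElementTree as ET

# A tiny stand-in for the iTunes library file, which is an Apple "plist":
# <dict> elements hold alternating <key> and value elements.
SAMPLE = """<plist version="1.0">
  <dict>
    <key>Name</key><string>Some Song</string>
    <key>Genre</key><string>Jazz</string>
    <key>Play Count</key><integer>7</integer>
  </dict>
</plist>"""

def plist_dict(dict_elem):
    """Pair up alternating <key>/<value> children into a Python dict."""
    children = list(dict_elem)
    result = {}
    for key_elem, val_elem in zip(children[::2], children[1::2]):
        value = val_elem.text
        if val_elem.tag == "integer":
            value = int(value)
        result[key_elem.text] = value
    return result

root = ET.fromstring(SAMPLE)
track = plist_dict(root.find("dict"))
print(track["Name"], track["Play Count"])  # Some Song 7
```

A real reader would need to handle more value types (dates, booleans, nested dicts and arrays), but the key/value pairing shown here is the essential trick for this file format.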
The program actually creates several playlists, all in one file that's saved on the desktop. It starts with all the music that has been added within the last 60 days and then adds everything that hasn't ever been played. It puts them in a playlist called "AG-Recently Added". It then gets the music that has been played most and puts that in a playlist called "AG-Most Played". Assuming that it hasn't run out of space already, it adds albums from configurable genres chosen quasi-randomly, with a bias toward albums that haven't been played recently. Those playlists are named "AG-" and the genre name.
I still haven't gotten a nano. I'll have to see how I like the playlists that get generated and maybe tweak the rules a bit.
Posted at 08:30
Sun - September 11, 2005
Klipsch ProMedia Ultra 2.0 powered speakers
A while ago I bought a pair of JBL Duet powered speakers to use in a spot where I'd like to listen to music but where I don't have a place for a subwoofer. Unfortunately, they suck.
A little wiser but only $35 poorer, I ordered a pair of Klipsch ProMedia Ultra 2.0 speakers. Happily, they don't suck. Indeed, they're good, or maybe very good for what they are.
Make no mistake, they're not stunningly great speakers, and the absence of a subwoofer shows in the sound they're able to reproduce. If you have a place for a subwoofer, get a decent set of speakers that includes one. But if you don't, the ProMedia Ultras are respectable speakers. Given that they do a good job, $100 isn't an unreasonable price for them. I certainly think that they're more than three times as good as the JBLs.
The ProMedias are pretty highly directional. The stereo "sweet spot" they produce isn't big. But if you wanted to fill a room with music as opposed to listening to something while sitting at a computer, you'd use a set of speakers that has a subwoofer.
On the front of the right speaker, there are volume and bass knobs and another input and an output jack. The output jack is there so that you can plug in headphones if you want to use them occasionally without having to re-cable things. The jack would be much more useful if its output didn't contain really nasty hiss. The hiss isn't in the sound from the speakers.
Their industrial design isn't going to win any awards. They're relatively narrow when viewed from the front (around 3 1/4") but they're pretty tall (10 1/2" or a bit more) and pretty deep (7" or so). They're around the size and shape of a good-sized hardcover book. If you put them next to a monitor that faces a wall, they'd probably be pretty unobtrusive. If you put them on a table that faces a room, they look large and awkward from the side. In addition, the green power LED on the right speaker is brighter than it needs to be. It's bright enough to be a bit distracting in a room that's not brightly lit. Still, decent sound reproduction is more important to me than fabulous industrial design and so I'm pleased with them.
Posted at 07:12
Main
Permalink
Fri - September 2, 2005
JBL Duet speakers
I recently moved and in my new apartment I have a use for a pair of desktop speakers. My
Monsoon
stereo-plus-subwoofer set (2.1 in the jargon) remains very good indeed, but the best place for them in the apartment isn't particularly near the best place for me to work on my laptop.
I could just put them where I work on my laptop, but then they'd be in a lousy place to play when I want music in the living room, and there's no good place for a subwoofer under the table I like to work at.
So I went clicking around, looking for decent desktop or "computer" speakers that included just two stereo speakers and no subwoofer (2.0 in the jargon). I don't need for them to perform well when they're turned up loud since they'll be just a couple of feet from me. The JBL Duets seemed to get reasonably good reviews at Amazon and they were on sale there for $35 delivered. They also have the small advantage that they look reasonably nice from the back which, given how the room is arranged, is how most folks will see them.
Unfortunately, they suck. The highs are muddy and the bass doesn't exist. A cymbal crash sounds like a sketch of a cymbal crash and anything around the tone of a cello or a drum sounds like some irregular thumping that you can just about perceive.
I'm not a lunatic audiophile, but I think I'm also not what Dan Rutter would call, in his inimitable way, a "
cloth-eared git
". I wasn't expecting wonders from the Duets. They're cheap and it's going to be hard to produce good bass without a subwoofer. But they don't even live up to what I expected for $35. If someone would like a pair and is willing to come to Minneapolis to pick them up, send me an email. You can have them.
If anyone can suggest a 2.0 set of powered speakers that doesn't suck, I'd be glad to hear about them. In the meantime, I'll just wear my Sennheiser
PX-100s
.
Posted at 07:15
A brief introduction to object-oriented programming in Python: Part 2, inheritance
A while ago I
posted
a tiny introduction to object-oriented programming in Python. Here's part two, about inheritance. It's equally tiny and equally doesn't attempt to be comprehensive.
Inheritance is a way of specifying a new class that's almost, but not exactly, like a class you already have. Let's extend the example from the page above. Assume that I (or maybe someone who has written a library module) has already defined class c3 for us:
>>> class c3:
... def __init__(self,x):
... self.val=x
... def getValAsDouble(self):
... return self.val*2
And let's assume that, for the purposes of the program we're writing, that's a useful class. But it would be even more useful if it had a getValAsTriple() method. I can save some work by saying that class c4 inherits from class c3. Then I only need to specify what I want to add:
>>> class c4(c3):
... def getValAsTriple(self):
... return self.val*3
...
>>> o6=c4(42)
>>> o6.getValAsDouble() # Comes for free through inheritance from c3
84
>>> o6.getValAsTriple() # Defined in c4
126
As a matter of terminology, the class you inherit from is often called the "superclass" and your class that inherits from it is its "subclass".
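One more thing a subclass can do, which the post above doesn't show: it can replace, or "override", a method it inherits. This sketch isn't from the original post; class c5 is made up here for illustration:

```python
# A made-up example of overriding: the subclass supplies its own
# version of a method, which is used instead of the superclass's.
class c3:
    def __init__(self, x):
        self.val = x
    def getValAsDouble(self):
        return self.val * 2

class c5(c3):                      # c5 is a subclass of c3
    def getValAsDouble(self):      # overrides c3's method
        return self.val * 2.0      # returns a float instead of an int

o = c5(21)
print(o.getValAsDouble())  # the subclass's version runs: 42.0
```

When Python looks up a method on an object, it checks the object's own class first and only then the superclass, which is why c5's version wins.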
Now, it's pretty rare that you find a class that just happens to be useful to inherit from. Many classes that you inherit from were designed specifically to be inherited from.
Inheritance has another advantage: If the class you inherit from is supplied by a library module, you don't have to know or care about the details of how it gets its work done.
Let's say that the author of some module thinks of a better way for one of its classes to do something. Or maybe they fix a bug or two. If you install the new version of the library, you get the improvements "for free". You don't have to go to any extra trouble, because you're now inheriting from the improved version.
There's one more thing: What does inheriting from "object" do? That class is more like a signal than a class that provides useful methods that you'll use every day. It tells Python that you want your class to be a "new-style" class. New-style classes solve a few problems that not very many people have. (For those people they're important, but nobody who's starting out with object-oriented programming is one of them.) So don't worry about inheriting from object any time soon. I've been programming in Python for years and very rarely need to use a new-style class.
Posted at 06:47
Mon - March 21, 2005
XinFeed passive crossfeed filter for headphones
I've previously
mentioned
my
AirHead
headphone amplifier from
HeadRoom
. I like it very much and I use it routinely when I'm listening to something on my iBook at home. Ironically, at the moment I don't have any headphones that are particularly difficult to drive so I mostly use it for its crossfeed.
Crossfeed?
Yes: When you listen to stereo speakers, most of the right channel goes to your right ear, but some of it goes to your left ear, and vice-versa. Most music is recorded with the expectation that "air mixing" of that sort will happen when it's listened to. But there isn't normally any mixing when you're listening with headphones. The left channel is delivered only to your left ear and the right channel only to your right ear. A crossfeed circuit fixes that; it mixes some of the right channel into the left channel and vice-versa.
Actually, it can be a little more complicated than that. It seems that, on account of the shape of a person's head and ears, the crossfeed that a person perceives from air mixing varies some depending on a sound's frequency. It's straightforward enough to approximate that variation in a crossfeed circuit. Of course, as with many things in audio, it's possible to take matters to an
extreme
. (There's
more
at HeadRoom's admirably informative site.)
I'm no nutty audiophile, but after having listened to various things for some time with my AirHead's crossfeed circuit, I find it substantially more pleasant to listen to headphones with crossfeed than without. The sound seems more natural and it seems to be coming from in front of me rather than from beside me.
But what about portable music? It's perfectly possible to carry an AirHead around, and its four AAA batteries last an admirably long time. But it's about the size of a standard iPod, and so while you
can
carry it, I'm not likely to. And connecting one to an iPod Shuffle would be plain silly.
Happily, there's a solution to the portability problem in the form of
XY Computing & Network
's
XinFeed
(specifically, the low-impedance ampless version). The XinFeed is a crossfeed circuit built into a slightly lumpy and asymmetrical widget that's not a lot bigger than the 1/8" mini jack and plug that it has to have. Here's a photo of it plugged into my iPod Shuffle:
For those unfamiliar with the Shuffle's size, the gray control clicker is about 7/8" in diameter, just about the size of a US quarter.
How well does it work? Very nicely. I find that listening with it is much more pleasant than listening without it. It seems to have just about the same effect as my AirHead's crossfeed circuit. I may not be the most discerning listener in the world, and someone else may be able to find a significant difference. But even if someone can, it would be very hard to beat the thing's portability and, at US$30 delivered to me, its price.
Of course, one of the things that makes the XinFeed so portable is that it doesn't run on batteries (it's "passive" as they say). That means that it's going to consume some of the signal in order to do its job. It doesn't eat much, but I found that when I was comparing the sound with it and without it, I needed about two clicks more volume out of my Shuffle to have the sound seem the same after I plugged it in. That means that if your music source is only barely able to drive your headphones satisfactorily, the XinFeed may not be a useful solution for you. Happily, my iPod Shuffle has plenty of power to drive my Sennheiser
PX-100s
and my Etymotic
ER-4Ps
even with the XinFeed plugged in. But if I add the
cable-thingy
that Etymotic sells to make their ER-4Ps work like ER-4Ss, it sounds like the Shuffle is running out of oomph with the XinFeed.
You could probably build something similar to the XinFeed yourself if you know a little about electronics. Googling "passive crossfeed circuit" yields plenty of
circuit diagrams
and a few pages with
a bit more help
than that. I know one end of a soldering iron from the other, but I was perfectly happy to pay Mr Xin Feng (who seems to be more or less all of XY Computing & Network) to build it for me. The result is nicer and smaller than anything I'd be able to build and probably cheaper too. Certainly if I valued my time at anything. And, who knows, his circuit may be better than the ones published on the net.
Mr Feng's
site
is more interesting than elegant, but given a choice between interesting and elegant, I'll take interesting any day.
In all, I'm thoroughly delighted with my XinFeed. When I take my standard iPod on a plane trip or my Shuffle out for a stroll, I'll no longer be thinking that the sound could be a lot more pleasing.
Posted at 07:09
Tue - October 26, 2004
Casio EX-S100 digital camera
The best way I can explain what happened to me when I
first
read about Casio's Exilim EX-S100 camera is to paraphrase
Penny Arcade's
guest columnist
Storm Shadow
and say that I missed my saving throw against cool gadgets. It's not surprising that I missed; the camera is small and shiny and sleek and has a really big LCD display. From the front, it's barely bigger than a credit card and it's no thicker than a slice of toast. And it records 3.2m pixels. How could any self-respecting gadget geek not be entranced?
I did read a couple of reviews (
1
,
2
) first, and if both of them had said "Turned our reviewer into a bug-eyed zombie" I probably would have thought twice about buying one. But neither did. And as it turns out, I'm very happy with the camera. I won't bother to repeat what's in those reviews, but there are a few things I think are worth mentioning.
Casio had to make a compromise or two in making the camera as small as they have. For one, there's no viewfinder. But I find that doesn't bother me. The LCD is big and comfortable to use and I'm sure that I'll eventually stop putting the camera up to my eye before remembering that that doesn't work.
The most significant compromises seem to be in the lens and the size of the image sensor. It seems that resolution suffers a bit. I bought the EX-S100 to replace my mildly-antique Canon Elph PowerShot
S200
, so that's what I have to compare it to. Here's a magnified detail from a shot with the Casio at its highest resolution:
And here's the same detail from the same scene shot a moment later with the Canon:
The Casio's image is bigger because it records more dots, but there's no more detail. (The Casio's color is a little more accurate.)
To go with detail that's not better than a 2.0m-dot camera, there's some image noise. The EX-S100's sensor is 1/3.2 (or in
sane
units about 4.5 x 3.4 mm) vs. 1/2.7 (5.3 x 4.0 mm) for the Canon. The Casio squeezes 3.2m pixels into that area while the Canon has only 2.0m on its larger sensor. That means that the individual photosites on the Casio are going to be considerably smaller. If I'm counting right, the Casio's dots are slightly less than half the size of the Canon's. That seems to cause a little trouble because the camera has to turn up the gain a fair amount in low-light situations. When the camera decides that it needs ISO 400 sensitivity, there's visible noise in the picture. Here are enlarged details of a shot of a blank wall shot at ISO 100, 200, and 400:
(The color varies a bit because I didn't bother to fiddle the white balance.) Noise is visible in the middle one, but it's pretty pronounced in the last. In practice, images taken at ISO 200 don't look noisy. At ISO 400, they do. Happily, it seems that the camera doesn't think it needs ISO 400 very often.
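The photosite arithmetic above is back-of-the-envelope stuff, using the sensor dimensions and pixel counts already quoted:

```python
# Rough photosite-size comparison from the dimensions quoted above.
casio_area = 4.5 * 3.4            # 1/3.2" sensor, area in mm^2
canon_area = 5.3 * 4.0            # 1/2.7" sensor, area in mm^2
casio_site = casio_area / 3.2e6   # mm^2 per photosite
canon_site = canon_area / 2.0e6
print(casio_site / canon_site)    # about 0.45: slightly less than half
```

Smaller photosites collect fewer photons each, so the camera has to amplify the signal more in dim light, which is where the ISO 400 noise comes from.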
I think that the EX-S100 may be able to record a slightly larger range of brightnesses than the S200 can. But if that's true, the difference is marginal.
On the good side, Casio uses the big, clear LCD to provide a user interface that's very clean. Indeed, it's as good as I've seen on a digital camera. And that's matched by control buttons that work nicely and feel good. And in addition, there's one feature that I haven't seen before and which I'm very glad to have: You can specify which settings revert to default after power-off and which are remembered. I like that because I think it's rude to take flash pictures in most public places. My Canon would enable automatic flash each time I switched it on. I'm sure that most people like that; they won't miss a photo if the light is dim. For me, it meant that I had to remember to switch the flash off if I didn't want it each time I switched the camera on. With the Casio, I can tell it to remember the last flash setting.
You use a cradle both to import photos and to charge the camera. You can set the camera to look like a USB mass-storage device when it's in the cradle and so it works just fine under OS X and ought to work equally well under any sane operating system. The cradle and its wall-plug transformer aren't big or heavy, but people who want to travel as light as possible can buy a
travel charger
for the battery and an SD card-to-USB dongle. Interestingly, the connector that mates with the camera when it's in the cradle looks a lot like a USB Mini-B plug. I have one of those cables, but since nothing I've found in the documentation or elsewhere mentions USB Mini-B cables, I think I'll let someone else try plugging in the camera that way first. (If you do, I'd be glad to know what happens.)
The camera doesn't fit into Casio's "
EXCASE3
" very exactly:
Happily, the cooler
Japanese cases
seem to be
available
here in the US. I plan to order one soon.
In the end, I like the camera a lot. I could wish that there weren't any compromises involved in its engineering. And I could also wish for a pony. It works perceptibly better than my S200 and it's half the size of that small camera. That's quite good enough for me.
Update October 29, 2004:
My Japanese-market case arrived today and it's much nicer than the American-market one. I ordered the
soft case
in "ebony". The case that arrived is an attractive shade of very dark brown. The packaging is marked "cioccolato" and "chocoraato". The leather is pleasantly soft and it fits the camera snugly. Even in the case, the camera fits tidily in a shirt pocket. The case's flap is secured with the sort of magnetic disks you've almost certainly seen before. The camera's power switch is designed in such a way that it seems unlikely that it will get switched on accidentally. The case doesn't have a belt-loop or clip; it's suitable for protecting the camera in a pocket or a bag. That suits me just fine because I don't like carrying things on my belt. You may have a different opinion.
Posted at 04:27
Thu - September 2, 2004
Book: Network Security Hacks by Andrew Lockhart
Andrew Lockhart
Network Security Hacks: 100 Industrial-Strength Tips & Tools
O'Reilly, 2004
ISBN: 0-596-00643-8
$24.95
280 pages (main text)
As you can tell from this book's subtitle, the word "hacks" in the title
Network Security Hacks
is used in one of its
original senses
: a good or ingenious idea. This book isn't a catalog of computer break-ins.
Each of the 100 hacks is described in a short section with a title, a one-sentence description, and generally a couple of pages of discussion. The hacks are organized into eight chapters:
Unix Host Security
Windows Host Security
Network Security
Logging
Monitoring and Trending
Secure Tunnels
Network Intrusion Detection
Recovery and Response
The hacks aren't evenly divided among chapters; "Monitoring and Trending" is pretty short and "Network Security" is pretty long. Each chapter has an introduction that isn't very interesting ("In this chapter, you'll learn....").
The hacks themselves are pretty good. I'm not qualified to comment on the ones that have to do with Windows, but on Unix and network security there's plenty of good sense here. The discussions are of varying value. That is, once you've said "Run ntop for Real-Time Network Stats" (hack 63), someone with clues (and this book is addressed to people with geeky clues) probably doesn't need a lot more help. I mean, you'd Google for ntop's site and read the documentation there to see if it does something you'd find useful. On the other hand, the discussions for "Firewall with Netfilter" (hack 33) and "Firewall with OpenBSD's PacketFilter" (hack 34) have useful examples of the sorts of rules you'd want to run on a firewall host. Most of the discussions are useful but few are vital.
One thing that's missing is any indication of why you would or wouldn't want to use a given hack. That is, "Test Your Firewall" (hack 38) probably makes sense for any network admin. But "Create a Static ARP Table" (hack 32) would be a big nuisance on any but the smallest networks. I'd need to be pretty scared of ARP-table poisoning attacks before I went to that much trouble. The book is a toolbox, not a tutorial.
If these 100 hacks here were made available as a list of possibly-useful security practices for free on a website, that list would be less useful than this $25 book. But maybe not a whole lot less useful. Still, they're not available that way and some of the discussions are quite good.
Neither of my two favorite security hacks is mentioned. The first was told to me and I haven't yet used it, but I expect to eventually. It is: Use a dedicated log host and cut its transmit pair. The second is from my own experience: Don't run Sendmail, BIND, rsync, or Kerberos. And try really hard not to run any IMAP server or an FTP server that allows non-anonymous logins.
Posted at 08:16
Sat - August 14, 2004
Mini WiFi access points
I don't travel all that much, but in the next few months my iBook and I are likely to travel to a couple of places where there's high-speed internet access, but where the Ethernet jacks aren't necessarily in the places most convenient for me. So it would be nice to bring along an 802.11g wireless access point so that I could use my iBook in a convenient spot. A Linksys
WRT54G
access-point with appropriately
hacked
firmware is unquestionably cool, but I hoped to find something a bit friendlier to my carry-on bag.
Apple's
AirPort Express
looked like just the thing and I promptly got one. How well does it work? Heck if I know. You see, you need OS X 10.3 (the latest version of OS X) in order to configure it and my iBook runs OS X 10.2. I'm not going to upgrade my iBook's OS right now because doing that would break too many things. (It seems that you can use various flavors of Windows, but only one flavor of OS X to do the configuration.) I have no idea what the people at Apple were thinking when they decided that you'd have to use an OS version that's less than a year old in order to make the AirPort Express work. I'm pretty sure that when I ordered the AirPort Express, Apple's website said that under OS X 10.2 the thing had "limited functionality". I interpreted that to mean that the music-streaming and/or printer-sharing features probably wouldn't work. I didn't interpret it to mean that I'd be completely SOL because I couldn't even configure the thing.
Yes, I could have borrowed a 10.3 machine to do the configuration. But the idea is to have a portable access-point and I'm quite sure that if I took an access-point that I couldn't reconfigure on a trip, something would require that I reconfigure it.
OK, fine, then. Does anyone else have something similar? It turns out that they do. Tom's Networking had a
review
of the ASUS
WL-330g
and they rather liked it. So I ordered one.
The WL-330g is undeniably small. Here's a photo of it, its AC adaptor, and a ballpoint pen for scale:
Taken together with its small AC adaptor, it's about the same size as an AirPort Express, though not nearly as attractively designed. For some reason that I can't guess, ASUS decided to use high-intensity blue LEDs for the status LEDs. They're bright enough and blinky enough that they'll be distracting if they're in your line of sight. I can live with all that, but we're clearly not talking about Apple-like attention to design.
Configuration was easy. (Kind of ironic, huh?) There's a configuration utility that I didn't use because I'm sure it runs only under Windows. Instead I did the configuration by web browser. The WL-330g has a tiny webserver in it and its default address is 192.168.1.1. So all you have to do is turn on your machine's 802.11b/g interface, manually assign it an IP address and subnet mask that put you on the WL-330g's network (such as 192.168.1.2 and 255.255.255.0), and point a web browser at it. The configuration pages aren't very pretty but they're straightforward to use. If you get something badly wrong, just press the reset button for ten seconds or so and the defaults are restored.
Of course, one of the ways that the WL-330g manages to be compact is that it doesn't have an external antenna (or indeed a jack for one). So it's going to be important that its internal antennas give it reasonable range. For that reason, I conducted a very scientific experiment. I put the WL-330g on top of a desk in a Mark A1 standard suburban office building. To ensure randomness, I paid no attention at all to its orientation. I then picked up my iBook and walked as far away from the access-point as possible while staying in the building. From there, the access-point was on the other side of a wall, around 120 feet away. The signal strength shown in my iBook's menu bar had gone down one bar (actually it's an arc) and Apple's Internet Connect showed about half signal strength. In keeping with the spirit of the experiment, I didn't do any detailed throughput experiments, but web pages loaded at what looked like the full speed of the 640kbps connection there.
So color me thoroughly satisfied; at least as satisfied as I can be without having used it "in anger". At $75 from
NewEgg
, I can't see anything to complain about in this handy little gadget.
The WL-330g can also be used as a client. (There's a switch on the bottom to change modes.) That might be useful in some conceivable circumstances, but I haven't tried it because I don't think I'm likely to run into any of those circumstances. There's a cable in the box that allows the WL-330g to run from the power available on a USB port, which might be useful if you were using it as a client. The quick-start guide is reasonably good and there's a carrying case and a short Cat-5 cable in the box. As for the carrying case, I continue to prefer AeroStich's nylon envelope
bags
.
Update: September 2, 2004
The WL-330g worked very well in practice. In an ordinary house, I plugged it into an unused cable-modem port and I got excellent signal strength 20 meters or so away. Or rather, I did after I realized that the switch on the thing's bottom that's labeled Access Point/Ethernet Adaptor had gotten moved by accident while it was in my luggage. It seems that it may not have been the best idea for ASUS to put that switch on the bottom since I didn't notice it or think of it for the first ten minutes of troubleshooting. The switch has a ridge around it that's intended to prevent it from being moved accidentally, but it's obviously not completely effective when the thing is packed in a suitcase.
After I thought of the switch on the bottom, I found that it's much more reliable to configure the WL-330g by wired Ethernet than by its radio link. After the configuration is done, the radio link is perfectly reliable. But when I was applying the various configuration options by radio, the link to my iBook would often drop and when I reconnected, the options generally hadn't been applied. Plugging the WL-330g's Ethernet cable into my iBook solved the problem. In that process, I also found that it may be desirable to configure the WL-330g to use an IP other than the default of 192.168.1.1 since it's quite possible that whatever you plug it into is already using that address because that is
its
default. I set the WL-330g to use 192.168.1.150 which I judged likely to be outside the range that the network's DHCP server was using and everything was fine.
Posted at 06:19
Sun - June 27, 2004
A patch for SpamBayes to record URLs' IPs and a cache for PyDNS
Non-geeks and geeks not interested in the details of Bayesian spam filtering may prefer to skip this post.
A while ago I
mentioned
the spam filter
SpamBayes
. I've used it almost from the beginning and it works very well for me.
Starting early this year, I found that spammers had begun sending messages with bland or almost entirely nonsense text and a link to click on. SpamBayes would generally score them as unsure because they contained so little information that it could make use of. (In his original
article
, Paul Graham predicted that spammers would respond that way to the widespread use of Bayesian filters.)
Turning on SpamBayes's mine_received_headers option helps, but not enough in my experience. Especially if you have any legitimate correspondents on Comcast's network.
In April, I posted a
patch
for SpamBayes's tokenizer to the spambayes-dev list that creates synthetic tokens for the IP addresses that the host part of the URLs in a message resolve to. That turns out to help a lot on those messages. That's because the IPs of spammers' webservers aren't uniformly distributed. Indeed, it seems that there are relatively few networks that are willing to host spammers' websites and it doesn't take very long for SpamBayes to start using those tokens as evidence.
At first, it didn't
seem
to
help. At least not much. It even produced a small decrease in accuracy in some cases. But that was on historical data. On more recent data, it's a significant
win
for me.
If you're doing a lot of scoring all at once (as you might with certain
training regimes
), doing lookups that way generates a lot of DNS traffic. Unless your resolving DNS server is electronically very close to you (like on the same Ethernet segment), that's going to slow scoring down a fair amount. Depending on the details of your situation, it may also be a significant load on your (or your ISP's) DNS server. To deal with that, I've hacked up a
cache
for PyDNS and a slightly different version of the
patch
. (With the new version, the clue "timeout" is now a slight misnomer. A better name would be something more generic like "error", but I've left it as it was for compatibility with the data in my database.)
By default, the cache respects the time-to-live of the data returned by the resolving name server it uses. The resolving name server component of D.J. Bernstein's
djbdns
returns TTLs of zero under most circumstances. Probably some others do too. Dan
explains
that that's to prevent cache snooping. If the cache doesn't seem to speed scoring up, you can set its attribute printStatsAtEnd to see if you're getting any cache hits. If low TTLs turn out to be the problem, you can set the attribute minTTL to 300 or 600 seconds or something harmlessly small like that and the cache will cache everything for at least that long.
Posted at 08:46
Wed - May 19, 2004
Headphone amplifier and audio-image enhancer
I don't mean to be an audio geek. I blame Steve Jobs for it. I was perfectly happy just being a coder and sysadmin geek. But, you see, an OS X laptop is really good for a Unix coder geek. And it's hard or maybe impossible to have an OS X machine and not subsequently be sucked into getting a cool iPod. I call it the iPod tax.
That's the beginning.
It seems that in order to sound at all good, earbuds need to fit your ears correctly. The earbuds that ship with the iPod sound somewhere between OK and lousy depending how they fit your ears. It seems that my ears (more exactly, ear canals) are bigger than average (no doubt to match my big mouth) so I fall on the lousy end of that spectrum. But that's easily remedied by buying a pretty-inexpensive set of
headphones
. And maybe a not-so-inexpensive set of
earbuds
that sound very good indeed and block a lot of external sound, for when that's desirable. And maybe a set of
powered speakers
since the iPod is more convenient to use than a CD changer and also sounds better through them than my unremarkable stereo sounds through its speakers. And if you're going to listen to an iPod that way, you might want a
remote control
.
Whew! Well, at least we're done now and can close the chapter on audio geekery.
Um, maybe not. You see, there's something called a headphone amplifier. On the face of it, that sounds like the silliest thing ever. I mean, how much power can it take to drive a pair of headphones? And wouldn't every manufacturer of something with a headphone jack put at least that much power on the jack? As it turns out, not a lot of power and, even so, no they don't. There's no shortage of headphones that require more power than is available on many headphone jacks, especially headphone jacks on portable players. Dan over at Dan's Data has a cool
review
of a hand-built headphone amplifier and more background on the subject.
But I don't actually have that problem. The headphones I have can easily be driven by the power available on my iPod's and iBook's headphone jacks.
"Sowhat'syerprollem?" I hear you ask. Well, it has to do with "imaging". It's like this: Almost all recordings are made with the idea that you'll listen to them using stereo speakers. With speakers, you get some crossover; you hear the left channel some with your right ear and vice-versa. Feeding the left channel only to your left ear and the right channel only to your right ear, as headphones do, is apt to make what you're listening to sound at least a little funny. The sound that OS X makes when you drop a file in the trash is an extreme example. On speakers it sounds fairly well like something bouncing between the sides of a trash basket that's in front of you. With headphones, it sounds like a ping-pong game.
A good audio-image enhancer for headphones doesn't just mix the channels. It also delays very slightly the arrival of the portion of each signal that it sends to the opposite ear, since that's what happens in real life. The appropriate delay is a small portion of a millisecond, but you still can hear it, albeit not consciously. A really good audio-image processor will also vary the mixing depending on the frequency, since that happens in real life too. There's more
detail
about that on HeadRoom's admirably informative site. Follow the links on the left margin ("How We Hear", etc) for the other pages of the article. HeadWize also has
more
on the subject.
As with many things in audio, you can take price and even, perhaps, quality to absurd heights. (That's one reason that I'm glad to be a digital geek: with bits, the engineers know when the circuit is done because the bits come out right; with analog circuits, you can spend as much time and money as you like improving the waveform just a little more.)
So I was chiefly looking for something to do audio-image enhancement. There's no important reason that it's necessary to combine the imaging function with the amplification function, but it's convenient to. And there's always the possibility that I'll want the amplification function later.
So I got a HeadRoom
AirHead
. Actually, I swallowed their marketing pitch and got the
Total AirHead
version. And it sounds very nice. When I plug my headphones into it and switch on the crossover circuit, the apparent sources of sound migrate from the sides of my head to places in front of me. Depending on what I'm listening to, the difference may be subtle, but in all cases it's significantly more pleasant in my opinion. As for amplification, the AirHead appears to be able to do plenty.
The thing sounds very good. The way it looks is, um, another story. If you put it next to an iPod, I suspect that the best thing you'd say is, "At least it's black".
You can see the input from the AC adapter (sold separately) on the left of the photo, line-in at the top, a mini-plug plugged into one of the headphone output jacks on the right, and a second headphone jack unused at the bottom. It looks like the AC adapter isn't plugged all the way in, but that's as far in as it will go. The green LED indicates that it's switched on and the red one indicates when it doesn't have as much power as the volume level requires (most likely because the batteries are running low). Between them is a thumbwheel for volume.
I can't yet verify HeadRoom's claim that it will run for 40 hours on a set of batteries (it takes 4 AAAs). Indeed, I'm also not yet sure that I'll take it with me when I travel. It's just about the same size and weight as an iPod and it's not immediately clear to me that the improvement it produces would be worth carrying it with me. Clipping my iPod's case to the pouch on the back of the seat in front of me in an airliner is easy enough. I'm not sure that cabling an AirHead to it and stuffing it in there would be all that much fun. Time will have to tell on both those counts. Nevertheless, for headphone listening at home, I'm very pleased with my AirHead.
Coincidentally, Dan has a review of a related product from the same company just now on his site.
I promise that I am not going to become one of those people who think that cables need to be broken in before they will conduct electricity well.
Posted at 03:51
Fri - March 5, 2004
Headphones for my iPod
A while ago I mentioned how much I like the Etymotic ER-4P earplug-style earbuds that I use with my iPod. My opinion of them hasn't changed a bit since then: they're fabulous. On the last flight I took, I was seated in among two families that had a total of eight children. A baby in the row behind me spent most of the flight crying, a teenager across the aisle watched a portable DVD player without headphones, and a youngster to my left played a portable videogame without headphones. I put my ER-4Ps in my ears, switched my iPod on, and had a pleasant flight.
But the sound-isolation that makes the ER-4Ps so nice in situations like that isn't always desirable. If I were to use them while I was sitting in a departure lounge, I might well miss my flight being called. Eric Rescorla has remarked on the same thing, and he's talking about ER-6s which provide slightly less isolation. When I needed to be able to hear things over my iPod, I used to use Apple's cool-looking white earbuds. But after listening with my ER-4Ps and my Monsoon speakers (which I also mention in the post linked above), I decided that it ought to be possible to find something that sounds better than Apple's earbuds but doesn't provide much isolation and is easily portable.
I found HeadRoom's site to be admirably informative and I judged from what they have to say that Sennheiser PX-100s ought to be suitable. And at $40 from HeadRoom, there's nothing to complain about in the price. Still, I was a bit leery. Would $40 headphones be any good? So I emailed their sales manager, explaining what I was looking for, and asked if the PX-100s were what he'd recommend, or if I should spend a bit more because something else would likely suit me better. He mailed back quite quickly and said that the PX-100s were what he'd recommend for me. In my experience, if someone in sales doesn't recommend that you spend more money, you've found someone who really knows their products and is interested in making their customers happy. Naturally, I ordered the PX-100s from them.
And I think that the PX-100s sound very nice indeed. They sound a lot like my Monsoon speakers. They're certainly a considerable improvement over Apple's earbuds. Next to the PX-100s, Apple's earbuds sound thin and lacking in bass. The PX-100s aren't as good as the ER-4Ps, but it would be silly to expect them to be.
In addition to sounding good, the PX-100s are small and light and quite comfortable. The headband has two hinges and the earpads rotate 90 degrees, so they fold up like a pair of glasses. They come in a hard-plastic case which some folks may like but I think is too clever by half. If I want to put things in my shoulder-bag and have them not rattle around, I prefer the nylon envelope bags from Aerostich. Velcro wire-ties solve the snarled-cable problem.
Update July 8, 2004
Dan over at Dan's Data also likes the PX-100s.
Posted at 03:29
Sun - December 21, 2003
Voice-over-IP is interesting but not in the way most people seem to think
It's a tempting thought: In most offices and many homes there are two networks, one for voice and one for data. Why not run voice over the data network? In a lot of places, including here in Minneapolis, it looks like that would be convenient and cheap. It also doesn't look very hard. Voice is easily digitized and compressed. Your cell phone does it and even land-line telephone companies routinely carry voice over Asynchronous Transfer Mode networks. (ATM networks are data networks where the packets are 53 bytes long and are called cells.) The highest data rate that's ever used for voice is 64 Kbps and that wouldn't make much of a dent in my DSL line's 256 Kbps upload speed. In practice you could use a much lower rate.
So why aren't we all using phones that have Ethernet ports to make calls that don't cost us anything beyond our monthly fee for internet service? Well, actually some people are. And telephone companies are understandably somewhat concerned about losing voice revenue. But there's a little more to it than that. The problem is that we have different expectations for voice networks and data networks. If an email were delayed by three seconds, I wouldn't notice or care. But if my "Hello" were delayed by three seconds, it would be an awkward way to start a conversation. I'm no fan of giant telephone companies but at least part of the reason that voice is expensive and data is cheap (where that's true) comes from the different expectations we have for those networks.
In addition to expecting little latency when we send voice, we also expect our voice networks to be extremely reliable. I'm pretty sure that that annoying "five nines" thing started with telephone companies. If any telephone company actually achieves 99.999% reliability (that's about 5 1/4 minutes of downtime per year) by any meaningful calculation it's news to me. Still, I have to admit that they're much more reliable than even quite good data networks.
Beyond low latency and high reliability, voice networks also have lots of capacity. It's often the case that something causes lots of people to want to make telephone calls all at the same time. Whether it's a holiday or a disaster, it's very possible that I can have a particularly strong desire to make a telephone call just at the time that everyone else does. As Steven den Beste has observed, building capacity that goes unused almost all of the time is expensive, but the capacity of the voice network around here is great enough that I don't remember the last time I got a fast busy signal or an "all circuits busy" recording.
My ridiculously cheap data network can be laggy, it goes down a couple of times a year, and I routinely run into its capacity limits. And that's just fine. We use data networks in ways that mean we can tolerate those things. The interesting thing about voice-over-IP isn't that voice can be sent using some particular protocol over a cheap data network. Rather, it's how cheap you can keep that data network while providing what people want from a voice network. Here's hoping that's pretty cheap.
Update March 3, 2004
The Register reports on similar issues.
Update May 30, 2004
It seems that some other folks have come to the same conclusion from a different direction.
Posted at 08:05
Sun - December 14, 2003
Free vs. Open-Source software
The folks at the Free Software Foundation and the folks behind the Open-Source Initiative seem to want about the same thing. They both want software that everyone is free to inspect, modify, improve, and re-distribute. A lot of software that's identified by its authors as being open-source is licensed under the FSF's GNU General Public License. But if their aims are so similar, why would Richard M. Stallman, the founder of the FSF, say, "I disagree with the Open Source movement" and why would Eric S. Raymond, a promoter of open-source software, just about say that Richard Stallman should "shut up"? And those are moderate examples of the acrimony between the two camps.
The reason for the acrimony is that the two groups arrive at the same place from entirely different directions. RMS believes that my ability to give software that you wrote to a third person is a "natural" right, up there with life, liberty, and the pursuit of happiness. I don't see it that way, but there's little point in arguing with someone about what are and aren't natural rights.
By contrast, ESR and the OSI folks, whatever their personal principles, make pragmatic, utilitarian arguments that I should be able to give away software that you wrote. Over here they say, "Open source promotes software reliability and quality by supporting independent peer review and rapid evolution of source code." And, referring to RMS's position slightly obliquely, "We think the economic self-interest arguments for open source are strong enough that nobody needs to go on any moral crusades about it."
It's easy for me to see things the open-source way. Some time ago, I released some software under the GNU General Public License. It was far from complete, but it was in a state where I thought that it might possibly be of some use to someone else. Pretty late at night, I typed the upload command and went to bed. The next morning, I checked my mail and found a very nice note from someone who had downloaded my program and had found a bug and sent me a patch to fix it. Literally, someone had improved my software while I slept. I had previously liked the idea of open-source rather abstractly, but the patch in my morning's mail was concrete. Since that time, I've gotten plenty of improvements to my software from strangers.
It's much more difficult for me to try to see things RMS's way. I have an ownership interest in the labor of my hands and I don't know why I shouldn't have an ownership interest in the labor of my brain. But RMS thinks that I shouldn't. On the other hand, if I look at a place where I do see a natural right, for example in free speech, RMS's impatience becomes clearer. If someone came to me and made utilitarian arguments in favor of free speech, I'd probably come off as pretty peevish in reply. Even if the utilitarian arguments were correct, I wouldn't want my right to free speech to depend on free speech being useful.
I don't expect to see an end to the arguments any time soon. There's no way to persuade someone who sees a natural right that they should be pragmatic about it, and no way that pragmatists are going to start seeing natural rights in software distribution.
Posted at 02:09
Sat - December 13, 2003
Who's trusting what in "trusted computing"
"Trusted computing" is a marvelous bit of marketing weasel-speak. I mean, who wouldn't want to trust their computer more? I know several people whose computers can't be trusted to get through the day without falling over at least once. Alas, that's not the kind of trust that trusted computing refers to. The Trusted Computing Group's "backgrounder" (warning, as is often the case with tedious documents, it's a PDF) says (p. 6):
As an example, per TCG PC Specific Implementation Specification v.1.0, the CRTM for PCs is the BIOS or BIOS boot block and the BIOS is required to load HASHES of pre-boot information into various PCRs of the TPM. This establishes the "anchor" for the chain of trust and the basis for platform integrity metrics. This can be used to validate that the platform configuration has not changed and that the BIOS has not been changed by malicious code such as a Trojan horse. While not required, verifiable attestation of the platform configuration can be extended by a chain of trust to the boot loader, operating system, and applications if software support for this is provided. TCG does not provide specifications for how this is accomplished, as this is under the control of these software suppliers.
Un-obfuscated, that means that when you turn on a computer that conforms to their specification, the first thing that happens is that some hardware gets to decide if the BIOS gets to run. In order to be acceptable, the BIOS is likely to need to be cryptographically signed with some particular secret key by the people who wrote it. The BIOS then becomes trusted and can choose to load only an operating system that has been similarly cryptographically signed. The operating system can choose to run only programs that have also been cryptographically signed. All this is couched in terms of "opt-in" and larded with "may"s but it's easy enough to see where this is going: your computer's ROMs would be picky about what operating system their code was willing to pass control to. And your operating system could be picky about what programs it was willing to run. That might increase security slightly since it's unlikely that a virus would be signed by Microsoft, but it's an even better way to reduce choice.
In general, elaborate restrictions of this sort cause users more problems than they solve. Such a system would be sufficiently elaborate that it's virtually certain that there would be flaws in its implementation and quite possibly errors or limitations in its design. The bad guys would go to the trouble of finding and exploiting those errors and limitations and the ordinary users would be left with less choice of software. So "trusted computing" doesn't mean that you can trust your computer to do what you want, but rather that the software publishers can trust your computer to do what they want.
Microsoft's somewhat different trusted computing project used to be called Palladium but they changed the name to "Next-Generation Secure Computing Base", presumably to make it less memorable. "Palladium sucks" has a ring to it that "NGSCB sucks" just doesn't have.
The FAQ for Microsoft's NGSCB says that their interest is limited to the operating system and not the boot process.
That's fine as far as it goes, but Microsoft's claim that they're only interested in the OS and not in the boot process matches up surprisingly well with the Trusted Computing Group's claim that they're only interested in the boot process and not in the OS. Indeed, one looks pretty useless without the other; for the OS to trust what the underlying hardware tells it, it would want a trusted BIOS. And a trusted BIOS isn't of much use alone. Indeed, in the same FAQ Microsoft says a little later:
Q: How is NGSCB related to the Trusted Computing Group (TCG) and the Trusted Computing Platform Alliance (TCPA)?
A: ... Microsoft is a founding member of TCG and anticipates that some of the industry standards being developed by the group will be incorporated into NGSCB.
It seems that that convenient match-up may not be a coincidence.
It may be that it's only some future version of Windows Media Player that will say "I don't trust your system" and refuse to run. But on past form, people betting on the most benign interpretation are likely to be disappointed.
For myself, I'm not interested in anyone but me deciding what programs will run on hardware that I pay for. I'm very ready to deal only with publishers who trust me rather than my hardware.
Posted at 07:00
Sun - December 7, 2003
The DMCA and what it says about open-source software
There's no reason here to go into all the reasons that the Digital Millennium Copyright Act is a bad law. Plenty of other people have done that. But there's an implication of one aspect of it that I haven't seen mentioned elsewhere.
As most every geek knows these days, the DMCA makes it illegal (with a few exceptions) in the US to circumvent anti-piracy functions built into software. A non-geek might find that odd. Why should a big software company need to make subverting part of their software illegal? Couldn't they just make it impossible or at least very hard? Alas, on past form, they can't. Every form of copy protection I've heard of has been cracked sooner or later. And it's not large companies that crack copy-protection systems. It's mostly teenagers working in their spare time. The DMCA is just a matter of the big software companies running up a white flag. They're sure that whatever code they write will be cracked by a socially irresponsible teenager so they need to be able to sue.
But how is it that they can admit that they can't outwit teenagers and then say with a straight face that they produce better code than the thousands of dedicated people who work on open-source projects such as Linux and Python? Doesn't make sense, does it?
Posted at 01:53
Qt Plugin problem "Plugin verification data mismatch"
I am new to Qt and am trying to take advantage of the Qt plugin system. I am assuming that the plugin system is not just for developing Qt Creator/Designer plugins.
My project design is package based in that it is designed to load individual "packages" that are modules and extensions of its interface. So I tried using the Qt plugin system to help accomplish this goal.
Everything compiles fine. The problem is that I cannot load the plugin. It gives a "Plugin verification data mismatch" when I try to (that is, the message is given by QPluginLoader::errorString).
It may have something to do with using the macro: Q_EXPORT_PLUGIN2. I have seen this macro used in examples and the documentation for Qt says I should and I haven't yet done so. From what I have read however, this has something to do with using it in the Qt Designer. I do not need or want to export it to the Qt Designer (at least at this time). Is it still necessary?
If it is, I cannot figure out how to use it. The documentation says that the arguments are the target name and the class name. Yet, when I put in the "target" name as specified by Visual Studio, it does not work. I get an error: C2338: Old plugin system used.
If this is not the problem, then I cannot figure this out. I am using the same version of qt for all components of the project. I have used the Q_INTERFACES macro in the class derived from the interface (BuilderClient), and the Q_DECLARE_INTERFACE macro in the interface header.
It might help to know that I am using the Visual Studio Qt plugin with Visual Studio 2010
Hi,
You should take a look at the Plug & Paint example. It shows how to create plugins for your application.
Hope it helps
Ok. I followed the above advice and found one, possibly two (three?) problems. Fixing the first problem did not solve the issue. I am not sure what to do about the second problem because I use Visual Studio 2010.
I was missing the Q_PLUGIN_METADATA macro. I put it in but I do not have a json "file" that specifies anything. According to the Q_PLUGIN_METADATA documentation, the FILE macro is optional. Is this not true? Do I need to make a json file? Putting in the Q_PLUGIN_METADATA macro did not solve my problem.
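For anyone else who lands on this: the FILE argument to Q_PLUGIN_METADATA is indeed optional in Qt 5, and when it is supplied, the JSON file can be as small as an empty object. A minimal sketch of builderclient.json (the filename is just the one used elsewhere in this thread) would be:

```json
{}
```

Anything you do put in it (a version string, a list of keys, and so on) becomes available through QPluginLoader::metaData() without having to instantiate the plugin.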
The steps that are outlined in the example are:
Declare a plugin class.
Implement the interfaces provided by the plugin.
Export the plugin using the Q_PLUGIN_METADATA() macro.
Build the plugin using an adequate .pro file.
One thing that might be important is that the plugin interface itself inherits from QObject and has the Q_OBJECT macro. This is different from the examples. Is this a problem? This seems my best approach because each object inherited from the same interface must give off the same set of signals and I cannot declare signals without using the Q_OBJECT macro. If this is a problem I will need away around this issue.
Step 4 seems to be a problem because I am using Visual Studio. The instructions in the example were pretty explicit in what this file needs to include. How do I incorporate a .pro file when I am not using QtCreator as my IDE?
Folks, could someone please help me with this? I have been searching everywhere for a solution. I found out what I think is causing it but I still do not know how to get it to work.
I am fairly certain that the problem is that I am using Visual Studio. The project .pro file requires that one adds the CONFIG += Plugin setting but there is no way to do this setting in visual studio that I am aware of.
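For reference, the qmake project file for a plugin is short. A sketch of what that .pro file would contain — the target and file names here are guesses based on this thread, not a tested project:

```pro
TEMPLATE     = lib
CONFIG      += plugin
QT          += core
TARGET       = builderclient
HEADERS      = builderclient.h
SOURCES      = builderclient.cpp
```

The relevant effect of CONFIG += plugin on the compile line is that it defines QT_PLUGIN, so that is what has to be reproduced in a hand-maintained Visual Studio project.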
If you are not using the express edition, you have the Qt AddIn plugin for Visual Studio to handle Qt projects. Otherwise, you have to do it by hand. In that case, I recommend Qt Creator
I am using a professional version and the plugin. That was stated in my first post (at least the plugin part). However, I cannot find any setting for changing the config variable use in pro files for the qt project settings for the Visual Studio plugin. Is there a way to do that?
I believe I found a way to do the above. I added /D "QT_PLUGIN" to the C/C++ command line in the Visual Studio Project properties. I found that out by importing a .pro file with the CONFIG += Plugin set.
However, that did not get rid of the error. I also put in a blank { } json file reference just to cover my bases as was done in the plug-n-paint example. No change.
So I am going to post my code to see if anyone can find something wrong with the way I set everything up.
First... the Interface class that is used (which is located in BuilderCore.dll):
@
#define BUILDER_ROLE_INTERFACE_IID "Builder.DefineRoleInterface/1.0"
namespace BuilderCore
{
class BUILDERCORE_EXPORT IBuilderRole : public QObject
{
    // ... (interface members elided in the original post) ...
};
}
Q_DECLARE_INTERFACE(BuilderCore::IBuilderRole, BUILDER_ROLE_INTERFACE_IID)
@
Here is the header for the derived class that is the plugin. It is important to note that the plugin file contains several more classes other than this. Is that a problem?
NOTE: I had to comment out Q_OBJECT in order to get it to compile. I presume that is because this class inherits from QObject already (IBuilderRole is a QObject).
@
namespace BuilderCore
{
class BUILDERCLIENT_EXPORT BuilderClient : public IBuilderRole
{
//Q_OBJECT
Q_PLUGIN_METADATA(IID BUILDER_ROLE_INTERFACE_IID FILE "builderclient.json")
Q_INTERFACES(IBuilderRole)
public:
BuilderClient();
~BuilderClient();
public:
virtual void AddRole();
virtual void SetupRole();
virtual void Start();
virtual void Stop();
virtual void ShutdownRole();
virtual void RemoveRole();
private:
//BuilderRoleData* roleData;
};
}
@
Finally... the relevant import code from the Builder.exe program
@
QPluginLoader loader(unpackDir.absoluteFilePath(folderfileName));
loader.load();
QString loaderError = loader.errorString();
// QObject* object = loader.instance();
// the following if statement is never true because loader.instance() is always null
// the error is always the same.
if (IBuilderRole* ifRole = qobject_cast<IBuilderRole*>(loader.instance()))
{
// inherited signal connections
connect(ifRole, SIGNAL(RoleAdded()), this, SLOT(OnRoleAdded()));
connect(ifRole, SIGNAL(RoleReady()), this, SLOT(OnRoleReady()));
connect(ifRole, SIGNAL(RoleStarted()), this, SLOT(OnRoleStarted()));
connect(ifRole, SIGNAL(RoleBeginRemove()), this, SLOT(OnRoleBeginRemove()));
connect(ifRole, SIGNAL(RoleEndRemove()), this, SLOT(OnRoleEndRemove()));
connect(ifRole, SIGNAL(RoleCleared()), this, SLOT(OnRoleCleared()));
mRole = ifRole;
//mRole->AddRole();
}
else
failureString = new QString("Requested Role exists but the role could not be established.");
@
I wonder if it has something to do with the namespace… I haven't tried it with namespaced plugins.
KStandardDirs
#include <kstandarddirs.h>
Detailed Description
Site-independent access to standard KDE directories.
This is one of the most central classes in kdelibs: It knows where KDE-related files reside on the user's hard disk. It's meant to be the only one that knows – so applications and the end user don't have to.
Applications should always refer to a file with a resource type. The application should leave it up to e.g. KStandardDirs::findResource("xdgdata-apps", "Home.desktop") to return the desired path /opt/kde/share/applications/Home.desktop, or ::locate("data", "kgame/background.jpg") to return /opt/kde/share/kgame/background.jpg.
There are several toplevel prefixes under which files can be located. One of them is the kdelibs install location, one is the application install location, and one used to be $KDEHOME, no longer applicable in KDE Frameworks 5 (split into XDG_CONFIG_HOME and XDG_DATA_HOME, mostly).
Under these toplevel prefixes there are several well-defined suffixes where specific resource types can be found. For example, for the resource type "html" the suffixes could be share/doc/HTML and share/doc/kde/HTML. The search algorithm tries to locate the file under each prefix-suffix combination.
It is also possible to register absolute paths that KStandardDirs looks up after not finding anything in the former steps. They can be useful if the user wants to provide specific directories that aren't in his
$KDEHOME directory, for example for icons.
Standard resources that kdelibs allocates are:
autostart - Autostart directories (both XDG and kde-specific) (deprecated since 5.0, use xdgconf-autostart)
cache - Cached information (e.g. favicons, web-pages).
A type that is added by the class KApplication, if you use it, is appdata. This one makes the use of the type data a bit easier as it appends the name of the application. So while you had to write ::locate("data", "appname/filename"), you can also write ::locate("appdata", "filename") if your KApplication instance is called "appname" (as set via KApplication's constructor or KAboutData, if you use the global KStandardDirs object KGlobal::dirs()). Please note though that you cannot use the "appdata" type if you intend to use it in an applet for Kicker because 'appname' would be "Kicker" instead of the applet's name. Therefore, for applets, you've got to work around this by using ::locate("data", "appletname/filename").
KStandardDirs supports the following environment variables:
KDEDIRS - This may set an additional number of directory prefixes to search for resources. The directories should be separated by ':'. The directories are searched in the order they are specified.
KDEHOME - The directory where changes are saved to. This directory is used to search for resources first. If KDEHOME is not specified it defaults to "$HOME/.kde".
KDEROOTHOME - Like KDEHOME, but used for the root user. If KDEROOTHOME is not set it defaults to the .kde directory in the home directory of root, usually "/root/.kde". Note that the setting of $HOME is ignored in this case.
- See also
- KGlobalSettings
On The Usage Of 'locate' and 'locateLocal'
Typical KDE applications use resource files in one out of three ways:
1) A resource file is read but is never written. A system default is supplied but the user can override this default in his local .kde directory:
2) A resource file is read and written. If the user has no local version of the file the system default is used. The resource file is always written to the user's local .kde directory.
3) A resource file is read and written. No system default is used if the user has no local version of the file. The resource file is always written to the user's local .kde directory.
- Deprecated:
- since 5.0, use QStandardPaths, see KDE5PORTING.html for details
Definition at line 177 of file kstandarddirs.h.
Constructor & Destructor Documentation
KStandardDirs' constructor.
It just initializes the caches. Note that you should normally not call this, but use KGlobal::dirs() instead, in order to reuse the same KStandardDirs object as much as possible.
Creating other KStandardDirs instances can be useful in other threads.
Thread safety note: using a shared KStandardDirs instance (such as KGlobal::dirs()) in multiple threads is thread-safe if you only call the readonly "lookup" methods (findExe, resourceDirs, findDirs, findResourceDir, findAllResources, saveLocation, relativeLocation). The methods that modify the object (all those starting with "add", basically all non-const methods) are obviously not thread-safe; set things up before creating threads.
Definition at line 375 of file kstandarddirs.cpp.
KStandardDirs' destructor.
Definition at line 381 of file kstandarddirs.cpp.
Member Function Documentation
Reads customized entries out of the given config object and add them via addResourceDirs().
- Returns
true if new config paths have been added from config.
Definition at line 2004 of file kstandarddirs.cpp.
Adds another search dir to front of the
fsstnd list.
Since 5.0, this prefix is only used for "lib" and "exe" resources, and the compat "config" resource. Use addXdgDataPrefix for most others.
- When compiling kdelibs, the prefix is added to this.
- KDEDIRS is taken into account.
- Additional dirs may be loaded from kdeglobals.
Definition at line 450 of file kstandarddirs.cpp.
Adds absolute path at the beginning of the search path for particular types (for example in case of icons where the user specifies extra paths).
You shouldn't need this function in 99% of all cases besides adding user-given paths.
- Returns
- true if successful, false otherwise.
Definition at line 568.
- Deprecated:
- Use addResourceType(type, 0, relativename, priority) instead.
Definition at line 522.
Definition at line 530 of file kstandarddirs.cpp.
- just to avoid unwanted overload
Definition at line 291 of file kstandarddirs.h.
Adds another search dir to front of the
XDG_CONFIG_XXX list of prefixes.
This prefix is only used for resources that start with
"xdgconf-"
Definition at line 472 of file kstandarddirs.cpp.
Adds another search dir to front of the
XDG_DATA_XXX list of prefixes.
Definition at line 494 of file kstandarddirs.cpp.
This function will return a list of all the types that KStandardDirs supports.
Definition at line 416 of file kstandarddirs.cpp.
Returns a number that identifies this version of the resource.
When a change is made to the resource this number will change.
- Returns
- A number identifying the current version of the resource.
- Deprecated:
- since 5.0. Only kbuildsycoca needed the multi-dir version of this. In other apps, just use QFileInfo(fullPath).lastModified().toTime_t()
Definition at line 640 of file kstandarddirs.cpp.
Check, if a file may be accessed in a given mode.
This is a wrapper around the access() system call. checkAccess() calls access() with the given parameters. If this is OK, checkAccess() returns true. If not, and W_OK is part of mode, it is checked if there is write access to the directory. If yes, checkAccess() returns true. In all other cases checkAccess() returns false.
Other than access() this function EXPLICITLY ignores non-existent files if checking for write access.
- Returns
- Whether the access is allowed, true = Access allowed
Definition at line 2174 of file kstandarddirs.cpp.
Checks for existence and accessibility of a file or directory.
Faster than creating a QFileInfo first.
- Returns
true if the directory exists, false otherwise
- Deprecated:
- since 5.0, use QFile::exists or QFileInfo::isFile()/isDir() to be more precise.
Definition at line 728 of file kstandarddirs.cpp.
Finds all occurrences of an executable in the system path.
- Returns
- The number of executables found, 0 if none were found.
Definition at line 1483 of file kstandarddirs.cpp.
Tries to find all resources with the specified type.
The function will look into all specified directories and return all filenames in these directories.
The "most local" files are returned before the "more global" files.
- Returns
- List of all the files whose filename matches the specified filter.
Definition at line 1043 of file kstandarddirs.cpp.
Tries to find all resources with the specified type.
The function will look into all specified directories and return all filenames (full and relative paths) in these directories.
The "most local" files are returned before the "more global" files.
- Returns
- List of all the files whose filename matches the specified filter.
Definition at line 996 of file kstandarddirs.cpp.
Tries to find all directories whose names consist of the specified type and a relative path.
So findDirs("xdgdata-apps", "Settings") would return
- /home/joe/.local/share/applications/Settings/
- /usr/share/applications/Settings/
(from the most local to the most global)
Note that it appends / to the end of the directories, so you can use this right away as directory names.
- Returns
- A list of matching directories, or an empty list if the resource specified is not found.
Definition at line 662 of file kstandarddirs.cpp.
Finds the executable in the system path.
A valid executable must be a file and have its executable bit set.
- See also
- findAllExe()
Definition at line 1415 of file kstandarddirs.cpp.
Tries to find a resource in the following order:
- All PREFIX/<relativename> paths (most recent first).
- All absolute paths (most recent first).
The filename should be a filename relative to the base dir for resources. So a way to get the path to libkdecore.la is findResource("lib", "libkdecore.la"). KStandardDirs will then look into the subdir lib of all elements of all prefixes ($KDEDIRS) for a file libkdecore.la and return the path to the first one it finds (e.g. /opt/kde/lib/libkdecore.la). You can use the program kf5-config to list all resource types.
Definition at line 597 of file kstandarddirs.cpp.
Tries to find the directory the file is in.
It works the same as findResource(), but it doesn't return the filename but the name of the directory.
This way the application can access a couple of files that have been installed into the same directory without having to look for each file.
findResourceDir("lib", "libkdecore.la") would return the path of the subdir libkdecore.la is found first in (e.g. /opt/kde/lib/)
- Returns
- The directory where the file specified in the second argument is located, or QString() if the type of resource specified is unknown or the resource cannot be found.
Definition at line 692 of file kstandarddirs.cpp.
- Returns
- the path where type was installed to by kdelibs. This is an absolute path and only one out of many search paths
- Deprecated:
- since 5.0, use QStandardPaths::standardLocations(...).last()
Definition at line 359 of file kstandarddirs.cpp.
Checks whether a resource is restricted as part of the KIOSK framework.
When a resource is restricted it means that user- specific files in the resource are ignored.
E.g. by restricting the "wallpaper" resource, only system-wide installed wallpapers will be found by this class. Wallpapers installed under the $KDEHOME directory will be ignored.
- Returns
- True if the resource is restricted.
Definition at line 386 of file kstandarddirs.cpp.
This returns a default relative path for the standard KDE resource types.
Below is a list of them so you get an idea of what this is all about.
- data
- html - share/doc/HTML
- icon - share/icon
- config - share/config
- pixmap - share/pixmaps
- sound - share/sounds
- locale - share/locale
- services - share/kde5/services
- servicetypes - share/kde5/servicetypes
- wallpaper - share/wallpapers
- templates - share/templates
- exe - bin
- lib - lib[suffix]
- module - lib[suffix]/plugins/kde5
- qtplugins - lib[suffix]/plugins
- kcfg - share/config.kcfg
- emoticons - share/emoticons
- xdgdata - shared files (QStandardPaths::GenericDataLocation)
- xdgdata-apps - applications
- xdgdata-icon - icons
- xdgdata-pixmap - pixmaps
- xdgdata-dirs - desktop-directories
- xdgdata-mime - mime
- xdgconf-menu - menus
- xdgconf - config files
- Returns
- Static default for the specified resource. You should probably be using locate() or locateLocal() instead.
- See also
- locate()
- locateLocal()
- Deprecated:
- now returns % + type + / ...
Definition at line 1557 of file kstandarddirs.cpp.
(for use by sycoca only)
- Deprecated:
- since 5.0, there is no KDEDIRS anymore. If you care for XDG_DATA_DIRS instead, use this: QStandardPaths::standardLocations(QStandardPaths::GenericDataLocation).join(QString(':'))
Definition at line 517 of file kstandarddirs.cpp.
Returns the toplevel directory in which KStandardDirs will store things.
Most likely $HOME/.kde. Don't use this function if you can use locateLocal().
- Returns
- the toplevel directory
- Deprecated:
- since 5.0, there is no KDEDIRS nor KDEHOME anymore. Use QStandardPaths::writableLocation(QStandardPaths::GenericDataLocation) or QStandardPaths::writableLocation(QStandardPaths::GenericConfigLocation) instead.
Definition at line 2126 of file kstandarddirs.cpp.
- Returns
- $XDG_CONFIG_HOME
- Deprecated:
- since 5.0 use QStandardPaths::writableLocation(QStandardPaths::GenericConfigLocation) + '/'
Definition at line 2138 of file kstandarddirs.cpp.
- Returns
- $XDG_DATA_HOME
- Deprecated:
- since 5.0 use QStandardPaths::writableLocation(QStandardPaths::GenericDataLocation) + '/'
Definition at line 2132 of file kstandarddirs.cpp.
This function is just for convenience.
It simply calls instance->dirs()->findResource(type, filename).
Definition at line 2146 of file kstandarddirs.cpp.
This function is much like locate. However it returns a filename suitable for writing to. No check is made if the specified filename actually exists. Missing directories are created. If filename is only a directory, without a specific file, filename must have a trailing slash.
Definition at line 2152 of file kstandarddirs.cpp.
This function is much like locate. No check is made if the specified filename actually exists. Missing directories are created if createDir is true. If filename is only a directory, without a specific file, filename must have a trailing slash.
Definition at line 2158 of file kstandarddirs.cpp.
Recursively creates still-missing directories in the given path.
The resulting permissions will depend on the current umask setting: permission = mode & ~umask.
- Returns
- true if successful, false otherwise
- Deprecated:
- since 5.0, use QDir().mkpath(dir).
Definition at line 1655 of file kstandarddirs.cpp.
Expands all symbolic links and resolves references to '/./', '/../' and extra '/' characters in filename and returns the canonicalized absolute pathname. The resulting path will have no symbolic link, '/./' or '/../' components.
- Deprecated:
- since 5.0, port to QFileInfo::canonicalFilePath, but note that it returns an empty string if filename doesn't exist!
Definition at line 1119 of file kstandarddirs.cpp.
Expands all symbolic links and resolves references to '/./', '/../' and extra '/' characters in dirname and returns the canonicalized absolute pathname. The resulting path will have no symbolic link, '/./' or '/../' components.
- Deprecated:
- since 5.0, port to QDir::canonicalPath, but note that it returns an empty string if filename doesn't exist!
Definition at line 1057 of file kstandarddirs.cpp.
Converts an absolute path to a path relative to a certain resource.
If "abs = ::locate(resource, rel)" then "rel = relativeLocation(resource, abs)" and vice versa.
- Returns
- A relative path relative to resource type that will find absPath. If no such relative path exists, absPath will be returned unchanged.
- Deprecated:
- since 5.0, write your own loop. See KDE5PORTING.html for how to port other resources.
Definition at line 1636 of file kstandarddirs.cpp.
This function is used internally by almost all other functions as it serves and fills the directories cache.
- Returns
- The list of possible directories for the specified type. The function updates the cache if possible. If the resource type specified is unknown, it will return an empty list. Note that the directories are assured to exist beside the save location, which may not exist, but is returned anyway.
Definition at line 1147 of file kstandarddirs.cpp.
Finds a location to save files into for the given type in the user's home directory.
- Returns
- A path where resources of the specified type should be saved, or QString() if the resource type is unknown.
Definition at line 1563 of file kstandarddirs.cpp.
Returns a QStringList list of pathnames in the system path.
- Returns
- a QStringList list of pathnames in the system path.
Definition at line 1322 of file kstandarddirs.cpp.
The documentation for this class was generated from the following files:
Documentation copyright © 1996-2021 The KDE developers.
Generated on Fri Apr 9 2021 22:58:16 by doxygen 1.8.11 written by Dimitri van Heesch, © 1997-2006
Replace A Line Or Word In A File - Online Code
Description
To replace a word, several words, or a complete line in an existing file with new text, the following program shows one way to do it.
Source Code
import java.io.*; public class BTest { public static void main(String args[]) { try { File file = new File("file.txt"); BufferedReader reader = new BufferedReader(new FileReade...
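The listing above is cut off. A complete, self-contained version of the same idea might look like this (the class name, file name, and the word being replaced are assumptions for the demo, not the original author's code):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class ReplaceInFile {
    public static void main(String[] args) throws IOException {
        String file = "file.txt";   // assumed input file

        // Create a small demo file so the example is self-contained.
        try (PrintWriter seed = new PrintWriter(new FileWriter(file))) {
            seed.println("the old value");
        }

        // Read every line, replacing the target word as we go.
        StringBuilder updated = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(new FileReader(file))) {
            String line;
            while ((line = reader.readLine()) != null) {
                updated.append(line.replace("old", "new"))
                       .append(System.lineSeparator());
            }
        }

        // Write the updated text back over the original file.
        try (PrintWriter writer = new PrintWriter(new FileWriter(file))) {
            writer.print(updated);
        }
        System.out.println(updated.toString().trim());   // prints: the new value
    }
}
```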
Create the timer
Our app's UI is finished, so now we'll create a timer and implement our countdown.
Using the QTimer class
Let's take a minute to review the behavior that we defined for our finished app. The app has a button that, when clicked, changes the traffic signal from red to green. The button also starts a countdown timer, which is indicated visually in the text area of the UI. When the timer reaches 0, the traffic light changes to yellow, pauses briefly, and then changes to red.
We need a way to keep track of how much time is left before the traffic light should change to yellow. Fortunately, the Qt framework includes a class called QTimer that we can use to do just that. You can set a timer interval for a QTimer object and a signal, timeout(), will be emitted at that interval. For example, if you set an interval of 2000 milliseconds, the QTimer object emits its timeout() signal every two seconds while the timer is active. You can call start() to start the timer, and stop() to stop it.
We can use the functionality of QTimer in our traffic light app. We can use one QTimer object, with an interval of one second, to keep track of the countdown until the traffic light should turn yellow. When this QTimer emits its timeout() signal, we update the text in the countdown TextArea. We can use a second QTimer object, with an interval of about two seconds, to pause the traffic light in the yellow state before changing to the red state.
There's a problem with this plan, though: we can't use the QTimer class (and other Qt and C++ classes) in QML automatically. We need a way to provide access to the features of QTimer directly from QML. There are several ways to do this, but here are two possible approaches that we could choose to use in our app:
- Use the attachedObjects list in a QML control to specify the C++ objects that you want to use (in this case, QTimer).
- Create a class that extends the CustomControl class, and provide access to an underlying QTimer object. Then, register this new class for use in QML.
To demonstrate more Cascades features in this tutorial, we'll use the second approach for our traffic light app. One of the benefits of extending CustomControl is the ability to create a visual representation for the new control. Because CustomControl inherits from Control (which itself inherits from VisualNode), you can use the properties of these inherited classes to define how your control looks (for example, preferredWidth, opacity, scaleX, and so on). For the purposes of this tutorial, we won't create a visual representation for the timer that we make, but we'll leave that as an optional extension to our app.
Create the header file for the Timer class
To declare our class elements, create a C++ header file called timer.hpp in the src folder of the project (right-click the src folder and click New > Header File). In this file, keep the pre-populated code, and in between the #define and #endif statements, add the appropriate #include statements. We need to include both QObject and CustomControl. We also need to use a forward declaration of the QTimer class:
#include <QObject> #include <bb/cascades/CustomControl> class QTimer;
Next, we create the Timer class, extending CustomControl. We also use the Q_OBJECT macro to indicate that this class should be preprocessed by the Meta-Object Compiler (moc) tool to compile correctly. For more information about the moc tool in Qt, see Using the Meta-Object Compiler (moc) on the Qt website.
class Timer : public bb::cascades::CustomControl { Q_OBJECT
The QTimer class includes a couple of properties that we want to expose in our own Timer class. The active property indicates whether the timer is running, and the interval property specifies the current interval that the timeout() signals are emitted at. We expose these properties by using the Q_PROPERTY macro, and we use the READ and WRITE keywords inside this macro to specify functions that access and change each property. The NOTIFY keyword is also important, and specifies the signal that's emitted when the property changes. For more information about properties in Qt and the Q_PROPERTY macro, see The Property System on the Qt website.
Q_PROPERTY(bool active READ isActive NOTIFY activeChanged) Q_PROPERTY(int interval READ interval WRITE setInterval NOTIFY intervalChanged)
We now declare the public functions of the Timer class, including the constructor. We need functions to get and set the interval of the timer, and we should also have a way to access the state of the timer (active or inactive):
public: explicit Timer(QObject* parent = 0); bool isActive(); void setInterval(int m_sec); int interval();
This class includes two slots, start() and stop(), to start and stop the timer. Remember that slots are normal member functions, but are declared in a public slots: section so that they can be connected to signals if needed:
public slots: void start(); void stop();
Our class also includes three signals: timeout(), intervalChanged(), and activeChanged(). These signals are emitted in the definitions of our class functions, which we'll add soon when we create our timer.cpp file. We declare our signals in a signals: section:
signals: void timeout(); void intervalChanged(); void activeChanged();
Finally, we declare the underlying QTimer object that our Timer class uses:
private: QTimer* _timer; };
Create the source for the Timer class
To define our class elements, create a C++ source file called timer.cpp in the src folder of the project (right-click the src folder and click New > Source File). This source file doesn't include any pre-populated code, so we start by adding the appropriate #include statements. We need to include QTimer, as well as timer.hpp that we created:
#include <QTimer> #include "timer.hpp"
Next, we create the constructor for Timer. The constructor creates a QTimer object to represent our timer. We also call QObject::connect() to connect the timeout() signal in QTimer to the timeout() signal of our Timer class. This way, when the QTimer emits the timeout() signal, our timer also emits its timeout() signal and we can handle it using a signal handler in QML. We use the setVisible() function to indicate that our timer shouldn't be visible on the screen, because it has no visual representation defined for it. The constructor also includes a Q_UNUSED macro, which simply means that the parent parameter isn't used in the body of the constructor.
Timer::Timer(QObject* parent) : bb::cascades::CustomControl(), _timer(new QTimer(this)) { Q_UNUSED(parent); bool connectResult = connect(_timer, SIGNAL(timeout()), this, SIGNAL(timeout())); Q_ASSERT(connectResult); // This is only available in Debug builds.
setVisible(false); }
The isActive() function just returns the value of the active property by calling isActive() of the underlying QTimer object. Similarly, the interval() function calls interval() of the QTimer object to return the interval of the timer:
bool Timer::isActive() { return _timer->isActive(); } int Timer::interval() { return _timer->interval(); }
The setInterval() function calls setInterval() of QTimer to set the interval of the timer, and also emits the intervalChanged() signal by using the emit keyword:
void Timer::setInterval(int m_sec) { // If the timer already has the specified interval, do nothing if (_timer->interval() == m_sec) return; // Otherwise, set the interval of the timer and emit the // intervalChanged() signal _timer->setInterval(m_sec); emit intervalChanged(); }
The last functions to implement are start() and stop(). These functions call the corresponding functions of the underlying QTimer object. They also emit the activeChanged() signal:
void Timer::start() { // If the timer has already been started, do nothing if (_timer->isActive()) return; // Otherwise, start the timer and emit the activeChanged() // signal _timer->start(); emit activeChanged(); } void Timer::stop() { // If the timer has already been stopped, do nothing if (!_timer->isActive()) return; // Otherwise, stop the timer and emit the activeChanged() // signal _timer->stop(); emit activeChanged(); }
Our Timer class is now ready for us to use.
Last modified: 2015-03-31
Developers are busy people, and we don’t always have time to evaluate the JavaScript world’s myriad of frameworks and tools.
In this article I want to help you decide whether React Server Components is something that you should check out immediately, or whether you should wait.
We’ll start by looking at what React Server Components are, then discuss what problems they solve, and wrap up with a conversation on whether you should care or not. Let’s get started.
What are React Server Components?
React Server Components are a new experimental feature of React. Here’s how the React team describes the feature:
“Server Components allow developers to build apps that span the server and client, combining the rich interactivity of client-side apps with the improved performance of traditional server rendering.”
The client in the context of Server Components is a web browser. Although React can run in other clients—aka React Native running on iOS and Android—the Server Components feature is currently only concerned with the web.
The server in the context of Server Components is a JavaScript-based backend like Express.
The idea is, Server Components give you the ability to selectively move components from the client, where the browser executes them, to the server, where something like Express executes them.
To make it easy to tell the difference, Server Components introduces a new naming convention, where .server.js files are server components, .client.js files are client components, and regular .js files are files that can run in both environments. Here’s what that looks like in the React team’s Server Components demo.
Wait, why would I want to do any of this?
Rendering components on a server has a number of potential benefits. The React team’s full writeup on server components lists these benefits in great detail, but I’ll summarize what I think are the most important ones here.
Benefit #1: Using third-party dependencies without a file size penalty
One of best-known web performance tips is to minimize the amount of code you ship to your users. As such, front-end developers are hesitant to add large dependencies to their applications, even if those dependencies would save us a lot of time and effort.
Server Components offer an interesting solution to this problem. Because Server Components can live on a server (and not a client), their dependencies can live on the server as well—allowing you to use dependencies with zero impact on the size of your client-size bundles.
For example, suppose you’re writing an application that displays user-written Markdown. Two libraries that can help you do that are marked, which parses Markdown, and sanitize-html, which cleans up user-written HTML, including removing potential XSS attacks.
By using those two libraries you can write a simple React component that looks something like this:
/* RenderMarkdown.js */ import marked from 'marked'; // 35.9K (11.2K gzipped) import sanitizeHtml from 'sanitize-html'; // 206K (63.3K gzipped) export function RenderMarkdown({text}) { const sanitizedHtml = sanitizeHtml(marked(text)); return <div>{sanitizedHtml}</div> }
If you’re writing this component today you have to do a cost-benefit analysis. Are the conveniences of marked and sanitize-html worth the ~75K of gzipped JavaScript being added to your client-side bundle, as well as the performance hit of having your users’ browsers interpret an (un-gzipped) 200K+ of JavaScript code at runtime? Probably?
Now let’s look at a version of this component that can run on a server as a Server Component.
/* RenderMarkdown.server.js */ // Same code, but now these dependencies have no client-side penalty import marked from 'marked'; import sanitizeHtml from 'sanitize-html'; export function RenderMarkdown({text}) { const sanitizedHtml = sanitizeHtml(marked(text)); return <div>{sanitizedHtml}</div> }
The only code difference in this version is the file name (RenderMarkdown.server.js instead of RenderMarkdown.js), but the behavior difference is fairly substantial. With this version of RenderMarkdown, your user never has to download or interpret marked or sanitize-html, but you still get the benefit of using both to keep your Markdown implementation clean.
This is pretty cool, but before you get too excited, there are some Server Components limitations that will keep you from removing a lot of your client-side dependencies. The React team’s Server Components RFC (Request for Comments) has the full list of things a Server Component cannot do.
The big ones here are that Server Components cannot have state and cannot work with DOM APIs, which means all of your components that use things like useState() or onChange are not eligible. This is a big limitation because... most UI components rely on state and DOM APIs—meaning, a lot of your dependencies will have to remain on the client.
Still, being able to remove some of your code to the server has the potential to lead to noticeable performance gains, especially for larger apps. Facebook stated that their first production experiments with Server Components allowed them to remove almost 30% of their code from the client, which is a big deal.
And being able to move code to the server is not the only benefit of Server Components.
Benefit #2: Accessing your backend fast
Accessing data is one of the most expensive tasks in modern front-end applications. Because most applications store their data remotely (aka not on the client), getting the data you need involves network calls, and trying to reduce the number of network calls you make, while also keeping your code clean and maintainable, can be a big challenge.
Server Components have the ability to help here, as you now have the ability to move data-access components to a server, which can access data storage much faster.
For example, suppose you have a header component that needs to retrieve notifications, a user’s profile, and a user’s subscription. Here’s one way you could write that component today.
// Header.js export function Header() { const [notifications, setNotifications] = React.useState([]); const [profile, setProfile] = React.useState({}); const [subscription, setSubscription] = React.useState({}); React.useEffect(() => { fetch('') .then(res => res.json()) .then(data => { setNotifications(data); }) fetch('') .then(res => res.json()) .then(data => { setProfile(data); }) fetch('') .then(res => res.json()) .then(data => { setSubscription(data); }) }, []); return ( <div> {...} </div> ) }
This approach is not ideal, as your component must wait for three separate network requests to completely render.
There are ways around this. You could ask a backend developer to build an API just for your header, which would return exactly what you need from multiple locations. But UI-specific APIs aren’t reusable, and therefore difficult to maintain over time. You could also use something like GraphQL to aggregate your backend API calls, but GraphQL isn’t an option for every company.
React Server Components offers an interesting new approach to this problem, by allowing you to access your data directly on the server. For example, consider this update to the header that lets you access a database right in your component.
// Header.server.js import db from 'my-database-of-choice'; export function Header() { const notifications = db.notifications.get(); const profile = db.profile.get(); const subscription = db.subscriptions.get(); return ( <div> {...} </div> ) }
With Server Components, because you’re running on a server, you have the ability to access server-side resources without making a network round trip. And this ability lets you write cleaner code, as you don’t need to write a bespoke backend API just for the UI, or architect your components to reduce as many network calls as possible.
That being said, even though the ability to quickly access server-side resources is cool, it’s also not without downsides—the big one being, this is all highly dependent on your backend setup. You stand to gain a lot if your server-side resources are JavaScript-based, but if your server-side resources are in a completely different ecosystem (Java, .NET, PHP, etc), you’ll have a hard time actually gaining much from a Server Component architecture.
Before we wrap up let’s look at some of the other limitations of Server Components.
NOTE: I’m only hitting the high-level benefits of server components to keep this discussion brief. If you want to read about all the benefits, I’d recommend reading through the section on Server Components benefits from the React team’s RFC.
What are the issues with Server Components?
After spending time with Server Components my biggest complaint is the complexity it introduces to React applications.
For example, as I started to play around with the React team’s Server Components demo, I realized I had to fundamentally change how I approached building components. Instead of just creating a new file and typing export const MyComponent = () => {}, I now had to start thinking about how the component would be used, to help determine whether it was a better fit for the client or the server.
And that’s just when creating the components. As Server Components advance, some of those same concerns are going to apply to how you unit test your Server Components, and also how to debug these components when things go wrong.
For example, currently React Server Components return “a description of the rendered UI, not HTML”, which I’m sure is important to the implementation, but it does mean that the response you see in your developer tools looks like nonsense.
To be fair, most of these limitations come from Server Components being so new. The React team has stated that they expect most of the initial adoption to be through frameworks like Next.js early on, so it would make sense that some of these workflows are a bit rough today.
So should you care?
In my opinion there are three groups of people that should care about Server Components today.
1) If you are a developer on a framework like Next.js.
Frameworks like Next.js are a logical consumer of React Server Components, as Next.js is already a framework that uses server-side code to help React apps run faster.
These frameworks also have the ability to help hide some of the messy details of the underlying React implementation, making Server Components easier for your average developer to use.
2) If your company is operating at Facebook’s scale.
In its current state, React Server Components introduces a lot of complexity for small performance gains.
For companies like Facebook this sort of tradeoff makes sense, as they have the engineering capacity to deal with this complexity, and marginal performance gains are a big deal for web applications operating at Facebook’s scale.
That being said, most companies don’t operate at Facebook’s scale, and therefore most companies have no need to evaluate Server Components in its current state. You can wait until the feature stabilizes, or appears in a framework like Next.js.
3) If you like tinkering with the latest and greatest.
The reason I looked into Server Components is I think they’re a cool idea with a lot of potential. The line between the client and server is getting blurry in the front-end world, and I think we’re going to see more experiments that try to mix and match the two environments to help developers build the best possible web applications.
With that in mind, if you’re the type of person that likes to experiment with the latest and greatest, React Server Components is well worth trying. The Server Components intro video is excellent, the React team’s RFC is a well-written guide that details how everything works. There’s also an open pull request where you can submit your own feedback on the feature.
Final words
Overall, Server Components is still too early for your average developer to care about, but it’s a fascinating idea with a lot of potential for those that want to guide the future of React and web development.
Master the Art of React UI with KendoReact
KendoReact is a professional UI components and data visualization library for React on a mission to help you design and build business apps with React much faster. With KendoReact, developers get an immediate productivity boost and businesses get shorter time-to-market. Designed and built from the ground up for React, KendoReact plays well with any existing UI stack. Its 90+ customizable and feature-rich components make it the perfect foundation for your internal UI library.
Built by a team with 19+ years of experience in making enterprise-ready components, this library is lightning fast, highly customizable and fully accessible, delivering support for WCAG 2.1, Section 508, and WAI-ARIA a11y standards. You can find detailed accessibility compliance information here.
Good Morning all:
I am new to programming and I have a homework assignment that is giving me some difficulty. The problem is the glass rod problem, where you break a glass rod into 3 pieces and see if the pieces will form a triangle.
The main problem is how the professor wants the program written.
We have to write a main and three functions that do most of the work. I have my code written and it compiles, but it returns erroneous answers. Here is my code; can anyone point me in the right direction to fixing it? Please.
Code:
#include <iostream>
#include <cmath>
#include <cstdlib>
#include <ctime>

using namespace std;

void doBreak(int & side1, int & side2, int & side3);
bool attempts(int);
void randomNumberGen(int & num1, int & num2);

void main()
{
    int count = 1;
    int tests = 1000000;
    double probability;
    int attempts;
    int triangle = 0;
    int val;

    srand(time(0));

    do
    {
        triangle = triangle + attempts;
        count++;
    } while (count <= tests);

    probability = (triangle / tests) * 100;
    cout << "The probability that the broken glass rod will form a triangle is" << probability << "% \n";
}

bool attempts(int)
{
    int side1, side2, side3;
    doBreak(side1, side2, side3);

    if ((side1 + side2) > side3 && (side1 + side3) > side2 && (side2 + side3) > side1)
    {
        return true;
    }
    else
    {
        return false;
    }
}

void doBreak(int & side1, int & side2, int & side3)
{
    int num1, num2;
    randomNumberGen(num1, num2);
    side1 = num1;
    side2 = (num2 - num1);
    side3 = 1000 - num2;
}

void randomNumberGen(int & num1, int & num2)
{
    do
    {
        num1 = rand() % 999 + 1;
        num2 = rand() % 999 + 1;
    } while (!(num1 < num2));
}
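Two things stand out in the posted code: main never actually calls the attempts() function (it declares a local, uninitialized int named attempts and adds that instead), and triangle / tests is integer division, which truncates to 0. A corrected sketch of the simulation logic follows; the names formsTriangle, attemptBreak, and estimateProbability are hypothetical, not from the assignment:

```cpp
#include <cstdlib>
#include <utility>   // std::swap

// True if three side lengths satisfy the triangle inequality.
bool formsTriangle(int a, int b, int c)
{
    return (a + b) > c && (a + c) > b && (b + c) > a;
}

// One simulated break of a rod of length 1000 at two random points.
bool attemptBreak()
{
    int cut1 = std::rand() % 999 + 1;
    int cut2 = std::rand() % 999 + 1;
    if (cut1 > cut2) std::swap(cut1, cut2);
    if (cut1 == cut2) return false;        // degenerate zero-length piece
    return formsTriangle(cut1, cut2 - cut1, 1000 - cut2);
}

// Runs the simulation and returns the success rate as a percentage.
// Note the two fixes: attemptBreak() is actually *called* each pass,
// and the division is done in double (triangle / tests in int
// arithmetic always yields 0 when triangle < tests).
double estimateProbability(int tests)
{
    int triangles = 0;
    for (int i = 0; i < tests; ++i)
        if (attemptBreak()) ++triangles;
    return 100.0 * triangles / tests;
}
```

Calling estimateProbability(1000000) from main and printing the result should land near the theoretical answer of about 25%.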
Designed for command line parsing.
#include "utility.hpp"
The sample below demonstrates how to use CommandLineParser:
The keys parameter is a string containing several blocks, each one enclosed in curly braces and describing one argument. Each argument contains three parts separated by the | symbol:

- argument name(s): a space-separated list of option names (positional arguments are prefixed with the @ symbol)
- default value: used if the flag is not set on the command line
- help message
For example:
Note that there are no default values for help and timestamp, so we can check their presence using the has() method. Arguments with default values are considered to be always present; use the get() method in these cases to check their actual value instead.

String keys like get<String>("@image1") return the empty string "" by default, even with an empty default value. Use the special <none> default value to enforce that the returned string must not be empty (as with get<String>("@image2")).
For the described keys:
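The inline examples did not survive extraction. The following sketch, modeled on OpenCV's standard CommandLineParser sample, shows what a keys string and typical access code look like; the argument names and help texts here are illustrative:

```cpp
#include <opencv2/core/utility.hpp>
#include <iostream>

int main(int argc, char** argv)
{
    // Each {...} block: names | default value | help message.
    const cv::String keys =
        "{help h usage ? |      | print this message }"
        "{@image1        |      | image1 for compare }"
        "{@image2        |<none>| image2 for compare }"
        "{N count        | 100  | count of objects   }"
        "{ts timestamp   |      | use time stamp     }";

    cv::CommandLineParser parser(argc, argv, keys);
    parser.about("Application name v1.0.0");

    if (parser.has("help"))
    {
        parser.printMessage();
        return 0;
    }

    int N = parser.get<int>("N");                  // always present (has a default)
    bool use_time_stamp = parser.has("timestamp"); // no default: check presence
    cv::String img1 = parser.get<cv::String>(0);   // positional, by index
    cv::String img2 = parser.get<cv::String>(1);   // <none>: must not be empty

    if (!parser.check())                           // report conversion errors
    {
        parser.printErrors();
        return 1;
    }
    std::cout << "N = " << N << ", ts = " << use_time_stamp << "\n";
    return 0;
}
```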
Constructor.
Initializes command line parser object
Copy constructor.
Destructor.
Set the about message.
The about message will be shown when printMessage is called, right before arguments table.
Check for parsing errors.
Returns true if an error occurred while accessing the parameters (bad conversion, missing arguments, etc.). Call printErrors to print the list of error messages.
Access arguments by name.
Returns the argument converted to the selected type. If the argument is not known or cannot be converted to the selected type, the error flag is set (can be checked with check).
For example, define:
Call:
Access:
@-prefixed name:
Access positional arguments by index.
Returns argument converted to selected type. Indexes are counted from zero.
For example, define:
Call:
Access arguments:
Returns application path.
This method returns the path to the executable from the command line (argv[0]). For example, if the application has been started with such a command, this method will return ./bin.
Check if field was provided in the command line.
Assignment operator.
Print the list of errors that occurred.
Print help message.
This method will print standard help message containing the about message and arguments description.
On Oct 16, 2006, at 5:15 PM, Ryan Martell wrote:

> I am trying to link against the ffmpeg libraries from XCode 2.4, and I get the "local relocation entries in non-writable section" error. This is on a MacBook Pro (Intel Dual Core)
>
> /usr/bin/gcc-4.0 -bundle -arch i386 /usr/local/lib/libavcodec.a -Wl,-all_load -Wl,-twolevel_namespace -Wl,-twolevel_namespace_hints -Wl,-undefined -Wl,dynamic_lookup -Wl,-multiply_defined -Wl,suppress -isysroot /Developer/SDKs/MacOSX10.4u.sdk -o /Users/rmartell/dev/SDLOpenGLTest/build/SDLOpenGLTest.build/Debug/SDLOpenGLTest.build/Objects-normal/i386/libavcodec.ab
> /usr/bin/ld: /usr/local/lib/libavcodec.a(bitstream.o) has local relocation entries in non-writable section (__TEXT,__text)
>
> I know this has been posted before, and I have tried the various options:
>
> 1) The code is compiled with -mdynamic-no-pic, which supposedly caused issues in the past, but not on gcc 3.3 (i think) or later...
> 3) configure ffmpeg without the flag --enable-shared. (This is done).

I've never heard of -mdynamic-no-pic being a problem.

> I am building the libraries statically.

That's what we're doing for Perian.

> <snip>
>
> 1) Link with the flag '-read_only_relocs suppress',

Honestly, I think this is your best bet. We're doing this in Perian and it hasn't caused any problems. You might try experimenting with using the -dynamic linker flag either more or less (I'm a bit fuzzy on which would be better, based on skimming the man page for ld(1)).

> Which sounds a bit scary to me.

Reading the man page makes it sound a bit less scary.

> I know that the -mdynamic-no-pic was a 5% speed boost (according to Michael), so I'd like to leave it in, but I'm stymied.
>
> So, gcc gurus, what's my magic compiler option to fix this?

Scarcely a guru, but hopefully my 2¢ will be useful.

> Thanks!
> -Ryan Martell

Augie Fackler

> _______________________________________________
> ffmpeg-devel mailing list
> ffmpeg-devel at mplayerhq.hu
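For anyone hitting the same error, the workaround under discussion amounts to adding one flag to the link line. A sketch of the build command, with placeholder paths and output name (the rest of the original invocation is unchanged):

```shell
# Pass "-read_only_relocs suppress" through the compiler driver to ld.
# Paths and the bundle name below are placeholders, not from the thread.
gcc-4.0 -bundle -arch i386 \
    -Wl,-read_only_relocs,suppress \
    /usr/local/lib/libavcodec.a \
    -o MyPlugin.bundle
```

In Xcode the same flag can be added to the target's Other Linker Flags build setting.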
Hans wrote:
> We create filename/pseudos/backup, and that tells the archiver what to do.....

Instead of exposing the old semantics under a new interface, why not expose the new semantics under a new interface.

There exist plenty of programs that know the old Unix semantics. There don't exist many working programs that use the new semantics that you're adding.

I raise again the example of how Windows adapted to long filenames. Old DOS and FAT programs, including my Unix backups of today, see an 8.3 name space. Only code that knows the new magic sees the long names.

If given the choice of breaking much old, existing stuff, or some new, mostly not yet existing stuff, does it not make more sense to break what mostly doesn't exist yet?

One possible way to do this, of no doubt many:

 * Stealing a corner of the existing filename space for some magic names with the new semantics.
 * A new option on open(2), hence opendir(3), that lights up these magic names.
 * Doing any of the classic pathname calls with such a new magic name exposes the new semantics - such calls as: access, execve, mkdir, mknod, mount, readlink, rename, rmdir, stat, truncate, unlink.

This means essentially constructing a map between old and new, such that changes made in either view are sane and visible from the other.
Quartz scheduler
{
System.out.println("Hello World Quartz Scheduler: " + new Date...Quartz scheduler Hai I want to run a simple Helloworld quartz job...
" The requested resource (/Helloworld/) is not available". And this is my code for scheduler
Quartz Tutorial
Quartz Tutorial
In this Quartz Tutorial you will how to use Quartz Job scheduler in your java
applications. Quartz Job scheduler is so flexible that it can be used with your Job Scheduler - Subversion
Quartz Job Scheduler Dear Sir,
i have taken that toturial.we... the particular time comes the trigger is getting fired while we are printing the console... the database and genarating the report. we are getting the mail from the quartz framework tutorial
Quartz framework tutorial Hi,
I am trying to create the scheduler application. Can any one provide me the url of Quartz framework tutorials.
Thanks
Hi,
Check the examples at Quartz Tutorial page.
Thanks
Hello World Quartz Scheduler
Hello World Quartz Scheduler
...
Scheduler application with the help of Quartz framework. That will
display...-usable handles to Scheduler instances.
StdSchedulerFactory(): A Class
Download Quartz Job Scheduler
this tutorial the latest version of Quartz Scheduler is
1.6.0. You can download the latest... Download Quartz Job Scheduler
In this section we will download Quartz Job Scheduler from
Quartz Tutorial
How to learn Quartz?
the tutorial.
Thanks
Hi,
Please check the tutorial Hello World Quartz Scheduler.
Thanks...How to learn Quartz? Hi,
I have to make a program in Java
how to start quartz in server - IDE Questions
problem that when I execute quartz in local system I can run this schedule class..., that how can I run this scheduler class when I place this project in server...how to start quartz in server Hi,
Thank you very much, you made
Quartz Trigger - XML
friend,
Quartz Job scheduler is so flexible that it can be used with your standalone as well as enterprise web based applications. Quartz Job scheduler is used...Quartz Trigger how to write a quatz trigger to fire mails at every
Job scheduling with Quartz - Java Server Faces Questions
... to database. It works fine but when the Quartz scheduler fires a job it accquires... while initialization or while calling the job. Hi,How you are initializing the Quart scheduler?I think there some problem while initialization
Scheduler Shutdown Example
of any quartz
application then we needed two classes: one is scheduler class... the
scheduler in quartz application. As we know that the scheduler is a main
interface of a Quartz Scheduler it maintains the list of JobDetail and Trigger.
If we
Confuse about Quartz or Thread. - JSP-Servlet
in advance. Hi friend,
Quartz Scheduler :
Quartz is a full-featured...Confuse about Quartz or Thread. Hi,
Thanx for reply.
Is it make any difference using simple thread instead of Quartz for automatic upload file
Java Quartz Framework
Java Quartz Framework
Quartz is an open source job scheduler. It provides powerful...
running the job scheduling using quartz - IDE Questions
running the job scheduling using quartz I am using netbeans IDE...; Hi friend,
Here is more information about quartz.
i am sending link , you can learn very easy way.
Configuration, Resource Usage and StdSchedulerFactory
to the scheduler: jobs,
triggers, calendars, etc. The important step for Quartz... that
Quartz is running inside of - by providing JDBCJobStore the JNDI name... work of creating a
Quartz Scheduler instance based on the content of properties
Spring with scheduler
Spring with scheduler how quartz scheduler relates with spring.need example with spring and quartz scheduler
Implementing more than one Job Details and Triggers
as GMT format time.
The scheduler class is a main class of this quartz application...
In this quartz tutorial, we... scheduler. We know that the
scheduler is a main interface of quartz scheduler
Introduction to Quartz Scheduler
Introduction to Quartz Scheduler
Introduction to Quartz Scheduler
This introductory section... in
java applications. Here, you will learn how Quartz Job Scheduler helps you
Convert String to Class - JSP-Servlet
Convert String to Class Hi all,
I am using Quartz as a scheduler.... Reading job name, class name, day & time using xml file only.
Problem is while reading class name, retrieve as a string format but Quartz required in "Class" format
J2EE Tutorial - Running RMI Example
J2EE Tutorial - Running RMI Example
....*;
public class greeterimpl extends
PortableRemoteObject... java.util.*;
public class greeterclientservlet extends
Scheduler- cron expression - Java Beginners
Scheduler- cron expression hi there,
i have a query on scheduler cron expression functionality.
My requirement is to run the cron expression every... friend,
Read for more information:
Quartz trigger dropping automatically - Java Beginners
Quartz trigger dropping automatically In our application we....
As per our understanding quartz trigger will not get dropped off....
Thanks
getting classnotfound exception while running login application
getting classnotfound exception while running login application hi... to bean 'loginController' while setting
bean property 'urlMap' with key... is
org.springframework.beans.factory.CannotLoadBeanClassException: Cannot find class
error while running the applet - Java Beginners
error while running the applet import java.applet.Applet;
import java.awt.*;
import java.awt.event.*;
class MyFrame extends Frame
{
boolean...);
++num;
}
}
}
}
i have problem while running the code , error
JobStores
for Quartz Scheduler step is selecting the appropriate JobStore. You declare which... to produce your scheduler instance.
The JobStore is for behind-the-scenes use of Quartz... is managed by an application
server that Quartz is running inside
While running jsp
While running jsp I found this error when i run the client.jsp can anyone help me
javax.xml.ws.WebServiceException: Failed to access the WSDL at:. It failed with: http
Establish a Connection with MySQL Database
of program:
As we know a quartz application needs
two classes: first is scheduler... in the quartz
scheduler to the scheduleJob() method. By following the entire process we will implement the scheduler class. Now, we will require the job class
while playing youtube in iphone appliation, coming black screen in iphone project?
while playing youtube in iphone appliation, coming black screen in iphone project? while playing youtube in iphone appliation, coming black screen in iphone project
error while running a jsp page in netbeans
error while running a jsp page in netbeans this is error that come under column "java db processes" in netbeans
Exception in thread "main" java.lang.ExceptionInInitializerError
Im getting this error while running JPA project
Im getting this error while running JPA project Exception in thread "main" javax.persistence.PersistenceException: [PersistenceUnit: examplePersistenceUnit] Unable to configure EntityManagerFactory
Compiling and Running Java program from command line
Compiling and Running Java program from command line - Video tutorial... HelloWorld
Here is video tutorial of the Compiling and Running Java program from... prompt on windows computer.
In this tutorial I will teach you the process
Iterator Java While
The Java While loop Iterator is the top tested loop.
It is in two forms while(), do while()
While loop is mostly used with Iterator compared to for loop
Java While Loop Iterator Example
import java.util.ArrayList;
import
While Loop Statement in java 7
While Loop Statement in java 7
This tutorial, helps you to understand the concept of while loop in
java 7.
While Loop Statements :
While loop....
Example :
Here is Simple example of while loop.
package looping;
Problem/hibernate
While and do-while
While and do-while
...-oriented programming language that allows us to define
a class within another class, such class is called a nested class. Inner
classes can be either
Upload file on server automatically on specific time - JSP-Servlet
and Tutorials on Quartz Scheduler visit to :... & Tomcat?
If my java application not running, Is there any way using tomcat only to upload a file?
If java application running, which will be the best
Java developer desk
;
What
is Quartz?
Quartz is a fully...-alone application to the largest e-commerce
system. Quartz can be used to reate... components or EJBs).
Who
is using Quartz?
Quartz
The while and do
While and do-while
Lets try to find out what a while statement
does. In a simpler language, the while statement continually executes a block of statements while
Java Tutorial
.
This tutorial covers all the topics of Java Programming Language. In this
section you... on one platform and it is running over there then it
can be run on another..., class, const,
continue, default, do, double, else, enum, extends, final, finally
Tutorial
Tutorial Please give me a solution (running program) for the questions below:
1) Write a program for drawing/scribbling using a mouse. Draw by dragging with the left mouse button pressed. Create a button named Erase. When
Erron while
Erron while Hi,
i'm doing a project regarding xml. I want... error?
coding:
public class Test {
public static void main(String...;/h1><hr /> <div class=\"centered\"> <table><tr><
robot class
ability to write it's own code, while running.
It also facilitate the testing... in the programming.
where exactly robot class is useful.
why is it necessary... provide the class Robot. It gives the java program much power. It can perform all
Java error reached end of file while parsing
an error i.e java error reached
end of file while parsing as the class closing braces... Java error reached end of file while parsing... of file while parsing occurred when a programmer
do not close the class
applet running but no display - Applet
applet running but no display Hai,
Thanks for the post. I have applied the codebase url in the page and executed.
Now, when accessed the page...*;
public class appletParameter extends Applet {
private String
Error running webservice
Error running webservice Hi,
I am getting following error:
05/10... creating bean with name 'org.apache.cxf.wsdl.WSDLManager' defined in class path... class [org.apache.cxf.wsdl11.WSDLManagerImpl]: Constructor threw exception
Running problem with NoughtsAndCrossesGame in blank
Running problem with NoughtsAndCrossesGame in blank Hi i was having...*;
/**
*
* @author 000621812
*/
public class NoughtsAndCrossesGamev2...);
}
}
class NoughtsAndCrossesGameFrame extends JFrame
Running and testing application
Running And Testing Application
The complete database driven application... Object or by using
JUnit Framework. I have used Junit to test Action class.
To test you application using JUnit you need to extend the TestCase class
Using while loop to loop through a cursor
IF;
select id,name,class;
END WHILE cur1_loop;
CLOSE cur1;
END... Using while loop to loop through a cursor
....
Understand with Example
The Tutorial grasp you an example on 'Using
Why is my program running an infinite loop? Parallel Arrays Program
Why is my program running an infinite loop? Parallel Arrays Program ...;
public class parallel
{
public static void main (String[]args...;
System.out.println(id[i]);
System.out.println(gpa[i]);
}while (i<
Class
Class, Object and Methods
Class : Whatever we can see in this world all
the things... is termed as
a class. All the
objects are direct interacted with its class
Java Not running - Java Beginners
class Grader {
public Grader() {
double grades[] = {79, 70, 69...
Sakai Hi Friend,
Try the following code:
public class Grader
Java number calculation using while loop
Java number calculation using while loop
In this tutorial, you will learn how to use while loop to solve certain
number calculations. Here is an example... java.util.*;
public class NumberExample{
public static void main(String[]args
iPhone Quiz App Tutorial
iPhone Quiz App Tutorial
In this simple iPhone quiz application tutorial... in it.
Step 2
Now, we are going to create Class Actions and Class Outlets.... Name the class as "MainView" and add button actions in "
Beginners Java Tutorial
Beginners Java Tutorial
This tutorial will introduce you with the Java Programming language. This
tutorial is for beginners, who wants
Running and testing the example
Running And Testing The Example
Before running the application, you will have.... then
it will call the HelloWorld.action class. and ecute() method of the called and
the output...;); into to your action class
so that you may confirm that whether action
While and do-while
While and do-while
Lets try to find out what a while statement
does. In a simpler language, the while statement continually executes a block of statements while a particular
java - Java Server Faces Questions
Thanks...*;
public class SchedularTest {
private final Timer timer = new Timer();
int
J2EE Tutorial - Session Tracking Example
J2EE Tutorial - Session Tracking Example... java.util.*;
public class cart
{
Vector ...="cart1" scope="session" class="
Running and testing the example
Running And Testing The Example
An example of testing the HelloWorld...;
import org.apache.struts2.convention.annotation.Result;
public class... class HelloWorldActionTest extends TestCase {
String result;
String classResult
Memory leak in While loop
Memory leak in While loop How to manage a memory leak in While loop ?
make sure loop is not running infinitley...;make sure loop is not running infinitley
Building and Running Java 8 Support
8 for compiling and running your applications.
Check the tutorial for complete...Building and Running Java 8 Support As you know Java 8 is already... the video tutorial of Adding JDK 8 support in Eclipse.
After adding the JDK 8
Hibernate Quickly
new technologies are coming
fast. These days there is very high demand... Quick tutorial.
So, now you can learn Hibernate Quickly. Our Hibernate Quick guide
provides full running example to quick start your developement. You
The While keyword
using the keyword while.
public class Myclass {
while (expression...
The While keyword
While is a keyword defined in the java programming
language. Keywords
Jobs & Triggers
. While developing Quartz, we decided that it made sense to create a separation... with the Quartz scheduler.
You can placed jobs and triggers into 'groups' also.... While a class implementing the job interface that is the actual
'job', and you
reached end of file while parsing and while expected
reached end of file while parsing and while expected import java.io.*;
public class temperature{
public static void main(String []args)throws...);
}while(result=='Y');
System.out.println("Thank you for using
Server side Validation
Server Side Validation
It is very important to validate the data coming from...;
import java.util.List;
public class LoginModel implements Serializable... displayForm(){
return "input";
}
}
Then Write an Action class
Downloading and Viewing html source of a page i.e. running on the server
Downloading and Viewing html source of a page i.e. running on the
server... illustrates you the procedure of viewing
complete html code of a page i.e. running... of the program which views the html source
code of your given page running on server
Java Compilation and running error - RMI
Java Compilation and running error The following set of programs.... The following is the group of programs for Chat room. The ChatServer class have... java.rmi.server.*;
public class ChatClient extends JFrame implements
While loop Statement
While loop Statement How to print a table using while loop?
... class WhileDemo{
public static void main(String []args... +" = ");
while(y<=10)
{
int t = x
Building a Simple EJB Application Tutorial
Building a Simple EJB Application - A Tutorial
... enterprise class web applications using JAVA and J2EE technologies. He currently...)
Introduction
In
this tutorial we will create a simple session
While loop - Java Beginners
While loop Given a number, write a program using while loop.... Hi friend,
Code to solve the problem :
import java.io.*;
class...(br.readLine());
int n=num;
int rem=0;
int rev=0;
while(n!=0
Diff between Runnable Interface and Thread class while using threads
Diff between Runnable Interface and Thread class while using threads Diff between Runnable Interface and Thread class while using threads
Hi Friend,
Difference:
1)If you want to extend the Thread class
While Loop with JOptionPane
While Loop with JOptionPane Hello there,
why the output only... exploring Java.
import javax.swing.*;
class JOPtionPaneExample... int k = 5;
String name;
int age;
while(i < k
JDBC Tutorial - Writing first JDBC example and running in Eclipse
be installed and configured.
MySQL database should be running
... and running perfectly. Open the
Eclipse IDE and create a new project as explained... ;
Step 4: Create Java code and run example
Create a new Java class
Wei Zhou commented on CLOUDSTACK-9339:
--------------------------------------
Hi Dean,
I've applied the following patch to our internal version (based on 4.7.1):
{code}
diff --git a/systemvm/patches/debian/config/opt/cloud/bin/cs/CsAddress.py b/systemvm/patches/debian/config/opt/cloud/bin/cs/CsAddress.py
index b4ed263..b0e2429 100755
--- a/systemvm/patches/debian/config/opt/cloud/bin/cs/CsAddress.py
+++ b/systemvm/patches/debian/config/opt/cloud/bin/cs/CsAddress.py
@@ -27,7 +27,6 @@ from CsRoute import CsRoute
from CsRule import CsRule
VRRP_TYPES = ['guest']
-VPC_PUBLIC_INTERFACE = ['eth1']
class CsAddress(CsDataBag):
@@ -323,7 +322,7 @@ class CsIP:
# If redundant only bring up public interfaces that are not eth1.
# Reason: private gateways are public interfaces.
# master.py and keepalived will deal with eth1 public interface.
- if self.cl.is_redundant() and (not self.is_public() or (self.config.is_vpc()
and self.getDevice() not in VPC_PUBLIC_INTERFACE)):
+ if self.cl.is_redundant() and not self.is_public():
CsHelper.execute(cmd2)
# if not redundant bring everything up
if not self.cl.is_redundant():
diff --git a/systemvm/patches/debian/config/opt/cloud/bin/cs/CsRedundant.py b/systemvm/patches/debian/config/opt/cloud/bin/cs/CsRedundant.py
index 385204c..b6e3c7d 100755
--- a/systemvm/patches/debian/config/opt/cloud/bin/cs/CsRedundant.py
+++ b/systemvm/patches/debian/config/opt/cloud/bin/cs/CsRedundant.py
@@ -41,6 +41,8 @@ from CsRoute import CsRoute
import socket
from time import sleep
+VPC_PUBLIC_INTERFACE = ['eth1']
+NETWORK_PUBLIC_INTERFACE = ['eth2']
class CsRedundant(object):
@@ -193,6 +195,8 @@ class CsRedundant(object):
if not proc.find() or keepalived_conf.is_changed() or force_keepalived_restart:
keepalived_conf.commit()
CsHelper.service("keepalived", "restart")
+ elif self.cl.is_master(): # Bring public interfaces up
+ self.bring_public_interfaces_up()
def release_lock(self):
try:
@@ -290,6 +294,27 @@ class CsRedundant(object):
self.set_lock()
logging.debug("Setting router to master")
+ self.bring_public_interfaces_up()
+
+ def bring_public_interfaces_up(self):
dev = ''
ips = [ip for ip in self.address.get_ips() if ip.is_public()]
route = CsRoute()
@@ -298,38 +323,27 @@ class CsRedundant(object):
continue
dev = ip.get_device()
logging.info("Will proceed configuring device ==> %s" % dev)
+ cmd1 = "ip link show %s | grep 'state UP'" % dev
cmd2 = "ip link set %s up" % dev
if CsDevice(dev, self.config).waitfordevice():
+ devUp = CsHelper.execute(cmd1)
+ if devUp:
+ continue
CsHelper.execute(cmd2)
logging.info("Bringing public interface %s up" % dev)
try:
gateway = ip.get_gateway()
logging.info("Adding gateway ==> %s to device ==> %s" % (gateway,
dev))
- route.add_defaultroute(gateway)
+ if self.config.is_vpc() and dev in VPC_PUBLIC_INTERFACE:
+ route.add_defaultroute(gateway)
+ elif not self.config.is_vpc() and dev in NETWORK_PUBLIC_INTERFACE:
+ route.add_defaultroute(gateway)
except:
logging.error("ERROR getting gateway from device %s" % dev)
else:
logging.error("Device %s was not ready could not bring it up" % dev)
- # _collect_ignore_ips(self):
"""
This returns a list of ip objects that should be ignored
{code}
> Virtual Routers don't handle Multiple Public Interfaces
> -------------------------------------------------------
>
> Key: CLOUDSTACK-9339
> URL:
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public(Anyone can view this level - this is the default.)
> Components: Virtual Router
> Affects Versions: 4.8.0
> Reporter: dsclose
> Labels: firewall, nat, router
>
> There are a series of issues with the way Virtual Routers manage multiple public interfaces.
These are more pronounced on redundant virtual router setups. I have not attempted to examine
these issues in a VPC context. Outside of a VPC context, however, the following is expected
behaviour:
> * eth0 connects the router to the guest network.
> * In RvR setups, keepalived manages the guests' gateway IP as a virtual IP on eth0.
> * eth1 provides a local link to the hypervisor, allowing Cloudstack to issue commands
to the router.
> * eth2 is the routers public interface. By default, a single public IP will be setup
on eth2 along with the necessary iptables and ip rules to source-NAT guest traffic to that
public IP.
> * When a public IP address is assigned to the router that is on a separate subnet to
the source-NAT IP, a new interface is configured, such as eth3, and the IP is assigned to
that interface.
> * This can result in eth3, eth4, eth5, etc. being created depending upon how many public
subnets the router has to work with.
> The above all works. The following, however, is currently not working:
> * Public interfaces should be set to DOWN on backup redundant routers. The master.py
script is responsible for setting public interfaces to UP during a keepalived transition.
Currently the check_is_up method of the CsIP class brings all interfaces UP on both RvR. A
proposed fix for this has been discussed on the mailing list. That fix will leave public interfaces
DOWN on RvR allowing the keepalived transition to control the state of public interfaces.
Issue #1413 includes a commit that contradicts the proposed fix so it is unclear what the
current state of the code should be.
> * Newly created interfaces should be set to UP on master redundant routers. Assuming
public interfaces should be default be DOWN on an RvR we need to accommodate the fact that,
as interfaces are created, no keepalived transition occurs. This means that assigning an IP
from a new public subnet will have no effect (as the interface will be down) until the network
is restarted with a "clean up."
> * Public interfaces other than eth2 do not forward traffic. There are two iptables rules
in the FORWARD chain of the filter table created for eth2 that allow forwarding between eth2
and eth0. Equivalent rules are not created for other public interfaces so forwarded traffic
is dropped.
> * Outbound traffic from guest VMs does not honour static-NAT rules. Instead, outbound
traffic is source-NAT'd to the networks default source-NAT IP. New connections from guests
that are destined for public networks are processed like so:
> 1. Traffic is matched against the following rule in the mangle table that marks the connection
with a 0x0:
> *mangle
> -A PREROUTING -i eth0 -m state --state NEW -j CONNMARK --set-xmark 0x0/0xffffffff
> 2. There are no "ip rule" statements that match a connection marked 0x0, so the kernel
routes the connection via the default gateway. That gateway is on source-NAT subnet, so the
connection is routed out of eth2.
> 3. The following iptables rules are then matched in the filter table:
> *filter
> -A FORWARD -i eth0 -o eth2 -j FW_OUTBOUND
> -A FW_OUTBOUND -j FW_EGRESS_RULES
> -A FW_EGRESS_RULES -j ACCEPT
> 4. Finally, the following rule is matched from the nat table, where the IP address is
the source-NAT IP:
> *nat
> -A POSTROUTING -o eth2 -j SNAT --to-source 123.4.5.67
>
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
Lightweight interface for SSRS reports to python
Project description
SSPYRS
The SSPYRS (SQL Server Python Reporting Services) library is a lightweight interface for interacting with and retrieving data from SSRS reports. The core functionality of the library is straightforward. Perform authentication to an SSRS server, initialize a session, and then retrieve the report data from that session. Report data can be interacted with via raw XML, but has predefined methods to organize it into Pandas DataFrame objects.
The SSPYRS library works primarily from the XML export functionality of SSRS. However, neither XML nor CSV exports are provided in the free Express versions of SQL Server (they are available within the currently free Developer editions of SQL Server 2017). The library does include direct-download functions for the Excel export included in the Express version; however, it will not read the data directly into memory.
SSPYRS has been validated to work with SSRS 2008 R2, SSRS 2014, SSRS 2016, SSRS 2017, and PowerBI Server 2017 under most server settings.
To install SSPYRS, execute in console:
pip install sspyrs
Usage and Documentation
Report Objects
A report object can be initialized as follows:
import sspyrs

myrpt = sspyrs.report('', myusername, password)
If passing parameters to the report, they can be passed as a dictionary as an argument called ‘parameters’. Note that parameters must use the actual parameter names designated within the rdl file. Parameters with defaults do not need to be specified unless desired. An example of valid parameters would be:
params_multi = {'Param_Format': ['CSV', 'XML'], 'Param_Status': 'rsSuccess'}
Retrieving Data
Raw XML Data
To retrieve the raw XML from the report, use the rawdata() method:
rpt_xml = myrpt.rawdata()
The resulting variable will be a dictionary with all report data elements. This will include some report metadata in addition to the XML formatted data elements from the report. Note that some of the XML tags and headings may appear differently than their corresponding report attributes. This is due to the fact that the XML does not include any XML object labels, only their names, which must be unique across the entire .rdl file, not just within an element. For example, in a report with 2 tables which share column names between them, the first table or data object will have the normal column names appended with an “@” (e.g. “@ID”,”@Val”), while the second table will have column names like “@ID2”, “@Val2”. The tabledata() method strips the “@” and numbers out, but the rawdata() method leaves them be.
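The suffix handling described above can be sketched in plain Python. This is a hypothetical helper, not part of the sspyrs API, showing how the "@" prefix and a disambiguating numeric suffix could be stripped:

```python
import re

def normalize_columns(columns):
    """Strip the leading '@' and any trailing disambiguating digits that
    SSRS appends to repeated column names in its XML export.
    Naive: a real column name that legitimately ends in digits
    would also be stripped."""
    return [re.sub(r'^@', '', re.sub(r'\d+$', '', c)) for c in columns]

# Columns as they appear in the raw XML for two tables sharing names:
print(normalize_columns(['@ID', '@Val']))    # ['ID', 'Val']  (first table)
print(normalize_columns(['@ID2', '@Val2']))  # ['ID', 'Val']  (second table)
```

Both tables end up with the same clean column names, which is what tabledata() delivers.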
Tabular Data
To quickly organize the raw XML into a tabular format, use the tabledata() method:
rpt_tables = myrpt.tabledata()
The resulting variable will be a dictionary of Pandas DataFrames, whose keys in the dictionary correspond to the data object names within the .rdl file. This method also attempts some limited data parsing for number and date columns.
Exporting Data
Default Download
When working with versions that allow XML exports, the report data can be directly exported to a few convenient formats using the download() method:
rpt_downresults = myrpt.download(type='CSV')
The resulting variable lists out the data objects which were downloaded and written to files. Currently available exports include CSV, JSON, and Excel. The default download file type is CSV. For CSV and JSON, a file will be created for each data object, named by its dictionary key from the tabledata() results. For Excel, a single file with multiple tabs is created.
Direct Download
When working with versions of SSRS which do not allow XML data exports (typically because the feature is not included in express editions), the data can be exported directly to any of the available export types (on express editions this usually includes Excel, Word, and PDF) using the directdown() method. The direct download can be called:
rpt.directdown('myfile', 'CSV')
rpt.directdown('myfile', 'Excel')
rpt.directdown('myfile', 'PPTX')
These functions will create a file called ‘myfile.csv’, or whatever extension is specified, in the current working directory. The directdown method supports all available SSRS export formats as of SSRS 2017. If there is a report containing complicated formatting to the point that the built in rawdata() and tabledata() methods are impractical, using the directdown() method and parsing the results is a viable alternative.
Also, worth noting is the Excel export from directdown() preserves SSRS formatting and reads natively into Python via pandas. If preserving data formats is important, exporting to Excel via directdown() and reading the resulting file back into Python is the preferred solution.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/sspyrs/
#include <wx/htmllbox.h>
wxSimpleHtmlListBox is an implementation of wxHtmlListBox which shows HTML content in the listbox rows.
Unlike wxHtmlListBox, this is not an abstract class and thus it has the advantage that you can use it without deriving your own class from it. However, it also has the disadvantage that this is not a virtual control and thus it's not well-suited for those cases where you need to show a huge number of items: every time you add/insert a string, it will be stored internally and thus will take memory.
The interface exposed by wxSimpleHtmlListBox fully implements the wxControlWithItems interface, thus you should refer to wxControlWithItems's documentation for the API reference for adding/removing/retrieving items in the listbox. Also note that the wxVListBox::SetItemCount function is protected in wxSimpleHtmlListBox's context, so you cannot call it directly; wxSimpleHtmlListBox will do it for you.
Note: in case you need to append a lot of items to the control at once, make sure to use the Append(const wxArrayString&) function.
Thus the only difference between a wxListBox and a wxSimpleHtmlListBox is that the latter stores strings which can contain HTML fragments (see the list of tags supported by wxHTML).
Note that the HTML strings you pass to wxSimpleHtmlListBox should not contain the <html> or <body> tags.
This class supports the following styles:
A wxSimpleHtmlListBox emits the same events used by wxListBox and by wxHtmlListBox.
Event macros for events emitted by this class:
wxEVT_LISTBOX event, when an item on the list is selected. See wxCommandEvent.
wxEVT_LISTBOX_DCLICK event, when the listbox is double-clicked. See wxCommandEvent.
Constructor, creating and showing the HTML list box.
Constructor, creating and showing the HTML list box.
Frees the array of stored items and relative client data.
https://docs.wxwidgets.org/trunk/classwx_simple_html_list_box.html
OnsenUI 2 React with Meteor.js Demo App
Hi guys,
I started using the OnsenUI 2 React version in Meteor.js after seeing @Fran-Diox's OnsenUI Meteor ToDo App repo.
I created a new project and started using all the OnsenUI 2 components in Meteor.js 1.3+, trying to make a complete demo app based on the official demos. But I'm very new to React, OnsenUI and ES6, so I created this issue to ask my questions along the way; when it's complete I can introduce it in the News & Announcements category of the forum for Meteor.js developers. I think it's very interesting for other Meteor mobile developers like me. :wink:
My project address is:
This is app screen shot:
And you can contribute with:
- Answer my question in this thread about problems in this app. :wink:
- Report existing bugs and issues.
- Add some documentation or comments.
- Submit pull request.
My first issue (Issue #1) is that I don't know why, but my Floating action button component's click event is not working. You can see it here:
What is the problem? I did exactly the same thing as
I think it is because renderFixed is not bound to the instance.
Could you try to replace "renderFixed() {" with "renderFixed = () => {"?
@dagatsoin First, thanks for your attention. I converted my renderFixed based on your note like this:
renderFixed = () => {
  return (
    <Fab ripple
         style={{backgroundColor: ons.platform.isIOS() ? '#4282cc' : null}}
         onClick={this.handleClick}>
      <Icon icon='md-face' />
    </Fab>
  );
};
but It return below error on server console:
imports/ui/demo/FAB/Index.jsx:20:4: /imports/ui/demo/FAB/Index.jsx: Missing class properties transform.
Line 20 is the renderFixed definition.
@cyclops24 Not sure if that syntax is correct. Change lines #41 and #42 with the following.
Try this in ES5:
renderToolbar={this.renderToolbar.bind(this)} renderFixed={this.renderFixed.bind(this)}
Or ES6:
renderToolbar={() => this.renderToolbar()} renderFixed={() => this.renderFixed()}
@Fran-Diox Thanks man. Yehhhh :+1: it worked. So now, with the arrow function, I don't need the constructor and bind like below? Is that true?
constructor(props) {
  super(props);
  this.renderToolbar = this.renderToolbar.bind(this);
  this.handleClick = this.handleClick.bind(this);
}
@cyclops24 I think that’s not doing anything at all, you can safely remove it :sweat_smile:
Bindings in JavaScript are a bit tricky sometimes.
@Fran-Diox Thanks man. I know "this" and binding will finally kill me one day. :wink:
I fixed that and added your name to the contributors as a small thanks.
I find this article useful about arrow function vs bind: (maybe useful for others like me)
My next issue, Issue #2, is that I don't know why, but when I add "import demoIconIndex from './demo/Icon/Index.jsx';" to my TOC component, my app returns "Error: Cannot find module './demo/Icon/Index.jsx'" and breaks entirely, even though the file exists at that path.
I also double check anything.
Guys, I did a lot of tests and found that if the word "Icon" exists in the path the app breaks, but if I rename the file to "Icons" and change my import to "./demo/Icons/Index.jsx" everything works well. Is "Icon" a special or reserved word in ES6 or OnsenUI 2? :hushed:
@cyclops24 I don’t think so :sweat_smile: No idea why it doesn’t work with
Icon. By the way, I just released a blog post about Meteor + Onsen UI. Let’s see if more devs join us :smile:
You can comment if you want and add a link to your app!
https://community.onsen.io/topic/638/onsenui-2-react-with-meteor-js-demo-app
Jul 31, 2011 10:05 AM|JoeFletch|LINK
I am creating an ASP.NET application to scrub Legacy data. Part of the functionality requires generic fields (labels and button text) to be in the native language, so I plan on using the standard localization functionality of ASP.NET (I have not decided if I am going to use a drop-down or automatic detection yet). The other aspect of the application allows different Legacy Sources to be loaded into generic fields that we have identified in the database, but we want to label those fields based on the Legacy Source selected within the application. I believe a customized localization file can achieve this, based on what the user selects from a drop-down list. Is it possible to utilize two localization resource files on a single page in an ASP.NET application?
-JoeFletch
Member
450 Points
Aug 04, 2011 07:00 AM|JoeFletch|LINK
Within my application I have different labels that need to be changed based on either a Language or a Source selected. Both selections will need to be made on a particular page and they will drive labels to be populated.
When a Language is selected, I want the following objects to be updated...
Aug 04, 2011 07:05 AM|JoeFletch|LINK
Here is a crude picture of what I want.
-JoeFletch
Aug 26, 2011 08:18 AM|JoeFletch|LINK
I'm wondering if localization / resource files are not the best option here. Would maybe caching from either an XML file or from SQL server work better?
-JoeFletch
Aug 26, 2011 12:39 PM|bas bloemink|LINK
Hi JoeFletch,
As far as I know you can not use multiple (global or non-global) resource files at the same time at the same page.
As you wrote in your last post, best seems to be caching both your resources from XML or DB. That is how we do it.
Good luck!
Aug 26, 2011 01:43 PM|JoeFletch|LINK
bas bloemink: "caching both your resources from XML or DB"
Does caching to an array at page load seem reasonable? I think that is what I am going to try now. My only concern is multiple users with different Languages and Sources selected. Is the caching on the client side or the server side? I can't see it being on the client side; then how would the aspx page be generated? If it is on the server, then does ASP.NET just cache specific to a user?
-JoeFletch
Aug 26, 2011 02:45 PM|bas bloemink|LINK
Hi Joe,
First about caching:
- Caching is always server-side
- All users share the same cache, there is no user-specific cache
I will try to explain how we deal with our resources.
We have two tables
- tblLanguage
- LCID, varchar
- Description, varchar
- tblTranslation
- ID, varchar
- nl-NL, varchar
- en-US, varchar
Where each tblLanguage.LCID references its related column in tblTranslation.
In code I have defined a Language class:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Text;
using MySql.Data.MySqlClient;
using System.Web.Caching;

/// <summary>
/// Summary description for Language
/// </summary>
public class Language
{
    private const string CACHE_TRANSLATIONS = "Cache_Translations";

    public string LCID { get; set; }
    public string sDescription { get; set; }

    // We use a Dictionary instead of an ObjectList as these are faster
    public Dictionary<string, string> dicTranslations;

    public Language()
    {
        // Get the user's preferred Language
        LCID = (HttpContext.Current.Profile as ProfileCommon).Language;

        // Determine if this particular TranslationsDictionary is already in Cache
        if (HttpContext.Current.Cache[CACHE_TRANSLATIONS + LCID] != null)
        {
            // Get the required TranslationsDictionary from Cache
            dicTranslations = (Dictionary<string, string>)HttpContext.Current.Cache[CACHE_TRANSLATIONS + LCID];
        }
        else
        {
            // Required TranslationsDictionary not yet in Cache:
            // initialize the Translations-Dictionary
            dicTranslations = new Dictionary<string, string>();

            // Get all translations from the DataBase
            GetTranslationsFromDB(LCID);

            // Insert the Dictionary into Cache
            HttpContext.Current.Cache.Insert(CACHE_TRANSLATIONS + LCID, dicTranslations, null,
                DateTime.Now.AddMinutes(30), Cache.NoSlidingExpiration);
        }
    }

    private void GetTranslationsFromDB(string sALCID)
    {
        StringBuilder sb = new StringBuilder();
        sb.Append("SELECT * ");
        sb.Append(" FROM tbltranslation");
        string sSql = sb.ToString();

        using (MySqlConnection conn = DBResources.SportLogConnection())
        {
            MySqlCommand com = new MySqlCommand(sSql, conn);
            com.Connection.Open();
            MySqlDataReader dr = com.ExecuteReader();
            while (dr.Read())
            {
                string sTranslationID = dr["id"].ToString();
                string sValue = (dr[sALCID] is DBNull ? "" : dr[sALCID].ToString());

                // Add each translation to the dictionary
                dicTranslations.Add(sTranslationID, sValue);
            }
        }
    }

    public static List<Language> GetAllLanguages()
    {
        List<Language> lstLanguage = new List<Language>();

        StringBuilder sb = new StringBuilder();
        sb.Append("SELECT * ");
        sb.Append(" FROM tbllanguage ");
        sb.Append(" ORDER BY description");
        string sSql = sb.ToString();

        using (MySqlConnection conn = DBResources.SportLogConnection())
        {
            MySqlCommand com = new MySqlCommand(sSql, conn);
            com.Connection.Open();
            MySqlDataReader dr = com.ExecuteReader();
            while (dr.Read())
            {
                Language lan = new Language();
                lan.LCID = dr["lcid"].ToString();
                lan.sDescription = dr["description"].ToString();
                lstLanguage.Add(lan);
            }
        }
        return lstLanguage;
    }

    public string Translation(string sATranslationID)
    {
        string sResult;
        dicTranslations.TryGetValue(sATranslationID, out sResult);
        if ((sResult == null) || (sResult == ""))
            sResult = sATranslationID;
        return sResult;
    }
}
Note: as you can see this class only uses 1 language/resource per user at a time. You have to change it so it will have both your Language-resource and Source-resource. (You can either make a second class for the second resource type or expand the above.)
In a page you can use this Language class as follows:
private void LoadLanguage()
{
    // Create Language-object
    language = new Language();
    lblDisplayLanguage.Text = language.Translation("lblDisplayLanguage.Text");
}
As you can see, the ID of each translation-entry in tblTranslation corresponds with the [ControlID].[PropertyID]
In the above example, each language will be available in chache after its first use and stay there for the given amount of time. So if you have only two languages, there are only two objects in cache. These two objects are available for all your users.
Aug 26, 2011 03:09 PM|JoeFletch|LINK
Excellent. Thanks for the detailed response and your suggestions. I will try it and get back to the thread.
-JoeFletch
Sep 06, 2011 08:25 AM|JoeFletch|LINK
bas bloemink
Hy JoeFletch,
How did my example worked out for you? Could you please ‘mark as answer’ if it was helpful?
Thanks!
Sorry, its been a rough couple of weeks (hurricane damage, household sicknesses, etc). I hope to get to it today.
-JoeFletch
Sep 06, 2011 03:05 PM|JoeFletch|LINK
bas bloemink
Hy JoeFletch,
How did my example worked out for you? Could you please ‘mark as answer’ if it was helpful?
Thanks!
Thanks! Your post was quite helpful. I guess I really struggled with how the cache worked. Combining a translation/source ID with the actual Source/Language Key was what cleared it up for me. For example, 1EN relates to ID 1 in the database with a Language Key of EN. I can also have 1NL in the cache, and any user can retrieve these cached values. I did change the code quite a bit though.
Public Function GetText(ByVal ID As Integer, ByVal Key As String) As String
    If Cache(ID.ToString & Key) Is Nothing Then
        Cache.Insert(ID.ToString & Key, LanguageSourceTextBLL.GetTextDataByIDLangaugeSource(ID, Key), Nothing, DateTime.Now.AddMinutes(3), TimeSpan.Zero)
        PageStatusLabel.Text = PageStatusLabel.Text & "<br />Pulled " & ID.ToString & Key & " from the database and placed it in the cache."
    Else
        PageStatusLabel.Text = PageStatusLabel.Text & "<br />Pulled " & ID.ToString & Key & " from cache."
    End If
    Return Cache(ID.ToString & Key)
End Function
I will be pulling out the PageStateLabel.Text entries and adding time to the cache expiration.
But I still have one problem. I can not call this function on an aspx page within an asp.net control. Examples...
<asp:Label ID="Label1" runat="server" Text='<%= GetText(1,"EN") %>'></asp:Label>
<asp:Label ID="Label2" runat="server" Text='<%= GetText(1,"EN") %>'></asp:Label>
But the following works...
<% =GetText(1,"EN") %>
-JoeFletch
Sep 06, 2011 03:58 PM|bas bloemink|LINK
Let me first say, I am not a VB guy so I do not know if all the notations are correct.
But what I do see is the following:
Return Cache(ID.ToString & Key)
Every item coming from Cache is of type Object. Your method, though, returns a String.
When you retrieve an item from Cache, convert it to String first before returning it:
Return Cache(ID.ToString & Key).ToString()
Something else, before adding VB code to your markup, try retrieving entries from cache in code-behind first.
Dim sEntry As String
sEntry = Cache(ID.ToString() & Key).ToString()
Label1.Text = sEntry
This will give you the possibility to step through all lines.
Sep 06, 2011 04:30 PM|JoeFletch|LINK
bas bloemink: "Return Cache(ID.ToString & Key).ToString()"
I have made this update. Thanks.
bas bloemink: "This will give you the possibility to step through all lines."
That is why I use a status label, so see each step along the way. The strings are being retrieved and stored in the cache. I am just having some trouble placing the retrieval code into the asp.net controls.
-JoeFletch
13 replies
Last post Sep 06, 2011 04:30 PM by JoeFletch
https://forums.asp.net/t/1705770.aspx?Custom+Use+of+Localization+Functionality
Not sure where to put this as it's not obfuscated, but this is my first attempt at writing a program that parses in two different languages. Specifically, this Perl program compiles as C code under Cygwin with:
gcc hel2.c -o hel2 -Wall -ansi
I'm embarrassed by all of the cheap preprocessor instructions in there. It seems like a cop-out.
#include <stdio.h>
#ifdef _FSTDIO
#define $i i
#define sub
#define main int i; int main
sub main() {
#else
sub main();
#endif
for ($i=0;$i<5;$i++) {
    printf("Just another Perl hacker,\n");
}
return(0);
}
#ifndef _FSTDIO
main();
#endif
Okay, not very good. Shoot me :) Since I did specify -ansi on the command line of the C compiler, I expect that this will work on any C compiler.
Join the Perlmonks Setiathome Group or just click on the the link and check out our stats.
https://www.perlmonks.org/bare/?node_id=133308
Welcome to the Parallax Discussion Forums, sign-up to participate.
I keep hearing this refrain about "poor programming" nowadays:
int main (int argc, char* argv[])
{
printf("Hello World!");
return (0);
}
WHAT ON EARTH???
Heater. wrote: »
RS_Jim,.
Personally, as a beginner I would advise ignoring all talk or branching, merging, rebasing etc. Why?
Because, an important thing to realize is that the repo on your machine, and the one on github are exactly equivalent. There is no "master".
That means that when you hack code on your machine and commit it, it is a branch from github or wherever it came from. Until you push your changes.
If you find you have messed up your branch, you can always just delete the whole directory and "git clone" it back again.
And, well, all that branching, merging, rebasing stuff is complex. I almost never need it.
Heater. wrote: »
Yes it is.
My simple idea, when introducing beginners to git, is to not talk about branching and merging, add --patch, rebasing and so on. That just confuses everyone. Those features are useful when you need them, but most of the time, on a one-man project or with a few collaborators on GitHub, they're not needed.
Heater. wrote: »
Tor,
I find what you are suggesting with "add --patch" somewhat disturbing. Because:
1) The code you are committing is not actually the code you tested. It's not uncommon that bugs get masked by the introduction of printfs and other debug checks.
But better yet is to remember to just "add" and "commit" for every little fix.
Then you don't have to mess around with "add --patch".
msrobots wrote: »
...just need to write it down. Still independent of the language to use, mostly.
Genetix wrote: »
Just like in writing for something that is short and simple a formal outline isn't required though highly recommended..
Heater. wrote: »
No I didn't. I'm too old to not loath and detest the Smurfs.
That about sums it up.
Last night I did a search of all bmp and jpg files on a particular system, and I was amazed at the results. Photos and images I have never seen before. Resources included in exe files are another great source of bloat.
Just now I have a bunch of apps open and 20 odd tabs in my browser. Images and graphics abound. It's all using 50% of the 8GB of RAM on this machine.
Given that I like to have all that, and given that such features take space, and given that the whole deal cost half the price of my CP/M machine maxed out with WordStar in 1981, what's the problem?
There is one thing I run here that I do call bloat. The IntelliJ IDE. That thing takes two minutes to start up and get itself into a usable state. Then consumes 25% of my RAM.
God knows why. Except, well, it written in Java.
This was so intense that I forgot one thing... Get it pushed up to BitBucket. Or at least backed up somewhere.
This morning I thought I'd better do that before my computer dies or whatever other disaster.
But no, I had to do one more little thing first....
POOF, I inadvertently deleted the whole directory!
So today I had to start over, which was annoying but not so bad as the whole thing was still fresh in my mind.
On the bright side, the new recreation is better than the original approach.
Some well known software engineer said "Be prepared to throw away the first version"
He had more, e.g. Fred Brooks' law: "Adding manpower to a late software project makes it later."
Is there an Auto Save or Auto Backup option?
I can't remember which CAD program it was but it would save to a temporary file each time you made a change;
It was neat because if the program crashed you wouldn't lose everything.
I once had SolidWorks lock-up and I lost hours of work.
I remember using Lynx but I've never seen DOOM in text-mode.
Whatever the basic Dropbox subscription is, I have it. Delete a sync'd file on your system and you delete it from Dropbox...BUT Dropbox keeps a history where you can go back and retrieve previous saves. I don't know if there is a limit, probably depends on available storage space.
Saved my posterior, a while ago.
I almost had to buy a copy of "The Mythical Man-Month" for the bosses of every new company I worked for.
Said bosses get a bit out of shape when I explain that their project proposal may well require 100,000 lines of code. That historically a software engineer produces 10 lines of code per day, as Fred pointed out, and so the whole project could take about 40 man years.
So no, I won't have a demo ready by next week.
And, by the way, you should plan to throw that first version away!
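The back-of-the-envelope arithmetic behind that estimate, assuming roughly 250 working days per year:

```python
lines_of_code = 100_000
lines_per_day = 10        # Brooks' historical productivity figure
workdays_per_year = 250   # assumption: ~5-day weeks minus holidays

days = lines_of_code / lines_per_day
years = days / workdays_per_year
print(f"{days:.0f} days = {years:.0f} man-years")  # 10000 days = 40 man-years
```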
(After post-psychological-shock-deep-breaths)
This looks like a pixelated 1st person shooter game that was video shopped into the command shell video.
It's the game "DOOM". Probably the first majorly successful 3D first person shooter and one of the most famous games in history.
No need for it to be video shopped, there have been a few cases of people modifying DOOM to render it's graphics as text instead of the usual pixels.
Years ago I saw one that rendered to plain old black and white text characters. Can't find it now.
Did you play Barney Doom or Castle Smurfenstein?
I did once hack some resources in Quake to put the faces of a few of my friends on the walls in odd places. Said friends were awestruck to find themselves in the game when they came over to play.
I was never much of a gamer. The last straw was when, after many hours of play, I scored a 100,000, the highest possible, in Starglider on my Atari ST520. At which point it ranked me as "Cheat". I was so pissed off I have pretty much never played any games since.
Thanks Heater. Your simple instructions just made it easier for me to understand git.
Be warned that git has a billion other features. Which is great when you need them. But it does lead to a billion blogs, posts, etc. around the net that really confuse the issue for a beginner.
For one guy hacking on a project it's quite enough to get by without all the branching, merging, rebasing talk.
But here is a neat trick:
Let's say you hack on file A to fix a bug X.
Then you hack on file B to fix bug Y. These changes are not otherwise related.
Now you want to commit those changes.
A good idea is to do this part by part:
Do a git add of file A and then commit with a comment like "Fixed bug X"
Then do a git add of file B and commit with a comment like "Fixed bug Y"
Now in your git log you have a clear record of what happened. As opposed to the confusion in the log that would happen if you added both files and committed with a comment like "Fixed bug X and Y".
Or imagine you have changed a dozen things in a week and then do a single commit. Then your log entry would say "Fixed X, Y, Z, reformatted this and that added feature P,Q". Impossible to disentangle later if you need to.
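The one-commit-per-fix workflow above, written out as plain git commands (the temporary repository, file names, and commit messages are illustrative):

```shell
set -e
# Set up a throwaway repo so the example is self-contained
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo

# Fix bug X in file A, then commit just that change
echo "fix for X" > A
git add A
git commit -q -m "Fixed bug X"

# Fix bug Y in file B, then commit that separately
echo "fix for Y" > B
git add B
git commit -q -m "Fixed bug Y"

# The log now shows one clear entry per fix
git log --oneline
```

Each commit can later be inspected, reverted, or cherry-picked on its own, which is the whole point of keeping them separate.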
I find what you are suggesting with "add --patch" somewhat disturbing. Because:
1) The code you are committing is not actually the code you tested. It's not uncommon that bugs get masked by the introduction of printfs and other debug checks.
2) If I were to push that to github or whatever from one place and then pull it to a machine in other place, as I often do when working at home, office, and elsewhere, then I don't have all that nice printf/debug/test code in place. Nor would anyone else collaborating on that code.
3) If you have test harnesses, unit tests, or whatever in place then they should be part of the repository. Then the whole thing can then be cloned anywhere and the tests run.
Using "add --patch" as you suggest would not fit the way I work.
Having said that "add --patch" is very useful. Let's say that in one day you have made three changes to the same file to fix three bugs. Well, then you can add and commit the whole thing. In which case the commit comment should say "Fixed bugs A, B, C". But better is to add and commit, one at at time, the three changes (patches), with appropriate commit comments. Then the commit history explains what has been done more clearly. And if need be the changes can be rolled back, one by one.
But better yet is to remember to just "add" and "commit" for every little fix. Then you don't have to mess around with "add --patch".
My simple idea, when introducing beginners to git, is to not talk about branching and merging, add --patch, rebasing and so on. That just confuses everyone. Those features are useful when you need them, but most of the time, on a one-man project or with a few collaborators on GitHub, they're not needed.
I am with you Heater - little bites at a time.
[..]
I really recommend that you try to work with --patch in practice. Very few that do will go back to wholesale commits - there are many uses, depending on the individual. More than I've mentioned.
(one is that when I edit a file, I often end up with unintentional whitespace changes - an extra space on an otherwise empty line, or two blank lines where I want a single one. Instead of trying to edit the file to remove any whitespace changes that weren't intentional, I use add --patch to add just the real diff. After that I end up with a file where 'git diff' only shows whitespace changes. At that point I can do 'git checkout -f' (or -f -- filename) and get the cleaned-up file back, without the unintentional whitespace changes. That actually helps a lot for work stuff - we always review changes by others, and the cleaner the commit is the better.)
Well, yes, just don't end up with CVS - which really was just a way of doing snapshot backups. It wasn't version control. All I can say is - try it. It's not the same thing at all.
It's just that I like to keep things simple. Most of what I work on only has a master branch, it only moves forwards. Unless something gets hosed!
I do agree about keeping the commit history "clean". I hate it when people have a commit comment that says "Fixed bug X" but when you look at the commit there is a ton of other, unrelated changes in there. Often just whitespace/formatting changes they sneaked in, or perhaps some refactoring. It confuses the issue when you look back over the history. That is why I like to make a lot of git adds and commits. Often many per day. One thing at a time.
Except of course I do have many branches. The final product is in Github or BitBucket but I clone it and and hack on things in my office. Then again at home. Or perhaps on my laptop whilst out and about traveling. Sometimes means I end up conflicting with myself! But usually it is easy to get everything merged back into a whole.
I don't usually have a need for a master branch and then other branches for major changes/experiments.
Sure, a source control and versioning tool can make life easier, but it has nothing to do with good coding.
I can write good and bad code with a pencil and paper.
First you need to understand the problem itself. Do not guess, find out the hard facts surrounding the problem. Every hour spend understanding the problem saves multiple hours coding in the wrong direction.
Second you need to understand the limitations of the hardware used to run your code. This will also set boundaries around the possibility of a solution.
Third you need to relax and just think. There is where the experience of a programmer shows. Thinking thru multiple solutions to ponder feasibility and amount of work. Just sit there an think. Run some test code to verify assumption about the problem.
The next step is the actual coding, some people like Peter J. prefer to start at the bottom, coding needed subroutines, gluing them together later, other ones prefer to work from the top down, outlining the solution and filling it up further down. Normally one need a mix of both.
But usually, at that point, a programmer has the complete solution solved in his head and just need to write it down. Still independent of the language to use, mostly.
Life is good, everything runs smoothly until it does not.
Because even the best of the better programmers makes mistakes in steps one to three. Always. And management changes things in between. Also always.
And at that point in development, consistent and modular programming pays off: since you avoided duplicates in your code, a change only has to be made in one place.
Actually, here is where the fun begins while programming. Because the next idiot to fix that code next year is - hmm - yourself.
And programs you write are like boomerang-children. Mine are hunting me down even 20+ years after I wrote them. Still running, somewhere, and in need of some TLC.
Enjoy
Mike
In other words, avoid jumping into actual coding before understanding the problem.
This appears to be very difficult to do; I'm guilty of not "writing it down before". And the majority of my students don't understand the concept either. And usually, the "jumpers" have the most problems in terms of compile errors and logic-flow issues.
Mastering this step can save a lot of time and frustration.
DJ.
That is something I always do when I am stumped. My favorite inspiration inducing activity when working at one of the universities is to go for a coffee and some sightseeing in the cafeteria or on the patio. Nothing like it for getting the gray matter working.
Some good discussion in this thread from @microcontrolleruser here -
I have never seen anyone run a game in a command shell before. Maybe what he typed was a predefined user command(?) that opened and initiated it in the command prompt window. I tried to learn the command shell a while back, but couldn't make sense of it.
http://forums.parallax.com/discussion/comment/1437053/
Learn the memory layout of a C program - the text segment, heap segment, stack segment, and command line arguments - in Linux C, with examples.

Let's demonstrate the memory layout of a C program using a few example programs. To begin with, let's write a simple C program to add two integers.
#include <stdio.h>

int result1;
int add(int a, int b);   /* forward declaration, so the call in main is declared */

int main()
{
    int p1 = 10, p2 = 20;
    result1 = add(p1, p2);
    printf("The result is %d\n", result1);
    return 0;
}

int add(int a, int b)
{
    return a + b;
}

Program output:

# ./a.out
The result is 30

// size command output
# size ./a.out
   text    data     bss     dec     hex filename
    830     292       8    1130     46a ./a.out
https://www.mbed.in/c/c-memory-layout/
When we are developing a web application, we usually have a dashboard in the admin panel to show sales reports in a chart - say, the number of items sold this week, earnings of last week vs. the current week, the total number of new users signed up for the application, etc. To provide better and quicker visualization of this data, we need to create charts in our application. In this article, I am going to demonstrate how to show live data from a database in a Google Chart (a pie chart) in your ASP.NET MVC application, passing the data as JSON.
Google Chart API provides many types of charts in which we can represent the data in the application. Some of them are given below:
- Geo Chart
- Column Chart
- Bar Chart
- Line Chart
- Pie Chart
- Area Chart
There are 3 major steps to creating a pie (or any other) chart in your web application using the Google Charts API:
- Fetch the data from the database using C# (a controller action or Web Method)
- Get the data on the front end using jQuery Ajax (in JSON or any other format; we will use JSON in this example)
- Add columns and values to the Google Charts DataTable and draw the chart using JavaScript/jQuery
Let's create a sample project(GoogleChartExample) for it using Visual Studio 2017(You can use older version also if you don't have VS 2017).
- File -> New Project -> Web (left-hand menu) -> ASP.NET Web Application (right-hand menu)
- Select MVC template for auto-generating basic Controller and View of MVC
- Connect your Solution to database using Entity Framework, check this article here if you are new to Edmx
- Suppose we have this table in our Database, and we want to show its PostCount data by CategoryName in Google Charts(pie chart here)
- Now press Ctrl+F5; it will build your project and open the Index view of HomeController in your default web browser
If you are new to MVC, you can read about the basics of MVC in our previous article
- Now, what we need to do here is:
1. to erase this auto-generated HTML
2. create the HTML div for pie chart
3. place the Ajax call while loading the page
4. Fetch data into a List using C#
5. pass this list as a JSON to front end
6. Add this list data in Google Chart API Column using Javascript
7. Draw the Chart from the above-created column's and show it
Note: don't forget to include the Google Charts script in your view/application; here is the script reference:
<script type="text/javascript" src=""></script>
- Go step by step as described above
The HTML for the pie chart in the Index view can be:
<div id="chartdiv" style="width: 600px; height: 350px;"> </div>
Create an Ajax call on load of the view, using jQuery, to call the controller method which will return the JSON list:
//reference of Google charts in javascript
<script type="text/javascript" src=""></script>
<script type="text/javascript">
    google.charts.load('current', { 'packages': ['corechart'] });

    // Call the function after Google Charts is loaded; it is required,
    // otherwise you may get an error
    google.charts.setOnLoadCallback(DrawonLoad);

    function DrawonLoad() {
        $(function () {
            $.ajax({
                type: 'GET',
                url: '/Home/GetPiechartJSON',
                success: function (chartsdata) {
                    // Callback that creates and populates a data table,
                    // instantiates the pie chart, passes in the data and
                    // draws it.

                    // get JSONList from the returned object
                    var Data = chartsdata.JSONList;

                    var data = new google.visualization.DataTable();
                    data.addColumn('string', 'CategoryName');
                    data.addColumn('number', 'PostCount');

                    // Loop through each item of the list
                    for (var i = 0; i < Data.length; i++) {
                        data.addRow([Data[i].CategoryName, Data[i].PostCount]);
                    }

                    // Instantiate and draw the chart into the div
                    var chart = new google.visualization.PieChart(document.getElementById('chartdiv'));
                    chart.draw(data, { title: 'Posts by Category' });
                },
                error: function () {
                    alert("Error loading data! Please try again.");
                }
            });
        })
    }
</script>
Create a GetPiechartJSON method in HomeController to fetch data from the database and return it as JSON:

public JsonResult GetPiechartJSON()
{
    List<BlogPieChart> list = new List<BlogPieChart>();

    using (var context = new BlogDbEntities())
    {
        list = context.Blogs.Select(a => new BlogPieChart
        {
            CategoryName = a.CategoryName,
            PostCount = a.PostCount
        }).ToList();
    }

    return Json(new { JSONList = list }, JsonRequestBehavior.AllowGet);
}
Note: I have created a new class named BlogPieChart.cs to hold only CategoryName and PostCount; here is the code for it:

public class BlogPieChart
{
    public string CategoryName { get; set; }
    public int? PostCount { get; set; }
}
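For reference, the JSON the Ajax call receives is shaped like the sample below (the category names and counts are made-up values); the drawing loop simply turns each object into a [name, count] row:

```javascript
// Hypothetical payload, shaped like the { JSONList: [...] } object
// returned by GetPiechartJSON above.
const chartsdata = {
    JSONList: [
        { CategoryName: "ASP.NET", PostCount: 12 },
        { CategoryName: "jQuery",  PostCount: 8 }
    ]
};

// The same transformation the chart code performs with data.addRow(...)
const rows = chartsdata.JSONList.map(d => [d.CategoryName, d.PostCount]);
console.log(rows);
```

Each row matches the two columns declared with addColumn('string', ...) and addColumn('number', ...).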
Now run your application using Ctrl+F5. It will open in your browser, and the output will look like this:
- That's it, you have just created your pie chart using Google charts
Understanding the JavaScript code

Although I have explained the JavaScript code above with comments, I would like to clarify it a bit more.
google.charts.load('current', { 'packages': ['corechart'] });
google.charts.setOnLoadCallback(DrawonLoad);
The above JavaScript code loads Google Charts on our web page, and it will call the DrawonLoad function after the charts are loaded. These lines are required; removing them and making the Ajax call directly may throw a JavaScript error related to Google Charts: "Cannot read property 'DataTable' of undefined".
var Data = chartsdata.JSONList;
var data = new google.visualization.DataTable();
The above code fetches JSONList from the response and creates a variable to initialize the Google Charts DataTable.
Now add the columns to your Google Chart, with their data types:

data.addColumn('string', 'CategoryName');
data.addColumn('number', 'PostCount');
The code below loops through the data and adds each item as a row to the DataTable created above:

for (var i = 0; i < Data.length; i++) {
    data.addRow([Data[i].CategoryName, Data[i].PostCount]);
}
After this, we instantiate and draw our chart using the div's ID and a few extra options:

var chart = new google.visualization.PieChart(document.getElementById('chartdiv'));
chart.draw(data, { title: 'Posts by Category' });
That's it, we are done. If you find any issue in the code, or while creating a Google chart in your MVC application, feel free to add a comment or ask a question; we will help you.
https://qawithexperts.com/article/asp.net/using-google-charts-in-aspnet-mvc-with-example/54
Porting Pose2lux using PyCarrara to get Luxrender into Carrara?
I have no idea what I am talking about, since my programming skills are limited to 10 lines of C++ code, at best.
The idea is to port the Pose2Lux to Carrara with PyCarrara, to get Luxrender support.
The Pose2Lux python script is compatible with Poser 6 up to Poser Pro 2012, hopefully the same method would work with Carrara, but what do I know...
So far I got PyCarrara installed, it runs its samples and I got all the way to row 7 of Pose2Lux before the first DLL crash:
----------------------------------------------
Traceback (most recent call last):
File "PyCarrara_1EAF672C", line 7, in
File "C:\Python26\Lib\lib-tk\Tkinter.py", line 38, in
import FixTk
File "C:\Python26\Lib\lib-tk\FixTk.py", line 65, in
import _tkinter
ImportError: DLL load failed: Initieringen av en DLL-fil misslyckades.
=== Script execution terminated ===
-----------------------------------------------
Maybe someone with some actual skills can take a look at this, to check if it is possible or just a crazy idea...
Hmm, looks bad: Poser has had full Python support since v6, while PyCarrara only has limited Python support?
More info at
Renders
Today I got to line 3236 while executing pose2lux in the Python folder after commenting away some lines, stopped at loading poser.Scene, since I don't have Poser and have not started to change over to Carrara.
Still crashing at line 4 to 7 in Carrara though, cannot load the GUI DLL I think, as above. Attached crash screen.
Running Win7 64bit, Carrara 8.1.1 Pro 64Bit, PyCarrara 1.0 64bit and Python 2.6.6 64bit.
Help anyone?
GUI is a very weak point of PyCarrara. I've made several attempts based on several Python libraries and none of them worked. I believe it's linked to the way Python is invoked by PyCarrara.
More to the point, check that the library you're using is in the path seen by PyCarrara. In the sys library, you have the path variable that you can print or even update if necessary (see the Python sys library docs).
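A quick way to check this from inside the embedded interpreter is to print sys.path and append any missing directory; the Tk directory below is just an illustrative example, not necessarily the one your installation needs:

```python
import sys

# Show every directory the interpreter will search for modules.
for p in sys.path:
    print(p)

# If a needed library directory is missing, append it (illustrative path).
extra = r"C:\Python26\Lib\lib-tk"
if extra not in sys.path:
    sys.path.append(extra)
```

If the directory containing _tkinter's DLLs is not on this list (or the DLL architecture does not match the host), imports will fail exactly as in the traceback above.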
Thank you for the help, checked the Path browser in IDLE, looked good.
Googled the error, and one response mentioned 64-bit issues with Tk; not sure that is really the problem though.
Since PyCarrara may be the problem, I started to look at other exporters, looks like LuxSI for Softimage/XSI is complete/updated and written in C++, suitable for the Carrara SDK.
Need to look further into this, waiting for the Daz support team to upload the SDK now...
There is also a cheap solution (it's the one I've chosen): buy an old version of Poser 7: Cheap Poser 7.
You'll get your luxrender script working and, as a benefit, a cloth simulation that works, a walk generator, possibility of importing binary morph...
For $30, I've never regretted it :-)
With Poser Debut and Trialpay, it is free to get the pose2lux script running.
But I would like to stick with Carrara and avoid Poser, already have Daz Studio that has Reality for luxrender.
A restart, the Carrara SDK is now here:
The Luxsi 1.1 source is here:
Free Softimage Mod Tool for testing here:
Free time limited Softimage XSI trial:
The only thing missing is programming knowledge and time...
http://www.daz3d.com/forums/viewreply/137476/
Details are used in the framework to work with tabular data pertaining to a record in an item's table.
For example, the Invoices journal in the Demo application has the InvoiceTable detail, which keeps the list of tracks in a customer's invoice.
Details and detail items share the same underlying database table.
To create a detail, you must first create a detail item (select the Details group in the project tree and click the New button) and then use the Details Dialog (select the item in the project tree and click the Details button) to add a detail to an item.
For example the following code
def on_created(task):
    task.invoice_table.open()
    print task.invoice_table.record_count()

    task.invoices.open(limit=1)
    task.invoices.invoice_table.open()
    print task.invoices.invoice_table.record_count()
will print:

2259
6
Details have two common fields, master_id and master_rec_id, that are used to store the ID of the master (each item has its own unique ID) and the value of the primary field of the current record of its master. This way each table can be linked to several items, and each item can have several details. To get access to the details of an item, use its details attribute. To get access to the master of a detail, use its master attribute.
Detail class, used to create details, is an ancestor of the Item class and inherits all its attributes, methods and events.
Note: The apply method of the Detail class does nothing. To write changes made to a detail, use the apply method of its master.
To work with a detail, its master must be active.
To make any changes to a detail, its master must be in edit or insert mode.
In this example from the client module of the Invoices item of Demo project, the Invoice_table detail is reopened every time the cursor of its master moves to another record.
var ScrollTimeOut;

function on_after_scroll(item) {
    clearTimeout(ScrollTimeOut);
    ScrollTimeOut = setTimeout(
        function() {
            item.invoice_table.open(function() {});
        },
        100
    );
}
And just as an example:
from datetime import datetime, timedelta

def on_created(task):
    invoices = task.invoices.copy()
    invoices.set_where(invoicedate__gt=datetime.now() - timedelta(days=1))
    invoices.open()
    for i in invoices:
        i.invoice_table.open()
        i.edit()
        for t in i.invoice_table:
            t.edit()
            t.sales_id.value = '101010'
            t.post()
        i.post()
    invoices.apply()
The same code on the client will be as follows:
function on_page_loaded(task) {
    var date = new Date(),
        invoices = task.invoices.copy();

    invoices.set_where({invoicedate__gt: date.setDate(date.getDate() - 1)});
    invoices.open();
    invoices.each(function(i) {
        i.invoice_table.open();
        i.edit();
        i.invoice_table.each(function(t) {
            t.edit();
            t.sales_id.value = '101010';
            t.post();
        });
        i.post();
    });
    invoices.apply();
}
http://jam-py.com/docs/programming/data/details.html
Re: Java, Ruby, JRuby, JRubify some Java?
- From: Axel Etzold <AEtzold@xxxxxx>
- Date: Wed, 23 Sep 2009 03:32:14 -0500
-------- Original-Nachricht --------
Datum: Wed, 23 Sep 2009 15:25:11 +0900
Von: Audrey A Lee <audrey.lee.is.me@xxxxxxxxx>
An: ruby-talk@xxxxxxxxxxxxx
Betreff: Java, Ruby, JRuby, JRubify some Java?
Hello JRuby People,
I'm not quite ready to JRubyify yet but,
I'm working on a mini-project which requires that I screen-capture a
portion of my x-display on a linux box.
It looks like I can use a class in Java named "Robot" to do this:
-
I figure any class (even if it is a Java class) named "Robot" deserves
my attention.
So I ran this query:
-
And this page looks good:
-
I see this example:
import java.awt.AWTException;
import java.awt.Robot;
import java.awt.Rectangle;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.io.*;
import javax.imageio.ImageIO;

class ScreenCapture {
    public static void main(String args[]) throws AWTException, IOException {
        // capture the whole screen
        BufferedImage screencapture = new Robot().createScreenCapture(
            new Rectangle(Toolkit.getDefaultToolkit().getScreenSize()));

        // Save as JPEG
        File file = new File("screencapture.jpg");
        ImageIO.write(screencapture, "jpg", file);

        // Save as PNG
        // File file = new File("screencapture.png");
        // ImageIO.write(screencapture, "png", file);
    }
}
My question:
Is it possible to transform the above Java-syntax into Ruby-syntax
which could be interpreted by JRuby?
Or I could ask it this way:
How do I transform the above Java-syntax into JRuby-syntax?
--Audrey
Dear Audrey,
you can use Java classes in JRuby straight away:
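For example, the Java snippet above could be written roughly like this in JRuby. This is an untested sketch: it needs JRuby (plain MRI Ruby cannot load Java classes) plus a running display, and the output file name is arbitrary:

```ruby
# Run with JRuby, not MRI.
require 'java'

java_import java.awt.Robot
java_import java.awt.Rectangle
java_import java.awt.Toolkit
java_import javax.imageio.ImageIO

# Capture the whole screen, as in the Java version.
screen_size = Toolkit.default_toolkit.screen_size
capture     = Robot.new.create_screen_capture(Rectangle.new(screen_size))

# Save as PNG (use "jpg" for JPEG, matching the Java example).
ImageIO.write(capture, "png", java.io.File.new("screencapture.png"))
```

Note how JRuby lets you call Java getters like getScreenSize through snake_case attribute syntax (screen_size).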
For Linux automation, you might want to look at the (non-Java) xdotool and its Ruby gem binding, xdo:
You might combine that with one of the many ways to take screenshots
in Linux:
Best regards,
Axel
--
http://newsgroups.derkeiler.com/Archive/Comp/comp.lang.ruby/2009-09/msg01658.html
BOYS WILL BE BOYS: GENDER, OVERCONFIDENCE, AND COMMON STOCK INVESTMENT*
BOYS WILL BE BOYS: GENDER, OVERCONFIDENCE, AND COMMON STOCK INVESTMENT*

BRAD M. BARBER AND TERRANCE ODEAN

Theoretical models predict that overconfident investors trade excessively. We test this prediction by partitioning investors on gender. Psychological research demonstrates that, in areas such as finance, men are more overconfident than women. Thus, theory predicts that men will trade more excessively than women. Using account data for over 35,000 households from a large discount brokerage, we analyze the common stock investments of men and women from February 1991 through January 1997.

It is difficult to reconcile the volume of trading observed in equity markets with the trading needs of rational investors. Rational investors make periodic contributions and withdrawals from their investment portfolios, rebalance their portfolios, and trade to minimize their taxes. Those possessed of superior information may trade speculatively, although rational speculative traders will generally not choose to trade with each other. It is unlikely that rational trading needs account for a turnover rate of 76 percent on the New York Stock Exchange.1 We believe there is a simple and powerful explanation for high levels of trading on financial markets: overconfidence. Human beings are overconfident about their abilities, their knowledge, and their future prospects. Odean [1998] shows that overconfident investors who believe that the precision of their knowledge about the value of a security is greater than it actually

* We are grateful to the discount brokerage firm that provided us with the data for this study and grateful to Paul Thomas, David Moore, Paine Webber, and the Gallup Organization for providing survey data. We appreciate the comments of Diane Del Guercio, David Hirshleifer, Andrew Karolyi, Timothy Loughran, Edward Opton Jr., Sylvester Schieber, Andrei Shleifer, Martha Starr-McCluer, Richard Thaler, Luis Viceira, and participants at the University of Alberta, Arizona State University, INSEAD, the London Business School, the University of Michigan, the University of Vienna, the Institute on Psychology and Markets, the Conference on Household Portfolio Decision-making and Asset Holdings at the University of Pennsylvania, and the Western Finance Association Meetings. All errors are our own. Terrance Odean can be reached at (530) or odean@ucdavis.edu; Brad Barber can be reached at (530) or bmbarber@ucdavis.edu.

1. NYSE Fact Book for the Year.

© by the President and Fellows of Harvard College and the Massachusetts Institute of Technology. The Quarterly Journal of Economics, February 2001.
is trade more than rational investors and that doing so lowers their expected utilities. Greater overconfidence leads to greater trading and to lower expected utility. A direct test of whether overconfidence contributes to excessive market trading is to separate investors into those more and those less prone to overconfidence. One can then test whether more overconfidence leads to more trading and to lower returns. Such a test is the primary contribution of this paper.

Psychologists find that in areas such as finance men are more overconfident than women. This difference in overconfidence yields two predictions: men will trade more than women, and the performance of men will be hurt more by excessive trading than the performance of women. To test these hypotheses, we partition a data set of position and trading records for over 35,000 households at a large discount brokerage firm into accounts opened by men and accounts opened by women. Consistent with the predictions of the overconfidence models, we find that the average turnover rate of common stocks for men is nearly one and a half times that for women. While both men and women reduce their net returns through trading, men do so by 0.94 percentage points more a year than do women. The differences in turnover and return performance are even more pronounced between single men and single women. Single men trade 67 percent more than single women, thereby reducing their returns by 1.44 percentage points per year more than do single women.

The remainder of this paper is organized as follows. We motivate our test of overconfidence in Section I. We discuss our data and empirical methods in Section II. Our main results are presented in Section III. We discuss competing explanations for our results in Section IV and make concluding remarks in Section V.

I. A TEST OF OVERCONFIDENCE

I.A.
Overconfidence and Trading on Financial Markets

Studies of the calibration of subjective probabilities find that people tend to overestimate the precision of their knowledge [Alpert and Raiffa 1982; Fischhoff, Slovic, and Lichtenstein 1977]; see Lichtenstein, Fischhoff, and Phillips [1982] for a review of the calibration literature. Such overconfidence has been observed in many professional fields ... overconfidence in their judgments. (For further discussion see Lichtenstein, Fischhoff, and Phillips [1982] and Yates [1990].) Overconfidence is greatest for difficult tasks, for forecasts with low predictability, and for undertakings lacking fast, clear feedback [Fischhoff, Slovic, and Lichtenstein 1977; Lichtenstein, Fischhoff, and Phillips 1982; Yates 1990; Griffin and Tversky 1992]. Selecting common stocks that will outperform the market is a difficult task. Predictability is low; feedback is noisy. Thus, stock selection is the type of task for which people are most overconfident.

Odean [1998] develops models in which overconfident investors overestimate the precision of their knowledge about the value of a financial security.2 They overestimate the probability that their personal assessments of the security's value are more accurate than the assessments of others. Thus, overconfident investors believe more strongly in their own valuations, and concern themselves less about the beliefs of others. This intensifies differences of opinion. And differences of opinion cause trading [Varian 1989; Harris and Raviv 1993]. Rational investors only trade and only purchase information when doing so increases their expected utility (e.g., Grossman and Stiglitz [1980]). Overconfident investors, on the other hand, lower their expected utility by trading too much; they hold unrealistic beliefs about how high their returns will be and how precisely these can be estimated; and they expend too many resources (e.g., time and money) on investment information [Odean 1998]. Overconfident

2. Other models of overconfident investors include De Long, Shleifer, Summers, and Waldmann [1991], Benos [1998], Kyle and Wang [1997], Daniel, Hirshleifer, and Subramanyam [1998], Gervais and Odean [1998], and Caballé and Sákovics [1998]. Kyle and Wang argue that when traders compete for duopoly profits, overconfident traders may reap greater profits. However, this prediction is based on several assumptions that do not apply to individuals trading common stocks. Odean [1998] points out that overconfidence may result from investors overestimating the precision of their private signals or, alternatively, overestimating their abilities to correctly interpret public signals.
investors also hold riskier portfolios than do rational investors with the same degree of risk aversion [Odean 1998].

Barber and Odean [2000] and Odean [1999] test whether investors decrease their expected utility by trading too much. Using the same data analyzed in this paper, Barber and Odean show that after accounting for trading costs, individual investors underperform relevant benchmarks. Those who trade the most realize, by far, the worst performance. This is what the models of overconfident investors predict. With a different data set, Odean [1999] finds ... The findings are inconsistent with rationality and not easily explained in the absence of overconfidence. Nevertheless, overconfidence is neither directly observed nor manipulated in these studies. A yet sharper test of the models that incorporate overconfidence is to partition investors into those more and those less prone to overconfidence. The models predict that the more overconfident investors will trade more and realize lower average utilities. To test these predictions, we partition our data on gender.

I.B. Gender and Overconfidence

While both men and women exhibit overconfidence, men are generally more overconfident than women [Lundeberg, Fox, and Punćochaŕ 1994].3 Gender differences in overconfidence are highly task dependent [Lundeberg, Fox, and Punćochaŕ 1994]. Deaux and Farris [1977] write "Overall, men claim more ability than do women, but this difference emerges most strongly on ... masculine task[s]." Several studies confirm that differences in confidence are greatest for tasks perceived to be in the masculine domain [Deaux and Emswiller 1974; Lenney 1977; Beyer and Bowden 1997]. Men are inclined to feel more competent than women do in financial matters [Prince 1993]. Indeed, casual observation reveals that men are disproportionately represented in the financial industry. We expect, therefore, that men will generally be more overconfident about their ability to make financial decisions than women.

Additionally, Lenney [1977] reports that gender differences in self-confidence depend on the lack of clear and unambiguous feedback. When feedback is "unequivocal and immediately available, women do not make lower ability estimates than men. However, when such feedback is absent or ambiguous, women seem to have lower opinions of their abilities and often do underestimate relative to men." Feedback in the stock market is ambiguous. All the more reason to expect men to be more confident than women about their ability to make common stock investments.

Gervais and Odean [1998] develop a model in which investor overconfidence results from self-serving attribution bias. Investors in this model infer their own abilities from their successes and failures. Due to their tendency to take too much credit for their successes, they become overconfident. Deaux and Farris [1977], Meehan and Overton [1986], and Beyer [1990] find that the self-serving attribution bias is greater for men than for women. And so men are likely to become more overconfident than women.

The previous study most like our own is Lewellen, Lease, and Schlarbaum's [1977] analysis of survey answers and brokerage records (from 1964 through 1970) of 972 individual investors. Lewellen, Lease, and Schlarbaum report that men spend more time and money on security analysis, rely less on their brokers, make more transactions, believe that returns are more highly predictable, and anticipate higher possible returns than do women. In all these ways, men behave more like overconfident investors than do women.

3. While Lichtenstein and Fishhoff [1981] do not find gender differences in calibration of general knowledge, Lundeberg, Fox, and Punćochaŕ [1994] argue that this is because gender differences in calibration are strongest for topics in the masculine domain.
Additional evidence that men are more overconfident investors than women comes from surveys conducted by the Gallup Organization for PaineWebber. Gallup conducted the survey fifteen times between June 1998 and January 2000. There were approximately 1000 respondents per survey. In addition to other questions, respondents were asked "What overall rate of return do you expect to get on your portfolio in the NEXT twelve months?" and "Thinking about the stock market more generally, what overall rate of return do you think the stock market will
provide investors during the coming twelve months?" On average, both men and women expected their own portfolios to outperform the market. However, men expected to outperform by a greater margin (2.8 percent) than did women (2.1 percent). The difference in the average anticipated outperformance of men and women is statistically significant (t = 3.3).4

In summary, we have a natural experiment to (almost) directly test theoretical models of investor overconfidence. A rational investor only trades if the expected gain exceeds the transactions costs. An overconfident investor overestimates the precision of his information and thereby the expected gains of trading. He may even trade when the true expected net gain is negative. Since men are more overconfident than women, this gives us two testable hypotheses:

H1: Men trade more than women.
H2: By trading more, men hurt their performance more than do women.

It is these two hypotheses that are the focus of our inquiry.5

II. DATA AND METHODS

II.A. Household Account and Demographic Data

Our main results focus on the common stock investments of 37,664 households for which we are able to identify the gender of the person who opened the household's first brokerage account. This sample is compiled from two data sets. Our primary data set is information from a large discount brokerage firm on the investments of 78,000 households for the six years ending in December 1996. For this period, we have end-of-month position statements and trades that allow us to reasonably estimate monthly returns from February 1991 through January 1997.

4. Some respondents answered that they expected market returns as high as 900. We suspect that these respondents were thinking of index point moves rather than percentage returns. Therefore, we have dropped from our calculations respondents who gave answers of more than 100 to this question. If, alternatively, we Winsorize answers over 900 at 100, there is no significant change in our results.

5. Overconfidence models also imply that more overconfident investors will hold riskier portfolios. In Section III we present evidence that men hold riskier common stock portfolios than women. However, gender differences in portfolio risk may be due to differences in risk tolerance rather than (or in addition to) differences in overconfidence.
7 BOYS WILL BE BOYS The data set includes all accounts opened by the 78,000 households at this discount brokerage rm. Sampled households were required to have an open account with the discount brokerage rm during Roughly half of the accounts in our analysis were opened prior to 1987, while half were opened between 1987 and On average, men opened their rst account at this brokerage 4.7 years before the beginning of our sample period, while women opened theirs 4.3 years before. During the sample period, men s accounts held common stocks for 58 months on average and women s for 59 months. The median number of months men held common stocks is 70. For women it is 71. In this research, we focus on the common stock investments of households. We exclude investments in mutual funds (both open- and closed-end), American depository receipts (ADRs), warrants, and options. Of the 78,000 sampled households, 66,465 had positions in common stocks during at least one month; the remaining accounts either held cash or investments in other than individual common stocks. The average household had approximately two accounts, and roughly 60 percent of the market value in these accounts was held in common stocks. These households made over 3 million trades in all securities during our sample period; common stocks accounted for slightly more than 60 percent of all trades. The average household held four stocks worth $47,000 during our sample period, although each of these gures is positively skewed. 6 The median household held 2.6 stocks worth $16,000. In aggregate, these households held more than $4.5 billion in common stocks in December Our secondary data set is demographic information compiled by Infobase Inc. (as of June 8, 1997) and provided to us by the brokerage house. These data identify the gender of the person who opened a household s rst account for 37,664 households, of which 29,659 (79 percent) had accounts opened by men and 8,005 (21 percent) had accounts opened by women. 
In addition to gender, Infobase provides data on marital status, the presence of children, age, and household income. We present descriptive statistics in Table I, Panel A. These data reveal that the women in our sample are less likely to be married and to have children than men. The mean and median ages of the men and women in our sample are roughly equal. The women report slightly lower household income, although the difference is not economically large.

6. Throughout the paper, portfolio values are reported in current dollars.
QUARTERLY JOURNAL OF ECONOMICS

TABLE I
DESCRIPTIVE STATISTICS FOR DEMOGRAPHICS OF FEMALE AND MALE HOUSEHOLDS

Columns report, for all households, married households, and single households, the values for women, men, and their difference (women minus men).

Panel A: Infobase data. Number of households: 8,005 women and 29,659 men (all); 4,894 women and 19,741 men (married); 2,306 women and 6,326 men (single). The remaining Panel A rows (percentage married, percentage with children, mean age, median age, mean income in $000, and percentage with income above $125,000) are not recoverable from this copy.

Panel B: Self-reported data. Number of households: 3,637 women and 11,226 men (all households) and 1,707 married women; the remaining counts, together with the rows for net worth (90th, 75th, median, 25th, and 10th percentiles), equity to net worth (mean and median), and investment experience (none, limited, good, extensive), are not recoverable from this copy.

The sample consists of households with common stock investment at a large discount brokerage firm for which we are able to identify the gender of the person who opened the household's first account. Data on marital status, children, age, and income are from Infobase Inc. as of June 1997. Self-reported data are information supplied to the discount brokerage firm at the time the account is opened by the person opening the account. Income is reported within eight ranges, where the top range is greater than $125,000. We calculate means using the midpoint of each range and $125,000 for the top range. Equity to net worth (%) is the proportion of the market value of common stock investment at this discount brokerage firm as of January 1991 to total self-reported net worth when the household opened its first account at this brokerage. Those households with a proportion of equity to net worth greater than 100 percent are deleted when calculating means and medians. The number of observations for each variable is slightly less than the number of reported households.
In addition to the data from Infobase, we also have a limited amount of self-reported data collected at the time each household first opened an account at the brokerage (and not subsequently updated), which we summarize in Table I, Panel B. Of particular interest to us are two variables: net worth and investment experience. For this limited sample (about one-third of our total sample), the distribution of net worth for women is slightly less than that for men, although the difference is not economically large. For this limited sample, we also calculate the ratio of the market value of equity (as of the first month that the account appears in our data set) to self-reported net worth (which is reported at the time the account is opened). This provides a measure, albeit crude, of the proportion of a household's net worth that is invested in the common stocks that we analyze. (If this ratio is greater than one, we delete the observation from our analysis.) The mean household holds about 13 percent of its net worth in the common stocks we analyze, and there is little difference in this ratio between men and women. The differences in self-reported experience by gender are quite large. In general, women report having less investment experience than men. For example, 47.8 percent of women report having good or extensive investment experience, while 62.5 percent of men report the same level of experience. Married couples may influence each other's investment decisions. In some cases the spouse making investment decisions may not be the spouse who originally opened a brokerage account. Thus, we anticipate that observable differences in the investment activities of men and women will be greatest for single men and single women. To investigate this possibility, we partition our data on the basis of marital status. The descriptive statistics from this partition are presented in the last six columns of Table I.
For married households, we observe very small differences in age, income, the distribution of net worth, and the ratio of net worth to equity. Married women in our sample are less likely to have children than married men, and they report having less investment experience than men. For single households, some differences in demographics become larger. The average age of the single women in our sample is five years older than that of the single men; the median is four years older. The average income of single women is $6,100 less than that of single men, and fewer report having incomes in excess of $125,000. Similarly, the distribution of net worth for single women
is lower than that of single men. Finally, single women report having less investment experience than single men.

II.B. Return Calculations

To evaluate the investment performance of men and women, we calculate the gross and net return performance of each household. The net return performance is calculated after a reasonable accounting for the market impact, commissions, and bid-ask spread of each trade. For each trade, we estimate the bid-ask spread component of transaction costs for purchases (spr_db) or sales (spr_ds) as

  spr_ds = (P_ds^cl / P_ds^s) − 1,  and  spr_db = −[(P_db^cl / P_db^b) − 1].

P_ds^cl and P_db^cl are the reported closing prices from the Center for Research in Security Prices (CRSP) daily stock return files on the day of a sale and purchase, respectively; P_ds^s and P_db^b are the actual sale and purchase prices from our account database. Our estimate of the bid-ask spread component of transaction costs includes any market impact that might result from a trade. It also includes an intraday return on the day of the trade. The commission component of transaction costs is calculated to be the dollar value of the commission paid scaled by the total principal value of the transaction, both of which are reported in our account data. The average purchase costs an investor 0.31 percent, while the average sale costs an investor 0.69 percent in bid-ask spread. Our estimate of the bid-ask spread is very close to the trading cost of 0.21 percent for purchases and 0.63 percent for sales paid by open-end mutual funds from 1966 to 1993 [Carhart 1997]. [7] The average purchase in excess of $1000 cost 1.58 percent in commissions, while the average sale in excess of $1000 cost 1.45 percent. [8] We calculate trade-weighted (weighted by trade size) spreads and commissions. These figures can be thought of as the total cost
7. Odean [1999] finds that individual investors are more likely to both buy and sell particular stocks when the prices of those stocks are rising. This tendency can partially explain the asymmetry in buy and sell spreads. Any intraday price rises following transactions subtract from our estimate of the spread for buys and add to our estimate of the spread for sells.
8. To provide more representative descriptive statistics on percentage commissions, we exclude trades less than $1000. The inclusion of these trades results in a round-trip commission cost of 5 percent, on average (2.1 percent for purchases and 3.1 percent for sales).
of conducting the $24 billion in common stock trades (approximately $12 billion each in purchases and sales). Trade-size weighting has little effect on spread costs (0.27 percent for purchases and 0.69 percent for sales) but substantially reduces the commission costs (0.77 percent for purchases and 0.66 percent for sales). In sum, the average trade in excess of $1000 incurs a round-trip transaction cost of about 1 percent for the bid-ask spread and about 3 percent in commissions. In aggregate, round-trip trades cost about 1 percent for the bid-ask spread and about 1.4 percent in commissions. We estimate the gross monthly return on each common stock investment using the beginning-of-month position statements from our household data and the CRSP monthly returns file. In so doing, we make two simplifying assumptions. First, we assume that all securities are bought or sold on the last day of the month. Thus, we ignore the returns earned on stocks purchased from the purchase date to the end of the month and include the returns earned on stocks sold from the sale date to the end of the month. Second, we ignore intramonth trading (e.g., a purchase on March 6 and a sale of the same security on March 20), although we do include in our analysis short-term trades that yield a position at the end of a calendar month. Barber and Odean [2000] provide a careful analysis of both of these issues and document that these simplifying assumptions yield trivial differences in our return calculations. Consider the common stock portfolio for a particular household.
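The spread estimates above can be sketched in code. This is an illustrative Python translation of the two formulas; the prices are hypothetical, not data from the paper:

```python
# A sketch of the bid-ask spread estimates in Section II.B. Prices below are
# hypothetical; the close is the CRSP closing price on the trade date, and the
# sale/purchase prices are the actual transaction prices.

def sale_spread(close_price, sale_price):
    """spr_ds = P_cl / P_s - 1: positive when the sale executes below the close."""
    return close_price / sale_price - 1.0

def purchase_spread(close_price, purchase_price):
    """spr_db = -(P_cl / P_b - 1): positive when the purchase executes above the close."""
    return -(close_price / purchase_price - 1.0)

# Selling at 19.90 against a 20.00 close, or buying at 20.10 against it,
# each costs roughly half a percent.
print(round(sale_spread(20.00, 19.90), 4))
print(round(purchase_spread(20.00, 20.10), 4))
```

Both estimates fold any market impact and intraday return on the trade date into the measured spread, as noted in the text.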
The gross monthly return on the household's portfolio (R_ht^gr) is calculated as

  R_ht^gr = Σ_{i=1}^{s_ht} p_it R_it^gr,

where p_it is the beginning-of-month market value for the holding of stock i by household h in month t divided by the beginning-of-month market value of all stocks held by household h, R_it^gr is the gross monthly return for that stock, and s_ht is the number of stocks held by household h in month t. For security i in month t, we calculate a monthly return net of transaction costs (R_it^net) as

  (1 + R_it^net) = (1 + R_it^gr)(1 − c_it^s)/(1 + c_{i,t−1}^b),
where c_it^s is the cost of sales scaled by the sales price in month t and c_{i,t−1}^b is the cost of purchases scaled by the purchase price in month t − 1. The cost of purchases and sales includes the commission and bid-ask spread components, which are estimated individually for each trade as previously described. Thus, for a security purchased in month t − 1 and sold in month t, both c_it^s and c_{i,t−1}^b are positive; for a security that was neither purchased in month t − 1 nor sold in month t, both c_it^s and c_{i,t−1}^b are zero. Because the timing and cost of purchases and sales vary across households, the net return for security i in month t will vary across households. The net monthly portfolio return for each household is

  R_ht^net = Σ_{i=1}^{s_ht} p_it R_it^net.

(If only a portion of the beginning-of-month position in stock i was purchased or sold, the transaction cost is only applied to the portion that was purchased or sold.) We estimate the average gross and net monthly returns earned by men as

  RM_t^gr = (1/n_mt) Σ_{h=1}^{n_mt} R_ht^gr,  and  RM_t^net = (1/n_mt) Σ_{h=1}^{n_mt} R_ht^net,

where n_mt is the number of male households with common stock investment in month t. There are analogous calculations for women.

II.C. Turnover

We calculate the monthly portfolio turnover for each household as one-half the monthly sales turnover plus one-half the monthly purchase turnover. [9] In each month during our sample period, we identify the common stocks held by each household at the beginning of month t from their position statement. To calculate monthly sales turnover, we match these positions to sales
9. Sell turnover for household h in month t is calculated as Σ_{i=1}^{s_ht} p_it min(1, S_it/H_it), where S_it is the number of shares of security i sold during the month, p_it is the value of stock i held at the beginning of month t scaled by the total value of stock holdings, and H_it is the number of shares of security i held at the beginning of month t. Buy turnover is calculated as Σ_{i=1}^{s_{h,t+1}} p_{i,t+1} min(1, B_it/H_{i,t+1}), where B_it is the number of shares of security i bought during the month.
during month t. The monthly sales turnover is calculated as the shares sold times the beginning-of-month price per share divided by the total beginning-of-month market value of the household's portfolio. To calculate monthly purchase turnover, we match these positions to purchases during month t − 1. The monthly purchase turnover is calculated as the shares purchased times the beginning-of-month price per share divided by the total beginning-of-month market value of the portfolio. [10]

II.D. The Effect of Trading on Return Performance

We calculate an own-benchmark abnormal return for individual investors that is similar in spirit to those proposed by Lakonishok, Shleifer, and Vishny [1992] and Grinblatt and Titman [1993]. In this abnormal return calculation, the benchmark for household h is the month t return of the beginning-of-year portfolio held by household h, [11] denoted R_ht^b. It represents the return that the household would have earned if it had merely held its beginning-of-year portfolio for the entire year. The gross or net own-benchmark abnormal return is the return earned by household h less the return of household h's beginning-of-year portfolio (AR_ht^gr = R_ht^gr − R_ht^b or AR_ht^net = R_ht^net − R_ht^b). If the household did not trade during the year, the own-benchmark abnormal return would be zero for all twelve months during the year. In each month the abnormal returns across male households are averaged, yielding a 72-month time series of mean monthly own-benchmark abnormal returns. Statistical significance is calculated using t-statistics based on this time series: AR_t^gr / [σ(AR_t^gr)/√72], where

  AR_t^gr = (1/n_mt) Σ_{h=1}^{n_mt} (R_ht^gr − R_ht^b).

10. If more shares were sold than were held at the beginning of the month (because, for example, an investor purchased additional shares after the beginning of the month), we assume the entire beginning-of-month position in that security was sold.
Similarly, if more shares were purchased in the preceding month than were held in the position statement, we assume that the entire position was purchased in the preceding month. Thus, turnover, as we have calculated it, cannot exceed 100 percent in a month.
11. When calculating this benchmark, we begin the year on February 1. We do so because our first monthly position statements are from the month end of January 1991. If the stocks held by a household at the beginning of the year are missing CRSP returns data during the year, we assume that stock is invested in the remainder of the household's portfolio.
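The return and own-benchmark calculations of Sections II.B and II.D can be sketched as follows; all weights, returns, and costs are fabricated for illustration, and only the formulas come from the text:

```python
# A minimal sketch of the return calculations in Sections II.B and II.D.
# All weights, returns, and costs below are made up for illustration; the
# formulas are the ones defined in the text.

def portfolio_return(weights, returns):
    """R_ht = sum_i p_it * R_it, with the weights p_it summing to one."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * r for w, r in zip(weights, returns))

def net_stock_return(gross, cost_sale=0.0, cost_buy=0.0):
    """(1 + R_net) = (1 + R_gr)(1 - c_s)/(1 + c_b)."""
    return (1.0 + gross) * (1.0 - cost_sale) / (1.0 + cost_buy) - 1.0

# Own-benchmark abnormal return: the household's realized return less the
# return its beginning-of-year portfolio would have earned in the same month.
weights_now = [0.5, 0.5]     # weights after trading during the year
weights_boy = [0.3, 0.7]     # beginning-of-year weights (the benchmark)
returns_now = [0.02, -0.01]  # gross monthly returns of the two stocks
abnormal = (portfolio_return(weights_now, returns_now)
            - portfolio_return(weights_boy, returns_now))  # AR = R_h - R_b
print(round(abnormal, 4))    # positive here: trading helped this month
```

A household that did not trade keeps its beginning-of-year weights, so its abnormal return is zero by construction, exactly as the text notes.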
There is an analogous calculation of net abnormal returns for men, gross abnormal returns for women, and net abnormal returns for women. [12] The advantage of the own-benchmark abnormal return measure is that it does not adjust returns according to a particular risk model. No model of risk is universally accepted; furthermore, it may be inappropriate to adjust investors' returns for stock characteristics that they do not associate with risk. The own-benchmark measure allows each household to self-select the investment style and risk profile of its benchmark (i.e., the portfolio it held at the beginning of the year), thus emphasizing the effect trading has on performance.

II.E. Security Selection

Our theory says that men will underperform women because men trade more and trading is costly. An alternative cause of underperformance is inferior security selection. Two investors with similar initial portfolios and similar turnover will differ in performance if one consistently makes poor security selections. To measure security selection ability, we compare the returns of stocks bought with those of stocks sold. In each month we construct a portfolio comprised of those stocks purchased by men in the preceding twelve months. The returns on this portfolio in month t are calculated as

  R_t^pm = Σ_{i=1}^{n_pt} T_it^pm R_it / Σ_{i=1}^{n_pt} T_it^pm,

where T_it^pm is the aggregate value of all purchases by men in security i from month t − 12 through t − 1, R_it is the gross monthly return of stock i in month t, and n_pt is the number of different stocks purchased from month t − 12 through t − 1. (Alternatively, we weight by the number rather than the value of trades.) Four portfolios are constructed: one for the purchases of men (R_t^pm), one for the purchases of women (R_t^pw), one for the sales of men (R_t^sm), and one for the sales of women (R_t^sw).
12. Alternatively, one can first calculate the monthly time-series average own-benchmark return for each household and then test the significance of the cross-sectional average of these. The t-statistics for the cross-sectional tests are larger than those we report for the time-series tests.
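The trade-weighted portfolio return of Section II.E can be sketched as follows; the trade values and returns are made up, not data from the paper:

```python
# A sketch of the trade-weighted portfolio return in Section II.E:
# R_t^p = sum_i T_i * R_i / sum_i T_i, where T_i is the aggregate dollar
# value traded in stock i over the prior twelve months (illustrative numbers).

def trade_weighted_return(trade_values, returns):
    """Return of a portfolio weighting each stock by its aggregate trade value."""
    total = sum(trade_values)
    return sum(t * r for t, r in zip(trade_values, returns)) / total

# Stocks bought over the prior twelve months versus stocks sold.
purchases = trade_weighted_return([30_000, 10_000], [0.01, 0.05])
sales = trade_weighted_return([20_000, 20_000], [0.02, 0.04])
print(round(purchases - sales, 4))  # negative here: buys underperformed sells
```

Comparing the purchase and sale portfolios in this way isolates security selection from the cost of trading itself.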
III. RESULTS

III.A. Men versus Women

In Table II, Panel A, we present position values and turnover rates for the portfolios held by men and women. Women hold slightly, but not dramatically, smaller common stock portfolios ($18,371 versus $21,975). Of greater interest is the difference in turnover between women and men. Models of overconfidence predict that women, who are generally less overconfident than men, will trade less than men. The empirical evidence is consistent with this prediction. Women turn their portfolios over approximately 53 percent annually (monthly turnover of 4.4 percent times twelve), while men turn their portfolios over approximately 77 percent annually (monthly turnover of 6.4 percent times twelve). We are able to comfortably reject the null hypothesis that turnover rates are similar for men and women (at less than a 1 percent level). Although the median turnover is substantially less for both men and women, the differences in the median levels of turnover are also reliably different between genders. In Table II, Panel B, we present the gross and net percentage monthly own-benchmark abnormal returns for common stock portfolios held by women and men. Women earn gross monthly returns that are percent lower than those earned by the portfolio they held at the beginning of the year, while men earn gross monthly returns that are percent lower than those earned by the portfolio they held at the beginning of the year. Both shortfalls are statistically significant at the 1 percent level, as is their difference (0.34 percent annually). Turning to net own-benchmark returns, we find that women earn net monthly returns that are percent lower than those earned by the portfolio they held at the beginning of the year, while men earn net monthly returns that are percent lower than those earned by the portfolio they held at the beginning of the year.
Again, both shortfalls are statistically significant at the 1 percent level, as is their difference of percent (0.94 percent annually). Are the lower own-benchmark returns earned by men due to more active trading or to poor security selection? The calculations described in subsection II.E indicate that the stocks both men and women choose to sell earn reliably greater returns than the stocks they choose to buy. This is consistent with Odean [1999], who uses different data to show that the stocks individual investors
TABLE II
POSITION VALUE, TURNOVER, AND RETURN PERFORMANCE OF COMMON STOCK INVESTMENTS OF FEMALE AND MALE HOUSEHOLDS: FEBRUARY 1991 TO JANUARY 1997

Columns report, for all, married, and single households, values for women, men, and their difference (women minus men).

Panel A: Position value and turnover

Number of households: 8,005 women and 29,659 men (all); 4,894 women and 19,741 men (married); 2,306 women and 6,326 men (single).

Mean beginning position value ($): all, 18,371 women versus 21,975 men (difference −3,604***); married, 17,754 versus 22,293 (−4,539***); single, women 19,654 (the single-household male mean and difference are not recoverable from this copy).

Median beginning position value ($): all, 7,387 versus 8,218 (−831***); married, 7,410 versus 8,175 (−765***); single, 7,491 versus 8,097 (−606***).

Median monthly turnover (%): all, 1.74 versus 2.94 (−1.20***); married, 1.79 versus 2.81 (−1.02***); single, 1.55 versus 3.32 (−1.77***). The mean monthly turnover cells are not recoverable from this copy.

Panel B: Performance

Own-benchmark monthly abnormal gross return (%): the women − men differences are 0.028*** (t = 2.43) for all households and 0.045*** (t = 2.53) for single households; the married-household difference is not significant at conventional levels (t = 1.28). T-statistics for the women and men columns are −2.84 and −3.66 (all), −2.89 and −3.67 (married), and −1.64 and −3.60 (single); the underlying point estimates are not recoverable from this copy.

Own-benchmark monthly abnormal net return (%): the women − men differences are 0.078*** (t = 6.35) for all, 0.060*** (t = 3.95) for married, and 0.120*** (t = 6.68) for single households. T-statistics for the women columns are −9.70 (all), −9.10 (married), and −6.68 (single); the men's t-statistics and the point estimates are not recoverable from this copy.

***, **, * indicate significant at the 1, 5, and 10 percent levels, respectively. Tests for differences in medians are based on a Wilcoxon signed-rank test statistic. Households are classified as female or male based on the gender of the person who opened the account. Beginning position value is the market value of common stocks held in the first month that the household appears in the sample. Own-benchmark abnormal returns are computed relative to the return that would have been earned if the household had held the beginning-of-year portfolio for the entire year (i.e., the twelve months beginning February 1). T-statistics for abnormal returns are in parentheses and are calculated using time-series standard errors across months.
sell earn reliably greater returns than the stocks they buy. We find that the stocks men choose to purchase underperform those that they choose to sell by twenty basis points per month (t = ). [13] The stocks women choose to purchase underperform those they choose to sell by seventeen basis points per month (t = ). The difference in the underperformance of men and women is not statistically significant. (When we weight each trade equally rather than by its value, men's purchases underperform their sales by 23 basis points per month and women's purchases underperform their sales by 22 basis points per month.) Both men and women detract from their returns (gross and net) by trading; men simply do so more often. While not pertinent to our hypotheses (which predict that overconfidence leads to excessive trading and that this trading hurts performance), one might want to compare the raw returns of men with those of women. During our sample period, men earned average monthly gross and net returns of and percent; women earned average monthly gross and net returns of and percent. Men's gross and net average monthly market-adjusted returns (the raw monthly return minus the monthly return on the CRSP value-weighted index) were and percent; women's gross and net average monthly market-adjusted returns were and percent. [14] For none of these returns are the differences between men and women statistically significant. The gross raw and market-adjusted returns earned by men and women differed in part because, as we document in subsection III.D, men tended to hold smaller, higher beta stocks than did women; such stocks performed well in our sample period. In summary, our main findings are consistent with the two predictions of the overconfidence models. First, men, who are more overconfident than women, trade more than women (as measured by monthly portfolio turnover). Second, men lower their returns more through excessive trading than do women.
13. This t-statistic is calculated as the time-series mean of (R_t^pm − R_t^sm) divided by its standard error, σ(R_t^pm − R_t^sm)/√72.
14. The gross (net) annualized geometric mean returns earned by men and women were 18.7 (16.3) and 18.6 (16.9) percent, respectively.
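The time-series t-statistic of footnote 13 can be sketched as follows; the 72-month series below is fabricated purely to exercise the calculation:

```python
# A sketch of the time-series t-statistic in footnote 13: the mean of a
# 72-month series of return differences divided by its standard error.
import statistics

def time_series_t(series):
    n = len(series)
    se = statistics.stdev(series) / n ** 0.5  # standard error of the mean
    return statistics.mean(series) / se

# 72 fabricated months of (purchase minus sale) portfolio return differences.
diffs = [-0.002 if m % 2 == 0 else -0.001 for m in range(72)]
print(round(time_series_t(diffs), 2))  # large and negative: buys underperform sells
```

The same construction underlies the own-benchmark t-statistics in Section II.D, applied there to the 72-month series of mean monthly abnormal returns.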
Men lower their returns more than women because they trade more, not because their security selections are worse.

III.B. Single Men versus Single Women

If gender serves as a reasonable proxy for overconfidence, we would expect the differences in portfolio turnover and net return performance to be larger between the accounts of single men and single women than between the accounts of married men and married women. This is because, as discussed above, one spouse may make or influence decisions for an account opened by the other. To test this ancillary prediction, we partition our sample into four groups: married women, married men, single women, and single men. Because we do not have marital status for all heads of households in our data set, the total number of households that we analyze here is less than that previously analyzed. Position values and turnover rates of the portfolios held by the four groups are presented in the last six columns of Table II, Panel A. Married women tend to hold smaller common stock portfolios than married men; these differences are smaller between single men and single women. Differences in turnover are larger between single women and men than between married women and men, thus confirming our ancillary prediction. In the last six columns of Table II, Panel B, we present the gross and net percentage monthly own-benchmark abnormal returns for common stock portfolios of the four groups. The gross monthly own-benchmark abnormal returns of single women ( ) and of single men ( ) are statistically significant at the 1 percent level, as is their difference (0.045 monthly; 0.54 percent annually). We again stress that it is not the superior timing of the security selections of women that leads to these gross return differences. Men (and particularly single men) are simply more likely to act (i.e., trade) despite their inferior ability.
The net monthly own-benchmark abnormal returns of married women ( ) and married men ( ) are statistically significant at the 1 percent level, as is their difference (0.060). The net monthly own-benchmark abnormal returns of single women ( ) and of single men ( ) are statistically significant at the 1 percent level, as is their difference (0.120 monthly; 1.4 percent annually). Single men underperform single women by significantly more than married men underperform married women ( = 0.60; t = 2.80).
In summary, if married couples influence each other's investment decisions and thereby reduce the effects of gender differences in overconfidence, then the results of this section are consistent with the predictions of the overconfidence models. First, men trade more than women, and this difference is greatest between single men and women. Second, men lower their returns more through excessive trading than do women, and this difference is greatest between single men and women.

III.C. Cross-Sectional Analysis of Turnover and Performance

Perhaps turnover and performance differ between men and women because gender correlates with other attributes that predict turnover and performance. We therefore consider several demographic characteristics known to affect financial decision-making: age, marital status, the presence of children in a household, and income. To assess whether the differences in turnover can be attributed to these demographic characteristics, we estimate a cross-sectional regression where the dependent variable is the observed average monthly turnover for each household. The independent variables in the regression include three dummy variables: marital status (one indicating single), gender (one indicating woman), and the presence of children (one indicating a household with children). In addition, we estimate the interaction between marital status and gender. Finally, we include the age of the person who opened the account and household income. Since our income measure is truncated at $125,000, we also include a dummy variable if household income was greater than $125,000. [15] We present the results of this analysis in column 2 of Table III; they support our earlier findings. The estimated dummy variable on gender is highly significant (t = ) and indicates that (ceteris paribus) the monthly turnover in married women's accounts is 146 basis points less than in married men's.
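The cross-sectional specification above can be sketched with fabricated data; only the regression form mirrors the paper, every number below is illustrative, and the paper's full regression also includes children, age, and income controls:

```python
# A sketch of the cross-sectional turnover regression in Section III.C,
# estimated by ordinary least squares on fabricated data.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
woman = rng.integers(0, 2, n).astype(float)
single = rng.integers(0, 2, n).astype(float)

# Fabricated monthly turnover: lower for women, lower still for single women.
turnover = 6.0 - 1.5 * woman - 0.7 * single * woman + rng.normal(0.0, 1.0, n)

# Regressors: intercept, single, woman, single x woman.
X = np.column_stack([np.ones(n), single, woman, single * woman])
coef, *_ = np.linalg.lstsq(X, turnover, rcond=None)
print(np.round(coef, 2))  # approximately [6.0, 0.0, -1.5, -0.7]
```

Under this design the coefficient on the woman dummy measures the married women versus married men gap, and the interaction adds the extra gap for single women, matching how the paper reads its Table III estimates.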
The differences in turnover are significantly more pronounced between single women and single men; ceteris paribus, single

15. Average monthly turnover for each household is calculated for the months during which common stock holdings are reported for that household. Marital status, gender, the presence of children, age, and income are from Infobase's records as of June 8, 1997. Thus, the dependent variable is observed before the independent variables. This is also true for the cross-sectional tests reported below.
TABLE III
CROSS-SECTIONAL REGRESSIONS OF TURNOVER, OWN-BENCHMARK ABNORMAL RETURN, BETA, AND SIZE: FEBRUARY 1991 TO JANUARY 1997

Dependent variables (columns): mean monthly turnover (%); own-benchmark abnormal net return; portfolio volatility; individual volatility; beta; size coefficient. Independent variables (rows): intercept; Single; Woman; Single × woman; Age/100; Children; Income/1000; Income dummy. The full coefficient matrix is not cleanly recoverable from this copy. Among the clearly recoverable estimates: the Woman coefficient is −1.461*** for turnover, 0.058*** (t = 4.27) for the own-benchmark abnormal net return, −0.689*** (t = −7.27) for portfolio volatility, −0.682*** (t = −8.54) for individual volatility, −0.037*** (t = −3.91) for beta, and −0.136*** (t = −8.00) for size; the Single coefficient for turnover is 0.483*** (t = 4.24); the Single × woman coefficient for turnover is −0.733*** (t = −3.38); the Children and Income dummy coefficients are insignificant throughout, and Income is significant only for beta (t = 2.49).

***, **, * indicate significantly different from zero at the 1, 5, and 10 percent levels, respectively. + indicates significantly different from one at the 1 percent level. Each regression is estimated using data from 26,618 households. The dependent variables are the mean monthly percentage turnover for each household, the mean monthly own-benchmark abnormal net return for each household, the portfolio volatility for each household, the average volatility of the individual common stocks held by each household, the estimated beta exposure for each household, and the estimated size exposure for each household. Own-benchmark abnormal net returns are calculated as the realized monthly return for a household less the return that would have been earned if the household had held the beginning-of-year portfolio for the entire year. Portfolio volatility is the standard deviation of each household's monthly portfolio returns.
Individual volatility is the average standard deviation of monthly returns over the previous three years for each stock in a household's portfolio. The average is weighted equally across months and by position size within months. The estimated exposures are the coefficient estimates on the independent variables from time-series regressions of the gross household excess return on the market excess return (R_mt − R_ft) and a zero-investment size portfolio (SMB_t). Single is a dummy variable that takes a value of one if the primary account holder (PAH) is single. Woman is a dummy variable that takes a value of one if the primary account holder is a woman. Age is the age of the PAH. Children is a dummy variable that takes a value of one if the household has children. Income is the income of the household and has a maximum value of $125,000. When Income is at this maximum, Income dummy takes on a value of one. (t-statistics are in parentheses.)

women trade 219 basis points (146 plus 73) less than single men. Of the control variables we consider, only age is significant; monthly turnover declines by 31 basis points per decade of age. We next consider whether our performance results can be explained by other demographic characteristics. To do so, we estimate a cross-sectional regression in which the dependent variable is the monthly own-benchmark abnormal net return earned by each household. The independent variables for the
I have a script that spawns spawn points. What I am trying to make happen is that a spawn point will spawn randomly (x,y) just off the screen.
The character in this small game doesn't move but can rotate 360 degrees, and enemies can randomly come at him from any direction. I'm stuck here. The same thing happens every time and it's not random: I get spawners that spawn just outside the top and spawners that appear to spawn right above my character. I'm obviously doing something very wrong.
public class SpawnPointController : MonoBehaviour {
    // array of our spawn points
    public GameObject[] SpawnPoints;
    // variables for spawn delay
    private float nextSpawn = 0;
    public float spawnRate = 2;
    // amount of spawners
    public int SpawnAmount = 5;
    // spawner prefab
    public GameObject SpawnPointPrefab;
    // spawn point position
    public Vector3 v3Pos = new Vector3 (Random.Range(-.15f,1.15f), Random.Range(-.15f,1.15f), 19);

    // Use this for initialization
    void Start () {
    }

    // Update is called once per frame
    void Update () {
        SpawnPoints = GameObject.FindGameObjectsWithTag("SpawnPoint");
        v3Pos = Camera.main.ViewportToWorldPoint(v3Pos);
        for (int i = 0; i < SpawnAmount; i++)
        {
            if (Time.time > nextSpawn && SpawnAmount > 0)
            {
                nextSpawn = Time.time + spawnRate;
                GameObject pos = SpawnPoints[Random.Range (0, SpawnPoints.Length)];
                Instantiate(SpawnPointPrefab, v3Pos, Quaternion.identity);
                SpawnAmount--;
            }
        }
    }
}
Answer by robertbu · Jul 25, 2014 at 08:01 AM
I spot three problems and a potential problem right away. First this line:
public Vector3 v3Pos = new Vector3 (Random.Range(-.15f,1.15f), Random.Range(-.15f,1.15f), 19);
Does not create a spawn point off the screen. For example, it could return (0.5, 0.5) and be in the middle of the screen.
Problem #2, is that you only initialize v3Pos to a viewport coordinate at the top of the file, and that initialization is only done once when the script is attached to the game object.
On line 28, you convert v3Pos to a world coordinate, but subsequent passes through Update() are passing a world coordinate to ViewportToWorldPoint().
Might I suggest the following for the generation of spawn points just outside the screen.
v3Pos = new Vector3(0.857f, 0.857f, 0.0f);
v3Pos = Quaternion.AngleAxis(Random.Range(0.0f, 360.0f), Vector3.forward) * v3Pos;
v3Pos += new Vector3(0.5f, 0.5f, 19.0f);
v3Pos = Camera.main.ViewportToWorldPoint(v3Pos);
v3Pos could be local to Update() and not a public class instance variable. This code works by creating a vector the length from the center of the screen to just off the top-right corner of the screen. Then it randomly rotates that vector and adds it to the center of the screen (in viewport coordinates). Since the potential spawn points lie on a circle around the origin, the spawn points will be a little bit further from the screen when spawned off the middle of an edge. Not sure if that is a problem for you or not.
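The rotation-based placement above is language-agnostic geometry; a quick sketch of the same math (in Python rather than Unity C#, purely to illustrate) confirms that every generated point lands outside the unit viewport square:

```python
import math
import random

def spawn_viewport_point(radius=0.857 * math.sqrt(2)):
    """Pick a point on a circle around the viewport centre (0.5, 0.5).

    The default radius is the length of the vector (0.857, 0.857),
    i.e. the distance from the centre to just past the top-right
    corner, matching the answer's starting vector.
    """
    angle = random.uniform(0.0, 2.0 * math.pi)
    x = 0.5 + radius * math.cos(angle)
    y = 0.5 + radius * math.sin(angle)
    return x, y

def outside_viewport(x, y):
    # Valid viewport coordinates lie in [0, 1] on both axes.
    return x < 0.0 or x > 1.0 or y < 0.0 or y > 1.0

# Every sampled point falls off-screen: the circle's radius (~1.212)
# exceeds the farthest on-screen distance from the centre (~0.707,
# the half-diagonal of the viewport square).
points = [spawn_viewport_point() for _ in range(1000)]
all_off_screen = all(outside_viewport(x, y) for x, y in points)
```

This also makes the mid-edge behaviour visible: the distance beyond a mid-edge is about 0.712 in viewport units, versus about 0.505 beyond a corner.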
Thanks, that worked.
Valid viewport coordinates are between 0 and 1 where (0, 0) is the left bottom screen point and (1, 1) the top right. Using values greater than 1 or smaller than 0 might result in strange results depending on the projection used.
Also keep in mind that a screen usually is wider than tall, so your circle becomes an ellipse.
@Bunny83 - Can you explain under what conditions the Viewport coordinates fail? So you are saying the OP should not solve this problem this way but instead do something else...like use viewport coordinates to find the center and edges and then use world space coordinates to move beyond the.
Hot questions for Using Neural networks in ruby
Question:
I want to train a neural network with the sine() function.
Currently I use this code and the (cerebrum gem):
require 'cerebrum'

input = Array.new
300.times do |i|
  inputH = Hash.new
  inputH[:input] = [i]
  sinus = Math::sin(i)
  inputH[:output] = [sinus]
  input.push(inputH)
end

network = Cerebrum.new
network.train(input, {
  error_threshold: 0.00005,
  iterations: 40000,
  log: true,
  log_period: 1000,
  learning_rate: 0.3
})

res = Array.new
300.times do |i|
  result = network.run([i])
  res.push(result[0])
end
puts "#{res}"
But it does not work, if I run the trained network I get some weird output values (instead of getting a part of the sine curve).
So, what am I doing wrong?
Answer:
Cerebrum is a very basic and slow NN implementation. There are better options in Ruby, such as
ruby-fann gem.
Most likely your problem is the network is too simple. You have not specified any hidden layers - it looks like the code assigns a default hidden layer with 3 neurons in it for your case.
Try something like:
network = Cerebrum.new({ learning_rate: 0.01, momentum: 0.9, hidden_layers: [100] })
and expect it to take forever to train, plus still not be very good.
Also, your choice of 300 outputs is too broad - to the network it will look mostly like noise and it won't interpolate well between points. A neural network does not somehow figure out "oh, that must be a sine wave" and match to it. Instead it interpolates between the points - the clever bit happens when it does so in multiple dimensions at once, perhaps finding structure that you could not spot so easily with a manual inspection. To give it a reasonable chance of learning something, I suggest you give it much denser points e.g. where you currently have
sinus = Math::sin(i)

instead use:
sinus = Math::sin(i.to_f/10)
That's still almost 5 iterations through the sine wave, which should hopefully be enough to prove that the network can learn an arbitrary function.
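The arithmetic behind "almost 5 iterations": 300 samples of sin(i/10) span 30 radians, and 30 / (2π) is about 4.77 periods. A quick check of that arithmetic (sketched in Python here, though the thread is about Ruby):

```python
import math

samples = 300
step = 1.0 / 10.0          # sin(i / 10) instead of sin(i)
span = samples * step      # radians covered by the training inputs
periods = span / (2.0 * math.pi)  # ~4.77 full sine periods
```

With the original sin(i), consecutive integer inputs jump a whole radian at a time, so the 300 points sample nearly 48 periods far too sparsely to look like a smooth curve to the network.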
Question:
After) }
Everything worked out fine. The network was trained in 4703.664857 seconds.
The network will be trained much faster when I normalise the input/output to a number between 0 and 1.
ai4r uses a sigmoid function, so it's clear that it does not output negative values. But why do I have to normalise the input values? Does this kind of neural network only accept input values < 1?
In the sine example, is it possible to input any number as in:
Input: -10.0 -> Output: 0.5440211108893699 Input: 87654.322 -> Output: -0.6782453567239783 Input: -9878.923 -> Output: -0.9829544956991526
or do I have to define the range?
Answer:.
Question:
This is a litle modified sample program I took from FANN website.
The equation I created is c = pow(a,2) + b.
Train.c
#include "fann.h"

int main() {
    const unsigned int num_input = 2;
    const unsigned int num_output = 1;
    const unsigned int num_layers =, "sample.data", max_epochs, epochs_between_reports, desired_error);
    fann_save(ann, "sample.net");
    fann_destroy(ann);
    return 0;
}
Result.c
#include <stdio.h>
#include "floatfann.h"

int main() {
    fann_type *calc_out;
    fann_type input[2];
    struct fann *ann = fann_create_from_file("sample.net");
    input[0] = 1;
    input[1] = 1;
    calc_out = fann_run(ann, input);
    printf("sample test (%f,%f) -> %f\n", input[0], input[1], calc_out[0]);
    fann_destroy(ann);
    return 0;
}
I created my own dataset
dataset.rb
f = File.open("sample.data", "w")
f.write("100 2 1\n")
i = 0
while i < 100 do
  first = rand(0..100)
  second = rand(0..100)
  third = first ** 2 + second
  string1 = "#{first} #{second}\n"
  string2 = "#{third}\n"
  f.write(string1)
  f.write(string2)
  i = i + 1
end
f.close
sample.data
100 2 1
95 27
9052
63 9
3978
38 53
1497
31 84
1045
28 56
840
95 80
9105
10 19
...
The first line of sample.data gives the number of samples, the number of inputs, and the number of outputs.
But I am getting an error
FANN Error 20: The number of output neurons in the ann (4196752) and data (1) don't match Epochs
What's the issue here? How does it calculate
4196752 neurons?
Answer:
Here, using fann_create_standard, the function signature is
fann_create_standard(num_layers, layer1_size, layer2_size, layer3_size...), whilst you are trying to use it differently:
struct fann *ann = fann_create_standard(num_layers, num_input, num_neurons_hidden, num_output);
you construct a network with 4 layers, but only provide data for 3. The 4196752 neurons in the output layer are likely coming from an undefined value.
Python Gems #5: silent function chaining
This one is a bit more involved then the last few, because it’s not a specific syntax feature but a way of writing your code.
Take a class, GUIWorker:
class GUIWorker:
    def __init__(self, url):
        self.url = url

    def click(self):
        system.click(self.url)

    def write(self, sentence):
        system.write(self.url, sentence)
If a common use pattern was:
w = GUIWorker('textfield.png')
w.click()
w.write(sentence)
It might make sense to be able to write something like:
w.click().write(sentence)
This is called function chaining. It’s a powerful concept enabled by a very simple internal change:
class GUIWorker:
    def __init__(self, url):
        self.url = url

    def click(self):
        system.click(self.url)
        return self

    def write(self, sentence):
        system.write(self.url, sentence)
        return self
But what if we have a method, .if_true(predicate), where we want to continue the chain only when the predicate is true, and silently ignore the rest of the chain otherwise? Introducing: Silent Chain Fail
This is a super powerful (and somewhat ‘magical’) concept. By creating an empty object and overriding its
__getattr__ method to return itself if it doesn’t have the requested attribute, you can block off the rest of the call chain without any errors being thrown.
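A minimal sketch of the idea, adapted to be self-contained (counters stand in for the post's system calls; the names ChainStopper and if_true are illustrative, not from the original post):

```python
class ChainStopper:
    """Absorbs any attribute access or call, so the rest of a
    method chain silently does nothing."""
    def __getattr__(self, name):
        return self

    def __call__(self, *args, **kwargs):
        return self

class GUIWorker:
    def __init__(self, url):
        self.url = url
        self.clicks = 0
        self.sentences = []

    def click(self):
        self.clicks += 1
        return self

    def write(self, sentence):
        self.sentences.append(sentence)
        return self

    def if_true(self, predicate):
        # Continue the chain only when the predicate holds;
        # otherwise hand back the absorbing sentinel.
        return self if predicate else ChainStopper()

w = GUIWorker('textfield.png')
w.if_true(True).click().write('hello')     # runs normally
w.if_true(False).click().write('ignored')  # silently skipped
```

Because __getattr__ is only consulted for attributes the object does not already have, the sentinel answers every lookup with itself, and __call__ keeps the chain alive through method calls as well.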
This is a daily series called Python Gems. Each short post covers a detail or a feature of the Python language that you can use to increase your code's readability while decreasing its length.
0
I am trying to write a program that displays two random numbers to be added together. The program should wait for the answer to be input. If the answer is correct, it should display a statement saying it is correct. If the answer is wrong, it needs to display that it is incorrect and then display the correct answer.
Thanks.
Here is what I have so far. I cannot figure out the display part: how to show the statements and, if needed, the correct answer. Go easy, I am new at this and it is my first attempt at programming.
#include <iostream>
#include <cstdlib>
#include <iomanip>
#include <ctime>
using namespace std;

int main () {
    srand((unsigned)time(0));
    int random_integer1;
    int random_integer2;
    double answer;
    random_integer1 = (rand()%500)+1;
    random_integer2 = (rand()%500)+1;
    cout << setw (6) << random_integer1 << endl;
    cout << "+ ";
    cout << setw (4) << random_integer2 << endl;
    cout << "------";
    cout << "\n";
    cin >> answer;
    cout << "\n";
    return 0;
}
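The missing step is just an equality test of the input against the computed sum. A hedged sketch of that check (in Python, not a drop-in answer for the C++ code above; the function name and messages are illustrative):

```python
import random

def addition_quiz(answer, a=None, b=None):
    """Check a user's answer to a + b and return the feedback string.

    a and b default to random integers in [1, 500], mirroring the
    (rand() % 500) + 1 expressions in the question.
    """
    if a is None:
        a = random.randint(1, 500)
    if b is None:
        b = random.randint(1, 500)
    correct = a + b
    if answer == correct:
        return "That is correct"
    return "That is incorrect, the correct answer is %d" % correct

# With fixed operands the feedback is deterministic:
right = addition_quiz(30, a=10, b=20)
wrong = addition_quiz(31, a=10, b=20)
```

In the C++ version the same shape applies: compute the sum into a variable, read the answer with cin, then branch on equality and print either message.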
Edited by mike_2000_17: Fixed formatting
import "github.com/go-kit/kit/util/conn"
Package conn provides utilities related to connections.
ErrConnectionUnavailable is returned by the Manager's Write method when the manager cannot yield a good connection.
AfterFunc imitates time.After.
Dialer imitates net.Dial. Dialer is assumed to yield connections that are safe for use by multiple concurrent goroutines.
NewDefaultManager is a helper constructor, suitable for most normal use in real (non-test) code. It uses the real net.Dial and time.After functions.
Put accepts an error that came from a previously yielded connection. If the error is non-nil, the manager will invalidate the current connection and try to reconnect, with exponential backoff. Putting a nil error is a no-op.
Take yields the current connection. It may be nil.
Write writes the passed data to the connection in a single Take/Put cycle.
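The Take/Put contract described above can be illustrated with a minimal sketch. This is Python pseudocode for the pattern, not the package's actual Go implementation, and it omits the exponential backoff:

```python
class ConnManager:
    """Yields a connection via take(); put(err) with a non-nil error
    invalidates it, so the next take() reconnects."""
    def __init__(self, dial):
        self.dial = dial          # factory imitating net.Dial
        self.conn = None
        self.reconnects = 0

    def take(self):
        # Yield the current connection, dialing a new one if needed.
        if self.conn is None:
            self.conn = self.dial()
            self.reconnects += 1
        return self.conn

    def put(self, err):
        # Putting a nil (None) error is a no-op; a real error
        # invalidates the current connection.
        if err is not None:
            self.conn = None

mgr = ConnManager(dial=lambda: object())
c1 = mgr.take()
mgr.put(None)                     # no-op: connection kept
c2 = mgr.take()                   # same connection as c1
mgr.put(RuntimeError("broken"))   # invalidate
c3 = mgr.take()                   # fresh connection
```

Write in the real package is this same Take/Put cycle wrapped around a single send, returning ErrConnectionUnavailable when no good connection can be yielded.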
Package conn imports 4 packages (graph) and is imported by 15 packages. Updated 2017-08-04.
Why is instance management a problem?

The trouble with the internal representations that attach to Tcl_Obj values is that they're too easy to lose. If we simply use the reference count of the Tcl_Obj to manage whether a reference is "live", we run into several problems that cause us to collect garbage prematurely.

1. Shimmering. The interpreter may find it necessary to convert the object in question to another internal representation. This problem is actually not a major cause of premature garbage collection in systems like Jacl and TclBlend. The typical string representation in a Tcl_Obj that's used as an object instance is something that can't be mistaken for, nor converted to, anything else. Very occasionally, one can get bitten by the duality between strings and lists, causing the internal representation of the object to be converted to a Tcl list. This problem could be avoided for handles and for opaque objects by choosing a string representation that is not a well-formed list, for example one that contains an unbalanced brace.

2. Interpolation into strings. This issue is a much commoner cause of premature garbage collection. There are a number of reasons to embed an object reference into a string. One common one is to create callbacks:
set foo [createAFunkyObject]    ;# foo's internal representation contains
                                ;# an object pointer
button .b -text "Funky" -command "$foo doSomething"

When the variable foo goes out of scope, it is likely that the callback string of .b will be the only reference to the object: since that string has only a string representation, there is no longer a reference to the object. The object immediately becomes a candidate for garbage collection. When the button is invoked, the object no longer exists.

This problem can be avoided in some cases by creating callbacks that are pure lists. The interpreter has special code that bypasses the parser when a pure list is evaluated, so the Tcl_Obj containing the object in question can be kept alive. Alas, this scheme all too soon breaks down, because not every possible callback can be represented in that way. Moreover, if garbage-collected object systems become popular, this technique will be well-nigh impossible to explain to newcomers. Furthermore, even under the best of circumstances, it requires the interpreter to maintain a global cache mapping object names to object values, so that an object name that has been interpolated into a string by means of a command like:
bind $w <Button-1> [list doSomething $foo %x %y]

can have its internal representation recovered.
The string representation is stupid.

As Donal points out [2]: I have a feeling that if you could somehow attach to strings a notion of what objects are "contained" within that string, you could do something reasonably smart with operations like string-concatenation and [eval].

The problem is that the Tcl_Obj has only one idea of a string representation: a pointer to a UTF-8 character string. The notion that "everything is a string" has served us well for as long as Tcl has existed. Nevertheless, its elegant simplicity comes at a price: we have no way of attaching additional information to substrings of a string.
A smarter string representation.

For nearly as long as Tcl has existed, I (KBK) have wanted a more "intelligent" string representation. This desire originally came less from wanting to preserve object information and more from wanting to achieve better performance on common operations such as concatenation and substring extraction. All too often, the current implementation requires that strings be copied in memory.

I contend that the current concept of "string representation" is too simple to address both instance management and performance on large strings. In fact, even though the text widget in Tk conceptually operates on a single large string of characters, from the very beginning it has used an internal representation of that string based on B-trees. Tcl strings with embedded object references need something similar.

This discussion reminds me of a text editor that I helped develop in the late 1970's. In the editor, a file was represented as a B-tree of lines. Each node in the tree contained counts of lines and characters in its subtree; nodes could also be paged out to external memory (remember that typical systems of the time had at most 64k bytes of RAM!). Nodes could also be annotated: a node could be labeled with additional data. One thing that the annotations were used for was script processing: there was a little language that controlled the editor. When source code for an editor macro was processed, the B-tree representing the macro string was annotated with file name and line number; this structure was used to produce error messages and stack traces.
Not only would it solve the perennial problem of locating the source line after an error in a Tcl script, but it also provides an ideal framework for keeping track of Donal's notion of what objects are "contained" within the string.
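The kind of tree the page has in mind can be sketched as a simplified rope whose leaves carry source annotations. This is a toy illustration under stated assumptions (a binary rope rather than a real B-tree, and file/line tuples as the annotation), not an actual Tcl internals proposal:

```python
class RopeLeaf:
    """A leaf holds an immutable string plus optional metadata,
    e.g. the source file and line the text came from."""
    def __init__(self, text, annotation=None):
        self.text = text
        self.annotation = annotation
        self.length = len(text)

    def to_string(self):
        return self.text

class RopeConcat:
    """An inner node concatenates two subtrees in O(1); the cached
    length of the left subtree acts as an order statistic."""
    def __init__(self, left, right):
        self.left = left
        self.right = right
        self.length = left.length + right.length

    def to_string(self):
        return self.left.to_string() + self.right.to_string()

def char_at(node, i):
    # Descend using the left subtree's length; O(depth), no copying.
    while isinstance(node, RopeConcat):
        if i < node.left.length:
            node = node.left
        else:
            i -= node.left.length
            node = node.right
    return node.text[i], node.annotation

rope = RopeConcat(RopeLeaf("set foo ", annotation=("init.tcl", 12)),
                  RopeLeaf("[bar]", annotation=("init.tcl", 13)))
```

char_at recovers both the character and the annotation of the substring it came from, which is exactly what error messages with accurate file/line information would need.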
How do we get there from here?

There are many details that need to be worked out; this idea must be a candidate for Tcl 9 because it will unquestionably break extensions. One possible roadmap might be:
- Augment Tcl_Obj with another pointer, to the head of the tree that holds its annotated string representation.
- Modify Tcl_GetStringFromObj, Tcl_SetStringObj, and so on to use this tree in preference to the string stored in bytes. Note that Tcl_GetStringFromObj will still need to maintain bytes so that it can be freed; callers to this function expect the string to be stored in Tcl's internal memory.
- Modify string operations like [string] and [append] to work on the tree in preference to bytes. This ought to buy a fair amount of performance on large strings because it will save memory copying.
- Modify the source to annotate each line of the source text with file and line number. This technique is much more powerful than the one suggested in TIP #86 [3]; it allows for correct line number tracing even in the presence of constructs such as [eval], [uplevel], [namespace code] and [proc]s that define other [proc]s.
RS: Sounds interesting. Once annotated strings make it robustly into the Tcl core, they might also replace the text-specific B-tree... And: do you think it's possible to expose the annotation at the script level?
FPX: This is interesting. In my understanding, this feature is about objects that are more than their string representation, e.g. opaque data structures at the C level that you want to represent by a pointer value. One important keyword in this context is Feather, which would do the same with code -- its lambdas have information (namely, their implementation) that is not expressed by their stringified representation.
AK: A similar thing I saw somewhere bandied around is the notion of a concat-string-object. Essentially an object maintaining a list of objects (like a list), but the string is calculated by direct concatenation and not by [join]. One interesting application would be variable interpolation in an eval'd string, for example [eval "$cmd foo bar"], because it should allow us to extend the existing optimization of eval for pure lists to more general strings as well, without losing the internal representation of "cmd".
male - 2007-05-16: This sounds really interesting, but ... when I think of the application I work on, an old application from the Tcl 7.4 times with many constructs really built on the "everything is a string" concept, then I'm a bit afraid that all this mostly pure string processing would increase memory consumption a lot.

Why am I afraid? The C(++) API to Tcl is old and completely based on the old Tcl_CreateCommand interface. So every communication with C++-sided functionality will break the Tcl_Obj concept and will work on strings.

Is it possible to estimate the "new" overhead if a plain character-based string were changed into a B-tree or annotated string representation? I mean both the runtime overhead in processing and the memory overhead!

I would be cautious about asking for an enhancement of the string handling, because Tcl is currently very quick and does not consume too much memory working with a lot of strings. But what will be the consequence of introducing a new, much more complex string representation?

Best regards, Martin
FB: It seems that ropes perfectly fit the above needs. Tcl strings represented as ropes would be B-trees whose leaves point to portions of immutable strings. Not only would potentially expensive operations be cheaper, but one could associate metadata with the source strings. For example, a source command would load a string into memory with information about the file associated as metadata, and the ropes pointing to this source string (lists, procs) would access this metadata to generate meaningful messages. One could also mix different representations of strings in one single rope, for example portions in UTF-8 or 16-bit Unicode byte arrays, eliminating shimmering (byte arrays are internal reps). Utility procs would simplify iteration and random access. See Cloverfield, section Data structures.
GUI Framework with all Dependencies Embedded
"Raudrohi" stands for Achillea millefolium, i.e. yarrow, in Estonian. "Raud" stands for "iron" in Estonian. "Rohi" stands for "grass" and "medical drug" in Estonian.
The Raudrohi JavaScript Library (hereafter: RJSL) is a collection of 3. party libraries and code written by me, martin.vahi@softf1.com. The RJSL includes a widgets based framework, where some of the widgets are non-graphical.
The GitHub account ( ) contains only a crippled, incomplete version of the RJSL. A complete development package (size: ~1GiB) can be downloaded from:
An almost complete development version can be downloaded from the npm registry by executing
npm install raudrohi
Issue tracking:
This version exists mainly for 3 reasons:
fulfill dependencies of other projects;
to allow other, a bit bold, people to experiment with the RJSL;
to demonstrate to possible employers/clients that I have done something in JavaScript.
The documentation of it is in a pretty shoddy state. The shoddiest parts of the documentation are not even published.
The Raudrohi JavaScript Library is not a tool for everyone. The needs of web developers are intentionally ignored and probably they will always be ignored. The background of the targeted audience is software development.
The code style is dynamic language oriented: any hack that is considered to work reliably on targeted web browsers, is applied, regardless of how many "best practices" and "good code style" rules it violates.
The output of various code style policing software, JSLint, JSHint, etc., is intentionally ignored. Despite the very hackish nature of the code, almost every design decision has been evaluated from speed and memory usage point of view.
The RJSL can have a high learning curve and the architecture of the RJSL has been optimized to minimize the amount of work that software developers, who have crossed the learning curve, have to do to implement a web application. Nothing is, and hopefully never will be, dumbed down, regardless of the barrier of entry that the solution imposes. In Albert Einstein style: the RJSL is kept as simple as possible, but not simpler.
Backwards compatibility between different versions is, intentionally, NOT MAINTAINED.
The RJSL consists of global namespaces, which wrap functions that can be used by ignoring all the rest of the RJSL.
The RJSL widgets can be non-grapical and all instances of widgets can communicate with each other by messaging. The messages are plain JavaScript objects that have the following fields:
target instance specific ID as a string, i.e. "instance phone number"
origin instance specific ID as a string
message type as a string, i.e. target instance can use that value to reroute the message to an instance specific method that processes that kind of messages.
data field, which accepts plain JavaScript objects, which can be strings.
To send a message to the web server, one sends a message to a specific widget instance that acts as a gateway to the server. Server responses are received by the gateways, which then repackage the response to the message objects and send the message objects to the widget that is targeted by the response.
The properties of the widgets probably depend on the RJSL version. The API is always subject to refactoring and no backwards compatibility is maintained.
The RJSL framework allows the creation of composite widgets by using HTML and designating the positions of subwidgets by DIV-tags. The custom widgets can be reused for composing new widgets and the custom widgets fit into the RJSL framework as if they were builtin widgets.
Widgets can be non-graphical and have a role of a general building component.
Each of the widgets has an on-off state to regulate the usage of network traffic. For example, there's no point of asking data from the server before a log-in session has been established and widgets that depend on private data can be switched on and off according to the existence of a login-session.
Each of the widgets has a built-in state machine that interacts with the built-in state machines of its subwidgets. The state machine has one default, mandatory, state called "zero". Every time the "zero" state is entered, all of the widget's subwidgets are also set to the state "zero". States can be grouped into user-defined clusters and the state "zero" is by default part of every cluster. State transitions and cluster-transitions can trigger user-defined actions.
The main benefit of the widgets built-in state machine system is that it facilitates the writing of GUI business logic. That's actually, what the state machine system has been designed for, but it's possible to ignore its existence, i.e. it's not mandatory to use it.
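The recursive reset into the mandatory "zero" state can be sketched as follows. This is illustrative Python, not the RJSL API; the class and method names are assumptions:

```python
class StatefulWidget:
    """Every widget has a mandatory default state 'zero'; entering
    it resets all subwidgets to 'zero' as well."""
    def __init__(self, subwidgets=()):
        self.state = "zero"
        self.subwidgets = list(subwidgets)

    def set_state(self, state):
        self.state = state
        if state == "zero":
            # Entering "zero" cascades to every subwidget.
            for sub in self.subwidgets:
                sub.set_state("zero")

leaf_a, leaf_b = StatefulWidget(), StatefulWidget()
root = StatefulWidget([leaf_a, leaf_b])
leaf_a.set_state("editing")
leaf_b.set_state("loading")
root.set_state("zero")  # both leaves return to "zero"
```

A real implementation would additionally fire the user-defined state-transition and cluster-transition actions mentioned above; the sketch only shows the cascade.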
As of October 2012 the widgets' built-in state system allows user-defined states to belong to more than one cluster, but that's a fundamentally flawed approach, because that way it's not possible to determine the execution order of state-cluster transition event handlers. If client code is written with the limitation that every user-defined state belongs to at most one state cluster, then the refactoring of the state system will probably not break the client code.
The RJSL contains a global message passing system, that allows any RJSL widget instance to send messages to any other RJSL widget instance.
AJAX communication with the web server is normalized out by wrapping the gate to the web server into one of the non-graphical widgets.
As of October 2012 the message passing system API has to be rewritten, because a cleaner specification for it now exists, but the system that exists is not fundamentally flawed. Its API and protocol are the parts that need to be heavily refactored.
The RJSL message passing system contains means for handling AJAX responses that come in too late, have become "irrelevant", or are duplicates.
Implementation overview: Every widget has a microsession counter. If a widget A sends out a message to widget B and the microsession counter of widget A is incremented before the answer from the widget B arrives to the widget A, then the answer from the widget B is dismissed, because all answer-carrying messages that have a different origin microsession counter value than the current microsession counter value are dismissed. The widget B may, but does not have to, represent the web server.
Example scenario: widget A orders a list of names from a server and displays the text "Loading..." to the user while waiting for the answer. The user changes their mind and sets the widget A to a different state that has to display something else, which the widget A has to order from the server. Widget A orders the new type of data, but receives the previously ordered list of names before the new set of data arrives. As the state of the widget A has changed, the microsession counter value of widget A has changed, the list of names is dismissed, and the widget A keeps on waiting for the newer set of data.
Each of the RJSL widgets has its own microsession counter, but it's not mandatory to use it, nor is it mandatory to initialize it in client code.
Historical note: the microsession architecture emerged purely from practice and is a result of tedious bugfixing/refactoring.
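The dismissal rule behind the microsession counters can be sketched language-neutrally. This is illustrative Python, not the RJSL API; the field and method names are assumptions:

```python
class Widget:
    """Tracks a microsession counter; replies stamped with an older
    counter value than the current one are silently dropped."""
    def __init__(self):
        self.microsession = 0
        self.received = []

    def new_microsession(self):
        # e.g. called on a state change that invalidates pending answers
        self.microsession += 1
        return self.microsession

    def on_reply(self, reply):
        if reply["origin_microsession"] != self.microsession:
            return False  # stale answer: dismiss
        self.received.append(reply["data"])
        return True

w = Widget()
first = w.new_microsession()
# The user changes state before the first answer arrives:
second = w.new_microsession()
stale = {"origin_microsession": first, "data": "list of names"}
fresh = {"origin_microsession": second, "data": "new data"}
w.on_reply(stale)   # dismissed
w.on_reply(fresh)   # accepted
```

This mirrors the "Loading..." scenario above: the late list of names carries the old counter value and is dropped, while the answer for the current state is kept.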
The widgets visibility is interpreted in relation to the widget's parent widget. If a parent widget is visible, then only the subwidgets that have its visibility bit set, are visible. The visibility is meant to change during runtime and all of the widgets are responsible for maintaining their own data. For example, if a textarea widget is set to be hidden, then it saves its text and renders it next time, when it becomes visible again.
All graphical widgets have a "readonly mode" and "editable mode". The mode is imposed recursively. For example, if the editability bit of a report widget is set to "false", then all of its data entry fields can switch to readonly mode. That functionality allows the same, document/application specific, widgets to be used for both, displaying and editing documents.
An illustration: a text field widget changes from "text field" to "plain text" if it is switched from editable mode to readonly mode. The button widget implements the readonly/editable modes by being enabled/disabled.
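The recursive imposition of the mode might be sketched like this, again with illustrative names rather than the actual RJSL API:

```javascript
// Sketch of recursively imposed readonly/editable modes
// (hypothetical names, not the actual RJSL API).
function Widget(name) {
    this.name = name;
    this.editable = true;
    this.children = [];
}

Widget.prototype.setEditable = function (flag) {
    this.editable = flag;
    // Impose the mode on all subwidgets, e.g. a report widget
    // switching all of its data entry fields to readonly mode.
    this.children.forEach(function (child) {
        child.setEditable(flag);
    });
};
```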
Browser normalization is based on a bottom layer which wraps third party libraries.
That means that the RJSL can probably keep up with browser evolution by swapping the third-party libraries/library versions in the bottom layer, without any need for changes in the rest of the RJSL.
Historical note: one of the first versions of the RJSL did not wrap the third-party library, YUI 2.something, in a bottom layer, and when it came time to update the YUI library, heavy refactoring had to be done. The reason for the update was that YUI 2.something had become obsolete and did not support the latest browser versions.
The Microsoft Internet Explorer is not supported and probably will never be supported.
The API of the RJSL is not even meant to be stable. It will be refactored to any extent that one feels comfortable with.
The RJSL will probably never be popular, because it is targeted at a technical audience and the needs of web developers and novice/hobby programmers are intentionally ignored.
All of the RJSL, including the 3rd-party parts, is under a license that allows redistribution, modification and commercial, closed-source use.
The parts that have been written by me, martin.vahi@softf1.com, are placed in the namespace raudrohi and are under the BSD license, except for the code examples in ./src/examples, which are in the public domain.
Namespace liilia contains code that was not written by me, Martin Vahi, but has been modified by me. The word "liilia" is Estonian for "lilium".
To develop JavaScript applications that use the RJSL, one only needs the files from ./src/release, which also contains all of the RJSL dependencies.
The "Hello World" resides at ./src/examples/lesson_01_hello_world and it works "out of the box".
The RJSL uses HTML5.
Projects that depend on the RJSL probably depend on an environment variable called RAUDROHI_HOME, which is meant to point to the folder that contains the README.md that You are currently reading.
Build scripts are all Linux-specific. To build the RJSL, the PATH must contain
Java 7, Ruby 2.x.x, Rake
The rest of the tools are bundled with the RJSL development deliverables.
Release version of the RJSL is built by
cd $RAUDROHI_HOME/src/dev_tools
rake build
Debug version of the RJSL is built by
cd $RAUDROHI_HOME/src/dev_tools
rake b
The RJSL depends on a specific version of the mmmv_devel_tools ( ). To save RJSL developers the work of finding and installing the right version of the mmmv_devel_tools, the RJSL build scripts use a copy of the mmmv_devel_tools from
$RAUDROHI_HOME/src/dev_tools/lib/mmmv_devel_tools
The mmmv_devel_tools depends on a specific version of the Kibuvits Ruby Library (hereafter KRL, ). To save the users of the mmmv_devel_tools the work of finding and installing the right version of the KRL, the mmmv_devel_tools uses a copy of the KRL that is bundled with it. The RJSL build scripts, mainly the
$RAUDROHI_HOME/src/dev_tools/Rakefile
also use the KRL. As of 2013_11_04, no more than one version of the KRL can be used in a single Ruby script. Consequently, the mmmv_devel_tools determines the KRL version for the RJSL build scripts.
The
$RAUDROHI_HOME/src/dev_tools/code_generation/*.rb
include/import/"require" the
$RAUDROHI_HOME/src/dev_tools/Rakefile
which gets the value of the Ruby constant KIBUVITS_HOME from the copy of the mmmv_devel_tools.
The purpose of this section is to point out some noteworthy, but possibly somewhat unpopular, JavaScript-related resources:
A lot.
There exists some utter nonsense in there that originates from an era when I did not yet know that events in JavaScript do not trigger new threads. One should study the Worker Threads concept and see how the nonsense relates to that.
The first thing to refactor is the widget-internal state machine's state cluster implementation, which allows a user-defined state to belong to more than one cluster. The correct version is that a user-defined state can belong to at most one state cluster, because then there is no need to determine the execution order of cluster change event handler functions.
The second item on the list is the messaging-system-related protocol. It's not necessarily flawed, but it's so terrible that it's hard to work with. One should also implement the concept of "bus-packets" or "packet-buses".
Source: https://www.npmjs.com/package/raudrohi
Using Visual Studio Team Test
Telerik Testing Framework comes with built-in support for Visual Studio Team Test and its unit testing framework. Telerik Testing Framework can be used with or without Visual Studio, but if you are already using Visual Studio Team Test in your development environment, you can easily and quickly integrate Telerik Testing Framework as part of that environment.
Telerik Automation Infrastructure comes with the following features to facilitate its integration with Visual Studio:
Telerik Testing Framework comes with a BaseTest base test class under its TestTemplates namespace that can be used as the base class for all your Telerik automation tests running as a Visual Studio unit test. The base class provides the following integration features:
Unifies both the logging location and the log content. Any logging from Telerik automation using its Log object will also be logged to the Visual Studio log location and into the actual Visual Studio log content of that particular test. This includes logging from JavaScript.
[TestMethod]
public void DLog()
{
    // logging from VS
    TestContext.WriteLine("Hello from VS");
    // logging from Telerik
    Log.WriteLine("Hello from Telerik");
}
<TestMethod()> _
Public Sub DLog()
    ' logging from VS
    TestContext.WriteLine("Hello from VS")
    ' logging from Telerik
    Log.WriteLine("Hello from Telerik")
End Sub
Visual Studio log (screenshots for VS 2012/2013 and VS 2010 not included here)
Telerik settings can be read directly from an app.config file contained in your Visual Studio test project. This allows you to configure your Telerik tests using the same .config file that you would be using to store your connection strings and other settings for your test suite.
When installing Telerik Testing Framework, a new fully commented Visual Studio item template will be added to your list of available templates. This will enable you to start using Telerik Testing Framework by simply selecting it from the 'Add->New Item' tool menu (or context menu) available to your VS project. You are provided with both a C# and a VB.NET template.
Getting Started Using Visual Studio Team Test
In this section we will walk you through the steps to get you started using Telerik Framework inside a Visual Studio Team Test environment.
- Once you have completed installing Telerik Testing Framework on the target machine, start your Visual Studio environment and open your test project or create a new test project if you are starting from scratch.
Once you have created the project, right-click the project node in the Solution Explorer. Then select Add->New Item... (NOTE: Do not use Add->New Test)
Visual Studio will pop-up the Add New Item dialog as shown below.
Expand the Test node displayed on the left then select Telerik Testing Framework. Then choose Web or Wpf. You should see four templates as shown in the image above.
Select the VsUnit template.
Then enter a name for your new unit test file.
Start writing your automated Telerik unit test just like any other Visual Studio unit test. You can view, manage, and execute your Telerik unit tests just like any other Visual Studio unit tests.
Telerik's Visual Studio Team Test Template
The Telerik Framework template is very similar to Visual Studio's unit test template, with the addition of Telerik's integration points to initialize and clean up Telerik's infrastructure. Telerik Visual Studio tests also provide short-cut objects, such as a Find object that is set to the Manager.ActiveBrowser.Find instance. The following are the objects and their short-cuts that the base class provides:
Source: http://docs.telerik.com/teststudio/testing-framework/using-vs-team-test
default .ufo and .py
Is there a way to change the default ufo that opens when opening a new font, or do I need to auto generate what I need on
newFontDidOpen?
Can this be done for new scripts also? I wish to put font = CurrentFont() etc. in there by default.
hi Timo,
the character set of new fonts can be chosen in the preferences:
other than this, you’ll really need to use a newFontDidOpen observer:
from mojo.events import addObserver

class MyCustomNewFont:

    def __init__(self):
        addObserver(self, "customizeFont", "newFontDidOpen")

    def customizeFont(self, info):
        f = info['font']
        f.info.familyName = 'Untitled'

MyCustomNewFont()
AFAIK there’s no notification for new scripts… maybe something to consider
There is the wonderful RoboREPL extension from Tal, which gives you terminal-style direct interaction.
this is very nice indeed :) use settings.editStartupCode() to define some code to be executed when the interpreter is started:
Yes, it's very handy. It would be handy also in the scripting window. :)
Thanks!
Source: https://forum.robofont.com/topic/599/default-ufo-and-py
I turned in this code.
#include <iostream>
using namespace std;

int adder(int temp);

int main()
{
    // beginning values
    int n = 6;
    int *num = &n;

    // printout values
    cout << "In main, we start with these values" << endl;
    cout << "value of n - " << n << endl;
    cout << "Value of *num - " << *num << endl;
    cout << "Value of &num -" << &num << endl;

    // jump to adder function
    *num = adder(*num);

    // printout new values
    cout << "\n\nBack in main we get this result." << endl;
    cout << "value of n - " << n << endl;
    cout << "Value of *num - " << *num << endl;
    cout << "Value of &num -" << &num << endl;
    cout << "\n\n" << endl;

    system("PAUSE");
    return EXIT_SUCCESS;
}

int adder(int temp)
{
    temp++;
    cout << "\n\nIn the 'adder' function we change *num to " << temp << endl;
    return (temp);
}
Friday, he gave me a "C", and said he would not accept my 4 apologies. I must have given him the deer-in-the-headlights look because, before I could say anything else, he said "Comments are nothing more than an apology... figure it out." I figure he must like me, 67% of the class failed.
First, Is there anything wrong with the code?
Second, Any notion to his apology remark?
Edited by macosxnerd101: Title renamed to be more descriptive.
Source: http://www.dreamincode.net/forums/topic/183017-pointers-help/
Getting started from Apache Spark
If you already know Apache Spark, using Beam should be easy. The basic concepts are the same, and the APIs are similar as well.
Spark stores structured data in Spark DataFrames, and unstructured data in Resilient Distributed Datasets (RDDs). We are using RDDs for this guide.
A Spark RDD represents a collection of elements, while in Beam it’s called a Parallel Collection (PCollection). A PCollection in Beam does not have any ordering guarantees.
Likewise, a transform in Beam is called a Parallel Transform (PTransform).
Here are some examples of common operations and their equivalent between PySpark and Beam.
Overview
Here’s a simple example of a PySpark pipeline that takes the numbers from one to four, multiplies them by two, adds all the values together, and prints the result.
In Beam you pipe your data through the pipeline using the pipe operator | like data | beam.Map(...) instead of chaining methods like data.map(...), but they’re doing the same thing.
Here’s what an equivalent pipeline looks like in Beam.
ℹ️ Note that we used a Map transform to print the values. That’s because we can only access the elements of a PCollection from within a PTransform. To inspect the data locally, you can use the InteractiveRunner
Another thing to note is that Beam pipelines are constructed lazily. This means that when you pipe | data you’re only declaring the transformations and the order you want them to happen in, but the actual computation doesn’t happen. The pipeline is run after the with beam.Pipeline() as pipeline context has closed.
ℹ️ When the with beam.Pipeline() as pipeline context closes, it implicitly calls pipeline.run(), which triggers the computation to happen.
The pipeline is then sent to your runner of choice and it processes the data.
ℹ️ The pipeline can run locally with the DirectRunner, or in a distributed runner such as Flink, Spark, or Dataflow. The Spark runner is not related to PySpark.
A label can optionally be added to a transform using the right shift operator >> like data | 'My description' >> beam.Map(...). This serves both as a comment and makes your pipeline easier to debug.
This is how the pipeline looks after adding labels.
Setup
Here’s a comparison of how to get started in both PySpark and Beam.
Transforms
Here are the equivalents of some common transforms in both PySpark and Beam.
ℹ️ To learn more about the transforms available in Beam, check the Python transform gallery.
Using calculated values
Since we are working in potentially distributed environments, we can’t guarantee that the results we’ve calculated are available on any given machine.
In PySpark, we can get a result from a collection of elements (RDD) by using data.collect(), or other aggregations such as reduce(), count(), and more.
Here’s an example to scale numbers into a range between zero and one.
import pyspark

sc = pyspark.SparkContext()

values = sc.parallelize([1, 2, 3, 4])
min_value = values.reduce(min)
max_value = values.reduce(max)

# We can simply use `min_value` and `max_value` since they're already Python `int` values from `reduce`.
scaled_values = values.map(lambda x: (x - min_value) / (max_value - min_value))

# But to access `scaled_values`, we need to call `collect`.
print(scaled_values.collect())
In Beam, the result of every transform is a PCollection. We use side inputs to feed a PCollection into a transform and access its values.
Any transform that accepts a function, like Map, can take side inputs. If we only need a single value, we can use beam.pvalue.AsSingleton and access it as a Python value. If we need multiple values, we can use beam.pvalue.AsIter and access them as an iterable.
import apache_beam as beam

with beam.Pipeline() as pipeline:
    values = pipeline | beam.Create([1, 2, 3, 4])

    min_value = values | beam.CombineGlobally(min)
    max_value = values | beam.CombineGlobally(max)

    # To access `min_value` and `max_value`, we need to pass them as side inputs.
    scaled_values = values | beam.Map(
        lambda x, min_value, max_value: (x - min_value) / (max_value - min_value),
        min_value=beam.pvalue.AsSingleton(min_value),
        max_value=beam.pvalue.AsSingleton(max_value))

    scaled_values | beam.Map(print)
ℹ️ In Beam we need to pass a side input explicitly, but we get the benefit that a reduction or aggregation does not have to fit into memory. Lazily computing side inputs also allows us to compute values only once, rather than once for each distinct reduction (or requiring explicit caching of the RDD).
Next Steps
- Take a look at all the available transforms in the Python transform gallery.
- Learn how to read from and write to files in the Pipeline I/O section of the Programming guide
- Walk through additional WordCount examples in the WordCount Example Walkthrough.
- Take a self-paced tour through our Learning Resources.
- Dive in to some of our favorite Videos and Podcasts.
- Join the Beam users@ mailing list.
- If you’re interested in contributing to the Apache Beam codebase, see the Contribution Guide.
Please don’t hesitate to reach out if you encounter any issues!
Last updated on 2022/04/08
Have you found everything you were looking for?
Was it all useful and clear? Is there anything that you would like to change? Let us know!
Source: https://beam.apache.org/get-started/from-spark/
I'm trying to write a function to do a DNS lookup which takes an IP address and finds the hostname corresponding to the IP as well as the ALIASES associated with this IP address since a single IP address may be associated with multiple domain names.
I have reached as far as getting the hostname associated with the IP address but I have been unable to retrieve any aliases associated with this IP address.
The code I have written so far is.....
#include <iostream>
using namespace std;
#include <winsock2.h>
bool InitializeWinsock()
{
WSADATA wsa;
if( WSAStartup( MAKEWORD(2, 2), &wsa) != 0 )
return false;
return true;
}
int main()
{
InitializeWinsock();
char* ip = "216.239.51.101"; // google
// Convert IP to long.....
unsigned int x = inet_addr(ip);
if(x == INADDR_NONE)
cout << "Cannot resolve IP address!" << endl;
// Resolve IP address to hostname...
hostent* h = gethostbyaddr( (const char*)&x, sizeof(x), AF_INET);
if(h == NULL)
cout << "Failed to get hostname!" << endl;
// Print the hostname...
else
cout << h->h_name << endl;
// How to get aliases now?
// This where I am stuck!
return 0;
}
I'm not sure that "" has any aliases associated with it but even if you choose a site that has an alias I still don't know how to retrieve that alias. If anyone knows how to get the aliases associated with an IP please help.
Source: http://cboard.cprogramming.com/windows-programming/31534-dns-lookup-printable-thread.html
25 February 2010 02:58 [Source: ICIS news]
SINGAPORE (ICIS news)--Singapore-based Rotary Engineering posted a 36% year-on-year jump in its fourth-quarter net profit to Singapore dollars (S$) 26.2m ($18.6m), boosted by its earnings from the Middle East, the company said on Thursday.
Revenue, however, was slightly lower in the October-December 2009 period to S$147m from S$149m in same period in 2008, the company said.
Rotary is a provider of engineering, procurement, construction and maintenance services supporting the oil, gas and petrochemical industries.
“Investing in the growing
Rotary last year landed $745m (€551m) worth of contracts from Saudi Aramco Total Refining and Petrochemical Co (SATORP) to build a refinery tank farm in
For the whole of 2009, the company recorded a net profit of S$54.2m, a 7% increase from the previous year, with revenues at S$552m, up 6% year on year, Rotary Engineering said.
“Going forward, we will capitalise on our SATORP win to make further inroads into the Middle Eastern market, participating in tenders and prospecting actively,” Chia said.
“In
($1 = S$1.41 /
Source: http://www.icis.com/Articles/2010/02/25/9337733/singapore-based-rotary-engineering-q4-net-profit-jumps-36.html