I’ve done a bit of work with customers interested in consuming SharePoint data without using the SharePoint web services. A common request asks how to consume the data from JavaScript.
Turns out the answer is easy when you develop a custom feature.
I created a companion screencast on Channel9 that walks through the entire process of creating a feature using VSeWSS. If you want to see all the details or think I didn’t detail a step enough in this post, please go watch the video since I create pretty much the entire thing from scratch. This blog post is meant to make it easier for you to read the code and copy/paste to try it on your own.
Create the Feature
Using Visual Studio Extensions for Windows SharePoint Services 1.3 (current version is the VSeWSS 1.3 March 2009 CTP), create a new project and add a new feature (do this on the WSPView pane).
Make sure to check the checkbox to add the default element.xml in the resulting dialog. Once created, you are probably going to want to rename the feature from its default “Feature1” to something more maintainable, like “ListToJson”. Just click the “feature1” node in the WSPView, hit F2, and rename it. In the WSPView, find the feature.xml file and open it. Give it a name, description, and the path to the image that you want to use.
<?xml version="1.0" encoding="utf-8"?>
<Feature Id="16522953-a146-4110-b9ce-0e70c0f4218c"
         Title="List To JSON"
         Description="Exposes a list using JSON data format"
         ImageUrl="ListToJson/json160.gif"
         Scope="Web"
         Version="1.0.0.0"
         Hidden="FALSE"
         DefaultResourceFile="core"
         xmlns="http://schemas.microsoft.com/sharepoint/">
  <ElementManifests>
    <ElementManifest Location="Element1\Element1.xml" />
  </ElementManifests>
</Feature>
To provide the image as part of my feature, I went to the Solution Explorer pane in Visual Studio 2008 and added a new “Template” project item.
Delete the generated TemplateFile.txt, we don’t need it. Under the newly created Templates folder, add a folder “IMAGES”, and under the IMAGES folder add a subfolder called “ListToJson”. In the ListToJson folder, add the image. This is how you deploy items to the folders under the "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\TEMPLATE" directory. We’ll do this a few times.
When we created the feature, a new file called “element.xml” was created. Open that file, and under the Elements node add a child node “CustomAction”.
<CustomAction Id="ListToJson"
              Description="Export list to JSON data format"
              RegistrationType="List"
              RequireSiteAdministrator="false"
              Title="List to JSON"
              ImageUrl="_layouts/images/ListToJson/json160.gif"
              Sequence="1000"
              GroupId="ActionsMenu"
              Location="Microsoft.SharePoint.StandardMenu">
  <UrlAction Url="{SiteUrl}/_layouts/ListToJson/ListToJsonHandler.ashx?ListID={ListId}"/>
</CustomAction>
Notice the UrlAction here includes the {ListId} token; that's how we pass the list's ID to our custom ASHX handler. While we're at it, let's see the code for the .ASHX handler.
Create the IHttpHandler
We need to provide the .ASHX file. Add a new folder under the “TEMPLATES” directory in your Visual Studio solution called “LAYOUTS”, and under the LAYOUTS folder add a child folder “ListToJson”. Inside that folder, add a new file and rename it to “ListToJsonHandler.ashx”. Inside the file, you’ll find there’s nothing to it except the 5-part name for the assembly and type that implements the IHttpHandler interface.
<%@ WebHandler Class="ListToJsonHandler, Channel9, Version=1.0.0.0, Culture=neutral, PublicKeyToken=df0b6232e0609274" Language="C#" %>
In your Visual Studio project, add a new class called ListToJsonHandler (not in a namespace, or you need to change the .ASHX above to reflect the proper type name). It’s the same old IHttpHandler you’ve been coding for years, I just used some C# goodness in there for the property initializers and the yield statement in the iterator, as well as creating an extension method for the SPList type.
using System;
using System.Web;
using Microsoft.SharePoint;
using System.Collections;
using System.Collections.Generic;
using System.Web.Script.Serialization;
using System.IO;

public class ListToJsonHandler : IHttpHandler
{
    public bool IsReusable
    {
        // Return false in case your managed handler cannot be reused for another request.
        // Usually this would be false if you preserve some state information per request.
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        string listIDFromUrl = context.Request.QueryString["ListID"];
        if (!string.IsNullOrEmpty(listIDFromUrl))
        {
            StringWriter writer = new StringWriter();
            HttpContext.Current.Server.UrlDecode(listIDFromUrl, writer);
            Guid listID = new Guid(writer.ToString());
            SPList list = SPContext.Current.Web.Lists[listID];

            context.Response.Clear();
            context.Response.ContentType = "application/json";
            context.Response.Write(list.ToJson());
        }
    }
}

public static class MyExtensions
{
    public static string ToJson(this SPList list)
    {
        IEnumerable<ListItem> items = GetListItems(list);
        JavaScriptSerializer ser = new JavaScriptSerializer();
        return ser.Serialize(items);
    }

    private static IEnumerable<ListItem> GetListItems(SPList list)
    {
        foreach (SPListItem item in list.Items)
        {
            yield return new ListItem { Name = item.Name, Url = item.Url };
        }
    }
}

public class ListItem
{
    public string Name { get; set; }
    public string Url { get; set; }
}
The PublicKeyToken is not going to be correct in your environment, you need to use the Strong Name tool in the SDK to find the proper token. My project is called “Channel9”, so I went to the bin/debug directory and ran the following command using the Visual Studio 2008 command prompt.
sn.exe -Tp Channel9.dll
Copy the value of the public key token and paste it in the 5-part name in the .ASHX file above. You now have all the pieces/parts to make this work. Add a reference to the System.Web.Extensions.dll assembly, and everything should compile and deploy just fine. You should now be able to run it and see your new feature added to the Actions menu for all lists on your site, and when you click it you should see the JSON representation of the data for the list.
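To make the wire format concrete: the handler's response is just a JSON array of objects with Name and Url properties. Here is a quick sketch of that payload shape (in Python for brevity; the item names are invented, the real values come from the SharePoint list):

```python
import json

# Hypothetical list items standing in for SPListItem's Name and Url properties.
items = [
    {"Name": "Budget.xlsx", "Url": "Shared Documents/Budget.xlsx"},
    {"Name": "Notes.txt", "Url": "Shared Documents/Notes.txt"},
]

# The ASHX handler's entire response body is the serialized array,
# sent with Content-Type: application/json.
payload = json.dumps(items)
print(payload)
```

This is the array the client-side JavaScript later turns back into objects to render.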
Create the Application Page
This is the part that usually stumps ASP.NET devs new to SharePoint for the first time. It’s easy to create a SharePoint application page using your existing ASP.NET skillz. Under the TEMPLATES/LAYOUTS/ListToJson folder in your Visual Studio 2008 solution, add a new file “MyAppPage.aspx”. The contents of that page are listed here.
<%@ Assembly Name="Microsoft.SharePoint, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
<%@ Page Language="C#" MasterPageFile="~/_layouts/application.master" Inherits="Microsoft.SharePoint.WebControls.LayoutsPageBase" %>

<script runat="server">
    protected override void OnLoad(EventArgs e)
    {
        this.listID.Value = Request.QueryString["ListID"];
    }
</script>

<asp:Content ID="Main" ContentPlaceHolderID="PlaceHolderMain" runat="server">
    <asp:ScriptManager ID="scriptManager" runat="server">
        <Scripts>
            <asp:ScriptReference Path="ListToJson.js" />
        </Scripts>
    </asp:ScriptManager>
    <label title="Enter the ListID"></label>
    <input type="text" id="listID" runat="server" />
    <input type="button" onclick="doit();return false;" value="Get the data" />
    <div id="result"></div>
</asp:Content>
<asp:Content ID="PageTitle" ContentPlaceHolderID="PlaceHolderPageTitleInTitleArea" runat="server">
    Query a list using JSON
</asp:Content>
See, it’s the same ASP.NET you’ve always coded. A few things to note are the inclusion of the SharePoint assembly reference, and inheriting from the LayoutsPageBase type. This will incorporate your page into the SharePoint framework, integrating security and the rest of the goodness.
That page uses a JavaScript file. I use the ASP.NET AJAX 1.0 capabilities that are included out of the box with Visual Studio 2008; I get this for free because I used the asp:ScriptManager control in my application page.
function doit() {
    $get("result").innerHTML = "";
    var req = new Sys.Net.WebRequest();
    var listID = $get("ctl00_PlaceHolderMain_listID").value;
    req.set_url("ListToJsonHandler.ashx?ListID=" + listID);
    req.set_httpVerb("GET");
    req.add_completed(OnRequestCompleted);
    req.invoke();
}

function OnRequestCompleted(sender, e) {
    var result = eval(sender.get_responseData());
    for (var i = 0; i < result.length; i++) {
        $get("result").innerHTML += result[i].Name + " " + result[i].Url + "<br/>";
    }
}
Since we added the new application page, we want to provide navigation so that we can actually access it. Update the elements.xml file to include a new custom action.
<CustomAction Id="ListToJSONPage"
              Title="List to JSON Page"
              Description="Queries the list data using AJAX"
              ImageUrl="_layouts/images/ListToJson/json160.gif"
              RequireSiteAdministrator="false"
              RegistrationType="List"
              GroupId="ActionsMenu"
              Sequence="1001"
              Location="Microsoft.SharePoint.StandardMenu">
  <UrlAction Url="{SiteUrl}/_layouts/ListToJSON/MyAppPage.aspx?ListID={ListId}"/>
</CustomAction>
The last bit of magic here is to configure SharePoint to understand the .NET 3.5 assemblies. Jan Tielens has a great post that shows how to enable .NET 3.5 in SharePoint, the lazy way. Once you have your SharePoint site able to talk to the .NET 3.5 assemblies, you should be golden. The result is a new item on the ActionsMenu for the lists on your site to expose them as JSON.
Clicking the link will take you to an application page with a textbox and a button, the textbox already contains the GUID for the list that you want to query. Clicking the button will show the data from the list.
For More Information
Screencast that shows all the steps to create the ListToJson feature for this post on Channel9
VSeWSS 1.3 March CTP Download
How to Enable .NET 3.5 in SharePoint 2007 Sites, The Lazy Way
Features for SharePoint, by Ted Pattison
How To: Add Actions to the User Interface
Hello,
Nice post. I am curious why you used yield. Is it more performant, or does it simply look better? ;-) Also, I have to say I don't understand very well how yield works.
Thanks
I was just showing off C# syntax. I could have written something like:
public static string ToJson(this SPList list)
{
List<ListItem> items = new List<ListItem>();
foreach(SPListItem item in list.Items)
{
items.Add(new ListItem { Name = item.Name, Url = item.Url});
}
JavaScriptSerializer ser = new JavaScriptSerializer();
return ser.Serialize(items);
}
Part 3 of the SharePoint for Developers screencast series is posted to Channel9, "Expression Blend and
#include <hallo.h>

Eduard Bloch wrote on Fri Sep 21, 2001 at 05:24:10PM:

> But how does this comply with the GPL? As far as I can see, the kernel
> guys have been doing this for a while (see below) and the kernel is
> still GPLed.

Okay, so if nobody has hints for me, I will upload to main soon with the
following copyright:

---
It may be redistributed under the terms of the GNU GPL, Version 2, found
on Debian systems in the file /usr/share/common-licenses/GPL.

NOTE about the included firmware files: The files Root and Dpram are
distributed with the source package. Even if they contain binary code, it
cannot be executed as part of any other GPLed code. GPL covers these files
as prepared data accompanying the GPL program. Their content is to be
treated as source code. You may use, distribute and modify it UNDER THE
TERMS OF GNU GENERAL PUBLIC LICENSE. Contact 'convergence integrated media
GmbH' <> for any further questions about the creation of these files.
---

Gruss/Regards, Eduard.
--
On the 8th day God created Linux, and he saw that it was good.
Event Platform/EventStreams
Revision as of 20:19, 18 October 2019
EventStreams is a web service that exposes continuous streams of structured event data. It does so over HTTP using chunked transfer encoding following the Server-Sent Events (SSE) protocol. EventStreams can be consumed directly via HTTP, but is more commonly used via a client library.
EventStreams provides access to arbitrary streams of data, including MediaWiki RecentChanges. It replaced RCStream, and may in the future replace irc.wikimedia.org. EventStreams is backed by Apache Kafka.
Note: Often 'SSE' and EventSource are used interchangeably. This document refers to SSE as the server-side protocol, and EventSource as the client-side interface.
There is also a more asynchronous-friendly version, which needs Python 3.6's async generator capability.
import json
from sseclient import SSEClient as EventSource

url = ''
for event in EventSource(url):
    if event.event == 'message':
        try:
            change = json.loads(event.data)
        except ValueError:
            pass
        else:
            print('{user} edited {title}'.format(**change))
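If you are curious what SSEClient is doing under the hood: the SSE wire format is plain text, where each event is a group of field: value lines terminated by a blank line. A simplified parser sketch follows (it ignores multi-line data fields and comments, which the real protocol allows):

```python
def parse_sse(stream_text):
    """Split raw SSE text into a list of {field: value} event dicts."""
    events = []
    current = {}
    for line in stream_text.splitlines():
        if not line:
            # A blank line terminates the current event.
            if current:
                events.append(current)
                current = {}
            continue
        field, _, value = line.partition(": ")
        current[field] = value
    if current:
        events.append(current)
    return events

# Invented sample frame shaped like an EventStreams message.
sample = 'event: message\ndata: {"user": "Alice", "title": "Example"}\n\n'
events = parse_sse(sample)
```

Client libraries add reconnection and Last-Event-ID handling on top of exactly this framing.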
Server-side filtering is not supported; events must be filtered on the client after they are received.
Pywikibot supports EventStreams with freely configurable client side filtering and automatic reconnection. Also composed streams or timestamp for historical consumption are possible.
Usage sample with composed streams and timestamp:
>>> from pywikibot.comms.eventstreams import EventStreams
>>> stream = EventStreams(streams=['recentchange', 'revision-create'], since='20190111')
>>> stream.register_filter(server_name='fr.wikipedia.org', type='edit')
>>> change = next(iter(stream))
>>> print('{type} on page "{title}" by "{user}" at {meta[dt]}.'.format(**change))
edit on page "Véronique Le Guen" by "Speculos" at 2019-01-12T21:19:43+00:00.
Stream selection
Streams are addressable either individually, e.g. /v2/stream/revision-create, or as a comma-separated list of streams to compose, e.g. /v2/stream/page-create,page-delete,page-undelete.
See available streams:
Timestamp Historical Consumption
Since 2018-06, EventStreams supports timestamp-based historical consumption. A timestamp can be provided in the individual assignment objects in the Last-Event-ID header, by setting a timestamp field instead of an offset field. Or, more simply, a since query parameter can be provided in the stream URL, e.g. since=2018-06-14T00:00:00Z. since can either be given as a milliseconds UTC unix epoch timestamp or anything parseable by JavaScript Date.parse(), e.g. a UTC ISO-8601 datetime string.
When given a timestamp, EventStreams will ask Kafka for the message offset in the stream(s) that most closely match the timestamp. Kafka guarantees that all events after the returned message offset will be after the given timestamp. NOTE: The stream history is not kept indefinitely. Depending on the stream configuration, there will likely be between 7 and 31 days of history available. Please be kind when providing timestamps. There may be a lot of historical data available, and reading it and sending it all out can be compute resource intensive. Please only consume the minimum of data you need.
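Because since accepts either form, converting a UTC ISO-8601 datetime to epoch milliseconds (or just passing the string through) is straightforward. A small sketch; the endpoint shown is only an example URL pattern:

```python
from datetime import datetime

def to_epoch_ms(iso_ts):
    """Convert a UTC ISO-8601 timestamp like 2018-06-14T00:00:00Z to epoch ms."""
    dt = datetime.fromisoformat(iso_ts.replace("Z", "+00:00"))
    return int(dt.timestamp() * 1000)

ms = to_epoch_ms("2018-06-14T00:00:00Z")
url = f"https://stream.wikimedia.org/v2/stream/recentchange?since={ms}"
```

Either the millisecond value or the original ISO-8601 string works as the since parameter.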
Example URL:.
Response (JSON example elided): each event's metadata includes a millisecond timestamp (e.g. 1532031066001), and the assignment objects sent in the Last-Event-ID HTTP header may carry either timestamp or offset fields.
Filtering
EventStreams does not have server-side filtering capabilities (by $wgServerName or any other field); all filtering must be done client-side.
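In practice that means decoding each event and applying your own predicate. A small sketch, using field names from the recentchange examples above (the sample events are invented):

```python
def make_filter(**criteria):
    """Build a predicate that keeps only events matching every criterion."""
    def predicate(event):
        return all(event.get(key) == value for key, value in criteria.items())
    return predicate

only_fr_edits = make_filter(server_name="fr.wikipedia.org", type="edit")

# Invented sample events in the recentchange shape.
events = [
    {"server_name": "fr.wikipedia.org", "type": "edit", "title": "A"},
    {"server_name": "en.wikipedia.org", "type": "edit", "title": "B"},
    {"server_name": "fr.wikipedia.org", "type": "log", "title": "C"},
]
kept = [e for e in events if only_fr_edits(e)]
```

This is essentially what pywikibot's register_filter() does for you.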
Redux is a state management tool, designed specifically for client-side JavaScript applications that rely heavily on complex data and external APIs, and provides excellent development tools that make it easier to work with your data.
What does Redux do?
Simply put, Redux is a centralized data store. All of your app data is stored in one large object. Redux Devtools facilitate visualization:
This state is immutable, which is a strange concept at first, but it makes sense for several reasons. If you want to change the state, you must dispatch an action, which basically takes a few arguments, forms a payload, and sends it to Redux. Redux passes the current state and the action to a reducer function, which derives a new state from the existing one; the new state replaces the current state and triggers a re-render of the affected components. For example, you can have a reducer that adds a new item to a list, or removes or edits an item that already exists.
Doing it this way means you'll never get undefined behavior from your app changing state at will. Also, because there is a record of every action and what it changed, it enables time-travel debugging, where you can scroll back through your app's state history to see what each action did (much like a git history).
Redux can be used with any front-end framework, but it's commonly used with React, and that's what we'll be focusing on here. Under the hood, React-Redux uses React's Context API, which can play a similar role on its own and is fine for simple apps if you want to forgo Redux altogether. However, Redux's Devtools are fantastic when working with complex data, and React-Redux is also more optimized to avoid unnecessary re-renders.
If you are using TypeScript, keeping Redux strictly typed takes more work. In that case, you'll want to follow this guide, which uses typesafe-actions to manage actions and reducers in a type-safe, user-friendly way.
Structure your project
First, you’ll want to set up your folder structure. It depends on you and your team’s style preferences, but there are basically two main templates that most Redux projects use. The first is to simply split each type of file (action, reducer, middleware, side effect) into its own folder, like this:
store/
  actions/
  reducers/
  sagas/
  middleware/
  index.js
It's not the best layout though, as you'll often need an action file and a corresponding reducer file for every feature you add. It is better to merge the actions and reducers folders and divide them by feature. That way, each action and its corresponding reducer live in the same file:
store/
  features/
    todo/
    etc/
  sagas/
  middleware/
  root-reducer.js
  root-action.js
  index.js
This cleans up imports, as you can now import both actions and reducers in the same statement using:
import todosActions, todosReducer from 'store/features/todos'
It’s up to you to decide if you want to keep the Redux code in its own folder (
/store in the examples above), or embed it in the src root folder of your application. If you already separate the code by component and write many custom actions and reducers for each component, you might want to merge the
/features/ and
/components/ folders and store the JSX components with the reducer code.
If you are using Redux with TypeScript, you can add an additional file in each features folder to define your types.
Installation and configuration of Redux
Install Redux and React-Redux from NPM:
npm install redux react-redux
You will probably also want
redux-devtools:
npm install --save-dev redux-devtools
The first thing you will want to create is your store. Save this as
/store/index.js
import { createStore } from 'redux';
import rootReducer from './root-reducer';

const store = createStore(rootReducer);

export default store;
Of course, your store will get more complicated than that as you add things like side effect add-ons, middleware, and other utilities like
connected-react-router, but that is all that is required for now. This file takes the root reducer and calls
createStore() use it, which is exported for the application to use.
Next, we’re going to create a simple to-do list feature. You will probably want to start by defining the actions required by this feature and the arguments passed to them. To create a
/features/todos/ folder and save the following items as
types.js:
export const ADD = 'ADD_TODO'
export const DELETE = 'DELETE_TODO'
export const EDIT = 'EDIT_TODO'
This defines some string constants for action names. Whatever data you pass around, every action will have a
type property, which is a unique string that identifies the action.
You don’t have to have a file of type like this because you can just type the string name of the action, but it’s better for interoperability to do it that way. For example, you might have
todos.ADD and
reminders.ADD in the same app, so you don’t have to type
_TODO Where
_REMINDER whenever you reference an action for that feature. a few actions using the types of the string constants, exposing the arguments, and creating a payload for each. These do not need to be entirely static, as they are functions. An example that you can use is setting an execution CUID for certain actions.
The most complicated piece of code, and where you'll implement most of your business logic, is in the reducers. These can take many forms, but the most common setup is a switch statement on the type of the action passed as an argument, where each case returns a new version of the state that replaces the current one. In this example, ADD_TODO adds a new element to the state (with a new ID each time), DELETE_TODO deletes all items with the given ID, and EDIT_TODO replaces the text of the element with the given ID.
The initial state must also be defined and passed to the reducer function as the default value of the state argument. Of course, this does not define your entire Redux state structure, only the state.todos slice.
These three files are usually separated in more complex applications, but if you want you can also set them all in one file, just make sure you’re importing and exporting correctly.
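The reducer pattern itself is language-agnostic: a pure function from (state, action) to a new state that never mutates the old one. Here is a minimal sketch of the same three cases, written in Python for brevity (action names mirror the types above; ID generation is simplified to a max-plus-one counter):

```python
ADD_TODO, DELETE_TODO, EDIT_TODO = "ADD_TODO", "DELETE_TODO", "EDIT_TODO"

def todos_reducer(state, action):
    """Return a brand-new todos list; the input state is never mutated."""
    kind = action["type"]
    if kind == ADD_TODO:
        new_id = max((t["id"] for t in state), default=0) + 1
        return state + [{"id": new_id, "title": action["title"]}]
    if kind == DELETE_TODO:
        return [t for t in state if t["id"] != action["id"]]
    if kind == EDIT_TODO:
        return [
            {**t, "title": action["title"]} if t["id"] == action["id"] else t
            for t in state
        ]
    return state  # unknown actions leave the state untouched

state = []
state = todos_reducer(state, {"type": ADD_TODO, "title": "milk"})
state = todos_reducer(state, {"type": ADD_TODO, "title": "eggs"})
state = todos_reducer(state, {"type": EDIT_TODO, "id": 1, "title": "oat milk"})
```

Because every branch returns a fresh list, the previous state object stays intact, which is what makes time-travel debugging possible.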
Once this feature is complete, let's connect it to Redux (and our app). In /store/root-reducer.js, import the todosReducer (and any other feature reducer from the /features/ folder), then pass them to combineReducers(), forming a top-level root reducer which is handed to the store. This is where you configure the root state shape, making sure to keep each feature's state under its own key. At this point, the store still isn't connected to React. To do that, you will need to wrap your entire application in a provider component, which ensures that the necessary state and hooks are available to every component in your application.
In App.js or index.js (wherever you have your root renderer), wrap your app in a <Provider> and pass it the store (imported from /store/index.js) as a prop:
import React from 'react';
import ReactDOM from 'react-dom';

// Redux setup
import { Provider } from 'react-redux';
import store, { history } from './store';

import App from './App';

ReactDOM.render(
  <Provider store={store}>
    <App />
  </Provider>,
  document.getElementById('root')
);
You are now free to use Redux in your components. The easiest way is to use function components and hooks. For example, to dispatch an action, you would use the useDispatch() hook, which lets you call actions directly, e.g. dispatch(todosActions.addTodo(text)).
The following container has an input bound to local React state, which is used to add a new to-do each time the button is clicked:

import React, { useState } from 'react';
import { useDispatch } from 'react-redux';
import todosActions from 'store/features/todos';
import TodoList from './TodoList';

function Home() {
  const [text, setText] = useState('');
  const dispatch = useDispatch();

  const handleClick = () => dispatch(todosActions.addTodo(text));
  const handleChange = (e: React.ChangeEvent<HTMLInputElement>) =>
    setText(e.target.value);

  return (
    <div className="App">
      <header className="App-header">
        <input type="text" value={text} onChange={handleChange} />
        <button onClick={handleClick}>
          Add New Todo
        </button>
        <TodoList />
      </header>
    </div>
  );
}

export default Home;
Then, when you want to read data stored in the state, use the useSelector hook. It takes a function which selects the part of the state to use in the component. In this case, it sets the posts variable to the current to-do list, which is then used to render a list item for each entry in state.todos.
import React from 'react';
import { useSelector } from 'react-redux';
import { Container, List, ListItem, Title } from './styles';

function TodoList() {
  const posts = useSelector(state => state.todos);

  return (
    <Container>
      <List>
        {posts.map(({ id, title }) => (
          <ListItem key={title}>
            <Title>{title} : {id}</Title>
          </ListItem>
        ))}
      </List>
    </Container>
  );
}

export default TodoList;
You can actually create custom selector functions to handle this for you, saved in the /features/ folder much like actions and reducers.
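Conceptually, a selector is just a function from the whole state tree to the slice (or derived value) a component needs, which keeps components ignorant of the state's shape. Sketched in Python for brevity, with a made-up root state:

```python
# A hypothetical root state with two feature slices.
state = {"todos": [{"id": 1, "title": "milk"}], "reminders": []}

def select_todos(s):
    """Select the todos slice of the state tree."""
    return s["todos"]

def select_todo_titles(s):
    # Derived data can be computed inside the selector itself,
    # so components never reach into the raw state shape.
    return [t["title"] for t in select_todos(s)]

titles = select_todo_titles(state)
```

If the state shape ever changes, only the selectors need updating, not every component that reads the data.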
Once you’ve got everything set up and figured out, you might want to consider setting up Redux Devtools, setting up middleware like Redux Logger or
connected-react-router, or installing a side-effect model such as Redux Sagas. | https://roxxcloud.com/how-to-get-started-with-redux-for-javascript-state-management-cloudsavvy-it/ | CC-MAIN-2021-49 | refinedweb | 1,492 | 59.94 |
I think you could at least have every attribute be declared in the language (presumably in Policy.swift in the standard library), but treat certain attributes specially. (Step zero for this is probably something like "open Attr.def, write
attribute @<name> { _builtinAttrID: <id> } for each entry, and make the compiler parse that". Then you gradually lean on
_builtinAttrID less and less, but you might never get to not leaning on it at all.)
Pitch: Introduce custom attributes
I strongly disagree with your proposed syntax. I can see your intention (having @ kind of be the function), but that looks pretty bad to me, not to mention it adds unnecessary parentheses.
I like the idea of having
#available use this. But again I disagree with your syntax.
Having it be called if would require another syntax for checking inside implementations (or a pretty weird call), whereas it is currently the same in both places, such as:
if #available(iOS 8.1, *) { } else { }
Also, I would prefer to avoid string-based APIs whenever possible, and again fewer parentheses (although the composition would be good). Something like:
@available(.iOS(version: .greaterThanOrEqualTo(8.1))) class Foo { // implementation that can use iOS 8.0+ APIs }
But a problem that brings, as seen in the "inside implementation" example, is that it would require custom attributes to be able to have a return type too, Bool in this case. That could be interesting, but it opens another can of worms.
Just for completion, would like to add that I really liked your examples for "deriving" implementations, apart (again) from the specific syntax >.>
Thanks everybody for the feedback. Here's a second version of the pitch.
I've come to agree with the sentiment that including custom attributes that are available at runtime might be a bit much. Therefore, I'll try to make a first proposal only for compile-time attribtues. I don't really understand the steps in the Swift compiler, but it seems to me that attributes should at least be available in libSyntax and SourceKit so that tools can officially deal with them. Any clarifications here are appreciated.
(As an aside, I've never done this before. Would it be better if I hosted the pitch in github or in a gist somewhere as it evolves?)
Introduce custom attributes
- Proposal: SE-NNNN
- Author: Vini Vendramini
- Review Manager: TBD
- Status: Pitch only
Introduction
Swift currently supports marking declarations with attributes, such as
@objc and
@deprecated. The compiler uses these attributes to change the way it interprets the corresponding code. However, there is no support for declaring and using new attributes to suit the needs of a project or library. This proposal aims to change that.
Previous discussions on the subject:
Motivation
Adding custom attributes to Swift would enable programmers to explore and use new methods of meta-programming. For instance, someone writing a linter might allow users to mark declarations that the linter should ignore, while someone writing a JSON parser might let users mark their properties directly with the right coding keys. This proposal would allow for such use cases and many others that haven't yet been thought of.
Proposed solution
This document proposes adding the following to the language and the compiler:
- A syntax for declaring new attributes, including:
- A list of the types of declarations on which they can be used.
- Associated values and default values (optionally).
- The ability to use these new attributes to annotate declarations.
It deliberately does not include support for inspecting the attributes in runtime (see Alternatives Considered).
Detailed design
Syntax
Custom attributes would be declared using the following syntax:
<accessLevel> attribute[(listOfSupportedDeclarations)] @<attributeName>[(<associatedValues>)]
And used as in the examples:
// Defines an internal attribute called codingKey
internal attribute(.variables) @codingKey(_ key: String = "key")

// Usage
import JSONFramework

class NetworkHandler {
    @codingKey("url_path") var url: URL
}
// Defines an internal attribute called ignoreRules
internal attribute(.functions, .variables, .classes) @ignoreRules(_ rules: [String], printWarnings: Bool = false)

// Usage
import LinterAttributes

@ignoreRules(["cyclomatic_complexity", "line_count"])
func a() {
    // ...
}
Usage
The declared attributes should be made public by both libSyntax and SourceKit, allowing developers to use them to access the attribute information.
Namespacing
Name conflicts can be solved with simple namespacing when needed:
import JSONFrameworkA
import JSONFrameworkB

struct Circle {
    @JSONFrameworkA.codingKey("circle_radius")
    var radius: Double

    @JSONFrameworkB.codingKey("color_name")
    var color: Color
}
Annotating types
The Swift compiler supports annotations for types, as it enables the compiler to provide extra guarantees and features regarding those types. However, it isn't clear at the moment how custom attributes for types would be useful, and therefore it is out of the scope of this proposal.
Source compatibility
This proposal is purely additive.
Effect on ABI stability
No effect.
Effect on API resilience
No effect.
Alternatives considered
Currently available workarounds
There's been discussion before on emulating some of this behavior using protocols, an approach that's more limited but currently feasible.
Runtime support
An initial version of this proposal suggested that attributes should be made available at runtime through the reflection API. This idea was positively received in the discussions but was ultimately left for a future proposal, which makes this proposal smaller and allows time to improve the reflection APIs before runtime attributes are considered.
May I suggest an alternate syntax:
// Defines an internal attribute called ignoreRules
internal attribute ignoreRules(_ rules: [String], printWarnings: Bool = false): FuncAttribute, VarAttribute, ClassAttribute
Explanation for changes:
- We know it is an attribute from the attribute keyword, so we don't need @ here.
- The name is more important than what it applies to, so put the name first.
- Since the supported declarations were pushed to last and are somewhat protocol-like, enhance the similarity.
I think that's a good first step.
The pitch looks great, I like everything in there
Just think the actual syntax could be tweaked a tiny bit.
I like these changes a lot
Maybe instead of looking like protocol conformance, we could make these look like the get / set modifiers in protocol properties? We could also use keyword names so there are no extra protocols to memorize.
// Defines an internal attribute called ignoreRules
internal attribute ignoreRules(_ rules: [String]) {
    func
    var
    class
}
It makes sense to simplify this proposal by relegating runtime access to a future proposal, but I think its still important that we think about them. For example, I've always really liked the fact that C# Custom Attributes are defined as standard types and their constructors:
- it doesn't introduce a new syntax
- those types serve both as definition and as runtime metadata
- behaviour can be attached to them
Here's what it could look like in Swift:
@attribute(name: "codingKey", usage: .variables)
internal struct CodingKeyAttribute {
    let key: String

    init(key: String = "key") {
        self.key = key
    }

    var snakeCase: String {
        // ...
    }
}

@attribute(name: "ignoreRules", usage: [.functions, .variables, .classes])
internal struct IgnoreRulesAttribute {
    private let rules: [String]
    private let printWarnings: Bool

    init(_ rules: [String], printWarnings: Bool = false) {
        self.rules = rules
        self.printWarnings = printWarnings
    }
}
Once we allow accessing them at runtime, we could reuse the type as metadata:
for attribute in myObject.reflectionMetadata.attributes {
    if let codingKey = attribute as? CodingKeyAttribute {
        print(codingKey.snakeCase)
    } else if let ignoreRules = attribute as? IgnoreRulesAttribute {
        // Ignore
    }
}
I'm probably jumping into cold water now; I quickly read the latest proposal version and I'm still confused about how this should work.
- What do we gain from custom attributes if they have no behavior at all?
- Can I write the attribute declaration in multiple lines (with a line width of 80 characters this would be required)?
- Is it a future direction to allow defining the behavior of an attribute?
I'm here because I just debugged an issue I had with the RxSwift library that caused a crash. In RxSwift you have to work with objects called DisposeBag a lot to be able to break strong reference cycles and clean up subscriptions.
I ultimately would want either an attribute for a stored property or property behaviors that would allow me, in a defer-like fashion of execution, to force a stored property to be the topmost stored property, as its position can change the behavior of a serial program.
A quick example:
class Host {
    let crasher = Crasher()

    // This position will allow `crasher` to produce side-effects during `deinit`
    // and *can* lead to a crash of the program.
    let disposeBag = DisposeBag()
    ...
}
As I mentioned above, I would want to write a property behavior or an attribute near that property to force it to be the topmost.
@storedPropertyOrder(index: 0) let disposeBag = DisposeBag()
Sure, I can 'just' re-order the properties, but from a readability point of view that would mix all my property groups, which I would like to avoid if possible.
- Is something like this a feasible future of this feature?
For curious people, here is the issue with a code sample that produces different behavior depending on the order of the stored properties:
I considered this at first, but ultimately it seemed to me that it would
- Not make a lot of sense for attributes that don’t have runtime behaviors, as they would create code that never gets interacted with (except for the compiler deriving them into an attribute).
- Maybe pollute the code with attribute types that would rarely be used in practice. For instance, it could be rare for a user to want to instantiate the CodingKeys type themselves, but it would show up on their autocomplete nonetheless.
I’d love some more feedback on this, but in any case it’ll be good to mention in “Alternatives Considered”.
I’ll assume you’re talking about runtime behavior. Attributes that don’t have runtime behavior would still show up in the source code, allowing devs to create tools that interacted with them before they’re compiled (using SourceKit for instance).
I’m not familiar with RxSwift, but there could be a tool that looks at your code and makes sure that all variable declarations marked with @storedPropertyOrder(index: 0) are indeed at index 0, throwing a warning or an error if they’re not.
That said, it seems that your property index example would need special support from the compiler to be implemented, even if we did have runtime support for attributes.
Sure! I thought that was implicit, but maybe it’ll be better to mention this in the proposal.
In a future proposal where attributes can be accessed at runtime, I'd expect all custom attributes to be accessible that way (even those from the Standard Library). If that's what happens, we need a runtime metadata type for attributes anyway.
I don't see this as an issue, and if it really is, it can be solved at the SourceKit level, the same way that all internal Standard Library types/properties that are prefixed with an underscore don't show up in code completion.
I wouldn't mind that syntax at all, especially if it were easier to implement in the compiler. It feels a bit weird to me to define an attribute using what looks like an attribute, but I can get over that.
One other idea: we continue to have the attribute syntax look like a function, but make it return an associated type. The type it returns is what Swift stores in the runtime reflection API when that attribute is used.
public attribute(var) codingKey(_ name: String) -> CodingKeyAttribute {
    return CodingKeyAttribute(name: name)
}

public struct CodingKeyAttribute: CustomStringConvertible {
    public let name: String
    public var description: String { ... }
}
The only thing is I'm not exactly sure how you would access something like this using SwiftSyntax.
Are there existing examples of developers writing tools that interact with comments in source-code? It seems to me that anything one can do with a non-functional custom attribute, one can also do with a custom comment. If this use-case is important and powerful, then I’d like to see what people are already doing with it, since there’s nothing stopping them today.
See Sourcery and SwiftLint
Oh yeah, we'd definitely need a way to access attributes, I just think it could be simpler. For instance, it could be a bit like the current "children" property in Mirrors:
// In the reflection API
struct Attribute {
    let name: String
    let associatedValues: [String: Any]
}

// Declare the attribute
internal attribute codingKey(_ key: String): VarAttribute

// Use it at runtime
for attribute in myObject.reflectionMetadata.attributes {
    if attribute.name == "codingKey" {
        let keyToUseInJSON = attribute.associatedValues["key"]
        // ...
    }
}
In my view this has the advantage of being simpler and more in line with the current reflection API, though it definitely has the disadvantage of being less type safe.
I find it hard to justify to myself involving all the functionalities of a struct for attributes that are only meant for the source code. For instance, what does it mean to implement methods in a source-code-only attribute? And what if we never get around to proposing runtime attributes, or if the community ultimately decides it doesn't want them in the language?
It sure is important to consider how this scales into runtime attributes, but also to bear in mind that this proposal has to be able to stand on its own.
This seems to be a valid point. I believe that custom attributes would make it easier for tools like SwiftLint and Sourcery to allow for more complex functionalities (using different types, limiting the kind of declarations they support, etc). However, it's worth noting that they already get a lot done by only using comments, which means that if this proposal is to move forward it should be aware of that and present a clear improvement.
It is extremely common for linters and formatters across all languages to look for special comments indicating that a style rule should be modified or disabled for a particular section of code.
The Swift project also encourages incremental improvements over huge, complicated features added all at once; they're easier to design and review, less likely to contain undetected flaws, and the incremental features with their partial functionality can be delivered before the huge feature would be finished. This is especially true when some part of the huge feature depends on some other feature that hasn't been designed yet, which is true of runtime attribute metadata—you'll need a new reflection design if you want to actually use it, and that design isn't here yet. From that perspective, factoring this into two proposals—one adding attribute declarations and another adding runtime metadata and APIs to use it—makes a lot of sense.
(It might also make sense to add the metadata but not the APIs; Swift has already done this for types and their members generally.)
That's a good idea too. I was hoping to limit the introduction of new declaration statements. If it feels weird to you to define an attribute using what looks like an attribute, there's always the possibility of making it a protocol conformance:
protocol Attribute {
    var name: String { get }
    var usage: AttributeUsage { get }
}

public struct CodingKeyAttribute: Attribute {
    public let name = "codingKey"
    public let usage: AttributeUsage = [.properties]

    public let key: String

    init(key: String) {
        self.key = key
    }
}
That's the crux of the problem. It's not because I'm accessing runtime metadata that I want it to be any less type-safe.
I've been quiet so far, but am very much supportive of introducing custom attribute support. An immediate motivating use case for me is being able to use custom attributes with Sourcery templates. This would be greatly preferable to using comment-based annotations. One huge win would be the ability of the compiler to validate that custom attributes usage is actually valid. I have some annotation validation logic in my Sourcery templates but not nearly as much as I should. This proposal (and a Sourcery update to take advantage of it) could help a lot with that.
This part felt weird to me as well. I really like the protocol direction a lot, especially because it turns attributes into strongly typed values. That will be extremely useful for both static and dynamic metaprogramming when Swift receives those features. One small tweak I would make is to move the name and usage requirements to be static.
In this model, an attribute is just a value type, and attribute usage initializes a value of the corresponding type. Usage sites would need to be limited to compiler-evaluable initializers / case constructors. It would be unfortunate to block this feature while waiting for a compile-time interpreter, but this model is compelling enough that it might be worth doing that if necessary. I wonder if there would be a way to provide a limited form of this feature in the meantime by restricting initializer / case argument types.
Agreed. So we’d be looking at something like:
public enum AttributeUsage {
    case types
    case properties
    case functions
}

public protocol Attribute {
    static var name: String { get }
    static var usage: AttributeUsage { get }
}

// Example
public struct CodingAttribute: Attribute {
    public static let name = "coding"
    public static let usage = AttributeUsage.properties

    public let key: String?
    public let ignored: Bool

    public init(key: String? = nil, ignored: Bool = false) {
        self.key = key
        self.ignored = ignored
    }
}

struct Person: Decodable {
    @coding(key: "first_name")
    let firstName: String

    @coding(key: "last_name")
    let lastName: String

    @coding(ignored: true)
    let age: Int
}
Other question: how precise do we want to be in AttributeUsage? Do we want to allow some attributes to apply only to protocols? To enums? To structs? To extensions? To static members? Etc...
The name should be just "coding" not "codingKey" I think. | https://forums.swift.org/t/pitch-introduce-custom-attributes/21335?page=2 | CC-MAIN-2019-22 | refinedweb | 2,948 | 51.99 |
Apache Airflow
Airflow is a platform to programmatically author, schedule and monitor workflows.
When workflows are defined as code, they become more maintainable, versionable, testable, and collaborative.
Airflow is not a data streaming solution. Airflow is not in the Spark Streaming or Storm space, it is more comparable to Oozie or Azkaban.
DAGs
Workflows in Airflow are designed as DAGs - or Directed Acyclic Graphs.
A DAG is a collection of all the tasks you want to run, organized in a way that reflects their relationships and dependencies.
For example, a simple DAG could consist of 3 tasks: A, B, and C. It could say that A has to run successfully before B can run, but C can run anytime. It could also say that A times out after 5 minutes and that B can be restarted up to 5 times in case it fails. It might also say that the workflow will run every night at 10pm but shouldn’t start until a certain date.
An example Airflow Pipeline DAG
Notice that the DAG we just outlined only describes how to carry out a workflow, not what we want the workflow to actually do - A, B, and C could really be anything! A DAG isn’t concerned with what its constituent tasks do; it just makes sure the tasks happen at the right time, in the right order, and with the right handling of any unexpected issues. Airflow DAGs are a framework to express how tasks relate to each other, regardless of the tasks themselves.
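Stripped of scheduling and retries, the ordering part of a DAG is just a topological constraint. As a library-agnostic sketch (plain Python standard library, not Airflow code), the ordering of the A/B/C example above can be expressed as:

```python
from graphlib import TopologicalSorter

# B depends on A; C has no dependencies and may run at any point.
deps = {"A": set(), "B": {"A"}, "C": set()}

order = list(TopologicalSorter(deps).static_order())
print(order)  # one valid order, e.g. A before B, C anywhere

assert order.index("A") < order.index("B")
```

Any order the sorter produces satisfies the constraint that A finishes before B starts, which is exactly the guarantee a scheduler needs.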
Hooks and Operators
Airflow DAGs are made up of tasks, which consist of hooks and operators.
Hooks
Hooks are interfaces to external APIs (Google Analytics, SalesForce, etc.), databases (MySQL, Postgres, etc.), and other external platforms.
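The pattern can be sketched in plain Python. This is only an illustrative example using the standard library's sqlite3 module, not Airflow's actual Hook API: a hook is one object that knows how to open, cache, and query a connection to one external system.

```python
import sqlite3

class SqliteHook:
    """Illustrative hook pattern (not Airflow's API): one object that owns
    connection setup and querying for a single external system."""

    def __init__(self, database=":memory:"):
        self.database = database
        self._conn = None

    def get_conn(self):
        # Open the connection lazily and reuse it across calls.
        if self._conn is None:
            self._conn = sqlite3.connect(self.database)
        return self._conn

    def get_records(self, sql):
        return self.get_conn().execute(sql).fetchall()

hook = SqliteHook()
print(hook.get_records("SELECT 1 + 1"))  # [(2,)]
```

Operators then call a hook's methods rather than dealing with connection details themselves.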
Whereas hooks are the interfaces, operators determine what your DAG actually does.
Operators
The atomic units of DAGs - while DAGs describe how to run a workflow, operators determine what actually gets done.
Operators describe single tasks in a workflow and can usually stand on their own, meaning they don’t need to share resources (or even a machine in some cases) with any other operators.
Note: In general, if two operators need to share information (e.g. a filename or a small amount of data), you should consider combining them into a single operator.
Here are some common operators and the tasks they accomplish:
- BashOperator - executes a bash command
- PythonOperator - executes Python code
Tasks
Once an operator is instantiated, it is referred to as a task. The instantiation defines specific values when calling the abstract operator, and the parameterized task becomes a node in a DAG. Each task must have a task_id that serves as a unique identifier and an owner.
Note: Be sure that task_ids aren’t duplicated when dynamically generating DAGs - your DAG may not throw an error if there is a duplicated task_id, but it definitely won’t execute properly.
Task Instances
An executed task is called a TaskInstance. This represents a specific run of a task and is a combination of a DAG, a task, and a specific point in time.
Task instances also have indicative states, which could be “running”, “success”, “failed”, “skipped”, “up for retry”, etc.
Workflows
By stringing together operators and how they depend on each other, you can build workflows in the form of DAGs.
Templating with Jinja
Imagine you want to reference a unique s3 file name that corresponds to the date of the DAG run, how would you do so without hardcoding any paths?
Jinja is a template engine for Python and Apache Airflow uses it to provide pipeline authors with a set of built-in parameters and macros.
A Jinja template is simply a text file that contains the following:
- variables and/or expressions - these get replaced with values when a template is rendered.
- tags - these control the logic of the template.
In Jinja, the default delimiters are configured as follows:
- {% ... %} for Statements
- {{ ... }} for Expressions
- {# ... #} for Comments
- # ... ## for Line Statements
Head here for more information about installing and using Jinja.
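For illustration, a small template mixing these delimiters might look like this (the run_type variable and the bucket path are made-up examples, not Airflow built-ins):

```jinja
{# comment: pick an export file based on the logical date #}
{% if run_type == "daily" %}
s3://my-bucket/exports/{{ ds }}/data.csv
{% endif %}
```

When rendered, the statement block controls whether the line appears at all, and the expression is replaced with the value of ds.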
Jinja templating allows you to defer the rendering of strings in your tasks until the actual running of those tasks. This becomes particularly useful when you want to access certain parameters of a task run itself (i.e. run_date or file_name).
Not all parameters in operators are templated, so you cannot use Jinja templates everywhere by default. However, you can add code in your operator to add any fields you need to template:

template_fields = ('mission', 'commander')

Note: Your Jinja templates can be affected by other parts of your DAG. For example, if your DAG is scheduled to run ‘@once’, the next_execution_date and previous_execution_date macros will be None since your DAG is defined to run just once.
Example
date = "{{ ds }}"

t = BashOperator(
    task_id='test_env',
    bash_command='/tmp/test.sh',
    dag=dag,
    env={'EXECUTION_DATE': date}
)
In the example above, we passed the execution date as an environment variable to a Bash script. Since {{ ds }} is a macro and the env parameter of the BashOperator is templated with Jinja, the execution date will be available as an environment variable named EXECUTION_DATE in the Bash script.
Note: Astronomer’s architecture is built in a way so that a task’s container is spun down as soon as the task is completed. So, if you’re trying to do something like download a file with one task and then upload that same file with another, you’ll need to create a combined Operator that does both.
XComs
XComs (short for “cross-communication”) can be used to pass information between tasks that are not known at runtime. This is a differentiating factor between XComs and Jinja templating. If the config you are trying to pass is available at run-time, then we recommend using Jinja templating as it is much more lightweight than XComs. On the flip-side, XComs can be stored indefinitely, give you more nuanced control and should be used when Jinja templating no longer meets your needs.
Functionally, XComs can almost be thought of as dictionaries. They are defined by a key, a value, and a timestamp, and have associated metadata about the task/DAG run that created the XCom and when it should become visible.
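That dictionary-like shape can be modeled in a few lines. This is a toy in-memory sketch of the idea only, not Airflow's actual schema or API:

```python
from datetime import datetime, timezone

class XComStore:
    """Toy in-memory model of XComs (illustrative only, not Airflow's code)."""

    def __init__(self):
        self._data = {}

    def push(self, dag_id, task_id, key, value):
        # Each entry is addressed by (dag_id, task_id, key) and timestamped.
        self._data[(dag_id, task_id, key)] = (value, datetime.now(timezone.utc))

    def pull(self, dag_id, task_id, key):
        value, _timestamp = self._data[(dag_id, task_id, key)]
        return value

store = XComStore()
store.push("example_xcom", "push", "value from pusher 1", [1, 2, 3])
print(store.pull("example_xcom", "push", "value from pusher 1"))  # [1, 2, 3]
```

The real implementation persists these rows in the metadata database so that tasks running on different workers can see them.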
As shown in the example below, XComs can be called with either xcom_push() or xcom_pull(). “Pushing” (or sending) an XCom generally makes it available for other tasks, while “Pulling” retrieves an XCom. When pulling XComs, you can apply filters based on criteria like key, source task_ids, and source dag_id.
Example XCom (reference):
import airflow
from airflow import DAG
from airflow.operators.python_operator import PythonOperator

args = {
    'owner': 'airflow',
    'start_date': airflow.utils.dates.days_ago(2),
    'provide_context': True
}

dag = DAG(
    'example_xcom',
    schedule_interval='@once',
    default_args=args
)

value_1 = [1, 2, 3]
value_2 = {'a': 'b'}

def push(**kwargs):
    # pushes an XCom without a specific target
    kwargs['ti'].xcom_push(key='value from pusher 1', value=value_1)

def push_by_returning(**kwargs):
    # pushes an XCom without a specific target, just by returning it
    return value_2

def puller(**kwargs):
    ti = kwargs['ti']

    # get value_1
    v1 = ti.xcom_pull(key=None, task_ids='push')
    assert v1 == value_1

    # get value_2
    v2 = ti.xcom_pull(task_ids='push_by_returning')
    assert v2 == value_2

    # get both value_1 and value_2
    v1, v2 = ti.xcom_pull(key=None, task_ids=['push', 'push_by_returning'])
    assert (v1, v2) == (value_1, value_2)

push1 = PythonOperator(
    task_id='push', dag=dag, python_callable=push)

push2 = PythonOperator(
    task_id='push_by_returning', dag=dag, python_callable=push_by_returning)

pull = PythonOperator(
    task_id='puller', dag=dag, python_callable=puller)

pull.set_upstream([push1, push2])
A few things to note about XComs:
- Any object that can be pickled can be used as an XCom value, so be sure to use objects of appropriate size.
- If a task returns a value (either from its Operator’s execute() method, or from a PythonOperator’s python_callable function), then an XCom containing that value is automatically pushed. When this occurs, xcom_pull() automatically filters for the key that was given to the XCom when it was pushed.
- If xcom_pull is passed a single string for task_ids, then the most recent XCom value from that task is returned; if a list of task_ids is passed, then a corresponding list of XCom values is returned.
Other Core concepts
Default Arguments
If a dictionary of default_args is passed to a DAG, it will apply them to any of its operators. This makes it easy to apply a common parameter (e.g. start_date) to many operators without having to retype it.
from datetime import datetime, timedelta

default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'start_date': datetime(2017, 3, 14),
    'email': ['airflow@airflow.com'],
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 1,
    'retry_delay': timedelta(minutes=5)
}
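The merging rule amounts to "task-level settings override DAG-level defaults". As a plain-Python illustration (Airflow applies its own version of this internally when operators are created):

```python
default_args = {'owner': 'airflow', 'retries': 1}

def task_settings(defaults, **overrides):
    # Later keys win: per-task values override the DAG-level defaults.
    return {**defaults, **overrides}

print(task_settings(default_args, retries=3))
# {'owner': 'airflow', 'retries': 3}
```

A task that passes retries=3 keeps the shared owner but overrides the default retry count.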
Context Manager
DAGs can be used as context managers to automatically assign new operators to that DAG.
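One way such auto-assignment can work (an illustrative sketch, not Airflow's actual implementation) is a stack of "currently open" DAGs that operator constructors consult:

```python
class ToyDAG:
    """Illustrative sketch of context-manager DAG assignment (not Airflow code)."""
    _stack = []  # DAGs currently opened via `with`

    def __init__(self, dag_id):
        self.dag_id = dag_id
        self.tasks = []

    def __enter__(self):
        ToyDAG._stack.append(self)
        return self

    def __exit__(self, exc_type, exc, tb):
        ToyDAG._stack.pop()
        return False

class ToyOperator:
    def __init__(self, task_id):
        self.task_id = task_id
        if ToyDAG._stack:                         # created inside a `with ToyDAG(...)` block?
            ToyDAG._stack[-1].tasks.append(self)  # then auto-assign to that DAG

with ToyDAG("demo") as dag:
    ToyOperator("extract")
    ToyOperator("load")

print([t.task_id for t in dag.tasks])  # ['extract', 'load']
```

Inside the `with` block every new operator attaches itself to the DAG on top of the stack, so no explicit dag= argument is needed.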
DAG Assignment
Operators do not have to be assigned to DAGs immediately. DAG assignment can be done explicitly when the operator is created, through deferred assignment, or even inferred from other operators.
Additional Functionality
In addition to these core concepts, Airflow has a number of more complex features. More detail on these functionalities is available here.
Sources: | https://docs.astronomer.io/v2/apache_airflow/tutorial/core-airflow-concepts.html | CC-MAIN-2018-30 | refinedweb | 1,485 | 52.6 |
NAME
undelete - attempt to recover a deleted file
LIBRARY
Standard C Library (libc, -lc)
SYNOPSIS
#include <unistd.h>

int undelete(const char *path);
DESCRIPTION
The undelete() system call attempts to recover the deleted file named by path. Currently, this works only when the named object is a whiteout in a union file system. The system call removes the whiteout causing any objects in a lower layer of the union stack to become visible once more. Eventually, the undelete() functionality may be expanded to other file systems able to recover deleted files such as the log-structured file system.
RETURN VALUES
The undelete() function returns the value 0 if successful; otherwise the value -1 is returned and the global variable errno is set to indicate the error.
ERRORS
The undelete() system call succeeds unless:

[ENOTDIR]        A component of the path prefix is not a directory.
[ENAMETOOLONG]   A component of a pathname exceeded 255 characters, or an entire path name exceeded 1023 characters.
[EEXIST]         The path does not reference a whiteout.
[ENOENT]         The named whiteout does not exist.
[EACCES]         Search permission is denied for a component of the path prefix.
[EACCES]         Write permission is denied on the directory containing the name to be undeleted.
[ELOOP]          Too many symbolic links were encountered in translating the pathname.
[EPERM]          The directory containing the name is marked sticky, and the containing directory is not owned by the effective user ID.
[EINVAL]         The last component of the path is ‘..’.
[EIO]            An I/O error occurred while updating the directory entry.
[EROFS]          The name resides on a read-only file system.
[EFAULT]         The path argument points outside the process’s allocated address space.
SEE ALSO
unlink(2), mount_unionfs(8)
HISTORY
The undelete() system call first appeared in 4.4BSD-Lite. | http://manpages.ubuntu.com/manpages/jaunty/man2/undelete.2freebsd.html | CC-MAIN-2013-20 | refinedweb | 290 | 56.55 |
# $Id: Htpasswd.pm,v 1.3 2000/05/24 20:27:48 kayos Exp $
package Tie::Htpasswd;
use strict;
use vars qw($VERSION);
use Apache::Htpasswd;
$VERSION = sprintf("%d.%02d",q$Revision: 1.3 $ =~ /(\d+)\.(\d+)/);
my $F_FILENAME = 0;
my $F_DATA = 1;
sub TIEHASH {
my ($pkg, $filename) = @_;
my $obj = [];
$obj->[$F_FILENAME] = $filename;
$obj->[$F_DATA] = new Apache::Htpasswd($filename);
return (bless $obj, $pkg);
}
sub FETCH {
my ($self,$key) = @_;
return $self->[$F_DATA]->fetchPass($key);
}
sub STORE {
my ($self,$key,$value) = @_;
if($self->[$F_DATA]->fetchPass($key)) {
$self->[$F_DATA]->htpasswd($key,$value,1);
} else {
$self->[$F_DATA]->htpasswd($key,$value);
}
return $self->[$F_DATA]->fetchPass($key);
}
sub DELETE {
my ($self,$key) = @_;
my $prev_value = $self->[$F_DATA]->fetchPass($key);
$self->[$F_DATA]->htDelete($key);
return $prev_value;
}
sub EXISTS {
my ($self,$key) = @_;
my $result = $self->[$F_DATA]->fetchPass($key);
return ( $result );
}
sub DESTROY {
my ($self) = @_;
undef $self->[$F_DATA];
}
1;
You may also want to consider having the module import and use some of the methods from Apache::Htpasswd, which already has a locking scheme.
Another possible problem is that you do not re-read the password file before writing it. If you script opens the file, and while it does other things another process writes to the file, you will wipe out whatever changes were made.
The DESTROY function should only change what is needed, as opposed to writing out whatever you have saved.
Consider this:
Thanks. I was just going to add locking and all that, so I
decided to check out Perl Monks to see if anyone commented
on it. I'll check out Apache::Htpasswd.
Building object instances: soul-killing drudgery or trivial annoyance?
Are you one of those who, when asked to build an object with a large set of properties, gnash their teeth and wail “there must be a better use of my precious life-line than this?” Do you simply accept that when life throws a bunch of unset properties in your direction, all you can do is roll up your sleeves and get setting?
If you are in the former camp, rather than the latter, then Groovy 2.3 has something good for you!
Introducing the groovy.transform.builder package…more ways to create an object than you can shake a mouse at!
Building complex objects is now as easy as pie, as the following little example shows.
import groovy.transform.*
import groovy.transform.builder.*

// NB: 'true' Gb
Long.metaClass.getGb = { -> delegate.longValue() * 1024L * 1024L * 1024L }
Double.metaClass.getInch = { -> 2.54D * delegate.doubleValue() }

@ToString(includeNames=true)
@Builder
class User {
    String name
    String extension
}

@ToString
@Builder(builderStrategy=SimpleStrategy, prefix="", excludes=['id'])
class Computer {
    UUID id = UUID.randomUUID()
    String vendor
    String type
    Double screen // cm
    Long ram      // bytes
    Long disk     // bytes
    User user
}

// need a class to embed logging into
@groovy.util.logging.Log
class Main {
    void goForIt() {
        def built = new Computer()
            .vendor("Apple")
            .type("Macbook Pro")
            .screen(15.6D.inch)
            .ram(16L.gb)
            .disk(512L.gb)
            .user(User.builder().name("Bob").extension("1234").build())
        log.info built.toString()
    }
}

new Main().goForIt()
Take a good look at the above; there are actually two styles of builder in use here: Computer and User configure the builder facility in two different ways.
A picture paints a thousand words, as they say:
Choice is good, no?
This is only a quick overview of what is actually a very configurable facility that probably has enough in it to satisfy all but the most rabid properties setter hater.
Never again should you write Groovy code like this:
Thing t = new Thing()
t.something('x')
t.somethingElse(42)
t.kill(8)
t.me('now')
I’ll be watching!
It is worth taking a look at the documentation to see what else this new feature can do to help you reclaim your life and dignity.
Just for the hell of it, I have also thrown in the very useful @ToString and @Log annotations…what fun!
I have also done a teeny-tiny bit of metaprogramming: take a look at how Long and Double are modified to give us a taste of DSL-y goodness.
And just to wrap things up neatly, be aware that Groovy has always had a few tools to make your life easier: the documentation specifically calls out the with statement (which I’ve raved about before) and maps as named parameters.
We’re spoiled for choice, we really are!
PS: I took the title for this posting from the title of Don Henley’s excellent second album. Music to angrily program by!
Hi,
Please find below the requirement details:
1)Enter the number of carriers:
2(say)
2)Enter the details of carrier1:
15(amount of time taken to deliver an item
125.50(cost to deliver an item
3)Enter the details of carrier2:
12
150.50
But when I try to run the loop, the code runs up to the integer detail of carrier 2, and an ArrayIndexOutOfBoundsException is thrown when it reaches the double detail.
Code:
public class sample{
int a;
public static void main(String args[])
Scanner sc = new Scanner(System.in);
System.out.println("Enter the number of carriers");
a = sc.nextInt();
int[] b = new int[];
double[] c = new Double[];
System.out.println("Enter the carriers details:");
for(i=0;i<a;i++){
b = sc.nextInt();
c = sc.nextDouble();
}
But when I run the code for, say, a=2, instead of reading the double value of the 2nd carrier, an ArrayIndexOutOfBoundsException is displayed.
Can you please let me know where i am getting wrong in the concept?
Thanks in advance. | http://www.javaprogrammingforums.com/collections-generics/40801-hi-i-am-new-java-i-am-facing-exception-error-while-running-loop-int-double-input.html | CC-MAIN-2018-05 | refinedweb | 171 | 58.08 |
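For reference, here is one way the intended program could be written. This is a hedged sketch: the class name, the fixed input string, and the parallel-array layout are choices made for the example, not taken from the original post. The key fixes are: arrays need an explicit length (new int[n] - new int[] without a size does not compile), the array for costs must be double[] rather than Double[], values must be assigned to elements (b[i] = ...) rather than to the array variable, and fields used from the static main must themselves be static or local.

```java
import java.util.Locale;
import java.util.Scanner;

public class Carriers {
    static int[] times;      // delivery time per carrier
    static double[] costs;   // delivery cost per carrier

    static void read(Scanner sc, int n) {
        // Size the arrays up front; a wrong (or missing) length is what
        // produces ArrayIndexOutOfBoundsException inside the loop.
        times = new int[n];
        costs = new double[n];   // note: double[], not Double[]
        for (int i = 0; i < n; i++) {
            times[i] = sc.nextInt();     // assign to an element, not to the array itself
            costs[i] = sc.nextDouble();
        }
    }

    public static void main(String[] args) {
        // Fixed input for the example; replace with new Scanner(System.in) for console use.
        Scanner sc = new Scanner("2 15 125.50 12 150.50").useLocale(Locale.US);
        int n = sc.nextInt();
        read(sc, n);
        System.out.println(times[1] + " " + costs[1]);  // 12 150.5
    }
}
```

Locale.US is set explicitly so nextDouble() accepts the dot as the decimal separator regardless of the machine's default locale.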
Read and Generate QR Code With 5 Lines of Python Code
Introduction
QR Code is the most popular two-dimensional barcode; it is widely used for document management, track and trace in the supply chain and logistics industry, mobile payment, and even the “touchless” health declaration and contact tracing during the COVID-19 pandemic. Compared to a 1D barcode, a QR code can be very small in size yet hold more information, and it is also easier to scan, as you can scan it from any direction.
In this article, I would be sharing with you how to use some pure Python packages to generate QR code and read QR code from images.
Generate QR code with Python
To generate QR code, we will use a Python package called qrcode. Below is the pip command to install this package:
# install qrcode together with pillow
pip install qrcode[pil]

# or install qrcode if you already have pillow installed
pip install qrcode
As it has a dependency on the Pillow package, you will need to have that package installed as well. Once you have these packages ready, let’s import the modules at the beginning of our code:
import qrcode
from PIL import Image
Generating a QR code with this qrcode library can be easily done with 1 line of code:
img = qrcode.make('QR Code')
If you check the “img” object from Jupyter Notebook, you can see the below image:
This make function provides a quick way to generate QR code with all the default parameters. To specify the parameters like the size, style or border of the boxes, you can use the QRCode class. For instance:
qr = qrcode.QRCode(
    version=1,
    error_correction=qrcode.constants.ERROR_CORRECT_L,
    box_size=10,
    border=4,
)
Here are the explanations for these parameters:
version – QR codes come in 40 different sizes, indicated by the version parameter above; version 1 represents a 21×21 matrix. You can use (v-1)*4 + 21 to calculate the size of the matrix for each version number.
error_correction – specifies the error correction level, which controls how many error correction code blocks are inserted in order to achieve the error correction capability. In other words, if you want your barcode to be readable even when it is damaged (or has a logo/image placed on it), you may increase the error correction level, but this also makes your barcode denser, since it reduces how much data the symbol can hold at a given size.
box_size – the number of pixels of the square box
border – the thickness of the square box border
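The version-to-size formula above can be checked with a couple of lines of Python (an illustrative helper, not part of the qrcode package):

```python
def qr_modules_per_side(version):
    """Modules per side of a QR symbol for versions 1 through 40."""
    if not 1 <= version <= 40:
        raise ValueError("QR code versions range from 1 to 40")
    return (version - 1) * 4 + 21

print(qr_modules_per_side(1))   # 21  (a 21x21 matrix)
print(qr_modules_per_side(40))  # 177
```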
Once you have a QRCode instance, you can use the below code to specify the barcode data, color and generate a image:
# barcode content
qr.add_data('codeforests.com')

# auto adjust the size
qr.make(fit=True)

# specifying barcode color
img = qr.make_image(fill_color="#040359", back_color="#f7f7fa")
If you check the “img” object from Jupyter Notebook again, you shall see something similar to below:
To use the same barcode style to generate new barcode, you can just clear the data and then re-generate a new image object:
qr.clear()
qr.add_data('Python Tutorials')
img2 = qr.make_image(fill_color="#015b82", back_color="TransParent")
When inspecting the “img2” in Jupyter Notebook, you shall see below:
You can simply use the “save” method to save it into an image file since it is a Pillow Image object:
img2.save("qr_code.png")
The qrcode package cannot directly generate multiple QR codes into one image, if you need that, you may use the Pillow package to combine the images. For instance:
# create a blank image
new_img = Image.new("RGBA", (600, 350), "#fcfcfc")
new_img.paste(img, (0, 0))
new_img.paste(img2, (300, 0))
new_img.save("multi-QR-code.png")
The above will create a new image and combine the two barcode images into one. If you check the saved image file, you shall see:
With this package, you can also generate styled QR codes, e.g. rounded corners, radial gradients, an embedded image, or different color masks. You can take a look at the samples on its official site.
Read QR Code in Python
To read QR code, we will use another Python package called pyzbar. You can use below pip command to install it:
pip install pyzbar
This library is also very easy to use: you can directly pass a Pillow Image object, a numpy.ndarray, or raw bytes to the decode method to detect the barcodes. For instance:
import pyzbar.pyzbar as pyzbar
from pyzbar.pyzbar import ZBarSymbol
from PIL import Image

input_image = Image.open("multi-QR-code.png")
decoded_objects = pyzbar.decode(input_image, symbols=[ZBarSymbol.QRCODE])
The decode method returns a list of barcode objects detected from the image with their position info. You can use the symbols parameter to restrict what type of barcodes you want to detect. When this parameter is not specified, all its supported barcode types will be checked.
From the above, you can further loop through the list to get the actual content data of the barcodes:
for obj in decoded_objects:
    zbarData = obj.data.decode("utf-8")
    print(zbarData)
You shall see the below result:
In your real-world project, if you need to read one barcode among multiple barcodes in a document, you may try to use the symbols parameter to restrict the barcode types, or use a regular expression to validate the detected barcode data in order to find the correct one you need.
If you need to do a lot of image pre-processing, or even read barcodes from video or a webcam, you may install OpenCV and use its detectAndDecodeMulti method to read the QR codes.
Conclusion
In this article, we have reviewed two simple but useful packages – qrcode for generating QR codes, and pyzbar for reading the content of a QR code. There are quite a few other Python packages for generating all sorts of one- or two-dimensional barcodes; some are pure Python packages and some are Python wrappers. You may take a look at the summary table from this blog if a specific barcode type you need is not supported by these two packages.
Job control in vasp.py
Table of Contents
One of the things we often need to do in DFT calculations is set up a series of calculations, run them all, and then do some analysis. The new vasp.py helps us with this. We can create a list of calculators, and then run each one of them. We run all our jobs asynchronously in a queue system, with no real control over when they start and when they end. A challenge here is that we usually need some way to stop the script after the jobs are started, so we can wait for them to finish before proceeding with the analysis.
We address this challenge by storing a reference to all calculators created on the Vasp class, and defining some class methods that can work on all of them. For example the Vasp.run method will run each calculator (which submits a job for each one to the queue system). The default behavior is then to exit, but we can also tell it to wait, which will cause it to periodically check if the calculations are done before proceeding.
import vasp
help(vasp.Vasp.run)
Help on method run in module vasp.vasp_core:

run(cls, wait=False) method of __builtin__.type instance
    Convenience function to run calculators.

    The default behavior is to exit after doing this. If wait is True,
    it will cause it to wait with the default args to Vasp.wait.
    If wait is a dictionary, it will be passed as kwargs to Vasp.wait.
With our typical workflow of scripting in org-mode this is very convenient. We can write one script that sets up the calculations, runs them, and later when they are done, performs the analysis. We have to run this script two times. The first time submits the jobs and exits at the Vasp.run() line (see Output from first run). After the jobs are done, we run it a second time, and we get the results and make the plot!
from vasp import Vasp
from ase.lattice.cubic import BodyCenteredCubic

NUPDOWNS = [4.0, 4.5, 5.0, 5.5, 6.0]

# These are the same for all calculations.
fixed_pars = dict(xc='PBE',
                  encut=200,
                  kpts=[4, 4, 4],
                  ispin=2,
                  atoms=BodyCenteredCubic(directions=[[1, 0, 0],
                                                      [0, 1, 0],
                                                      [0, 0, 1]],
                                          size=(1, 1, 1),
                                          symbol='Fe'))

# Prepare a list of calculators
calcs = [Vasp('bulk/Fe-bcc-fixedmagmom-{0:1.2f}'.format(B),
              nupdown=B,
              **fixed_pars)
         for B in NUPDOWNS]

# This will start each calculation, and if they are not ready abort the script.
# If they are ready, we will get the energies.
energies = Vasp.run()

import matplotlib.pyplot as plt
plt.plot(NUPDOWNS, energies)
plt.xlabel('Total Magnetic Moment')
plt.ylabel('Energy (eV)')
plt.savefig('Fe-fixedmagmom.png')
This style works especially well for our workflow with org-mode.
1 Output from first run
Here is the output of the script the first time I ran it. It just tells me that jobs have been submitted and are queued. The output is a bit verbose, because of the way the exception handling system works in vasp.py. Basically, there ends up being multiple calls to self.update before the script exits.
energy not in {}. Calc required.
energy not in {}. Calc required.
/home-research/jkitchin/dft-book/blog/source/org/bulk/Fe-bcc-fixedmagmom-4.00 submitted: 1397190.gilgamesh.cheme.cmu.edu
/home-research/jkitchin/dft-book/blog/source/org/bulk/Fe-bcc-fixedmagmom-4.00 Queued: 1397190.gilgamesh.cheme.cmu.edu
energy not in {}. Calc required.
energy not in {}. Calc required.
/home-research/jkitchin/dft-book/blog/source/org/bulk/Fe-bcc-fixedmagmom-4.50 submitted: 1397191.gilgamesh.cheme.cmu.edu
/home-research/jkitchin/dft-book/blog/source/org/bulk/Fe-bcc-fixedmagmom-4.50 Queued: 1397191.gilgamesh.cheme.cmu.edu
energy not in {}. Calc required.
energy not in {}. Calc required.
/home-research/jkitchin/dft-book/blog/source/org/bulk/Fe-bcc-fixedmagmom-5.00 submitted: 1397192.gilgamesh.cheme.cmu.edu
/home-research/jkitchin/dft-book/blog/source/org/bulk/Fe-bcc-fixedmagmom-5.00 Queued: 1397192.gilgamesh.cheme.cmu.edu
energy not in {}. Calc required.
energy not in {}. Calc required.
/home-research/jkitchin/dft-book/blog/source/org/bulk/Fe-bcc-fixedmagmom-5.50 submitted: 1397193.gilgamesh.cheme.cmu.edu
/home-research/jkitchin/dft-book/blog/source/org/bulk/Fe-bcc-fixedmagmom-5.50 Queued: 1397193.gilgamesh.cheme.cmu.edu
energy not in {}. Calc required.
energy not in {}. Calc required.
/home-research/jkitchin/dft-book/blog/source/org/bulk/Fe-bcc-fixedmagmom-6.00 submitted: 1397194.gilgamesh.cheme.cmu.edu
/home-research/jkitchin/dft-book/blog/source/org/bulk/Fe-bcc-fixedmagmom-6.00 Queued: 1397194.gilgamesh.cheme.cmu.edu
Warnings Plugin 5.0 (White Mountain) Public Beta
Jenkins' Warnings plugin collects compiler warnings or issues reported by static analysis tools and visualizes the results. The plugin (and the associated static analysis plugin suite) has been part of the Jenkins plugin eco-system for more than ten years now. In order to optimize user experience and support Pipeline, a major rewrite of the whole set of plugins was necessary. This new version (code name White Mountain) is now available as a public beta. Please download and install this new version and help us to identify problems before the API is sealed.
The new release is available in the experimental update center. It has built-in support for almost a hundred static analysis tools (including several compilers); see the list of supported report formats.
Features overview
The Warnings plugin now bundles the functionality of the whole static analysis suite in a single plugin. In the next sections, I'll show the new and enhanced features in more detail.
One plugin for all tools
Previously, the warnings plugin was part of the static analysis suite that provided the same set of features through several plugins (CheckStyle, PMD, Static Analysis Utilities, Analysis Collector, etc.). In order to simplify the user experience and the development process, these plugins and the core functionality have been merged into the warnings plugin. The other plugins are no longer required and will not be supported in the future. If you currently use one of these plugins, you should migrate to the new recorders and steps as soon as possible. I will still maintain the old code for a while, but the main development effort will be spent on the new code base.
The following plugins have been integrated into the beta version of the warnings plugin:
Android-Lint Plugin
CheckStyle Plugin
CCM Plugin
Dry Plugin
PMD Plugin
FindBugs Plugin
All other plugins still need to be integrated or need to be refactored to use the new API.
New pipeline support
Requirements for using the Warnings plugin in Jenkins Pipeline can be complex and sometimes controversial. In order to be as flexible as possible, I decided to split the main step into two individual parts, which can then be used independently of each other.
Simple pipeline configuration
The simple pipeline configuration is provided by the step
recordIssues. This step is automatically derived from the
FreeStyle job recorder: it scans for issues in a given set of files (or in the console log) and reports these issues
in your build. You can use the snippet generator to create a working snippet that calls this step. A typical example
of this step is shown in the following example:
recordIssues enabledForFailure: true, tools: [[pattern: '*.log', tool: [$class: 'Java']]], filters: [includeFile('MyFile.*.java'), excludeCategory('WHITESPACE')]
In this example, the files '*.log' are scanned for Java issues. Only issues with a file name matching the pattern 'MyFile.*.java' are included. Issues with category 'WHITESPACE' will be excluded. The step will be executed even if the build fails. The recorded report of warnings will be published under the fixed URL 'https://[your-jenkins]/job/[your-job]/java'. The URL or name of the report can be changed if required.
Advanced Pipeline Configuration
Sometimes publishing and reporting issues using a single step is not sufficient. For instance, you may build your product using several parallel steps and want to combine the issues from all of these steps into a single result. Then you need to split scanning and aggregation. Therefore, the plugin provides the following two steps, which are combined by using an intermediate result object:
scanForIssues: this step scans a report file or the console log with a particular parser and creates an intermediate report object that contains the report.
publishIssues: this step publishes a new report in your build that contains the aggregated results of one or several
scanForIssues steps.
You can see the usage of these two steps in the following example:
def java = scanForIssues tool: [$class: 'Java']
def javadoc = scanForIssues tool: [$class: 'JavaDoc']
publishIssues issues: [java, javadoc], filters: [includePackage('io.jenkins.plugins.analysis.*')]

def checkstyle = scanForIssues tool: [$class: 'CheckStyle'], pattern: '**/target/checkstyle-result.xml'
publishIssues issues: [checkstyle]

def pmd = scanForIssues tool: [$class: 'Pmd'], pattern: '**/target/pmd.xml'
publishIssues issues: [pmd]

publishIssues id: 'analysis', name: 'White Mountains Issues', issues: [checkstyle, pmd], filters: [includePackage('io.jenkins.plugins.analysis.*')]
Filtering issues
The created report of issues can be filtered afterwards. You can specify an arbitrary number of include or exclude filters. Currently, there is support for filtering issues by module name, package or namespace name, file name, category or type.
An example pipeline that uses such a filter is shown in the following snippet:
recordIssues tools: [[pattern: '*.log', tool: [$class: 'Java']]], filters: [includeFile('MyFile.*.java'), excludeCategory('WHITESPACE')]
Quality gate configuration
You can define several quality gates that will be checked after the issues have been reported. These quality gates let you modify Jenkins' build status so that you immediately see whether the desired quality of your product is met. A build can be set to unstable or failed for each of these quality gates. All quality gates use a simple metric: the maximum number of issues that can be found and still pass a given quality gate.
An example pipeline that enables a quality gate for 10 warnings in total or 1 new warning is shown in the following snippet:
recordIssues tools: [[pattern: '*.log', tool: [$class: 'Java']]], unstableTotalHigh: 10, unstableNewAll: 1
Issues history: new, fixed, and outstanding issues
One highlight of the plugin is the ability to categorize issues of subsequent builds as new, fixed and outstanding.
Using this feature makes it a lot easier to keep the quality of your project under control: you can focus only on those warnings that have been introduced recently.
Note: the detection of new warnings is based on a complex algorithm that tries to track the same warning in two different versions of the source code. Depending on the extent of the modification of the source code, it might produce some false positives, i.e., you might still get some new and fixed warnings even if there should be none. The accuracy of this algorithm is still ongoing research and will be refined in the next couple of months.
Severities
The plugin shows the distribution of the severities of the issues in a chart. It defines the following default severities, but additional ones might be added by plugins that extend the warnings plugin.
Error: Indicates an error that typically fails the build
Warning (High, Normal, Low): Indicates a warning of the given priority. Mapping to the priorities is up to the individual parsers.
Note that not every parser is capable of producing warnings with different severities. Some of the parsers simply use the same severity for all issues.
Build Trend
In order to see the trend of the analysis results, a chart showing the number of issues per build is also shown. This chart is used in the details page as well as in the job overview. Currently, the type and configuration of the chart are fixed. This will be enhanced in future versions of the plugin.
Issues Overview
You can get a fast and efficient overview of the reported set of issues in several aggregation views. Depending on the number or type of issues you will see the distribution of issues by
Static Analysis Tool
Module
Package or Namespace
Severity
Category
Type
Each of these detail views is interactive, i.e. you can navigate into a subset of the categorized issues.
Issues Details
The set of reported issues is shown in a modern and responsive table. The table is loaded on demand using an Ajax call. It provides the following features:
Pagination: the number of issues is subdivided into several pages which can be selected by using the provided page links. Note that currently the pagination is done on the client side, i.e. it may take some time to obtain the whole table of issues from the server.
Sorting: the table content can be sorted by clicking on any of the table columns.
Filtering, Searching: you can filter the shown issues by entering some text in the search box.
Content Aware: columns are only shown if there is something useful to display. I.e., if a tool does not report an issue category, then the category column will be automatically hidden.
Responsive: the layout should adapt to the actual screen size.
Details: the details message for an issue (if provided by the corresponding static analysis tool) is shown as a child row within the table.
Remote API
The plugin provides two REST API endpoints.
Summary of the analysis result
You can obtain a summary of a particular analysis report by using the URL [tool-id]/api/xml (or [tool-id]/api/json). The summary contains the number of issues, the quality gate status, and all info and error messages.
Details of the analysis result
The reported issues are also available as REST API. You can either query all issues or only the new, fixed, or outstanding issues. The corresponding URLs are:
[tool-id]/all/api/xml: lists all issues
[tool-id]/fixed/api/xml: lists all fixed issues
[tool-id]/new/api/xml: lists all new issues
[tool-id]/outstanding/api/xml: lists all outstanding issues
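As an illustration, these endpoints could be queried with curl (the host, job name, and build number here are hypothetical; 'checkstyle' is the tool id from the examples above):

```shell
# Summary of the CheckStyle analysis run (issue counts, quality gate status):
curl -s "https://jenkins.example.com/job/my-job/123/checkstyle/api/json"

# Only the issues that are new in build 123:
curl -s "https://jenkins.example.com/job/my-job/123/checkstyle/new/api/json"
```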
How You Can Help
I hope these new features are useful for everyone! Please download or install this new release and test it in your jobs:
Convert some of your jobs to the new API and test the new (and old) features (based on your requirements).
Read all labels carefully; I'm not a native speaker, so some descriptions might be misleading or incorrect.
Check the new URLs and names of the parsers, see list of supported report formats. These can’t be changed after the beta testing.
If you find a problem, incorrect phrase, typo, etc. please report a bug in Jira (or even better: file a PR in GitHub).
This has been a brief overview of the new features of the Warnings plugin in Jenkins. For more, be sure to check out my talk at "DevOps World | Jenkins World" where I show more details of the Warnings plugin! | https://www.jenkins.io/blog/2018/09/11/speaker-blog-warnings-plugin/ | CC-MAIN-2021-49 | refinedweb | 1,683 | 53.81 |
The GPL License and Linking: Still Unclear After 30 Years
It all started with a simple idea from my colleague who maintains our ipumsr R package, which we released on CRAN under the Mozilla Public License v2. His idea was: "I'd like to fork the readr package from CRAN and add functionality to deal with hierarchical data so I can use it in ipumsr. But readr has a GPLv3 license."
From there, it got anything but simple - we unwittingly waded into a decades-old debate.
Understanding Why GPL Exists
It’s useful to spend a few minutes getting some background context on the GNU General Public License (GPL).
Richard Stallman, author of the GPL and founder of the Free Software Foundation, wrote the first version of the Emacs text editor in 1975 and made the source freely available. In 1982, James Gosling wrote the first C version of Emacs. For a while, Gosling made the source code of his version freely available too, and Stallman subsequently used Gosling Emacs in 1985 to create GNU Emacs. Later Gosling sold the rights for Gosling Emacs to UniPress, which then told Stallman to stop distributing the source code. Stallman was forced to comply. After that experience, he set out to create a license to ensure that would never happen again. He wanted to preserve access to the source code of any derivative software that benefited from free software.
There are two key properties Stallman put into the GPL that are critical to this discussion. The first is that it's a very "sticky" or "viral" license. The GPL is what is known as a strong copyleft license, which means that all derived works of the GPL'ed work must also be licensed under GPL if distributed. The second is that the GPL requires that the source code be freely available to anyone who wants it. Combined, these properties mean that you cannot use GPL code in any software system you distribute unless you also make your source code available under a GPL license too. (Strictly speaking, the plain GPL's obligations are triggered by distribution; offering software as a cloud-based service without distributing it is the case the separate AGPL license was created to cover.)
IPUMS, We Have a Problem
In contrast, our ipumsr package is released under the Mozilla Public License v2 (MPLv2), which is the preferred license we use at IPUMS when releasing open source code. MPL is a weak copyleft license, which means that if you modify MPL'ed code and distribute it, you need to make the source code for those modifications available, but you're not required to also make available your code that simply uses the MPL'ed code. We chose MPL because it strikes a good balance between keeping our own work, including improvements to it, freely available while not restricting what people can do with their own software just because they find our library useful. In other words, we don't want to impose our licensing philosophy on other people beyond our own code. We want to preserve their freedom to license their code as they wish.
It’s also worth noting the third major class of license, the permissive license. The best known examples of a permissive license are probably the MIT and Apache licenses. The main idea of a permissive license is to place a few restrictions as possible on the use of the code (not even requiring that distributed modifications of the code also be released as open source). If you look at the MIT license for example, it essentially says only that you must keep the copyright notice present in the code, and that if you break anything you’re on your own. That’s it.
Sometimes the restrictions and "viral" license propagation of a strong copyleft license are exactly what you want, but they're not what we want, so my colleague knew we had an issue to solve. He had some ideas about how to work around that and still comply with the GPL, and he was coming to me for a second opinion.
Let’s Create an Intermediary
Our idea was that we would create a third package, “hipread” (hierarchical ipums reader). hipread would be a fork of readr, which we would then modify to add the hierarchical support. We would release hipread as GPL, which is naturally required since we would take GPL readr code and modify it to make hipread.
Essentially, hipread would be a small wrapper/extension of readr, we’d release it as GPL, we’d use that library in our ipumsr library, and we’re all good. Right?
Not so fast…
What Constitutes a Derived Work?
When we started researching our proposed solution to see if it met all licensing requirements and goals, we determined that the first part of our idea - writing hipread as a wrapper extension around readr and releasing hipread as GPL - would be a fine option. However, we then came across quite the surprise…
It seems that there’s quite a bit of debate around whether or not simply using a GPL’ed library (e.g. via an
import or
use statement) in your code constitutes creating a derived work and therefore subjects your code to the GPL license!
Just using hipread in ipumsr might require ipumsr to be released under GPL? Yikes. And the more we researched it, the more confusing it became.
Let’s first examine what the GPLv3 license itself has to say about this. The relevant section is titled “Corresponding Source” in the license.
“Corresponding Source for a work means all the source code needed to…run the object code… Corresponding Source includes…the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require.” Well, that sure makes it sound like if ipumsr imports readr, it has created a larger derivative work of readr.
The GPL FAQ confirms this strict interpretation:
Linking a GPL covered work statically or dynamically with other modules is making a combined work based on the GPL covered work. Thus, the terms and conditions of the GNU General Public License cover the whole combination.
I suppose this isn’t that surprising. Agree with Stallman’s viewpoint or not, given his experience it makes sense that his intent would be that anyone who uses GPL’ed code to build a derived software system should have to release their source code back to the world, and that his interpretation of what makes a derived system would be fairly broad.
On that latter point, the FAQ goes on to carve out borderline cases, such as treating communication that is limited to invoking a plug-in's main function as a gray area.
Now we’re getting oddly situational about what does and does not constitute a combined work. That last bit about “only invoking main” is a bit confusing in how that would apply to the readr-hipread-ipumsr relationship. readr has a function to read a csv file which is used by hipread which is in turn used by ipumsr. The file to read is an option to that function. Is that use case covered under this exception? Or because the function is returning a data structure which ipumsr is going to interact with, are we indeed creating a combined work? Not very clarifying.
It gets even stranger when you dig into the FSF's answer to this FAQ question: What is the difference between an "aggregate" and other kinds of "modified versions"? The interesting part of this answer is not whether we're distributing an aggregate or not, but rather the insight this answer offers into what the FSF considers to be a single program. They come back to "if the semantics of the communication are intimate enough", but also assert "this is a legal question, which ultimately judges will decide."
However, I think it’s fair to conclude at this point that in the opinion of the FSF, they want what we’re doing to be bound by GPL. Our ipumsr library doesn’t work without significant interaction with readr, so therefore in their eyes we’ve created a combined work.
Is the FSF Position Enforceable?
For a counterpoint, we can turn to Larry Rosen, former general counsel of the Open Source Initiative. He wrote a concise Linux Journal article on his opinion of what constitutes a derivative work in 2003. His key conclusion is that merely linking to a program does not, by itself, create a derivative work,
and he goes on to assert why he feels this is important:
You should care about this issue to encourage that free and open-source software be created without scaring proprietary software users away. We need to make sure that companies know, with some degree of certainty, when they’ve created a derivative work and when they haven’t.
Rosen speaks in more depth about the topic as it specifically relates to the GPL in a 3-page white paper titled “The Unreasonable Fear of Infection” and comes to the same strong conclusion that linking to GPL code is not enough to meet the definition of a derivative work.
Malcolm Bain, a Barcelona lawyer, also explored this topic in depth in a 2011 white paper, but frustratingly concludes, more or less, “it’s unclear”.
This pattern of confusion is reflected across the internet as a whole. You can find plenty of people who argue that using a library does expose your code to the GPL conditions. You can find plenty who say no, it doesn’t.
Ultimately, it has not been sorted out in a court yet, so there’s no clear answer as to the enforceability of the GPL as the FSF wants it to be. In the meantime, lots of folks are avoiding the GPL because of this uncertainty.
LGPL: A Failed(?) Attempt to Address This Problem
By 1991, shortly after the GPL was created, people started to realize that while the GPL is useful for protecting whole software applications, it created complications for library code. The FSF subsequently released the first version of the GNU Library General Public License, now known as the Lesser General Public License (LGPL), as a compromise between the strong copyleft of the GPL and the permissive nature of licenses like the MIT license. The LGPL is a weak copyleft license and it’s very similar to the MPL that we use.
The basic idea of a “weak copyleft” license is “I want to ensure that if you modify my code and distribute the modified code, then you give the source code for those modifications back to the world freely, but I really don’t care to restrict how you can simply use my code as part of your larger system.” If someone writes a library and wants to ensure:
- that the source code is available for modified copies of the library that are distributed, and
- that application developers using that library have the freedom to license their application code as they want,
then the LGPL was designed for those library developers.
Unfortunately, the Free Software Foundation doesn’t seem to like their own weak copyleft license much. In addition to intentionally labeling it “Lesser”, they specifically encourage folks to NOT use LGPL for libraries. They argue that doing so allows free libraries to be used in proprietary software, and that we shouldn’t be giving proprietary software companies any more advantages. Rather, we should create unique functionality, release it as GPL, and force companies to release their code for free if they want to use the library functionality.
LGPL has remained much less common than the GPL. According to this study, as of 2016 LGPL was 6% of the open source license “market”, whereas GPL(v2 and v3) was 34%.
In any case, readr uses GPL, not LGPL, so LGPL can’t help us with our ipumsr problem.
So… What Do We Do for Our ipumsr Problem?
It seems we’re left with three less-than-ideal choices:
- Release ipumsr under the GPL, which goes against our desire to let anyone benefit from ipumsr, whether that’s free software that prefers to use a different license than GPL, or commercial software, or whatever.
- Jump in with many, many others in the R community (and elsewhere) and use GPL’ed libraries in our non-GPL’ed code, and wait for the legal community to clarify the issue, if ever.
- Write our own library that provides the same functionality as readr and license it as we wish.
The goal we’re trying to achieve here is to simply make IPUMS data easier to use for R users. We don’t charge for IPUMS data, and if you know anything about our mission, we strongly believe in keeping data free. We’re not going to profit in any way from incorporating readr in our library.
It’s true that someone downstream may take ipumsr and use it in a way that they profit from it. I don’t know how the authors of readr would feel about that. At IPUMS, we’d be ok with that. If they distributed a modified version of our library, we’d want the source code for those modifications to be released back to the public, and the MPLv2 license that we use formalizes that wish. But their own code that simply uses our library? That’s for them to decide.
So, we’re going with option 2. It doesn’t feel great, but we’re going with the option that feels most pragmatic and is in the spirit with being as helpful as we can to the R community. If the enforceability of the GPL on code that simply uses a library is ever sorted out (and it’s been 30 years, so we’re not holding our breath) we will of course adjust accordingly, but until then, we’re just happy that our library will be available for others to use with few strings attached.
And on a pragmatic note, ipumsr already imports multiple GPL'ed packages from before this issue ever came onto our radar, so we're not creating any additional exposure we didn't already have. That shows our prior ignorance on this topic. But it's also completely in line with what hundreds if not thousands of other CRAN packages are doing today, so perhaps our ignorance can be forgiven?
What About the R Community at Large?
In full disclosure, I am not a member of the R community. I’ve never written R code beyond a few tutorials I did to get the flavor of it. But as an IT Director who is trying to provide guidance to our organization about how we can share our code with the world in the most usable way, the GPL is a big mess that I would prefer to avoid altogether, at least until the linking issue is sorted out.
Usage of the GPL has been in general decline, along with the other copyleft licenses. In fact, between 2012 and 2018, permissive licenses overtook copyleft licenses as the most commonly used open source licenses.
And yet, the R community seems to prefer GPL as one of its favored licenses. If this is due to the community being especially principled about free software, I absolutely respect that. If, on the other hand, this propagation of GPL to so many libraries is simply due to folks being unaware of the implications, perhaps it’s time for a reckoning around this topic.
Putting aside for a moment the motivations for using GPL for so many libraries, the R community definitely has a potential looming disaster around the GPL linking issue. Spend a few minutes clicking around R’s CRAN package repository and see just how many non-GPL packages are importing GPL’ed packages. Just looking at packages which import readr, a random sampling showed almost half of them were distributed with licenses other than GPL. If a court ever were to rule that merely importing a GPL’ed library requires that code to also be GPL’ed, there’s going to be an awful lot of scrambling that would need to happen.
As it turns out, I don’t need to merely wonder about the community’s intentions. The R Consortium conducted a survey last year on this topic. Here’s some of what they found:
- 60% of respondents want other software developers to be able to use their package(s) without imposing license requirements on the software that uses their package (via API), with only 15% disagreeing.
- The most popular license used among respondents is ‘GPL-3’ at 35% with ‘GPL-2 or GPL-3’ a close second at 34% and ‘GPL-2’ next at 24%.
Those two findings confirm that there is indeed a lot of confusion about licensing in the R community. Perhaps it is time for that reckoning after all.
Fran Fabrizio Code · DevCulture
Masquerading on OpenStack Router not working
Hello
I am working on a scenario where I want to implement IP masquerading on a specific OpenStack router port for the outgoing packets from a particular tenant network.
I tried sudo ip netns exec qrouter-1faa6c68-7719-4a2c-b92c-4961cac27ada iptables -t nat -A POSTROUTING -s 20.0.0.0/16 -o qr-b053ed0b-43 -j MASQUERADE, where qrouter-1faa6c68-7719-4a2c-b92c-4961cac27ada is the namespace of the OpenStack router and qr-b053ed0b-43 is the outgoing port (taken from ifconfig -a output) where I want to do the masquerading.
When I run sudo ip netns exec qrouter-1faa6c68-7719-4a2c-b92c-4961cac27ada iptables -t nat -L -v, the MASQUERADE rule is shown under POSTROUTING. However, in reality it does not seem to be working, because the packets I receive at the other end still have their original source IP addresses.
Thanks in advance.
Hi all,
I want to get the position of the cursor on the command line and also to move it to a specific location, but I don't know how to accomplish that.
I have tried cout.seekp() and cout.tellp(), but they didn't work as I hoped. seekp() and tellp() do work with files, but not with the command line: cout.tellp() returns -1, showing that tellp() has failed, and cout.seekp() just keeps printing at the current cursor position instead of moving to the value I passed to seekp() as an argument.
The following code is the one I tried.
//17-july-09 14.02
#include <iostream>
using namespace std;

int main ()
{
    int pos = 0;
    cout << "Hello World!";
    pos = cout.tellp ();  //to get the position of cursor
    cout << pos << endl;  //this displays -1
    cout.seekp (400);     //want to take cursor to some other location (like 400)
    cout << "$" << endl;  //and print $ at that position
    //but the cursor remains where it is and prints $ there, not at the specified location
    return 0;
}
Any help would be appreciated.
I am using Windows Vista and MS Visual C++.
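For what it's worth, standard C++ streams have no notion of a screen cursor, which is why tellp() and seekp() only work on seekable targets such as files. Moving the console cursor needs either a platform API (SetConsoleCursorPosition from <windows.h> on Windows consoles) or an ANSI escape sequence, which most modern terminals understand. The sketch below uses the ANSI form; the helper name is mine:

```cpp
#include <sstream>
#include <string>

// Build the ANSI "cursor position" escape sequence ESC [ <row> ; <col> H.
// Rows and columns are 1-based. Most modern terminals honor this; older
// Windows consoles (such as Vista's) need SetConsoleCursorPosition instead.
std::string cursor_to(int row, int col) {
    std::ostringstream seq;
    seq << "\x1b[" << row << ';' << col << 'H';
    return seq.str();
}

// Usage inside main():
//   std::cout << "Hello World!";
//   std::cout << cursor_to(5, 20) << "$";   // $ appears at row 5, column 20
```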
In my startup script I use an "import ArcPy" in order to create a new version of my ArcSDE database. I changed the Python interpreter to C:\Windows\SysWOW64\python27.dll.
My code (some private parts disguised with <XXXXXX>):
import sys
import datetime
import arcpy
import fme

inWorkspace = r'<XXXXXX>BUDATA/ArcSDE Connecties/GABUAPP_AREAALDATA.sde'
parentVersion = 'sde.DEFAULT'
versionName = 'GISIBsync_' + datetime.datetime.now().strftime('%Y%m%d_%H%M%S')
arcpy.CreateVersion_management(inWorkspace, parentVersion, versionName, "PUBLIC")
To my surprise during runtime it seems as if the startup script is trying to create an Autodesk 3Ds reader (in bold in the output beneath), which is strange since my FME-workspace only contains Oracle Spatial and ArcSDE readers. It also seems that this reader is trying to read the ArcGIS toolboxes (!).
Even stranger: after this unsuccesful creation of a reader, the startup script continues, and finally creates the ArcSDE database version.
Starting translation...
2016-12-14 09:05:02| 0.8| 0.8|INFORM|FME 2015.0 (20150217 - Build 15253 - WIN32)
2016-12-14 09:05:02| 0.8| 0.0|INFORM|FME_HOME is 'C:\Program Files (x86)\FME\'
2016-12-14 09:05:02| 0.8| 0.0|INFORM|FME Desktop Oracle Edition (floating)
2016-12-14 09:05:02| 0.8| 0.0|INFORM|Permanent License.
2016-12-14 09:05:02| 0.8| 0.0|INFORM|Machine host name is: <XXXXXXX>
2016-12-14 09:05:02| 0.8| 0.0|INFORM|START - ProcessID: 1136, peak process memory usage: 53012 kB, current process memory usage: 53012 kB
2016-12-14 09:05:02| 0.8| 0.0|INFORM|FME Configuration: Command line arguments are `C:\Program Files (x86)\FME\fme.exe' `D:/git_repo/Etl/Geodatabase_writer\wb-xlate-1481702696704_1512' `LOG_STANDARDOUT' `YES' `LogCountServerName' `{c70fed7a-9180-42a0-a51c-7f03e505c4b8}'
2016-12-14 09:05:03| 1.0| 0.2|INFORM|Using user-specified Python interpreter from C:\Windows\SysWOW64\python27.dll
2016-12-14 09:05:03| 1.0| 0.0|INFORM|Python version 2.7 successfully loaded
2016-12-14 09:05:03| 1.0| 0.0|INFORM|FME_BEGIN_PYTHON: evaluating python script from string...
2016-12-14 09:05:12| 8.8| 7.8|INFORM|FME Extension for ArcGIS (9) (20140217 - Build 14241 - WIN32)
2016-12-14 09:05:12| 8.8| 0.0|INFORM|FME 2015.0 (Build 15253)
2016-12-14 09:05:25| 20.2| 11.4|INFORM|Creating reader for format: Autodesk 3ds
2016-12-14 09:05:25| 20.2| 0.0|INFORM|Trying to find a DYNAMIC plugin for reader named `3DS'
2016-12-14 09:05:25| 20.3| 0.0|INFORM|Loaded module '3DS' from file 'C:\Program Files (x86)\FME\plugins/3ds/3ds.dll'
2016-12-14 09:05:25| 20.3| 0.0|INFORM|FME API version of module '3DS' matches current internal version (3.7 20141021)
2016-12-14 09:05:25| 20.3| 0.0|INFORM|Creating reader for format: Autodesk 3ds
2016-12-14 09:05:25| 20.3| 0.0|INFORM|Trying to find a DYNAMIC plugin for reader named `3DS'
2016-12-14 09:05:25| 20.3| 0.0|INFORM|FME API version of module '3DS' matches current internal version (3.7 20141021)
2016-12-14 09:05:25| 20.3| 0.0|INFORM|3DS Reader: Reading in Variable FeatureType mode
2016-12-14 09:05:25| 20.3| 0.0|INFORM|3DS Reader: Opening dataset: 'C:/Program Files (x86)/ArcGIS/ArcGISDataReviewer/Desktop10.2/ArcToolbox/Toolboxes/Data Reviewer Tools.tbx'
2016-12-14 09:05:25| 20.3| 0.0|ERROR |3DS Reader: Failed to load file: 'C:/Program Files (x86)/ArcGIS/ArcGISDataReviewer/Desktop10.2/ArcToolbox/Toolboxes/Data Reviewer Tools.tbx'
2016-12-14 09:05:25| 20.3| 0.0|ERROR |3DS Reader: Error opening the dataset: 'C:/Program Files (x86)/ArcGIS/ArcGISDataReviewer/Desktop10.2/ArcToolbox/Toolboxes/Data Reviewer Tools.tbx'
2016-12-14 09:05:25| 20.3| 0.0|INFORM|3DS Reader: Closing reader
2016-12-14 09:05:25| 20.3| 0.0|WARN |UniversalReader -- readSchema resulted in 0 schema features being returned
2016-12-14 09:05:25| 20.3| 0.0|WARN |Reader Parameter(0) = >C:\Program Files (x86)\ArcGIS\ArcGISDataReviewer\Desktop10.2\ArcToolbox\Toolboxes\Data Reviewer Tools.tbx<
2016-12-14 09:05:25| 20.3| 0.0|WARN |Reader Directive(0) = >UNIQUE_ID_ATTRIBUTE<
2016-12-14 09:05:25| 20.3| 0.0|WARN |Reader Directive(1) = >FME_FEATURE_ID<
2016-12-14 09:05:25| 20.3| 0.0|WARN |Reader Directive(2) = >PERSISTENT_CACHE<
I can sympathize. Consider opening a support ticket with Safe.
Not a show-stopper, as the script continues and creates the new version in ArcSDE.
But a bit inconvenient, since I'd rather have my scripts & workspace error-free
The Data Interop might possibly explain it, but that sounds like a question for Safe support.
Is it a show-stopper for your workspace?
Dropping the "import fme" doesn't make a difference.
When I remove the startup script completely, FME doesn't create the Autodesk 3ds Reader. Also, with a startup script that does not import arcpy, the Autodesk reader isn't created.
One thing to note is that our ArcGIS (10.2.2) setup includes the Data Interoperability extension.
Perhaps this leads to a sort of "circular reference"?
(unfortunately I can't remove the Data Interop extension myself to check it out, the entire setup of both FME and ArcGIS have been scripted)
First of all, you can drop the "import fme" statement and see if that makes a difference.
Also, try temporarily removing the startup script and compare the logs, does FME still try to create the Autodesk reader?
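To narrow down which import triggers the reader creation, you can also time and log each import separately, in the spirit of the isolation steps above. This is a stdlib-only sketch (the helper name is mine, not an FME API); arcpy and fme are only importable inside ArcGIS/FME, so only stdlib modules are shown:

```python
import importlib
import time

# Import one module at a time, so the FME log shows exactly which import
# triggers side effects such as the unwanted Autodesk reader creation.
def timed_import(name):
    start = time.time()
    try:
        module = importlib.import_module(name)
        print("imported %s in %.1fs" % (name, time.time() - start))
        return module
    except ImportError as exc:
        print("could not import %s: %s" % (name, exc))
        return None

# In the real startup script the candidates would be
# ["sys", "datetime", "arcpy", "fme"].
for candidate in ["sys", "datetime"]:
    timed_import(candidate)
```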
XMonad.Actions.ConstrainedResize
Description
Lets you constrain the aspect ratio of a floating window (by, say, holding shift while you resize).
Useful for making a nice circular XClock window.
Synopsis
- mouseResizeWindow :: Window -> Bool -> X ()
Usage
You can use this module with the following in your
~/.xmonad/xmonad.hs:
import qualified XMonad.Actions.ConstrainedResize as Sqr
Then add something like the following to your mouse bindings:
, ((modm, button3), (\w -> focus w >> Sqr.mouseResizeWindow w False)) , ((modm .|. shiftMask, button3), (\w -> focus w >> Sqr.mouseResizeWindow w True ))
The line without the shiftMask replaces the standard mouse resize function call, so it's not completely necessary but seems neater this way.
For detailed instructions on editing your mouse bindings, see XMonad.Doc.Extending. | http://hackage.haskell.org/package/xmonad-contrib-0.10/docs/XMonad-Actions-ConstrainedResize.html | CC-MAIN-2015-32 | refinedweb | 119 | 50.84 |
It's been a very busy month, I can tell you.
Ruby/Google is at release 0.4.0 now. This is a beta release and will probably be the last for a while. All the useful features that I can think of are now present and, to the best of my knowledge, functioning correctly.
bash-completion continues to do well. It's in Debian and Mandrake's Cooker release, but was pulled from Red Hat 7.3 for the fourth beta. Oh well...
I'm currently working on Ruby/DICT, which will be a client and client-side library for the DICT protocol, as defined in RFC2229.
I really can't speak highly enough of the Ruby programming language.
Just look at this piece of code:
module HTML
  def method_missing(methId, data, attrs={})
    tag = methId.id2name
    tag.upcase!
    attr_str = ''
    attrs.each do |key, value|
      attr_str << sprintf(' %s="%s"', key.upcase, value)
    end
    sprintf("<%s%s>%s</%s>", tag, attr_str, data, tag)
  end
end
Assuming that has been saved as html.rb, you now have a very simple module that returns strings marked up as HTML, just like CGI.pm in Perl.
You call it like this:
require 'html'
include HTML
puts a('Google', 'href' => '')
puts ul(li('item1') + li('item2') + li('item3'))
The output looks like this:
<A HREF="">Google</A>
<UL><LI>item1</LI><LI>item2</LI><LI>item3</LI></UL>
Voila, no more arsing around with HTML tags in your scripts.
This is how it works. You simply call the tag that you require as if it were a method in the module HTML. However, no such method is defined, so Ruby invokes the method method_missing to handle the error. By default, this method is, itself, undefined, which leads Ruby to raise an exception.
However, in the module above, the method has been defined. As its arguments, it is passed the name of the missing method as a symbol, followed by the original arguments to the non-existent tag method. The last argument is optional: a hash of attribute/value pairs, common in so many HTML tags.
The method then converts the tag and attributes to upper-case, formats the tag around its content, and ultimately returns it to the calling code.
The thing that's remarkable about the above code is how brief it is and how easy it was to write. It took me all of five minutes. Consider how the same code would look in Perl or Python.
Sarah and I are leaving for a long weekend in Vancouver, Canada on Friday. I'm rather looking forward to that. It will be another nice break, probably our last in the run-up to the wedding in August.
It looks like we're going to have to move. Our landlord wants to sell our apartment and we're not about to cough up the $425,000 he wants for the place, so we're going to be on the move again.
I think we're going to end up in Menlo Park this time. We've seen a couple of really nice places and I hope to sign a lease later this week.
It'll be a shame to leave this place. I really like Palo Alto and being close to University Avenue has been fantastic, but there just seems to be a dearth of large apartments in this town. Menlo Park's just up the road, though, so it's not like we'll be travelling very far. | http://www.advogato.org/person/ianmacd/diary.html?start=79 | CC-MAIN-2014-41 | refinedweb | 602 | 82.14 |
The 0.10 C++ Broker supports the following additional Queue configuration options:
This allows to specify how to size a queue and what to do when the sizing constraints have been reached. The queue size can be limited by the number messages (message depth) or byte depth on the queue.
Once the queue meets or exceeds these constraints, the following policies can be applied:
REJECT - Reject the published message
FLOW_TO_DISK - Flow the messages to disk, to preserve memory
RING - start overwriting messages in a ring based on sizing. If head meets tail, advance head
RING_STRICT - start overwriting messages in a ring based on sizing. If head meets tail AND the consumer has the tail message acquired, it will reject
Examples:
Create an auto-delete queue that will support 100,000 bytes, and then REJECT:
#include "qpid/client/QueueOptions.h"

QueueOptions qo;
qo.setSizePolicy(REJECT, 100000, 0);

session.queueDeclare(arg::queue=queue, arg::autoDelete=true, arg::arguments=qo);
Create a queue that will hold 1000 messages in a RING buffer:
#include "qpid/client/QueueOptions.h"

QueueOptions qo;
qo.setSizePolicy(RING, 0, 1000);

session.queueDeclare(arg::queue=queue, arg::arguments=qo);
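The overwrite behavior of the RING policy can be modeled in a few lines of self-contained C++. This is an illustration of the policy's semantics only, not Qpid broker code:

```cpp
#include <deque>
#include <string>

// Toy model of the RING size policy: the queue is capped at maxDepth
// messages, and when full a new enqueue overwrites the oldest message --
// "if head meets tail, advance head".
class RingQueue {
public:
    explicit RingQueue(std::size_t maxDepth) : maxDepth_(maxDepth) {}

    void enqueue(const std::string& msg) {
        if (messages_.size() == maxDepth_)
            messages_.pop_front();            // overwrite the oldest message
        messages_.push_back(msg);
    }

    // Space-separated snapshot of the queue, oldest first.
    std::string contents() const {
        std::string out;
        for (std::size_t i = 0; i < messages_.size(); ++i) {
            if (i) out += ' ';
            out += messages_[i];
        }
        return out;
    }

private:
    std::size_t maxDepth_;
    std::deque<std::string> messages_;
};
```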
The default ordering in a Qpid queue is FIFO. However, additional ordering semantics can be used, namely LVQ (Last Value Queue). A Last Value Queue is defined as follows.
If I publish symbols RHT, IBM, JAVA, MSFT, and then publish RHT before the consumer is able to consume RHT, that message will be over written in the queue and the consumer will receive the last published value for RHT.
Example:
#include "qpid/client/QueueOptions.h"

QueueOptions qo;
qo.setOrdering(LVQ);

session.queueDeclare(arg::queue=queue, arg::arguments=qo);
.....
string key;
qo.getLVQKey(key);
....
// for each message, set the key into the application headers before transfer
message.getHeaders().setString(key, "RHT");
Notes:
Messages that are dequeued and then re-queued have the following exceptions: a) if a new message has been queued with the same key, the re-queue from the consumer will be combined with that message; b) if an update happens for a message of the same key after the re-queue, it will not update the re-queued message. This is done to protect a client from being able to adversely manipulate the queue.
Acquire: when a message is acquired from the queue, no matter its position, it will behave the same as a dequeue.
LVQ does not support durable messages. If the queue or messages are declared durable on an LVQ, the durability will be ignored.
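The last-value semantics can likewise be sketched as a small self-contained model (again an illustration of the behavior, not the Qpid API):

```cpp
#include <map>
#include <string>
#include <vector>

// Toy model of Last Value Queue semantics: each message carries a key,
// and publishing a second message with the same key before the first is
// consumed overwrites the queued value in place.
class LastValueQueue {
public:
    void publish(const std::string& key, const std::string& value) {
        if (latest_.find(key) == latest_.end())
            order_.push_back(key);            // first time: key joins the queue
        latest_[key] = value;                 // otherwise: overwrite in place
    }

    // Drain the queue, returning the last value seen for each key,
    // in original arrival order of the keys.
    std::vector<std::string> consumeAll() {
        std::vector<std::string> out;
        for (std::size_t i = 0; i < order_.size(); ++i)
            out.push_back(latest_[order_[i]]);
        order_.clear();
        latest_.clear();
        return out;
    }

private:
    std::vector<std::string> order_;
    std::map<std::string, std::string> latest_;
};
```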
A fully worked example can be found here.
This option is used to determine whether enqueue/dequeue events representing changes made to queue state are generated. These events can then be processed by plugins such as that used for Section 1.7, “ Queue State Replication ”.
Example:
#include "qpid/client/QueueOptions.h"

QueueOptions options;
options.enableQueueEvents(1);

session.queueDeclare(arg::queue="my-queue", arg::arguments=options);
The boolean option indicates whether only enqueue events should be generated. The key set by this is 'qpid.queue_event_generation', and the value is an integer: 1 (to replicate only enqueue events) or 2 (to replicate both enqueue and dequeue events).
Sentence generation using Markov Chains
Project description
markovipy
She tries striking up conversations with you in cohesive sentences after you have given her her fill of text. And no, she won't complain about how big your corpus is. Also, don't ask her if she can pass the Turing test. She might not talk to you again.
I wrote a blog post explaining the motivation and what goes on behind the scenes, if you are interested.
Installation
To install markovipy, run this command in your terminal:
$ pip install markovipy
This is the preferred method to install markovipy, as it will always install the most recent stable release.
If you don’t have pip installed, this Python installation guide can guide you through the process.
Usage
To use markovipy in a project:
>>> from markovipy import MarkoviPy
>>>
>>> # create MarkoviPy object
>>> obj = MarkoviPy("/Users/tasrahma/development/projects/markovipy/corpus/ts_eliot/Gerontion_utf8.txt", 3)
>>>
>>> # arguments passed are the initial corpus file and the Markov chain length (defaults to 2 if nothing passed)
>>> obj.generate_sentence()
'Cammel, whirled Beyond the circuit of the shuddering Bear In fractured atoms.'
>>>
>>> obj.generate_sentence()
'After such knowledge, what forgiveness? Think now History has many cunning passages, contrived corridors And issues, deceives with whispering ambitions, Guides us by vanities.'
>>>
>>> obj.generate_sentence()
'Gull against the wind, in the windy straits Of Belle Isle, or running on the Horn, White feathers in the snow, the Gulf claims, And an old man, a dull head among windy spaces.'
>>>
>>> obj.generate_sentence()
'Silvero With caressing hands, at Limoges Who walked all night in the field overhead; Rocks, moss, stonecrop, iron, merds.'
>>>
>>> obj.generate_sentence()
"Gives too soon Into weak hands, what's thought can be dispensed with Till the refusal propagates a fear."
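Under the hood, a generator of this kind boils down to a small mapping from word tuples to the words that may follow them. The sketch below is an illustrative model only (the function names are mine, not markovipy's API):

```python
import random
from collections import defaultdict

def build_chain(words, order=2):
    """Map each tuple of `order` consecutive words to the words that follow it."""
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=10, rng=random):
    """Walk the chain from a random starting tuple, up to `length` words."""
    key = rng.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length - len(key)):
        candidates = chain.get(tuple(out[-len(key):]))
        if not candidates:
            break                      # dead end: no known continuation
        out.append(rng.choice(candidates))
    return " ".join(out)
```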
Free software: GNU General Public License v3
Documentation:.
Features
More tests to be added. As of now, some minimal tests have been written. Contributions more than welcome.
Create a web-interface with some fancy buttons/UI which would give you random quotes.
Links
History
0.3.0(2017-05-30)
exposed the Markovipy class in the markovipy module. API changed from
from markovipy.markovipy import MarkoviPy
to
from markovipy import MarkoviPy
0.2.0 (2017-04-29)
self.middle_value mapping becomes a defaultdict
list_to_tuple() was removed to use just the tuple()
moved self.words_list = get_word_list(self.filename) to __init__()
0.1.1 (2017-04-14)
fixes UnicodeDecodeError while reading files instead of using the normal open()
0.1.0 (2017-04-11)
First release on PyPI.
One ambition of XML has always been to help clean up this mess, hence XML's designation as "SGML for the web" (SGML is the meta-language of which HTML is just one flavor). XML came on the scene and immediately made a lot of waves. The W3C expected, reasonably enough, that XML might also find success in the browser, and set up XHTML as the most natural evolution from HTML to something more coherent. Unfortunately, unexpected problems kept popping up to sabotage this ambition. Deceptively simple concepts such as namespaces and linking turned into firestorms of technological politics. The resulting controversies and delays were more than enough to convince browser developers that XML might help escape the known problems, but it was offering up plenty of new and possibly unknown ones of its own.
Even without the mounting evidence that XML is not a panacea, browser developers were always going to have difficulty migrating to a strict XML-based path for the web given the enormous legacy of pages using tag soup, and considering Postel's Law, named after legendary computer scientist John Postel. This law states:
Be conservative in what you do; be liberal in what you accept from others.
The strictures of XML are compatible with this law on the server or database side, where managers can impose conservatism as a matter of policy. As a result, this is where XML has thrived. A web browser is perhaps the ultimate example of having to accept information from others, so that's where tension is the greatest regarding XML and Postel's law.
XHTML is dead. Long live XHTML
All this tension came to a head in the past few years. Browser vendors had been largely ignoring the W3C, and had formed the Web Hypertext Application Technology Working Group (WHAT WG) in order to evolve HTML, creating HTML5. Support for W3C XHTML was stagnant. The W3C first recognized the practicalities by providing a place to continue the HTML5 work, and it accepted defeat by retiring XHTML efforts in 2009. There's no simple way to assess whether or not this means the end of XHTML in practice. HTML5 certainly is not at all designed to be XML friendly, but it does at least give lip service in the form of an XML serialization for HTML, which, in this article, I'll call XHTML5. Nevertheless, the matter is far from settled, as one of the HTML5 FAQ entries demonstrates:
If I’m careful with the syntax I use in my HTML document, can I process it with an XML parser? No, HTML and XML have many significant differences, particularly parsing requirements, and you cannot process one using tools designed for the other. However, since HTML5 is defined in terms of the DOM, in most cases there are both HTML and XHTML serializations available that can represent the same document. There are, however, a few differences explained later that make it impossible to represent some HTML documents accurately as XHTML and vice versa.
The situation is very confusing for any developer who is interested in the future of XML on the web. In this article, I shall provide a practical guide that illustrates the state of play when it comes to XML in the HTML5 world. The article is written for what I call the desperate web hacker: someone who is not a W3C standards guru, but interested in either generating XHTML5 on the web, or consuming it in a simple way (that is, to consume information, rather than worrying about the enormous complexity of rendering). I'll admit that some of my recommendations will be painful for me to make, as a long-time advocate for processing XML the right way. Remember that HTML5 is still a W3C working draft, and it might be a while before it becomes a full recommendation. Many of its features are stable, though, and already well-implemented on the web.
Serving up documents to be recognized as XHTML5
Unfortunately, I have more bad news. You might not be able to use XHTML5 as
officially defined. That is because some specifications say that, in order
to be interpreted as XHTML5, it must be served up using
the
application/xhtml+xml or
application/xml MIME type. But if you do so, all
fully released versions of Microsoft® Internet Explorer® will fail
to render it (you're fine with all other major, modern web browsers).
Your only pragmatic solution is to serve up syntactic XHTML5 using the
text/html MIME type. This is technically a
violation of some versions of the HTML5 spec, but you might not have much
choice unless you can exclude support for Internet Explorer. To add to
the confusion this is a very contentious point in the relevant working
group, and in at least some drafts this language has been toned down.
Internet Explorer 9 beta (also known as a "platform preview") does have full support for XHTML served with an XML MIME type, so once this version is widespread among your users, this problem should go away. Meanwhile, if you need to support Internet Explorer 6 or older, even the workarounds introduced in this article are not enough. You pretty much have to stick to HTML 4.x.
Recommendation for the desperate web hacker: Serve up syntactic
XHTML5 using the
text/html MIME type.
Fun with DOCTYPE
One piece of good news, from a desperate web hacker perspective, is that XHTML5 brings fewer worries about document type declaration (DTDecl). XHTML 1.x and 2 required the infamous construct such as: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "">. The biggest problem with this was that a naive processor is likely to load that DTD URL, which might be an unwanted network operation. Furthermore, that one URL includes many others, and it wasn't uncommon for you to unnecessarily end up downloading dozens of files from the W3C site. Every now and then, the W3C-hosted files even had problems, which lead to extraordinarily hard-to-debug problems.
In XHTML5, the XML nature of the file is entirely determined by MIME type,
and any DTDecl is effectively ignored, so you can omit it. But HTML5 does provide a minimal DTDecl,
<!DOCTYPE html>. If you use this DTDecl, then almost all browsers will switch to "standards" mode, which, even if not fully HTML5, is generally much more compliant and predictable. Notice that the HTML5 DTDecl does not reference any separate file and so avoids some of the earlier XHTML problems.
Recommendation for the desperate web hacker: Use the HTML minimal
document type declaration,
<!DOCTYPE html>, in XHTML5.
Since you are not using any external DTD components, you cannot use common HTML entities such as &nbsp; or &copy;. These are defined in XHTML DTDs which you are not declaring. If you try to use them, an XML processor will fail with an undefined entity error. The only safe named character entities are: &lt;, &gt;, &amp;, &quot;, and &apos;. Use numerical equivalents instead. For example, use &#160; rather than &nbsp; and &#169; rather than &copy;.
Recommendation for the desperate web hacker: Do not use any named character entities except for: &lt;, &gt;, &amp;, &quot;, and &apos;.
Technically speaking, if you serve up the document as
text/html, according to the first recommendation, you won't get errors from most browsers using HTML named character entities, but relying on this accident is very brittle, and remember that browsers are not the only consumer of XML. Other XML processors will choke on such documents.
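The difference is easy to demonstrate with any strict XML parser, for example Python's built-in ElementTree (a quick illustration, with a helper name of my own):

```python
import xml.etree.ElementTree as ET

def is_well_formed(document):
    """Return (True, None) if `document` parses as XML, else (False, message)."""
    try:
        ET.fromstring(document)
        return True, None
    except ET.ParseError as exc:
        return False, str(exc)

# Numeric character references are always safe in XML...
print(is_well_formed('<p>&#169; 2010</p>'))   # (True, None)
# ...but HTML-only named entities are undefined without a DTD.
print(is_well_formed('<p>&copy; 2010</p>'))   # False, with an "undefined entity" message
```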
Fun with namespaces
The last layer in the over-elaborate cake of mechanisms for recognizing the XML format, after MIME type and DTDecl, is the namespace. You're probably used to starting XHTML documents with a line such as the following.
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">

The part in bold type (xmlns="http://www.w3.org/1999/xhtml") is the namespace. In XHTML5, this namespace is still required. If you include other XML vocabularies, such as Scalable Vector Graphics (SVG), put these in their respective, required namespaces.
Recommendation for the desperate web hacker: Always include the default namespace at the top of XHTML5 documents and use the appropriate namespaces for other, embedded XML formats.
If you do include other vocabularies, their namespace declarations must be in the outermost start tags of the embedded sections. If you declare them on the
html element, you commit a
text/html document-conformance error.
Working with XHTML5 content
XHTML5 requires that you specify the media type either in a protocol
header, such as HTTP
Content-Type header, using a special character marker called a Unicode Byte Order Mark (BOM) or using the XML declaration. You can use any combination of these as long as they do not conflict, but the best way to avoid problems is to be careful in how you combine mechanisms. Unfortunately, using an XML declaration is a potential problem, because it causes all Internet Explorer versions 8 and below to switch to quirks mode, resulting in the infamous rendering anomalies for which that browser is famous.
Recommendation for the desperate web hacker: Only use Unicode
encodings for XHTML5 documents. Omit the XML declaration, and use the UTF-8 encoding, or use a UTF-16 Unicode Byte Order Mark (BOM) at the beginning of your document. Use the
Content-Type HTTP header while serving the document if you can.
The following is an example of such an HTTP header:
Content-Type: "text/html; charset=UTF-8"
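If you can run server-side logic, a common compromise is to negotiate the MIME type from the client's Accept header, serving the true XML type only to browsers that advertise support for it. This is a minimal sketch (the function name is mine; production code should also honor q-values in the Accept header):

```python
def choose_content_type(accept_header):
    """Pick a Content-Type for an XHTML5 page from the client's Accept header:
    the true XML media type for clients that advertise support for it,
    and text/html otherwise (such as Internet Explorer 8 and below)."""
    accepted = [part.split(';')[0].strip() for part in accept_header.split(',')]
    if 'application/xhtml+xml' in accepted:
        return 'application/xhtml+xml; charset=UTF-8'
    return 'text/html; charset=UTF-8'
```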
The new semantic markup elements
HTML5 introduces new elements that provide clearer semantics for content
structure, such as
section and
article. These are the parts of HTML5 that might still be subject to
change, but changes will not likely be drastic, and the risk is balanced
by the improved expression provided by the new elements. One problem is
that Internet Explorer doesn't construct these elements in DOM, so, if you use
JavaScript, you'll need to employ another workaround. Remy Sharp
maintains a JavaScript fix that you can deploy by including the following
snippet in your document head (see Resources for a link).
<!--[if IE]> <script src=""></script> <![endif]-->
You might also need to define CSS rules for the elements just in case any browsers do render your document in HTML 4 style which defaults unknown elements to inline rendering. The following CSS should work.
header, footer, nav, section, article, figure, aside { display:block; }
Recommendation for the desperate web hacker:
Use the new HTML5 elements, but include the HTML5
shiv JavaScript and default CSS rules to support them.
Bringing it all together
I've made many separate recommendations, so I'll bring them all together into a complete example. Listing 1 is XHTML5 that meets these recommendations. When serving it over HTTP, use the header
Content-Type: "text/html; charset=UTF-8" unless you can afford to refuse support for Internet Explorer, in which case use the header
Content-Type: "application/xhtml+xml; charset=UTF-8".
Listing 1. Complete XHTML5 example
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
<head>
  <title>A micro blog, in XHTML5</title>
  <style>
    <!-- Provide a fall-back for browsers that don't understand the new elements -->
    header, footer, nav, section, article, figure, aside { display:block; }
  </style>
  <script type="application/javascript">
    <!-- Hack support for the new elements in JavaScript under Internet Explorer -->
    <!--[if IE]>
      <script src=""></script>
    <![endif]-->
  </script>
  <script type="application/javascript">
    <!-- ... Other JavaScript goes here ... -->
  </script>
</head>
<body>
  <header>A micro blog</header>
  <article>
    <section>
      <p>
        There is something important I want to say:
      </p>
      <blockquote>
        A stitch in time saves nine.
      </blockquote>
    </section>
    <section>
      <p>By the way, are you as excited about the World Cup as I am?</p>
    </section>
  </article>
  <article>
    <section>
      <p>
        Welcome to my new XHTML5 weblog
        <img src="/images/logo.png"/>
      </p>
    </section>
  </article>
  <aside>
    <header>Archives</header>
    <ul>
      <li><a href="/2010/04">April 2010</a></li>
      <li><a href="/2010/05">May 2010</a></li>
      <li><a href="/2010/06">June 2010</a></li>
    </ul>
  </aside>
  <footer>&#169; 2010 by Uche Ogbuji</footer>
  <nav>
    <ul>
      <li><a href="/">Home</a></li>
      <li><a href="/about">About</a></li>
      <li><a href="/2010/06">Home</a></li>
    </ul>
  </nav>
</body>
</html>
Listing 1 uses the HTML5 DTDecl and declares the default namespace at the top. The
style and
script elements in this example just provide workarounds for real-world browser issues. The
script element is only needed if you are using other JavaScript.
The document uses a lot of the new HTML5 elements, which I won't go into in
detail since they are not specific to the XML nature. See Resources for more information about these elements. Notice the "self-closed" syntax used for the
img element (in other words, it ends in
/>), and the use of numeric entity form for the copyright symbol,
©.
You can refer to Table 1 for a summary of how the above example will behave with various browsers.
Table 1. Browser support for XHTML5 that meets the recommendations in this article
Wrap up
One important, recent development is that the W3C HTML Working Group published a First Public Working Draft, "Polyglot Markup: HTML-Compatible XHTML Documents," (see Resources for a link) with the intention of giving XHTML5 a more thorough, accurate and up-to-date basis.
Again, it has been very painful for me to make many of the recommendations in this article. Such hack-arounds come from long, painful experience, and are the only way to avoid a nightmare of hard-to-reproduce bugs and strange incompatibilities when mixing XML into the real HTML world. This certainly does not mean that I have stopped advocating careful XML design and best practices. It is best to save XHTML5 for the very outermost components that connect to browsers. All flavors of XHTML are better seen as rendering languages than information-bearing languages. You should carry the main information throughout most of your system in other XML formats, and then convert to XHTML5 only at the last minute. You might wonder what is the point of creating XHTML5 even at the last minute, but remember Postel's law, which recommends being strict in what you produce. By producing XHTML5 for browsers, you make it easier for others to extract information from your websites and applications. In this age of mash-ups, web APIs, and data projects, that is a valuable characteristic.
Thanks to Michael Smith for bringing my attention to recent developments in this space.
Resources
Learn
- The HTML5 syntax issues section of the WHAT WG FAQ: Join the discussion of XML issues.
- The W3C working draft standard for XHTML5: Keep up with syntax for using HTML with XML, whether in XHTML documents or embedded in other XML documents.
- "Polyglot Markup: HTML-Compatible XHTML Documents" (W3C HTML Working Group, June 2010): Read this recently published Working Draft with a more rigorous basis for XHTML5.
- New elements, attributes and other language features in HTML5: Learn about the new elements available in XHTML5.
- Tip: Always use an XML declaration (Uche Ogbuji, developerWorks, June, 2007): Unfortunately, because of browser inconsistencies, this article recommends not using the XML declaration in XHTML5 files served for browsers. Read why it is always a good idea to do so in general in this tip.
- Differences between HTML5 and XHTML5: Discover how HTML and XHTML significantly differ from each other even though they appear to have similar syntax.
- Learn more about HTML5 in developerWorks articles and tutorials:
- New elements in HTML5 Structure and semantics (Elliotte Rusty Harold, August 2007): Explore new structural and inline elements in HTML5.
- Create modern web sites using HTML5 and CSS3 (Joe Lennon, March 2010): Implement the canvas and video elements of HTML5 in this hands-on introduction to HTML5 and CSS3.
- Build web applications with HTML5 (Michael Galpin, March 2010): Create tomorrow's web applications today with powerful HTML5 features such as multi-threading, geolocation, embedded databases, and embedded video.
- HTML5—XML's Stealth Weapon (Jonny Axelsson, July 2009): Read a reasonable summary of the history that led to XHTML5.
- Postel's law: Learn more about this. It is also called the robustness principle.
- New to XML: If you are new to XML, start exploring XML and all you can do with it. Readers of this column might be too advanced for this page, but it's a great place to get your colleagues started. All XML developers can benefit from the XML zone's coverage of many XML standards.
- My developerWorks: Personalize your developerWorks experience.
- IBM XML certification: Find out how you can become an IBM-Certified Developer in XML and related technologies.
- The developerWorks XML zone: Find more XML resources, including previous installments of the Thinking XML column. If you have comments on this article, or any others in this column please post them on the Thinking XML forum.
- XML technical library: See the developerWorks XML Zone for a wide range of technical articles and tips, tutorials, standards, and IBM Redbooks.
- The developerWorks Web development zone: Expand your site development skills with articles and tutorials that specialize in web technologies.
- developerWorks technical events and webcasts: Stay current with technology in these sessions.
- developerWorks on Twitter: Join today to follow developerWorks tweets.
- developerWorks podcasts: Listen to interesting interviews and discussions for software developers.
Get products and technologies
- Validator.nu tool: Validate your XHTML5 documents.
- HTML5 enabling script (Remy Sharp): Try this fix for Internet Explorer problems in accessing the new HTML5 elements from JavaScript.
- html5lib project: If you want to easily consume HTML or XHTML5, check out Python and PHP implementations of an HTML parser, which include bindings for Python, C, PHP and Ruby.
- My developerWorks community: Get involved and connect with other developerWorks users while exploring the developer-driven blogs, forums, groups, and wikis.
Hi Stephen et al,
For everyone else, what is going on is that the reorganization of yt has required more imports to populate a self-registering plugin system for output files. Each StaticOutput subclass defines an _is_valid() function, which means "This is the type of data you have!" This is used by the load() command, so if the subclass is not imported (i.e., defined in the registry) the _is_valid function can't be found and it can't be a target for load().
If you do:
from yt.analysis_mods.... import something
but don't import the right frontend, then it can't be found. yt.mods imports all the frontends, so normal usage is not affected by this.
So on one hand, this is operating exactly as intended. The output_type_registry is filled in by a metaclass, whenever a new class is created; this is the self-registering mechanism. It means that frontends for codes are 100% contained within their directory, and there's no pollution -- if, for instance, the ramses++ headers interfere with static linking, you can avoid loading them by not loading that frontend.
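The self-registering mechanism just described can be sketched in a few lines; this is my own illustrative Python, not yt's code, though the names (StaticOutput, output_type_registry, _is_valid, load) mirror the ones in this thread:

```python
# A metaclass fills the registry the moment a subclass is defined,
# which is why a frontend only becomes load()-able once it is imported.
output_type_registry = {}

class RegisteringType(type):
    def __init__(cls, name, bases, attrs):
        super().__init__(name, bases, attrs)
        if bases:  # skip the abstract base class itself
            output_type_registry[name] = cls

class StaticOutput(metaclass=RegisteringType):
    @classmethod
    def _is_valid(cls, path):
        return False

# Importing a frontend module would execute a definition like this one:
class EnzoStaticOutput(StaticOutput):
    @classmethod
    def _is_valid(cls, path):
        return path.endswith(".enzo")  # toy stand-in for the real check

def load(path):
    # load() can only consider frontends whose modules were imported,
    # because importing is what triggered their registration.
    for cls in output_type_registry.values():
        if cls._is_valid(path):
            return cls
    raise RuntimeError("unrecognized output type")
```

With yt.mods importing every frontend up front, the registry is fully populated before load() runs; import only part of the package and load() silently has fewer candidates, which is the behavior discussed in this thread.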
This is why I recommend importing from yt.mods in all the recipes; this imports most or all of the frontends, populates the registry, and makes them all available to load. It's also why I added the amods object to yt.mods, which is an object that on-demand imports analysis modules; this helps us avoid having to import everything all at once (adds substantial time to loading, can also present problems that would otherwise be non-fatal, etc etc) but also makes available all the analysis modules. So you could for instance:
from yt.mods import *
pf = load("...")
amods.halo_finding.HaloFinder(pf)
Anyway, I think that if we want to preserve on-demand importing, which I very much would like to preserve, we will have to have two levels of fallback for the _is_valid function. The first, and default, would be for any subclasses of StaticOutput that define something; this is what we already do.
The second would be to provide a set of loader functions:
def enzo_is_valid(...):
    if ...:
        import yt.frontends.enzo.api
        return "enzo"
or something like that. We would then define or copy these right from the existing definitions, but perhaps only for the most common codes (Enzo, Orion, FLASH.) If a dataset is not recognized, it could iterate over all the known is_valid functions that exist independently of subclasses of StaticOutput.
Anyway, I guess the tl;dr summary is: this is not something I thought was a problem, it's not a problem if you use yt.mods and the amods object, but there are ways to get around it. Is this worth implementing?
-Matt
On Tue, Feb 15, 2011 at 10:21 PM, Stephen Skory stephenskory@yahoo.com wrote:
Hi Matt,
What I discovered is that without calling "from yt.mods import *", the class StaticOutput() was not being initialized in yt/data_objects/static_output.py, so the output_type_registry dict was empty in convenience.py. Should this concern us?
Stephen Skory stephenskory@yahoo.com 510.621.3687 (google voice)
Yt-dev mailing list Yt-dev@lists.spacepope.org
void setup() {
  Serial.begin(9600);
}

void loop() {
  Serial.println("γεια!");
}
I'm trying to make a simple sketch that prints international (more specifically Greek) characters to the serial port. I'll modify it to print Greek chars to an LED matrix. Is that even possible from the hardware perspective?
The sketch bellow doesn't work.
Do AVRs only support ASCII?
Hard to say without knowing more about your LED matrix. If it's something you've built then you can print anything you'd like on it.
In what way?
The AVR compiler supports eight-bit characters. Traditionally, the first 128 characters are ASCII. Using your sketch as an example, it's up to the receiver to decide how those eight bits are interpreted.
The serial port of the Arduino can only send bytes (values 0..255) over the line. The receiving application interprets these byte values - typically as ASCII, but you are free to use another interpretation, including Greek characters. The Arduino does not know; it just sends bytes. It's similar to writing a Word document, selecting all and changing the font to Greek: the underlying bytes won't change, but the interpretation and visualization do. Several LCD screens have the option to define your own character set. If there are enough free definable chars in the LCD, you could define the whole Greek alphabet. If you are sending the data to a PC/Mac, it is up to the receiving app to translate the bytes to Greek. Hope this helps.
This is what came into my mind after your reply. I'll make another array containing the Unicode characters and modify the function to use that array when needed. Will that work, considering that Unicode chars use 16 bits?
Unicode, if UTF-8 encoding is used, represents each character using 1..4 bytes. You could try to use the iso-8859-7 charset: it is 8 bits per character and it should suit you well if you don't need to use any other "exotic" language simultaneously.
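The byte-count difference between the two encodings is easy to demonstrate off the Arduino; this little Python 3 check (my example, not from the thread) encodes the greeting from the first post both ways:

```python
text = "γεια"  # the Greek greeting printed by the original sketch

single_byte = text.encode("iso-8859-7")  # one byte per character
multi_byte = text.encode("utf-8")        # Greek letters need two bytes each

print(len(single_byte), len(multi_byte))
```

Four characters cost 4 bytes in iso-8859-7 but 8 bytes in UTF-8, which is why a single-byte charset keeps font-table indexing simple.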
//*********************************************************************************************************
/*
 * Copy a character glyph from the myfont data structure to
 * display memory, with its upper left at the given coordinate.
 * This is unoptimized and simply uses plot() to draw each dot.
 */
// replace undisplayable characters with blank
if (c < 32 || c > 126) {
  charIndex = 0;
}
// replace undisplayable characters with blank
if (c < 32) {
  charIndex = 0;
} else if (c > 127) {
  // display greek font eg iso-8859-7
  // thanks to VilluV
} else {
  // display ascii font
  ...
}
void ht1632_putchar_greek(int x, int y, char c)
{
  byte charIndex;
  byte code = (byte)c;  // char is signed on AVR, so compare as unsigned
  // iso-8859-7 capital Greek letters start at 0xC1 (193);
  // map them to indices 1.. of greek[], index 0 being the blank glyph
  if (code < 193 || code > 254)
    charIndex = 0;
  else
    charIndex = code - 192;
  for (byte row = 0; row < 8; row++) {
    byte rowDots = pgm_read_byte_near(&greek[charIndex][row]);
    for (byte col = 0; col < 6; col++) {
      if (rowDots & (1 << (5 - col)))
        ht1632_plot(x + col, y + row, 1);
      else
        ht1632_plot(x + col, y + row, 0);
    }
  }
}
unsigned char PROGMEM greek[4][8] = {
  { 0x00, // ________ blank (ascii 32)
    0x00, // ________
    0x00, // ________
    0x00, // ________
    0x00, // ________
    0x00, // ________
    0x00, // ________
    0x00  // ________
  },
  { 0x00, // ________ A
    0x1C, // ___XXX__
    0x22, // __X___X_
    0x22, // __X___X_
    0x3E, // __XXXXX_
    0x22, // __X___X_
    0x22, // __X___X_
    0x00  // ________
  },
  { 0x00, // ________ B
    0x3C, // __XXXX__
    0x22, // __X___X_
    0x3C, // __XXXX__
    0x22, // __X___X_
    0x22, // __X___X_
    0x3C, // __XXXX__
    0x00  // ________
  },
  { 0x00, // ________
    0x3E, // __XXXXX_
    0x20, // __X_____
    0x20, // __X_____
    0x20, // __X_____
    0x20, // __X_____
    0x20, // __X_____
    0x00  // ________
  }
};
ht1632_putchar_greek(0, 0, 'Γ');
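The bit arithmetic in the putchar routine can be sanity-checked on a desktop before flashing; this Python sketch (mine, not forum code) renders the row bytes of the table's fourth glyph exactly the way the nested loops do:

```python
# Row bytes of the fourth entry of the greek[] table above (the Γ shape).
gamma_rows = [0x00, 0x3E, 0x20, 0x20, 0x20, 0x20, 0x20, 0x00]

def render(rows, width=6):
    # Same test as the sketch: bit 5 selects the leftmost of the 6 columns.
    return ["".join("X" if r & (1 << (width - 1 - col)) else "_"
                    for col in range(width))
            for r in rows]

for line in render(gamma_rows):
    print(line)
```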
Open the sketch source file with some HEX editor and check how the char is stored there. It must be the same code that you are using in your program. If it is not, the file is still not in the correct encoding. If there are two or more bytes used for that char, it is still in UTF-8.
How to change the default encoding in the Arduino IDE I don't (yet) know; maybe someone else can help with that. If it is not possible, create some .h file with your string constants with some other editor and include this file in your sketch, but don't open/edit it with the IDE.
#define SOME_MSG "blah blah"
#define OTHER_MSG "oh yea"
#include "messages.h"
...
do_something_with_string(SOME_MSG);
-finput-charset="iso-8859-7"
C67
void setup() {
  Serial.begin(9600);
  char latin = 'C';
  Serial.println(latin);
  Serial.println((int)latin);
}

void loop() { }
"-108
Re:Getting screwed in both directions (Score:4, Informative)
>Checked arithmetic (on overflow, they throw an exception)
Eh. This isn't always what you want.
>Support for tail calls (for Lisp, F# and other functional languages)
IBM's compiler does do tail call optimization (for self calling)
>Value types, these are structs that are not wrapped in an object
Again, not that useful. Esp, since hotspot can compile objects down to stack-based if it wants
>Platform-invoke allows developers to call native code without having to write any glue in C++ using JNI, it can all be done in the managed language.
J/Invoke, JNA both do this
>The Common Language Specification: a spec that set the rules for interoperability across programming languages (for example: the rules for public identifier casing, handling of int vs uint and so on).
Do int's in scala work differently than ints in java?
>Delegates allow user to keep a reference to a method or an instance method and invoke it. The VM also can turn any delegate invocation into an asynchronous invocation, so you can turn any method into an async method, like this: mySortFunc.BeginInvoke (array)
java.lang.reflect.Method.invoke and java.util.Runnable (and friends)
>Support for dynamic code generation through the Reflection.Emit API.
javax.tools.JavaCompiler
>A database file format allows for efficient reading of the metadata from assemblies. It does not require decompression and the database format is suitable for lazy loading.
Jar files do not have to be zipped
>Attributes were special objects that could annotate most elements in the virtual machine, you could annotate classes, methods, parameters, local variables, and at runtime the VM or the runtime could do interesting things with it.
java.lang.annotation.Annotation
>Unsafe code (pointers) to support C++, Cobol and Fortran compilers running on the CLI.
A bad idea. If you must use pointers, use an opaque interface like sun.misc.Unsafe (stores pointers as a long)
>Native support for talking to COM-based APIs. Although mostly Windows-specific, its used to talk to Mozilla, VirtualBox and OpenOffice APIs in Linux.
Win32 Specific. Apache jakarta has an implementation anyway
>Covariance and contravariance introduced with .NET 4 make even more generic cases a pleasure to use.
Java has covariant return types. Not sure why you'd want non-explicit contravariance or covariant parameters.
>64-bit arrays (although part of the spec, only Mono implements this).
Eh. If it's really a problem, use JNI/JNA or a DB
Stuff that java does that .net doesn't
- Work well on non Windows OS's.
- Work on non-x86
- GPL'ed reference implementation
Re:Getting screwed in both directions (Score:4, Informative)
Eh. This isn't always what you want.
That's why you get a choice:
The default is unchecked for C# and checked for VB.
IBM's compiler does do tail call optimization (for self calling)
It's trivial for safe calling. CLR has a special instruction to permit cross-method tail calls (it even handles function pointers), which is crucial for getting languages such as ML or Scheme to work right.
Again, not that useful. Esp, since hotspot can compile objects down to stack-based if it wants
Practice shows that global escape analysis is nowhere near as good as programmer's knowledge about how things should be. Yes, HotSpot can theoretically optimize those things away completely, but in practice it often cannot do even trivial stuff, such as avoiding boxing overhead in many common scenarios.
J/Invoke, JNA both do this
Yeah, and it's a mess due to Java's poor type system.
.NET can directly map to anything that can be described in C - it has structs with explicit layout control (this covers custom alignment and unions), unsigned integral types, portable pointer-sized integral types (IntPtr & UIntPtr), raw data pointers, and function pointers. With J/Invoke, you have to jump through various hoops to tell it how to map your data structures.
Do int's in scala work differently than ints in java?
The real question is - do classes work differently? Can you take any Scala class and reuse it in Java?
mySortFunc.BeginInvoke (array)
CLR delegates are actually more like magically created (by the runtime) instances of anonymous inner classes implementing callback interfaces. Their advantage is that they're implemented very efficiently by the runtime (effectively just a function pointer + a captured "this" pointer).
javax.tools.JavaCompiler
That's not the same thing. Reflection.Emit lets one emit bytecode (CIL) into a byte array - optionally with a higher-level helper API - and then have it compiled into a method, and have a delegate to it returned. JavaCompiler would be more akin to CodeDOM, but they are heavyweight solutions. Reflection.Emit is used for stuff where speed is of importance. For example,
.NET regex engine can compile regexes to bytecode (which is then JIT-compiled to native code, same as any other bytecode) for greater efficiency.
That said, you can still generate Java classes on the fly in memory as well, so it's a draw here.
java.lang.annotation.Annotation
CLR allows annotations to be placed on more things than Java. For example, on method return type.
There's one other thing in CLR type system - type modifiers, modopt and modreq. Those are somewhat like annotations in that they let types be decorated with other types, with VM ignoring it, and specific compilers interpreting the decorations as they need. Unlike attributes/annotations, though, those things decorate types (and not methods/fields/classes - i.e. not declarations), in any context (including parameter & local declarations, type casts... anything). This is used by C++, for example, to encode "const" and "volatile" - something like "const int* volatile" in C++ would become "int modreq(IsConst)* modreq(IsVolatile)" in IL, where IsConst and IsVolatile are classes declared in the System.Runtime.CompilerServices namespace. Any other language that has const/volatile semantics similar to C/C++ can reuse those, and can parse the existing declarations in assemblies compiled by Visual C++.
The VM furthermore considers two types with different modifiers to be distinct for the purposes of signature matching.
Re: (Score:3, Informative)
Hmm, yep, except there is no run time checking for type safety. You can merrily pass any (pointer referenced) object through any number of function calls and not throw any exceptions. All that's required is a little reflection.
And this is the reason there is no type safety - because in the VM, LCD forces this upon you.
And there are features in the spec that MS doesn't support after 4 major versions? Say it isn't so!
Then there are .NET collections which are .... basic at best. The concurrency toolset - much
Re: (Score:3, Informative)
Is that a strange name for a tuple?
Technically speaking, you can create new Java classes at runtime by defining them in a byte array, there's just no API for creating those byte arrays from a parse tree.
Re:Getting screwed in both directions (Score:4, Informative)
Java is better than .Net in the following ways:
- Good timezone support (.Net is a mess)
- JDBC is a solid database library (unlike Ado.net)
- java.util.concurrent
- simpler, stabler language without a lot of needless features
- checked exceptions (better type checking)
- more libraries (from Sun, from apache, from jboss, spring, etc..)
- more options
- more mature
- platform independent
- defacto language standard
- G1 garbage collector and a bunch of other fancy GC options
- Camel case is not broken in Java
- Javadoc format much more readable than .Net's
- pointer compression in 64bit
- escape analysis and automatic allocation to the stack
- several open source implementations and several commercial versions
- integrates well with several high quality application servers (standardized app servers)
See.. I can play too.
Re: (Score:3, Informative)
JDBC is a solid database library (unlike Ado.net)
JDBC has API that is horribly designed. One only needs to look at Statement and PreparedStatement - where one derives from the other, but they actually work very differently. Can you explain what the inherited PreparedStatement#execute(string) is supposed to do?
java.util.concurrent
TPL [microsoft.com] and PLINQ [microsoft.com] (and Java doesn't even have anything as high-level as the latter).
simpler, stabler language without a lot of needless features
That one is purely subjective. On one hand, a lot of people seem to want some features for Java that have been in C# for ages (such as properties or lambdas). On the oth
Re:Getting screwed in both directions (Score:5, Informative)
Thank you for this, PInvoke is probably my biggest reason in preferring .Net (mainly C#) over Java.
You can do this in Java with Java Native Access (JNA) [java.net]. From the JNA site:
You conveniently omit "languages" -- really, glorified macro platforms -- like ASP and ColdFusion, which were a big if not bigger influence on PHP than Perl ever was. And ASP was... guess who? Microsoft.
(Damn. I was gonna moderate on this story, but couldn't resist replying.)
Re: (Score:3, Informative)
ASP wasn't even released until 2002. That's even two years after PHP 4.
ASP.NET, maybe. Old school ASP was circa 1998.
Re: (Score:3, Informative)
ASP wasn't even released until 2002. That's even two years after PHP 4.
No idiot...ASP was originally released with IIS in 1996. ASP is interpreted/scripted while ASP.NET was released in 2002. ASP and ASP.net share as much in common as VB6 and VB.Net do..ie language syntax and name only. Everything else is different..
Re:Ignorance, mostly. (Score:4, Informative)
+1 duck typing is awesome.
I'd bet, part of the problem the GP is griping about is really people forcing problems that aren't meant for ruby or python or php into a web application. If you have 1000's or even 100's of models, why do you need a web interface for all of it?
Ruby on rails (and in particular my personal favorite flavor hobo) makes for awesomely quick development for small - medium sized applications. I wouldn't use it for a behemoth application, unless said behemoth is really 100 medium sized problems that got stuck together for no good reason.
Also, I couldn't imagine trying to develop a website in c or c++ with having to worry about memory management and poor string support.
Re: (Score:3, Insightful)
If static languages are better, why is the bulk of web development done with dynamic languages?
I don’t know how much of that is reality and how much is popular perception. In any case, here are some general trends in mainstream statically-typed languages and mainstream dynamically-typed languages today that might contribute to the popularity of the latter for web development:
I think these are more reflections of the languages in current use and their surrounding cultures, rather than inherent traits of static vs. dynamic typing, but if we’re talking about the state of the industry today, there doesn’t seem to be any practical distinction.)
Re:Using them? (Score:5, Informative)
Yes. I have used IronRuby - it is pretty nice. I don't know much about the Windows platform, and it is really pretty useful to be able to write simple Ruby scripts that can interact with .NET stuff. Scripting languages running on top of the CLR (and JVM) is pretty damn useful for a wide variety of applications and situations.
Re: (Score:3, Informative)
Yes, you can build DLLs and EXEs from IronRuby projects. I haven't done so - I just use it as an interpreted scripting language and a familiar REPL when on the Windows platform - but I've heard it is possible to do so.
On JRuby - which I am more familiar with - you get both an interactive runtime and a compiler (jrubyc) which can turn Ruby into Java .class files.
Re: (Score:3, Informative)
Another point is that if Google had used IcedTea (the GPL'd version of Java), they never would have been at risk from Sun/Oracle's patents.
Yes, they would have. You only get a patent grant if you provide a full J2SE implementation, which would have been totally unreasonable on a phone. Merely building on top of the GPL'ed version is not enough.
Opened 6 years ago
Closed 6 years ago
Last modified 6 years ago
#511 closed defect (fixed)
import scipy.special gives ImportError DLL load failed: The specified module could not be found.
Description
I recently updated OSGEO4W, and now one of my QGIS plugins doesn't work. It appears that it is related to scipy. So, I started an OSGEO4W Shell, ran python, then entered the following lines:
>>> import numpy
>>> numpy.version.full_version
'1.11.0'
>>> import scipy
>>> scipy.version.full_version
'0.14.0'
>>> import scipy.special
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\OS1D50~1\apps\Python27\lib\site-packages\scipy\special\__init__.py", line 546, in <module>
    from ._ufuncs import *
ImportError: DLL load failed: The specified module could not be found.
Change History (3)
comment:1 by , 6 years ago
comment:2 by , 6 years ago
comment:3 by , 6 years ago
fixed with numpy-1.11.0-2 (x86_64) (1.11.0-1 was missing fortran dlls)
See also #492
Matthew Woehlke <address@hidden> writes: > This simple test is OK on my Linux box and on NSK without -O. It does > not work /AT ALL/ (100% failure rate) on NSK with -O. OK, but the C standard does not require that it has to work, because it doesn't say what happens when you shift a negative integer right. So the test program is too strict. What happens if you use the following test program instead? That is, what is the output of this program, and its exit status? (Also, ossuming it outputs something and fails, what happens if you omit the printf? Does it still exit with status 1?) #include <limits.h> #include <stdio.h> #ifndef LLONG_MAX # define HALF_LLONG_MAX (1LL << (sizeof (long long int) * CHAR_BIT - 2)) # define LLONG_MAX (HALF_LLONG_MAX - 1 + HALF_LLONG_MAX) #endif int main (void) { long long int n = 1; int i; for (i = 0; ; i++) { long long int m = n << i; if (m >> i != n) { printf ("shift %d failed\n", i); return 1; } if (LLONG_MAX / 2 < m) break; } return 0; } | http://lists.gnu.org/archive/html/bug-coreutils/2006-11/msg00049.html | CC-MAIN-2019-13 | refinedweb | 172 | 81.43 |
21 April 2009 11:44 [Source: ICIS news]
LONDON (ICIS news)--DuPont’s first quarter net profits slumped 59% year on year to $488m (€376m) as most of its businesses continued to be badly hurt by the global recession, the company said on Tuesday.
The downturn continued to be broad and deep and was reflected in 20% lower quarterly sales for the company at $6.9bn. Sales volumes were down 19% across the group and 30% in segments other than agriculture & nutrition.
The lower sales volumes principally reflected global declines in construction, motor vehicle production and consumer spending, which was exacerbated by inventory de-stocking across most supply chains, the company said.
Agriculture & nutrition was the only segment to lift profits in the quarter. Losses were reported from performance materials; electronic & communications technologies; and coatings & colour technologies. Profits for the company’s safety & protection businesses slumped 74%.
DuPont reduced its profits forecast for the year to a range of $1.70 to $2.10 a share with the expectation of difficult market conditions continuing with the exception of global agriculture markets.
“Our teams are working with urgency and agility to stay ahead of the worst global recession since the 1930s,” CEO Ellen Kullman said.
The company said it was increasing its 2009 fixed cost reduction goal to $1bn from $730m. Additional restructuring measures were expected to be finalised in the second quarter, it added.
($1 = €0.77)
I want to open a folder with specified items selected on Windows. I looked up the Windows Shell Reference and found a function fit for this job: SHOpenFolderAndSelectItems.
However, I couldn't find an example on how to use it with Python. Does anybody know how I can do this?
I have another extra requirement: if that folder is already open, don't open it again; just activate it and select the file.
Using PyWin32 you can do something such as this, by default it should just activate and select the file if already open:
from win32com.shell import shell, shellcon
import win32api

folder = win32api.GetTempPath()
folder_pidl = shell.SHILCreateFromPath(folder, 0)[0]
desktop = shell.SHGetDesktopFolder()
shell_folder = desktop.BindToObject(folder_pidl, None, shell.IID_IShellFolder)
items = [item for item in shell_folder][:5]
## print (items)
shell.SHOpenFolderAndSelectItems(folder_pidl, items, 0)
Maybe you could try running the shell command through Python using subprocess.Popen. Check out this thread for more info: How to use subprocess popen Python
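For the simple case, that looks roughly like the sketch below (my example; note that unlike SHOpenFolderAndSelectItems, Explorer's /select, switch may open a new window rather than reusing one that is already showing the folder):

```python
import subprocess
import sys

def select_in_explorer(path):
    # Explorer's /select, switch opens the parent folder with the item highlighted.
    cmd = ["explorer", "/select,", path]
    if sys.platform == "win32":  # only meaningful on Windows
        subprocess.Popen(cmd)
    return cmd

cmd = select_in_explorer("C:\\temp\\example.txt")
```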
The CIO Framework - OOP344 20123
Latest revision as of 19:44, 26 November 2012
Contents
- 1 Objective
- 2 Tips
- 3 File Names
- 4 Hierarchy
- 5 Student Resources
- 6 Issues, Releases and Due Dates
- 7 CFrame
- 8 CField
- 9 CLabel
- 10 CDialog
- 11 CLineEdit
- 12 CButton
- 13 CValEdit
- 14 CCheckMark
- 15 CMenuItem (optional)
- 16 CText
- 17 CCheckList
- 18 CMenu and MNode (optional)
Objective
Your objective at this stage is to create a series of core classes designed to interact with the user. These Core Classes can then be used in the development of any interactive application.
Please note that the class definitions here are the minimum requirements for the Core Classes, and you are free to add any enhancements or features you find useful. However, make sure that you discuss these enhancements with your team and professor to confirm they are feasible before implementation.
It is highly recommended to develop the classes in the order they are stated here. You must create your own tester programs for each class (if possible); however, close to the due date of each release, a tester program may be provided to help you verify the functionality of your classes. If tester programs are provided, then executables of the test programs will be available on matrix to show you how they are supposed to run.
Tips
Start by creating mock-up classes (class declarations and definitions with empty methods that only compile and don't do anything). Each class MUST have its own header file to hold its declaration and "cpp" file to hold its implementation. To make sure you do not do circular includes follow these simple guidelines:
- Add recompilation safeguards to all your header files.
- Always use forward declaration if possible instead of including a class header-file.
- Use includes only in files in which the actual header file code is used.
- Avoid "just in case" includes.
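As a sketch of the first two guidelines (the class and file names here are illustrative, not part of the assignment): a header can name CFrame in a forward declaration instead of including cframe.h, as long as it only uses pointers or references to it, and the whole header sits inside a recompilation safeguard:

```cpp
#include <cassert>  // only for quick checks; the header sketch itself needs nothing

#ifndef SKETCH_CLABELLIKE_H  // recompilation safeguard
#define SKETCH_CLABELLIKE_H

class CFrame;  // forward declaration: no #include "cframe.h" needed here

class CLabelLike {
  CFrame* _container;  // a pointer member compiles without CFrame's definition
public:
  explicit CLabelLike(CFrame* container) : _container(container) {}
  CFrame* container() const { return _container; }
};

#endif
```

Only the .cpp file that actually calls CFrame members would include cframe.h.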
File Names
Use the following rules to create filenames for your class.
Hierarchy
CFrame
  |
  |---CDialog
  |
  |---CField
        |
        |-------- CLabel
        |
        |-------- CButton
        |
        |-------- CLineEdit
        |              |
        |              |-------CValEdit
        |
        |-------- CText
        |
        |-------- CCheckMark
        |
        |-------- CCheckList
        |
        |-------- CMenuItem
        |
        |-------- CMenu
Student Resources
Help/Questions
Hi people! Maybe someone can help me -- I am trying to do the copy constructor in CLabel which needs to copy cFrame but CFrame's attribute char_border[9] doesn't have a setter or a getter so far as I can see -- unless I'm misunderstanding something I need to add a getter/setter pair for char_border to CFrame or add a copy constructor that I could use. Does anyone have any suggestions? We can modify CFrame if we want to, right? Alina - email: ashtramwasser1
I did the CLabel class, for the copy constructor, the base class is CField, the attribute is void* _data, which you can cast to char*. then do deep copy for this data member. Hope this is useful for your question. Yun Yang
Blog Posts
Issues, Releases and Due Dates
Name Format
Issue and branch name format:
V.V_Name
example; issue: Add Text Class to the project (issue 2.9.1); issue and branch name on github: 2.9.1_AddTextClass
Issues
0.2 Milestone
(Due Mon Nov 12th, 23:59)
- Add console class to project and test with cio_test (issue 1)
- Create Mock-up classes
- Create the class files (header and cpp) with blank methods and make sure they compile
- CField Mock-up Class (issue 2.1)
- CLabel Mock-up Class (issue 2.2)
- CDialog Mock-up Class (issue 2.3)
- CLineEdit Mock-up Class (issue 2.4)
- CButton Mock-up Class (issue 2.5)
- CValEdit Mock-up Class (issue 2.6)
- CCheckMark Mock-up Class (issue 2.7)
- CText
- Add Text Class to the project (issue 2.8.1)
- CText Mock-up Class (issue 2.8.2)
- CCheckList Mock-up Class (issue 2.9)
0.3 Milestone
- Due along with 0.4 milestone
- CField, Dialog and Label
- CLineEdit
0.4 milestone
(Sun Nov 25th. 23:59)
- CButton
- CValEdit
- CCheckMark
0.6 milestone
- CText
- CheckList
CFrame
The code for this class is provided in your repository. You must understand it and use it to develop your core classes.
The CFrame class is responsible for creating a frame, or structure, in which all user interface classes contain themselves. It can draw a border around itself or be border-less. Before displaying itself on the screen, CFrame also saves the area it is about to cover, so that it can redisplay that area to hide itself.
CFrame is the base of all objects in our user interface system.
#pragma once
#include "cuigh.h"
class CFrame{
  int _row;         // relative row of left top corner to the container frame, or the screen if _frame is null
  int _col;         // relative col of left top corner to the container frame, or the screen if _frame is null
  int _height;
  int _width;
  char _border[9];  // border characters
  bool _visible;    // is bordered or not
  CFrame* _frame;   // pointer to the container of the frame (the frame surrounding this frame)
  char* _covered;   // pointer to the characters of the screen which are covered by this frame, when displayed
  void capture();   // captures and saves the characters in the area covered by this frame when displayed
                    // and sets _covered to point to it
  void free();      // deletes dynamic memory in the _covered pointer
protected:
  int absRow()const;
  int absCol()const;
public:
  CFrame(int Row=-1, int Col=-1, int Width=-1, int Height=-1,
         bool Visible = false,
         const char* Border=C_BORDER_CHARS,
         CFrame* Frame = (CFrame*)0);
  virtual void draw(int fn=C_FULL_FRAME);
  virtual void move(CDirection dir);
  virtual void move();
  virtual void hide();
  virtual ~CFrame();
  /* setters and getters: */
  bool fullscreen()const;
  void visible(bool val);
  bool visible()const;
  void frame(CFrame* theContainer);
  CFrame* frame();
  void row(int val);
  int row()const;
  void col(int val);
  int col()const;
  void height(int val);
  int height()const;
  void width(int val);
  int width()const;
  void refresh();
};
Properties
int _row, holds the relative coordinate of top row of this border with respect to its container.
int _col, same as _row, but for _col.
int _height, height of the entity.
int _width, width of the entity.
char _border[9], characters used to draw the border:
- _border[0], left top
- _border[1], top side
- _border[2], right top
- _border[3], right side
- _border[4], right bottom
- _border[5], bottom side
- _border[6], bottom left
- _border[7], left side
bool _visible; Indicates if the border surrounding the entity is to be drawn or not.
CFrame* _frame; holds the container (another CFrame) which has opened this one (the owner or container of the current CFrame). _frame will be NULL if this CFrame does not have a container, in which case it is full screen: no matter what the values of row, col, width and height are, the CFrame covers the whole screen and no border is drawn.
char* _covered; a pointer to a character array that holds what was under this frame before it was drawn. When the CFrame wants to hide itself, it simply copies the contents of this array back onto the screen at its own coordinates.
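The save-and-restore idea behind _covered can be sketched with a flat character buffer standing in for the screen. The names below are illustrative; iol_capture and iol_restore themselves are not shown:

```cpp
#include <cstring>

// A 1-D "screen" of characters; saving a region before drawing over it
// lets us hide the frame later by copying the saved characters back.
struct MiniScreen {
  char cells[40];
  MiniScreen() { std::memset(cells, '.', sizeof cells); }
};

void capture(const MiniScreen& s, char* saved, int pos, int len) {
  std::memcpy(saved, s.cells + pos, len);   // what _covered would hold
}
void drawOver(MiniScreen& s, int pos, int len) {
  std::memset(s.cells + pos, '#', len);     // the frame covers the area
}
void restore(MiniScreen& s, const char* saved, int pos, int len) {
  std::memcpy(s.cells + pos, saved, len);   // hide(): put the screen back
}
```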
Methods and Constructors
Private Methods
void capture();
- If the _covered pointer is not pointing to any allocated memory, it calls the iol_capture function to capture the area that is going to be covered by this frame and keeps its address in _covered.
Protected Methods
- int absRow()const; calculates the absolute row (relative to the left top corner of the screen) and returns it.
- it returns the sum of row() of this border plus all the row()s of the _frames
- int absCol()const; calculates the absolute column(relative to the left top corner of the screen) and returns it.
- it returns the sum of col() of this border plus all the col()s of the _frames
Public Methods
CFrame(int Row=-1, int Col=-1, int Width=-1,int Height=-1, bool Visible = false, const char* Border=C_BORDER_CHARS, CFrame* Frame = (CFrame*)0);
- Sets the corresponding attributes to the incoming values in the argument list and sets _covered to null.
virtual void draw(int fn=C_FULL_FRAME);
- First it will capture() the coordinates it is supposed to cover
- If the frame is fullscreen(), then it just clears the screen and exits.
Otherwise:
- If the _visible flag is true, it will draw a box at _row and _col, with size of _width and _height using the _border characters and fills it with spaces. Otherwise it will just draw a box using spaces at the same location and same size.
virtual void move(CDirection dir);
First it hides the Frame, then adjusts the row and col to move in the "dir" direction, and then draws the Frame back on the screen.
virtual void hide();
Using iol_restore(), it restores the characters behind the Frame back on the screen. It also frees the memory pointed to by _covered.
virtual ~CFrame();
It makes sure all allocated memory is freed.
bool fullscreen()const;
void visible(bool val);
bool visible()const;
void frame(CFrame* theContainer);
CFrame* frame();
void row(int val);
int row()const;
void col(int val);
int col()const;
void height(int val);
int height()const;
void width(int val);
int width()const;
These functions set and get the attributes of the CFrame.
CFrame Help/Blogs
CField
CField is an abstract base class that encapsulates the commonalities of all input/output console Fields that can be placed on a CDialog. All Fields can be framed; therefore, CField inherits from CFrame.
#include "cframe.h" class CDialog; class CField : public CFrame{ protected: void* _data; public: CField(int Row = 0, int Col = 0, int Width = 0, int Height =0, void* Data = (void*) 0, bool Bordered = false, const char* Border=C_BORDER_CHARS); ~CField(); virtual int edit() = 0; virtual bool editable() const = 0; virtual void set(const void* data) = 0; virtual void* data(); void container(CDialog* theContainer); CDialog* container(); };
Attributes
void* _data;
Will hold the address of any type of data a CField can hold.
Constructors and Methods
CField(int Row = 0, int Col = 0, int Width = 0, int Height =0, void* Data = (void*) 0, bool Bordered = false, const char* Border=C_BORDER_CHARS);
Passes the corresponding attributes to its parent's constructor. The pure virtual declarations enforce the children to implement:
- an edit() method
- an editable() method that returns true if the class is to edit data and false if the class is to only display data.
- a set() method to set the _data attribute to the data the class is to work with.
virtual void* data();
Compiled Object Files
- Linux
- Mac
- Borland C++ 5.5
- Visual C++ 10
- Note. At least with the VS obj files, if you look at cfield.h, the method: virtual void* data(); is now: virtual void* data()const; However, this isn't the case in our header requirements. Noticed this when trying to compile with our header based on this wiki.
CLabel
A read-only Field that encapsulates the console.display() function (i.e., it is responsible for displaying a short character string on the screen). Although CLabel is a Frame by inheritance, it is never bordered.
#include "cfield.h" class CLabel : public CField{ // int _length;)
Constructors / Destructor
CLabel(const char *Str, int Row, int Col, int Len = 0);
Passes the Row and Col to the CField constructor, and then: if Len is zero, it allocates enough memory to store the string pointed to by Str and copies Str into it; if Len > 0, it allocates enough memory to store Len chars.
~CLabel();
Makes sure that the memory pointed to by _data is deallocated before the object is destroyed.
Methods
void draw(int fn=C_NO_FRAME);
Makes a direct call to console.display(), passing _data for the string to be printed, absRow() and absCol() for row and col, and _length for the length.
Compiled Object Files
CDialog
class CDialog : public CFrame{
    // field-collection attributes: see "Attributes" below
public:
    CDialog(CFrame *Container = (CFrame*)0, int Row = -1, int Col = -1,
            int Width = -1, int Height = -1, bool Borderd = false,
            const char* Border = C_BORDER_CHARS);
    virtual ~CDialog();
    void draw(int fn = C_FULL_FRAME);
    int edit(int fn = C_FULL_FRAME);
    int add(CField* field, bool dynamic = true);
    int add(CField& field, bool dynamic = false);
    CDialog& operator<<(CField* field);
    CDialog& operator<<(CField& field);
    bool editable();
    int fieldNum()const;
    int curIndex()const;
    CField& operator[](unsigned int index);
    CField& curField();
};
Attributes
int _fnum;
Holds the number of Fields added to the Dialog
bool _editable;
will be set to true if any of the Fields added are editable. This is optional because it depends on how you are going to implement the collection of CFields:
int _curidx;
Holds the index of the Field that is currently being edited.
CField* _fld;
Constructors/Destructors
CDialog(CFrame *Container = (CFrame*)0, int Row = -1, int Col = -1, int Width = -1, int Height = -1, bool Borderd = false, const char* Border=C_BORDER_CHARS);
The constructor passes all the incoming arguments to the corresponding arguments of the parent's constructor, CFrame.
Then it sets its own attributes to their default values, sets all the field pointers (_fld) to NULL, and sets all the dynamic (_dyn) flags to false.
virtual ~CDialog();
The destructor will loop through all the field pointers and if the corresponding dynamic flag is true then it will delete the field pointed to by the field pointer.
Methods
void draw(int fn = C_FULL_FRAME);
If fn is C_FULL_FRAME, it will call its parent's draw. Then it will draw all the Fields in the Dialog.
If fn is Zero, then it will just draw all the Fields in the Dialog.
If fn is a non-zero positive value, then it will only draw Field number fn in the dialog. (First added Field is field number one.)
int edit(int fn = C_FULL_FRAME);
If the CDialog is not editable (all fields are non-editable), it will just display the Dialog, wait for the user to enter a key, and then terminate, returning the key.
If fn is 0 or less, then before editing, the draw method is called with fn as its argument and then editing begins from the first editable Field.
If fn is greater than 0 then editing begins from the first editable key on or after Field number fn.
Note that fn is the sequence number of field and not the index. (First field number is one)
Start editing from field number fn;
Call the edit of each field and depending on the value returned, do the following:
- For ENTER_KEY, TAB_KEY and DOWN_KEY, go to the next editable Field; if this is the last editable Field, restart from Field number one.
- For UP_KEY go to the previous editable Field, if there is no previous editable Field, go to the last editable Field in the Dialog.
- For any other key, terminate the edit function returning the character which caused the termination.
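The navigation rules above reduce to index arithmetic over the editable Fields. The helpers below are an illustrative sketch, not part of the required CDialog interface:

```cpp
#include <vector>

// Find the next editable field after cur, wrapping around to the first
// field (ENTER_KEY, TAB_KEY, DOWN_KEY behaviour).
int nextEditable(const std::vector<bool>& editable, int cur) {
  int n = (int)editable.size();
  for (int step = 1; step <= n; ++step) {
    int i = (cur + step) % n;
    if (editable[i]) return i;
  }
  return cur;  // no other editable field
}

// Find the previous editable field, wrapping to the last one (UP_KEY).
int prevEditable(const std::vector<bool>& editable, int cur) {
  int n = (int)editable.size();
  for (int step = 1; step <= n; ++step) {
    int i = ((cur - step) % n + n) % n;
    if (editable[i]) return i;
  }
  return cur;
}
```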
int add(CField* field, bool dynamic = true);
Adds the CField pointed to by field to the Fields of the Dialog: appends the value of the field pointer after the last added field in the _fld array, sets the corresponding _dyn element to the value of the dynamic argument, increases _fnum by one, and returns the index of the added Field in the CDialog object.
important note:
Make sure that add() sets the container of the added CField to this CDialog object, using the container() method of CField
int add(CField& field, bool dynamic = false);
Makes a direct call to the first add method.
CDialog& operator<<(CField* field);
Makes a direct call to the first add method, ignoring the second argument and then returns the owner (current CDialog).
CDialog& operator<<(CField& field);
Makes a direct call to the second add method, ignoring the second argument and then returns the owner (current CDialog).
bool editable();
Returns _editable;
int fieldNum()const;
returns _fnum.
int curIndex()const;
returns _curidx;
CField& operator[](unsigned int index);
Returns the reference of the Field with incoming index. (Note that here, the first field index is 0)
CField& curField();
Returns the reference of the Field that was just being edited.
CDialog Compiled Object Files
CLineEdit
CLineEdit(char* Str, int Row, int Col, int Width, int Maxdatalen, int* Insertmode, bool Bordered = false, const char* Border=C_BORDER_CHARS);
CLineEdit sets the Field's _data to the value of Str. If CLineEdit is instantiated with this constructor, it will edit an external string provided by its caller. CLineEdit in this case does not allocate any dynamic memory, so _dyn is set to false (and therefore the destructor will not attempt to deallocate the memory pointed to by _data).
The location (row and col) and Bordered are passed directly to the parent Field's constructor.
CLineEdit(int Row, int Col, int Width, int Maxdatalen, int* Insertmode, bool Bordered = false, const char* Border=C_BORDER_CHARS);
Works exactly like the previous constructor with one difference; since no external data is passed to be edited here, this constructor must allocate enough dynamic memory to accommodate editing of Maxdatalen characters. Then make it an empty string and set Fields's _data to point to it. Make sure _dyn is set to true in this case, so the destructor knows that it has to deallocate the memory at the end.
~CLineEdit();
If _dyn is true, it deallocates the character array pointed to by the Field's _data.
Methods
void draw(int Refresh = C_FULL_FRAME);
It will first call Frame's draw passing Refresh as an argument to it.
Then it will make a direct call to console.display() to show the data kept in Field's _data.
The values used for the arguments of console.display() are:
- str: address of string pointed by _data + the value of _offset
- row: absRow() (add one if border is visible)
- col: absCol() (add one if border is visible)
- len: width() (reduce by two if border is visible)
int edit();
Makes a direct call to, and returns console.edit(). For the coordinates and width arguments follow the same rules as the draw function. For the rest of the arguments of console.edit(), use the attributes of CLineEdit.
bool editable()const;
Always returns true.
void set(const void* Str);
Copies the characters pointed to by Str into the memory pointed to by the Field's _data, up to _maxdatalen characters.
CLineEdit Compiled Object Files
CButton
CButton is a child of CField. It displays a small piece of text (usually one or two words) and accepts a single key hit. When in edit mode, it surrounds the text with square brackets to indicate that it is being edited.
#pragma once #include "cfield.h" namespace cio{ class CButton: public CField{ public: CButton(const char *Str, int Row, int Col, bool Bordered = true, const char* Border=C_BORDER_CHARS); virtual ~CButton(); void draw(int rn=C_FULL_FRAME); int edit(); bool editable()const; void set(const void* str); }; }
Attributes
This class does not have any attributes of its own!
Constructor / Destructor
CButton(const char *Str, int Row, int Col, bool Bordered = true, const char* Border=C_BORDER_CHARS);
When creating a Button, allocate enough memory to hold the contents of the Str and set Field's _data to point to it. Then copy the content of Str into the newly allocated memory.
Pass all the arguments directly to Field's constructor.
For Field size (width and height) do the following:
For width: set the width to the length of Str + 2 (adding 2 for the surrounding brackets), or, if the Button is bordered, to the length of Str + 4 (adding 2 for the surrounding brackets and 2 for the borders).
For height: set the height to 1, or, if the Button is bordered, to 3.
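The sizing rule is plain arithmetic. These helper functions are illustrative only; CButton computes the values inside its constructor:

```cpp
#include <cstring>

// Width: text length + 2 for the surrounding brackets,
// plus 2 more if a border is drawn around the button.
int buttonWidth(const char* str, bool bordered) {
  return (int)std::strlen(str) + (bordered ? 4 : 2);
}

// Height: one row of text, or three rows when bordered.
int buttonHeight(bool bordered) {
  return bordered ? 3 : 1;
}
```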
virtual ~CButton();
Deallocates the allocated memory pointed by Field's _data.
Methods
void draw(int fn=C_FULL_FRAME);
Draws the Button, with a border around it if it is Bordered. Note that there should be a space before and after the text, which will be used to surround the text with "[" and "]".
hint:
- First calls Frame's draw(fn) (passing the fn argument to the parents draw)
- Use console.display() to display the Button's text (pointed by Field's _data)
- If not bordered
- display the text at absRow() and absCol()
- If bordered
- display the text at absRow()+1 and absCol()+2
int edit();
First draw() the Button, then surround it with square brackets, place the cursor under the first character of the Button's text, and wait for user entry.
When the user hits a key: if the key is ENTER_KEY or SPACE, return C_BUTTON_HIT (defined in cuigh.h); otherwise return the entered key.
Compiled Object Files
CValEdit
int edit();
- Makes a direct call to CLineEdit's edit().
- After validation is done, if the _help function exists, it will call the help function again, using MessageStatus::ClearMessage and the container()'s reference as arguments.
- It will return the terminating key
Navigation keys are Up key, Down key, Tab key or Enter key.
MessageStatus is enumerated in cuigh.h
CValEdit Compiled Object Files
CMenuItem (optional) Compiled Object Files
In this article, you are going to learn how to quickly get started building React applications that use TypeScript. For this demonstration, you are going to use create-react-app, the official boilerplate for generating a new React project. TypeScript is a superset of JavaScript that adds a layer of static typing and supports familiar object-oriented programming concepts; it compiles down to plain JavaScript.
Pre-requisites
You must have installed:
Nodejs
npm/yarn
create-react-app version 2
Getting Started
If you do not have create-react-app installed, please run the command below in your terminal.
npm install -g create-react-app
If you have already installed it, please ignore the above step and let us begin. Invoke the create-react-app command with an additional TypeScript option to generate a React application with TypeScript enabled. Now that Create React App v2 is out, official TypeScript support comes with it. To check which version you are currently on, run the following command.
create-react-app --version
# output
2.1.1
create-react-app by default uses yarn as the package manager to install the required dependencies. This is an exciting time for JavaScript users that use TypeScript. We are going to use yarn too, to avoid context switching. If you have used create-react-app before, this should look very familiar. The additional piece of information is the --typescript flag. This flag tells CRA to use TypeScript as the default syntax and to add the appropriate configuration so that it works out of the box.
You will get the first hands-on experience with TypeScript in your React project by looking at the project structure. Open the newly generated folder in your favorite code editor. You will see the following.
Notice how the file extension of the App component automatically changed to .tsx from the usual .js or .jsx. There is also a tsconfig.json file with rules and configuration. It sits at the root of a TypeScript project and indicates the compiler settings and the files to include.
You can now run the application by running the command yarn start. You will be greeted by the familiar welcome screen.
The main advantage TypeScript provides is that it is statically typed. Programming languages can be either statically or dynamically typed; the difference is when type checking occurs.
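A one-function illustration of the difference (this snippet is not part of the tutorial's app):

```typescript
// The annotation makes the compiler reject a wrong call before the
// program ever runs.
function greet(name: string): string {
  return `Hello, ${name}`;
}

// greet(42); // compile-time error: Argument of type 'number' is not
//            // assignable to parameter of type 'string'.
console.log(greet("World"));
```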
The App.tsx File
Let’s jump into the main App file that usually is our main React component. You would not find any TypeScript enabled source code here.
import React, { Component } from 'react';
import logo from './logo.svg';
import './App.css';
It is the same as the good old App.jsx file.
Create a TypeScript Component
In this section, let us create our first TypeScript Component. Inside App.tsx add the following above the class App.
function Greetings({ greetMessage }) {
  return <div>Your message {greetMessage}!</div>;
}
This is a bare-minimum function component which takes greetMessage from props. Simple React and JavaScript stuff. At this point, you will be getting a compilation error both in your code editor and in the terminal.
Now let us modify this function with TypeScript and make this error go away.
function Greetings({ greetMessage }: { greetMessage: string }) {
  return <div>Your message {greetMessage}!</div>;
}
We are declaring above that the prop greetMessage should be a string. This is called an inline way of defining a type. There are different ways to define types in TypeScript. One is to create an interface as a separate declaration.
interface greetMessageProps {
  greetMessage: string;
}
And then use them in the function Greetings like below.
function Greetings({ greetMessage }: greetMessageProps) {
  return <div>Your message {greetMessage}!</div>;
}
This approach involves writing more boilerplate code. We are going to stick with the first option we used, that is, inline types.
Using the TypeScript Component
So far our App.tsx file looks like below.
import React, { Component } from 'react';
import logo from './logo.svg';
import './App.css';

function Greetings({ greetMessage }: { greetMessage: string }) {
  return <div>Your message {greetMessage}!</div>;
}
Let us use the newly created TypeScript stateless component in the App. After the <a> tag, add:
<Greetings greetMessage="Hello World" />
Now visit the URL to see it in action.
This shows our props are working. On passing the wrong type of prop, say a number, the app would not even compile. That is the advantage TypeScript gives you: catching errors like these in development mode. Change the value of greetMessage in Greetings from "Hello World" to {234}.
<Greetings greetMessage={234} />
First of all, you will be notified by the code editor that it expects a string instead of a number.
Next, visiting the URL in the browser window, where the app is compiling, it will display a similar error message.
Conclusion
And that is all you need to get started with React and TypeScript. Your application should now be fully up and running with TypeScript. This only scratches the surface of what TypeScript provides. You can create types for all your components and props and catch errors faster, even before the React app compiles.
Linux and Unix Books page7
This page discusses - Linux and Unix Books page7.
Linux and Unix Books page6
This page discusses - Linux and Unix Books page6.
Linux and Unix Books page5
This page discusses - Linux and Unix Books page5.
Linux and Unix Books page4
This page discusses - Linux and Unix Books page4.
Linux and Unix Books page3
This page discusses - Linux and Unix Books page3.
Linux and Unix Books page2
This page discusses - Linux and Unix Books page2.
Linux and Unix Books page1
This page discusses - Linux and Unix Books page1.
Linux and Unix Books
This page discusses - Linux and Unix Books.
Database books- Page 1
This page discusses - Database books- Page 1.
Database books Page20
This tutorial is designed to get the user up and running with MS Access in a rapid fashion..
C and C++ books-page6
The two functions DOMString.transcode() and XMLString::transcode() return arrays of raw data, in the first case a char * (an old-fashioned C string) and in the second case a zero-terminated array of XMLCh's..
C and C++ books-page9
This book is a tremendous achievement. You owe it to yourself to have a copy on your shelf..
C and C++ books-page13
This book was motivated by my experience in teaching the course E&CE 250: Algorithms and Data Structures in the Computer Engineering program at the University of Waterloo..
C and C++ books-page12
This section is designed to give you a general overview of the C programming language. Although much of this section will be expanded in later sections it gives you a taste of what is to come..
C and C++ books-page7
Namespaces are a very powerful C++ language feature. This article does not teach you the syntax of namespaces.
Hardware and Network books Page 1
This is self-paced, internet-based Network Hardware tutorial -- there are no books to purchase and no scheduled classes to attend..
Web Design Books
This site explains in very simple terms how to design, build and publish a website. If you haven't done it before, Internet publishing can appear rather daunting.
Linux and Unix Books page5
This paper presents how the Strategy pattern has been used to build BAST, an extensible object-oriented framework for programming reliable distributed systems. .
Linux and Unix Books page4
In this paper we address the problem of how an operating-system kernel or a server can determine with absolute certainty that it is safe to execute code supplied by an application or other untrusted source..
Java & JEE books Page14
This book describes how to develop and deploy enterprise beans for the JavaTM 2 SDK, Enterprise Edition (J2EETM SDK)..
Java & JEE books Page13
JavaScript is a new scripting language which is being developed by Netscape..
Java & JEE books Page10
Provides an overview of Java technology as a whole. It discusses both the Java programming language and platform, providing a broad overview of what this technology can do and how it will make your life easier..
Java & JEE books Page11
Java HotSpot VM is developed by Sun Microsystems.
Java & JEE books Page9
Java is a popular programming language available in many diverse contexts for implementing real software solutions. .
Hardware and Network books Page 2
This Cable Modem tutorial is designed to answer most questions about Cable Modems and the associated technology..
Java & JEE books Page12
Introduction to programming using java is a free, on-line textbook. It is suitable for use in an introductory programming course and for people who are trying to learn programming on their own..
Web Design Books Page 2
This book explains how applications are created with Mozilla and provides step-by-step information that shows how to create your own programs using Mozilla's powerful cross-platform development framework..
Java & JEE books Page17
The intended audience for this guide is the person who develops, assembles, and deploys J2EE applications in a corporate enterprise. The J2EE 1.4 Application Server manuals are available as online files in Portable Document Format (PDF) and Hypertext Markup Language (HTML). . | http://roseindia.net/books/ebooks/database-books-page12.shtml | CC-MAIN-2014-10 | refinedweb | 675 | 62.07 |
What is the effect of declaring a polynomial `sparse'?
I do some work on polynomials over finite fields. Generated polynomials of larger degree are sparse in the sense that only a few coefficients are non-zero. I tried to utilize this property by defining PR = PolynomialRing(GF(7), name='x', sparse=True). Then type(PR) is <class 'sage.rings.polynomial.polynomial_ring.PolynomialRing_field_with_category'>, which is not very informative. It is not clear which library is involved (FLINT, NTL, Sage's own implementation, or something else), and the libraries FLINT and NTL do not seem to support sparse polynomials (sparse in the sense of storing only the non-zero coefficients, similar to sparse matrices). So what does "sparse=True" mean? Is there a reduction of occupied memory? Anyway, time consumption for multiplication increases tremendously (I expected a major reduction). Thanks in advance.
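For intuition only (plain Python, not Sage's implementation): a sparse representation keeps just the non-zero coefficients, for example in a dict mapping exponent to coefficient.

```python
def sparse_mul(p, q, mod):
    """Multiply two sparse polynomials given as {exponent: coefficient}
    dicts, with coefficients taken modulo mod."""
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = e1 + e2
            c = (r.get(e, 0) + c1 * c2) % mod
            if c:
                r[e] = c
            elif e in r:
                del r[e]        # drop coefficients that cancel to zero
    return r

# (x + 1) * (x + 6) over GF(7): the x term cancels (1 + 6 = 0 mod 7)
print(sparse_mul({1: 1, 0: 1}, {1: 1, 0: 6}, 7))  # {2: 1, 0: 6}
```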
Function "make_pol" defined below generates for any target degree "deg" a polynomial over a finite field. Only about "deg^ex" coefficients are not zero where "ex" depends on the field's order and is in interval (1/2, 1), in particular "ex" is about 3/4 for field order 7. The reason for the sparsity is not yet understood. Multiplication of polynomials over GF(7) of degrees 100043 and 100361 (and 6144 coefficients each) generated this way takes 36 milliseconds for dense implementation but 29 seconds (without "milli"!) for sparse implementation (similarly for the function call, the function uses multiplications). If you are curious about the chosen degrees: these are Sophie Germain primes, they are of special interest since the generated polynomials are irreducible.
Platform info: 64 bit, 3.3GHz, Linux, Sage 7.1 (Release Date: 2016-03-20)
from sage.all import ZZ, GF, PolynomialRing, is_prime

def make_pol(deg, ch, sparse):
    """
    Generate a sparse polynomial over a finite field.

    INPUT:
      "deg"    target degree (natural number)
      "ch"     field characteristic (prime number)
      "sparse" whether a sparse implementation is to be used (boolean)

    OUTPUT:
      Polynomial in 'x' of degree "deg" over the field "GF(ch)".
      Only about "deg^ex" (where "ex" < 1) coefficients are not zero;
      the value of "ex" depends on "ch", e.g. "ex" is about 3/4 for "ch == 7".
    """
    if not deg in ZZ or deg < 0:
        raise ValueError("First arg must be a non-negative integer.")
    if not ch in ZZ or not is_prime(ch):
        raise ValueError("Second arg must be a prime number.")
    R = PolynomialRing(GF(ch), 'x', sparse=sparse)
    i0 = 0
    r0 = R(1)
    s = R.gen()
    if deg == 0:
        return r0
    i1 = 1
    r1 = s + R(1)
    if deg == 1:
        return r1
    i2 = 2
    r2 = s*r1 - r0
    if deg == 2:
        return r2
    k = 1
    while i2 < deg:
        if deg & k != 0:
            i3 = 2*i2 - i1
            r3 = s*r2 - r1
            i0 = i1
            r0 = r1
            i1 = i3
            r1 = r3
        else:
            i1 = i2
            r1 = r2
        k = 2*k
        s = s**2 - 2
        i2 = 2*i1 - i0
        r2 = s*r1 - r0
        if deg == i2:
            return r2
    return r3
I am working on DB operations for a project, developing a WinForms app in C#, and I am using Dapper to get data out of the DB. I am stuck in a situation where I need to retrieve data using an inner join. For example, I have two tables, Authors and Book, as follows:
public class Authors
{
    public int AuthorId { get; set; }
    public string AuthorName { get; set; }
}

public class Book
{
    public int BookId { get; set; }
    public int AuthorId { get; set; } // int, to match Authors.AuthorId in the join
    public string Title { get; set; }
    public string Description { get; set; }
    public int Price { get; set; }
}
Now in SQL Server I can easily get the data out of it using following query:
select B.Title, B.Description, B.Price, A.AuthorName
from author A
inner join book B on A.AuthorId = B.Authorid
But I don't know how to do this with Dapper multi-mapping. I also saw articles like this one but could not understand how it works and what the splitting does. It would be great if I could get a solution that fits my class designs. Thanks.
This is the output I want: ResultSet
With the result you have linked you can just do this:
public class Result
{
    public string Title { get; set; }
    public string Description { get; set; }
    public int Price { get; set; }
    public string AuthorName { get; set; }
}

connection.Query<Result>("SELECT B.Title, B.Description, B.Price, A.AuthorName FROM Author A INNER JOIN Book B ON A.AuthorId = B.Authorid");
Or you can use a dynamic type.
If you want to have a collection of Books with their Authors, it's another story. Then you would do it like this (selecting * for the sake of the example):
var sql = "SELECT * FROM Author A INNER JOIN Book B on A.AuthorId = B.Authorid"; var result = connection.Query<Book, Author, Book>(sql, (b, a) => { b.Author = a; return b; }, splitOn: "AuthorId");
The splitOn parameter should be understood like this: with all columns arranged from left to right in the order of the query, the values to the left of the split column belong to the first class, and the values from the split column onward belong to the second class.
Linux compatibility in IncludeOS
The latest version of IncludeOS, 0.13, is the most significant release we've made to date. The important change is the deprecation of Newlib, which used to be our C library. The C library is one of the most fundamental APIs an operating system can provide.
In addition to providing a C library, modern operating systems typically implement POSIX.
With this in mind, we found Musl. Musl is written explicitly for the Linux kernel. In addition to providing a C library, it also provides a complete POSIX implementation on top of Linux.
Out of the close to 400 system calls Linux provides, we have implemented 42 so far. Each system call we implement unlocks the relevant parts of POSIX via Musl. Moreover, since implementing a system call is typically pretty simple, we have an easy way of expanding our Linux support should we need to.
A quick and dirty example follows:
#include <sys/utsname.h>
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>

int main(void)
{
    struct utsname buffer;

    errno = 0;
    if (uname(&buffer) != 0) {
        perror("uname");
        exit(EXIT_FAILURE);
    }

    printf("system name = %s\n", buffer.sysname);
    printf("node name   = %s\n", buffer.nodename);
    printf("release     = %s\n", buffer.release);
    printf("version     = %s\n", buffer.version);
    printf("machine     = %s\n", buffer.machine);

    return EXIT_SUCCESS;
}
Running this yields the following output:
*
The addition of Musl should make it a lot simpler to develop applications for IncludeOS. Specifically, we now have a way of easily adding support for other language runtimes.
Java lexical structure
last modified July 6, 2020
Computer languages, like human languages, have a lexical structure. The source code of a Java program consists of tokens. Tokens are atomic code elements. In Java we have comments, identifiers, literals, operators, separators, and keywords.
Java programs are composed of characters from the Unicode character set.
Java comments
Comments are used by humans to clarify source code. There are three types of comments in Java.
If we want to add a small comment, we can use single-line comments. For more complicated explanations, we can use multi-line comments. There are also documentation comments, from which documentation is generated with the javadoc tool.
package com.zetcode;

/*
 This is Comments.java
 Author: Jan Bodnar
 ZetCode 2017
*/

public class Comments {

    // Program starts here
    public static void main(String[] args) {

        System.out.println("This is Comments.java");
    }
}
The program uses two types of comments.
// Program starts here
This is an example of a single-line comment.
/*
 This is Comments.java
 Author: Jan Bodnar
 ZetCode 2017
*/

This is an example of a multi-line comment.
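The documentation comments mentioned above use a /** ... */ syntax. A small illustrative sketch (the class and method here are my own, not part of the original example):

```java
package com.zetcode;

/**
 * An illustrative documentation comment; the javadoc tool
 * generates HTML documentation from comments written like this.
 */
public class DocComments {

    /**
     * Returns the sum of two integers.
     *
     * @param x the first operand
     * @param y the second operand
     * @return the sum of x and y
     */
    public static int add(int x, int y) {

        return x + y;
    }

    public static void main(String[] args) {

        System.out.println(add(3, 4));
    }
}
```

Running javadoc on this file produces an HTML page describing the add() method from its comment.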
Java white space
White space in Java is used to separate tokens in the source file. It is also used to improve readability of the source code.
int i = 0;
White spaces are required in some places, for example between the int keyword and the variable name. In other places, white spaces are forbidden: they cannot be present in variable identifiers or language keywords.
int a=1; int b = 2; int c = 3;
The amount of space put between tokens is irrelevant for the Java compiler. The white space should be used consistently in Java source code.
Java identifiers
Identifiers are names for variables, methods, classes, or parameters. Identifiers can have alphanumerical characters, underscores and dollar signs ($). It is an error to begin a variable name with a number. White space in names is not permitted.
Identifiers are case sensitive. This means that Name, name, and NAME refer to three different variables. Identifiers also cannot match language keywords.
There are also conventions related to naming of identifiers. The names should be descriptive. We should not use cryptic names for our identifiers. If the name consists of multiple words, each subsequent word is capitalized.
String name23; int _col; short car_age;
These are valid Java identifiers.
String 23name; int %col; short car age;
These are invalid Java identifiers.
The following program demonstrates that variable names are case sensitive. Even though the language permits this, it is not a recommended practice.
package com.zetcode;

public class CaseSensitiveIdentifiers {

    public static void main(String[] args) {

        String name = "Robert";
        String Name = "Julia";

        System.out.println(name);
        System.out.println(Name);
    }
}
Name and name are two different identifiers. In Visual Basic, this would not be possible, because in that language variable names are not case sensitive.
$ java com.zetcode.CaseSensitiveIdentifiers Robert Julia
Java literals
A literal is a textual representation of a particular value of a type. Literal types include boolean, integer, floating point, string, null, or character. Technically, a literal will be assigned a value at compile time, while a variable will be assigned at runtime.
int age = 29; String nationality = "Hungarian";
Here we assign two literals to variables. Number 29 and string "Hungarian" are literals.
package com.zetcode;

public class Literals {

    public static void main(String[] args) {

        int age = 23;
        String name = "James";
        boolean sng = true;
        String job = null;
        double weight = 68.5;
        char c = 'J';

        System.out.format("His name is %s%n", name);
        System.out.format("His is %d years old%n", age);

        if (sng) {

            System.out.println("He is single");
        } else {

            System.out.println("He is in a relationship");
        }

        System.out.format("His job is %s%n", job);
        System.out.format("He weighs %f kilograms%n", weight);
        System.out.format("His name begins with %c%n", c);
    }
}
In the above example, we have several literal values. 23 is an integer literal. "James" is a string literal. The true is a boolean literal. The null is a literal that represents a missing value. 68.5 is a floating point literal. 'J' is a character literal.
$ java com.zetcode.Literals
His name is James
His is 23 years old
He is single
His job is null
He weighs 68.500000 kilograms
His name begins with J
This is the output of the program.
Java operators
An operator is a symbol used to perform an action on some value. Operators are used in expressions to describe operations involving one or more operands.
+ - * / % ^ & | ! ~ = += -= *= /= %= ^= ++ -- == != < > &= >>= <<= >= <= || && >> << ?:
This is a partial list of Java operators. We will talk about operators later in the tutorial.
Java separators
A separator is a sequence of one or more characters used to specify the boundary between separate, independent regions in plain text or other data stream.
[ ] ( ) { } , ; . "
String language = "Java";
The double quotes are used to mark the beginning and the end of a string.
The semicolon ; character is used to end each Java statement.
System.out.println("Java language");
Parentheses (round brackets) always follow a method name. Between the parentheses we declare the input parameters. The parentheses are present even if the method does not take any parameters. The System.out.println() method takes one parameter, a string value. The dot character separates the class name (System) from the member (out) and the member from the method name (println()).
int[] array = new int[] { 1, 2, 3, 4, 5 };
The square brackets [] are used to denote an array type. They are also used to access or modify array elements. The curly brackets {} are used to initialize arrays. The curly brackets are also used to enclose the body of a method or a class.
int a, b, c;
The comma character separates variables in a single declaration.
Java keywords
A keyword is a reserved word in Java language. Keywords are used to perform a specific task in the computer program. For example, to define variables, do repetitive tasks or perform logical operations.
Java is rich in keywords. Many of them will be explained in this tutorial.
abstract    continue    for         new         switch
assert      default     goto        package     synchronized
boolean     do          if          private     this
break       double      implements  protected   throw
byte        else        import      public      throws
case        enum        instanceof  return      transient
catch       extends     int         short       try
char        final       interface   static      var
class       finally     long        strictfp    void
const       float       native      super       volatile
while
In the following small program, we use several Java keywords.
package com.zetcode;

public class Keywords {

    public static void main(String[] args) {

        for (int i = 0; i <= 5; i++) {

            System.out.println(i);
        }
    }
}
The package, public, class, static, void, int, and for tokens are Java keywords.
Java conventions
Conventions are best practices followed by programmers when writing source code. Each language can have its own set of conventions. Conventions are not strict rules; they are merely recommendations for writing good quality code. We mention a few conventions that are recognized by Java programmers. (And often by other programmers too).
- Class names begin with an uppercase letter.
- Method names begin with a lowercase letter.
- The public keyword precedes the static keyword when both are used.
- The parameter name of the main() method is called args.
- Constants are written in uppercase.
- Each subsequent word in an identifier name begins with a capital letter.
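A short sketch pulling these conventions together (the class and identifiers are invented for illustration):

```java
package com.zetcode;

// Illustrative only: the names below are invented to show the conventions.
public class NamingConventions {              // class name begins uppercase

    public static final int MAX_USERS = 100;  // constant written in uppercase

    public static void main(String[] args) {  // public precedes static; args

        int userCount = 3;                    // subsequent word capitalized
        printUserCount(userCount);            // method name begins lowercase
    }

    public static void printUserCount(int count) {

        System.out.println(count + "/" + MAX_USERS);
    }
}
```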
In this part of the Java tutorial, we covered some basic lexis for the Java language. | https://zetcode.com/lang/java/lexis/ | CC-MAIN-2021-21 | refinedweb | 1,204 | 60.21 |
10 May 2012 23:04 [Source: ICIS news]
HOUSTON (ICIS)--North American titanium dioxide (TiO2) margins at Kronos will taper off in the second half of 2012 as low-cost inventory is depleted and production costs increase by as much as 60%, analysts at Deutsche Bank Equity Research said on Thursday.
In its report, Deutsche Bank said gross margins at Kronos will contract throughout the remainder of the year from 46.6% in the first quarter. The bank also maintained its hold rating on Kronos stock and set a target price of $22/share.
"While we expect TiO2 supply and demand to remain tight through at least 2014 due to uneven global demand patterns," an analyst said, adding that the industry will be well-supplied intermittently.
However, most market sources currently describe plentiful supply levels and soft demand.
The root of most of the past year’s price increases has been the rising cost of ilmenite ore, but Kronos expects ore costs to stabilize over the next few years, enhancing margins and eventually allowing TiO2 capacity expansions.
Other North American TiO2 producers include DuPont, Cristal, Tronox and Huntsman.
I'm trying to create a Flash video that loads a movie clip placeholder to fill the black space before a user clicks the video start button. Once the user clicks the start button on the control panel, I want the placeholder to fade out (using TweenLite). The code attached loads the movie, the controls and the placeholder, but I can't get the placeholder to disappear when I click the start button (the movie plays on click, but behind the placeholder). A tip on what's wrong with this code (ActionScript 3.0) would be welcomed. Thanks.
(This code loads the movie without playing it, but clicking does not remove the placeholder.)
import fl.video.*;
import com.greensock.*;
import com.greensock.easing.*;

movie1.autoPlay = false;
movie1.source = "sandywoods_resident_export.flv";

function isPlaying(e:VideoEvent):void
{
    TweenLite.to(poster, 4, {alpha:0});
}

function hidePoster()
{
    poster.visible = false;
}
In the previous article we discussed the implementation of Bubble Sort in C. This article will brief you on how to implement Selection Sort in C. Let us get started then.
Selection sort is another algorithm that is used for sorting. This sorting algorithm iterates through the array, finds the smallest number in the unsorted part, and swaps it with the first unsorted element if it is smaller. It then moves on to the second element, and so on, until all elements are sorted.
Example of Selection Sort
Consider the array:
[10,5,2,1]
The first element is 10. Next, we must find the smallest number in the remaining array. The smallest of 5, 2 and 1 is 1, so we swap 10 and 1.

The new array is [1,5,2,10]. Again, this process is repeated.
Finally, we get the sorted array: [1,2,5,10].

The algorithm, step by step:

Step 1 − Treat the first unsorted position as the minimum.
Step 2 − Search the remaining array for the smallest element.
Step 3 − Swap it with the value at the minimum position.
Step 4 − Move the boundary of the unsorted part one position to the right.
Step 5 − Repeat the process until we get a sorted array.
Let us take a look at the code for the programmatic implementation.
Code for Selection Sort:
#include <stdio.h>

int main()
{
    int a[100], n, i, j, position, swap;

    printf("Enter number of elements\n");
    scanf("%d", &n);

    printf("Enter %d Numbers\n", n);
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);

    for (i = 0; i < n - 1; i++)
    {
        position = i;
        for (j = i + 1; j < n; j++)
        {
            if (a[position] > a[j])
                position = j;
        }
        if (position != i)
        {
            swap = a[i];
            a[i] = a[position];
            a[position] = swap;
        }
    }

    printf("Sorted Array:\n");
    for (i = 0; i < n; i++)
        printf("%d\n", a[i]);

    return 0;
}
In the above program, we first take the number of terms from the user and store it in n. Next, the user needs to enter the array. The array is accepted and stored in a[].
The first ‘for loop’ takes care of the element to be matched. It assigns i to the position variable. The inner ‘for loop’ iterates through the remaining elements to find the smallest one. Once the smallest element is found, its index j is assigned to the position variable.
Then, it is checked whether the position variable is equal to i. If it is not, swapping takes place using a swap variable. Let us now move to the final bit of this article on Selection Sort in C.
Other ways to implement Selection Sort:
There are other ways to do the selection sort. One such way is by using a sort function. In this program, we will do the sort in the function. Remaining aspects of the program will be the same.
Here is the code:
#include <stdio.h>

void SelSort(int array[], int n);

int main()
{
    int array[100], n, i;

    printf("Enter number of elements\n");
    scanf("%d", &n);

    printf("Enter %d Numbers\n", n);
    for (i = 0; i < n; i++)
        scanf("%d", &array[i]);

    SelSort(array, n);

    return 0;
}

void SelSort(int array[], int n)
{
    int i, j, position, swap;

    for (i = 0; i < (n - 1); i++)
    {
        position = i;
        for (j = i + 1; j < n; j++)
        {
            if (array[position] > array[j])
                position = j;
        }
        if (position != i)
        {
            swap = array[i];
            array[i] = array[position];
            array[position] = swap;
        }
    }

    printf("Sorted Array:\n");
    for (i = 0; i < n; i++)
        printf("%d\n", array[i]);
}
The execution of the program is the same, but the only difference is that in this program a function is called to do the sorting.
Selection Sort is done in this manner.
With this we come to the end of this blog on ‘Selection Sort In C’.
Got a question for us? Mention them in the comments section of this blog and we will get back to you. | https://www.edureka.co/blog/selection-sort-in-c/ | CC-MAIN-2019-39 | refinedweb | 652 | 63.09 |
Jersey streaming 2?
I am trying to get JSON streaming working in Jersey 2. For the life of me, nothing flows until the thread is complete.
I've tried this example trying to simulate a slow data producer.
@Path("/foo")
public void getAsyncStream(@Suspended AsyncResponse response) {

    StreamingOutput streamingOutput = output -> {
        JsonGenerator jg = new ObjectMapper().getFactory()
                .createGenerator(output, JsonEncoding.UTF8);
        jg.writeStartArray();
        for (int i = 0; i < 100; i++) {
            jg.writeObject(i);
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                logger.error(e, "Error");
            }
        }
        jg.writeEndArray();
        jg.flush();
        jg.close();
    };

    response.resume(Response.ok(streamingOutput).build());
}
And yet Jersey just sits there until the JSON generator is done before returning the results. I am watching the responses go through Charles Proxy.
Do I need to activate something? Not sure why this won't happen
Edit:
It might actually work, just not the way I expected. I don't think the thread is writing things out in real time, which is what I wanted; I want the responses written out to the client immediately rather than buffered. If I run a loop of a million with a non-sleeping thread, then the data is written in chunks without loading it all into memory.
You've got it right in your edit: it works as expected. Also, StreamingOutput is just a wrapper that allows us to write directly to the response stream; it doesn't mean the response is sent to the client on each server-side write to the stream. And AsyncResponse does not produce a different response as far as the client is concerned. It just helps to increase throughput when doing long-running tasks: the long-running task runs on a different thread, so the resource method can return.
- More details in the asynchronous server API
You're looking for Chunked Output instead
Jersey offers a facility for sending a response to the client in several more or less independent chunks using dedicated output. Each piece of response usually takes some (longer) time to prepare before sending it to the client. The most important fact about response chunks is that you want to send them immediately to the client when they become available, without waiting for the rest of the chunks to become available.
I'm not sure how this will work for your specific use case, since JsonGenerator expects an OutputStream (which the ChunkedOutput used here is not), but here is a simpler example:
@Path("async")
public class AsyncResource {

    public ChunkedOutput<String> getChunkedStream() throws Exception {

        final ChunkedOutput<String> output = new ChunkedOutput<>(String.class);

        new Thread(() -> {
            try {
                String chunk = "Message";
                for (int i = 0; i < 10; i++) {
                    output.write(chunk + "#" + i);
                    Thread.sleep(1000);
                }
            } catch (Exception e) {
            } finally {
                try {
                    output.close();
                } catch (IOException ex) {
                    Logger.getLogger(AsyncResource.class.getName())
                            .log(Level.SEVERE, null, ex);
                }
            }
        }).start();

        return output;
    }
}
Note: I had a problem getting this to work at first; I would only get the full result at the end, all at once. The problem seemed to be something completely separate from the program. It was my AVG antivirus causing it: a feature called "LinkScanner" stopped the chunking process. I disabled this feature and it started working. I haven't dug into it much and I'm not sure what the security implications are, so I can't say why the AVG app has problems with it.
EDIT
The real issue seems to be that Jersey buffers the response in order to calculate the Content-Length header. You can see in this post how to change this behavior.
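As I understand the referenced post, the fix is to shrink Jersey's outbound Content-Length buffer to zero so each write is flushed immediately. A sketch, assuming Jersey 2's ServerProperties constant and a hypothetical resource package (verify the property against your Jersey version):

```java
// Sketch: turn off outbound buffering so writes stream to the client
// instead of being collected to compute a Content-Length header.
ResourceConfig config = new ResourceConfig()
        .packages("com.example.resources") // hypothetical package
        .property(ServerProperties.OUTBOUND_CONTENT_LENGTH_BUFFER, 0);
```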
REGGIE
GOLD AWARD WINNERS
Chex Quest
Ralston Foods, Inc.
Waters Molitor, Inc.
Although Chex Cereals ranked 10th in its more than 300-product
category, the company and its agency, WatersMolitor, set out
to execute a promotional campaign that would create consumer
top-of-mind awareness and shed a somewhat sleepy brand image.
With its position as a family product and research showing
that Chex had strong equity in its unique square, waffle shape,
the company set out to build market share, make the brand's
image more contemporary, strengthen its position and reinforce
the brand's equity. At a time when many cereal manufacturers
were focused on price cuts, Chex wanted to create a value-added
promotion that would build equity and create a compelling
reason to purchase Chex.
The creative solution was to pack a free, non-violent CD-ROM
computer game inside every box. The game helped to position
Chex as fun for children and gave it a contemporary appeal
by taking advantage of the ever-increasing number of home
computers. The game had to be exciting so consumers would
continue to play the game and generate brand impressions.
Agency WatersMolitor created an original, interactive 3-D
computer game called Chex Quest, which was packed as a free
premium in 5.7 million boxes of Chex Cereals. By bringing in
America Online as a promotional partner, the company was able
to cut production costs. The CD-ROMs carried the AOL software
and a subscription offer for 50 free hours.
Chex Quest featured five playing levels, 3-D graphics, full
animation and sound effects, putting it on par with most popular
CD-ROM action/adventure games that typically retail for $30
or more. The game was rated appropriate for all audiences
and had a skill level targeted for kids of nine and older.
Chex Quest begins with a full-screen animated movie that
sets the story line and introduces the challenge. Players
eavesdrop on a meeting of the "Intergalactic Federation
of Cereal" and learn that slimy, green aliens called
"Flemoids" have invaded the food production and
processing facilities on "Bazoik." The Flemoids
must be "zorched" back to their dimension and their
prisoners freed by the "Chex Warrior."
Promotional support included:
* A free standing coupon insert distributed nationally in
Sunday newspapers announcing the game and the AOL offer.
* A front flag on 5.7 million packages of Chex announcing
the game and directing consumers to the Chex promotional web
site for the game sequel, Chex Quest 2.
* More than 42 million public relations impressions in major
newspapers, broadcast and appropriate magazines targeting
moms, game players and computer owners.
* Thirty-second television spots announcing the CD-ROM offer.
* A web site was created to reach the online audience and
offer the free sequel that could only be played by consumers
with the original game. The site also featured game character
bios, game tips, answers to frequently asked questions and
Chex recipes and product information.
The results, including a 295 percent increase in incremental
volume over base and a 48 percent increase in volume share
from the previous year, were tremendous. Consumer feedback
was extremely enthusiastic about the game and the brand. Players
of all ages appreciated the game and there were requests for
sequels.
The Chex Quest CD-ROM continued to be played by consumers
beyond the promotion period, immersing consumers in the brand
each time they play the game. In the minds of participating
consumers, Chex Cereal's image changed from old-fashioned
and stodgy to exciting, fun and modern.
National
Consumer Promotion Over $500,000
MusicLand/Scream on Pay-Per-View
Sweepstakes
Buena Vista Television
In-house
Pay-Per-View movies generate a significant portion of revenues
for cable operators and direct-broadcast satellite systems
in addition to the studio that distributes them. The Pay-Per-View
industry generated a record $1.2 billion in 1997, with movies
ranking second only to sports.
Buena Vista Television, the Pay-Per-View distributor for
all Buena Vista Pictures and Miramax films, saw Halloween
as an opportunity to create event programming to reach PPV's
core audience of men between 18 and 34 with the movie "Scream."
Since "Scream" had just run and cable operators
do not like to schedule horror movies, Buena Vista Television
needed to create a compelling promotional event to convince
cable operators to air "Scream" on Halloween.
Finding a promotional partner to support "Scream"
was a unique challenge because of its horror roots. Looking
for a more edgy, irreverent partner targeting the same audience,
Buena Vista joined forces with the Musicland Group, consisting
of Sam Goody/Musicland, Suncoast Motion Picture Company, Media
Play and On Cue stores. It is the nation's largest specialty
retailer of prerecorded music and home video products. Musicland
was also seeking to create a branded event for Halloween to
drive that same demographic traffic into their stores.
To drive traffic to the stores and drive orders for the special
Halloween airing of "Scream," the "Scream"
Sweepstakes was launched. The grand prize was a trip for two
to Hollywood to attend the star-studded premier of "Scream
2." Consumers could pick up an entry form at any of the
1,375 Musicland Group locations.
Promotional elements included in-store "Scream"
POP elements such as banners, signs, employee name tags, "Scream"
footage on in-store monitors and "Scream" merchandise.
Media support of the campaign included cable promotional spots
instructing consumers to "Order 'Scream' on PPV and go
to Musicland to enter the 'Scream' Sweepstakes." This
provided $1 million in incremental media from 200 participating
cable operators. Media Play ran four weekly newspaper inserts
promoting the sweepstakes and the PPV airing as well as radio
spots promoting the sweepstakes. On Cue and Suncoast supported
the promotion with additional advertising and in-store flyers.
"Scream" ended 1997 as the number two Pay-Per-View
title, which is impressive since the movie ranked 11th at
the box office. It surpassed many higher theatrical-grossing
films such as "Independence Day" and "Jerry
Maguire." Based on the success of "Scream,"
Buena Vista Television is planning to re-release additional
titles on PPV in the future to continue to generate revenue
for the studio and its franchises, just as re-releasing classic
animated movies does for home video.
During the month of the "Scream" Promotion, Musicland
company-wide sales increased by 3.9 percent over the same
period in the prior year. And "Scream" was one of
Musicland's most successful pre-sell videos of the quarter.
The "Scream"/Musicland Halloween promotion is directly
credited for generating incremental revenue that made "Scream"
the number two Pay-Per-View film of the year.
National
Consumer Promotion Under $500,000
"Intrigue Buzz Tactics"
Oldsmobile
Frankel & Company
Out of all the divisions of General Motors, none has experienced
more change than Oldsmobile in recent years. Oldsmobile's
new direction is toward uncompromised satisfaction throughout
the ownership experience. The company's Intrigue, aimed at
import sedan buyers, was seen as an important vehicle for
executing this strategy.
The import competitors represent formidable competition,
having substantial equity with import-oriented consumers.
The Intrigue had to deliver on the promises of quality, refinement,
responsiveness, ride, handling and noise isolation that today's
import-oriented buyers expect. The model was designed to compete
in the upper mid-sedan market and be a key vehicle for the
division by accounting for up to a third of its total sales.
Buyers in this segment desire some distinctiveness in their
vehicle, are generally intrinsically motivated, research their
purchases very carefully and rely heavily on third-party endorsements
such as Consumer Reports, friends and family.
The objectives of "Intrigue Buzz Tactics," which
was implemented prior to the launch of national advertising
and promotions, were to create awareness of the Intrigue,
increase visibility, build excitement among consumers, generate
PR exposure and increase showroom traffic. Frankel created
a series of guerrilla tactics to create publicity and word-of-mouth
about the product. This included making "Experience the
Intrigue" the key strategy by bringing consumers to the
test drive and taking the Intrigue experience out of the showroom
to the consumer.
These strategies resulted in the "Compliments of Intrigue,"
"Shadow Play," "3-D Intrigue" and "Whose
Intrigue Sweepstakes." A fleet of Intrigues appeared
at various locations frequented by the target audience in
10 markets to provide "compliments" such as paying
for parking, meals, beverages, car washes, fill-ups and admission
fees. An Intrigue handout calling for a visit to the local
Olds dealer accompanied the compliments. The press was alerted
to these events, which according to consumer feedback sparked
curiosity and fostered positive perceptions of Oldsmobile.
More than 30,000 potential customers experienced the Intrigue
during the one-week promotion.
Reminiscent of the "Bat Signal" a real life "Intrigue
Signal" slide show was projected on buildings in 16 markets
by a mobile projection vehicle that drove through downtown
streets. In six markets 3-D billboards were created with an
actual Intrigue suspended on it. The billboard over the Holland
Tunnel in New York City appeared in newspapers throughout
the country.
The sweepstakes began with a promotional version of the Intrigue
"Embassy" commercial that ran in theaters in 15
markets. The commercial had no reference to the Oldsmobile
name, but did offer a shot of the logo as a hint to viewers.
Moviegoers were asked to guess what mysterious car company
had created the Intrigue. They were directed to an entry box
after the movie to answer the question and register for a
chance to win one of three Intrigues. This spot was viewed
by more than 5.5 million people with 86,000 entering the sweepstakes.
In addition to the above results, Intrigue sales volume increased
621 percent during the promotional period, the billboards
generated more than 31 million impressions, radio barter generated
over 9 million impressions, PR efforts generated 11 million
impressions and the "Shadow Play" drew 10,000 consumer
requests for information.
Local/Regional
Consumer Promotion Over $250,000
"The Steak Market Game"
Palm Restaurants
D.J Gorman
The Palm, a restaurant with locations in Manhattan, East
Hampton, Chicago and 10 other major metropolitan and resort
areas, caters specifically to an upscale lunch and dinner
business clientele. Its reputation is based primarily on the
high quality of steak it serves.
The restaurant set out to increase visit frequency among
current customers, increase its clientele, increase trials
by non-customers and work with tie-in partners that would
share the promotional costs.
Capitalizing on the bullish '90s stock market - an area of
prime interest to The Palm's target audience of upscale business
people - "The Steak Market Game" was born. The game
with all the elements of playing the market involved visit
incentives, instant win excitement and the appeal of good
odds of winning a prize. Restaurant customers could open an
account, make deposits and receive dividends as they built
their portfolios. Every customer had a personal account identified
by his or her home telephone number written on the certificate.
They could accumulate their stocks and mail them in at the
end of the promotion or deposit them when leaving the restaurant.
Customers received a random selection of three "stock
certificates" indicating a specific number of shares
of a NYSE company every time they dined. At the end of the
promotion, the customer with the highest value "steak
certificates" portfolio, based on actual prices at the
close of the New York Stock Exchange that day, won the grand
prize of $20,000.
The promotion's exposure was extended through the partnership
of American Express, Money Magazine and Seagram's. Each received
exposure on all promotional materials and helped to reinforce
the Wall Street orientation of the promotion.
"The Steak Market Game" was promoted through direct
mail, four walls, neighborhood marketing and advertising.
Restaurant customers and consumers in the Money Magazine database
received mailings with a starter gift of 10 shares of Seagram's
stock to encourage participation. The magazine announced the
promotion through a full-page ad. To further enhance the stock
market feel and excitement, all locations were provided with
an actual stock ticker tape machine.
A special "Trader's Dinner," arranged by D.J. Gorman,
for a select group of loyal customers launched the event in
each restaurant with immediate excitement and positive word-of-mouth.
Seagram's sponsored the event, which was complete with an
introduction to the game, stocks and a sampling of premier
Seagram's whiskeys.
The game increased customer visit frequency by 18.5 percent,
increased The Palm's customer database by 15 percent and increased
store sales by 8 percent compared to the previous year. This
was a significant accomplishment as previous promotions typically
generated a 2-percent increase. The objective of creating
an event that was valuable to tie-in partners was achieved,
and resulted in a contribution by the partners of more than
55 percent of the promotion's total cost.
Combining The Palm's signature product with something of
prime interest to its target audience resulted in a unique
execution of a sweepstakes promotion that stimulated the broadest
possible awareness of the restaurant and its product.
Local/Regional Consumer Promotion Under $250,000
Marie Callender's Mobile Marketing Program
Marie Callender
CME Promotion Group
Marie Callender's is a premium line of frozen single- and
multi-serve meals and cobblers, representing approximately
$200 million sales in a market category generating $3.5 billion
in annual sales.
The line of frozen meals is based on original recipes used
by founder Marie Callender in the first Marie Callender's
restaurant in California. Brand performance had been solely
driven by trade promotion and merchandising with no advertising
and very little consumer promotion.
The Mobile Marketing Program's objectives were to build consumer
awareness and trial in new markets, support the "Unbelievably
Good Frozen Food" positioning, and increase sales. In
core markets with Marie Callender's restaurants, the goal
was to support the launch of 12 new entrees. Traditionally,
the product line performed very well in those markets with
an established restaurant.
To accomplish the objectives, a nostalgic Airstream trailer
was converted into a retro Marie Callender's Mobile Diner
to deliver restaurant-quality food to consumers in six markets
- Chicago, Milwaukee, Detroit, Grand Rapids, Pittsburgh and
New England. During a 19-week period, the two mobile diners
were used to make sell-in presentations to all key retailers
in the tour markets and deliver "home cooked" taste
samples to consumers in the fun and relaxing environment of
the diner.
Product samples and high-value coupons were also distributed
during high-traffic weekend events and festivals and key retailer
locations. Coupon incentives used in the promotion ranged
from three $1 coupons in the mobile diners to ad coupons with
up to $1 off and in-store $1-off coupons.
The mobile diner promotion was supported by: in-store window
banners; point-of-sale displays and ad slicks; radio advertising;
and live-remote radio promotions.
Consumer response to the diner and samples was overwhelmingly
positive, while volume increased an average of 18 percent
and quality merchandising increased by 25 percent. The diner,
due to its success in establishing consumer awareness, has
been incorporated into a new television advertising campaign.
The mobile diner tour resulted in 100,000 consumer samples
and generated more than 700,000 total impressions. More than
100,000 coupons were distributed via the diner and in-store
demos.
By bringing the restaurant heritage to life through the retro
diner vehicle, Marie Callender's was able to communicate
the premium nature of the brand in a fun and involving way,
while delivering quality taste samples and coupons. The diner's
relevancy to the brand heritage and its appeal with the retail
trade and consumers have caused it to take on an even more
pivotal role in the overall brand marketing. The retail trade
was extremely supportive not only of the account-specific
programs, but applauded the market-wide brand communication
it provided. Retailers supported the diner appearances with
ad features and displays.
Multi-Channel Integrated Promotion Over $500,000
Crayola Search for True Blue Heroes
Binney & Smith
In-house
The Crayola brand name is among the few in America that is
immediately recognizable and universally evokes warm, positive
feelings by everyone. Manufactured by Binney & Smith since
1903, the product's primary goal has been bringing color,
excitement and imagination into children's lives.
The brand enjoys high awareness with strong kid and mom appeal,
but found it was vying for kids' leisure time with other product
lines such as video games, action figures and computers. The
amount of time kids color per day had not decreased, but the
company wanted to ensure that Crayola products remained relevant.
The promotion objectives were to drive significant incremental
core product sales, maximize consumer takeaway of Crayola
products, generate high levels of retailer support and create
top-of-mind awareness of the Crayola brand.
To accomplish these goals, eight new crayon colors were introduced
to add excitement and news to the 64- and 96-crayon packages.
These crayons were developed to eventually create a 120-count
"Giant Chest" of crayons to be introduced in 1998.
The colors, featuring the generic labels of "color 1"
through "color 8," were placed in promotional Crayola
The "Search for True Blue Heroes" solicited kids'
nominations of their heroes. They could nominate their hero
by drawing a picture, writing an essay and dedicating one
of the eight new colors to their hero. To drive consumer participation,
Crayola for the first time in history allowed winners in the
promotion to have their personal names printed on the new
crayon colors as the actual color names. The eight heroes,
the children that nominated them and their guests received
trips to the Crayola factory for an induction ceremony into
the Crayola Hall of Fame. One of the eight heroes was selected
the Ultimate True Blue Hero and Crayola donated $10,000 to
the winner's favorite charity.
To further enhance the program, Crayola created a mail-in
offer where consumers could order a crayon with their name
and their hero's name on it. Publicity campaigns surrounded
the search kick-off event, the introduction of the new colors
and the winners' ceremony. The only paid advertising was a
one-page ad about the event and mail-in offer that appeared
in Crayola Kids Magazine, published by Meredith Publishing.
The magazine provided editorial coverage of the promotion
and devoted a section to the winners.
Additionally, a display program, special incremental product
and merchandising opportunities, a sales force incentive contest
and trade excitement incentives were used to support the promotion.
A special limited-edition Crayola True Blue Heroes tin was
created, which was only provided for consumer sale to those
outlets that promoted and featured the contest. The contest
and the winners were also highlighted on the Crayola web site.
The promotion was a great success for Crayola, posting a
12-percent increase in factory shipments of 64- and 96-count
packages and a 6-percent increase in sell-through. Sales exceeded
the $1 million incremental core crayon product sales goal.
The promotion also achieved a 47-percent increase in gross
media impressions with more than 220 million.
This campaign connected brand equity, program theme and business
results extremely well. Crayola created an ownable event that
linked its key equity elements: color, fun, kids, creativity
and "good for you," while bringing an integrated
marketing program to the consumer.
Multi-Channel Integrated Promotion Under $500,000
"I Found Your Wallet"
Phoneworks Inc.
In-house
SmartSpiffs is a new interactive marketing system from PHONEWORKS
that provides marketers with turnkey solutions for loyalty
and continuity objectives. SmartSpiffs solutions now support
the marketing efforts of several national brands such as Coca-Cola,
Gerber and JC Penney.
Positioned as a "better mousetrap," SmartSpiffs
competes with all the traditional promotional tools as well
as the other "better mousetraps" available. One
of the marketing challenges for the high-tech-powered tool
is that live multi-media presentations are the only practical
means for showing off SmartSpiffs. And heads of top sales
promotion agencies are extremely difficult to reach.
To cut through the screening and insulation surrounding the
agency head, SmartSpiffs targeted the leaders of 100 top sales
promotion agencies with the goal of securing presentation
appointments.
PHONEWORKS used a unique and innovative strategy to get through
to the agency heads. FedEx overnight packages were sent to
the agency heads containing a bulging leather wallet bound
by a rubber band. A note, on a torn piece of paper, is attached
that reads: "I found your wallet! I'll call tomorrow
to make sure you got it. Kathy."
The wallet makes it through to the boss, who is intrigued
and, of course, checks to confirm that his or her wallet is
in fact not missing. Upon opening the wallet, the boss finds
an ID card with his name and business address, snapshots of
people he doesn't recognize, three $1 bills and an ATM receipt
for a recent withdrawal. There's a "to do" list scrawled
on the back of the ATM receipt with a reminder to find out
about SmartSpiffs. Further exploration of the wallet yields
a "while you were out slip" from "Kathy"
of PHONEWORKS wanting to set up a SmartSpiffs capabilities
presentation. There is also Kathy's business card, a matchbook
with the SmartSpiffs 800-number scribbled on it and several SmartSpiffs
award certificates from campaigns run by Minute Maid, Jack
Daniel's, Media Play and others. A Wall Street Journal-style
article raving about SmartSpiffs, a flier describing new applications
for the product and a real lottery ticket are also included.
The result: 16 agency heads called on their own to find out
more about SmartSpiffs and 63 presentations were scheduled
overall. Dozens of agency heads praised the promotion's originality.
The promotion also generated $14 million in quotations submitted
with $2.5 million in closed business to date.
"I Found Your Wallet" rapidly established the SmartSpiffs
brand as more than 60 percent of the promotion industry's
top agencies were introduced to the product and began advocating
it to their own clients.
Local/Regional Business-To-Business/Trade Promotion
"QuestWorld Adventure"
Cartoon Network
In-house
The Cartoon Network, a 24-hour, all cartoon cable television
network, was seeking to promote the "Real Adventures
of Jonny Quest," a remake of the 1960s prime time series.
The new Jonny Quest combines the best of the classic series
- global adventures, cutting-edge technology and teamwork
- and updates it to the '90s as a mystery-adventure featuring
teenagers. The Cartoon Network's core audience is kids between
two and 11, but as they grow into tweens it becomes difficult
to keep their interest in cartoon programming. The "Real
Adventures of Jonny Quest" was created to have broad
appeal with viewers, especially the older kid demographic
because of the real themes. However, the show had been slow
in building consistent ratings and many tweens had not sampled
it.
The promotion objectives were to increase viewing interest
in the show and to maximize viewership among kids between
nine and 14 by creating a fantasy promotion, the "QuestWorld
Adventure." The contest gave 19 kids (10 from the U.S.)
the chance to be flown to an undisclosed destination to join
forces to carry out a top secret Quest mission. Additional
prizes included QuestWorld adventure gear, complete with backpack,
flashlight/siren, travel journal, pen, glow sticks and a T-shirt.
In addition to the U.S. contest, the network's Latin American
and Asia divisions also participated in the promotion.
To drive viewership, entrants were asked to tune in to all
new episodes of Jonny Quest between Feb. 10 and 14. Entry
required viewers to watch an episode, write down the destination
the Quest Team was traveling to and mail it in to the "QuestWorld
Adventure."
The contest was promoted by more than 200 spots on the Cartoon
Network, full-page print ads in publications serving the target
demographic, radio commercials and merchandise giveaways on
the Kids Star Radio Network in four key markets.
A cable operator contest was created to generate additional
local cross channel spots encouraging kids to "watch
the Cartoon Network to try out for the mission." The
174 participating cable operators generated more than 34,000
cross channel spots. Other support included tune-in promos
in 130 WB stores and the QuestWorld web site.
All correspondence to the winners perpetuated the fantasy
that this was a real Quest adventure including mission briefings,
top-secret itinerary and Quest Team credentials. Press kits
were sent to the media in the winners' hometowns. A film crew
documented the "QuestWorld Adventure" for on-air
promos and merchandising to future sponsors and cable operators.
The adventure took place in Jamaica and required the winners
to work together to solve a mystery as they explored the island.
The U.S. portion of the event received more than 50,000 entries
during the five-day promotion. The average age of the respondents
was 10 and ratings among the key target audience increased
by 100 percent during the promotion. QuestWorld web site visits
increased by 300 percent during the promotion. Cable affiliate
participation generated more than $3.4 million in incremental
cross channel media support.
This promotion captured the imagination of the Cartoon Network's
older viewers and completely exploited the essence of the
"Real Adventures of Jonny Quest." The high level
of interaction in the promotion increased interest in the
program, deepened the network's relationship with its viewers
and served to build the Cartoon Network brand worldwide.
International or Global Promotion
OK, rev#4 (and hopefully final). I only renamed a file,as per suggestion of Andrew Morton. Haven't seen anythingelse worth changing.Quick recap of the mods involved: o Documentation/x86-hal.txt # added file o include/asm-i386/eflags_if.h # added file (only used for VM) o arch/i386/Kconfig # added one menu entry o arch/i386/Makefile # added one ifdef..endif o arch/i386/boot/Makefile # added one ifdef..endifDiffs are available at: file 'Documentation/x86-hal.txt' explains the rationale forthese mods and my case for them going into the kernel before 2.6.No *.{c,h,S} files are modified. | https://lkml.org/lkml/2003/1/23/189 | CC-MAIN-2022-21 | refinedweb | 105 | 63.56 |
Today I had a customer query where they were seeing the following two messages every 10 minutes in the Event Log on a server that had Service Bus for Windows Server 1.1 installed and running.
Event ID 65203 - TrackingId: , SubsystemId: , General Service Bus Resource Provider Information. Namespace recovery completed. Trying to recover . were successfully recovered.
Event ID 65203 - TrackingId: , SubsystemId: , General Service Bus Resource Provider Information. Namespace recovery started.
So it turns out that this is just noise and safe to ignore.
When you try to create a namespace through the Resource Provider it will create the namespace in the project store with a state of ‘Activating’. This is then picked up by other processes in the gateway to complete the namespace creation process. If the gateway or the gateway database is down, then this request is left in the ‘Activating’ state. There is then another job in the Resource Provider that runs every 10 minutes that tries to complete any incomplete namespace creations.
The log that is written above is simply saying that the job is running, but there’s no namespace in a broken state to recover.
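The timer-job behaviour described above can be sketched roughly as follows. This is an illustrative Python sketch only — the store layout, the "Active" state name and the function name are my own assumptions, not the actual Service Bus implementation; only the 'Activating' state comes from the post:

```python
# Hypothetical sketch of the Resource Provider recovery job described above.
PENDING_STATE = "Activating"  # state of an incomplete namespace creation

def recover_namespaces(store):
    """Retry any namespace creations left incomplete (e.g. gateway was down)."""
    pending = [ns for ns in store if ns["state"] == PENDING_STATE]
    # "Namespace recovery started/completed" is logged on every run, even when
    # `pending` is empty - which is why the event shows up every 10 minutes.
    for ns in pending:
        ns["state"] = "Active"  # complete the creation
    return len(pending)

store = [{"name": "ns1", "state": "Active"}]
print(recover_namespaces(store))  # 0 -> nothing to recover; the log is just noise
```

When nothing is stuck in 'Activating', the job finds zero namespaces to recover, matching the "noise" log entries above.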
You can see this in the string resources of the dll.
I hope this helps someone in case they think something is broken.
Thanks! This was indeed helpful! | https://blogs.msdn.microsoft.com/softwaresimian/2014/08/14/event-id-65203-general-service-bus-resource-provider-information-namespace-recovery-completed-trying-to-recover/ | CC-MAIN-2018-34 | refinedweb | 218 | 75.5 |
bq. the original commit is still there, so nothing is really fixed.
Though 3 commits didn't look good, the contributor's identity would be
retrievable when looking at git history.
Consider this:
$ git blame hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java | grep 'se.classification.InterfaceAud'
552400e5 (tedyu 2016-09-12 12:16:26 -0700 60) import org.apache.hadoop.hbase.classification.InterfaceAudience;
552400e5 would lead to:
HBASE-16491 A few org.apache.hadoop.hbase.rsgroup classes missing
@InterfaceAudience annotation (Umesh Agashe)
The downside with force push is that everyone who had the corresponding
branch checked out needs to re-check out.
This morning I moved a few pending patches out of old work space for master
branch (not to lose them when I do rm -rf in the future) and cloned master
branch again.
My two cents.
On Tue, Sep 13, 2016 at 10:00 AM, Gary Helmling <ghelmling@gmail.com> wrote:
> >
> > Please never force push to any non-feature branch. I thought we had
> > protections to stop force pushes on master and
> > have filed INFRA-12602 to get them in place.
> >
> >
> Yeah, I shouldn't have done it. But I think the protection against force
> pushes only applies to the "rel/*" namespace.
>
>
> > To fix erroneous commit messages, please revert the offending commits
> > and then reapply them with a correct commit message.
> >
> >
> Honestly, I don't see the point of this. In this case the original commit
> is still there, so nothing is really fixed. Instead we wind up with 3
> commits muddying up the change history for the affected files.
>
> I would much rather preserve a clean change history at the cost of a few
> bad commit messages. I don't think it's really that big a deal.
> | http://mail-archives.apache.org/mod_mbox/hbase-dev/201609.mbox/%3CCALte62xOR3fkoE5tXRQpp5K4UAyyg0RWon813ky_7_=FQ9MTAw@mail.gmail.com%3E | CC-MAIN-2017-47 | refinedweb | 296 | 66.84 |
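For reference, the revert-and-reapply approach quoted above can be sketched in a throwaway repository. Everything here is illustrative — the directory, file name and commit messages are made up, not real HBase commits:

```shell
# Demonstration in a scratch repo; file name and messages are hypothetical.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.name demo
git config user.email demo@example.com
echo "change" > Fix.java
git add Fix.java
git commit -q -m "HBASE-XXXXX fix with wrong commit message"   # the offending commit
git revert --no-edit HEAD                                       # step 1: revert it
echo "change" > Fix.java
git add Fix.java
git commit -q -m "HBASE-XXXXX fix (correct message/attribution)"  # step 2: reapply
git log --oneline   # history now carries all three commits
```

As the git blame example earlier in the thread shows, the original hash stays reachable either way; the trade-off discussed above is exactly this extra noise in the history versus rewriting it with a force push.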
Talk:Tag:highway=path
Contents
- 1 Using path in unusual cases
- 2 path common usage
- 3 key description evolution
- 4 Rationale
- 5 General Discussions
Using path in unusual cases
Paths versus fixed rope routes and climbing routes
In my view, there should be a clear distinction between a path (accessible to anyone with a reasonable fitness) and things like fixed rope routes and mountain climbing routes that require mountaineering skills and safety equipment. As pointed out here, all are currently often tagged and thus rendered as hiking paths or footways. Either one is clearly inappropriate. I am aware that a combination of highway=path, trail_visibility=... and sac_scale=... would do the job, but I believe that there should be a more strict separation between paths and (climbing) routes. A route up the Matterhorn is simply by no means a path. Any suggestions? Ukuester 08:28, 6 May 2009 (UTC)
- sport=mountaineering? sport=rock_climbing? On the other hand, just because it's at high altitude or across a glacier doesn't mean it's not a path... (meaning that many mountain routes may indeed be paths). I would feel comfortable tagging anything sac_scale=T4 and below as a path. sac_scale=T5 is probably pushing the limits of a path though.
Fixed rope route (Klettersteig)
I realized that numerous fixed rope routes (e.g., the famous Jubiläumsgrat heading east from Zugspitze) are currently tagged as highway=footway, sometimes with additional sac_scale classification, sometimes not. I think the tagging is fine in principle; nevertheless, this is quite problematic since the current renderers render them as if they were ordinary hiking/walking ways. Compare for instance the level walking way around the Eibsee with the strenuous and demanding fixed rope routes around the Zugspitze. They are rendered exactly alike. This should be changed, but who should be approached in this respect? Ukuester 17:02, 2 May 2009 (UTC)
- Not just the Zugspitze: the Hörnligrat of the Matterhorn is also shown as a path. It's also graded according to the SAC scale (albeit with a FIXME), which is completely inaccurate: it's a rock climb at TD/ZS on the standard scale, and also not a path. There are therefore issues both with tagging and rendering. SK53 19:16, 2 May 2009 (UTC)
- Why not extend the scale? We just need to add sac_scale=climbing or sac_scale=via_ferrata wikipedia:Via_ferrata. Basically saying this is a thing beyond sac_scale. The only bad thing about this is that it makes the key name "sac_scale" a bit inappropriate. However, all of this stretches the meaning of highway=path or footway. To prevent rendering, one would need a generic-specific highway type like highway=sportway. This would prevent rendering, but making mappers aware of the new type would be a challenge. Katzlbt
- I just found a new tag "via_ferrata_scale=3C" Innsbrucker Klettersteig
- Hi, this problem should imhao be discussed on the highway=path page. Doesn't anyone feel hurt if I move this discussion there ?. I totally agree this is quite a problem with path, but not only does it concerns via ferrata, but any activity which is one special sport path. I've allready commented some time ago on the very same problem about cross country skiing. A via_ferrata path, a cross country skiing path, a climbing path have in common that hikers might be unable to use them. (I said "unable" to differ from "not allowed" as tags such as foot=no could have been used for that) sletuffe 12:06, 9 October 2009 (UTC)
- (discussion on the ski case moved down there) sletuffe 16:48, 30 November 2009 (UTC)
- Back to the first remark, In my opinion highway=path shouldn't be used for via_ferrata and climbing route. I know we could add something like via_ferrata_scale and climbing_scale and that could do the job to express it isn't a "common path" and special equipement is required, but when thinking at those who use the data, they'll need to constantally adapt the rendering/routing/any software as special options are added. The piste map project is working rather well and doesn't clutter with other uses because it created a different set of tags. I would propose we do the same and invent a :
- highway=climbing_path
- highway=via_ferrata
- + adding additional options such as via_ferrata_scale and climbing_scale (and defining them correctly in the wiki!)
sletuffe 17:03, 30 November 2009 (UTC)
Proposition to move out from path "snow only path"
Edit: After more English language research, what I wanted to refer to below was backcountry skiing; it doesn't change the problem, but I wanted to make it clear. sletuffe 16:01, 11 October 2009 (UTC)
We came across a problem with path when it is used in snow-covered-only conditions (I refer to cross-country skiing). The example on the page says a path for bicycles (...) and cross-country skiing could be tagged as: (...) ski=designated + bicycle=designated
Well, it works well when in winter it is used by skiers and in summer by mountain bikers, but what happens when the path only exists in winter for cross-country skiers?
- highway=path + ski=designated is wrong, as this is not a matter of a route having been specially designated (typically by a government). Use by hikers or horses is not forbidden; it is just that physically, in summer, there is no "path"
- highway=path only is not good either as there is still no summer path. The path only exists in snow conditions.
Interpretation of the data then becomes a problem, as there is no way to distinguish between a summer hiking path and a winter cross-country skiing path.
One solution would be to add some tweaks like only_exist_with_snow=yes / only_exist_without_snow=yes, but that would require much work in software using the data, since existing data is tagged without these.
My view is that the Proposed_features/Piste_Maps is good at dealing with snow related/only ways and we shouldn't try to cover everything within path.
I propose to explicitly remove snow related things from path (snowmobile and croos-country ski) and let something like :
- piste:type=cross_country
- piste:type=snowmobile
do the job.
In the case of a path really shared between skiers in winter and hikers in summer, we could do just as is done for piste:type=downhill when it's a track, and tag it like:
- highway=track + piste:type=downhill +...
In the path case, it would be :
- highway=path + sac_scale=* + piste:type=cross_country + piste:difficulty=bla
sletuffe 01:36, 12 September 2009 (UTC)
- If there is (in summer) "something's there, usable for navigation but for some reason you can't call it at least a footway", then it's a path with suitable tags to describe what the heck then travels there - skiers in this case. And if there's in winter a maintained cross country skiing piste, then it bloody well is ski=designated. Alv 08:31, 12 September 2009 (UTC)
- That is not how I understand designated (as I explain above), and referring to Tag:access=designated: This tag indicates that a route has been specially designated (typically by a government) for use by a particular mode (or modes) of transport. Which is not the case here. The mountain I'm referring to is free for anyone to travel in. So this path has not been specifically designated as a cross-country skiing trail. But physically it is nothing. That's why I think path would be better described in terms of "physical existence without snow" sletuffe 14:31, 12 September 2009 (UTC)
- Yes, I was too much thinking of a situation like the "first example picture on a snowmobile path". ski:nordic=yes. Since there is the skiing track in winter, there won't be impenetrable bushes, steep cliffs or the like, features that might exist nearby - You'd rather follow the skiing trail route if you were lost in the wilderness and there are no specific hiking trails nearby, even when there isn't any other physical indication of pedestrians users along the skiing track - if there are, it has the foot=yes. Alv 08:59, 13 September 2009 (UTC)
(moved here for easier reading)
- Cross-country skiing should be covered in the piste: namespace: piste:type=nordic (Piste Maps). No problem to move; just please keep a move notice and a link here, or the other way round. Thanks.
- If the piste is just bushes, scree or it's otherwise not apparently visible in the landscape when there isn't any snow, it's better without a highway=path; and the other way round, if there's something usable for transport or navigation in the summer, I'd tag it as a highway=path foot=? wheelchair=yes/no etc. Alv 08:15, 12 October 2009 (UTC)
- To give some feedback on this: what I meant by "cross country" seems to be called "ski touring" in English, and it was added to the piste map project [piste:type=skitour]. So there shouldn't be any need to use path with it (unless it also is a walking path in summer). sletuffe 16:49, 30 November 2009 (UTC)
path common usage
foot=yes ?
Can you give an example of a path where foot=no ? -- Pieren 21:39, 6 December 2008 (UTC)
- Sure. The first example on the example page, assuming foot traffic is not permitted across that rocky field. --Hawke 00:14, 7 December 2008 (UTC)
- This tag implies foot=yes because it's an exception if not. -- ck3d 16:40, 4 May 2009 (UTC)
- Without a tagged foot=yes it's foot=unknown - it can or could be assumed to be legal, but nothing is said about whether it is suitable for any pedestrian. Alv 16:51, 4 May 2009 (UTC)
key description evolution
Post vote modification
During the proposal of path (Proposed_features/Path), a clear sentence was available that has now disappeared from this page: highway=path is a generic path, either multi-use, or unspecified usage. The default access restriction of highway=path is "open to all non-motorized vehicles, but emergency vehicles are allowed". Has it been amended somehow? Sletuffe 10:09, 12 June 2009 (UTC)
guideline to path vs. track
The recently added clause "If a path is wide enough for four-wheel vehicles, it should at least be tagged as a highway=track if not explicitly designated to pedestrians, bicycles, horseriders, ..." is misleading, even more so with the different interpretations of the word "designated". There's plenty of ways for walking and cycling in urban parks, that do not have any signs indicating that they can't be used by cars (any signs at all, in fact), and which are tagged as cycleways or footways (or path if you really must in your country). Yet tagging them track would be wrong. Because of the controversy around highway=path I'd rather not edit it myself right away. I'd suggest something along the lines of
- "If a path is wide enough for four-wheel-vehicles, and it is not legally signposted or otherwise only allowed for pedestrians, cyclists or horseriders, it is often better tagged as a highway=track." Alv 20:13, 10 October 2009 (UTC)
- Also, both sentences have the same meaning; yours is much clearer in that it avoids using "designated" and insists on "legally", and thus avoids the controversy and its unclear (well, not for everyone) meaning. sletuffe 01:15, 11 October 2009 (UTC)
- I'd like to add that putting it right into the introduction of "path" might be a bit too prominent. I would prefer another chapter later on the page. sletuffe 01:17, 11 October 2009 (UTC)
Rationale
Benefit and abuse of the tag
In 2007, highway=path was proposed with the intention to replace highway=cycleway and highway=footway.
Fortunately, the deprecation of highway=cycleway and highway=footway was rejected. This way, highway=path has become a classification for narrow ways with a bad surface or vague designation.
Nevertheless, some mappers stick to the original idea and repeatedly destroy the more specific classifications, which are understood by renderers better than access-tags of a "path".
There are two unanswered questions:
- What benefit should the displacement of highway=cycleway and highway=footway have had for any user or program in 2007?
- What benefit for any user or program shall be provided today by the use of highway=path for well defined ways with good surfaces?
Example:
- If a cycleway is tagged as highway=path because it is also allowed for pedestrians, some renderers don't show it as a cycleway any more. Readers of such a map designed to show cycleways (with highway=cycleway) can't see whether cycling is forbidden or allowed, or what the surface is like.
- If every way that is designated/allowed for and appropriate for cycling is tagged highway=cycleway, pedestrians have no problem. They know that almost always there is either shared use or an adjacent or nearby footway; they can use it at least like they can use a road.
--Ulamm (talk) 14:07, 29 September 2014 (UTC) The problem has lost most of its virulency: Now the mapniks detect path/bicycle=designated, too.--Ulamm (talk) 17:23, 30 September 2014 (UTC)
General Discussions
Ski and snowmobile trails (disputed)
I propose to remove ski and snowmobile trails from the list of features mappable as highway=path. Both differ greatly from the other features mappable as path and deserve separate tags. Mateusz Konieczny (talk) 07:45, 14 July 2015 (UTC)
- see my reasoning in the top section of this page. In short : +1 sletuffe (talk) 10:18, 14 July 2015 (UTC)
- +1: Ski and snowmobile trails may be mapped as a route-relation --geow (talk) 20:19, 8 August 2015 (UTC)
- Edited Mateusz Konieczny (talk) 21:15, 28 July 2015 (UTC)
- -1 Ski and snowmobile trails are an existing item on the ground. It's no different than a bike path (except permissions to use are limited to skis/snowmobiles instead of bikes.) --Hawke (talk) 14:44, 23 September 2015 (UTC)
Removing link to country specific access restrictions
The tables in OSM_tags_for_routing/Access-Restrictions currently contain at least one value that contradicts the approved proposal and everyone's expectations. The link to the table was introduced relatively recently by this edit and in effect retroactively changed 7 years of mapping practice, changing the meaning of all previously mapped paths in Austria that lack explicit bicycle=yes/no tagging.
Such changes should not be made without broad consensus, preferably an approved proposal.
It may be okay to reintroduce the link if it is made sure that it won't reintroduce any new breakage; this would probably amount to removing fields such as foot, bicycle etc. for highway=path from the tables. RicoZ (talk) 11:30, 7 August 2015 (UTC)
Richard, is the contradiction you mentioned country-specific to Austria only? Then we should consider a specific reference for the Austrian exception and not withhold the valid information in OSM_tags_for_routing/Access-Restrictions from the rest of the globe.--geow (talk) 21:16, 8 August 2015 (UTC)
- I did not validate all 29 or so country-specific tables for other possible pitfalls. Instead I am wondering - what valid information is there in OSM_tags_for_routing/Access-Restrictions which is a useful refinement to path and does not contradict the approved definition - "open to all non-motorized vehicles"? Belgium and a few other countries add "moped=yes" - which might be a valid extension but I wonder what those "paths" really look like and if they have additional signs etc. Switzerland adds "inline_skates=yes".. for paths?? Hong Kong says for path "bicycle=yes" with a footnote "Bicycles must use a cycleway alongside if present" :)
- The mofa/moped values could be valid extensions - if it is verified that they are generally valid without additional signs etc. - but given the state of the "Access-Restrictions" table itself it might be better to list those in a table on the highway=path page.
- To explain, the "Access-Restrictions" table seems to be an early shot ("not yet fully drafted") and does not know about "permissive". What is worse, it seems people massively misunderstood what the table means - especially the "unless tagged otherwise" principle of a default value. Looking at the footnotes.. "Bicycles are often allowed on pedestrian roads, but this depends on the sign used", "Restrictions for bicycles, skateboard, and rollerblade usually posted at the entrances to such area", "Allowed only when additional sign "Sallittu mopoille" is present"," If the signs "footway" and "cycleway" are posted together, or if there is no warning "no foot", foots can use cycleways", " On pavements, it is illegal to cycle unless designated (hence not a default setting here)".
- The table - even when fixed - might be fine for official highways but rather useless for "OSM-defined" highways like track and path because those escape a simple legal definition. RicoZ (talk) 13:09, 9 August 2015 (UTC)
- Another problem I see is that the permissions will likely be different depending on the value of key:informal. In particular I have significant doubts that informal paths in Belgium allow mofas by default, for example. RicoZ (talk) 11:06, 13 August 2015 (UTC)
Link to "UK public rights of way"
UK public rights of way redirects to "United Kingdom Tagging Guidelines", which does not contain a single mention of "highway=path". The original page supposedly moved to UK access provisions, which again does not mention highway=path. So is there any reason to keep (restore) this link? RicoZ (talk)
Rendering
For those who are wondering why highway=path is now rendered as highway=footway, here are some pointers:
- issue #1698 stop displaying highway=footway and highway=path differently
- issue #1766 highway=path/footway should not be assumed to be paved by default
- pull #1713 unify path/footway styling, show surface for highway=path/footway/cycleway
PS Personally I hope this is not the final battle in the path controversy, the outcome is disappointing. Salmin (talk) 10:09, 24 August 2015 (UTC) | http://wiki.openstreetmap.org/wiki/Talk:Tag:highway%3Dpath | CC-MAIN-2017-17 | refinedweb | 3,016 | 55.17 |
October 15, 2013
Bioconductors:
We are pleased to announce Bioconductor 2.13, consisting of 749 software packages, 179 experiment data packages, and more than 690 up-to-date annotation packages.
There are 84 new software packages, and many updates and improvements to existing packages; Bioconductor 2.13 is compatible with R 3.0.2, and is supported on Linux, 32- and 64-bit Windows, and Mac OS X. This release includes an updated Bioconductor Amazon Machine Image.
Visit for details and downloads.
To update to or install Bioconductor 2.13:
Install R 3.0.2. Bioconductor 2.13 has been designed expressly for this version of R.
Follow the instructions at
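For readers who want a concrete starting point, the two steps above boiled down to the standard biocLite() workflow of that era; the following is a sketch (assuming the biocLite.R installer script location used at the time of this release):

```r
## Run inside an R 3.0.2 session; biocLite.R was the
## Bioconductor installer script of the 2.13 era.
source("http://bioconductor.org/biocLite.R")

## Fresh install of the core Bioconductor packages:
biocLite()

## Install a specific package from this release, e.g. BiocParallel:
biocLite("BiocParallel")

## Upgrade an existing, older installation:
biocLite("BiocUpgrade")
```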
There are 84 new packages in this release of Bioconductor.
AllelicImbalance: Provides a framework for allelic specific expression investigation using RNA-seq data
ampliQueso: The package provides tools and reports for the analysis of amplicon sequencing panels, such as AmpliSeq
ArrayTV: Wave correction for genotyping and copy number arrays
ASSET: An R package for subset-based analysis of heterogeneous traits and subtypes
BADER: For RNA sequencing count data, BADER fits a Bayesian hierarchical model. The algorithm returns the posterior probability of differential expression for each gene between two groups A and B. The joint posterior distribution of the variables in the model can be returned in the form of posterior samples, which can be used for further down-stream analyses such as gene set enrichment.
BAGS: R package providing functions to perform geneset significance analysis over simple cross-sectional data between 2 and 5 phenotypes of interest.
BiGGR: This package provides an interface to simulate metabolic reconstruction from the BiGG database() and other metabolic reconstruction databases. The package aids in performing flux balance analysis (FBA). Metabolic networks and estimated fluxes can be visualized using hypergraphs.
bioassayR: bioassayR provides tools for statistical analysis of small molecule bioactivity data
BiocParallel: This package provides modified versions and novel implementation of functions for parallel evaluation, tailored to use with Bioconductor objects.
BiocStyle: Provides standard formatting styles for Bioconductor documents. The vignette illustrates use and functionality.
BiRewire:.
CexoR: Strand specific peak-pair calling in ChIP-exo replicates. The cumulative Skellam distribution function (package ‘skellam’) is used to detect significant normalized count differences of opposed sign at each DNA strand (peak-pairs). Irreproducible discovery rate for overlapping peak-pairs across biological replicates is estimated using the package ‘idr’.
ChAMP: The package includes quality control metrics, a selection of normalization methods and novel methods to identify differentially methylated regions and to highlight copy number aberrations.
ChemmineOB: ChemmineOB aims to make a subset of OpenBabel utilities available from within R. For non-developers, ChemmineOB is primarily intended to be used from ChemmineR as an add-on package rather than used directly.
chipenrich: ChIP-Enrich performs gene set enrichment testing using peaks called from a ChIP-seq experiment. The method empirically corrects for confounding factors such as the length of genes, and the mappability of the sequence surrounding genes.
cleanUpdTSeq:.
cleaver: In-silico cleavage of polypeptide sequences. The cleavage rules are taken from:
clonotypeR:.
CNVrd2: CNVrd2 uses next-generation sequencing data to measure human gene copy number for multiple samples, identify SNPs tagging copy number variants and detect copy number polymorphic genomic regions.
cobindR: Finding and analysing co-occuring motifs of transcription factor binding sites in groups of genes
CSSP: Power computation for ChIP-Seq data based on Bayesian estimation for local poisson counting process.
customProDB: Generate customized protein sequence database from RNA-Seq data for proteomics search
dagLogo: Visualize significant conserved amino acid sequence pattern in groups based on probability theory
DNaseR: Strand-specific digital genomic footprinting in DNase-seq data. The cumulative Skellam distribution function (package ‘skellam’) is used to detect significant normalized count differences of opposed sign at each DNA strand. This is done in order to determine the protein-binding footprint flanks. Preprocessing of the mapped reads is recommended before running DNaseR (e.g., quality checking and removal of sequence-specific bias).
EBSeq: Differential Expression analysis at both gene and isoform level using RNA-seq data
epivizr: SummarizedExperiment objects), while providing an easy mechanism to support other data structures. Visualizations (using d3.js) can be easily added to the web app as well.
exomePeak: Contact jia.meng@hotmail.com if you have any questions.
FGNet: Build and visualize functional gene networks from clustering of enrichment analyses in multiple annotation spaces. The package includes an interface to perform the analysis through David and GeneTerm Linker.
flipflop: Flipflop discovers which isoforms of a gene are expressed in a given sample together with their abundances, based on RNA-Seq read data.
flowBeads: This package extends flowCore to provide functionality specific to bead data. One of the goals of this package is to automate analysis of bead data for the purpose of normalisation.
flowFit:.
flowMap: This package provides an algorithm to compare and match cell populations across multiple flow cytometry samples. The method is based on the Friedman-Rafsky test, a nonparametric multivariate statistical test, where two cell distributions match if they occupy a similar feature space. The algorithm allows the users to specify a reference sample for comparison or to construct a reference sample from the available data. The output of the algorithm is a set of text files where the cell population labels are replaced by a metaset of population labels, generated from the matching process.
GOSim: This package implements several functions useful for computing similarities between GO terms and gene products based on their GO annotation. Moreover it allows for computing a GO enrichment analysis
h5vc: This package contains functions to interact with tally data from NGS experiments that is stored in HDF5 files. For detail see the webpage at.
intansv: This package provides efficient tools to read and integrate structural variations predicted by popular softwares. Annotation and visulation of structural variations are also implemented in the package.
interactiveDisplay: The interactiveDisplay package contains the methods needed to generate interactive Shiny based display methods for Bioconductor objects.
maPredictDSC: This package implements the classification pipeline of the best overall team (Team221) in the IMPROVER Diagnostic Signature Challenge. Additional functionality is added to compare 27 combinations of data preprocessing, feature selection and classifier types.
metaSeq: The probabilities by one-sided NOISeq are combined by Fisher’s method or Stouffer’s method
methylMnM: To give the exact p-value and q-value of MeDIP-seq and MRE-seq data for comparison of different samples.
mitoODE: The package contains the methods to fit a cell-cycle model on cell count data and the code to reproduce the results shown in the paper “Dynamical modelling of phenotypes in a genome-wide RNAi live-cell imaging assay” (submitted).
msmsEDA: Exploratory data analysis to assess the quality of a set of LC-MS/MS experiments, and visualize the influence of the involved factors.
msmsTests:.
MSstats: A set of tools for protein significance analysis in label-free or LC-MS, SRM and DIA experiments.
mzID: A parser for mzIdentML files implemented using the XML package. The parser tries to be general and able to handle all types of mzIdentML files, with the drawback of having less ‘pretty’ output than a vendor-specific parser. Please contact the maintainer with any problems and supply an mzIdentML file so the problems can be fixed quickly.
neaGUI:.
NetSAM:.
omicade4: Multiple co-inertia analysis of omics datasets
OmicCircos: OmicCircos is an R application and package for generating high-quality circular maps for omic data
openCyto: This package is designed to facilitate the automated gating methods in sequential way to mimic the manual gating strategy.
paircompv.
pathifier:.
plethy: This package provides the infrastructure and tools to import, query and perform basic analysis of whole body plethysmography and metabolism data. Currently support is limited to data derived from Buxco respirometry instruments as exported by their FinePointe software.
ProCoNA: Protein co-expression network construction using peptide level data, with statisical analysis. (Journal of Clinical Bioinformatics 2013, 3:11 doi:10.1186/2043-9113-3-11)
prot2D: The purpose of this package is to analyze (i.e. Normalize and select significant spots) data issued from 2D GEl experiments
PSICQUIC: PSICQUIC is a project within the HUPO Proteomics Standard Initiative (HUPO-PSI). It standardises programmatic access to molecular interaction databases.
qcmetrics: The package provides a framework for generic quality control of data. It permits to create, manage and visualise individual or sets of quality control metrics and generate quality control reports in various formats.
qusage:)
Rchemcpp: The Rchemcpp package implements the marginalized graph kernel and extensions, Tanimoto kernels, graph kernels, pharmacophore and 3D kernels suggested for measuring the similarity of molecules.
RDAVIDWebService:
rfPred:).
Roleswitch:.
RRHO: The package is aimed at inference on the amount of agreement in two sorted lists using the Rank-Rank Hypergeometric Overlap test.
RTN: This package provides classes and methods for transcriptional network inference and analysis. Modulators of transcription factor activity are assessed by conditional mutual information, and master regulators are mapped to phenotypes using different strategies, e.g., gene set enrichment, shadow and synergy analyses.
rTRM: rTRM identifies transcriptional regulatory modules (TRMs) from protein-protein interaction networks.
rTRMui: This package provides a web interface to compute transcriptional regulatory modules with rTRM.
seqCNA: Copy number analysis of high-throughput sequencing cancer data with fast summarization, extensive filtering and improved normalization
SeqVarTools: An interface to the fast-access storage format for VCF data provided in SeqArray, with tools for common operations and analysis.
shinyTANDEM: This package provides a GUI interface for rTANDEM. The GUI is primarily designed to visualize rTANDEM result object or result xml files. But it also provides an interface for creating parameter objects, launching searches or performing conversions between R objects and xml files.
SigFuge: Algorithm for testing significance of clustering in RNA-seq data.
SimBindProfiles:.
SpacePAC: Identifies clustering of somatic mutations in proteins via a simulation approach while considering the protein’s tertiary structure.
spliceR: An R package for classification of alternative splicing and prediction of coding potential from RNA-seq data.
spliceSites: Align gap positions from RNA-seq data
sRAP:.
sSeq:.
STRINGdb: The STRINGdb package provides a user-friendly interface to the STRING protein-protein interactions database ( ).
supraHex:.
SwimR:.
TargetScore:.
T other sophisticated packages (especially edgeR, DESeq, and baySeq).
TFBSTools: Software package for TFBS.
tRanslatome: Detection of differentially expressed genes (DEGs) from the comparison of two biological conditions (treated vs. untreated, diseased vs. normal, mutant vs. wild-type) among different levels of gene expression (transcriptome, translatome, proteome), using several statistical methods: Rank Product, t-test, SAM.
trio:.
vtpnet: variant-transcription factor-phenotype networks, inspired by Maurano et al., Science (2012), PMID 22955828
XVector: Memory efficient S4 classes for storing sequences “externally” (behind an R external pointer, or on disk).
Package maintainers can add NEWS files describing changes to their packages. The following package NEWS is available:
Changes in version 2.1.6 (2013-09-23):
Started testing against R beta 3.0.2. Fixed Imports and Depends.
chromData: using “short”, not “ushort”, to catch more user errors.
pSegmentGLAD: using a custom daglad that minimizes unneeded calls, specially sorting, that can be very expensive.
Added “certain_noNA” to segmentation methods.
Started testing against R-devel (to become R-3.1.0).
Changes in version 2.1.5 (2013-09-16):
Fixed typos.
Minimized usage of “:::”, removed unused functions for Ansari, and some assignments that no longer made sense (all packages now have namespaces).
Minimize “Depends” and use “Suggests” and “Imports” in DESCRIPTION with “importFrom” in NAMESPACE.
No longer using our own mergeLevels, since identical to the ones in aCGH package.
GLAD uses now the recommended fastest option (smoothfunc=haarseg).
Changes in version 2.1.4 (2013-07-01):
Changes in version 2.1.3 (2013-06-20):
Default merging of pSegmentDNAcopy changed to “MAD”, to reflect our usage.
Added more benchmarking results and recommendations to the vignette, and fixed some typos.
Changes in version 2.1.2 (2013-06-17):
More changes in cutFile to try and get it to run under Mac.
Fixed names in long examples that were leading to mistakenly reporting results as different.
Added new benchmarking results.
Changes in version 2.1.1 (2013-06-16):
Many small changes and adaptations in vignette and help to get it to work under Windows and Mac.
Changes in cutFile to try and get it to run under Mac.
Changes in version 2.1.0 (2013-05-30):
This is a major rewrite of a most of the code, has new functions, major changes in existing functions, new vignettes, etc.
We no longer use snowfall.
Major changes in parallelization, using forking.
Reading of data: many more options, parallelized reading.
Changes in version 1.34.0 (2012-10-14):
Changes in version 1.33.4 (2013-09-23):
Changes in version 1.33.3 (2013-06-29):
Changes in version 1.33.2 (2013-05-25):
Changes in version 1.33.1 (2013-05-20):
Changes in version 1.33.0 (2013-04-03):
The version number was bumped for the Bioconductor devel version.
No updates.
Changes in version 1.32.3 (2013-06-29):
BUG FIX: Since affxparser v1.30.2/1.31.2 (r72352; 2013-01-08), writeCdf() would incorrectly encode the unit types, iff the input ‘cdf’ argument specified them as integers, e.g. as done by writeCdf() for AffyGenePDInfo in aroma.affymetrix. More specifically, the unit type index would be off by one, e.g. an ‘expression’ unit (1) would be encoded as an ‘unknown’ unit (0) and so on. On the other hand, if they were specified by their unit-type names (e.g. ‘expression’) the encoding should still be correct, e.g. if input is constructed from readCdf() of affxparser. Thanks to Guido Hooiveld at Wageningen UR (The Netherlands) for reporting on this.
BUG FIX: Similarily, writeCdf() has “always”, at least affxparser v1.7.4 since (r21888; 2007-01-09), encoded unit directions and QC unit types incorrectly, iff they were specified as integers.
Changes in version 1.32.2 (2013-05-25):
SPEEDUP: Removed all remaining gc() calls.
SPEEDUP: Replaced all rm() calls with NULL assignments.
Changes in version 1.32.1 (2013-05-20):
Changes in version 1.2.1:
Changes in version 1.2.0:
NEW FEATURES
BUG FIXES
Changes in version 1.32.0 (2012-10-14):
Changes in version 1.31.10 (2013-10-08):
Added averageQuantile() for matrices in addition to lists.
SPEEDUP: Now normalizeQuantileSpline(…, sortTarget=TRUE) sorts the target only once for lists of vectors just as done for matrices.
DOCUMENTATION: Merged the documentation for normalizeQuantileSpline() for all data types into one help page. Same for plotXYCurve().
BUG FIX: Argument ‘lwd’ of plotXYCurve(X, …) was ignored if ‘X’ was a matrix.
Bumped up package dependencies.
Changes in version 1.31.9 (2013-10-07):
Now library(aroma.light, quietly=TRUE) attaches the package completely silently without any messages.
Now the ‘aroma.light’ Package object is also available when the package is only loaded (but not attached).
DOCUMENTATION: Merged the documentation for normalizeQuantileRank() for numeric vectors and lists.
DOCUMENTATION: Now documenting S3 methods under their corresponding generic function.
Changes in version 1.31.8 (2013-10-02):
Changes in version 1.31.7 (2013-09-27):
SPEEDUP: Now all package functions utilizes ‘matrixStats’ functions where possible, e.g. anyMissing(), colMins(), and rowWeightedMedians().
Bumped up package dependencies.
Changes in version 1.31.6 (2013-09-25):
Changes in version 1.31.5 (2013-09-23):
ROBUSTNESS: Now properly declaring all S3 methods in the NAMESPACE file.
SPEEDUP/CLEANUP: normalizeTumorBoost() now uses which() instead of whichVector() of ‘R.utils’. Before R (< 2.11.0), which() used to be 10x slower than whichVector(), but now it’s 3x faster.
CLEANUP: Now only using ‘Authors@R’ in DESCRIPTION, which is possible since R (>= 2.14.0). Hence the new requirement on the version of R.
Bumped up package dependencies.
Changes in version 1.31.4 (2013-09-10):
CLEANUP: Now package explicitly imports what it needs from matrixStats.
Bumped up package dependencies.
Changes in version 1.31.3 (2013-05-25):
SPEEDUP: Removed all remaining gc() calls, which were in normalizeQuantileSpline().
SPEEDUP: Replaced all rm() calls with NULL assignments.
Updated the package dependencies.
Changes in version 1.31.2 (2013-05-20):
Changes in version 1.31.1 (2011-04-18):
Changes in version 1.31.0 (2013-04-03):
Changes in version 1.30.5 (2013-09-25):
Backport from v1.31.5: Declaring all S3 methods in NAMESPACE.
Backport from v1.31.5: normalizeTumorBoost() now uses which(), which also removes one dependency on ‘R.utils’.
Changes in version 1.30.4 (2013-09-25):
Changes in version 1.30.3 (2013-09-25):
Backport from v1.31.3: Removal of all gc() calls and removal of variables is now faster.
Removed one stray str() debug output in robustSmoothSpline().
Changes in version 1.30.2 (2013-05-20):
Changes in version 1.30.1 (2013-04-18):
Now backtransformPrincipalCurve() preserves dimension names.
BUG FIX: backtransformPrincipalCurve() gave an error if the principal curve was fitted using data with missing values.
BUG FIX: fitPrincipalCurve() would not preserve dimension names if data contain missing values.
Changes in version 1.0:
USER VISIBLE CHANGES
BUG FIXES
PLANS
Changes in version 2.1:
Changes in version 1.1:
NEW FEATURES
New high level function performOAuth() makes the App authentication easier.
initializeAuth() now fires up a browser window and starts the OAuth v2 process. Users can use ‘useBrowser’ parameter to control this behaviour.
SIGNIFICANT USER-VISIBLE CHANGES
BUG FIXES
Added end of line character to some messages.
fileItem$Size is now stored as a numeric, thus allowing for files larger than 2GB.
Changes in version 0.99.3:
1.3.1: # Thu 06-20-2013 - 11:29
1.4.0: # Wed 06-26-2013 - 19:56
1.4.1:
Changes in version 1.0.0 (2013-10-15):
NEW FEATURES
Changes in version 2.21:
USER VISIBLE CHANGES
USER VISIBLE CHANGES
Changes in version 0.99.0:
SIGNIFICANT USER-VISIBLE CHANGES
BPPARM is now used as the argument name for passing BiocParallelParam instances to functions.
bplapply and bpvec now only dispatch on X and BPPARAM.
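As a sketch of what that interface change means in practice (SerialParam and SnowParam are two of the BiocParallelParam backends; assume BiocParallel is installed):

```r
library(BiocParallel)

## bplapply() now dispatches only on X and BPPARAM; the parallel
## backend is passed explicitly as a BiocParallelParam object:
param <- SnowParam(workers = 2)        # or MulticoreParam(), SerialParam()
res <- bplapply(1:4, sqrt, BPPARAM = param)

## bpvec() follows the same convention:
res2 <- bpvec(1:10, sqrt, BPPARAM = SerialParam())
```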
Changes in version 1.0.0:
USER VISIBLE CHANGES
BUG FIXES
Changes in version 1.1.9:
BUG FIXES
BUG FIXES
Changes in version 1.1.8:
NEW FEATURES
BUG FIXES
NEW FEATURES
BUG FIXES
Changes in version 1.1.7:
NEW FEATURES
NEW FEATURES
Changes in version 1.1.6:
BUG FIXES
Changes in version 1.1.5:
NEW FEATURES
BUG FIXES
Changes in version 1.1.4:
NEW FEATURES
update biomvRGviz to plot multiple samples in one plot, and added more parameters
plot method now defaults to multiple samples in one plot, sampleInOne=T
BUG FIXES
Changes in version 1.1.3:
NEW FEATURES
NEW FEATURES
Changes in version 1.1.2:
NEW FEATURES
add a new clustering method for emission prior estimation
add a new example section of DMR detection in the vignette
BUG FIXES
Changes in version 1.1.1:
NEW FEATURES
BUG FIXES
Changes in version 1.5.5 (2013-10-12):
BUG FIXES
Changes in version 1.5.2 (2013-10-09):
BUG FIXES
fixed force option in estimateHyperPar (it worked the wrong way around)
fixed problem with effective lengths being too small
NEW FEATURES
Changes in version 1.4.3 (2013-07-08):
BUG FIXES
NEW FEATURES
BUG FIXES
NEW FEATURES
BUG FIXES
NEW FEATURES
Changes in version 0.9:
Fixed a problem with “width” in the title of bsseq plots.
plot.BSseqTstat now allows for BSseqTstat objects computed without correction.
validObject(BSseq) has been extended to also check for sampleNames consistency.
Fixed a bug related to validity checking.
Increased maxk from 10,000 to 50,000 in calls to locfit, to allow fitting the model on genomes with unusually long chromosomes (Thanks to Brian Herb for reporting).
The class representation for class ‘BSseq’ has changed completely. The new class is build on ‘SummarizedExperiment’ from GenomicRanges instead of ‘hasGRanges’. Use ‘x <- updateObject(x)’ to update serialized (saved) objects.
Fixing a problem in orderBSseq related to chromosome names.
Allowed user specification of maxk, with a default of 10,000 in BSmooth.
Many bugfixes made necessary by the new class representation.
Better argument checking in BSmooth.tstat.
A few undocumented functions are now documented.
Rewrote orderBSseq
Changes in version 1.1:
Added NEWS file.
Fixed a bug related to >= for numerics.
Added smoothing using a gaussian kernal as implemented in the locfit package through the function locfitByCluster.
Added closeSockets for cleanup for doParallel on Windows.
More bugfixes for windows; now using foreachCleanup().
Added a ‘bumps’ class and print method.
annotateNearest / regionMatch now give NA annotations for queries with no nearest subject (perhaps because the seqname is missing from the subject). Previously this was taken to be mistaken input and a hard error raised.
Speedup of fwer computations using foreach.
Added boundedClusterMaker.
bug fix to internal function .getModT (which is not used in the main bumphunter functions). Now the t-statistics returned are correct.
Changes in version 2012-06-12:
Changes in version 2012-06-11:
Version 1.13.4
Add ByteCompile: TRUE
Bugfix for findIsotopes, clears a subscript-out-of-bounds error
Changes in version 2012-05-25:
Version 1.13.2
First changes for improved isotope detection
Changes in version 2012-04-12:
Version 1.13.1
Bugfix for findIsotopes to fix non-consecutive isotope labels like [M]+,[M+2]+ without [M+1]+:
1.5.1:
Changes in version 1.8.0 (2013-08-28):
BUG FIX
readMIDAS: DV, DA and TR can now be in the species name
prep4sim can read self loops
plotOptimResultsPan: fix special cases with one or no inhibitors/stimuli
fix bug in cutNONC: notMat was not populated
readSIF: can read AND gates coded in upper case as well as lower case
writeMIDAS: manages absence of inhibitors
CHANGES
CNOlist: subfield parameter has been removed. Subfield are automatically found from the header
expandAndGates are not limited to 4 inputs anymore
normaliseCNOlist: EC50 is set to 1 by default
NEW FEATURES
readSBMLQual: a function to read prior knowledge network in SBMLqual format. !! This is a prototype. use with care for now.
cutCNOlist function can cut a MIDAS file over time
Changes in version 1.0.0:
NEW FEATURES
Changes in version 1.3.15:
NEW FEATURES
Changes in version 1.3.12:
NEW FEATURES
Changes in version 1.3.9:
NEW FEATURES
Changes in version 1.3.3:
NEW FEATURES
Changes in version 1.0:
PKG FEATURES
Changes in version 0.99.2:
USER-VISIBLE CHANGES
Updated examples for various functions to be runnable (removed donttest)
Updated DESCRIPTION to use Imports: rather than Depends:
Updated license to GPL-3
Updated NEWS file for bioconductor guidelines
BUG FIXES
Changes in version 0.99.1:
USER-VISIBLE CHANGES
Changes in version 0.99.0:
NEW FEATURES
Changes in version 0.9.6:
NEW FEATURES
Changes in version 0.9.5:
BUG FIXES
USER-VISIBLE CHANGES
Changes in version 0.9.4:
USER-VISIBLE CHANGES
Changes in version 0.9.3:
NEW FEATURES
BUG FIXES
Changes in version 0.9.2:
NEW FEATURES
Added ability for user to input their own locus definition file (pass the full path to a file as the locusdef argument)
Added a data frame to the results object that gives the arguments/values passed to chipenrich, also written to file *_opts.tab
For FET and chipenrich methods, the outcome variable can be recoded to be >= 1 peak, 2 peaks, 3 peaks, etc. using the num_peak_threshold parameter
Added a parameter to set the maximum size of gene set that should be tested (defaults to 2000)
USER-VISIBLE CHANGES
Previously only peak midpoints were given in the peak –> gene assignments file, now the original peak start/ends are also given
Updated help/man with new parameters and more information about the results
BUG FIXES
Changes in version 0.9.1:
NEW FEATURES
Changes in version 0.9:
NEW FEATURES
BUG FIXES
Changes in version 0.8:
BUG FIXES
Changes in version 0.7:
USER-VISIBLE CHANGES
Updated binomial test to sum gene locus lengths to get genome length and remove genes that are not present in the set of genes being tested
Updated spline fit plot to take into account mappability if requested (log mappable locus length plotted instead of simply log locus length)
Removed SAMPLEABLE_GENOME* constants since they are no longer needed
Updated help files to reflect changes to plot_spline_length and chipenrich functions
BUG FIXES
Changes in version 0.6:
BUG FIXES
Changes in version 0.5:
USER-VISIBLE CHANGES
Changes in version 2.9.6:
BUG FIXES
Changes in version 2.9.5:
BUG FIXES
Changes in version 1.1.6:
Changes in version 1.1.5:
Changes in version 1.1.4:
Changes in version 1.1.3:
Changes in version 1.1.2:
Changes in version 1.1.1:
Changes in version 0.99.5 (2013-07-24):
vignette:
remove duplicated sessionInfo entries.
Changes in version 0.99.4 (2013-07-23):
vignette:
use BiocStyle.
add sessionInfo() and TOC.
Changes in version 0.99.3 (2013-07-08):
Changes in version 0.99.2 (2013-06-17):
Replace own AAStringSetList constructor by Biostrings::AAStringSetList.
man/cleaver-package.Rd: remove static date.
vignette: add dynamic date and don’t load Biostrings manually anymore.
NAMESPACE: don’t import from Biostrings and IRanges.
Changes in version 0.99.1 (2013-05-30):
Add S4-methods for character, AAString, AAStringSet.
man/cleave-methods.Rd: split table of cleavage rules to reduce table width.
Extend vignette (add BRAIN and UniProt.ws based examples).
Changes in version 0.99.0 (2013-04-27):
Changes in version 2013-10-07:
Commited 0.99.6 to Bioconductor.
Added unit tests for yassai_identifier().
Changes in version 2013-08-01:
Changes in version 2013-05-07:
Resubmitted 0.99.5 to Bioconductor.
Distribute a copy of the clonotypes extracted from example_data.
Execute all examples in the vignette.
Changes in version 2013-05-01:
Resubmitted 0.99.4 to Bioconductor
Moved the Markdown vignette to ‘/vignettes’.
Added executable example with test data to the vignette.
Changes in version 2013-04-26:
Resubmitted 0.99.3 to Bioconductor
Moved extra data to ‘inst/extdata’.
Moved the Markdown vignette to ‘inst/vignettes’.
Changes in version 2013-04-15:
Changes in version 2013-01-08:
Leaner Bioconductor package, without the wiki documentation. 2012-….. Charles Plessy plessy@riken.jp
Many entries missing.
Changes in version 2012-10-10:
Changes in version 2012-09-06:
Changes in version 1.9.4:
Changes in version 1.9.3:
extend enrichGO to support 20 species <2013-07-09, Mon>
update vignettes. <2013-07-09, Mon>
Changes in version 1.9.1:
enrichGO and enrichKEGG support rat organism <2013-05-20, Mon>
change some code according to DOSE <2013-03-27, Wed>
modify enrichGO and enrichKEGG according to the change of enrich.internal, remove qvalueCutoff parameter, add pAdjustMethod, add universe parameter for users to specify the background. <2013-05-29, Wed>
add function viewKEGG for visualizing KEGG pathway and update vignette. <2013-06-14, Fri>
1.7.1:
Changes in version 1.4.0:
cSimulator handles non integers values for the ihibitors and stimuli
gaDiscreteT1.R: fix issue when only 1 model was returned within tolerance
reduceFuzzy.R fix bug that causes seg faut (model was not cut properly)
defaultParametersFuzzy.R: added nTF to set number of TF to arbitrary value (not tested)
CNORwrapFuzzy.R: fixed pMutation argument that was not populated
gaDiscrete functions now also return the best score in the dataframe.
names of the fields returned by gaDiscrete now use lower camel case so that plotFit from CellNoptR can be used
add C simulator
Changes in version 1.30.0:
The most visible change is that the CodelinkSet interface has been adopted as the official supported system. Documentation of this topic has been improved, and a new vignette describing the CodelinkSet system is available (Codelink_Introduction). Documentation of the old Codelink class has moved to the Codelink_Legacy vignette.
Previously, readCodelink() would assign NA values to spots flagged as M (MSR spot), I (Irregular) and C (Contaminated). In large datasets this could mean that many spots had at least one sample with NA at random, drastically reducing the number of de facto spots/probes used during normalization. Many thanks to Emmanuelle Dantony for spotting this problem and for subsequent feedback. Because of this and other problems in implementing an appropriate method to deal with it, automatic assignment of NA values is no longer performed. The only exception is M-flagged spots, which have intensity values of -9999 and hence do not represent any measure of intensity. Also, background correction and normalization methods are applied by calling the appropriate functions in the limma package. Support for type- and flag-based weights has been included, and weights are automatically created by readCodelinkSet(). Weights can be used to modulate the contribution of some probes to normalization and during linear modeling more efficiently. Examples of how to use these approaches are documented in the vignette Codelink_Introduction.
Added generic method normalize() for Codelink and CodelinkSet classes.
Changes in version 1.7
Bugfixes
Updates
Bugfixes
Changes in version 1.99.0 (2013-04-30):
Updates
Updates
Changes in version 1.7.4 (2013-09-28):
Updates
Updates
Changes in version 1.1.3:
corrected errors generated in denoiseSegments when segments are too small
refined plotting with data that has many subpopulations (>10)
more robust argument selection in functions
Changes in version 1.1.32:
By default, use QR decomposition on the design matrix X. This stabilizes the GLM fitting. Can be turned off with the useQR argument of nbinomWaldTest() and nbinomLRT().
Allow for “frozen” normalization of new samples using previous estimated parameters for the functions: estimateSizeFactors(), varianceStabilizingTransformation(), and rlogTransformation(). See manual pages for details and examples.
Changes in version 1.1.31:
Changes in version 1.1.24:
Changes in version 1.1.23:
Changes in version 1.1.21:
Changes in version 2013-09-12:
Major changes to vignette to provide more of an end-to-end description of the work flow.
Major changes to function names to now make TRT rather than BM the default; changed vignette to reflect this.
Added appendix explaining TRT to vignette.
Changes in version 2013-02-27:
Changes in version 2012-11-28:
Changes in version 2012-06-26:
Changes in version 2012-05-21:
Changes in version 2011-10-03:
Changes in version 2011-07-12:
Changes in version 2011-07-01:
Changes in version 1.8.0:
Add support for DESeq2:
New: Add DBA_DESEQ2, DBA_ALL_METHODS and DBA_ALL_BLOCK method constants
Change: dba.analyze can analyze using DESeq2
Change: all reporting and plotting functions support DESeq2 results
Change: vignette includes comparison of edgeR, DESeq, and DESeq2
Changes to counting using dba.count:
Change: optimize built-in counting code to use much less memory and run faster
Change: deprecate bLowMem, replaced with bUseSummarizeOverlaps
New: add readFormat parameter to specify read file type (instead of using file suffix)
New: generation of result-based DBA object using dba.report (makes it easier to work with differentially bound peaksets)
Changes to defaults:
Change: default score is now DBA_SCORE_TMM_MINUS_FULL instead of DBA_SCORE_TMM_MINUS_EFFECTIVE in dba.count
Change: default value for bFullLibrarySize is now TRUE in dba.analyze
New: add bCorPlot option to DBA$config to turn off CorPlot by default
Various bugfixes, improved warnings, updated documentation
Changes in version 1.99.6:
Changes in version 1.99.5:
Changes in version 1.99.4:
Changes in version 1.99.3:
extend EXTID2NAME to support 20 species <2013-07-09, Tue>
update vignette. <2013-07-09, Tue>
Changes in version 1.99.1:
Changes in version 1.99.0:
extend ggplot to support enrichResult by implementing fortify method. <2013-05-22, Wed>
re-implement barplot.enrichResult. <2013-05-23, Thu>
enrich.internal supports user-specified background via the universe parameter. <2013-05-24, Fri>
implement Gene Set Enrichment Analysis algorithm. <2013-05-29, Wed>
change setReadable to support groupGO of clusterProfiler. <2013-05-29, Wed>
fixed issue of mclapply not supporting the Windows platform. <2013-05-30, Fri>
rename logFC parameter to foldChange. <2013-06-13, Thu>
Changes in version 1.7.1:
use geom_bar(stat=”identity”) instead of geom_bar() in barplot for explicitly mapping y value. <2013-05-08, Wed>
bug fixed when qvalue can’t be calculated. <2013-05-02, Thu>
bug fixed in enrich.internal: drop genes without annotation when calculating sample gene number. <2013-05-02, Thu>
change some code to satisfy ReactomePA <2013-03-27, Wed>
Changes in version 1.7.0:
Changes in version 1.6.0:
Changes in version 4.4.0:
NEW FEATURES
New colorLabels function color-coding labels of object masks by a random permutation (Bernd Fischer)
Additional argument inputRange to normalize allowing presetting a limited input intensity range
Additional argument thick to paintObjects controlling the thickness of boundary contours
SIGNIFICANT USER-VISIBLE CHANGES
normalize and combine use the generics from BiocGenerics
removed the along argument from combine
Re-introduced calculation of ‘s.radius.sd’ (standard deviation of the mean radius) in cell features
BUG FIXES
Changes in version 3.3.8:
Changes in version 3.3.5:
Refinement to cutWithMinN() to make the bin numbers more equal in the worst case.
estimateDisp() now creates the design matrix correctly when the design matrix is not given as an argument and there is only one group. Previously this case gave an error.
Minor edit to glm.h code.
Changes in version 3.3.4:
Changes in version 3.3.3:
Changes in version 3.3.2:
Update to cutWithMinN() so that it does not fail even when there are many repeated x values.
Refinement to computation for nbins in dispBinTrend. Now changes more smoothly with the number of genes. trace argument is retired.
Fixes to calcNormFactors with method=”TMM” so that it takes account of lib.size and refCol if these are preset.
Updates to help pages for the data classes.
Changes in version 3.3.1:
Changes in version 1.2.0:
NEW FEATURES
NEW FEATURES
NEW FEATURES
NEW FEATURES
Changes in version 2.6.1:
NEW FEATURES
NEW FEATURES
Changes in version 1.0.0:
NEW FEATURES
Changes in version 2.0.0:
NEW FEATURES
NEW FEATURES
Changes in version 1.4.0:
NEW FEATURES
NEW FEATURES
Changes in version 2.11.3:
add secondary vignette, “RNA-Seq Data Pathway and Gene-set Analysis Workflows”.
add function kegg.gsets, which generates up-to-date KEGG pathway gene sets for any specified KEGG species.
Changes in version 1.5.4:
Changes in version 1.5.3:
Changes in version 1.5.2:
Changes in version 1.5.1:
Changes in version 1.1.7:
Changes in version 1.1.6:
Changes in version 1.1.2:
Changes in version 1.1.1:
Changes in version 1.1.1:
BUG FIXES
NEW FEATURES
keys method now has new arguments to allow for more sophisticated filtering.
adds genes() extractor
makeTranscriptDbFromGFF() now handles even more different kinds of GFF files.
BUG FIXES
better argument checking for makeTranscriptDbFromGFF()
cols arguments and methods will now be columns arguments and methods
Changes in version 1.14.0:
NEW FEATURES
Add coercion from GenomicRangesList to RangedDataList.
Add “c” method for GAlignmentsPairs objects.
Add coercion from GAlignmentPairs to GAlignmentsList.
Add ‘inter.feature’ and ‘fragment’ arguments to summarizeOverlaps().
Add seqselect,GAlignments-method.
Add CIGAR utilities: explodeCigarOps(), explodeCigarOpLengths(), cigarRangesAlongReferenceSpace(), cigarRangesAlongQuerySpace(), cigarRangesAlongPairwiseSpace(), extractAlignmentRangesOnReference(), cigarWidthAlongReferenceSpace(), cigarWidthAlongQuerySpace(), cigarWidthAlongPairwiseSpace().
Add seqlevels0() and restoreSeqlevels().
Add seqlevelsInUse() getter for GRanges, GRangesList, GAlignments, GAlignmentPairs, GAlignmentsList and SummarizedExperiment objects.
Add update,GAlignments method.
Add GIntervalTree class.
Add coercion from GAlignmentPairs to GAlignments.
Add sortSeqlevels().
Add tileGenome().
Add makeGRangesFromDataFrame() and coercion from data.frame or DataFrame to GRanges.
SIGNIFICANT USER-LEVEL CHANGES
Renaming (with aliases from old to new names):
classes: GappedAlignments -> GAlignments, GappedAlignmentPairs -> GAlignmentPairs
functions: GappedAlignments() -> GAlignments(), GappedAlignmentPairs() -> GAlignmentPairs(), readGappedAlignments() -> readGAlignments(), readGappedAlignmentPairs() -> readGAlignmentPairs()
Remove ‘asProperPairs’ argument to readGAlignmentsList().
Modify “show” method for Seqinfo object to honor showHeadLines and showTailLines global options.
50x speedup or more when merging 2 Seqinfo objects, 1 very small and 1 very big.
Add dependency on new XVector package.
Enhanced examples for renaming seqlevels in seqlevels-utils.Rd.
More efficient reference class constructor for ‘assays’ slot of SummarizedExperiment objects.
‘colData’ slot of SummarizedExperiment produced from call to summarizeOverlaps() now holds the class type and length of ‘reads’.
4x speedup to cigarToRleList().
Relax SummarizedExperiment class validity.
Renaming (with aliases from old to new names): cigarToWidth() -> cigarWidthOnReferenceSpace(), and cigarToQWidth() -> cigarWidthOnQuerySpace().
Improvements to summarizeOverlaps():
mode ‘Union’: 1.5x to 2x speedup
mode ‘IntersectionNotEmpty’: 2x to 8x speedup, plus memory footprint reduced by ~half
Change default ‘use.names’ to FALSE for readGAlignmentsList().
Implement ‘type=”equal”’ for findOverlaps,SummarizedExperiment methods.
Modify summarizeOverlaps() examples to use ‘asMates=TRUE’ instead of ‘obeyQname=TRUE’.
Remove unneeded “window” method for GenomicRanges objects.
Speed up seqinfo() getter and setter on SummarizedExperiment objects and derivatives (e.g. VCF) by using direct access to ‘rowData’ slot.
coverage,GenomicRanges method now uses .Ranges.coverage() when using the defaults for ‘shift’ and ‘width’.
Remove restriction that metadata column names must be different on a GRangesList and the unlisted GRanges.
GenomicRangesUseCases vignette has been redone and renamed to GenomicRangesHOWTOs
DEPRECATED AND DEFUNCT
Defunct all “match” and “%in%” methods in the package except for those with the GenomicRanges,GenomicRanges signature.
Deprecate GappedAlignment*:
GappedAlignments and GappedAlignmentPairs classes
GappedAlignments() and GappedAlignmentPairs() constructors
readGappedAlignments() and readGappedAlignmentPairs() functions
Deprecate cigar util functions: cigarToWidth(), cigarToQWidth(), cigarToIRanges(), splitCigar(), cigarToIRangesListByAlignment(), cigarToIRangesListByRName(), cigarToCigarTable(), summarizeCigarTable()
Deprecate seqselect().
BUG FIXES
Fix bug in c,GAlignments for case when objects were unnamed.
Fix bug in flank,GenomicRanges (when ‘ignore.strand=TRUE’ ‘start’ was being set to TRUE).
Fix bug in behavior of summarizeOverlaps() count mode ‘IntersectionNotEmpty’ when ‘inter.feature=FALSE’. Shared regions are now removed before counting.
Fix bug in cigarToIRangesListByAlignment() when ‘flag’ is supplied and indicates some reads are unmapped.
Fix bug in summarizeOverlaps(…, mode=’IntersectionNotEmpty’) when ‘features’ has ‘-‘ and ‘+’ elements and ‘ignore.strand=TRUE’.
match,GenomicRanges,GenomicRanges method now handles properly objects with seqlevels not in the same order.
Changes in version 4.10:
Changes in version 4.9:
Changes in version 1.4.0:
NEW FEATURES
Add desired_read_group to BamTallyParam; will limit tallies to that specific read group (useful for multi-amplicon sequencing, like Fluidigm)
Add keep_ref_rows argument to variantSummary() for keeping rows for positions where no alt is detected (the rows where ref == alt).
gsnap() will now output a GsnapOutputList when there are multiple input files
Support ‘terminal_threshold’ and ‘gmap_mode’ parameters in GsnapParam, and use different defaults for DNA vs. RNA. This means a big improvement in alignment quality for DNA.
GmapGenome now accepts a direct path to the genome as its first argument
USER-VISIBLE CHANGES
Renamed summarizeVariants to variantSummary
The ‘which’ in GsnapParam is now a GenomicRanges instead of a RangesList
Refactor bam_tally, so that bam_tally returns a TallyIIT object, which is then summarized via summarizeVariants; this allows computing tallies once and summarizing them in different ways (like maybe get the coverage). The summarizeVariants function yields a VRanges.
BUG FIXES
fix minimum quality cutoff check to >=, instead of >
fix asBam,GsnapOutput for when unique_only is TRUE
packages created by makeGmapGenomePackage now have a GmapGenome with the correct name
Changes in version 1.19.3:
Changes in version 1.19.2:
export getDb and loadGOMap <2013-07-09, Mon>
update vignettes <2013-07-09, Mon>
Changes in version 1.19.1:
Changes in version 1.23:
SIGNIFICANT USER-VISIBLE CHANGES
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 1.7.8:
Added gdsSetMissingGenotypes, updated argument names in ncdfSetMissingGenotypes.
Changed colorscheme in manhattanPlot.R.
Bug fix in ibdPlot - diagonal orange bars are back.
Bug fix in plinkWrite for writing just one sample.
Bug fix in printing pedigreeCheck error message.
Changes in version 1.7.7:
Changes in version 1.7.6:
Changes in version 1.7.5:
Changes in version 1.7.4:
Changes in version 1.7.3:
Changed labeling of IBD plots from “HS” to “Deg2” and “FC” to “Deg3.”
Bug fix in pedigreePairwiseRelatedness - no more warning about multiple values passed to stringsAsFactor.
pedigreeClean and pedigreeFindDuplicates are now defunct. Use pedigreeCheck instead.
Changes in version 1.2.2:
bug fix split_sparse_matrix
plot functions with other arguments ‘…’
plot arguments grid and pairs
new function ‘plotLarger’ (add samples without IBD and borders)
vcftoFABIA with command line options -s (SNVs_) and -o (output file)
vcftoFABIA in R with output file name
Changes in version 1.5.2:
NEW FEATURES
Efficient memory matrix representation using the Matrix package. The memory usage for a big sparse matrix is improved by a factor of 7. However, some operations are much slower with the Matrix implementation; thus, for some tasks such as the plotting functions, Matrix objects are converted to standard matrix-based objects
‘show’ and ‘detail’ method for HTClist object
‘c(x, …)’ method for HTCexp and HTClist objects
SIGNIFICANT USER-VISIBLE CHANGES
The option mask.data from the mapC function is deprecated
Update of parallel computation for some functions (import, normalize)
BUG FIXES
Changes in version 1.5.1:
BUG FIXES
BUG FIXES
BUG FIXES
BUG FIXES
BUG FIXES
Changes in version 1.3.1:
Changes in version 1.3.0:
Changes in version 3.11.10:
Changes in version 3.11.9:
Changes in version 3.11.8:
Changes in version 3.11.7:
Changes in version 3.11.6:
Changes in version 3.11.5:
Changes in version 3.11.4:
update variant calling code to work with VariantTools 1.3.6
include Jens’ minlength=1 modification to handle reads fully trimmed
Changes in version 3.11.3:
exports loginfo, logdebug, logwarn, logerror
uses TallyVariantsParam and as(,”VRanges”) to fix a bug preventing compilation on BioC
the number of threads used during processChunks is divided by 2 due to an erroneous extra mcparallel(…) step in sclapply/safeExecute
use a maximum of 12 cores in preprocessReads
Changes in version 3.11.2:
Changes in version 3.11.1:
Changes in version 3.11.0:
Changes in version 3.10.1:
Changes in version 1.7.1:
Changes in version 1.0.1:
Changes in version 1.13.1:
Node labels were ignoring the ‘cex’ node data value (reported by Hannes Hettling)
Added “start” and “both” options for “arrowLoc” graph attribute (which will draw arrowhead at both ends of hyper edges)
Bug fix for converting Hypergraph to graphBPH so that hyperedge names are used as edge labels (reported by Hannes Hettling)
Update the R version check in drawEdge() to cope properly with major versions greater than 2 (!)
Changes in version 0.3.9:
Changes in version 0.3.8:
Changes in version 0.3.6:
Changes in version 0.3.5 (2013-08-02):
Cleaned up internal code of readBPM().
ROBUSTNESS: Added unit tests for readBPM(). Note that these are only run if environment variable ‘R_CHECK_FULL’ is set, i.e. they will not be performed on the Bioconductor servers.
Changes in version 1.12.0:
NEW FEATURES
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 0.99.3:
Notes
Notes
Changes in version 1.5.2:
09-29-2010:
01-12-2011:
01-17-2012:
Changes in version 1.20.0:
NEW FEATURES
Add IntervalForest class from Hector Corrada Bravo.
Add a FilterMatrix class, for holding the results of multiple filters.
Add selfmatch() as a faster equivalent of ‘match(x, x)’.
Add “c” method for Views objects (only combine objects with same subject).
Add coercion from SimpleRangesList to SimpleIRangesList.
Add %outside%, which is the opposite of %over%.
Add validation of length() and names() of Vector objects.
Add “duplicated” and “table” methods for Vector objects.
Add some split methods that dispatch to splitAsList() even when only ‘f’ is a Vector.
SIGNIFICANT USER-VISIBLE CHANGES
All functionalities related to XVector objects have been moved to the new XVector package.
Refine how isDisjoint() handles empty ranges.
Remove ‘keepLength’ argument from “window<-“ methods.
unlist( , use.names=FALSE) on a CompressedSplitDataFrameList object now preserves the rownames of the list elements, which is more consistent with what unlist() does on other CompressedList objects.
Splitting a list by a Vector just yields a list, not a List.
The rbind,DataFrame method now handles the case where Rle and vector columns need to be combined (assuming an equivalence between Rle and vector). Also the way the result DataFrame is constructed was changed (avoids undesirable coercions and should be faster).
as.data.frame.DataFrame now passes ‘stringsAsFactors=FALSE’ and ‘check.names=!optional’ to the underlying data.frame() call. as(x,”DataFrame”) sets ‘optional=TRUE’ when delegating. Most places where we called as.data.frame(), we now call ‘as(x,”data.frame”)’.
The [<-,DataFrame method now coerces column sub-replacement value to class of column when the column already exists.
DataFrame() now automatically derives rownames (from the first argument that has some). This is a fairly significant change in behavior, but it probably better matches user expectations.
Make sure that SimpleList objects are coerced to a DataFrame with a single column. The automatic coercion methods created by the methods package were trying to create a DataFrame with one column per element, because DataFrame extends SimpleList.
Change default to ‘compress=TRUE’ for RleList() constructor.
tapply() now handles the case where only INDEX is a Vector (e.g. an Rle object).
Speedup coverage() in the “tiling case” (i.e. when ‘x’ is a tiling of the [1, width] interval). This makes it much faster to turn into an Rle a coverage loaded from a BigWig, WIG or BED as a GRanges object.
Allow logical Rle return values from filter rules.
FilterRules no longer requires its elements to be named.
The select,Vector method now returns a DataFrame even when a single column is selected.
Move is.unsorted() generic to BiocGenerics.
DEPRECATED AND DEFUNCT
Deprecate seqselect() and subsetByRanges().
Deprecate ‘match.if.overlap’ arg of “match” method for Ranges objects.
“match” and “%in%” methods that operate on Views, ViewsList, RangesList, or RangedData objects (20 methods in total) are now defunct.
Remove previously defunct tofactor().
BUG FIXES
The subsetting code for Vector derivatives was substantially refactored. As a consequence, it’s now cleaner, simpler, and [ and [[ behave more consistently across Vector derivatives. Some obscure long-standing bugs have been eliminated and the code can be slightly faster in some circumstances.
Fix bug in findOverlaps(); zero-width ranges in the query no longer produce hits ever (regardless of ‘maxgap’ and ‘minoverlap’ values).
Correctly free memory allocated for the linked list of results compiled for findOverlaps(select=”all”).
Various fixes for AsIs and DataFrames.
Allow zero-row replacement values in [<-,DataFrame.
Fix long standing segfault in “[” method for Rle objects (when doing Rle()[0]).
“show” methods now display the most specific class when a column or slot is an S3 object for which class() returns more than one class.
“show” methods now display properly cells that are arrays.
Fix the [<-,DataFrame method for when a value DataFrame has matrix columns.
Fix ifelse() for when one or more of the arguments are Rle objects.
Fix coercion from SimpleList to CompressedList via AtomicList constructors.
Make “show” methods robust to “showHeadLines” and “showTailLines” global options set to NA, Inf or non-integer values.
Fix error condition in eval,FilterRules method.
Corrected an error formatting in eval,FilterRules,ANY method.
Changes in version 1.7.6:
added TMT 10plex (contribution from Florent Gluck)
fixed bugs with system.file not working on R < 2.11 (contribution from Florent Gluck)
fixed bug in isobar-qc which was not working without normalize=TRUE
added writeHscoreData for usage with Hscorer.py (MM Savitski, 2010)
shQuote commands correctly - should fix issues running report generation on Windows
added calculations and XLS tab for peptides with unsure localization of the modification site
updated scripts for creating multi-sample reports (create.meta.reports)
Changes in version 1.7.5:
fixed critical bugs: Excel report output had wrong ordering, i.e. ratios did not correspond to the meta information [introduced in version 1.7.3].
fix of real peptide names: Reexport I/L peptides in reports
Changes in version 1.7.4:
improved MSGF+ search result import
refactored report properties: all properties can now be defined in the properties.R
speed and memory usage improvements when creating wide XLS report
ratio p-value adjustment now works per comparison instead of globally
Changes in version 1.7.3:
fix wide XLS report format
novel plot for ratio combinations in QC report showing individual ratio distributions and significant ratios
Changes in version 1.7.2:
added TMT 6plex masses to phosphoRS export script
fixed mascot parsers
MzIdentML version 1.1.0 support implemented [not fully tested]
Changes in version 1.7.1:
Changes in version 1.3.1:
BUGS FIXED
Changes in version 1.3.0:
MINOR CHANGES
Changes in version 2013-07-03 (2013-07-03):
Replaced calls to ‘exit’, ‘fprintf’ which now raise a WARNING upon check.
Iteration index not printed anymore in ‘nem’ (raised an error when compiling the vignette).
Modified the getChromosomeArm function so that cytoband is not set to NULL.
Changes in version 2008-09-04 (2008-09-04):
added a CHANGELOG
updated outdated reference in the .bib file
changed the definition of flag “rep.flag” to avoid the error now caused by sd(NA, na.rm=TRUE)
Changes in version 1.2 (2013-08-20):
Our paper got accepted and is available!
Added methods for MRexperiment objects (colSums,colMeans,rowSums,rowMeans, usage is for example colSums(obj) or colSums(obj,norm=TRUE)) (09-25)
Added two new functions, plotOrd and plotRare - a function to plot PCA/MDS coordinates and rarefaction effect (09-04,09-18)
Updated MRfisher to include thresholding for presence-absence testing (08-19)
Updated comments (roxygen2) style for all the functions using the Rd2roxygen package (07-13)
Updated plotCorr and plotMRheatmap to allow various colors/not require trials(07-13)
Rewrote vignette (and switched to knitr)
Changes in version 1.1 (2013-06-25):
Rewrote load_meta and load_metaQ to be faster/use less memory
Modified cumNormStat to remove NA samples from calculations (example would be samples without any counts)
Re-added plotGenus’ jitter
Fixed uniqueNames call in the MR tables
Changed, thanks to Kasper Daniel Hansen’s suggestions, the following:
plotOTU and plotGenus both have much better auto-generated axes.
MRtable, MRfulltable, MRcoefs have a sort-by-p-value option now.
MRtable, MRfulltable, MRcoefs now have an extra option to include unique numbers for OTU features (previously these were always added automatically).
cumNorm.R now returns the object as well, not just replacing the environment.
Still to do: turn the fitZig output to S3, consider a subsetting function, address low p-values.
Changes in version 1.7:
Added getMethSignal(), a convenience function for programming.
Changed the argument name of “type” to “what” for getMethSignal().
Added the class “RatioSet”, like “GenomicRatioSet” but without the genome information.
Bugfixes to the “GenomicRatioSet()” constructor.
Added the method ratioConvert(), for converting a “MethylSet” to a “RatioSet” or a “GenomicMethylSet” to a “GenomicRatioSet”.
Fixed an issue with GenomicMethylSet() and GenomicRatioSet() caused by a recent change to a non-exported function in the GenomicRanges package (Reported by Gustavo Fernandez Bayon gbayon@gmail.com).
Added fixMethOutliers for thresholding extreme observations in the [un]methylation channels.
Added getSex, addSex, plotSex for estimating sex of the samples.
Added getQC, addQC, plotQC for a very simple quality control measure.
Added minfiQC for a one-stop function for quality control measures.
Changed some verbose=TRUE output in various functions.
Added preprocessQuantile.
Added bumphunter method for “GenomicRatioSet”.
Handling signed zero in minfi:::.digestMatrix which caused unit tests to fail on Windows.
addSex and addQC lead to sampleNames() being dropped because of a likely bug in cbind(DataFrame, DataFrame). Work-around has been implemented.
Re-ran the test data generator.
Fixed some Depends and Imports issues revealed by new features of R CMD check.
Added blockFinder and cpgCollapse.
(internal) added convenience functions for argument checking.
Exposed and re-wrote getAnnotation().
Changed getLocations() from being a method to a simple function. Arguments have been removed (for example, now the function always drops non-mapping loci).
Implemented getIslandStatus(), getProbeType(), getSnpInfo() and addSnpInfo(). The two later functions retrieve pre-computed SNP overlaps, and the new annotation object includes SNPs based on dbSNP 137, 135 and 132.
Changed the IlluminaMethylationAnnotation class to now include genomeBuild information as well as defaults.
Added estimateCellCounts for deconvolution of cell types in whole blood. Thanks to Andrew Jaffe and Andres Houseman.
Changes in version 0.99.6:
Changes in version 0.99.5:
Changes in version 0.99.2:
Changes in version 1.5.4:
NEW FEATURES
BUG FIXES
Changes in version 1.5.3:
NEW FEATURES
BUG FIXES
Changes in version 1.5.2:
NEW FEATURES
BUG FIXES
Changes in version 1.5.1:
NEW FEATURES
BUG FIXES
Changes in version 1.9.12:
fix MSnSet -> ExpressionSet <2013-10-13 Sun>
MSnSet -> ExpressionSet unit test <2013-10-13 Sun>
Changes in version 1.9.11:
MIAPE to MIAME conversion <2013-10-11 Fri>
proper MIAME when MSnSet -> ExpressionSet <2013-10-11 Fri>
Changes in version 1.9.10:
faster plotMzDelta <2013-09-28 Sat>
faster plotMzDelta for mzRramp instances <2013-09-29 Sun>
chromatogram method <2013-10-04 Fri>
plotMzDelta has subset arg <2013-10-04 Fri>
xic method <2013-10-04 Fri>
suggesting msdata for chromatogram example <2013-10-04 Fri>
renamed plotting arg ‘centroided.’ <2013-10-04 Fri>
Changes in version 1.9.9:
typo in filterNA Rd <2013-09-18 Wed>
writeMgfData now has a progress bar <2013-09-24 Tue>
centroided(MSnExp) <- TRUE now allowed <2013-09-24 Tue>
Changes in version 1.9.8:
using new.env(parent=emptyenv()) to get rid of enclosing env when creating new MSnExps <2013-09-17 Tue>
new (private) MSnExp.size function <2013-09-17 Tue>
Changes in version 1.9.7:
Passing … to read.table in MSnbase:::readIspy[Silac|15N]Data <2013-09-16 Mon>
QualityControl biocView <2013-09-16 Mon>
Changes in version 1.9.6:
new as.data.frame.MSnSet method <2013-08-16 Fri>
new ms2df function <2013-08-16 Fri>
new getEcols and grepEcols helpers for readMSnSet2 <2013-08-16 Fri>
Changes in version 1.9.5:
Changes in version 1.9.4:
Changes in version 1.9.3:
Using knitr as VignetteEngine <2013-04-29 Mon>
Remove LazyLoad from DESCRIPTION, which is default nowadays <2013-04-29 Mon>
knitr dependency > 1.1.0 for VignetteBuilder <2013-04-29 Mon>
Adding MSnSet creating sections in io vignette <2013-04-29 Mon>
new readMSnSet2 function <2013-04-30 Tue>
Changes in version 1.9.2:
clean now has an all param (default FALSE retains original behaviour) to remove all 0 intensity values <2013-04-17 Wed>
using BiocGenerics::normalize <2013-04-25 Thu>
Changes in version 1.9.1:
new logging utility function to update an MSnSet’s processingData(object)$processing <2013-03-29 Fri>
Proper logging in t.MSnSet <2013-03-29 Fri>
Changes in version 1.9.0:
Changes in version 1.99.1:
fixed several NOTES, added .Rbuildignore, compacted vignettes
TODO: check remaining ‘no visible binding for global variable’ NOTES
removed warn -1
added validity check when returning MSnSet
used inherits/is. for class testing
TODO fix if conditions syntax
Changes in version 1.99.0:
improve efficiency for computing groupComparison and quantification <2012-12-21>
add .rnw <2012-12-03>
update groupComparison for label-free time-course experiment with single feature and with or without technical replicates <2013-04-08>
add option for saving QQ plot and residual plot in order to check the normality assumption in groupComparison function. <2013-04-08>
use ggplot2 package for all plots. <2013-07-11>
fix bug for volcano plot: different color labeling <2013-07-12>
add power plot in sample size calculation plot <2013-07-12>
add ‘interference=TRUE or FALSE’ in sample size calculation <2013-07-15>
add ‘legend.size=7’ for size of feature name legends in dataProcessPlots <2013-07-23>
add ‘text.angle=0’ for angle of condition labeling in dataProcessPlots <2013-07-23>
fix bug for quantification: when there are missing values in endogenous intensities, but values in reference intensities. <2013-07-24>
fix bug for groupComparison: when there are missing values in endogenous intensities, but values in reference intensities, the .make.contrast.based or .free sub functions were changed. <2013-07-25>
two functions for transformation between the required input for MSstats and the MSnSet class <2013-09-04>
flexibility for visualization: save as pdf files or show in window with selected proteins or all proteins. <2013-09-04>
handle unequal variance for features in groupComparison function with featureVar=TRUE <2013-09-04>
Add ‘missing.action’ for imputing missing values in the group comparison stage. <2013-09-20>
Changes in version 0.3.1:
Comment unused functions <2013-07-05 Fri>
Minor typos in vignette <2013-07-05 Fri>
Changes in version 0.3.0:
NEW FEATURES AND FUNCTIONS
Changes in version 0.2.1:
NEW FEATURES AND FUNCTIONS
NEW FEATURES AND FUNCTIONS
NEW FEATURES AND FUNCTIONS
Changes in version 0.1-1:
NEW FEATURES AND FUNCTIONS
flatten to create a tabular representation of results
Changes in version 0.0-2:
NEW FEATURES AND FUNCTIONS
The package is now fully documented
created helper functions:
countChildren,
attrExtract and
attrExtractNameValuePair
Added NEWS file
Added README.md file
The parser now successfully imports all test files on the HUPO mzIdentML page
PERFORMANCE
countChildren,
attrExtractand
attrExtractNameValuePair)
Changes in version 0.0-1:
NEW FEATURES AND FUNCTIONS
NEW FEATURES AND FUNCTIONS
mzID,
mzIDdatabase,
mzIDevidence,
mzIDparameters,
mzIDpeptidesand
mzIDpsmwith related constructors
Changes in version 1.7.4:
Changes in version 1.7.3:
Changes in version 1.7.2:
Changes in version 1.7.1:
Changes in version 2.2.0 (2013-10-14):
New function to generate a Quality Control Report in PDF format including all the exploratory plots.
Plot to evaluate RNA composition bias has been changed.
Some bugs have been fixed.
Changes in version 1.8.1:
BUG FIXES
NEW FUNCTION
Changes in version 1.1.7:
Graphviz view can automatic choose different types of legends, either on nodes, edges or both depending on the specific pathways.
Vigette has been reformatted: the “Quick start” section added
Changes in version 1.1.6:
Pathview can plot/integrate/compare multiple states/samples in the same graph. Several functions and data objects are revised: including pathview, keggview.native, keggview.graph, render.kegg.node etc. New section on multiple state data with working exampleshas been added.
Vigette has been reformatted: Data integration section splitted into multiple sub-sections.
Changes in version 1.1.5:
Changes in version 1.1.3:
Changes in version 1.2.0:
NEW FEATURES
Added argument ncpus to runGSA(). Enables parts of this function to run in parallel, thus decreasing runtime. Requires R package snowfall.
Added function runGSAhyper() to perform gene set analysis using Fisher’s exact test, as an alternative to runGSA.
Added information about genes in each area of the Venn diagram in the output of diffExp().
Added volcano plot as optional output of diffExp() and added argument volcanoFC.
Added argument ncharLabel to networkPlot() and consensusHeatmap() to control the length of the labels in the plots and add the option to not truncate them.
Added the yeast metabolic model iTO977 to be loaded with loadGSC(), for detecting reporter metabolites using gene set analysis. (See vignette.)
Added support for running networkPlot() on objects returned by runGSAhyper().
USER-VISIBLE CHANGES
Minor updates to the vignette.
Updated diffExp() man page.
Updated the main legend of the plot from consensusScores() to make it clearer.
Updated error-messages in networkPlot().
Fixed typo in PC variance plot produced by runQC().
Restructered this NEWS file.
BUG FIXES
Updated diffExp() to handle changes in lmFit() and topTable() from limma.
Removed suppressWarnings(), as temporarily introduced in version 1.0.1, in polarPlot() around the calls to radial.plot() since warnings are now fixed in package plotrix.
Changes in version 1.0.7:
USER-VISIBLE CHANGES
Changes in version 1.0.6:
USER-VISIBLE CHANGES
Changes in version 1.0.5:
BUG FIXES
USER-VISIBLE CHANGES
Changes in version 1.0.4:
BUG FIXES
Changes in version 1.0.3:
USER-VISIBLE CHANGES
Changes in version 1.0.2:
USER-VISIBLE CHANGES
Changes in version 1.0.1:
USER-VISIBLE CHANGES
Updated CITATION information.
Fixed typos in DESCRIPTION and piano-package.Rd.
Updated the man file for loadGSC().
Added URL in DESCRIPTION.
BUG FIXES
Fixed bug in loadGSC() so that gmt-files are now loaded correctly.
Temporarily added suppressWarnings() in polarPlot() around the calls to radial.plot() since warnings appeared for example(“radial.plot”) in plotrix v3.4-6.
Changes in version 0.99.4:
SIGNIFICANT USER-VISIBLE CHANGES
NEW FEATURES
Changes in version 1.33.1:
Changes in version 0.99.2:
Changes in version 0.99.1:
Major update … moved to S4 methods (getters, setters, setMethods, etc).
Smaller sample data set for quick example runs in the man pages.
Integration with the MSnbase package. See the vignette for an example of raw data processing in preparation for network building. The main buildProconaNetwork function now takes either matrices of peptide data, or MSnSet objects containing data.
Man pages relating to the main proconaNet object have been combined.
Changes in version 1.1.8:
Changes in version 1.1.7:
Changes in version 1.1.6:
new outliers arg to plot2D <2013-09-23 Mon>
cite addMarkers in vignette <2013-09-26 Thu>
add code chunk in poster vig <2013-09-26 Thu>
Changes in version 1.1.5:
added biocViews <2013-09-19 Thu>
added knitr vig engine in ml <2013-09-19 Thu>
import dependencies <2013-09-20 Fri>
Changes in version 1.1.4:
Changes in version 1.1.3:
new private anyUnknown function, used in phenoDisco, to check for the presence of unlabelled (‘unknown’) data <2013-08-27 Tue>
added a note in vignette about “unknown” convention to define protein with unknown localisation <2013-08-28 Wed>
Added HUPO 2011 poster <2013-09-09 Mon>
Changes in version 1.1.2:
fixed Author[s]@R <2013-05-16 Thu>
na.rm=TRUE in f1Count <2013-05-19 Sun>
added CITATION file <2013-06-07 Fri>
new testMarkers function <2013-06-29 Sat>
error message in getBestParams suggests to use testMarkers <2013-06-29 Sat>
nndist methods (see issue #23) <2013-07-01 Mon>
remove unnecessary as.matrix in plot2D’s cmdscale <2013-07-19 Fri>
added plot2D(…, method = “MDS”) example <2013-07-19 Fri>
changed ‘euclidian’ to ‘euclidean’ in nndist_matrix <2013-07-26 Fri>
fixed row ordering in phenoDisco, input and output rownames are now the same <2013-08-13 Tue>
Using filterNA in phenoDisco <2013-08-13 Tue>
Changes in version 1.1.1:
Update README.md to reflect availability in stable release and biocLite installation <2013-04-07 Sun>
new MSe pipeline <2013-04-07 Sun>
perTurbo using Rcpp <2013-04-16 Tue>
initial clustering infrastructure (not exported) <2013-04-17 Wed>
new markerSet function <2013-04-24 Wed>
plsaOptim’s ncomp is now 2:6 <2013-04-27 Sat>
added forword to vignette <2013-04-29 Mon>
default k is now seq(3, 15, 2) in knnOptim <2013-05-09 Thu>
Changes in version 1.1.0:
Changes in version 2.4.4:
Changes in version 2.4.2:
Changes in version 2.4.0:
Major update with more functions and small bugfixes
Added sequenceReport() and groupReport() for easier report generation
Visualise motif scores along a sequence with plotMotifScores()
Creation of empirical CDFs for motif scores
Almost complete rewrite of the vignette to emphasize the main use cases
Converted documenation to roxygen2
Changes in version 2.3.2:
Changes in version 2.3.1:
Fix a bug with plotTopMotifsSequence() with calling an unknown function
Implement group.only for all background, not only pval in motifEnrichment()
New default to plotMultipleMotifs() so the margins are better
Changes in version 0.99.3:
fixed template - toc after begin document <2013-10-01 Tue>
updates to vignette <2013-10-01 Tue>
re-reoxygenise rnadeg man <2013-10-01 Tue>
Changes in version 0.99.2:
Changes in version 0.99.1:
Updated github README file <2013-09-18 Wed>
Added Arnoud’s suggestions to vig <2013-09-21 Sat>
rnadeg wrapper function available in the package <2013-09-21 Sat>
new QcMetadata class <2013-09-21 Sat>
metadata and mdata synonym <2013-09-21 Sat>
added metadata section in knitr reports <2013-09-21 Sat>
selectively import pander::pander.table to fix warning uppon installation <2013-09-21 Sat>
added n15qc wrapper <2013-09-21 Sat>
Changes in version 0.99.0:
Changes in version 1.18:
USER VISIBLE CHANGES
Updated the formatting of the vignettes to adhere to the Bioconductor style
Functions qpRndHMGM() and qpSampleFromHMGM() which were deprecated in the previous release, are now defunct.
NEW FEATURES
qpCItest() takes now also R/qtl cross objects as input, i.e., the user can test for conditional independence directly on R/qtl cross objects
added functions addGenes(), addeQTL(), and addGeneAssociation() to incrementally build and simulate eQTL networks
BUG FIXES
Real or integer valued levels in discrete variables now prompt an error when they are not positive integers
qpFunctionalCoherence() handles regulatory modules with just one target without giving an error
reQTLcross() can now simulate an initial eQTL model with no genes
fixed plotting for HMgmm objects
Changes in version 1.2.0:
NEW FEATURES
NEW FEATURES
NEW FEATURES
Changes in version 1.1.1:
NEW FEATURES
2.0:
Changes in version 0.99.1:
MINOR CHANGES
dontruntags have been removed from plotting examples (Thanks to Valerie Obenchain).
Changes in version 0.99.0:
DOCUMENTATION
NEWSfile was added.
Changes in version 1.5.3:
Changes in version 1.5.2:
implement gseAnalyzer for GSEA analysis <2013-07-10, Wed>
implement viewPathway for visualizing pathway <2013-07-10, Wed>
Changes in version 1.5.1:
extend enrichPathway to support rat, mouse, celegans, zebrafish, and fly. <2013-03-27, Wed>
modify enrichPathway according to the change of enrich.internal, remove qvalueCutoff parameter, add pAdjustMethod, add universe paramter for user to specify background. <2013-05-29, Wed>
Changes in version 2.0.0: 2.5:
Expose mai argument in plot,Ragraph-method.
Changed the LICENSE to EPL.
Fixed some wrong text in the DESCRIPTION file.
Fixed pieGylph to adress the issue of warning messages about rep(NULL) (reported by Cristobal Fresno Rodríguez Cristobalfresno@gmail.com).
Updated Imports, Depends, NAMESPACE as per R CMD check requirements.
Fixing issue with the lt~obsolete.m4 file(s) in Graphviz; R CMD check was issueing a warning.
Moved vignettes from inst/doc to vignettes directory.
Changes in version 2.6.0:
NEW FEATURES
support for logical added
support for reading attributes added (use read.attributes=TRUE)
enabeled compression for data.frame in h5write
USER VISIBLE CHANGES
1.3.3: 1. Fixed definitions of assay.filenames.per.sample and factors. 2. Fixed regulession of investigation file (i_) to be considered at the beginning of the string. 3. Added CITATION file.
Changes in version 1.0.1:
Changes in version 1.3.2:
Changes in version 1.3.1:
Changes in version 1.3.0:
Changes in version 1.2.0:
add the ability to analyze gene sets (pathways with no interaction) using only over-representation
bug fixes: plot boundaries, etc.
Changes in version 1.8.1 (2011-08-02):
Changes in version 1.8.0 (2011-07-11):
Changes in version 1.14.0:
NEW FEATURES
seqinfo(FaFile) returns available information on sequences and lengths on Fa (indexed fasta) files.
filterBam accepts FilterRules call-backs for arbitrary filtering.
add isIncomplete,BamFile-method to test for end-of-file
add count.mapped.reads to summarizeOverlaps,*,BamFileList-method; set to TRUE to collect read and nucleotide counts via countBam.
add summarizeOverlaps,*,character-method to count simple file paths
add sequenceLayer() and stackStringsFromBam()
add ‘with.which_label’ arg to readGAlignmentsFromBam(), readGappedReadsFromBam(), readGAlignmentPairsFromBam(), and readGAlignmentsListFromBam()
SIGNIFICANT USER-VISIBLE CHANGES
rename: readBamGappedAlignments() -> readGAlignmentsFromBam() readBamGappedReads() -> readGappedReadsFromBam() readBamGappedAlignmentPairs() -> readGAlignmentPairsFromBam() readBamGAlignmentsList() -> readGAlignmentsListFromBam() makeGappedAlignmentPairs() -> makeGAlignmentPairs()
speedup findMateAlignment()
DEPRECATED AND DEFUNCT
BUG FIXES
scanVcfHeader tolerates records without ID fields, and with fields named similar to ID.
close razip files only once.
report tabix input errors
Changes in version 1.12.0:
NEW FEATURES
NEW FEATURES
NEW FEATURES
NEW FEATURES
NEW FEATURES
NEW FEATURES
NEW FEATURES
NEW FEATURES
NEW FEATURES
Changes in version 1.1.6:
NEW FEATURES
Changes in version 1.1.5:
BUG FIXES
NEW FEATURES
Changes in version 1.1.4:
NEW FEATURES
BUG FIXES
Corrected a bug that prevented the use of multi-threading
Corrected a bug that could affect results when multiple fasta files are assigned to a same taxon..0.0:
Changes in version 1.22:
NEW FEATURES
import,BigWigFile gains an asRle parameter that returns the data as an RleList (assuming it tiles the sequence); much faster than importing a GRanges and calling coverage() on it.
add export,RleList,BigWigFile method for direct (and much more efficient) output of RleLists (like coverage) to BigWig files.
SIGNIFICANT USER-VISIBLE CHANGES
BUG FIXES
handle different Set-Cookie header field casing; often due to proxies (thanks to Altuna Akalin)
attempt to more gracefully handle UCSC truncation of large data downloads
handle re-directs due to UCSC mirroring (thanks Martin Morgan)
Changes in version 1.0:
First release in Bioconductor.
Integration with Bioconductor packages: MotifDb and PWMEnrich, graph (limited).
Human and mouse BioGRID datasets updated to version 3.2.105 (October 2013).
Changes in version 1.0:
First Bioconductor release.
Provides a shiny interface to rTRM.
All options available in rTRM can be accesses from rTRMui.
Several utilities to control network appearance (node, edge and label size, etc.)
Download results in PDF and text format.
Changes in version 1.57.1:
NEW FEATURES
Added new function S4toS3() for converting S4 SBML models of rsbml into S3 SBMLR models.
Includes code from Vishak Venkateswaran’s branch of SBMLR on github (July 2011). This is may allow it to read more models in I say may because I don’t fully understand all of his codes, but add it in anyway.
Problem of libsbml allowing multiple args to multiplication operator was solved by using prod(). Similarly for sum(). Note that “*”() is strictly a binary operator.
SIGNIFICANT USER-LEVEL CHANGES
Call my model object call SBMLR now to let SBML refer to rsbml’s SBML object. Similary my simulate() is now sim() to keep it clear of rsbml’s simulate().
SBMLR model files no longer need to have the reversible flag set. The default is FALSE, which is the opposite of Level 2: all of my reaction rate laws have always been positive, and a design objective I like is to keep SBMLR model files as lean as possible (subject to the constraint that they be valid R code).
Simulate handles lsoda() event dataframes, see simulate help.
curtoNatural.R (see Fig. 7 BMC Bioinformatics 2004, 5:190) is now in the models folder.
The SOD model of Kowald et al, JTB 238 (2006) 828–840 is now also in this folder.
Two pdfs of publications that are freely available were removed to make the package lighter.
Similarly, only manual.doc remains: its redundant pdf is now out.
Notes
Changes in version 2.2.8:
NEW FEATURES
Ability to download microarray data directly from Gene Expression Omnibus and normalize the files in a single command.
Alternate functions (SCANfast and UPCfast) for performing SCAN and UPC normalization that use fewer probes and thus execute in ~60% less time.
Ability to execute normalize multiple .CEL files in parallel either across multiple cores on a single computer or across multiple computers on a cluster.
Ability to generate RNA-Seq annotation files to be used with the UPC_RNASeq function from GTF/FASTA source files.
Ability to download and install BrainArray packages via an R function.
Improved support for Affymetrix exon arrays.
Improved support for Affymetrix HT_HG-U133A early access exon arrays.
OTHER
Changes in version 1.1.5:
minor bug fix in asVCF
update man page “SeqVarGDSClass-class.Rd” with new methods
in DESCRIPTION, BiocGenerics listed in “Suggests” instead of “Imports” as suggested by R CMD check
bug fix in seq
revise the function ‘seqTranspose’ according to the update of gdsfmt (v1.0.0)
Changes in version 1.1.4:
add a new argument “action” to the function ‘seqSetFilter’
add a new function ‘seqInfoNewVar’ which allows adding new variables to the INFO fields
Changes in version 1.1.3:
minor bug fix to avoid ‘seqGetData’ crashing when no value returned from a variable-length variable
update documents
Changes in version 1.1.2:
added methods ‘qual’, ‘filt’, ‘asVCF’
‘granges’ method uses length of reference allele to set width
Changes in version 1.1.1:
revise the argument ‘var.index’ in the function ‘seqApply’
basic supports of ‘GRanges’ and ‘DNAStringSetList’
Changes in version 1.1.5 (2013-09-01):
Added an all-in function runSeqGSEA() for one step analysis
Modified genpermuteMat() with invoking set.seed() for reproducibility
Vignette updated
Changes in version 1.1.4 (2013-08-10):
Updated functions to allow DE-only GSEA analysis.
Fixed a few bugs.
Vignette updated.
Changes in version 1.1.3 (2013-06-13):
Changes in version 1.1.2 (2013-05-01):
Changes in version 1.1.1 (2013-04-23):
The methodology paper of the SeqGSEA package published at BMC Bioinformatics (2013, 14(Suppl 5):S16).
Added a function writeScores to generate DE/DS and gene scores output.
Changes in version 0.99.1:
NEW FEATURES
Changes in version 0.99.0:
Changes in version 1.19:
SIGNIFICANT USER-VISIBLE CHANGES
NEW FEATURES
encoding,FastqQuality and encoding,SFastqQuality provide a convenient map between letter encodings and their corresponding integer quality values.
filterFastq transforms one fastq file to another, removing reads or nucleotides via a user-specified function. trimEnds,character-method & friends use this for an easy way to remove low-quality base.
BUG FIXES
writeFastq successfully writes zero-length fastq files.
FastqStreamer / FastqSampler warn on incomplete (corrupt) files
Changes in version 0.99.0:
First release of the SpacePAC package.
Two 3D clustering methods: Using a Simulation approach and using a Poisson approach.
Allows a rudimentary plotting function to visualize the most significant cluster. Currently only one sphere can be plotted at a time.
Changes in version 1.0.0:
Changes in version 1.0.0:
NEW FEATURES
NEW FEATURES
NEW FEATURES
Changes in version 1.3.4:
estimateMasterFdr now support list of vector as pepfiles <2013-09-27 Fri>
import(MSnbase) <2013-09-27 Fri>
Changes in version 1.3.3:
Changes in version 1.3.2:
Changes in version 1.3.1:
Reporting total number of peptides in dbUniquePeptideSet. Fixes issue #41. <2013-05-13 Mon>
New mergedEMRTs arg in findEMRTs. Closes issue #38. <2013-05-13 Mon>
fixed synapterTiny$QuantPep3DData, which had the rownames as first column synapterTiny$QuantPep3DData$X. Detected thanks to new mergedEMRTs arg. <2013-05-13 Mon>
added mergedEMRTs arg to synergise <2013-05-22 Wed>
Synapter checks that one file per list element is passed <2013-05-28 Tue>
minor typo/fixes <2013-05-28 Tue>
new idSource column when matching EMRTs <2013-05-29 Wed>
new performance2 method that shows identification source and NA values contingency table <2013-05-29 Wed>
new filterPeptideLength method <2013-05-29 Wed>
added peplen argument to synergise to filter on peptide length <2013-05-29 Wed>
Changes in version 1.3.0:
Changes in version 1.18.0:
NEW FEATURES
NEW FEATURES
NEW FEATURES
Changes in version 1.2.0:
NEW FEATURES
This package was released as a Bioconductor package (previously CRAN).
WAD method for identifying DEGs was added.
ROKU method for identifying tissue-specific genes was added.
‘increment’ argument of ‘calcNormFactor’ function was added.
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 1.1.3:
SIGNIFICANT USER-VISIBLE CHANGES
SIGNIFICANT USER-VISIBLE CHANGES
SIGNIFICANT USER-VISIBLE CHANGES
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 1.0.0:
SIGNIFICANT USER-VISIBLE CHANGES
Changes in version 3.0.0:
Changes in version 2.9.2:
Changes in version 1.14.1:
BUG FIXES
Changes in version 1.5.9:
BUG FIXES
Changes in version 1.5.8:
BUG FIXES
Changes in version 1.5.7:
BUG FIXES
Changes in version 1.5.6:
BUG FIXES
Changes in version 1.5.5:
BUG FIXES
Changes in version 1.5.4:
NEW FEATURES
BUG FIXES
Changes in version 1.5.3:
NEW FEATURES
Changes in version 1.5.2:
BUG FIXES
Changes in version 1.5.1:
BUG FIXES
Changes in version 1.2.0:
NEW FEATURES
NEW FEATURES
NEW FEATURES
NEW FEATURES
Changes in version 1.8.0:
NEW FEATURES
Add ‘upstream’ and ‘downstream’ arguments to IntergenicVariants() constructor.
Add ‘samples’ argument to ScanVcfParam().
Add readGT(), readGeno() and readInfo().
Add VRanges, VRangesList, SimpleVRangesList, and CompressedVRangesList classes.
Add coercion VRanges -> VCF and VCF -> VRanges.
Add methods for VRanges family: altDepth(), refDepth(), totalDepth(), altFraction() called(), hardFilters(), sampleNames(), softFilterMatrix() isIndel(), resetFilter().
Add stackedSamples,VRangesList method.
MODIFICATIONS
VCF validity method now requires the number of rows in info() to match the length of rowData().
PRECEDEID and FOLLOWID from locateVariants() are now CharacterLists with all genes in ‘upstream’ and ‘downstream’ range.
Modify rownames on rowData() GRanges to CHRAM:POS_REF/ALT for variants with no ID.
readVcf() returns info() and geno() in the order specified in the ScanVcfParam.
Work on scanVcf(): - free parse memory at first opportunity - define it_next in .c rather than .h - parse ALT “.” in C - hash incoming strings - parse only param-requested ‘fixed’, ‘info’, ‘geno’ fields
Add dimnames<-,VCF method to prevent ‘fixed’ fields from being copied into ‘rowData’ when new rownames or colnames were assigned.
Support read/write for an emtpy VCF.
readVcf(file=character, …) method attempts coercion to TabixFile.
Support for read/write an emtpy VCF.
Add performance section to vignette; convert to BiocStyle.
DEPRECATED and DEFUNCT
Defunct dbSNPFilter(), regionFilter() and MatrixToSnpMatrix().
Deprecate readVcfLongForm().
BUG FIXES
Fix bug in compatibility of read/writeVcf() when no INFO are columns present.
Fix bug in locateVariants() when ‘features’ has no txid and cdsid.
Fix bug in asVCF() when writing header lines.
Fix bug in “expand” methods for VCF to handle multiple ‘A’ columns in info().
Changes in version 1.4:
NEW FEATURES
tallyVariants will now keep ref rows if variant_strand=0; this is useful for getting information when no alts are present (e.g., for making wildtype calls). Better have a big cluster to do this over the whole genome.
add a keep_extra_stats param to TallyVariantsParam; setting this to FALSE will speed things up when the extra stats are not needed.
idVerify now supports VCF input like that output by GATK.
callableFraction() now supports GRangesList and TranscriptDb.
USER-VISIBLE CHANGES
The API is now based on VRanges, a formal GRanges-derived class for representing variants; use of so-called “tally” or “variant” GRanges is deprecated.
Disable proximity filter by default; we recommend this now only for whole genome calling.
QA filtering is no longer a formal part of the calling pipeline; we recommend to apply QA filters “softly” via qaVariants() and use the results for diagnostics only.
Use BiocParallel (BPPARAM argument) for tallyVariants
VariantTallyParam deprecated; use TallyVariantsParam
BUG FIXES
idVerify now correctly computes cliques instead of connected components
use the total count, rather than the ref count when calculating the alt frequency
Changes in version 1.37.6:
NEW FEATURE
USER VISIBLE CHANGES
Add Brigham Young University to LICENSE file for copyright purposes.
Add copyright information display when running findPeaks.massifquant() within xcmsRaw.R
Clean and update documentation for findPeaks.massifquant-methods.Rd
BUG FIXES
Changes in version 1.37.5:
BUG FIXES
Changes in version 1.37.4:
BUG FIXES
Changes in version 1.37.3:
BUG FIXES
Changes in version 1.37.1:
BUG FIXES
NEW FEATURES
Changes in version 3.00:
VERSION xps-1.21.5
add QualTreeSet methods NUSE() and RLE() to get stats and values
update man export.Rd
VERSION xps-1.21.4
VERSION xps-1.21.3
VERSION xps-1.21.2
VERSION xps-1.21.1
update README
update Makefile to set include path (for ~/.R/Makevars)
update XPSUtils.cxx to eliminate warning with clang
Changes in version 2.15:
VERSION xps-1.17.2
update script4schemes.R to include schemes for HuGene-2_x-st arrays
update script4xps.R with example 4a for HuGene-2_1-st arrays
update README
VERSION xps-1.17.1
remove warnings: partial argument match
zzz.R: use .onAttach()
The following packages are no longer in the release:
dualKS, externalVector, GeneGroupAnalysis, iFlow, KEGGSOAP, xmapcore | http://bioconductor.org/news/bioc_2_13_release/ | CC-MAIN-2018-39 | refinedweb | 13,413 | 51.34 |
0
I have been reading a textbook on Java-- I am a C/C++ programmer so my view of addresses and pointers are based on what I know from those languages. I am trying to make sense of something the textbook said about object addresses.
The textbook basically said when one object points to another it points to the same address and becomes an alias. The old address pointed to is lost and is taken care of by the GC. That makes sense to me but I wrote a little test code to see how this goes:
import java.util.Scanner; public class sscanner { public static void main(String[] args) { String message_one = "Hello!"; String message_two = "World!"; Scanner scan = new Scanner(System.in); message_two = message_one; // an alias? // shouldn't "world!" be lost but message_two // should print the same as message_one? message_one = scan.nextLine(); System.out.println("M1: " + message_one + "\nM2: " + message_two + "\n"); } }
The output looks like:
This is a test sentence. M1: This is a test sentence. M2: Hello!
If message two truly pointed to the address of message one, wouldn't message two print the same as message one in the end? | https://www.daniweb.com/programming/software-development/threads/363549/how-do-object-addresses-work | CC-MAIN-2017-26 | refinedweb | 191 | 74.39 |
Several years ago now, I remember being excited by the idea of webhooks which provided a simple callback mechanism for executing remote microservice commands on the web via an HTTP request. For whatever reason, webhooks never really became part of my everyday toolkit, but with Amazon Lambda functions coming to my attention again recently as Google experimented with a rival service, I’ve started looking at them again.
To get back into the swing of how these things work, I thought I’d tried to put together a Python simple script that could run a search query against a data collection from the UK Parliament API; the request would be triggered from a slash command in Slack.
On the Slack side, you need to define a couple of custom integrations:
- a Slash Command that will define the name of the command that you want to handle and provide a callback URL that should be accessed whenever the slash command is issued;
- an Incoming Webhook that provides a callback URL on Slack that can handle a response from the microservice accessed via the slash command callback.
The slash command is declared and the URL of the service to be accessed needs to be specified. To begin with, you may not have this URL, so it can be left blank to start with, though you’ll need to add it in when you get your callback service address. When the callback URL is requested, a token is passed along with any extra text from the slash command string. The callback service can check this token against a local copy of the token to check that the request has come from a known source.
The incoming webhook creates an endpoint that the service called from the slash command can itself callback to, providing a response to the slash command message.
To handle the slash command, I’m going to develop a simple microservice on hook.io using Python 2.7. To begin with, I’ll define a couple of hidden variables that I can access as variables from my callback script. These are a copy of the token that will be issued as part of the slash command request (so I can verify that the service request has come from a known source) and the Slack incoming webhook address.
I can access these environment variables in my script as elements in the Hook['env'] dict. The data package from Slack can be accessed via the Hook['params'] dict.
The service definition begins with the declaration of the service name, which will provide a stub for the hook.io callback URL. The automatically generated Home URL is the one that needs to be provided as the callback URL in the Slack slash command configuration.
The code for the service can be specified locally, or pulled in from a (public) gist.
In the service I’ve defined, I make a request over the Parliamentary research briefings dataset (others are available) and return a list of keyword matching briefings as well as links to the briefing document homepage.
import json, re import urllib2 from urlparse import urlparse from urllib import urlopen, urlencode class UKParliamentReader(): """ Chat to the UK Parliament API """ def __init__(self): """ Need to think more about the structure of this... """ pass def qpatch(self,query): t=[] for a in query.split('or'): t.append('({})'.format(a.strip().replace(' ',' AND '))) return ' OR '.join(t) def search_one(self,query, typ='Research Papers',page=0,ps=100): url='' urlargs={'_view':typ,'_pageSize':ps,'_search':self.qpatch(query),'_page':page} url='{}?{}'.format(url, urlencode(urlargs)) data =json.loads(urlopen(url).read()) response=[] for i in data['result']['items']: response.append("{} [{}]".format( i['title'],i['identifier']['_value'])) return response def search_all(self,query, typ='Research Papers',ps=100): return def responder(self,hook): ukparl_token=hook['env']['ukparl_token'] ukparl_url=hook['env']['ukparl_url'] r=self.search_one(Hook['params']['text']) r2="; \n".join(r) payload={"channel": "#slashtest", "username": "parlibot", "text":"I know about the following Parliamentary research papers:\n\n {}".format(r2)} req = urllib2.Request(ukparl_url) req.add_header('Content-Type', 'application/json') response = urllib2.urlopen(req, json.dumps(payload)) u=UKParliamentReader() if Hook['params']['token'] == Hook['env']['ukparl_token']: u.responder(Hook)
With everything now set up, I can make use of the slash command:
Next up, I’ll see if I can work out a similar recipe for using Amazon AWS Lambda functions…
See also: Chatting With ONS Data Via a Simple Slack Bot
NOTE: this recipe was inspired by the following example of using Hook.io to create a Javascript powered slash command handler: Making custom Slack slash commands with hook.io. | https://blog.ouseful.info/tag/webhooks/ | CC-MAIN-2021-31 | refinedweb | 771 | 51.38 |
RFE: Add support for IPv6 on DVR Routers for the Fast-path exit
Bug Description
This RFE is to add support for IPv6 on DVR Routers for the Fast-Path-Exit.
Today DVR support Fast-Path-Exit through the FIP Namespace, but FIP Namespace does not support IPv6 addresses for the Link local address and also we don't have any ra proxy enabled in the FIP Namespace.
So this RFE should address those issues.
1. Update the link local address for 'rfp' and 'fpr' ports to support both IPv4 and IPv6.
2. Enable ra proxy in the FIP Namespace and also assign IPv6 address to the FIP gateway port.
it sounds reasonable to me.
Hi Swami,
I just had a couple of questions.
In item 2 above you mentioned "ra proxy", did you mean ND proxy? That would make this similar to the IPv4 floating case where we do ARP proxy for the addresses.
Also, is part of the assumption that subnet pools/address scopes are used so that the l3-agent correctly configures rules to not drop ingress traffic? Since this FIP namespace is considered a scope boundary where things get marked.
And I'm assuming BGP is out of scope, so you should mention that too.
Thanks.
Hi Brian,
Yes, I mentioned running 'radvd' in the fipnamespace.
Yes it is assumed, this use case is with subnet pools/address scopes, so that the traffic is directed towards the compute node fast-exit-path and also yes the l3-agent correctly configures rules.
Yes BGP is out of scope of this RFE.
We already saw a patch that Ryan Tidwell pushed in for the BGP router to advertise the fixed ips for the DVR routers for fast path entrance traffic.
https:/
This makes sense to me.
IPv6 router needs to serve RA (router advertisement) and running radvd in FIP namespaces sounds reasonable. In case of SLAAC or DHCPv6-stateless, IPv6 addresses are calculated by an MAC address, so there is no concern on IP address duplicates.
This RFE was discussed during today's Neutron Drivers meeting and is approved. Please move ahead with implementation
This bug has had a related patch abandoned and has been automatically un-assigned due to inactivity. Please re-assign yourself if you are continuing work or adjust the state as appropriate if it is no longer valid.
Change abandoned by Slawek Kaplonski (.
Ref: https:/
/review. openstack. org/#/c/ 143917/ /review. openstack. org/#/c/ 138588/
Ref: https:/
Returning a GetClone() weirdness
On 08/11/2013 at 11:50, xxxxxxxx wrote:
Super simple Python in a Generator in R14. I added a userdata for the object and one 'real' number.
For some reason it doesn't seem to refresh properly, and when the real-number userdata updates, it disappears. Any ideas as to what I'm missing?
import c4d

def main():
    someUserData = op[c4d.ID_USERDATA,2]
    obj = op[c4d.ID_USERDATA,1
    obj2 = obj.GetClone()
    obj2.Message(c4d.MSG_UPDATE)
    return obj2
Thanks for your patience, I'm learning : )
Chris
On 08/11/2013 at 20:37, xxxxxxxx wrote:
obj = op[c4d.ID_USERDATA,1 is missing the closing ]
try to have console open while using python generator to view the error origin.
OK, tested in R14: not refreshing in the view, but running the script on the Python generator returns the sphere with the desired radius.
working file:
On 11/11/2013 at 09:22, xxxxxxxx wrote:
Hi Focus3D,
Thanks for reply. I've opened your file, but it has the exact same problem. For instance, if you turn off the Python Generator then back on again, the sphere doesn't appear until another refresh. Tested in R14 and 15. Any thoughts?
Also I do keep the console open, it just doesn't return any errors on this one.
Thanks again.
On 11/11/2013 at 09:56, xxxxxxxx wrote:
c4d.EventAdd()
right ?
d
On 11/11/2013 at 09:58, xxxxxxxx wrote:
Originally posted by xxxxxxxx
c4d.EventAdd()
right ?
d
Never from an Expression/Generator or alike. It could lock C4D in an endless loop of refreshes.
It works when you put the generator before the sphere in the hierarchy. Waiting for an answer from
the devs.
Best,
-Niklas
On 11/11/2013 at 11:11, xxxxxxxx wrote:
Got an answer. It's quite easy, the bits of the clone need to be reset. The easiest way is to pass
c4d.COPYFLAGS_NO_BITS to GetClone().
> obj = op[c4d.ID_USERDATA,1].GetClone(c4d.COPYFLAGS_NO_BITS)
Best,
-Niklas
On 11/11/2013 at 12:54, xxxxxxxx wrote:
Never would have figured that out in a million years. Thanks fellas!
On 13/11/2013 at 07:55, xxxxxxxx wrote:
@Niklas,
nice one, thanks.
In your case, I'm assuming that you cut'n'pasted the code for list.rhtml, so the line on which you are getting the error contains "@recipes.each", which iterates through each member of the collection contained in @recipes.
The error says the @recipes contains the value "nil" (the Ruby equivalent of Java's null), and that the object "nil" does not have a method named "each".
That means that something is wrong in the controller where @recipes is being assigned its value. Make sure your list method looks like this:
def list
  @category = @params['category']
  @recipes = Recipe.find_all
end
If it already does, then that means that find_all is failing to find any recipes in the database (or failing to find the database).
Did you go through part 1 successfully? Does your database contain some recipes?
You could try starting off with my zip file for your code, use my SQL file to initialize your database, and then start working through part 2.
FORUM RULES (!!!!READ THIS FIRST BEFORE POSTING!!!!)
- lukasmaximus M5Stack
Posting rules
As with any forum, we ask you to first search for an answer to your problem or visit the FAQ
section. If you really can't find the answer there, consider which forum category your question best fits. This ensures that all the relevant information is kept in the right places
Make sure to name your posts in a descriptive way
Posting code
In order to make it easier for other users to see your code and copy and paste it for testing
we request that you put your code between 3 backticks ``` which will put your code in a nice scrollable window with syntax highlighting, like so:
from m5stack import *
from m5ui import *

clear_bg(0x111111)

btnA = M5Button(name="ButtonA", text="ButtonA", visibility=False)
btnB = M5Button(name="ButtonB", text="ButtonB", visibility=False)
btnC = M5Button(name="ButtonC", text="ButtonC", visibility=False)
If you are posting information about a terminal command please use 2 back ticks `` before and after the command to differentiate it from code snippets like so.
sudo halt
Whats Allowed
Any posts related to M5Stack, ESP-32, IoT, etc. are more than welcome. If you want to talk off topic, please post in the General section of the forum. No racist, political, religious or otherwise defamatory topics are accepted on these forums. Posting such content will result in your post being deleted and your account being banned.
Languages
If you would prefer to discuss M5Stack in your mother tongue, please avoid posting to the English-language forums, as both users and admins will probably not understand you. There are foreign-language sections in the global communities section which are monitored by a moderator who speaks that language. If you don't see your language yet and there is enough interest, you can ask an admin to create a new language group for you.
Thanks for reading and happy posting
#include <CDataSink.h>
class CDataSink {
public:
  virtual void putItem(const CRingItem& item) = 0;
  virtual void put(const void* pData, size_t nBytes) = 0;
};
Abstract base class for data sinks. Data sinks are objects to which data (usually ring items) can be sent.
By deriving concrete data types from CDataSink, and using the CDataSinkFactory to create sinks based on a URI description of the sink, a program can be written that produces data for a sink without having any knowledge of the details of the sink. This allows those programs to function equally well regardless of the sink for which they are producing data.
The base class is abstract. See METHODS below for a description of the methods a data sink must provide and the expectations for each method.
virtual void putItem(const CRingItem& item) = 0;
Concrete implementations are supposed to provide strategy code that writes an entire ring item to the sink. item is a reference to the item to write. All errors should be reported by throwing a CErrnoException with the errno global set to a value that describes the error.
virtual void put(const void* pData, size_t nBytes) = 0;
Concrete implementations are supposed to provide strategy code that writes an arbitrary chunk of data to the sink. pData points at the data to write and nBytes is the number of bytes to write starting at pData. Errors should be reported by throwing a CErrnoException with the global errno set to a value that describes the error.
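To make the contract concrete, here is a self-contained sketch of the pattern: an in-memory sink derived from a stand-in base class. The CRingItem stand-in below is simplified to a bare byte payload, so its members are illustrative assumptions, not the real NSCLDAQ API.

```cpp
#include <cstddef>
#include <vector>

// Simplified stand-in for the real NSCLDAQ ring item: just a byte payload.
struct CRingItem {
    std::vector<char> payload;
};

// Mirrors the abstract CDataSink interface described above.
class CDataSink {
public:
    virtual ~CDataSink() = default;
    virtual void putItem(const CRingItem& item) = 0;
    virtual void put(const void* pData, size_t nBytes) = 0;
};

// Concrete sink that accumulates everything written to it in memory.
class CMemoryDataSink : public CDataSink {
    std::vector<char> m_buffer;
public:
    void putItem(const CRingItem& item) override {
        put(item.payload.data(), item.payload.size());
    }
    void put(const void* pData, size_t nBytes) override {
        const char* p = static_cast<const char*>(pData);
        m_buffer.insert(m_buffer.end(), p, p + nBytes);
    }
    const std::vector<char>& data() const { return m_buffer; }
};
```

A real sink would write to a file descriptor or ring buffer instead, and would report failures by throwing CErrnoException with errno set, as the method descriptions above require.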
Hey
I'm an experienced dev just getting started with Sketch Plugins.
In the Sketch API reference they are using the ES6 module notation.
For example: import {Application} from './api/Application.js'
But when trying that in the console I get the error: Unexpected use of reserved word 'import'.
All the examples I've seen are in ES5 but why are the docs with ES6 then?
Can someone explain?
Thanks in advance
Apparently context.api() returns an instance of Application according to the comments here:
The ES6 imports are actually from the API source code but it's transpiled with gulp/babel and not to be really used.
The embedded Sketch API is pre-built using babel. There's no way to use ES6 goodies directly in sketch plugins. You have to preprocess your ES6/CoffeeScript/Whatever source code and convert it into ES5.
If you want to use ES6 and npm modules, check this out:
-
Thanks @turbobabr that's awesome.
Python has many applications in the networking field. Developers often use it to communicate with other devices on a network, and during these communications it's often important to get the hostname.
Getting the hostname in Python means fetching the device's name or alias within the network, either by its IP or its domain. Hostnames generally refer to the names of devices and can be used to distinguish between them. Moreover, you can also get the name of the host where your Python interpreter is running.
What is a Hostname?
The hostname is an identifying name/alias which may be unique to every device connected to a network. This hostname can be used to differentiate between devices. Usually combinations of ASCII characters are used as hostnames, but this may vary depending on the network.
In many cases, if device hostnames are changed forcefully, two or more devices can end up with the same hostname. Even though they have the same hostname, they can still have different MAC addresses.
Ways to Get Hostname in Python
There are several ways to get a hostname in Python using modules. Each of these modules functions differently and can behave differently on different operating systems. Some of them might work perfectly on Linux and not on Windows, or vice versa. Make sure you check them on all platforms before implementing them in your code.
Following are some of the ways you can get the hostname –
- Socket gethostname Function
- Platform Module
- Os Module
- Using socket gethostbyaddr Function
1. Socket gethostname Function
Code:
import socket

print(socket.gethostname())
Output:
DESKTOP-PYTHONPOOL
Explanation:
The socket module is one of the most important networking modules in Python. You can communicate over sockets and connect to other devices using them. In the above example, we imported the socket module in the first line and then used the gethostname() function to get the hostname of the device. This function returns the hostname of the device where the Python code is running.
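As a quick extension, the hostname can also be resolved back to an IP address with gethostbyname(); this is a sketch, and both values will differ from machine to machine:

```python
import socket

hostname = socket.gethostname()  # name of the machine running this code
try:
    ip = socket.gethostbyname(hostname)  # resolve that name to an IPv4 address
except socket.gaierror:
    ip = "127.0.0.1"  # fall back if the hostname isn't locally resolvable
print(hostname, ip)
```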
2. Platform Module
Code:
import platform

print(platform.node())
Output:
DESKTOP-PYTHONPOOL
Explanation:
Similar to the socket module, the platform module is widely used to access platform information. This information includes hostnames, IPs, the operating system, and much more. We imported platform in the first line and then called the node() function to get the device's hostname. The node function returns the computer's hostname if available.
3. Os Module
Code:
import os, platform

if platform.system() == "Windows":
    print(platform.uname().node)
else:
    print(os.uname()[1])  # os.uname() doesn't work on Windows
Output:
DESKTOP-PYTHONPOOL
Explanation:
You might have seen the os module used for handling files and directories in Python, but it can also be used to get the device name. uname() is the function that returns the hostname of the system. Unfortunately, it cannot be used on Windows devices, so you have to use an if clause for the code to work everywhere.
4. Using socket gethostbyaddr Function
Code:
import socket

print(socket.gethostbyaddr(socket.gethostname())[0])
Output:
DESKTOP-PYTHONPOOL
Explanation:
Using gethostbyaddr will take care of the hostnames which are linked to their IPs. Normally, on a local network, it’ll return the same hostname passed as a parameter, but on remote networks, it can return the remote hostnames (It’s mentioned in the next sections).
Get Hostname from URL in Python
Many times, you need to extract domain names from the URLs you have. This can be done in several ways in Python: with regex, urllib, string splitting, or other modules. All of them work most of the time, but urllib is the most reliable choice because it's part of the standard networking modules and is present by default, so you don't have to code anything fancy for it.
Code:
from urllib.parse import urlparse

url = urlparse('')
host = '{uri.scheme}://{uri.netloc}/'.format(uri=url)
print(host)
Output:
Explanation:
We first start by importing urlparse from urllib. This can be used to parse URLs into understandable strings. As soon as we initialize urlparse() with a URL, it breaks the URL down into scheme, netloc, params, query, and fragment. Then we can use the scheme and netloc to get the protocol and hostname from the URL.
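To see what urlparse() actually extracts, here is a small sketch; the URL below is a placeholder chosen for illustration (the URL in the snippet above was lost):

```python
from urllib.parse import urlparse

# Placeholder URL for illustration only
url = urlparse('https://www.example.com/recipes?page=2#top')

print(url.scheme)    # 'https'
print(url.netloc)    # 'www.example.com'
print(url.path)      # '/recipes'
print(url.query)     # 'page=2'
print(url.fragment)  # 'top'

host = '{uri.scheme}://{uri.netloc}/'.format(uri=url)
print(host)          # 'https://www.example.com/'
```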
Get Hostname from IP in Python
Sometimes, you have the IP addresses in your code results. Then by using them, you can easily deduce the hostnames of the servers. But importantly, your IP should be working and reachable by your network. In the following example, I’ve used 8.8.8.8 as IP, which is Google’s DNS.
Code:
import socket

print(socket.gethostbyaddr("8.8.8.8"))
Output:
('dns.google', [], ['8.8.8.8'])
Explanation:
We start by importing the socket module and calling the gethostbyaddr function, passing the IP as a string. It returns (hostname, alias list, IP address list) as a tuple, so you can access the hostname by using tuple indexing.
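Since gethostbyaddr() raises socket.herror when the reverse lookup fails, it can help to wrap it in a small helper; hostname_for_ip below is a name made up for this sketch, not a standard function:

```python
import socket

def hostname_for_ip(ip):
    """Return the hostname for an IP, or None if the reverse lookup fails."""
    try:
        hostname, aliases, addresses = socket.gethostbyaddr(ip)
        return hostname
    except (socket.herror, socket.gaierror):
        return None

print(hostname_for_ip("127.0.0.1"))  # usually 'localhost'
```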
See Also
References
- socket.gethostname(): Return a string containing the hostname of the machine where the Python interpreter is currently executing.
- socket.gethostbyaddr(): Return a triple (hostname, aliaslist, ipaddrlist).
- platform.node(): Returns the computer’s network name (may not be fully qualified!). An empty string is returned if the value cannot be determined.
Debugging Angular apps created with Angular CLI in WebStorm
Angular CLI can help us bootstrap a new Angular app with a ready to use TypeScript and Webpack configuration. In this post we’ll see how to debug such apps in WebStorm.
If you have never used JavaScript debugger in WebStorm before, we recommend.
- Run npm start to get the app running in the development mode.
You can do this either in the terminal or by double-clicking the task in the npm tool window in WebStorm.
Wait till the app is compiled and the Webpack dev server is ready. You can see the app running in the browser.
Note that when the dev server is running, the app will automatically reload if you change any of the source files.
- Check if a run/debug configuration named Angular Application already exists in WebStorm. If not, create a new JavaScript debug configuration in WebStorm (menu Run – Edit configurations… – Add – JavaScript Debug) to debug the client-side TypeScript code of your app. Paste into the URL field.
In these IDE versions, you need to configure the mapping between the files in the file system and the paths specified in the source maps on the dev server in your debug configuration. This is required to help WebStorm correctly resolve the source maps. The mapping should be between the src folder and webpack:///./src. Add this value to the table with the project structure in the debug configuration. To get this mapping, we investigated the content of the file. This is a source map file for the bundle that contains the compiled application source code. Search for main.ts, the main app’s file; its path is
webpack:///./src/main.ts.
If you have any problems with debugging your application, please contact our tech support. Please share information about your IDE version and breakpoint locations and, if possible, your project.
Your WebStorm Team
84 Responses to Debugging Angular apps created with Angular CLI in WebStorm
Sean says:February 1, 2017
Ekaterina Prigara says:February 1, 2017
Thanks for reporting the issues, we’re investigating and hopefully will be able to fix them soon.
Sean says:February 1, 2017
Thanks, I will standby for any help needed.
regards
Brad S says:February 4, 2017
I work in Node.js, serverless-webpack, Typescript. I have no clue on how to get debugger to work in this environment. Is there help for this?
Ekaterina Prigara says:February 6, 2017
Do you know any open-source app with a similar setup we can try?
In theory, it should all “just work” if you have source maps (devtool: “source-map” in your Webpack config).
Gavin says:April 17, 2017!
Konstantin Ulitin says:April 17, 2017
Can you please open a support request and attach idea.log there?
Zuriel says:May 19, 2017?
Ekaterina Prigara says:May 19, 2017.
Ron Zeidman says:July 3, 2017
Latest 2017.2
I’m guessing it broke when webpack changed the paths from “.” to absolute values
Tim Benke says:November 1, 2017
They’ve added this to Chrome 63, which is available in Canary right now.
Ron Zeidman says:June 29, 2017
Webpack has changed their paths so debugging no longer works out of the box.
This is the correct mapping needed:
found it by reading this post: and looked at the “scripts” location in the debug window.
Ekaterina Prigara says:June 29, 2017
What WebStorm version do you use? Starting from version 2017.1 WebStorm doesn’t need any mappings to debug Angular CLI apps.
Ekaterina Prigara says:July 4, 2017
The fix will be available in the next EAP build:
Claudio says:June 30, 2017
I’m using Webstorm 2017.1.4 and I upgrade my application with angularCli 1.1.3: the debug in webstorm doesn’t works no more (instead the debug in Chrome still works). I try to create an new clean application with angularCli 1.1.3 and the debug in Webstorm still not works.
Bastian says:June 30, 2017
I have the same problem. Using IntelliJ Ultimate 2017.1.4 and angular-cli 1.2.0
The problem started with the upgrade to angular-cli 1.1.2 (1.1.1 still works fine), with no breakpoints working anymore. Any ideas how to get it up and running again?
lena_spb says:June 30, 2017
This is likely the problem caused by recent angular-cli changes: currently webpack generates absolute Windows paths in sourcemap that break the debugging ().
As a workaround, please, try the following:
– in terminal, run
ng ejectin project root directory
– open the generated webpack.config.js, in line 385 change moduleFilenameTemplate value to
info =>
webpack:///${info.resourcePath}:
new SourceMapDevToolPlugin({
"filename": "[file].map[query]",
"moduleFilenameTemplate": info =>
webpack:///${info.resourcePath},
"fallbackModuleFilenameTemplate": "[resource-path]?[hash]",
"sourceRoot": "webpack:///"
}),
– run
npm startto start the application, then use your JavaScript debug configuration for debugging
Oskar Emil Skeide says:April 9, 2018
I this still an issue ?
Using angular-cli 1.7.3 and WebStorm 2018.1 I have followed the instructions in the blog post but breakpoints in WebStorm are not hit.
Ekaterina Prigara says:April 9, 2018
Hello,
Can you please share a bit more information about your app and where the breakpoints exactly are.
We have tried to debug an app generated with Angular CLI 1.7.3 on Windows and the breakpoints put in the event handlers work fine.
The workaround suggested in another comment applies only to WebStorm 2017.1 and earlier.
Ron Zeidman says:July 3, 2017
You can eject like lena_spb said or just add at the Run/Debug configurations the following path:(Your full path)
Daniel Bunte says:July 13, 2017
Thank you Ron, this one worked for me!
In my example it’s
Oskar Emil Skeide says:April 9, 2018
Didn’t work for me.
Returns this error:
10:18 Error running ‘Debug with Chrome’: Illegal char at index 32: C:/Workspace/my-project\/webpack:/C:/Workspace/my-project
Lukas says:July 11, 2017
For me it just works fine (using IntelliJ 2017.1.5 and Angular 4 with custom webpack 2 setup which is still a leftover from our angular 2 setup).
However, I can see the objects passed to or declared in any function I debug but I cannot see the values of any member variable. IntelliJ says that all of them are undefined, even after they’ve got assigned a static value during the debug operation.
I configured webpack to use
devtool: 'source-map'and already tried to map the remote url to the full path, but nothing changed.
Can anyone help me please?
Lukas says:July 11, 2017
I found a solution. I changed the devtool to eval-source-map, which seems to work now.
Thanks for this nice tool!
Ekaterina Prigara says:July 11, 2017
Great that it works fine now! Interesting that the eval-source-map option performed better than source-map, given that they are basically the same. We’ll have a closer look.
TomN says:August 24, 2017
Is Live Editting possible with angular CLI/webpack and webstorm? If so how?
Ekaterina Prigara says:August 25, 2017
Angular CLI projects have auto-reload on changes enabled by default. You don’t need to use Live edit for that.
Mathieu Paquette says:December 15, 2017
Not necessary. You can use HMR if you want to enable kind of live edit.
NairN says:October 24, 2017
Hello!
I have been trying to debug an Angular CLI application with a NodeJs back end server. Both are running on the same machine and using default ports. I have followed the instructions as mentioned on this page and also tried some other options, but I can’t get Webstorm to debug properly. I have the chrome plugin installed. I am using the latest version of Webstorm.
Every time that I click the debug option with the custom configuration settings mentioned here the page does not load. If I open the page on another tab then the page loads. However, I need to wait about ~2 mins for the debugger to stop at the first break point. I don’t know what’s happening. Please help.
NairN says:October 24, 2017
I have already referred to the following links to get debugging to work properly.
Oksana Chumak says:October 24, 2017
Hi NairN,
Can you please submit a support request () and attach screenshots of your run/debug configurations as well as the contents of your log folder (Help > Show log in…)?
If the archive is quite big, you can upload it to and let us know its name.
Graham Tilson says:December 3, 2017
I’m debugging an Angular 4 CLI app using PHP Storm 2017.2.4. So I can set the breakpoints in Chrome and it will stop at that line in Storm and allow me to view variables and step forward etc. That’s all great, but what I really want to do is set the breakpoint in Storm and that bit doesn’t seem to work.
Ekaterina Prigara says:December 4, 2017
Can you please provide a bit more details: what exact @angular/cli version do you use? what happens with the breakpoints put in the editor?
Mathieu Paquette says:December 15, 2017
Hello, just tried this tutorial with Angular-CLI 1.6 with IntelliJ Ultimate 2017.3 and all my breakpoints aren’t hit. I’m using the same exact setup with an older version of CLI and this is working fine. Could you please revise your work with the latest version and tell me what’s going on? Thank you very much.
lena_spb says:December 18, 2017
is fixed in 2017.3.3. As a workaround, run ng eject, then use npm start to run the app on the webpack web server instead of using ng serve
Ekaterina Prigara says:December 18, 2017
There’ve been some changes in the way Angular CLI 1.5+ configures the source maps. We’ve made some fixes on our side that will address the problem, they will land in WebStorm 2017.3.3.
Sebastian says:December 18, 2017
Could you update the blog post to reflect the latest changes in 2017.3?
Ekaterina Prigara says:December 18, 2017
Hi Sebastian,
What changes exactly do you mean? That you can have Chrome DevTools open? The blog post doesn’t mention that you can’t. And it doesn’t change the steps you need to do debug the app in WebStorm, does it?
Sebastian says:December 18, 2017
It seems that its now possible to have devtools open in chrome while debugging with IntelliJ.
Ryan P says:January 15, 2018
I’m pulling in my webpack bundled files (served locally using ng serve at) into a website that is running at, but the breakpoints aren’t being hit. The main site, on port 58177, serves up the tags to the webpack files on port 4200. Am I missing something?
Ekaterina Prigara says:January 17, 2018
We will reply in the support ticket you’ve created.
andrea says:January 19, 2018
Hello,
I have followed the instructions in this post, and everything works fine, apart from a detail.
Breakpoint are not being hit during the bootstrap of the applications. Afterwords, during the usage of the application, they are hit.
I’m using Intellij Ultimate 2017.3
The version of the cli is 1.6.4
I’ve created a new app with “ng new” and added some code just to have a place where to put the breakpoints.
import {Component, OnInit} from '@angular/core';

@Component({
  selector: 'first-shipping',
  templateUrl: './shipping.component.html',
  styleUrls: ['./shipping.component.css']
})
export class ShippingComponent implements OnInit {
  constructor() {
    console.log(`constructor`); // BREAKPOINT 1 here
  }
  ngOnInit() {
  }
  onClick() {
    console.log(`onclick`); // BREAKPOINT 2 here
  }
}
During the boostrap, the constructor is called but the breakpoint 1 is ignored.
When i click on the button bound to onclick method, the app correctly stops at breakpoint 2.
Workaround: if I put breakpoint 1 in Chrome instead of Intellij, the breakpoint is not ignored.
Ekaterina Prigara says:January 22, 2018
Hello, please see the limitation described in the last paragraph of this blog post. The breakpoints in the code that is executed on page load might not be hit when you start the app for the first time: WebStorm needs to load the source maps to resolve them and it might not happen by the time the code under the breakpoint is executed.
Alan Clarke says:March 14, 2018
These instructions do not work! Webstorm 2017.3.5. This is an absolute deal breaker in terms of me continuing to pay for a subscription for this software when it cannot even debug.
Ekaterina Prigara says:March 14, 2018
Hello Alan,
Sorry to hear that you had a problem with the debugger.
Can you please provide a bit more details about it? What Angular CLI version do you use? Where the breakpoint is put?
We have just tested debugging a project created with @angular/cli 1.7.3 and everything worked fine.
Alan Clarke says:March 14, 2018
I upgraded @angular/cli from 1.6.4 to 1.7.3, works with newer angular-cli but not fully backwards compatible for older versions.
Thank you for checking the issue and providing a solution.
Alan Clarke says:March 14, 2018
It still does not work, breakpoints are not triggered. Cannot pay for such software.
Ekaterina Prigara says:March 15, 2018
I have just tried debugging the app generated with @angular/cli 1.6.6 (it’s not possible to generate a new project with version 1.6.4 anymore because of the broken @angular-devkit/core dependency) and it also worked fine – the breakpoint put in the click handler (as on the screenshot in the post) was hit.
Please contact our technical support and provide a sample project if you want to investigate the issue you have further.
Gerard Carbó Ibars says:April 19, 2018
Not working with WebStorm 2018.1.1 and @angular/cli 1.7.4, neither on npm test or start tasks. Frozen in ‘Connecting to localhost:55464’
Ekaterina Prigara says:April 19, 2018
Can you please send the IDE logs (menu Help – Show logs) to our tech support for the investigation:
So far we couldn’t reproduce the issue with the same WebStorm and Angular CLI versions, so the logs will be helpful. Thanks!
Marc says:April 26, 2018
I am running Version: 2018.1.2 and get the below message when running Debug as instructed in the above article.
Waiting for connection to localhost:24085. Please ensure that the browser was started successfully with remote debugging port opened. Port cannot be opened if Chrome having the same User Data Directory is already launched.
lena_spb says:April 26, 2018
As it’s mentioned in, WebStorm needs to pass
--remote-debugging-port option to Chrome to be able to attach the debugger. But this option can’t be passed to a running Chrome instance. So, if you have modified the Chrome configuration in Settings | Tools | Web Browsers to use your custom User Data directory, and a Chrome instance with the same profile is already launched, WebStorm can’t open the debug port, and the debugger fails to attach. You have to kill all your Chrome instances and re-start the debug session.
If you would like to attach to an already running Chrome instance, you need to use the JetBrains IDE Chrome extension, with Update application in Chrome enabled in the Settings | Build, Execution, Deployment | Debugger | Live Edit, per the instructions at
sweet says:January 4, 2019
forget all this. just use
ng new project-name from npm install @angular/cli
they generate the intellij project folder for you with run configs.
it just works.
Paul says:January 29, 2019
Are there updated instructions for Webstorm 2019? I’m attempting to debug an Angular 7 app and following the details above, my breakpoints don’t trigger.
Process:
1. Add Angular application debug config (localhost:4200, no other settings applied)
2. Add breakpoint within a component method
3.
ng serve
4. Run configuration from step 1 (Chrome opens)
5. Perform action that calls breakpoint-containing method
6. Breakpoint does not trigger
Ekaterina Prigara says:January 29, 2019
Hello Paul,
The steps you have followed should work. We couldn’t reproduce the problem in a simple app. Can you please share a project or some screenshots showing where the breakpoints are set? Is the breakpoint hit after you reload the page in the browser?
Mariusz says:February 12, 2019
Greetings,
i got the issue of debug not working for lazy loaded modules in Angular. Component/Service code from core/shared modules stops correctly but any code from lazy modules is ignored. I get the green breakpoint circles for code from regular modules and the circles are red for code from lazy loaded modules.
Are lazy modules supported?
Things ive tried:
– as described in this article
– in addition installed JetBrains IDE Support extension (chrome) and activated it under Build, … > Debugger > Live Edit and this has shown the same behaviour (regular modules debuggable, lazy loaded modules are not)
Iam using Chrome on macOS, current Angular 7 (new project).
Thanks
lena_spb says:February 14, 2019
Hi Mariusz,
debugging lazy loaded Angular modules works fine for us. Please can you create a new support ticket, providing the detailed information about your setup? The sample project that shows up the issue would be appreciated
meanstack says:March 10, 2019
the latest JetBrains release refuses to hit breakpoints in TypeScript on a Windows Angular app running on https
it keeps loading the main.js operating breakpoints out of THAT
reloading the browser from the IDE Extension does nothing and the app runs with no breakpoints being hit
webpack:///./src/main.ts
this document does not cut it
and this product is high risk for commercial field contracting
when it refuses to hit the breakpoints inside my typescript file
read all the threads and docs and tried everything
eager loaded modules
blue in the face
meanstack says:March 10, 2019
helloworld simple apps are not qualifying as an example to prove this product SHOULD do what its suppose to
the threads on this are unending and its never resolved
meanstack says:March 10, 2019
Ekaterina
if you want to do a remote web session
I will devote some time for you to repair your product
I have a significant app causing this issue
I wont post anything online
I do zoom
contact me at meanstack@hotmail.com and we can proceed and we can help get your product up to snuff and you can see what is preventing it from operating on my machine
I prefer WebStorm over VS Code and I will extend my time and help to get this working
but many in the field are very wary of this not happening; we are trying to perform in the field and it's critical to be able to debug TypeScript with seamless, fast live reload of TypeScript through source maps
– cheers
KEN
Ekaterina Prigara says:March 11, 2019
Hello Ken,
We are sorry to hear that you’re experiencing problems with WebStorm.
Please contact our technical support using this form:
We would need information about the WebStorm and Angular versions you are using as well as information about your project setup (tsconfig.json file, project structure).
Thank you!
Marc says:June 27, 2019
I’m echoing Meanstack/Ken’s comment. I’ve opened up a request but could never get it resolved. Zoom / Remote control didn’t seem to be an option. But I am really discouraged at Webstorm / IntelliJ not working on this. This problem (possibly perpetuated by constant changes in Angular) just seems to have been dragging on forever.
Ekaterina Prigara says:June 28, 2019
Hello Marc, can you please share a bit more information about the problem? Can you send a link to the request?
Artur says:August 27, 2019
Hello!
I have the same problem as above. Can’t debug Angular app in Intelij IDEA (or WebStorm).
We use lazy loading modules too. But I can’t share the code. It’s commercial one.
Ekaterina Prigara says:August 27, 2019
Hello Artur, we would really appreciate it if you share some sample project that would help us better understand the project configuration you have and the problem you’re experiencing. Does this problem happen if you debug your app in Chrome’s DevTools?
Artur says:August 29, 2019
Ekaterina,
ok, I will try to make a sample project later.
There is no problem when I try to debug the project in VS Code. But I prefer WebStorm..
Patrik J says:September 16, 2019
I thought that debugging in VS Code was a pain but at least that works, unlike anything I have tried in WebStorm using numerous tutorials. This just makes the entire IDE pointless and I see no reason paying for it, especially since it seem to have been a problem for MANY years and never gets fixed.
Ekaterina Ryabukha says:September 16, 2019
Hi, could you kindly clarify what exactly is not working as expected? Have you tried creating an issue regarding your problem so that we could help you solve it? If not, please create one here:
Patrik J says:September 22, 2019
Hi,
Please provide a working guide/tutorial for latest WebStorm on how to get a single breakpoint working in WebStorm debugger with a new angular application generated by the Angular CLI, using latest version of Chrome and latest version of NodeJS. Everything installed with default settings. This is more than I have been able to manage for a few days now, and based on other comments over the years, I am far from the only one with these problems.
Let's say the breakpoint should be this:
export class AppComponent {
title = 'nameofapp';
}
Thanks.
Patrik J says:September 22, 2019
Or even better, a video of how to do it, that unlike the one linked to at the top of this page actually work. Thanks.
Patrik J says:September 22, 2019
I was actually able to make this work by ignoring written instructions but instead “figure it out”. If it still works fine in a day or two, I will return with a guide on how to do it.
Ekaterina Ryabukha says:September 23, 2019
Hi, please keep us posted on how it works and, if possible, let us know which steps worked for you. Meanwhile, we’ll review the information provided in this blog post and think about whether it can be improved somehow.
Christian says:November 1, 2019
Hi, I am working on a SaaS full stack project. In webstorm I have all my backend nodeJS servers and my front-end Angular UIs. Notice the plural.
Up to recently, I had one Angular UI, and basic setups for Debugging were hitting breakpoints without any problem. But when I started dealing with 2 UIs, I moved to the ‘Project’ structure of Angular.
Meaning that instead of having /dev/angular-project/src/app as normally we have, I now have the following structure:
Main angular is in /dev/angular-dir/
Then projects and libraries are in ./projects/project1, ./projects/project2, ./projects/library, all from within /dev/angular-dir/.
The main dir is holding the node_modules and package.json and angular.json. Each project subdir hold their ts-config etc.
All versions are pretty much the latest.(Angular 8 and Webstorm)
Now my breakpoints are not being hit anymore.
Any idea if I need to change something in the configurations?
Thanks
Ekaterina Prigara says:November 4, 2019
Hello Christian,
It sounds like a problem with the source maps. Are breakpoints hit when you debug the app in Chrome’s DevTools?
In WebStorm, in the “Remote URLs of local files” section of the JavaScript debug configuration, can you please try to specify the URLs on which each part of your app is running next to the corresponding project folder (e.g.
./projects/project1->
localhost:4200/project1 (here should be your URL)). Hope it helps!
Leo says:December 22, 2019
Ekaterina Prigara can you reply to my personal email at beta@tiac.net ?
I am new to angular 8 with node.js and Chrome.
I need a simple step by step explanation of how to debug…have no idea what Karma or Protractor is….I cannot download anything to my laptop due to company rules.
Debugging is so important in developing programs…
Can you help? JK
Ekaterina Prigara says:December 23, 2019
Hello Leo,
You can debug Angular applications in the same way you can debug any other client-side JavaScript application. You can use Chrome’s DevTools or follow the steps in this blog post to debug in WebStorm.
You can read about testing strategies for Angular applications in this official guide:
We are not experts on Angular development so we can’t provide more details on using the technologies in general, I’m sure there are also lots of great tutorials on working with Angular and testing and debugging apps on the internet.
If you have any WebStorm-specific questions, please contact our tech support.
Yuval says:April 20, 2020
Tried that with WS 2020.1 and the previous 2019.x on Mac OS. Breakpoints are not working. The simplest Hello world example:
> npm install -g @angular/cli
> ng new my-dream-app
> cd my-dream-app
> npm start
Made sure that Angular plugin is on and placed and configured a Run/Debug for JS with. Added a console.log() at app.component.ts constructor and placed several breakpoints there and at different places in app.component.html (Angular lines):
etc.
Tried that also with another example with lines inside a .ts file – nothing worked, the breakpoints are not halting. It does however show the log:
Angular is running in the development mode. Call enableProdMode() to enable the production mode.
client:52
[WDS] Live Reloading enabled.
And for instance when an error occurs.
Please advise,
Yuval
Yuval says:April 20, 2020
The “ng add @angular/material” line is wrong translation by commenting system.. (?!)
The lines are:
“# div class=”card card-small” (click)=”selection.value = ‘build'” tabindex=”0″”
“# div class=”terminal” [ngSwitch]=”selection.value””
lena_spb says:April 20, 2020
Hello!
breakpoints in HTML code are not supposed to work, you can only add breakpoints in javascript/typescript. If it doesn’t work for you, please share a video recording of your steps
Yuval says:April 21, 2020
@lena_spb it is supposed to work if there’s an inline Angular code inside them – that’s why WS allows to place breakpoints on such lines and not on plain HTML lines. But it doesn’t matter because no other place works either.
Anyway here is a video of breakpoints in the “my-dream-app” from Angular CLI installed as described above, with the addition of a “console.log()” on the app.component.ts constructor
Screen video:
Yuval says:April 21, 2020
Also added an “onClick()” that should stop on BP before alerting.
lena_spb says:April 21, 2020
A screen video is so blurry that I can hardly see anything:( Please create a new support ticket () and attach a sample project (including the .idea folder) that the issue can be reproduced with
Yuval says:April 21, 2020
@lena_spb it’s blurry because of Dropbox’s preview display, there’s a download button on the top-right (down arrow icon), you can watch it as a normal video file on your computer and there it’s clear.
Yuval says:April 22, 2020
For future reference: this issue occurred while the parent project was the root project. When opening the Angular project directly it worked fine. WS 2020.1. | https://blog.jetbrains.com/webstorm/2017/01/debugging-angular-apps/?replytocom=350849 | CC-MAIN-2020-29 | refinedweb | 4,605 | 64.81 |
This is an excerpt from the Scala Cookbook (partially modified for the internet). This is Recipe 19.8, “Examples of how to use types in your Scala classes.”
To put what you’ve learned in this chapter to use, let’s create two examples. First, you’ll create a “timer” that looks like a control structure and works like the Unix time com‐ mand. Second, you’ll create another control structure that works like the Scala 2.10.x Try/ Success/Failure classes.Back to top a similar
timer method in Scala to let you run code like this:
val (result, time) = timer(someLongRunningAlgorithm)
println(s"result: $result, time: $time")
In this example, the
timer runs a method named
someLongRunningAlgorithm, and then returns the result from the algorithm, along with the algorithm’s execution time. You can see how this works by running a simple example in the REPL:
scala> val (result, time) = timer{ Thread.sleep(500); 1 }
result: Int = 1
time: Double = 500.32
As expected, the code block returns the value
1, with an execution time of about 500 ms.
The timer code is surprisingly simple, and involves the use of a generic type parameter:
def timer[A](blockOfCode: => A) = {
    val startTime = System.nanoTime
    val result = blockOfCode
    val stopTime = System.nanoTime
    val delta = stopTime - startTime
    (result, delta/1000000d)
}
The
timer method uses Scala’s call-by-name syntax to accept a block of code as a parameter. Rather than declare a specific return type from the method (such as
Int), you declare the return type to be a generic type parameter. This lets you pass all sorts of algorithms into
timer, including those that return nothing:
scala> val (result, time) = timer{ println("Hello") }
Hello
result: Unit = ()
time: Double = 0.544
Example 2: Writing Your Own “Try” Classes
For a few moments go back in time, and imagine the days back before Scala 2.10 when there was no such thing as the
Try,
Success, and
Failure classes in scala.util. (They were available from Twitter, but just ignore that for now.) You decide to create your own class named
Attempt, and because you know you don’t want to use the
new keyword to create a new instance, you know that you need a companion object with an
apply method. You further realize that you need to define
two subclasses named Succeeded and Failed.
Thinking about the API you want, you know the
getOrElse method should return either the result of the computation or a default value.
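To make that design concrete, here is one possible sketch of the Attempt class with its companion object and its Succeeded and Failed subclasses. This is an illustrative implementation of the API described above, not necessarily the exact code from the book:

```scala
sealed abstract class Attempt[A] {
  def isSuccess: Boolean
  def isFailure: Boolean
  def getOrElse[B >: A](default: => B): B
}

object Attempt {
  // The apply method lets callers write Attempt { ... } without `new`.
  def apply[A](f: => A): Attempt[A] =
    try {
      Succeeded(f)
    } catch {
      case e: Throwable => Failed(e)
    }
}

final case class Succeeded[A](value: A) extends Attempt[A] {
  def isSuccess = true
  def isFailure = false
  def getOrElse[B >: A](default: => B): B = value
}

final case class Failed[A](exception: Throwable) extends Attempt[A] {
  def isSuccess = false
  def isFailure = true
  def getOrElse[B >: A](default: => B): B = default
}
```

With this in place, `Attempt("10".toInt).getOrElse(0)` yields 10, while `Attempt("foo".toInt).getOrElse(0)` falls back to the default value 0.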
The Scala Cookbook
This tutorial is sponsored by the Scala Cookbook, which I wrote for O’Reilly:
You can find the Scala Cookbook at these locations:
How to Consume SOAP-Based Web Service With Mulesoft Anypoint Studio
Consuming SOAP-based web services with Anypoint Studio is easy: the framework handles serialization and deserialization as well as SOAP envelope and namespace processing for you.
CXF is a Java web services framework used for SOAP (Simple Object Access Protocol) messaging. It handles all serialization and deserialization as well as SOAP envelope and namespace processing.
Now, we will walk through how to consume SOAP-based web services with Anypoint Studio.
Consume SOAP-Based Web Service
Place an HTTP listener in the source section of the flow and configure it as shown below.
Now place the Web Service Consumer component in the message processor section of the flow and configure it. Click Add Connector Configuration and it will open another window. You can provide your WSDL Location (it can be a web service URL or any physical WSDL location) and click Reload WSDL. It will automatically fill in the Service, Port, and Address for you. Press OK.
Now, you need to select one Operation that you need to perform on the Web Service. In my case, I will use findFlights. This operation is expecting one argument: destination.
Passing Arguments to Web Service
We are expecting the destination as a query parameter in the input request via the HTTP listener and will store the query parameter in a flow variable.
Place TransformMessage between FlowVariable and Web Service Consumer and you can see that the output metadata in DataWeave is generated automatically. You can also see the input data with the flow variable that we defined previously. We will map the destination from FlowVariable to the destination argument required by Web Service.
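For illustration only, the mapping can be expressed in DataWeave roughly like this; the exact element names depend on the WSDL, and `findFlights`/`destination` simply follow the operation described above:

```
%dw 1.0
%output application/xml
---
{
  findFlights: {
    destination: flowVars.destination
  }
}
```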
Transform Response From Web Service
Now we will place one more Transform Message after the Web Service Consumer to map the XML result from the Web Service to JSON format. You can see that the input metadata in DataWeave is generated automatically. You can define the output metadata as per your requirements and perform the mapping.
Testing the Application
You can use Postman to test the application. First, deploy the application with Anypoint Studio.
Now, we will use the HTTP
GET method in Postman to send a request to the HTTP listener at this URL.
Now, you know how to consume SOAP-based web services with Anypoint Studio!
Here is the video tutorial.
A dictionary containing the status of only the active keyboard events or keys. (read-only).
Gets the clipboard text.
Sets the clipboard text.
The current mouse.
A dictionary containing the status of each mouse event. (read-only).
A dictionary containing the status of only the active mouse events. (read-only).
The number of ticks since the last positive pulse. (read-only).
The number of ticks since the last negative pulse.
Steering Actuator for navigation.
The steering behavior to use.
Velocity magnitude
Max acceleration
Max turn speed
Relax distance
Target object
Navigation mesh
Terminate when target is reached
Enable debug visualization
Path update period.
KX_BlenderMaterial
The materials shader.
Ints used for pixel blending, (src, dst), matching the setBlending method.
The material’s index.
Returns the material’s shader.
Set the pixel color arithmetic functions.
Returns the material’s index.
Returns the list of group members if the object is a group object, otherwise None is returned.
Returns the group object that the object belongs to or None if the object is not part of a group.
The object’s scene.
The object’s scaling. [sx, sy, sz]
The object’s local position. [x, y, z]
The object’s world position. [x, y, z]
The object’s local space transform matrix. 4x4 Matrix.
The object’s world space transform matrix. 4x4 Matrix.
The number of seconds until the object ends, assuming 50fps (when added with an add object actuator).
Disables rigid body physics for this object.
Transforms the vertices of a mesh.
Transforms the vertices UV’s of a mesh.
Mouse Sensor logic brick.
current [x, y] coordinates of the mouse, in frame coordinates (pixels).
sensor mode.
Get the mouse button status.
TODO.
Python interface for using and controlling navigation meshes.
Finds the path from start to goal points.
Raycast from start to goal points.
Draws a debug mesh for the navigation mesh.
Rebuild the navigation mesh. | # +----------+ +-----------+ +-------------------------------------+ from bge import logic # List detail meshes here # Mesh (name, near, far) # Meshes overlap so that they don't 'pop' when on the edge of the distance. meshes = ((".Hi", 0.0, -20.0), (".Med", -15.0, -50.0), (".Lo", -40.0, -100.0) ) cont = logic.getCurrentController() object = cont.owner actuator = cont.actuators["LOD." + obj.name] camera = logic.getCurrentScene().active_camera def Depth(pos, plane): return pos[0]*plane[0] + pos[1]*plane[1] + pos[2]*plane[2] + plane[3] # Depth is negative and decreasing further from the camera depth = Depth(object.position, camera.world_to_camera[2]) newmesh = None curmesh = None # Find the lowest detail mesh for depth for mesh in meshes: if depth < mesh[1] and depth > mesh[2]: newmesh = mesh if "ME" + object.name + mesh[0] == actuator.getMesh(): curmesh = mesh if newmesh != None and "ME" + object.name + newmesh[0] != actuator.mesh: # The mesh is a different mesh - switch it. # Check the current mesh is not a better fit. if curmesh == None or curmesh[1] < depth or curmesh[2] > depth: actuator.mesh = object.name + newmesh[0] cont.activate(actuator)
MeshProxy or the name of the mesh that will replace the current one.
Set to None to disable actuator.
when true the displayed mesh is replaced.
when true the physics mesh is replaced.
Immediately replace mesh without delay.
A wrapper to expose character physics options.
Whether or not the character is on the ground. (read-only)
The gravity value used for the character.
The character jumps based on it’s jump speed.
A vertex holds position, UV, color and normal information.
Note: The physics simulation is NOT currently updated - physics will not respond to changes in the vertex position.
The position of the vertex.
The texture coordinates of the vertex.
The normal of the vertex.
The color of the vertex.
Black = [0.0, 0.0, 0.0, 1.0], White = [1.0, 1.0, 1.0, 1.0]
The red component of the vertex color. 0.0 <= r <= 1.0.
The green component of the vertex color. 0.0 <= g <= 1.0.
The blue component of the vertex color. 0.0 <= b <= 1.0.
The alpha component of the vertex color. 0.0 <= a <= 1.0.
Gets the color of this vertex.
The color is represented as four bytes packed into an integer value. The color is packed as RGBA.
Since Python offers no way to get each byte without shifting, you must use the struct module to access color in an machine independent way.
Because of this, it is suggested you use the r, g, b and a attributes or the color attribute instead.
import struct
col = struct.unpack('4B', struct.pack('I', v.getRGBA()))
# col = (r, g, b, a)
# black = (0, 0, 0, 255)
# white = (255, 255, 255, 255)
Sets the color of this vertex.
See getRGBA() for the format of col, and its relevant problems. Use the r, g, b and a attributes or the color attribute instead.
setRGBA() also accepts a four component list as argument col. The list represents the color.
The Delay sensor generates positive and negative triggers at precise time, expressed in number of frames. The delay parameter defines the length of the initial OFF period. A positive trigger is generated at the end of this period.
The duration parameter defines the length of the ON period following the OFF period. There is a negative trigger at the end of the ON period. If duration is 0, the sensor stays ON and there is no negative trigger.
The sensor runs the OFF-ON cycle once unless the repeat option is set: the OFF-ON cycle repeats indefinately (or the OFF cycle if duration is 0).
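As a rough illustration of this timing in plain Python (not the BGE API), the sensor's trigger pattern can be simulated as follows; delay and duration are in frames:

```python
def delay_pulses(delay, duration, frames, repeat=False):
    """Return (frame, trigger) pairs produced by a Delay sensor.

    "+" marks the positive trigger ending the OFF period; "-" marks the
    negative trigger ending the ON period (never emitted when duration == 0).
    """
    triggers = []
    start = 0
    while start < frames:
        pos = start + delay            # end of the OFF period
        if pos > frames:
            break
        triggers.append((pos, "+"))
        if duration == 0:              # sensor stays ON, no negative trigger
            break
        neg = pos + duration           # end of the ON period
        if neg > frames:
            break
        triggers.append((neg, "-"))
        if not repeat:
            break
        start = neg                    # repeat the OFF-ON cycle
    return triggers

print(delay_pulses(30, 20, 110, repeat=True))
# [(30, '+'), (50, '-'), (80, '+'), (100, '-')]
```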
Use SCA_ISensor.reset at any time to restart the sensor.
Like axisValues, but returns a single axis value that is set by the sensor. (read-only).
Note
Only use this for “Single Axis” type sensors otherwise it will raise an error.
The state of the joysticks hats as a list of values numHats long. (read-only).
Each specifying the direction of the hat from 1 to 12, 0 when inactive.
Hat directions are as follows...
Like hatValues but returns a single hat direction value that is set by the sensor. (read-only).
The number of axes for the joystick at this index. (read-only).
The number of buttons for the joystick at this index. (read-only).
The number of hats for the joystick at this index. (read-only).
True if a joystick is connected at this joysticks index. (read-only).
The joystick index to use (from 0 to 7). The first joystick is always 0.
Axis threshold. Joystick axis motion below this threshold wont trigger an event. Use values between (0 and 32767), lower values are more sensitive.
The button index the sensor reacts to (first button = 0). When the “All Events” toggle is set, this option has no effect.
The axis this sensor reacts to, as a list of two values [axisIndex, axisDirection]
The hat the sensor reacts to, as a list of two values: [hatIndex, hatDirection].
Note
This is the identity matrix prior to rendering the first frame (any Python done on frame 1).
This camera’s 4x4 model view matrix. (read-only).
Note
This matrix is regenerated every frame from the camera’s position and orientation. Also, this is the identity matrix prior to rendering the first frame (any Python done on frame 1)..
from bge import logic
cont = logic.getCurrentController()
cam = ...
The influence this actuator will set on the constraint it controls.
Armature sensors detect conditions on armatures.
This article describes why certain blockchain games can be a good long-term investment possibility.
Gaming as an investment.
These games may not benefit much from the latest graphics cards, but put all the more emphasis on creativity, automation, freedom of choice, and collecting. In addition, they utilize blockchain technology features, such as smart contracts and the use of non-fungible tokens.
All these go hand-in-hand with investment possibilities, given that video game collectors and enthusiasts have historically been willing to pay small fortunes for impressive virtual collections or enhancements, which puts a player to a slight advantage over his foes.
The mechanics of collecting have proven themselves in non-gaming areas, such as tradeable sports cards, Magic the Gathering, historical coins, bugs, or ancient Chess figures.
The price of a single item usually is not enormous, but once items are gathered and put into a collection of some sort, the collection's value can escalate dramatically. Also, to keep collectors motivated and interested, the ways of making those collections valuable need to evolve and escalate simultaneously.
For example, tradeable NHL Hockey cards evolved from classic paper rectangles to 2D plastic works of art, and even adding something that belonged to an NHL player as a bonus for the collector.
Instead of a pack of cards we used to buy for a few bucks, we can now purchase bundles that contain a piece of a jersey from a championship or finale game, a part of a hockey stick, an autograph of a player, and so on.
As a matter of fact, blockchain gaming worlds that embrace the use of rarity and scarcity of the game objects in their games can provide something more; the opportunity for investment speculation.
However, an old saying goes one should invest only in what they love or understand, and accordingly, the research process is critical before investing.
In this article, I will also describe the research process needed before investing in blockchain games, and lay out the opportunities that blockchain games provide to today’s players and investors, using The Sandbox ecosystem as an example.
This is because The Sandbox offers not just the potential of financial gain to its investors, but also versatile solutions and investment mechanics.
Enter The Sandbox
The Sandbox is a blockchain-powered multiplayer multi-game virtual world, in which a user can create games and play games that others created, use teleports to literally jump with their avatar from one game into another, and more.
It is a world that turns players into developers without the need to know a programming language, and also a world that turns these creative players into game owners and ecosystem stakeholders. NFTs usage provides a user with true ownership over the earned assets.
No longer will you buy an item in your games and see your money simply disappear. Your NFT has real, tangible value and can be resold if you’re interested. At the end of this cycle, The Sandbox wants the platform to be fully decentralized, where we provide the tools for development and expand the platforms for play.
If this is the first time you are reading about The Sandbox, see the The Sandbox: A Decentralized Virtual Gaming World for more information about The Sandbox ecosystem and its in-game experience and game creation.
To create a game in The Sandbox ecosystem, the user needs to own a special “construction lot”, where it is possible to create the game in a manner not unlike Minecraft. These creation areas are called LANDs and are represented by an ERC-721 non-fungible token (NFT).
These LANDs can also be rented to players who want to develop their games on that particular land. The reason for renting could be its location, the benefits provided by its neighbors, better quality, or, mainly, the presence of unique ASSETs. The ASSETs are the “spice” dedicated to enhancing the entire gaming experience. They could be in-game avatars, NPCs, enemies your hero needs to defeat in an RPG adventure, or any other special element that can further increase a dedicated LAND’s uniqueness.
The more unique ASSETs a LAND has, the more the price of such a LAND will increase. A LAND with high uniqueness will grow in price, and its possible renting revenues from a standard LAND will be significantly different.
The Sandbox metaverse provides ASSETS of common or premium quality with the additional possibility to further enhance, tune, and change its default behavior. Standard ASSETs can be created by players called Artists. Artists can create those ASSETs using The Sandbox dedicated software VoxEdit, and upload them to the ecosystem Marketplace. There, other players can buy the ASSETs using the ecosystem’s main cryptocurrency, called SAND.
Artists can thus monetize their aesthetic craft and obtain a reward in the SAND cryptocurrency, which can be used further in the metaverse, or exchanged for a fiat currency.
So far, Premium ASSETs, on the other hand, can be obtained only with a Premium LAND purchase during a special event. Prices of these LANDs are higher compared to standard ones, but since they provide a player with Premium ASSETs, they are also more likely to gain value over time.
Since we have a picture of what the LAND and ASSET is, the final piece to this puzzle is the main ecosystem currency called SAND, used in all possible transactions, trades, staking, voting, and exchanging across The Sandbox universe. In other words, to buy ASSETs and common or premium LANDs, a user needs to have the SAND currency first.
All transactions of LANDs and ASSETs are paid with SAND. In addition, SAND can also be used for staking and voting.
SAND is a vital aspect of The Sandbox’s investment purposes. As The Sandbox popularity expands and reaches its full potential, the price of this cryptocurrency is bound to rise. This is also dependent on the entire crypto market situation where SAND is subject to volatility just as other currencies are. Nevertheless, from the point of the investor, it is better to buy the currency early and invest it in the premium LANDS, which are limited, and also stake the currency to mint additional coins, with the intention to monetize on its possible gains in the future.
Before further elaborating about how to invest in The Sandbox blockchain-based ecosystems, let’s take a closer look at its parts; one piece at a time.
Parts of the gaming ecosystem
This section summarizes the crucial aspect of the ecosystem with its basic use.
LAND
LANDs, represented as an in-game space for creation, are blockchain-backed tokens (using the ERC-721 standard for non-fungible tokens, aka NFTs) that represent the land lots of The Sandbox Metaverse. Each LAND comes with pre-built default terrain that can be terraformed and modified by the owner or another player authorized by the owner. LANDs allow players to own a portion of the Metaverse and thus be able to host content (ASSETS and GAMES).
LAND ownership will allow the user to:
- Secure desirable locations from the finite amount of LAND in The Sandbox
- Participate in The Sandbox gameplay and metaverse governance
- Host games, events, or any experiences using owned LAND
- Earn SAND by hosting gameplay, organize contests and events, or renting the owned LAND to other players
One LAND is a place suitable for one game, with 96x96x128 in-game meters of play space. It is possible to design larger games, taking up 2×2 LANDs. The Sandbox wishes to continue with enlargement of the land lots and the creation of even larger games as the development continues.
Larger LANDs can be assembled from multiple LANDs, allowing a user to create larger games. These large LANDs are called ESTATEs and are distinguished based on the number of LANDS they are assembled with.
ESTATE S: 9 LANDS
ESTATE M: 33 LANDS
ESTATE L: 144 LANDS
LANDs can be purchased during LAND sales. It is worth noting that there is a discount on LANDS when bought as ESTATES. Keep in mind that when a LAND is traded, all its contents and ASSETS are included in the trade.
Following on this, LAND has a maximum cardinality of 166,464 pieces – there is only a finite amount of LAND tokens, represented as NFTs, and minting more is not possible. Thus, The Sandbox Metaverse is composed of 166,464 LANDS, in a square map of 408×408.
- 123,840 LANDS (∼74%) are available for sale in total.
- 25,920 LANDS (∼16%) will form the Reserve, that will be distributed to partners, creators, and gamers as rewards.
- 16,704 LANDS (∼10%) will remain the property of The Sandbox. They will be used to host special events, feature exclusive games, and ASSETS.
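As a quick sanity check, the supply figures quoted above do add up to the 408×408 map, and the rounded percentages match:

```python
# Sanity-check of the LAND supply figures quoted above.
total_lands = 408 * 408               # the map is 408 x 408 LANDs
for_sale = 123_840                    # ~74% available for sale
reserve = 25_920                      # ~16% for partners, creators and gamers
sandbox_owned = 16_704                # ~10% retained by The Sandbox

assert total_lands == 166_464
assert for_sale + reserve + sandbox_owned == total_lands

for name, n in (("sale", for_sale), ("reserve", reserve), ("owned", sandbox_owned)):
    print(name, round(n / total_lands * 100), "%")
# sale 74 %
# reserve 16 %
# owned 10 %
```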
ESTATES
LANDs can be combined together to form ESTATES (respecting specific quad sizes such as 3×3, 6×6, 12×12, 24×24). In the future, ESTATES will have the potential to be owned by multiple players to form DISTRICTS.
When buying LANDs as an ESTATE, a discount is applied.
SAND
SAND is an essential part of The Sandbox platform, firmly established in the fundamental mechanics that makes it intrinsically tied to The Sandbox platform and its value.
SAND is an ERC-20 utility token built on the Ethereum blockchain that serves as the basis for transactions within The Sandbox, and has the following uses:
- Access The Sandbox platform: Players spend SAND in order to play games, buy equipment, or customize their Avatar character—and can potentially collect SAND through gameplay. Creators can spend SAND to acquire ASSETS, LANDS, and through Staking. LAND Sales drive demand for SAND to purchase LANDS. Artists spend SAND to upload ASSETS to the Marketplace and buy Gems for defining Rarity and Scarcity.
- Governance: SAND is a governance token that allows holders to participate in Governance decisions of the platform, using a decentralized autonomous organization (DAO) structure. They can exercise voting rights on key elements of The Sandbox, such as Foundation grant attributions to content and game creators, and feature prioritisation on the platform Roadmap. SAND owners can vote themselves or delegate voting rights to other players of their choice.
- Staking: Staking is a process in which a user locks the tokens in their possession into a contract that The Sandbox owns. The Sandbox uses a nominal value of the staked tokens, but the owner is still in full possession of their tokens, and is rewarded with regular dividends. Staking is, therefore, very similar to putting money into a savings account in a bank. Staking in The Sandbox is done using SAND, which in turn allows for passive revenues on LANDS: you get more SAND by staking it. This is also the only way to get valuable Gems and Catalysts, needed for ASSET creation.
- Fee Capture model: 5% of all transaction volume carried out in SAND tokens (Transaction Fees) shall be allocated with 50% to the Staking Pool as rewards for token holders that Stake SAND tokens and 50% to the ”Foundation.”
- Foundation: The role of the Foundation is to support the ecosystem of The Sandbox, offering grants to incentivize high-quality content & game production on the platform. To date, the Foundation has funded over 15 game projects and granted 100 artists to produce NFTs ahead of the public Launch in December 2020. The overall valuation of the metaverse grows through the valuation of all games funded by the Foundation, creating a virtuous circle to enable funding for bigger games.
SAND has a maximum supply of 3,000,000,000 tokens, rolled out incrementally over 5 years. SAND tokens will gain a wide variety of uses as The Sandbox develops further, and the team reinvests SAND in its developers.
The image below describes how the revenues gathered from LAND sales and all Marketplace transactions are circulated and recycled to help attract developers and pay developers to create engaging content in the Metaverse.
- For additional information, read the Tokenomics article here.
- For information on how to obtain the SAND tokens, read the How To Get SAND tokens article.
ASSETS
ASSETs bring trading and collecting into the game. Users can use them directly as an Avatar, as decoration, as an addition to their hero, or apply them to anything they own.
I started with animations of characters and animals to fill The Sandbox Marketplace store. What is expected of my work is usually maintaining aesthetic quality and giving an attractive life-likeness to each character. To do so, I create all sorts of animations, so that something as common as “eating” looks unique for each of them.
Sellanes.Gonzalo – The Sandbox Artist FreeLancer
This ASSET excavator, created by Sellanes.Gonzalo, is featured in the Space & Sci-Fi mission theme on the Planet Rift game and will be used to dig up rare resources, deep underground. Planet Rift is an upcoming Title in The Sandbox, part of the Game Makers Fund, and is currently one of the most anticipated games.
Artist
Anyone can apply to become an official team Artist at the fund.sandbox.game website. As an Artist, you can be paid for ASSETs you create, which can be published on the Marketplace, for free now and later using SAND. The Sandbox Marketplace is currently represented by the OpenSea auction site, where users can exchange their ASSETs for Ethereum.
When the Marketplace is fully released, the public will be able to publish ASSETs to the Blockchain in exchange for Gems and Catalysts, which are currently only obtainable through staking SAND on The Sandbox platform while owning LAND. Thus, the Marketplace will be an alternative way of getting Gems and Catalysts, which form a crucial part of The Sandbox economy and overall gaming experience.
In-game assets
ASSETs are additional elements to the virtual world of a Sandbox game, such as player Avatars, NPCs, enemies, decorations, and so on. Many of the ASSETs have customizable traits and mechanics, such as behavior.
Examples:
- Doe: Ground Herbivore — will eat plants, tree leaves; will flee when it encounters predators
- Mummy Enemy: Enemy Fighter — will attack heroes upon encounter
- Spider: Ground Predator — can patrol on ground, detect and attack prey within a given range
- Dragon: Enemy Fighter — will attack users' Avatars upon encounter
These in-game ASSETs come with a default behavior set, which allows users to just drop them into a LAND and see them in action. For instance, a bird will be assigned a flying behavior whereas a fighter NPC will have an attack behavior. On top of that, to add to game complexity, an ASSET can be compatible with various behaviors and have multiple uses.
The Sandbox metaverse provides ASSETS in two quality categories:
- common
- premium – rare, epic, legendary
Users can upgrade those ASSETS and further modify them with Catalysts and Gems.
Catalyst and Gems
Catalysts and Gems are Sandbox’s other ERC-20 utility tokens burnt on usage. Catalysts define your ASSET’s tier and scarcity displayed on the Marketplace, while Gems determine your ASSET’s attributes in conjunction with the Catalysts.
ASSETS have a finite number of copies that can be minted, depending on their rarity:
- Common – Maximum 20,000 copies
- Rare – Maximum 4,000 copies
- Epic – Maximum 1,500 copies
- Legendary – Maximum 200 copies
Similarly to World of Warcraft, where Legendary equipment has sockets waiting to be filled with gems, The Sandbox's Catalysts add empty sockets to a user's NFTs (the tokens that represent a particular ASSET) that can be filled with Gems. As the quality of the Catalyst increases, so does the number of sockets usable on the ASSET. In other words, the higher the tier, the higher the scarcity, and the more powerful and sought-after the ASSET becomes.
- Common Catalyst: 1 Gem socket, up to 25 Attributes; 20,000 copies
- Rare Catalyst: 2 Gem sockets, up to 50 Attributes; 4,000 copies
- Epic Catalyst: 3 Gem sockets, up to 75 Attributes; 1,500 copies
- Legendary Catalyst: 4 Gem sockets, up to 100 Attributes; 200 copies
Attributes define an ASSET's main stats, which are displayed in The Sandbox's game experiences. The more attributes an ASSET has, the more useful it is within The Sandbox, and thus the greater its value.
One Gem provides up to 25 attribute points to an ASSET, and with further use of Catalysts, a user can socket up to 4 Gems with a maximum of 100 attribute points.
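The tier list and the 25-points-per-Gem rule combine into simple ceilings; a quick sketch (the dictionary layout is my own, the numbers come from the lists above):

```python
# Tier data from the Catalyst list above; the 25-points-per-Gem rule
# gives each tier's maximum attribute points.
TIERS = {
    "common":    {"sockets": 1, "copies": 20_000},
    "rare":      {"sockets": 2, "copies": 4_000},
    "epic":      {"sockets": 3, "copies": 1_500},
    "legendary": {"sockets": 4, "copies": 200},
}

POINTS_PER_GEM = 25

for name, tier in TIERS.items():
    max_points = tier["sockets"] * POINTS_PER_GEM
    print(f"{name}: up to {max_points} attribute points, {tier['copies']} copies")
```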
In order to add a Catalyst token to an ASSET, it must first be uploaded to the Marketplace through VoxEdit.
You can check out some additional resources on Gems and Catalysts in these two articles:
Monetizing on user experience with The Sandbox
In this section, I will summarize the above-mentioned aspects with the focus on their monetization potential.
LAND
Much like real-world real estate, users can sell, rent, or exchange their LANDs. As you already know, the uniqueness of the LAND improves its business potential.
Examples:
- Purchasing LAND in unique areas will benefit from the bonuses obtained from its neighbors, like better visibility, region attractiveness, and better portal distribution.
- Purchasing and upgrading LANDs with additional ASSETs can provide a lot of SAND tokens in a later sale.
- Renting owned LAND to other players and hosting games and events is a good way to start acquiring more in-game money.
SAND
You can generate SAND tokens, and thus also their equivalent value in fiat currencies, in the following ways:
- Earning tokens by playing a game. SAND tokens can be acquired by playing a specific game, participating in events, or winning a contest. SAND tokens can also be sent to a friend as a gift or as help at the start.
Investing in the primary token and holding on the exchange
- This is a classical investor approach without the other benefits tied to the game, but it allows you to keep the funds on the Binance exchange to swing trade, accumulate, and save for later use. SAND tokens can be purchased on the Binance exchange in all their main trading pairs.
Token staking using Uniswap exchange
- Liquidity mining incentives seed deep liquidity, offering a strong trading venue for those who wish to trade and use SAND in a non-custodial manner. A user provides SAND tokens and their equivalent value in Ethereum for trading on the Uniswap exchange, via a contract that rewards the staker with cryptocurrency collected from the trading fees other traders pay on the SAND/ETH pair. This approach requires more funds, since in addition to the SAND, an investor needs to stake its USD equivalent in ETH. For example, 350 USD equals 10,000 SAND, and 350 USD is also roughly 0.8 ETH, so to stake 10,000 SAND on Uniswap an investor needs approximately 700 USD (based on the actual ETH price).
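The arithmetic in the liquidity example can be checked directly; a quick sketch (snapshot figures from the text, not live prices, with the ETH price back-derived from the 0.8 ETH figure):

```python
# Back-of-the-envelope check of the SAND/ETH liquidity example above
# (snapshot figures from the text, not live prices).
sand_amount = 10_000
sand_side_usd = 350.0          # 10,000 SAND is roughly $350 in the example
eth_amount = 0.8               # roughly $350 worth of ETH in the example
eth_side_usd = sand_side_usd   # a Uniswap pool needs equal value on each side

total_usd = sand_side_usd + eth_side_usd
implied_eth_price = round(eth_side_usd / eth_amount, 2)

print(sand_amount, "SAND plus", eth_amount, "ETH locks up about", total_usd, "USD")
print("implied ETH price:", implied_eth_price)  # 437.5
```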
Token staking using an owned LAND as a catalyst in the Sandbox platform
In addition to SAND tokens, this staking also generates Gems and Catalysts that can be sold or used to enhance the LAND's overall value. The more LANDs a user owns, the more SAND they obtain through the staking process.
During the first Staking round in The Sandbox early staking program (which is a reward for early adopters), investors earned over 30% APY on their investment.
To start staking the SAND tokens, they need to be sent from an exchange or a user’s wallet to the user’s Sandbox account.
ASSET
Here, I will talk about how to monetize your ASSETS in The Sandbox metaverse. There are a number of ways you can do that:
- By collecting ASSETs and upgrading them with Gems and Catalysts. Upgraded items tend to increase in price, since the number of copies of an Epic, Legendary, or any other rare ASSET is limited.
The value of 350 USD
Since the valuation proximity with the fiat currencies is important to get the basic idea, let’s showcase an example with 350 US dollars and what we can possibly purchase with this investment instrument.
Right now $350 would buy 10,000 SAND, which would, roughly speaking, provide funds for obtaining:
- 1 S size ESTATE [9 LAND NFT bundle]
- 1 Premium LAND [1 LAND NFT plus Smurfs NFT Assets]
One could obtain these by making a single Smart Contract deal.
Technical analysis
The current situation corresponds with the current BTC surge, which pushed SAND to a local low, allowing users to purchase SAND tokens at the upcoming LAND MoonSale at an additional discount. In the following graph, pay attention to the red line – it marks the point where most token sales and purchases happen. From the current point of view, it is a good time to accumulate tokens, while leaving some funds as a backup in case the value drops further. In addition, current trends indicate that around the middle of November the price will once again reach the red zone, where the Fibonacci levels indicate potential future support/resistance zones.
Target audience analysis
The Sandbox market is directed at fans of Minecraft and Roblox. Each of those games has over 100m users. The Sandbox developers believe it is possible to attract players from this audience to The Sandbox metaverse because of the Play-to-Earn system and a no-code approach to game creation. In addition, the similarity in the VOXEL graphics reinforces the connection to these stand-out franchises. This will turn gamers into developers and owners, and inevitably incentivize play. Furthermore, the Binance exchange supports The Sandbox project by providing liquidity, sharing the information, and buying LANDs for special events. This is very likely to strengthen the trust in future prospects of the metaverse, including the potential for profit, in both players and investors.
Why focus on Premium LANDS?
A Premium LAND is a single parcel of LAND that features specific characteristics, including Premium ASSETS on top of that. These LANDS will appear as Yellow LANDS on the Metaverse map and will be defined by two main characteristics:
- Premium Location: Each Premium LAND is located close to either a Partner, an IP, or a TSB Estate. These locations are highly likely to attract more visitors thanks to partner visibility as well as the Transportation System and the recently introduced Portals that allow players to discover new exciting LANDS and their games and travel around the metaverse. Players will arrive directly within worlds through spawn points and portals. They will be able to move to adjacent LANDS and use transportation portals to allow fast-travel between distant LANDs. Portals will allow visitors to easily explore the game experiences in their LAND — similar to the benefit of having a subway station next to your place of business.
- Premium ASSETS: A bundle of exclusive ASSETS to kickstart game creation. When purchasing a Premium LAND, buyers will also receive 4 premium ASSETS: one common, one rare, one epic, and one legendary ASSET. There are only 200 copies of an ASSET with the Legendary rarity.
- Each Premium LAND includes 1 Common, 1 Rare, 1 Epic, and 1 Legendary Premium ASSET. Each Legendary ASSET will showcase a unique set of attributes, behavior, and skins, all of which have a limited scarcity of 200 copies.
- A total of 19,200 LANDS will be for sale = 11.5% of total LANDS
- ≈75% of (classic) LANDs and ESTATEs (14,400 LANDS) + ≈25% of Premium LANDS (4,800) = 19,200 LANDS for sale
- Premium LANDS = 4,800 LANDS + 19,200 (4*4,800) Premium ASSETS
- Premium LANDS ≈ 2,100 SAND
- For additional information about the upcoming pre-sale, read the article here.
Investing example
Now that you are familiar with the specifics of The Sandbox metaverse, let's see an example of an investment with no exact USD valuation.
- Since we know that it's always good to have some assets on the exchange, an investor could purchase SAND tokens on the Binance exchange and keep the tokens for trading or any possible later use.
- If the trading volume suggests there is interest in trading the SAND/ETH pair on Uniswap, it would be good to “farm” more SAND tokens by providing liquidity to the SAND/ETH pair, which requires having an equivalent value of SAND and ETH at one's disposal.
- SAND staking, when combined with some LANDS in the user's possession, provides an interesting return (30% APY in the first round for early supporters, though this number depends on the number of owned LANDS). Thus, it would be good to focus on LAND and Premium LAND purchases during a special event and combine them with a SAND purchase of a size equivalent to the LAND investment.
- Then, the user can create or collect interesting ASSETS and tune them with rare Gems, since there may well be buyers who want them as time goes by.
Outro
Since The Sandbox is designed so that ownership of the game platform is handed over from the developer team to players, all investing activity should rest on the main point: playing the game and enjoying the fun. This way, LAND owners and SAND stakers gain an additional bonus from the Catalysts and Gems they obtain, can upgrade their common ASSETS with them, and can sell them at auction, or directly to other players.
The LAND sales that have already taken place indicate real interest, and the combined attention of a major exchange and partners such as Atari, Square Enix, Crypto Kitties, and Shaun the Sheep signals that this multi-gaming Ethereum-based platform is heading in the right direction.
Security is backed by the blockchain, which makes everything transparent and trackable, while NFT tokens prove and declare every user's ownership forever.
Getting listed on various exchanges and appreciating users for their SAND staking should help to prevent market manipulation.
As I already mentioned above, an investment should only be made in something that you love or understand, and honestly, hands-on experience is the best possible way to get into that state.
I believe that ecosystem development is one of the most important components of the success of any crypto project, and only time will show whether The Sandbox will keep producing good ideas and hold on to its decent status.
Let me finish with my favourite quote from Christopher Browne who once said:
“The time to buy stocks is when they are on sale, and not when they are high priced because everyone wants to own them.”
- For additional information about the upcoming LAND sale, read the LAND MoonSale article.
With all this being said, you should now be quite familiar with why somebody would invest in blockchain gaming and why I believe that investing in virtual leases using the LANDs of The Sandbox will be profitable in the future.
Let me end this overview by wishing you good luck with your research and the investments that follow, in which you should only invest money you are willing to lose.
The End
Competition
Aavegotchis are crypto-collectibles living..
Minecraft integrations: Discover pioneering games, apps, and projects made by talented developers and forward-thinking companies across the world. Use the gamedev tools you know and love to forge blockchain games with advanced design, smart growth, and better monetization.
Axie Infinity is a game about collecting and raising fantasy creatures called Axie. Despite belonging to the same species, each Axie has its own distinct look and abilities.
The complete competition list – Game to Earn official site
#include <wx/richtext/richtextbuffer.h>
This class stores information about an image, in binary in-memory form.
Constructor.
Copy constructor.
Clears the block.
Copy from block.
Makes the image block.
Returns the raw data.
Returns the data size in bytes.
Gets the extension for the block's type.
Returns the image type.
Initialises the block.
Returns true if the data is non-NULL.
Load the original image into a memory block.
If the image is not a JPEG, we must convert it into a JPEG to conserve space; in that case we can make use of image, which is already loaded and scaled, so we don't have to load the image a second time.
Assignment operation.
Implementation.
Allocates and reads from a stream as a block of memory.
Allocates and reads from a file as a block of memory.
Reads the data in hex from a stream.
Sets the data size.
Sets the image type.
Writes the block to a file.
Writes a memory block to stream.
Writes a memory block to a file.
Writes the data in hex to a stream. | https://docs.wxwidgets.org/3.0/classwx_rich_text_image_block.html | CC-MAIN-2018-51 | refinedweb | 182 | 80.38 |
Content Count: 26
Joined
Last visited
Community Reputation: 189 (Neutral)
About Ars7c3
- Rank: Member
Personal Information
- InterestsBusiness
Programming
Help with MiniMax Algorithm for Tic Tac Toe
Ars7c3 replied to Ars7c3's topic in Artificial IntelligenceUnfortunately, that was not the issue. I tried it, but it exhibited the same behavior. The AI will block winning moves, but when it has a winning move it does not capitalize on it.
Help with MiniMax Algorithm for Tic Tac Toe
Ars7c3 posted a topic in Artificial IntelligenceI am currently working on creating an AI player for tic tac toe. After researching, I discovered that the minimax algorithm would be perfect for the job. I am pretty confident in my understanding of the algorithm and how it works, but coding it has proven a little bit of a challenge. I will admit, recursion is one of my weak areas . The following code is my AI class. It currently runs, but it makes poor decisions. Could someone please point out where I went wrong? Thank You! import tictactoe as tic # interface to tictactoe game logic like check_victory class AI: def __init__(self, mark): self.mark = mark def minimax(self, state, player): #end condition - final state if tic.check_victory(state): if player == self.mark: return 1 else: return -1 if tic.check_cat(state): return 0 nextturn = tic.O if player == tic.X else tic.X #generate possible moves mvs = [] for i, mark in enumerate(state): if mark == tic.EMPTY: mvs.append(i) #generate child states of parent state scores = [] for mv in mvs: leaf = state[:] leaf[mv] = player result = self.minimax(leaf, nextturn) scores.append(result) if player == self.mark: maxelle = max(scores) return mvs[scores.index(maxelle)] else: minele = min(scores) return mvs[scores.index(minele)] def make_move(self, board, player): place = self.minimax(board, player) return place
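Two things commonly go wrong in minimax snippets shaped like the one above: intermediate levels of the recursion return a move index where the caller expects a score, and the terminal check credits the win to the player whose turn it is rather than the player who just moved. A self-contained sketch that keeps scores and moves separate (not the thread's code; the board encoding and helper names are my own):

```python
# Self-contained sketch: board is a list of 9 cells, each "X", "O", or "".
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return "X" or "O" if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != "" and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player, me):
    """Score `board` from `me`'s point of view; `player` is next to move."""
    won = winner(board)
    if won is not None:
        return 1 if won == me else -1   # credit the actual winner, not `player`
    moves = [i for i, cell in enumerate(board) if cell == ""]
    if not moves:
        return 0                        # board full, no winner: a draw
    nxt = "O" if player == "X" else "X"
    scores = []
    for mv in moves:
        child = board[:]
        child[mv] = player
        scores.append(minimax(child, nxt, me))
    # Every recursive level returns a SCORE; only best_move picks an index.
    return max(scores) if player == me else min(scores)

def best_move(board, me):
    """Return the index of the strongest empty cell for `me` (board not full)."""
    opponent = "O" if me == "X" else "X"
    moves = [i for i, cell in enumerate(board) if cell == ""]
    def score_of(mv):
        child = board[:]
        child[mv] = me
        return minimax(child, opponent, me)
    return max(moves, key=score_of)

board = ["X", "X", "", "O", "O", "", "", "", ""]
print(best_move(board, "X"))  # 2 -- take the immediate win on the top row
```

The index-picking happens exactly once, at the top level in best_move; everything beneath it trades only in scores.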
Passing Objects
Ars7c3 replied to bhollower's topic in General and Gameplay Programming.
Tutorial: Designing and Writing branching and meaningful Game Conversations in our game
Ars7c3 replied to Koobazaur's topic in Writing for GamesWow! This is really cool stuff, thanks for sharing!
problems with initializing my map
Ars7c3 replied to Ars7c3's topic in General and Gameplay ProgrammingThank You Brother Bob. I still have much to learn. But then again, we're never done learning are we?
problems with initializing my map
Ars7c3 replied to Ars7c3's topic in General and Gameplay ProgrammingI tried that, and it had the same result as before. It only works when the width, height, and layers are all the same. When, for example, i set the width and height to 10, and the layers to 3, it returns an error that says: vector subscript is out of range.
problems with initializing my map
Ars7c3 posted a topic in General and Gameplay Programming
Letting a probability of an object to appear in the scene...
Ars7c3 replied to lucky6969b's topic in General and Gameplay ProgrammingNot.
Simple splash screen?
Ars7c3 replied to Tispe's topic in General and Gameplay ProgrammingAre you looking to have different game states, such as a Splash Screen state, a Menu state, and a Playing state? Or are you simply asking how you would go about making a transparent splash screen background?
48 Hour Challenge Result
Ars7c3 replied to alexisgreene's topic in General and Gameplay ProgrammingWow, this is really good, although I got my butt kicked by space pirates. Keep up the good work, and good luck on your engine!
!HELP! SFML Window Init Problems
Ars7c3 replied to Ars7c3's topic in General and Gameplay ProgrammingThank you Servant of the Lord, it worked! It was a Duh mistake. *facepalm
!HELP! SFML Window Init Problems
Ars7c3 posted a topic in General and Gameplay ProgrammingHello, I've had issues initializing sf::RenderWindow and cannot figure out why the program isn't working. I am getting an unhandled exception error. The program crashes right when i call the create function for RenderWindow. The code is below. Thanks in Advance! //This is the whole Engine.h file #ifndef ENGINE #define ENGINE #include <string> #include <vector> #include <SFML/Graphics.hpp> #include <iostream> #include "Debug.h" using namespace std; class State; class Engine { public: void Init(int Width, int Height,string caption); void CleanUp(); void ChangeState(State *state); void PushState(State *state); void PopState(); void HandleEvents(); void Update(); void Render(); void Run(); bool Running(); void Quit(); sf::RenderWindow *Window(); private: sf::RenderWindow *m_window; vector<State*> m_states; bool m_running; int m_width, m_height; }; #endif //This is the Init function in the cpp file for Engine void Engine::Init(int Width, int Height,string caption) { Debug::Write("Starting"); m_states.clear(); Debug::Write("states cleared"); m_width = Width; m_height = Height; Debug::Write("w/h init"); m_running = true; Debug::Write("running = true"); m_window->Create(sf::VideoMode(Width,Height),caption); Debug::Write("!Engine initialization complete..."); }
- That's not right either ^^^ I don't know why it won't let me post the correct code???
- I actually did include that in my code... for some reason it didn't upload that way. Here is the real code I use to detect collision.
Collision Detection HELP!
Ars7c3 posted a topic in General and Gameplay ProgrammingOkay, so I have been working on a pong game in order to test my very basic game engine, and everything the engine is supposed to do, it's doing. The problem is in the collision code, which is confusing considering I have used this very same code successfully in other games I have made. I honestly have no idea what the issue is, and I was hoping one of you guys could help me. I don't know if this would make a difference or not, but I am using Dev C++. Here is the collision code and how it is used (BTW I made sure the SDL_Rect coordinates are correct). And here is how I call it in the main game loop: if(Collision(paddle2.GetRect(),ball.GetRect())) { cout << "Hit" << endl; }
Internationalization: not loading resource dlls
We have an app (ASP.NET web page). It uses the resource manager to load resources in a specified language from satellite assemblies.
For some reason, while this worked once, it no longer works but now loads all strings from the English stored in the assembly. And the archives of the build got messed up, so going back and finding out when it broke is hard.
Anyone have suggestions on debugging it?
If any parameters are changed, it fails with the expected error (e.g. switch the namespace used in the resource compiler and it won't be able to load anything). If you watch the assemblies loaded in the Visual Studio debugger, the satellite assembly gets loaded at the expected time.
mb
Tuesday, September 28, 2004
Fixed it--nant doesn't build satellite builds quite right with csc.
mb
Thursday, September 30, 2004
Lab Exercise 4: Interfacing with the Terminal
The goal of this week's project is to get further experience with module, hierarchical design, and to expand your ability to communicate between a Python program and the Terminal.
In particular, we will be exploring the ability to pass information from the Terminal to a program when you execute the program. This is called using command-line arguments for a program. The concept is similar to passing arguments to a function in Python, except that you do this in the Terminal when you run a program.
Tasks
If you have not already done so, mount your personal space, create a new project4 folder and bring up TextWrangler and a Terminal.
- Command-line Arguments
Create a new file com.py. Put your name, date, and the class at the top, then import sys. Then put the following line of code in your file and run it.
print sys.argv
What does it do? Run it again, but this time try typing some random things on the command line after python com.py.
What is contained in the variable sys.argv? What data type are the elements of the sys.argv list? How could we make use of that?
From your experimentation, you should see that the variable sys.argv contains a list of the strings on the command line. That means we can use sys.argv to bring information from the command line into a python program. However, we need to be aware of types. Add the following to com.py.
print sys.argv[1] + sys.argv[2]
Then run the program again as python com.py 1 2 and see what happens. Did it do what you expected?
Add another line to your program where you cast sys.argv[1] and sys.argv[2] to integers before adding them together and run your program again. Did it do what you expected that time?
What happens if you run your program as python com.py?
When you are writing any program that expects command line arguments, it is important to check to see if the user actually provided them. This is called error-checking, and is an important component of useful programs. If your program is expecting two arguments, then your program needs to check if the user provided at least two arguments. If they didn't, it's common courtesy to tell them how to use the program properly. This is called a usage statement.
We can always check how many command line arguments there are by looking at the length of sys.argv. At the beginning of your active code, test if the length of sys.argv is less than three (the name of the program plus two arguments). If the user did not provide enough arguments, then print out a usage statement and exit. A usage statement generally looks something like the following.
Usage: python com.py <number> <number>
To exit a Python program, just use the function exit().
To see examples of real usage statements, go to the terminal and type cp or mv with no arguments.
When you are using command-line arguments in your own programs, you should always use the following structure.
import sys def main(argv): # usage statement here # cast the command line strings to the appropriate type and assign # them to local variables # main code here return if __name__ == "__main__": main(sys.argv)
If you follow the above structure, then the only place you will use the sys.argv variable is when you call your main function. Inside your main function, it sees only a list of strings (argv). The usage statement checks the list of strings to see if there are enough, and the next section of code does the necessary casting to set up local variables--with informative names--to hold the command line parameters. This structure follows the concept of modular design: all relevant variables enter a function as parameters.
Rewrite com.py to use the above structure. Run it and verify that it continues to function properly.
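One way to fill in that structure for com.py looks like the following (a sketch, not the only solution; it returns from main instead of calling exit(), which has the same effect here):

```python
# com.py -- add two numbers given on the command line.
import sys

def main(argv):
    # usage statement: expect the program name plus two arguments
    if len(argv) < 3:
        print("Usage: python com.py <number> <number>")
        return

    # cast the command line strings to the appropriate type
    a = int(argv[1])
    b = int(argv[2])

    # main code
    print(a + b)
    return

if __name__ == "__main__":
    main(sys.argv)
```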
Note that many command line programs use flags as part of their command-line parameters. For example, if you run the command ls with the flag -l then it lists many different attributes of the files in a directory. Other flags take numbers or strings after the flag to indicate a value. The flags themselves are optional, and the program has to figure out what flags are given on the command line and then grab any values associated with those flags. Parsing sets of command-line parameters is a standard task that many programs do. There are packages for assisting in that task, or you can write your own code that loops over the command-line strings and figures out what is there. You don't need to do anything this complex for this project, but it is an extension if you want to try.
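If you want to try the flag-parsing extension, the standard argparse package does the bookkeeping for you. A small sketch (the -v and -s flags are invented examples, not part of the lab):

```python
# Sketch of optional flags using the standard argparse package.
# The -v / -s flags are invented examples for illustration.
import argparse

def parse_args(arg_list):
    parser = argparse.ArgumentParser(description="add two numbers")
    parser.add_argument("numbers", type=int, nargs=2, help="two numbers to add")
    parser.add_argument("-v", "--verbose", action="store_true",
                        help="print extra information")
    parser.add_argument("-s", "--scale", type=float, default=1.0,
                        help="multiply the sum by this value")
    return parser.parse_args(arg_list)

args = parse_args(["-s", "2.5", "3", "4"])
result = args.scale * sum(args.numbers)
if args.verbose:
    print("scaling", sum(args.numbers), "by", args.scale)
print(result)  # 17.5
```

The flags themselves stay optional: leaving -s off falls back to the default of 1.0, and argparse generates the usage message for you when the positional arguments are missing.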
- Random Numbers
In Python, we create random numbers using the random package. Start a new file, rand.py, and put your standard header at the top. Then import random.
Create a for loop that loops 10 times. Each time through the loop have it print the result of calling random.random(). Then run your program. It should generate 10 numbers randomly distributed between 0 and 1. Running the program again should generate 10 different numbers.
You can create many different kinds of random numbers using the random package. For example, if you want integers randomly distributed between 0 and 100, inclusive, use the randint() method of the random package. Using the function random.randint(0,100) will generate numbers in the range 0 to 100, including the endpoints.
You can also create numbers following a Gaussian distribution using the random.gauss function, which takes two arguments: the mean and standard deviation. Try modifying your program to generate different types of random numbers and see what happens.
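The three kinds of random numbers described above can be generated side by side (the output changes on every run, by design):

```python
# Ten samples from each of the three generators discussed above.
import random

uniform01 = [random.random() for _ in range(10)]       # floats in [0, 1)
ints = [random.randint(0, 100) for _ in range(10)]     # ints in [0, 100], inclusive
gaussians = [random.gauss(50, 10) for _ in range(10)]  # mean 50, std dev 10

print(uniform01)
print(ints)
print(gaussians)
```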
When you are done with the lab exercises, please begin the project. | http://cs.colby.edu/courses/F15/cs151s-labs/labs/lab04/ | CC-MAIN-2017-51 | refinedweb | 1,000 | 66.64 |
How to use facesMessages.add or facesMessages.instance().addDirk Ho Dec 28, 2008 2:34 PM
Hello,
I read in Seam Reference that it is possible to use facesMessages.add and/or facesMessages.instance().add to add messages to the next rendered page.
Now, I have a page buyCar.xhtml. If the user bought a car and it was written to the database successfully, there is a rule in my buyCar.page.xml where I defined that the user is redirected to a page
successful.xhtml.
This page looks like that:
<!DOCTYPE composition PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <ui:composition <ui:define <h1>#{messages['website.headline.successful']}</h1><br /> <h:messages /> </ui:define> </ui:composition>
Now I thought that h:messages would be replaced by my message added via facesMessages.add or facesMessages.instance().add, but nothing is shown except the text between <h1></h1>.
Can you help me and tell me what I have to do to show this message?
My Bean looks like that:
... @Stateful @Scope(CONVERSATION) @Name("buyCarAction") public class BuyCarAction implements BuyCar { @Logger Log log; ... @PersistenceContext private EntityManager em; @In private FacesMessages facesMessages; ... public String buyIt() { Bestellung bestellungNeu = bv.createBestellung(bestellung); log.info("BestellungNeu: " + bestellungNeu); final Kunde kundeNeu = bestellungNeu.getKunde(); log.info("Kunde Neue Bestellung " + kundeNeu); FacesMessages.instance().add("Kauf war erfolgreich"); return "success"; } ...
Thanks and best regards,
Dirk
1. Re: How to use facesMessages.add or facesMessages.instance().addNicola Ben Dec 28, 2008 3:51 PM (in response to Dirk Ho)
Hi,
try to use the injected facesMessages:
facesMessages.add("Kauf war erfolgreich");
instead of:
FacesMessages.instance().add("Kauf war erfolgreich");
Bye
Nic
2. Re: How to use facesMessages.add or facesMessages.instance().addNicola Ben Dec 28, 2008 3:58 PM (in response to Dirk Ho)Oh, sorry, I did not understand your post well.
Between <h1></h1> you'll always get the message
website.headline.successful
which is defined in your messages_XX.properties file.
When you add a facesMessage, this is not going to replace the message between <h1> tags.
It's going to appear at <h:messages/>
Try to have a look at Seam examples folder.
Bye,
N
3. Re: How to use facesMessages.add or facesMessages.instance().addDirk Ho Dec 28, 2008 5:25 PM (in response to Dirk Ho)
Hi Nicola,
thanks for your reply! Probably I couldn't explain it correctly. I know, the facesMessage.add-Message is not displayed between the <h1></h1>, but I also have a <h:messages /> in my successful.xhtml.
But it doesn't display my message.
Perhaps you can tell me, if I perhaps have to configure something to get these messages shown?!?! It would be very kind if you could help me.
Thanks and best regards,
Dirk
4. Re: How to use facesMessages.add or facesMessages.instance().addNicola Ben Dec 29, 2008 5:03 PM (in response to Dirk Ho)
what version of Seam are you using?
N
5. Re: How to use facesMessages.add or facesMessages.instance().addDirk Ho Dec 31, 2008 3:22 PM (in response to Dirk Ho)
Hello Nicola,
thanks for your Response! My version is Seam 2.1.0 BETA 1 (I get told when startig JBoss or deploying my project).
Hopefully it works with this version, because I don't want to update - I habe a review with my prof next week and if it wouldn't work anymore this would be very bad ;)
BTW: I added <web:redirect-filter as read in seam reference to my components.xml
Thanks and a happy new year,
Dirk | https://developer.jboss.org/thread/185628 | CC-MAIN-2018-17 | refinedweb | 591 | 70.8 |
In this section, you will learn how to calculate the sales tax and print out the receipt details for the purchased items. To evaluate this, we have taken the following things into considerations:
1) Basic sales tax is applicable at a rate of 10% on all goods, except books,food, and medical products that are exempt.
2) Import duty is an additional sales tax applicable on all imported goods at a rate of 5%, with no exemptions.
Here we have allowed the user to enter the quantity of item, name, price,whether they are exempted or not and whether they are imported or not. From this data, we have calculated the sales tax and created a receipt which lists the name of all the purchased items and their price (including tax), finishing with the total cost of the items, and the total amounts of sales taxes paid.
Here is the code:
import java.util.*; class Tax { int qty = 0; String prodName = null; float price = 0.0f; boolean imported = false; boolean exempted = false; float tax = 0.0f; void calculateSalesTax() { float totalTax = 0.0f; if (imported) totalTax = 0.05f; if (!exempted) totalTax = .1f; if ((imported) && (!exempted)) totalTax = .15f; tax = totalTax * price; } public String toString() { float p = price + tax; return qty + " " + prodName + " at " + p; } } public class SalesTax { public static void main(String[] args) throws Exception { SalesTax st = new SalesTax(); Scanner input = new Scanner(System.in); ArrayList
list = new ArrayList (); int no = 1; while (true) { Tax tax = new Tax(); System.out.println("Add Products: " + no); System.out.print("Quantity: "); int qty = input.nextInt(); tax.qty = qty; System.out.print("Product Name: "); String prod = input.next(); tax.prodName = prod; System.out.print("Price: "); float p = input.nextFloat(); tax.price = p; System.out.print("Is it Imported[y/n]: "); String imp = input.next(); if (imp.toLowerCase().equals("y")) tax.imported = true; System.out.print("Is it Exempted[y/n]: "); String exe = input.next(); if (exe.toLowerCase().equals("y")) tax.exempted = true; tax.calculateSalesTax(); list.add(tax); no++; System.out.print("Add More Products [y/n]: "); String add = input.next(); if (add.toLowerCase().equals("n")) break; } float tp = 0.0f; float tt = 0.0f; for (int i = 0; i < list.size(); i++) { Tax tax = list.get(i); tp += tax.price; tt += tax.tax; System.out.println(tax); } System.out.println("Sales Taxes:" + tt); System.out.println("Total: " + (tp + tt)); } }
Output:Add Prducts: 1 | http://www.roseindia.net/tutorial/java/core/calculateSalesTax.html | CC-MAIN-2014-52 | refinedweb | 396 | 54.29 |
I am having problems implementing a SearchableStack Class as a subclass of Stack class.
Here is the design:
List Class -->(composition)-->Stack Class-->(inheritance)-->SearchableStack Class
Ok, we are not allowed to use "protected" fields, thus not allowing me to access the private data members of the parent class (yes this is stupid, but so is college).. We do this for security reasons, so I have to use the public functions in which I created for for the Stack to achieve the search..
My code is not returning correct results, when I search for something in the stack, and it is returned - the stack for some reason now has a 0 (zero) in the last stack memory location. And other various bugs.
Any help would be greatly appreciated.
SearchableStack.h
SearchableStack find function:SearchableStack find function:Code:#include "Stack.h" using namespace std; class SearchableStack : public Stack { public: SearchableStack(); bool find(int); private: Stack tempStack; }
ALSO, I tried not haveing the private data member tempStack -- and just created one in the member function, and that was no dice either..ALSO, I tried not haveing the private data member tempStack -- and just created one in the member function, and that was no dice either..Code:bool SearchableStack::find(int n) { int temp; //Stack tempStack2; cout<<"In Find"<<endl; while(!isEmpty()) { cout<<"in FIRST while"<<endl; temp = pop(); tempStack.push(temp); if(temp == n) { cout<<"in FIRST if"<<endl; while(!tempStack.isEmpty()) { cout<<"in SECOND while"<<endl; push(tempStack.pop()); } return true; } } while(!tempStack.isEmpty()) { cout<<"in THIRD while"<<endl; push(tempStack.pop()); } cout<<"@@@ END OF FUNCTION hmm, no return before here?"<<endl; return false; }
Thanks | http://cboard.cprogramming.com/cplusplus-programming/73221-inheritance-using-stack-class-problem.html | CC-MAIN-2015-48 | refinedweb | 274 | 65.83 |
I've written this code, it looks for a file specified in the commandline and says YES if it exists or NO if it doesn't exist :
It should be working but the compiler says :It should be working but the compiler says :Code:
#include <stdio.h>
#include <stdlib.h>
#include <windows.h>
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, char * lpCmdLine, int nCmdShow)
{
WIN32_FIND_DATA FindFileData;
HANDLE hFind;
HANDLE Find;
hFind = FindFirstFile(lpCmdLine[1], &FindFileData);
if (hFind == INVALID_HANDLE_VALUE)
printf("NO\n");
else
printf ("YES\n");
system("PAUSE");
return 0;
}
13 [Warning] passing arg 1 of `FindFirstFileA' makes pointer from integer without a cast
I dont understand it because i'm passing it a char, not an int !
Someone can help me ?
I know i should write only lpCmdLine but i need to specify [1] and [2] in my program... so i need it :( (and i cant use argc because i need WINAPI WinMain coz it has to be an hidden application)
Thanx a lot :) | http://cboard.cprogramming.com/c-programming/52620-trouble-lpcmdline-printable-thread.html | CC-MAIN-2015-06 | refinedweb | 164 | 58.01 |
Ports¶
A Mido port is an object that can send or receive messages (or both).
You can open a port by calling one of the open methods, for example:
>>> inport = mido.open_input('SH-201') >>> outport = mido.open_output('Integra-7')
Now you can receive messages on the input port and send messages on the output port:
>>> msg = inport.receive() >>> outport.send(msg)
The message is copied by
send(), so you can safely modify your
original message without causing breakage in other parts of the
system.
In this case, the ports are device ports, and are connected to some
sort of (physical or virtual) MIDI device, but a port can be
anything. For example, you can use a
MultiPort receive messages
from multiple ports as if they were one:
from mido.ports import MultiPort ... multi = MultiPort([inport1, inport2, inport3]) for msg in multi: print(msg)
This will receive messages from all ports and print them out. Another example is a socket port, which is a wrapper around a TCP/IP socket.
No matter how the port is implemented internally or what it does, it will look and behave like any other Mido port, so all kinds of ports can be used interchangeably.
Note
Sending and receiving messages is thread safe. Opening and closing ports and listing port names are not.
Common Things¶
How to open a port depends on the port type. Device ports (PortMidi, RtMidi and others defined in backends) are opened with the open functions, for example:
port = mido.open_output()
Input and I/O ports (which support both input and output) are opened
with
open_input() and
open_ioport() respectively. If you call
these without a port name like above, you will get the (system
specific) default port. You can override this by setting the
MIDO_DEFAULT_OUTPUT etc. environment variables.
To get a list of available ports, you can do:
>>> mido.get_output_names() ['SH-201', 'Integra-7']
and then:
>>> port = mido.open_output('Integra-7')
There are corresponding function for input and I/O ports.
To learn how to open other kinds of ports, see the documentation for the port type in question.
The port name is available in
port.name.
To close a port, call:
port.close()
or use the
with statement to have the port closed automatically:
with mido.open_input() as port: for message in port: do_something_with(message)
You can check if the port is closed with:
if port.closed: print("Yup, it's closed.")
If the port is already closed, calling
close() will simply do nothing.
Output Ports¶
Output ports basically have only one method:
outport.send(message)
This will send the message immediately. (Well, the port can choose to do whatever it wants with the message, but at least it’s sent.)
There are also a couple of utility methods:
outport.reset()
This will send “all notes off” and “reset all controllers” on every channel. This is used to reset everything to the default state, for example after playing back a song or messing around with controllers.
If you pass
autoreset=True to the constructor,
reset() will be
called when the port closes:
with mido.open_output('Integra-7') as outport: for msg in inport: outport.send(msg) # reset() is called here outport.close() # or here
Sometimes notes hang because a
note_off has not been sent. To
(abruptly) stop all sounding notes, you can call:
outport.panic()
This will not reset controllers. Unlike
reset(), the notes will
not be turned off gracefully, but will stop immediately with no regard
to decay time.
Input Ports¶
To iterate over incoming messages::
for msg in port: print(msg)
This will iterate over messages as they arrive on the port until the port closes. (So far only socket ports actually close by themselves. This happens if the other end disconnects.)
You can also do non-blocking iteration:
for msg in port.iter_pending(): print(msg)
This will iterate over all messages that have already arrived. It is typically used in main loops where you want to do something else while you wait for messages:
while True: for msg in port.iter_pending(): print(msg) do_other_stuff()
In an event based system like a GUI where you don’t write the main loop you can install a handler that’s called periodically. Here’s an example for GTK:
def callback(self): for msg in self.inport: print(msg) gobject.timeout_add_seconds(timeout, callback)
To get a bit more control you can receive messages one at a time:
msg = port.receive()
This will block until a message arrives. To get a message only if one is available, you can use poll():
msg = port.poll()
This will return
None if no message is available.
Note
There used to be a
pending() method which returned the
number of pending messages. It was removed in 1.2.0 for
three reasons:
- with
poll()and
iter_pending()it is no longer necessary
- it was unreliable when multithreading and for some ports it doesn’t even make sense
- it made the internal method API confusing. _send() sends a message so _receive() should receive a message.
Callbacks¶
Instead of reading from the port you can install a callback function which will be called for every message that arrives.
Here’s a simple callback function:
def print_message(message): print(message)
To install the callback you can either pass it when you create the
port or later by setting the
callback attribute:
port = mido.open_input(callback=print_message) port.callback = print_message ... port.callback = another_function
Note
Since the callback runs in a different thread you may need to use locks or other synchronization mechanisms to keep your main program and the callback from stepping on each other’s toes.
Calling
receive(),
__iter__(), or
iter_pending() on a port
with a callback will raise an exception:
ValueError: a callback is set for this port
To clear the callback:
port.callback = None
This will return the port to normal.
Port API¶
Common Methods and Attributes¶
Close the port. If the port is already closed this will simply do nothing.
name
Name of the port or None.
closed
True if the port is closed.
Output Port Methods¶
send(message)
Send a message.
reset()
Sends “all notes off” and “reset all controllers on all channels.
panic()
Sends “all sounds off” on all channels. This will abruptly end all sounding notes.
Input Port Methods¶
receive(block=True)
Receive a message. This will block until it returns a message. If
block=True is passed it will instead return
None if there is
no message.
poll()
Returns a message, or
None if there are no pending messages.
iter_pending()
Iterates through pending messages.
__iter__()
Iterates through messages as they arrive on the port until the port closes. | https://mido.readthedocs.io/en/latest/ports.html | CC-MAIN-2022-33 | refinedweb | 1,105 | 66.33 |
Grapefruit
From HaskellWiki
Revision as of 15:40, 6 February 2009.
3 Versions
Grapefruit has undergone fundamental interface and implementation changes in late 2008 and early 2009. A version without these changes is available as the “classic” version. The classic version contains support for animated graphics, incrementally updating list signals and a restricted form of dynamic user interfaces (user interfaces whose widget structure may change).
The current development version does not have these features at the moment. Graphics support is expected to come back later if someone finds the time to port the respective code to the new Grapefruit interface. List signal and dynamic UI support are intended to come back in a much more generalized form.
A stable release of Grapefruit is expected to happen at the end of January 2009.
4 Download
The current version can be fetched from the darcs repository at. If you want to try out the classic version, please get it from the darcs repository at.
5 Building
You need at least GHC 6.8.3 and Gtk2Hs 0.9.13 to build and use Grapefruit. GHC 6.8.2 can not be used because of GHC bug #1981. Gtk2Hs 0.9.12.1 might be okay but you would have to change the gtk dependency in grapefruit-ui-gtk/grapefruit-ui-gtk.cabal in order to use it. Grapefruit was tested with GHC 6.8.3 and Gtk2Hs 0.9.13 as well as GHC 6.10.1 and a pre-0.10.0 development version of Gtk2Hs.
In addition to Gtk2Hs, you will need a couple of other Haskell libraries. These are all available from Hackage. Cabal will tell you what libraries it wants. Alternatively, you can have a look at the build dependency specifications in the files grapefruit-*/grapefruit-*.cabal.
If you get warnings of the form “Can’t find interface-file declaration for type constructor or class …” when compiling grapefruit-ui or grapefruit-ui-gtk with GHC 6.8.3 then don’t panic. This seems to be because of a bug in the (non-official) type family support of GHC 6.8.3. However, it seems to be harmless.
Grapefruit consists of the following packages, each residing inside an equally-named directory in the source tree:
- grapefruit-frp
- grapefruit-records
- grapefruit-ui
- grapefruit-ui-gtk
- grapefruit-examples
You can use Cabal to build each single package.
There is also a Setup.lhs script in the root directory of the source tree which simplifies the building process. Alas, it only works with older Cabal versions so that it is not usable with the Cabal that comes with GHC 6.10.1. For building the complete Grapefruit library (including examples) in place with this script, run the following command:
runghc Setup.lhs up-to-register configure-options -- build-options -- --inplace further-register-options
If you rather want to install Grapefruit in some directory, use this command:
runghc Setup.lhs up-to-install configure-options -- build-options -- install-options
6 Running the examples
The package grapefruit-examples of the classic versions provides an executable for each example. Since the current version is able to use different UI toolkits (at least in theory), it would not be wise to create executables since these would be fixed to one specific toolkit. Therefore, in the current version, grapefruit-example provides a toolkit-independent library. To run an example, start GHCi and type the following:
import Graphics.UI.Grapefruit.Circuit
import Graphics.UI.Grapefruit.YourToolkit
import Examples.Grapefruit.YourExample
run YourToolkit mainCircuit
At the moment, the only meaningful replacement for
YourToolkit is
GTK and the only meaningful replacements for
YourExample are
Simple and
Switching.
7 Documentation
At the time of writing, Grapefruit lacks documentation seriously. For the current version, documentation should follow very soon. The first official release will most likely contain complete API documentation. There are no plans for providing complete API documentation for the classic version. You will have to live with what’s there. If you have questions, you may always ask the author of Grapefruit as listed in the *.cabal files.
8 Publications and talks
The following publications and talks are related to Grapefruit:
- Wolfgang Jeltsch: Declarative Programming of Interactive Systems with Grapefruit. Software Technology Colloquium of Utrecht Universiteit. May 29, 2008.
- time, place and abstract
- slides (including non-shown additional material)
- Wolfgang Jeltsch: Improving Push-based FRP. 9th.
9 Screenshots
Following are some screenshots from the examples of the classic version:
-. | https://wiki.haskell.org/index.php?title=Grapefruit&diff=26317&oldid=26190 | CC-MAIN-2015-35 | refinedweb | 740 | 50.33 |
If you poke around this site's source, you will only find HTML and CSS. But I built it with Next.JS, a React framework. Seems odd? Here's how (and why).
Some quick terminology
Some common HTML rendering terms
- Client Side Rendering: (CSR) This is the default way React renders all HTML, inside the user's browser.
- Server Side Rendering: (SSR) With SSR a server is used to build the page when a user requests it. The page gets rendered on the server, but deciding what to render can happen on the fly.
- Static Site Generation: (SSG) SSG has the pages render their HTML at build time. They then get served up to users as plain old HTML files.
A standard Next.JS site
If you take a look at this site's codebase, you'll find a very typical Next.JS site. In order to keep the site static, I ensure every page is capable of using SSG, which mostly boils down to never using
getServerSideProps. Its presence tells Next a page should use SSR.
next export
If your entire site can be statically built, then you can tell Next to do just that with the command
next build && next export. After running this command, you will find the site output at
<project root>/.next/server/pages. You can take this directory and host it on say GitHub pages or an S3 bucket.
But, I just use Vercel
Vercel, the creators of Next, provide a hosting solution that handles Next apps perfectly (as you would expect). Since it's free for hobby and personal sites, I just use that instead of using
next export.
Removing the React JavaScript
Static Next pages still load React at runtime. Just like any other Next page, React will kick in and walk the DOM, integrating itself into the page and turning the page into a live React app. This is known as hydration.
Hydration is wasteful and not needed if the page is truly static. You can tell Next to skip all of this by adding this config object to the page:
export
constconst config= {{ :: unstable_runtimeJS falsefalse};
With this config in place, the page will only have HTML, CSS and any bespoke JavaScript you add yourself (more on this below).
Using Next's Link component
Normally, Next's
<Link> is how you link between pages in your app. Using it for a fully static site is questionable though, as it ends up doing nothing at all. If you do use it, keep in mind you must set the
passHref prop
<Link
href="" passHref> <a>checkout<a>checkout Zombo</a>Zombo</a></Link>
Otherwise the
a tag will not get the href, making the link dead when you build the site. This is especially tricky because the Link will work just fine in dev mode without
passHref.
Sprinkling in a little JS
With React removed, I need to add JS myself for any interactivity I want. At the bottom of every page is a theme switcher, which uses JavaScript. The front page also uses JavaScript for a canvas graphic (if you are not on a phone). For these, I just added in JavaScript the old fashion way. Remember
querySelector and
addEventListener? 😃
To do this, I write the needed JavaScript in a standalone file, and then bring it into the page with
dangerouslySetInnerHTML.
import
ReactReact fromfrom 'react';'react';type BespokeJavaScriptPropsBespokeJavaScriptProps == {{ :: prop1 string;string; :: prop2 boolean;boolean;};function myBespokeJavaScript(props:myBespokeJavaScript(props: BespokeJavaScriptProps) {{ //// do stuff}function BespokeJavaScript(props:BespokeJavaScript(props: BespokeJavaScriptProps) {{ returnreturn (( <script<script type="text/javascript"type="text/javascript" ={{={{ dangerouslySetInnerHTML :: __html `${myBespokeJavaScript.toString()};`${myBespokeJavaScript.toString()}; ${JSON.stringify(props)})`,${JSON.stringify(props)})`, myBespokeJavaScript( }}>}}> </script></script> ););}export {{ BespokeJavaScriptBespokeJavaScript };};
Then somewhere else, I just add it to the page as a standard React component, ie
<BespokeJavaScript/>
Downsides and Gotchas
This approach has several problems, some more thought is needed.
The JavaScript gets inlined into every page that needs it. Every page on this site has its own copy of the theme switcher code. Since it's very short, I don't mind too much in this case.
The bespoke code does not get minimized or polyfilled. If you look at the source for this page, you can see the theme switcher code almost exactly as I wrote it, whitespace and all.
Also, Next does not understand this code. During development, it does not get updated with fast refresh, and I also need to account for dev mode in the code itself. This admittedly is a pretty annoying gotcha.
I might plug away at this more and see if I can make improvements. But since my bespoke JS is so minimal, I'm not too bothered (yet...). I am also going to wait to see how server side components play out, as they may impact my approach.
Opting back into React
unstable_Runtimejs is applied per page. If a page needs React, it's easy to turn it back on. This website is brand new, but I do have plans for more interactive pages and for those I will opt back into React.
I like it
So far I really like this approach to building websites. React and Next offer such an excellent development experience. My HTML is always properly formed. I get type checking with TypeScript. I can extract commonalities into components. I don't have to worry as much about pulling in large libraries (such as the syntax highlighting library), as only the resulting HTML is saved. I can also use Next plugins to accomplish common tasks such as image minification.
Not to mention all of the standard "no JavaScript" bonuses apply too: better SEO, usually more performant, no need to worry about client side routing snafus, Hacker News doesn't yell at you, etc. | https://mattgreer.dev/blog/how-i-built-this-static-site-with-nextjs/ | CC-MAIN-2022-40 | refinedweb | 958 | 72.26 |
Is there a way to prefix all sage code in order to include a custom module?
I have a library of commonly used functions and variables which I import, when required, into Sage by using the following commands:
import os, sys
cmd_folder = '/home/username/sage'
if cmd_folder not in sys.path:
    sys.path.insert(0, cmd_folder)
import defaults as d
This allows me to access all my frequently used saved functions and variables. I essentially only use Sage through the web interface, and I would like to know if it is possible to make this code run as a "prefix" so that I no longer have to type this include in every worksheet. | https://ask.sagemath.org/question/8407/is-there-a-way-to-prefix-all-sage-code-in-order-to-include-a-custom-module/ | CC-MAIN-2017-30 | refinedweb | 113 | 52.73 |
In real-world programming, we come across scenarios where we have to make a decision by checking multiple conditions. Using C# Nested if statements, we can write several if or else-if conditions inside another if or else-if condition.
The syntax of the C# Nested If Statement is
if (<condition>)
{
    if (<condition>)
    {
        Statements;
    }
    else if (<condition>)
    {
        Statements;
    }
    else
        Default Statements;
}
else
    Default Statements;
C# Nested If Statement Example
For example, let us assume the requirements for a particular job. The job profile is as follows:
- The candidate should be from the IT or CSC department.
- Academic percentage >= 60
- Age < 50
If all the above conditions are satisfied, then the person is eligible for the post. Let us write C# code for the above scenario.
using System;

namespace CSharp_Tutorial
{
    class Program
    {
        static void Main()
        {
            Console.WriteLine("Enter your Department");
            string Department = Console.ReadLine();
            if (Department.ToUpper() == "IT" || Department.ToUpper() == "CSC")
            {
                Console.WriteLine("Enter your Percentage");
                int Percentage = int.Parse(Console.ReadLine());
                if (Percentage >= 60)
                {
                    Console.WriteLine("Enter your age");
                    int Age = int.Parse(Console.ReadLine());
                    if (Age < 50)
                    {
                        Console.WriteLine("You are eligible for this post");
                    }
                    else
                        Console.WriteLine("Your age is not suitable for the requirement");
                }
                else
                    Console.WriteLine("Your Percentage is not suitable for the requirement");
            }
            else
                Console.WriteLine("Your qualification is not suitable for the requirement");
        }
    }
}
OUTPUT
In this C# Nested If Statement example, we have given the Department name as 'csc'. Since it satisfies the outer condition, execution enters the inner if.
Next, it asks for the Percentage.
The Percentage is given as 40.
This fails the percentage condition, because the Percentage must be >= 60 to reach the innermost if (the age check).

The program therefore skips the inner block and prints the matching else part, i.e.,
Your Percentage is not suitable for the requirement.
Let me try a Percentage of 60 and an age that is not below 50, and you get a different result.
Here, we have given a good Percentage, but the innermost condition, age < 50, fails. So, the inner else block statement is printed.
This time, we have given the wrong department, so the program prints the main else block statement.
| https://www.tutorialgateway.org/csharp-nested-if-statement/ | CC-MAIN-2022-05 | refinedweb | 347 | 51.44 |
perlmeditation Zaxo
<p>I should subtitle this "What I did on my PerlMonks Vacation". We all had to find another profitable use of time during the server move hiatus. This article is about what I did, and how Perl helped.</p>
<READMORE>
<p>I decided to spend the time learning Prolog. I got the GNU implementation, [gprolog]. I also found an online [book] and a [tutorial].</p>
<p>I've barely advanced to baby talk in the language, but I'll try to convey some of its flavor. Prolog is called "rule based" or "logic programming". A program consists of a sequence of assertions, called "predicates". Here is a classic example (adapted from the tutorial):
<code>
% this is a comment, % is like # in Perl
% All men are mortal
mortal(X) :- man(X).
% Socrates is a man.
man(socrates).
</code>
If this immortal wisdom is loaded in an interactive session ('| ?-' is gprolog's baroque prompt):
<code>
$ gprolog
GNU Prolog 1.2.8
By Daniel Diaz
Copyright (C) 1999-2001
| ?- consult('socrates.pro').
yes
| ?- mortal(socrates).
yes
| ?- mortal(aristotle).
no
</code>
Knowing enough from the first two assertions, Prolog was able to confirm the mortality of Socrates with a 'yes'. The previous 'yes' confirmed that socrates.pro was truly consulted (unfortunately, the other conventional extension for Prolog files is '.pl').</p>
<p>How does this work? The online book I cited explains, in a chapter called [Box Model of Execution]. Each predicate is described as a black box with two inputs, <b>Call</b> and <b>Redo</b>, and two outputs, <b>Fail</b> and <b>Exit</b>. If a predicate fails, execution returns via redo to the previous predicate.</p>
<p>This seemed vague and unsatisfactory to me until I hit a trigger word -- <em>backtracking</em>. Light dawned: Prolog acts like Perl's regex engine! In fact it acts like a hyperextended dynamic regex engine, whose 'patterns' can be modified, combined, chopped, sliced and diced.</p>
<p>Perl gave me enough of a Prolog-type mental pathway to give me a flying start. Now I know for sure that studying Prolog will improve my Perl. That is a hell of a good bargain.</p>
<p>I'm gathering more impressions and ideas as I go. If there's interest, I'll enjoy airing them here from time to time.</p>
<p><b>Update: </b>Minor cleanups in text. Added queries to show a more active kind of response from Prolog and confirmation that it never heard of Aristotle.</p>
<p><b>Replies:</b>
<ol>
<li>[merlyn], thanks for the pointer. CPAN.pm wrote its own Makefile.PL, but that one wasn't very successful. I'll install by hand, then see about modernizing the build system. I'd been thinking in terms of a not-yet-existent Inline::Prolog, which would make use of the gprolog compilers (more about them below).</li>
<li>[Hanamaki], it appears that Goddard's Language-Prolog-Interpreter package has a namespace conflict with Shirazi's. I'll certainly look at both.</li>
<li>[FoxtrotUniform], funny you should mention Perl6 :) One of my other observations is on the close resemblance of Perl6's proposed architecture to that of gprolog.</li>
</ol>
++ all around to you guys.</p>
<p>After Compline,<br/>Zaxo</p>
Deleting a private hosted zone
This section explains how to delete a private hosted zone using the Amazon Route 53 console.
You can delete a private hosted zone only if there are no records other than the default SOA and NS records. If your hosted zone contains other records, you must delete them before you can delete your hosted zone. This prevents you from accidentally deleting a hosted zone that still contains records.
Deleting private hosted zones that were created by another service
If a private hosted zone was created by another service, you can't delete it using the Route 53 console. Instead, you need to use the applicable process for the other service:
AWS Cloud Map – To delete a hosted zone that AWS Cloud Map created when you created a private DNS namespace, delete the namespace. AWS Cloud Map deletes the hosted zone automatically. For more information, see Deleting namespaces in the AWS Cloud Map Developer Guide.
Amazon Elastic Container Service (Amazon ECS) Service Discovery – To delete a private hosted zone that Amazon ECS created when you created a service using service discovery, delete the Amazon ECS services that are using the namespace, and delete the namespace. For more information, see Deleting a service in the Amazon Elastic Container Service Developer Guide.
Using the Route 53 console to delete a private hosted zone
To use the Route 53 console to delete a private hosted zone, perform the following procedure.
To delete a private hosted zone using the Route 53 console
Sign in to the AWS Management Console and open the Route 53 console.
Confirm that the hosted zone that you want to delete contains only an NS and an SOA record. If it contains additional records, delete them:
Choose the name of the hosted zone that you want to delete.
On the Record page, if the list of records includes any records for which the value of the Type column is something other than NS or SOA, choose the row, and choose Delete.
To select multiple, consecutive records, choose the first row, press and hold the Shift key, and choose the last row. To select multiple, non-consecutive records, choose the first row, press and hold the Ctrl key, and choose the remaining rows.
On the Hosted Zones page, choose the row for the hosted zone that you want to delete.
Choose Delete.
Type the confirmation key and choose Delete. | https://docs.aws.amazon.com/en_us/Route53/latest/DeveloperGuide/hosted-zone-private-deleting.html | CC-MAIN-2021-17 | refinedweb | 404 | 66.88 |
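The same cleanup can also be scripted. Below is a minimal boto3 sketch (the function name is mine, pagination and error handling are omitted, and the zone ID in the usage comment is a placeholder) that mirrors the console steps above: it deletes every record except the default NS and SOA records, then deletes the zone.

```python
def delete_private_hosted_zone(route53, zone_id):
    # List the zone's records; a zone can only be deleted once every record
    # other than the default NS and SOA records has been removed.
    records = route53.list_resource_record_sets(HostedZoneId=zone_id)["ResourceRecordSets"]
    for rrset in records:
        if rrset["Type"] in ("NS", "SOA"):
            continue  # keep the defaults; they go away with the zone itself
        route53.change_resource_record_sets(
            HostedZoneId=zone_id,
            ChangeBatch={"Changes": [{"Action": "DELETE", "ResourceRecordSet": rrset}]},
        )
    return route53.delete_hosted_zone(Id=zone_id)

# Usage against real AWS (placeholder zone ID):
#   import boto3
#   delete_private_hosted_zone(boto3.client("route53"), "Z0123456789EXAMPLE")
```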
#include <Camera.hh>
Inheritance diagram for Camera:
This sensor is used for simulating standard monocular cameras; it is used by both camera models (e.g., SonyVID30) and user interface models (e.g., ObserverCam).
[virtual]
Initialize the sensor.
Finalize the sensor.
Update the sensor information.
Set the pose of the camera (global cs).
Get the camera pose (global cs).
Set the camera FOV (horizontal).
Get the camera FOV (horizontal).
Set the rendering options.
Get the rendering options.
Get the image dimensions.
[inline]
Get a pointer to the image data.
Get the Z-buffer value at the given image coordinate.
Compute the change in pose based on two image points.
This function provides natural feedback when using a mouse to control the camera pose. The function computes a change in camera pose such that the initial image coordinate a and the final coordinate b both map to the same global coordinate. Thus, it is possible to "grab" a piece of the terrain and "drag" it to a new location.

Naturally, with only two image coordinates the solution is under-determined (4 constraints and 6 DOF). We therefore provide a mode argument to specify what kind of transformation should be effected; the supported modes are translating, zooming and rotating.
Set the path for saved frames.
Enable or disable saving. | http://playerstage.sourceforge.net/doc/Gazebo-manual-0.5-html/classCamera.html | CC-MAIN-2014-42 | refinedweb | 217 | 53.68 |
Where's My Tesla? Creating a Data API Using Kafka, Rockset and Postman to Find Out
February 14, 2020
In this post I’m going to show you how I tracked the location of my Tesla Model 3 in real time and plotted it on a map. I walk through an end to end integration of requesting data from the car, streaming it into a Kafka Topic and using Rockset to expose the data via its API to create real time visualisations in D3.
Getting started with Kafka
When starting with any new tool I find it best to look around and see the art of the possible. Within the Rockset console there’s a catalog of out of the box integrations that allow you to attach Rockset to any number of existing applications you may have. The one that immediately caught my eye was the Apache Kafka integration.
This integration allows you to take data that is being streamed into a Kafka topic and make it immediately available for analytics. Rockset does this by consuming the data from Kafka and storing it within its analytics platform almost instantly, so you can begin querying this data right away.
There are a number of great posts that outline in detail how the Rockset and Kafka integration works and how to set it up but I’ll give a quick overview of the steps I took to get this up and running.
Setting up a Kafka Producer
To get started we’ll need a Kafka producer to add our real time data onto a topic. The dataset I’ll be using is a real time location tracker for my Tesla Model 3. In Python I wrote a simple Kafka producer that every 5 seconds requests the real time location from my Tesla and sends it to a Kafka topic. Here’s how it works.
Firstly we need to setup the connection to the Tesla. To do this I used the Smart Car API and followed their getting started guide. You can try it for free and make up to 20 requests a month. If you wish to make more calls than this there is a paid option.
Once authorised and you have all your access tokens, we can use the Smart Car API to fetch our vehicle info.
vehicle_ids = smartcar.get_vehicle_ids(access['access_token'])['vehicles']

# instantiate the first vehicle in the vehicle id list
vehicle = smartcar.Vehicle(vehicle_ids[0], access['access_token'])

# Get vehicle info to test the connection
info = vehicle.info()
print(info)
For me, this returns a JSON object with the following properties.
{ "id": "XXXX", "make": "TESLA", "model": "Model 3", "year": 2019 }
Now we’ve successfully connected to the car, we need to write some code to request the car's location every 5 seconds and send that to our Kafka topic.
import time
from kafka import KafkaProducer

# initialise a kafka producer
producer = KafkaProducer(bootstrap_servers=['localhost:1234'])

while True:
    # get the vehicle's location using the Smart Car API
    location = vehicle.location()

    # send the location as a byte string to the tesla-location topic
    producer.send('tesla-location', location.encode())

    time.sleep(5)
Once this is running we can double check it’s working by using the Kafka console consumer to display the messages as they are being sent in real time. The output should look similar to Fig 1. Once confirmed it’s now time to hook this into Rockset.
Fig 1. Kafka console consumer output
Streaming a Kafka Topic into Rockset
The team at Rockset have made connecting to an existing Kafka topic quick and easy via the Rockset console.
- Create Collection
- Then select Apache Kafka
- Create Integration - Give it a name, choose a format (JSON for this example) and enter the topic name (tesla-location)
- Follow the 4 step process provided by Rockset to install Kafka Connect and get your Rockset Sink running
It’s really as simple as that. To verify data is being sent to Rockset you can simply query your new collection. The collection name will be the name you gave in step 3 above. So within the Rockset console just head to the Query tab and do a simple select from your collection.
select * from commons."tesla-integration"
You’ll notice in the results that not only will you see the lat and long you sent to the Kafka topic but some metadata that Rockset has added too including an ID, a timestamp and some Kafka metadata, this can be seen in Fig 2. These will be useful for understanding the order of the data when plotting the location of the vehicle over time.
Fig 2. Rockset console results output
Connecting to the REST API
From here, my next natural thought was how to expose the data that I have in Rockset to a front end web application. Whether it’s the real time location data from my car, weblogs or any other data, having this data in Rockset now gives me the power to analyse it in real time. Rather than using the built in SQL query editor, I was looking for a way to allow a web application to request the data. This was when I came across the REST API connector in the Rockset Catalog.
Fig 3. Rest API Integration
From here I found links to the API docs with all the information required to authorise and send requests to the built in API (API Keys can be generated within the Manage menu, under API Keys).
Using Postman to Test the API
Once you have your API key generated, it’s time to test the API. For testing I used an application called Postman. Postman provides a nice GUI for API testing allowing us to quickly get up and running with the Rockset API.
Open a new tab in Postman and you'll see it will create a window for us to generate a request. The first thing we need to do is find the URL we want to send our request to. The Rockset API docs give the base address, and to query a collection you need to append /v1/orgs/self/queries to it, so add this into the request URL box. The docs also say the request type needs to be POST, so change that in the drop down too as shown in Fig 4.
Fig 4. Postman setup
We can hit send now and test the URL we have provided works. If so you should get a 401 response from the Rockset API saying that authorization is required in the header as shown in Fig 5.
Fig 5. Auth error
To resolve this, we need the API Key generated earlier. If you’ve lost it, don’t worry as it’s available in the Rockset Console under Manage > API Keys. Copy the key and then back in Postman under the “Headers” tab we need to add our key as shown in Fig 6. We’re essentially adding a key value pair to the Header of the request. It’s important to add ApiKey to the value box before pasting in your key (mine has been obfuscated in Fig 6.) Whilst there, we can also add the Content-Type and set it to application/json.
Fig 6. Postman authorization
Again, at this point we can hit Send and we should get a different response asking us to provide a SQL query in the request. This is where we can start to see the benefits of using Rockset as on the fly, we can send SQL requests to our collection that will rapidly return our results so they can be used by a front end application.
To add a SQL query to the request, use the Body tab within Postman. Click the Body tab, make sure ‘raw’ is selected and ensure the type is set to JSON, see Fig 7 for an example. Within the body field we now need to provide a JSON object in the format required by the API, that provides the API with our SQL statement.
Fig 7. Postman raw body
As you can see in Fig 7 I’ve started with a simple SELECT statement to just grab 10 rows of data.
{ "sql": { "query": "select * from commons.\"tesla-location\" LIMIT 10", "parameters": [] } }
It’s important you use the collection name that you created earlier and if it contains special characters, like mine does, that you put it in quotes and escape the quote characters.
Now we really are ready to hit send and see how quickly Rockset can return our data.
Fig 8. Rockset results
Fig 8 shows the results returned by the Rockset API. It provides a collections object so we know which collections were queried and then an array of results, each containing some Kafka metadata, an event ID and timestamp, and the lat long coordinates that our producer was capturing from the Tesla in real time. According to Postman that returned in 0.2 seconds which is perfectly acceptable for any front end system.
Of course, the possibilities don’t stop here, you’ll often want to perform more complex SQL queries and test them to view the response. Now we’re all set up in Postman this becomes a trivial task. We can just change the SQL and keep hitting send until we get it right.
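If you would rather script the request than click around in Postman, the same call is easy to assemble in Python. This is a rough sketch (the base URL and key are placeholders, and the helper name is mine) that builds the URL, headers and JSON body described above; you could hand them to any HTTP client, for example the requests library.

```python
import json

# Placeholders: substitute the base address from the Rockset API docs and your own key.
ROCKSET_BASE_URL = "https://api.example.rockset.com"
API_KEY = "YOUR_API_KEY"

def build_query_request(collection, limit=10):
    # Assemble the same POST request we built in Postman: the query endpoint,
    # the ApiKey authorization header, and a JSON body carrying the SQL.
    url = ROCKSET_BASE_URL + "/v1/orgs/self/queries"
    headers = {
        "Authorization": "ApiKey " + API_KEY,
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "sql": {
            "query": 'select * from commons."%s" LIMIT %d' % (collection, limit),
            "parameters": [],
        }
    })
    return url, headers, body

url, headers, body = build_query_request("tesla-location")
print(url)
print(body)
# e.g. requests.post(url, headers=headers, data=body).json()["results"]
```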
Visualising Data using D3.js
Now we’re able to successfully call the API to return data, we want to utilise this API to serve data to a front end. I’m going to use D3.js to visualise our location data and plot it in real time as the car is being driven.
The flow will be as follows. Our Kafka producer will be fetching location data from the Tesla every 3 seconds and adding it to the topic. Rockset will be consuming this data into a Rockset collection and exposing it via the API. Our D3.js visualisation will be polling the Rockset API for new data every 3 seconds and plotting the latest coordinates on a map of the UK.
The first step is to get D3 to render a UK map. I used a pre-existing example to build the HTML file. Save the html file in a folder and name the file index.html. To create a web server for this so it can be viewed in the browser I used Python. If you have python installed on your machine you can simply run the following to start a web server in the current directory.
python -m SimpleHTTPServer
By default it will run the server on port 8000. You can then go to 127.0.0.1:8000 in your browser and if your index.html file is setup correctly you should now see a map of the UK as shown in Fig 9. This map will be the base for us to plot our points.
Fig 9. UK Map drawn by D3.js
Now we have a map rendering, we need some code to fetch our points from Rockset. To do this we're going to write a function that will fetch the last 10 rows from our Rockset collection by calling the Rockset API (here requestOptions stands in for the POST method, the ApiKey authorization header and the SQL body we configured in Postman):

function fetchPoints() {
    // requestOptions: the POST method, ApiKey header and SQL query body from earlier
    fetch(rocksetUrl + '/v1/orgs/self/queries', requestOptions)
        .then(response => response.json())
        .then(response => {
            console.log(response);
            // parse out the list of results (rows from our rockset collection) and print
            var newPoints = response.results;
            console.log(newPoints);
        });
}
When calling this function and running our HTTP server we can view the console to look at the logs. Load the webpage and then in your browser find the console. In Chrome this means opening the developer settings and clicking the console tab.
You should see a printout of the response from Rockset showing the whole response object similar to that in Fig 10.
Fig 10. Rockset response output
Below this should be our other log showing the results set as shown in Fig 11. The console tells us that it's an Array of objects. Each of the objects should represent a row of data from our collection as seen in the Rockset console. Each row includes our Kafka meta, rockset ID and timestamp and our lat long pair.
Fig 11. Rockset results log
It’s all coming together nicely. We now just need to parse the lat long pair from the results and get them drawn on the map. To do this in D3 we need to store each lat long within their array with the longitude in array index 0 and the latitude in array index 1. Each array of pairs should be contained within another array.
[ [long,lat], [long,lat], [long,lat]... ]
D3 can then use this as the data and project these points onto the map. If you followed the example earlier in the article to draw the UK map then you should have all the boilerplate code required to plot these points. We just need to create a function to call it ourselves.
I’ve initialised a javascript object to be used as a dictionary to store my lat long pairs. The key for each coordinate pair will be the row ID given to each result by Rockset. This will mean that when I’m polling Rockset for new coordinates, if I receive the same set of points again, it won’t be duplicated in my array.
{ _id : [long,lat], _id : [long,lat], … }
With this in mind, I created a function called updateData that will take this object and all the points and draw them on the map, each time asking D3 to only draw the points it hasn’t seen before.
function updateData(coords) {
    // grab only the values (our arrays of points) and pass to D3
    var mapPoints = svg.selectAll("circle").data(Object.values(coords));

    // tell D3 to draw the points and where to project them on the map
    mapPoints.enter().append("circle")
        .transition().duration(400).delay(200)
        .attr("cx", function (d) { return projection(d)[0]; })
        .attr("cy", function (d) { return projection(d)[1]; })
        .attr("r", "2px")
        .attr("fill", "red");
}
All that’s left is to change how we handle the response from Rockset so that we can continuously add new points to our dictionary. We can then keep passing this dictionary to our updateData function so that the new points get drawn on the map.
// initialise dictionary
var points = {};

function fetchPoints() {
    // requestOptions as before: the POST method, ApiKey header and SQL query body
    fetch(rocksetUrl + '/v1/orgs/self/queries', requestOptions)
        .then(response => response.json())
        .then(response => {
            // parse out the list of results (rows from our rockset collection)
            var newPoints = response.results;
            for (var coords of newPoints) {
                // add lat long pair to dictionary using ID as key
                points[coords._id] = [coords.long, coords.lat];
                console.log('updating points on map ' + points);
                // call our update function to draw points on the map
                updateData(points);
            }
        });
}
That’s the base of the application completed. We simply need to loop and continuously call the fetchPoints function every 5 seconds to grab the latest 10 records from Rockset so they can be added to the map.
The finished application should then perform as seen in Fig 12. (sped up so you can see the whole journey being plotted)
Fig 12. GIF of points being plotted in real time
Wrap up
Through this post we’ve learnt how to successfully request real time location data from a Tesla Model 3 and add it to a Kafka topic. We’ve then used Rockset to consume this data so we can expose it via the built in Rockset API in real time. Finally, we called this API to plot the location data in real time on a map using D3.js.
This gives you an idea of the whole back end to front end journey required to be able to visualise data in real time. The advantage of using Rockset for this is that we could not only use the location data to plot on a map but also perform analytics for a dashboard that could, for example, show trip length or average time spent not moving. You can see examples of more complex queries on connected car data from Kafka in this blog, and you can try Rockset with your own data here.

Photo by Jp Valery on Unsplash
Today, neural networks have become a very large subject domain [1]. No single "algorithm" or "framework" can summarize the whole field. From early neurons, to the perceptron, to BP (back-propagation) neural networks, and on to deep learning, there has been a general evolutionary process. Although the models differ from era to era, the ideas of propagation, such as forward value propagation and backward error propagation, are the same.
This blog mainly introduces the forward propagation and error back-propagation processes of a neural network. Using a neural network with a single hidden layer, a detailed numerical example demonstrates each step. For each step, the blogger also provides TensorFlow implementation code [2].
Figure 1. A single-hidden-layer neural network

For a single neuron, the output is the activation of the weighted sum of its inputs plus a bias: out = sigmoid(w1*x1 + w2*x2 + ... + b).

TensorFlow code:
import tensorflow as tf

def multilayer_perceptron(x, weights, bias):
    layer_1 = tf.add(tf.matmul(x, weights["h1"]), bias["b1"])
    layer_1 = tf.nn.sigmoid(layer_1)
    out_layer = tf.add(tf.matmul(layer_1, weights["out"]), bias["out"])
    layer_2 = tf.nn.sigmoid(out_layer)
    return layer_2
(1) Determine the input data and the ground truth

X = [[1, 2], [3, 4]]
Y = [[0, 1], [0, 1]]

Clearly batch_size = 2. The first batch is:
X1 = 1
X2 = 2
(2) Initialize the weights and biases
Figure 1 shows that the number of weights is 8
weights = {
    'h1': tf.Variable([[0.15, 0.16], [0.17, 0.18]], name="h1"),
    'out': tf.Variable([[0.15, 0.16], [0.17, 0.18]], name="out")
}
biases = {
    'b1': tf.Variable([0.1, 0.1], name="b1"),
    'out': tf.Variable([0.1, 0.1], name="out")
}
(3) Forward propagation

Taking the first hidden neuron as an example:

net_h1 = 0.15*1 + 0.17*2 + 0.1 = 0.59
out_h1 = sigmoid(0.59) = 0.64336514

and likewise for the second hidden neuron:

net_h2 = 0.16*1 + 0.18*2 + 0.1 = 0.62
out_h2 = sigmoid(0.62) = 0.65021855

Finally, the output of the hidden layer is M = [[0.64336514, 0.65021855]].
TensorFlow code for forward propagation:

import tensorflow as tf
import numpy as np

x = [1, 2]
weights = [[0.15, 0.16], [0.17, 0.18]]
b = [0.1, 0.1]

X = tf.placeholder("float", [None, 2])
W = tf.placeholder("float", [2, 2])
bias = tf.placeholder("float", [None, 2])

mid_value = tf.add(tf.matmul(X, W), b)
result = tf.nn.sigmoid(tf.add(tf.matmul(X, W), b))

with tf.Session() as sess:
    x = np.array(x).reshape(1, 2)
    print x
    b = np.array(b).reshape(1, 2)
    result, mid_value = sess.run([result, mid_value], feed_dict={X: x, W: weights, bias: b})
    print mid_value
    print result
In the same way, we get the prediction of the output layer: pred = [[0.57616305, 0.57931882]].
(4) Error calculation

Many error functions are available; this example uses the mean squared error (MSE) function.

Note the difference between the mean squared error and the sum-squared error.

The error produced by the mean squared error is:

loss = ((0.57616305 - 0)^2 + (0.57931882 - 1)^2) / 2 = 0.254468
Given the network's output and the expected output, we need to optimize the parameters of the neural network according to the error, i.e. the gradient of the error with respect to each parameter. The BP algorithm is the core part of neural networks, and almost all neural network models are optimized by this algorithm or improved versions of it. The BP algorithm is based on the gradient descent strategy, which adjusts each parameter in the negative direction of its gradient.

We use a given learning rate of 0.5 for every update.
(1) First, update the weights of the output layer

Take w as the weight connecting out_h1 to the first output neuron (its initial value is 0.15). According to the chain rule,

∂E/∂w = ∂E/∂pred1 * ∂pred1/∂net_o1 * ∂net_o1/∂w

The first term is the derivative of the mean squared error function:

∂E/∂pred1 = pred1 - y1 = 0.57616305 - 0 = 0.57616305

The second term is the gradient of the activation function:

∂pred1/∂net_o1 = pred1 * (1 - pred1) = 0.57616305 * (1 - 0.57616305) ≈ 0.24420

The third term is the input carried by that weight, i.e. the hidden-layer output:

∂net_o1/∂w = out_h1 = 0.64336514

So,

∂E/∂w ≈ 0.57616305 * 0.24420 * 0.64336514 ≈ 0.09052

Update (learning rate 0.5):

w_new = 0.15 - 0.5 * 0.09052 ≈ 0.10474

The other output-layer weights are updated in the same way.
(2) Update the output-layer bias

According to the chain rule, and since ∂net_o1/∂b = 1,

∂E/∂b = ∂E/∂pred1 * ∂pred1/∂net_o1 = 0.57616305 * 0.24420 ≈ 0.14070

Update:

b_new = 0.1 - 0.5 * 0.14070 ≈ 0.02965

The bias of the second output neuron is updated in the same way.
(3) Next, update the weights of the hidden layer

Figure 2 error flow feedback diagram

As Figure 2 shows, the output of a hidden neuron feeds into both output neurons, so both contribute to its gradient. For the total error,

∂E/∂out_h1 = cost1 + cost2

First, cost1, the error flowing back from the first output (some of these terms were worked out before, so we can take them directly):

cost1 = ∂E/∂pred1 * ∂pred1/∂net_o1 * ∂net_o1/∂out_h1 = 0.57616305 * 0.24420 * 0.15 ≈ 0.02110

Then find cost2, from the second output:

∂E/∂pred2 = pred2 - y2 = 0.57931882 - 1 = -0.42068118
∂pred2/∂net_o2 = pred2 * (1 - pred2) ≈ 0.24371
cost2 = -0.42068118 * 0.24371 * 0.16 ≈ -0.01640

In total,

∂E/∂out_h1 = 0.02110 - 0.01640 = 0.00470

And the second term, the gradient of the hidden activation:

∂out_h1/∂net_h1 = out_h1 * (1 - out_h1) = 0.64336514 * (1 - 0.64336514) ≈ 0.22944

The third term is the input, ∂net_h1/∂w = x1 = 1.

Merging the calculation:

∂E/∂w ≈ 0.00470 * 0.22944 * 1 ≈ 0.00108

Update:

w_new = 0.15 - 0.5 * 0.00108 ≈ 0.14946

Similarly, the other hidden-layer weights are updated in the same way.
(4) Update the bias of the hidden layer

Also according to the chain rule, with ∂net_h1/∂b = 1,

∂E/∂b = 0.00470 * 0.22944 ≈ 0.00108

Update:

b_new = 0.1 - 0.5 * 0.00108 ≈ 0.09946

The same update applies to the bias of the second hidden neuron.
(1) At this point, all of the parameters have been updated. Feeding the next batch [3, 4] forward through the network with the new parameters gives a loss of 0.238827.

This is smaller than the first loss of 0.254468, which also illustrates the effectiveness of gradient descent.
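The whole procedure can be sanity-checked with a standalone NumPy sketch of one forward and backward pass on the first batch (the variable names are mine; the inputs, targets, initial weights and the 0.5 learning rate are taken from the example above):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# first batch and its ground truth
x = np.array([[1.0, 2.0]])
y = np.array([[0.0, 1.0]])

# initial parameters from the example
W1 = np.array([[0.15, 0.16], [0.17, 0.18]]); b1 = np.array([0.1, 0.1])
W2 = np.array([[0.15, 0.16], [0.17, 0.18]]); b2 = np.array([0.1, 0.1])

# forward pass
m = sigmoid(x @ W1 + b1)         # hidden output, ~[[0.6434, 0.6502]]
pred = sigmoid(m @ W2 + b2)      # prediction, ~[[0.5762, 0.5793]]
loss = np.mean((pred - y) ** 2)  # ~0.254468

# backward pass (mean squared error over the two outputs)
lr = 0.5
d_out = (pred - y) * pred * (1 - pred)   # delta at the output layer
d_hid = (d_out @ W2.T) * m * (1 - m)     # delta at the hidden layer
W2 -= lr * m.T @ d_out
b2 -= lr * d_out.sum(axis=0)
W1 -= lr * x.T @ d_hid
b1 -= lr * d_hid.sum(axis=0)

# one gradient step should reduce the loss on this batch
new_loss = np.mean((sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2) - y) ** 2)
print(loss, new_loss)
```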
(2) My hand calculations and the code results differ by about 0.01; I suspect there is an error somewhere in one of my numerical calculations.

If you find the mistake, please give me a hint. Thank you.
(3) Code details

The optimization method in TensorFlow:
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
The cost definition:
cost = tf.reduce_mean(tf.pow(pred - y, 2))
(4) The vanishing gradient problem

The vanishing gradient problem is that, in the course of training, the gradient becomes very small or even zero, so that the parameter updates are too slow. Vanishing gradients are related to the activation function; ReLU or ReLU variants (such as PReLU [3]) are usually used to reduce the effect. In our case we used the logistic function, whose derivative is at most 0.25 and always less than 1, and we can see from the previous calculations that this causes a problem during backward propagation:

since every factor multiplied in during back-propagation is less than 1, the gradient becomes smaller and smaller the further back it travels. For this reason, ReLU (whose derivative is 1 for positive inputs) is used instead of logistic-type functions.
I want to know the coordinates of feature points after contour extraction.
import cv2
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon

img = cv2.imread("./pu1/1124_2_0.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
binary = cv2.dilate(binary, kernel)
contours, hierarchy = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

def draw_contours(ax, img, contours):
    ax.imshow(img)
    ax.set_axis_off()
    for i, cnt in enumerate(contours):
        if cv2.contourArea(cnt) < 1900:
            continue
        else:
            cnt = cnt.squeeze(axis=1)
            ax.add_patch(Polygon(cnt, color="b", fill=None, lw=2))
            ax.plot(cnt[:, 0], cnt[:, 1], "ro", mew=0, ms=4)
            ax.text(cnt[0][0], cnt[0][1], i, color="orange", size="20")
    print(contours)

fig, ax = plt.subplots(figsize=(8, 8))
draw_contours(ax, img, contours)
plt.savefig('./rinnkaku/.jpg')
With this code, the following image can be obtained from the original image.
The coordinates of the outline of the small area are also extracted.
I want to get the coordinates of only the largest contour (here, contour 2).

Thank you, everyone.
The Python version is 3.7.3, and the OpenCV version is 4.1.1.
- Answer # 1
- Answer # 2
1. I don't want to extract the outline of a small area (in this case, 0-7 outline).
Check the area using contourArea and try again.
2.I want to know the coordinates of the red dot.
draw_contours already seems to contain the code that gets the coordinates, so I can't tell which part you are stuck on.
cv2.contourArea() can be used to obtain the contour area, so specify this function as the key argument of max() to obtain the contour with the largest area.

The contour obtained this way is a NumPy array of shape (number of points, 1, 2), holding the coordinates of the points that make up the contour. (The upper left corner of the image is the origin (0, 0) and the unit is pixels of the image coordinate system.)
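To make the idea concrete, here is a pure-NumPy illustration (so it runs without an image or OpenCV). For simple polygons cv2.contourArea computes the shoelace area, so the same selection can be mimicked with a small helper; the contours below are hypothetical and already squeezed to shape (number of points, 2):

```python
import numpy as np

def contour_area(cnt):
    # shoelace formula; cv2.contourArea returns the same value for simple polygons
    x, y = cnt[:, 0], cnt[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# hypothetical contours: a small triangle (area 6) and a large square (area 100)
contours = [
    np.array([[0, 0], [4, 0], [0, 3]]),
    np.array([[0, 0], [10, 0], [10, 10], [0, 10]]),
]

largest = max(contours, key=contour_area)  # with OpenCV: max(contours, key=cv2.contourArea)
print(largest[:, 0], largest[:, 1])        # x and y coordinates of the largest contour
```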
Status report for the Web Services Project
Notable Happenings
Code Releases [since the last report]
Apache Axis2/C released version 1.0.0 on 06th May 2007
Apache WSS4J released version 1.5.2 on 31st May 2007
Apache Rampart released version 1.2 on 02nd June 2007
Apache Woden released Milestone 7a on 23rd April 2007
Legal Issues
Cross-Project Issues
Problems with committers, members, projects etc?
Subproject News
Woden incubator
We have a new contributor, Sriram Rao, who has submitted an ICLA and made his first contribution. The Woden M7a release on April 23rd incremented M7 with the new, shorter WSDL 2.0 namespace URL. The W3C WSDL2 spec has now moved to Proposed Recommendation with Recommendation expected early July. The Woden M8 release, planned for July, will complete WSDL2 spec compliance by fully supporting WSDL2 Assertions (validation). After M8 we plan to review and finalize the API, then deliver a Woden 1.0 release. In May the project team discussed requirements for exiting incubation. We have some housekeeping items to complete, but expect to approach the WS and Incubator PMCs in June. | http://wiki.apache.org/ws/ReportForJun2007?action=diff | CC-MAIN-2016-30 | refinedweb | 185 | 56.55 |
In this tutorial , we will be making a IoT Weather Predictor using a Dot One and a servo motor .
What you will need :
Once you have finished the tutorials above and learned all about the Dot One , you are all set to do this tutorial below.
Create a Flow
In order to retrieve the data required to make the device behave as intended, we're going to create a flow that makes an API call to fetch the data we need at a specified time (which can be set to your preferred time).
Set up the block arrangement as below:
In the Timer node, we can configure the timer to activate at a user preferred time.
After this, drag over an HTTP Service node from under the Services tab; this is going to create a GET request to a weather API (weatherbit.io; sign up for your own custom API key) and retrieve the probability value for precipitation.
Next, drag over the Run Function node, where we parse the response from the API call, retrieve the value we want (the precipitation probability), and set it as the output variable of this function.

Lastly, drag over the Update State node and configure it to update the state of the device we are using.
Now that the precipitation value is stored in the state of the device, all we need to do is write code that retrieves the state value, converts it to a float, multiplies it by 120 to turn it into an easier-to-use percentage, and adds 30 so the servo turns to a proportionate angle, able to point towards and away from the icons.

The code that serves this purpose is below:
Create a Code Project
#include <WiFi.h>
#include <ESP32_Servo.h>
#include <Wia.h>
int pos = 30; // variable to store the servo position
Servo myservo; // create servo object to control a servo
Wia wiaClient = Wia();
void setup() {
WiFi.begin();
delay(2500);
myservo.attach(19); // attaches the servo on pin 19 to the servo object
}
void loop() {
String rawPrecipitation = wiaClient.getDeviceState("weather");
float precipitation = rawPrecipitation.toFloat();
precipitation = precipitation * 120;
precipitation = precipitation + 30;
myservo.write(precipitation);
delay(86400000); // Wait 24 hours until reading again.
}
You can then upload the above code to your Dot One by clicking the rocket icon; once wired up, the servo motor will swing to the icon representing the weather for the day.

Now that we have successfully written the software for this tutorial, it's time to wire up the hardware so that the servo motor rotates and behaves as required.

Once assembled, this is what your final project should look similar to:
Deep Quantile Regression
Most Deep Learning frameworks currently focus on giving a best estimate as defined by a loss function. Occasionally something beyond a point estimate is required to make a decision. This is where a distribution would be useful. This article will purely focus on inferring quantiles.
By Sachin Abeywardana, Founder of DeepSchool.io
One area that Deep Learning has not explored extensively is the uncertainty in estimates. Most Deep Learning frameworks currently focus on giving a best estimate as defined by a loss function. Occasionally something beyond a point estimate is required to make a decision. This is where a distribution would be useful. Bayesian statistics lends itself to this problem really well since a distribution over the dataset is inferred. However, Bayesian methods so far have been rather slow and would be expensive to apply to large datasets.
As far as decision making goes, most people actually require quantiles as opposed to true uncertainty in an estimate. For instance when measuring a child’s weight for a given age, the weight of an individual will vary. What would be interesting is (for arguments sake) the 10th and 90th percentile. Note that the uncertainty is different to quantiles in that I could request for a confidence interval on the 90th quantile. This article will purely focus on inferring quantiles.
Quantile Regression Loss function
In regression the most commonly used loss function is the mean squared error function. If we were to take the negative of this loss and exponentiate it, the result would correspond to the gaussian distribution. The mode of this distribution (the peak) corresponds to the mean parameter. Hence, when we predict using a neural net that minimised this loss we are predicting the mean value of the output which may have been noisy in the training set.
The loss in Quantile Regression for an individual data point is defined as (writing e = y - f(x) for the residual):

L(e) = max( alpha * e, (alpha - 1) * e )

where alpha is the required quantile (a value between 0 and 1), f(x) is the predicted (quantile) model, and y is the observed value for the corresponding input x.
Loss funtion
If we were to take the negative of the individual loss and exponentiate it, we get the distribution know as the Asymmetric Laplace distribution, shown below. The reason that this loss function works is that if we were to find the area under the graph to the left of zero it would be alpha, the required quantile.
Probability distribution function (pdf) of an Asymmetric Laplace distribution.
The case when alpha=0.5 is most likely more familiar since it corresponds to the Mean Absolute Error (MAE). This loss function consistently estimates the median (50th percentile), instead of the mean.
Modelling in Keras
The forward model is no different to what you would have had when doing MSE regression. All that changes is the loss function. The following few lines defines the loss function defined in the section above.
import keras.backend as K def tilted_loss(q,y,f): e = (y-f) return K.mean(K.maximum(q*e, (q-1)*e), axis=-1)
When it comes to compiling the neural network, just simply do:
quantile = 0.5 model.compile(loss=lambda y,f: tilted_loss(quantile,y,f), optimizer='adagrad')
For a full example see this Jupyter notebook where I look at a motor cycle crash dataset over time. The results are reproduced below where I show the 10th 50th and 90th quantiles.
Acceleration over time of crashed motor cycle.
Final Notes
- Note that for each quantile I had to rerun the training. This is due to the fact that for each quantile the loss function is different, as the quantile in itself is a parameter for the loss function.
- Due to the fact that each model is a simple rerun, there is a risk of quantile cross over. i.e. the 49th quantile may go above the 50th quantile at some stage.
- Note that the quantile 0.5 is the same as median, which you can attain by minimising Mean Absolute Error, which you can attain in Keras regardless with
loss='mae'.
- Uncertainty and quantiles are not the same thing. But most of the time you care about quantiles and not uncertainty.
- If you really do want uncertainty with deep nets checkout
Edit 1
As pointed out by Anders Christiansen (in the responses) we may be able to get multiple quantiles in one go by having multiple objectives. Keras however combines all loss functions by a
loss_weights argument as shown here:. Would be easier to implement this in tensorflow. If anyone beats me to it would be happy to change my notebook/ post to reflect this. As a rough guide if we wanted the quantiles 0.1, 0.5, 0.9, the last layer in Keras would have
Dense(3) instead, with each node connected to a loss function.
Edit 2
Thanks to Jacob Zweig for implementing the simultaneous multiple Quantiles in TensorFlow:
If you are enjoying my tutorials/ blog posts, consider supporting me on or by subscribing to my YouTube channel (or both!).
Bio: Sachin Abeywardana is a PhD in Machine Learning and Founder of DeepSchool.io.
Original. Reposted with permission.
Related:
- DeepSchool.io: Deep Learning Learning
- Docker for Data Science
- Using Genetic Algorithm for Optimizing Recurrent Neural Networks | https://www.kdnuggets.com/2018/07/deep-quantile-regression.html | CC-MAIN-2018-30 | refinedweb | 894 | 55.03 |
On Mon, Jul 24, 2000 at 04:54:57PM -0400, Fred L. Drake, Jr. wrote: > > Greg Stein writes: > > I agree with Bjorn. > > That it's a convenience issue or that it's the right thing to do? Both. Having the core distro and PyXML both use the "xml" namespace means that we can migrate stuff from PyXML into the as those items become stable. This is convenient for programmers (don't worry where it comes from or whether it has moved; just use "xml"), and is the right thing (it all "just works" and provides a mechanism for future changes). > > I've posted a description of how to accomplish this "melding" of the > > packages in a flexible manner. It allows us to ship the "xml" package in the > > Yes; I've re-read your proposal and Martin's patch. I don't want to > worry about *how* until it's clear *what* the right result is. You know... sometimes it is important to just *DO* something rather than talk endlessly about whether it is perfect or not. How long has this subject been "on the table"? Too long. There has been a lot of people talking, trying to be heard about this or that. It would be nice to actually see people doing something other than typing text into their email clients. Cheers, -g -- Greg Stein, | https://mail.python.org/pipermail/xml-sig/2000-July/003058.html | CC-MAIN-2014-15 | refinedweb | 226 | 80.11 |
Hi, first of all, just import pythoncom directly <code> import pythoncom </code> As far as where to put pythoncom.CoInitialize() I'm not certain but i think at the top (or start) of your whole application.. if it's threaded then call it for each thread (at the start) "Initializes the COM library on the current thread and identifies the concurrency model as single-thread apartment (STA). Applications must initialize the COM library before they can call COM library functions ..." from: Hope that helps, Alex > From: Dominique.Holzwarth at ch.delarue.com > To: python-win32 at python.org > Date: Thu, 24 Apr 2008 16:15:17 +0100 > Subject: [python-win32] Apache & mod_python & win32com > > Hello everyone > > I'm developing a web application using mod_python and the apache web server. That application has to handle xml files and for that I'm using win32com with with MSXML. > > My problem is, that apache spawns a new process / thread (not sure wether ist a process or thread. I think it's a thread) per http request. So when multiple users generated requests "at the same time" (new request before the first one is returned) I have multiple threads running. And that's where my win32com code is crashing... > > Currently, for every request my code does the following (as an example): > > return win32com.client.Dispatch("Msxml2.DOMDocument.6.0") > > To get an empty MSXSML DOM object. 
> > For multiple requests at the same time I get the following error: > > File "C:\Program Files\Apache Software Foundation\Apache2.2\TCRExceptionManagerData\database_library\database.py", line 24, in __init__ > self.__configFileSchema = self.__XSDSchemaCache() > > File "C:\Program Files\Apache Software Foundation\Apache2.2\TCRExceptionManagerData\database_library\database.py", line 1774, in __XSDSchemaCache > return win32com.client.Dispatch("MSXML2.XMLSchemaCache.6.0") > > File "C:\Program Files\Python25\lib\site-packages\win32com\client\__init__.py", line 95, in Dispatch > dispatch, userName = dynamic._GetGoodDispatchAndUserName(dispatch,userName,clsctx) > > File "C:\Program Files\Python25\lib\site-packages\win32com\client\dynamic.py", line 98, in _GetGoodDispatchAndUserName > return (_GetGoodDispatch(IDispatch, clsctx), userName) > > File "C:\Program Files\Python25\lib\site-packages\win32com\client\dynamic.py", line 78, in _GetGoodDispatch > IDispatch = pythoncom.CoCreateInstance(IDispatch, None, clsctx, pythoncom.IID_IDispatch) > > com_error: (-2147221008, 'CoInitialize has not been called.', None, None) > > I've read already a bit in this mailing list and someone mentioned that one need to call pythoncom.CoInitialize() and pythoncom.CoUninitialize(). But I don't know where exactly i should call those functions and wether that'll solve my problem... Also, when I do a 'import win32com.pythoncom' I get an error that the module 'pythoncom' does not exist! > > I would be really happy if someone could help me and tell me how to make my win32com work for multiple threads! > > Greetings > Dominique > > _______________________________________________ > python-win32 mailing list > python-win32 at python.org > _________________________________________________________________ Play the Andrex Hello Softie Game & win great prizes -------------- next part -------------- An HTML attachment was scrubbed... 
URL: <> | https://mail.python.org/pipermail/python-win32/2008-April/007397.html | CC-MAIN-2016-40 | refinedweb | 479 | 51.44 |
Exporting to e2studio with CMSIS_DAP DBG
Table of Contents
- Environment
- Setup Procedure
- Install Windows serial driver
- Install e2studio
- Install OpenOCD
- Associate GR-PEACH config with OpenOCD
- Install OpenOCD add-in to e2studio
- Configure OpenOCD on e2studio
- Build of e2studio environment
- Exporting to e2studio
- import project to e2studio
- Build Process
- The way to debug
- Debug
- Support features
Environment¶
If you would like to use J-Link for debugging, please refer to Exporting to e2studio (J-Link debug).
Setup Procedure¶
Install Windows serial driver¶
Install latest Windows Serial Port Driver to setup CMSIS-DAP from the link below:
Install e2studio¶
Please download e2studio 5.0.0 or lator, and install
Install OpenOCD¶
Please download exe file of OpenOCD v0.10.0-201601101000-dev, and install.
Associate GR-PEACH config with OpenOCD¶
Please copy renesas_gr-peach.cfg to scripts\board directory included in the OpenOCD installed location. By default, it should be located as follows:
- In case of using 32-bit windows:
C:\Program Files\GNU ARM Eclipse\OpenOCD\0.10.0-201601101000-dev\scripts\board
- In case of using 64-bit windows:
C:\Program Files (x86)\GNU ARM Eclipse\OpenOCD\0.10.0-201601101000-dev\scripts\board
Install OpenOCD add-in to e2studio¶
Information
This procedure can be skipped when you use e2studio version 5.2 or later since the add-in is incorporated in e2studio.
- Launch e2studio.
- Select[Help]menu→[Install new software...]
- Input [work with] box, and push [Add] button.
- Check [GNU ARM C/C++ OpenOCD Debugging] and push [Next >] button.
- Install and restart e2studio.
Configure OpenOCD on e2studio¶
- Select [Window] -> [Preferences].
- Select [Run/Debug] - [OpenOCD].
- Check if the directory and executable are filled with OpenOCD installation folder and openocd.ex respectively. If not, please input OpenOCD installation folder and openocd.exe there and click [OK]. Note that the default OpenOCD installation folder should be as follows:
- In case of using 32-bit windows:
C:/Program Files/GNU ARM Eclipse/OpenOCD/0.10.0-201601101000-dev/bin
- In case of using 64-bit windows:
C:/Program Files (x86)/GNU ARM Eclipse/OpenOCD/0.10.0-201601101000-dev/bin
Build of e2studio environment¶
Exporting to e2studio¶
- Go to Mbed compiler.
- Right click at the program you want to export.
- Select "Export Program"
- Select "Renesas GR-PEACH" for Export Target
Select "e2studio" for Export Toolchain
Push "Export"
- Expand zip file.
import project to e2studio¶
- Launch e2studio.
- Specify workspace directory. Workspace directory must be placed in the upper directory of the directory that includes .project file.
In this document, project file is placed in C:\WorkSpace\GR-PEACH_blinky_e2studio_rz_a1h\GR-PEACH_blinky, and the workspace is placed in C:\WorkSpace\GR-PEACH_blinky_e2studio_rz_a1h.
- If Toolchain Integration dialog appared, select [GCC ARM embedded version 4.9.3.20150529] and click [Register].
- After e2studio window opens, click [go to workbench].
- Select [File]menu-[import].
- Select [General]-[Existing Projects into Workspace], and click [Next>]
- Click [Browse].
- Click [OK].
- Click [Finish].
Build Process¶
- Launch e2studio.
- Select the [Window] menu -> [Show View] -> [Project Explorer].
- Select the project to build.
- Click build icon.
e.g.) The folder structure when making the work folder "C:\Workspase". Export project is GR-PEACH_blinky. C: +-- Workspace +-- GR-PEACH_blinky_e2studio_rz_a1h +-- .metadata +-- GR-PEACH_blinky | .cproject | .gdbinit | .hgignore | .project | exporter.yaml | GettingStarted.htm | GR-PEACH_blinky OpenOCD.launch | main.cpp | mbed.bld | SoftPWM.lib +-- .hg +-- .settings +-- Debug <- When clicking [Build Project], ".bin" and ".elf" file will be created here. +-- mbed +-- SoftPWM
The way to debug¶
Debug¶
- Connect USB cable
- Copy ".bin" file to Mbed drive
- Reconnect USB cable
- Select project to debug.
- From menu in C/C++ perspective or debug perspective , select [Run] [Debug Configurations...]
- Select [<project-name> OpenOCD] in [GDB OpenOCD Debugging]
- Click "Debug".
- If you want to reset :
please enter the following command to "arm-none-eabi-gdb.exe" screen in "console" view.
When you drop down from the console view toolbar buttons, you can switch the screen.
monitor reset init
Support features¶
These features are supported. Generic views which uses the features below would be useable.
- Software breakpoint will be replaced with Hardware breakpoint. 6 points are available in total.
- Downloading from e2 studio to serial flash memory is not supported. But you can download the program by copying the bin file to the drive which is generated when you connect the board to PC, because GR-PEACH is Mbed device. Before you download the program by the manner of Mbed, please disconnect the debugger from the board.
- Reading or writing memory while program is running are not supported. And writing is supported only for RAM.
- .gdbinit is required to stepping the program which uses interrupt.
- The button for reset in Debug View doesn't work, but the command to reset is available. Please enter "monitor reset init" to reset the program in the console for GDB (arm-none-eabi-gdb).
Although the display is not changed, but the program would be reset. The button for restart will work once. Please don't use it.
e2 studio has the special views for Renesas. The supported status are below. | https://os.mbed.com/teams/Renesas/wiki/Exporting-to-e2studio-with-CMSIS_DAP-DBG | CC-MAIN-2022-40 | refinedweb | 830 | 60.31 |
C library function - fputc()
Description
The C library function int fputc(int char, FILE *stream) writes a character (an unsigned char) specified by the argument char to the specified stream and advances the position indicator for the stream.
Declaration
Following is the declaration for fputc() function.
int fputc(int char, FILE *stream)
Parameters
char -- This is character to be written. This is passed as its int promotion.
stream -- This is the pointer to a FILE object that identifies the stream where the character is to be written.
Return Value
If there are no errors, the same character that has been written is returned.If an error occurs, EOF is returned and the error indicator is set.
Example
The following example shows the usage of fputc() function.
#include <stdio.h> int main () { FILE *fp; int ch; fp = fopen("file.txt", "w+"); for( ch = 33 ; ch <= 100; ch++ ) { fputc(ch, fp); } fclose(fp); return(0); }
Let us compile and run the above program, this will create a file file.txt in the current directory which will have following content:
!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcd
Now let's); } | http://www.tutorialspoint.com/c_standard_library/c_function_fputc.htm | CC-MAIN-2015-06 | refinedweb | 184 | 65.52 |
In the part I of this tutorial, we’ve explained how to initiate a Flutter Plugin project in IntelliJ IDEA, and what does a Flutter Plugin template look like. Build Your Own Flutter Plugin Using Android Native Kotlin — Part II.
In the part I of this tutorial, we’ve explained how to initiate a Flutter Plugin project in IntelliJ IDEA, and what does a Flutter Plugin template look like.
Next, we are going to complete the content of codes to bring the Flutter API and the MIDI keyboard app to live. The file structures were introduced in the Part I tutorial, please refer to part I tutorial.
Let’s start from the Flutter API
piano.dart under the
lib. Again, this will be your Flutter API interface, which defines all the callable functions for your clients to call:
import 'dart:async'; import 'package:flutter/services.dart'; class Piano { static const MethodChannel _channel = // 1 const MethodChannel("piano"); static Future<String?> get platformVersion async { // 2 final String? version = await _channel.invokeMethod('getPlatformVersion'); // 3 return version; // 4 } static Future<int?> onKeyDown(int key) async { // 2 final int? numNotesOn = await _channel.invokeMethod('onKeyDown', [key]); // 3 return numNotesOn; // 4 } static Future<int?> onKeyUp(int key) async { // 2 final int? numNotesOn = await _channel.invokeMethod('onKeyUp', [key]); // 3 return numNotesOn; // 4 } }
Note: The_question mark_ in the dart syntax is supporting Null Safety dart version. Basically it’s a stricter type check, which requires you to explicitly identify the type can be ‘int’ or ‘null’, e.g., int?
flutter-plugin flutter android-app-development.
AppClues Infotech is a top Mobile App Development Company in USA building high-quality Android, iOS, and Native apps for Startups, SMBs, & Enterprises. Contact us now!
WebClues Infotech provides custom android app development services using the latest Android SDKs. Our team of Android developers expert in Java & Kotlin languages. | https://morioh.com/p/6d23e7f299d5 | CC-MAIN-2021-25 | refinedweb | 305 | 51.24 |
Testing coding standards¶
All new code, or changes to existing code, should have new or updated tests before being merged into master. This document gives some guidelines for developers who are writing tests or reviewing code for CKAN.
Transitioning from legacy to new tests¶
CKAN is an old code base with a large legacy test suite in ckan.tests. The legacy tests are difficult to maintain and extend, but are too many to be replaced all at once in a single effort. So we’re following this strategy:
- A new test suite has been started in ckan.new.new_tests to be:
- Fast
Don’t share setup code between tests (e.g. in test class setup() or setup_class() methods, saved against the self attribute of test classes, or in test helper modules).
Instead write helper functions that create test objects and return them, and have each test method call just the helpers it needs to do the setup that it needs.
Where appropriate, use the mock library to avoid pulling in other parts of CKAN (especially the database), see Mocking: the mock library.
- unit test,.new/new.new:!
One common exception is when you want to use a for loop to call the function being tested multiple times, passing it lots of different arguments that should all produce the same return value and/or side effects. For example, this test from ckan.new_tests.logic.action.test_update:
def test_user_update_with_invalid_name(self): user = factories.User() invalid_names = ('', 'a', False, 0, -1, 23, 'new', 'edit', 'search', 'a' * 200, 'Hi!', 'i++%') for name in invalid_names: user['name'] = name assert_raises(logic.ValidationError, helpers.call_action, 'user_update', **user)
The behavior of user_update() is the same for every invalid value. We do want to test user_update() with lots of different invalid names, but we obviously don’t want to write a dozen separate test methods that are all the same apart from the value used for the invalid user name. We don’t really want to define a helper method and a dozen test methods that call it either. So we use a simple loop. Technically this test calls the function being tested more than once, but there’s only one line of code that calls it..new.new_tests.factories.ResourceView¶
A factory class for creating CKAN resource views.
Note: if you use this factory, you need to load the image_view plugin on your test class (and unload it later), otherwise you will get an error.
Example:
class TestSomethingWithResourceViews(object): @classmethod def setup_class(cls): if not p.plugin_loaded('image_view'): p.load('image_view') @classmethod def teardown_class(cls): p.unload('image_view')
- class ckan.new_tests.factories.MockUser¶
A factory class for creating mock CKAN users using the mock library.
- ckan.new_tests.factories.validator_data_dict()¶
Return a data dict with some arbitrary data in it, suitable to be passed to validator functions for testing.
Test helper functions: ckan.new_tests.helpers¶
This is a collection of helper functions for use in tests.
We want to avoid sharing test helper functions between test modules as much as possible, and we definitely don’t want to share test fixtures between test modules, or.
This module is reserved for these very useful functions.
- ckan.new_tests.helpers.reset_db()¶
Reset CKAN’s database.
If a test class uses the database, then it should.new_tests.helpers.call_action(action_name, context=None, **kwargs)¶
Call the named ckan.logic.action function argument.
Note: this skips authorization! It passes ‘ignore_auth’: True to action functions in their context dicts, so the corresponding authorization functions will not be run. This is because ckan.new_tests.logic.action tests only the actions, the authorization functions are tested separately in ckan.new into the context dict.
- ckan.new_tests.helpers.call_auth(auth_name, context, **kwargs)¶
Call the named ckan.logic.auth function and return the result.
This is just a convenience function for tests in ckan.new_tests.logic.auth to use.
Usage:
result = helpers.call_auth('user_update', context=context, id='some_user_id', name='updated_user_name')
- class ckan.new_tests.helpers.FunctionalTestBase¶
A base class for functional test classes to inherit from.
Allows configuration changes by overriding _apply_config_changes and resetting the CKAN config after your test class has run. It creates a webtest.TestApp at self.app for your class to use to make HTTP requests to the CKAN web UI or API.
If you’re overriding methods that this class provides, like setup_class() and teardown_class(), make sure to use super() to call this class’s methods at the top of yours!
- ckan.new_tests.helpers.submit_and_follow(app, form, extra_environ, name=None, value=None, **args)¶
Call webtest_submit with name/value passed expecting a redirect and return the response from following that redirect.
- ckan.new_tests.helpers.webtest_submit(form, name=None, index=None, value=None, **args)¶
backported version of webtest.Form.submit that actually works for submitting with different submit buttons.
We’re stuck on an old version of webtest because we’re stuck on an old version of webob because we’re stuck on an old version of Pylons. This prolongs our suffering, but on the bright side it lets us have functional tests that work.
- ckan.new_tests.helpers.webtest_submit_fields(form, name=None, index=None, submit_value=None)¶
backported version of webtest.Form.submit_fields that actually works for submitting with different submit buttons.
- ckan.new_tests.helpers.change_config(key, value)¶
Decorator to temporarily changes Pylons’ pylons.config['ckan.site_title'] == 'My Test CKAN'.new_tests.logic.action and the frontend tests in ckan.new_tests.controllers are functional tests, and probably shouldn’t do any mocking.
Do use mocking in more unit-style tests. For example the authorization function tests in ckan.new_tests.logic.auth, the converter and validator tests in ckan.new_tests.logic.auth, and most (all?) lib tests in ckan.new_tests.lib are.new.new.new.new.new(self): '''user_name_validator() should raise Invalid if given a non-string value. ''' non_string_values = [ 13, 23.7, 100L, webtests.new.
Writing ckan.migration tests¶
All migration scripts should have tests.
Todo
Write some tests for a migration script, and then use them as an example to fill out this guidelines section. | https://docs.ckan.org/en/ckan-2.3.1/contributing/testing.html | CC-MAIN-2020-16 | refinedweb | 993 | 51.04 |
These matchers are bindable. Recording this information will make it
possible to introspect the matchers which can be used inside another
matcher.
Fix matchDescriptor for coding conventions, but please don't reuse the name of a type when fixing it. :-)
Same comment here.
Mildly not keen on the use of auto here. It's a factory function, so it kind of names the resulting type, but it also looks like the type will be ast_matchers::internal::VariadicAllOfMatcher<ResultT>::Type from the template argument, which is incorrect.
Elide braces.
Same matchDescriptor here.
Similar comment about auto here.
There is no reason to assume that taking a template argument means that type is result.
The method is getFrom which decreases the ambiguity still further.
Spelling out the type doesn't add anything useful. This should be ok.
Update
Aside from the uses of auto and the lack of tests, this LGTM.
Aside from all the other places that do exactly that (getAs<>, cast<>, dyn_cast<>, castAs<>, and so on). Generally, when we have a function named get that takes a template type argument, the result when seen in proximity to auto is the template type.
I slept on this one and fall on the other side of it; using auto hides information that tripped up at least one person when reading the code, so don't use auto. It's not clear enough what the resulting type will be.
I put this in the category of requiring
SomeType ST = SomeType::create();
instead of
auto ST = SomeType::create();
ast_type_traits::ASTNodeKind is already on that line and you want to see it again.
FWIW I'm with Aaron here. Im' not familiar at all with this codebase, and looking at the code, my first instinct is "the result type is probably ast_matchers::internal::VariadicAllOfMatcher<ResultT>::Type". According to Aaron's earlier comment, that is incorrect, so there's at least 1 data point that it hinders readability.
So, honest question. What *is* the return type here?
So, honest question. What *is* the return type here?
Much to my surprise, it's ASTNodeKind. It was entirely unobvious to me that this was a factory function.
@zturner Quoting myself:
The expression is getFromNodeKind. There is a pattern of SomeType::fromFooKind<FooKind>() returning a SomeType. This is not so unusual.
Note that at the top of this file there's already a using namespace clang::ast_type_traits; So if you're worried about verbosity, then you can remove the ast_type_traits::, remove the auto, and the net effect is that the overall line will end up being shorter.
The funny thing is - if you see a line like
Parser CodeParser = Parser::getFromArgs(Args);
you have no idea what type Parser is!
To start, it could be clang::Parser or clang::ast_matchers::dynamic::Parser, depending on a using namespace which might appear any distance up in the file. It is not uncommon for clang files to be thousands of lines lone.
Or it could be inside a template with template<typename Parser>, or there could be a using Parser = Foo; any distance up in the file.
is no more helpful than
auto CodeParser = Parser::getFromArgs(Args);
Sticking with the same example, if CodeParser is a field, then you might have a line
CodeParser = Parser::getFromArgs(Args);
and you could object and create a macro which expands to nothing to ensure that the type appears on the line
CodeParser = CALL_RETURNS(Parser)Parser::getFromArgs(Args);
No one does that, because it is ok for the type to not appear on the line.
You would also have to object to code such as
Object.setParser(Parser::getFromArgs(Args));
requiring it to instead be
Parser CodeParser = Parser::getFromArgs(Args);
Object.setParser(CodeParser);
so that you can read the type.
Even then, what if those two lines are separated by a full page of code? How do you know the type of CodeParser in the Object.setParser(CodeParser) call? The answer is you don't and you don't need to.
You would also require
return Parser::getFromArgs(Args);
to instead be
Parser CodeParser = Parser::getFromArgs(Args);
return CodeParser;
Claims that human readers need to see a type are as incorrect as they are inconsistent.
I don’t really have much more to add here except to refer you to the style
guide:
Specifically this line: “Use auto if and only if it makes the code more
readable or easier to maintain.”
Given that 2 out of 2 reviewers who have looked at this have said it did
not make the code more readable or easier to maintain for them , and have
further said that they feel the return type is not “obvious from the
context”, i do not believe this usage fits within our style guidelines.
I don’t think it’s worth beating on this anymore though, because this is a
lot of back and forth over something that doesn’t actually merit this much
discussion. So in the interest of conforming to the style guide above,
please remove auto and then start a thread on llvm-dev if you disagree with
the policy
Refactor
You still can't see the return type of getFromNodeKind() here, but I trust that this is fine nonetheless.
LGTM! Please commit with whatever patch makes use of nodeConstructors() (as this functionality doesn't stand on its own). | https://reviews.llvm.org/D54405 | CC-MAIN-2022-21 | refinedweb | 889 | 62.48 |
Array is a data structure that stores multiple elements of the same data type. It can store entire set of values at once. But its length needs to be defined beforehand.
In this sum array puzzle, we are given an array A1 of a definite size say n. In order to solve this puzzle, we will create an array called S1 that stores the sum of all elements of the array except the element whose position is being used. For example, if S1[3] is being calculated then we will find the sum of all elements of A1 except the element at position 4.
Array A1 = {1,2,3,4,6} Output S1 = {15,14,13,12,10}
Explanation − To calculate the sum array, we will add each of the elements of the initial array to a sum variable accept the value that has the same number as sum array. Which means for the first element of the sum array we will calculate the sum of all elements except the first element of the array, and the same for the whole array. Let’s calculate the values for each element of the sum array using this logic.
Sum[0], we will calculate the sum of elements except the element at 0th index. So ,
Sum[0] = 2+3+4+6 = 15
Similarly, we will calculate the value of sum[1]...
Sum[1] = 1+3+4+6 = 14
Sum[2] = 1+2+4+6 = 13
Sum[3] = 1+2+3+6 = 12
Sum[4] = 1+2+3+4 = 10
So, all elements of the sum array are not ready and the sum array is sum = {15,14,13,12,10}
Step 1 : Initialise a sum array sum[n] to zero, where n = size of the original array. Step 2 : Iterate over sum[] and do : Step 2.1 : For sum[i], run a for loop for j -> 0 to n Step 2.2 : if(i != j) {sum[i] += arr[j] } Step 3: Print sum array using std print statement.
#include <iostream> using namespace std; int main() { int arr[] = { 3, 6, 4, 8, 9 }; int n = sizeof(arr) / sizeof(arr[0]); int leftSum[n], rightSum[n], Sum[n], i, j; leftSum[0] = 0; rightSum[n - 1] = 0; cout<<"The original array is : \n"; for (i = 0; i < n; i++) cout << arr[i] << " "; for (i = 1; i < n; i++) leftSum[i] = arr[i - 1] + leftSum[i - 1]; for (j = n - 2; j >= 0; j--) rightSum[j] = arr[j + 1] + rightSum[j + 1]; for (i = 0; i < n; i++) Sum[i] = leftSum[i] + rightSum[i]; cout<<"\nThe sum array is : \n"; for (i = 0; i < n; i++) cout << Sum[i] << " "; return 0; }
The original array is : 3 6 4 8 9 The sum array is : 27 24 26 22 21 | https://www.tutorialspoint.com/cplusplus-sum-array-puzzle | CC-MAIN-2021-43 | refinedweb | 464 | 63.02 |
Thanks, I really should test the package in a non-KDE environment.
Package Details: openlp 2.4.1-1
Dependencies (15)
- phonon (phonon-qt4)
- python
- python-alembic
- python-beautifulsoup4
- python-chardet
- python-lxml
- python-pyenchant
- python-pyqt5
- qt5-multimedia (qt5-multimedia-git)
- qt5-tools (qt5-tools-git) (make)
- libreoffice-fresh (optional) – Display impress presentations
- mupdf (mupdf-git, mupdf-gl) (optional) – Display pdfs
- python-mysql-connector (optional) – Use a mysql/mariadb database
- python-psycopg2 (python-psycopg2-git) (optional) – Use a postgresql database
- vlc (vlc-clang-git, vlc-decklink, vlc-git, vlc-nightly) (optional) – Play multimedia
Required by (0)
Sources (2)
Latest Comments
thelinuxguy commented on 2016-04-28 09:41
jonarnold commented on 2016-04-25 02:18
It seems like this now needs qt5-multimedia as a dependency.
Without it I got this:
Traceback (most recent call last):
File "/usr/bin/openlp", line 27, in <module>
from openlp.core.common import is_win, is_macosx
File "/usr/lib/python3.5/site-packages/openlp/__init__.py", line 26, in <module>
from openlp import core, plugins
File "/usr/lib/python3.5/site-packages/openlp/core/__init__.py", line 41, in <module>
from openlp.core.lib import ScreenList
File "/usr/lib/python3.5/site-packages/openlp/core/lib/__init__.py", line 331, in <module>
from .renderer import Renderer
File "/usr/lib/python3.5/site-packages/openlp/core/lib/renderer.py", line 31, in <module>
from openlp.core.ui import MainDisplay
File "/usr/lib/python3.5/site-packages/openlp/core/ui/__init__.py", line 103, in <module>
from .maindisplay import MainDisplay, Display
File "/usr/lib/python3.5/site-packages/openlp/core/ui/maindisplay.py", line 35, in <module>
from PyQt5 import QtCore, QtWidgets, QtWebKit, QtWebKitWidgets, QtOpenGL, QtGui, QtMultimedia
ImportError: libQt5Multimedia.so.5: cannot open shared object file: No such file or directory
tgc commented on 2016-02-12 21:37
For optional mysql/mariadb and postgresql support, please add python-mysql-connector and python-psycopg2 as optional dependencies.
macxcool commented on 2015-12-29 15:47
macxcool commented on 2015-12-29 15:45
@thelinuxguy FYI there's a discussion on openlp-dev about QT5.6 and the dropping of qtwebkit in favour of qtwebengine. Apparently cross-platform support is bad in qtwebengine so OpenLP will be staying with qtwebkit since the other big distros won't be moving to 5.6 soon. This might be a problem in Arch.
thelinuxguy commented on 2015-12-22 10:43
@morkatros: First of all you found out that the issue is with python-alembic, in that case there is no need to file it here.
The issue is about a failed gpg check I assume. In that case go read the wiki about that
The failure is on your side
morkatros commented on 2015-12-22 00:23
error in the compilation of python-alembic
wrst commented on 2015-12-21 21:07
thelinuxguy, thanks for taking over this package!
thelinuxguy commented on 2015-12-21 16:37
Yes, they updated the tarball. I will update this as soon as I get home later today
ClawOfLight commented on 2015-12-21 15:09
I get a checksum verification failure for the tarball. Can anyone else confirm?
thelinuxguy commented on 2015-12-21 11:53
I just updated openlp to the latest version. It runs fine on my machine. Please let me know of any troubles you run into (specifically about dependencies)
thelinuxguy commented on 2015-11-30 13:54
I don't mind taking it over.
I have a working version here (at least I think I do, haven't tested it on a fresh machine yet):
I'm unsure about the dependencies though. I haven't found any offical dependencies collection yet.
The Pkgbuild of jonarnold definietly includes unneeded dependencies that are only useful for development
wrst commented on 2015-11-27 02:50
jonarnold would you like to take over the packaging? Time has not been my friend on getting things updated.
jonarnold commented on 2015-11-20 02:36
I finally found the 2.2.x dependencies () and updated the PKGBUILD. Everything should be correct now.
Same link:
ClawOfLight commented on 2015-11-19 19:39
Okay, thanks :)
I will test it when I have time.
Our church will switch to 2.2.1 for production soon, so it will be good to have it.
Do you happen to know an ETA for the missing dependencies?
jonarnold commented on 2015-11-16 23:53
I have a working PKGBUILD for 2.2.1 here:
Some of the dependencies aren't in Python 3 (beautifulsoup3, pysqlite-legacy), so I'm not sure that everything works, but the program installs and runs. Didn't update the libreoffice dep yet.
ClawOfLight commented on 2015-11-14 21:14
This package has been flagged out-of-date for a while.
In fact, a major new version (2.2.1) has been released.
Are you planning on updating this, or was it a one-time kind of thing? | https://aur.archlinux.org/packages/openlp/?comments=all | CC-MAIN-2016-26 | refinedweb | 830 | 57.47 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
How to avoid Import validation error ?
While uploading CSV file into the Odoo getting this error:
"Unknown error during import: <class 'openerp.exceptions.ValidationError'>: ('ValidateError', u"Error while validating constraint\n\n'iprodstep.log' object has no attribute 'sales_record_number_id'") at row 2
Resolve other errors first"
Help me please, record "sales_record_number" needs to be checked.
If record number exist .... skip it.
Validation script:
@api.one
@api.constrains('sales_record_number')
def _check_sales_record_number(self):
search = self.search(sales_record_number)
#..... if there is "sales_record_number" in any of this states .... raise error.
# Or maybe someone will tell me how should look like a validation formula ? .....
# I'm a newbee so ...
sales_record_number = [sales_record_number.id for sales_record_number in search]
for order in self:
if order.sales_record_number in sales_record_number:
raise ValidationError(_('This order is already in Database'))
Guessing that there is an error in this code.
Any suggestion ?
Or just how to check if there is already "sales_record_number" in odoo.
Rob,
You can not simply check sales_record_number value using constraint, because while creating a every new record, this constraint will run.
So, if, while importing data from file(csv), it gets voilated, then an exception will be thrown at the very first voilation only and as a result of which your import will get stopped.
Thats why inspite of using constraint, you can create your own new object and create a new functionality in that for importing the files using your required validation.
Hope it seems valid for you ..
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now | https://www.odoo.com/forum/help-1/question/how-to-avoid-import-validation-error-90171 | CC-MAIN-2017-26 | refinedweb | 296 | 58.38 |
Some instructions on how to get started with using Mono in Linux environments: from installation to running your first “Hello World!” example.
Open up your terminal and issue the following commands:
Check that your Mono has installed by using the ‘version’ command:
mono --version
We can now test our installation of Mono by writing some code, in this case the proverbial “Hello World!” example. Open a text editor of your choice and enter the following C# code:
using System; namespace MonoExample { public class HelloWorld { public static void Main(string[] args) { Console.WriteLine("Hello World"); } } }
After saving your code sample eg as ‘main.cs’, using the terminal once more, compile the code by navigating to its location and running the command
mcs main.cs
Finally, to run the ‘main.exe’ that gets generated use the ‘mono’ command:
mono main.exe | http://www.technical-recipes.com/2016/getting-started-with-mono-in-linux/ | CC-MAIN-2017-09 | refinedweb | 139 | 52.29 |
Hello,
I have some stored procedure at Microsoft SQL Server, it returns a view ('myview') based on parameters passed:
The stored procedure ('requserinfo') has 3 parameters: @fname; @lname;@locationcode .....
I want to use the procedure to get a user information in the form (something like: FirstName, LastName, Age, Location etc). It is going to be some kind of a search form... I don't have an access to the data source, only can execute the procedure so this is only the way to get the user information....
Hi guys,
Is this possible to change the Log Level for the SQL Datasource connector ?
I think in the code it might write INFO messages (Request and Results ...)
1- Should I make a product update request for it to be set to DEBUG ?
2- If let to INFO, is it possible to change the Logging config for this connector Specifically :
2a - in the Studio ?
2b - on the Server once deployed ?
Thank you for your answers :)
Pierre
Hi,
Following the official documentation and new to ubuntu / MySQL / Bonita I wish to setup a Bonita BPM platform.
I downloaded the Tomcat-bundle and I wish to setup MySQL Database.
But when looking at the official documentation, the Database must be configured, (it is writed literaly everywhere) there is no informations / guide for MySQL database.
Here is the only thing they say for MySQL :
*MySQL
Maximum packet size
Hi,
I use Bonita BPM 7.3.1 local on Windows 7. I m novice at bonita
I have a question. I need to select some data sets from Oracle DB.
I d read a lot of topics at this forum and others.
And I resolve this problem like this:
Then I create in Connectors In new Connector to my base with query , scripting Mode and assign to my process variable (type List) data from query result, by this script:
import java.util.logging.Logger;
import groovy.json.JsonBuilder;
Hi,
I want to know if it is possible to use only one connector in order to execute 2 or more different select querys and store the returned values in differents resultsets.
And also if it is possible how to do it
Thanks. | https://community.bonitasoft.com/tags/sql | CC-MAIN-2021-17 | refinedweb | 364 | 62.78 |
malloc - a memory allocator
#include <stdlib.h> void *malloc(size_t size);
The malloc() function allocates unused space for an object whose size in bytes is specified by size and whose value is indeterminate.
The order and contiguity of storage allocated by successive calls to malloc() size not equal to 0, malloc() returns a pointer to the allocated space. If size is 0, either a null pointer or a unique pointer that can be successfully passed to free() will be returned. Otherwise, it returns a null pointer and sets errno to indicate the error.
The malloc() function will fail if:
- [ENOMEM]
- Insufficient storage space is available.
None.
None.
None.
calloc(), free(), realloc(), <stdlib.h>.
Derived from Issue 1 of the SVID. | http://pubs.opengroup.org/onlinepubs/007908799/xsh/malloc.html | CC-MAIN-2016-40 | refinedweb | 120 | 56.66 |
User:Mu301/Learning blog/Archive
Contents
- 1 Pinned to top
- 2 February 2018
- 3 January 2018
- 4 February 2016
- 5 January 2016
- 5.1 Wikiversity:Year of Science
- 5.2 Welcome and expand considered harmful
- 5.3 Wiki participation by experts and academics
- 5.4 Public humanities
- 5.5 Pending changes
- 5.6 An essay on the philosophy and practice of the "wiki way"
- 5.7 Real-time wiki data
- 6 December 2015
- 7 August 2009
- 8 January 2009
- 9 September 2008
- 10 March 2008
- 11 January 2008
- 12 December 2007
- 13 February 2007
- 14 January 2007
- 15 December 2006
Pinned to top
Monday's learning project image
February 2018
Fizzled
What I learned this month...
- Interconnections
February 2016
Promoting and improving Wikiversity
Note: this started as a discussion on my talk page. I've copied it here to continue the conversation. --mikeu talk 21:23, 8 February 2016 (UTC)
Guy vandegrift
{{ping|Dave Braunschweig|Mu301}} In retrospect, I think I should have answered Miked's comment about "promoting" WV's tolerance for weirdness here and not on the Colloquium. My father was a writer, not a great one, but good enough to coach my mother, who wanted to write a book about my autistic brother. I grew up hearing about what makes for good writing, even though I myself struggled with all my writing courses. The collaborative writing on Wikipedia is at best mediocre, and almost always lacks focus. The articles are encyclopedic. Wikiversity does tolerate parallel efforts, forking, POV, and, in the past, so-called "research" that is unrefereed.
But let me come to the point and argue that you and I are almost certainly in agreement on the important points. You and Dave want a cleaner WV, with less attention to the clutter and perhaps even less clutter. Let me pose and answer a number of questions about your efforts:
- Is it harmful, even if it means deleting pages? ... I say no (and you say no).
- Is it necessary? ...I don't think so (and you perhaps you think it is).
- Is it sufficient? ... I say no (and perhaps you say yes or maybe).
Even if we disagree, none of our disagreements have any relevance because I have no intention of interfering with your efforts, which I consider to be unambiguously beneficial. Now let's reverse roles. I want to do two things: (1) organize student writing on Wikiversity and (2) establish an online refereed Journal that credits authors.
- Are my ideas harmful? Not unless I promote them with too much hyperbole (which I probably do on occasion).
- Are they necessary? I think so, but I am not sure. I just can't think of how a smaller version of Wikipedia can do anything Wikipedia can't do (unless your goal is to make WV "ultra-organized", but good luck with that!)
- Are they sufficient? I have no idea.
A couple of details:
- Most WV resources have single authors, which makes the bylines easy to write. This is not the case with the WP articles. Refereed Journals all have this problem. We can attribute the editor who condenses the WP article, gives it focus, and submits it to the Journal. People can argue and complain about how many original authors deserve to be mentioned. If the Journal is a resource page equal to all others on WV, the complainers can create their own Journal.
- Of the three wikis (WP, WV, and WBooks), WV seems to make the least effort to highlight quality resources. I just say we do this democratically, in keeping with the sole wiki that permits POV in mainspace. In other words, we make the Journal an ordinary resource page, creating an environment where anybody can start a journal.
- I'm still struggling with how the referees will "vote". Perhaps it needs to be secret and off wiki. Don't forget that if people don't like it they can create their own journal.
--Yours truly, Guy vandegrift (discuss • contribs) 04:59, 9 January 2016 (UTC)
Mu301
@Guy vandegrift: and @Dave Braunschweig: - anyone/everyone else is also welcome to jump in
Re: clutter. Yes, I do feel that there are some important reasons for managing, reducing, or otherwise placing constraints on the "clutter." The first reason is to optimize the functionality of our internal search engine. It does little good for a new user trying to find a resource to get pointed to an empty page with a note that says "welcome, please help us create a resource about this {{topic}}." It is human nature to get frustrated and give up when the results are unhelpful. There are even objective (scientific) studies of how to improve the usefulness of a website's search and/or navigation. I'll look into a {{cite}} about that to inform our efforts. But, my gut feeling is that "we've been doing it wrong." (See my blog post about the wormhole page, which may have even damaged our Google search ranking. It certainly didn't help.)
Kipple drives out nonkipple[2] is a phrase used to describe how clutter attracts more clutter. While Wikiversity is perpetually a work in progress... we really need to do more to highlight our Featured resources and improve the Initial experiences that visitors have here. One possibility is to identify a carefully curated short list of the best recent resources and use templates on wikipedia to direct users here. Think of these efforts as an attempt to flip the kipple quote to "quality attracts more quality." But, it's difficult to find quality learning materials when there is an overabundance of kipple flooding the search results.
These are just a couple of my thoughts on this; I'll write more about it as I think it is an important topic to explore. --mikeu talk 22:17, 8 February 2016 (UTC)
Guy vandegrift (redux)
Comment added after I wrote this: Oops! I somehow found myself on this page and noticed the January 8 date. I failed to note the year. I have accidentally continued with a discussion we had almost exactly <s>one</s> TWO year(s) ago!!! Sorry. --Guy vandegrift (discuss • contribs) 02:46, 8 January 2018 (UTC)
- Your concern about Google search is correct. That is how I search the wikis, and I have noticed a distinct prejudice against Wikiversity, even though some of our resources are better than their Wikipedia counterparts. I am too busy to find a specific example, but if you ever need some, I can find you several in the field of physics education.
- The idea of highlighting quality resources is good, but only if we think strategically. Our goal is to impress the maximum number of potential "customers", and focus on what they call the "high-end" customers (i.e. those with good taste.) To maximize this impact, we should keep in mind the following:
- Most of our these high-end customers won't go to the Main Page or any similar resource because they will not expect to see anything of interest. Is it possible to put a banner announcement at the top of each page they see, not unlike those used by Wikipedia requesting donations?
- Our target clientele consists of specialists in wildly diverse fields. There is little chance that we will pick something of interest to any given reader. Therefore, a mathematics resource needs to look attractive to history buffs, and vice versa. I once heard that the friendliest referees of a submitted paper are those who are close enough to the field to partially understand the paper, but not so close that they can see the faults.
- While we want to impress our readers with the fact that we recognize the need to clean up, we should present only one or two of our best resources. For that reason, I suggest only one or two links. One would be sufficient, but two would emphasize that there is more than one quality resource. The links need to be rotated as often as possible, but not so often that we display anything but the best and most attractive. But, let's not burden them with a long "reading list".--Guy vandegrift (discuss • contribs) 02:30, 8 January 2018 (UTC)
- Welcome back! Let's continue the conversation that we started so long ago.
- No one but The Google truly knows exactly how their search results get ranked. But, I suspect the so-called "prejudice" that you've noticed is a direct consequence of the large quantity of low quality pages that we host. I'm simplifying it a bit, but that is basically how site ranking works. It ain't rocket surgery to make that connection. You keep asking "what's the harm of those pages?" The answer is right in front of us. See the graph at right? I just downloaded those datapoints from the Google Webmaster dashboard. Google has cut the number of our pages that it is indexing to ½ of what it was just one year ago. I project we'll hit zero sometime around the middle of the fourth quarter 2018. How much longer do you want to continue debating value of these pages to our mission?
- We had about 225 clicks to our entire site from Google search in the last 90 days. That is a rather pathetic average of 2.5 per day. Ohio Youth Problems, Functioning and Satisfaction Scales (Ohio Scales) appears to have shown up the most in search results with 250 something impressions, but with only 47 clicks during that time frame. Our other pages don't come close to that. That one page accounted for 1/5 of our incoming search traffic. Google doesn't even register any external links to our site. Apparently there are too few for Google to even bother counting them. It's like we've been sucked into a wormhole where no activity can reach us.
- As to your bullet points:
- That is probably true, but still our main page should be up to date and showcase our best work. Someone landing at Parkinson's Symptoms (one of the most linked to pages[3] last time I checked in 2015) might then click on the Main page to learn about our site. At some point we're probably going to need to make some changes anyway. There are a couple of things that break on mobile devices.
- Thinking... I'm not sure this is an urgent consideration.
- I've been playing around with generating dynamic content. See "Today's featured resources" in Public humanities for an example. There are seven sets of three quotes which rotate depending on the day of the week. Instead of flooding the reader with two dozen examples, it selectively displays carefully chosen pull-quotes. This is one way to address your last concern and it also "spreads the load" of page views across multiple pages. I used the same {{tl:switch}} function that the Main page uses to rotate Featured content.
- Any ideas on bringing our featured resources to the front of the store would be greatly appreciated. --mikeu talk 07:16, 8 January 2018 (UTC)
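The day-of-week rotation described above can be sketched in wikitext. This is a minimal illustration, not the actual markup of either page; it assumes the ParserFunctions extension is available and uses hypothetical quote subpages named for each weekday:

```wikitext
{{#switch: {{CURRENTDOW}}
 <!-- {{CURRENTDOW}} is a magic word: the day of the week as a number, 0 = Sunday -->
 | 0 = {{Public humanities/Quotes/Sunday}}
 | 1 = {{Public humanities/Quotes/Monday}}
 | 2 = {{Public humanities/Quotes/Tuesday}}
 | 3 = {{Public humanities/Quotes/Wednesday}}
 | 4 = {{Public humanities/Quotes/Thursday}}
 | 5 = {{Public humanities/Quotes/Friday}}
 | 6 = {{Public humanities/Quotes/Saturday}}
 | #default = {{Public humanities/Quotes/Sunday}}
}}
```

Each visitor sees only the quote set for the current day, which keeps the page short while spreading views across seven subpages. The subpage names here are placeholders rather than existing Wikiversity pages.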
- Update: in 2009 no resource on our site had more than 400 hits. The Main Page was an order of magnitude more popular.[4]
- 40,581 [32.98 %]: Special:AutoLogin
- 4,465 [3.63 %]: Wikiversity:Main Page
- 1,043 [0.85 %]: Special:Search
- 392 [0.32 %]: Favicon.ico (incl. icon requests)
- 315 [0.26 %]: Wikiversity:Browse
- I don't have more recent data, but clearly it is a high priority given the attention it receives. One out of every 30 page views hit the Main Page. --mikeu talk 17:08, 8 January 2018 (UTC)
- More recent update from Wikiversity:Statistics/2017/12.
- 112,704 - Wikiversity:Main Page
- 52,907 - Special:Search
- 22,971 - Special:CreateAccount
- 19,280 - Principles of Management
- The main page is still getting an order of magnitude more activity than any other page. --mikeu talk 00:40, 11 January 2018 (UTC)
- With those hit numbers on the Main Page, we definitely need to ensure that that page is of high quality, no argument from me there. I surmise that your interest in Wikiversity is not based on what Wikiversity is today, but what it could be. That certainly describes my interest. If that is true, our goal is not so much to promote Wikiversity, but the use of wikitext in a manner that extends beyond its use on Wikipedia.
- As you do your Google investigations, you might want to compare Wikiversity with the w:Wikipedia:Education_program. It seems to be a parallel effort that is aimed exclusively at Wikipedia. I do not see them as our competition, but as collaborators working to achieve the same goal, which IMHO is to promote wikitext. My semi-humorous reference to "Making Wikiversity Great" actually referred to a strong belief that the academic world should be largely based on wikitext. The ability to store all edits and include sister-links to vocabulary (and perhaps someday to quality-focused articles on any given subject) is only part of my enthusiasm for wikitext. I also like the convenience with which the CC licensing and the simplicity of its markup language permit one to recycle and reuse the work of others. My (and possibly your) goal is to "Make wikitext ubiquitous" (link to wiktionary not intended to help you with your vocabulary, but to emphasize how wikitext permits us to write in higher-level prose that does not require us to stop and explain every word or topic to which we refer.)
- I see our poor Google stats as more of a symptom than the disease itself. The disease is that we have failed to make Wikiversity useful. That is why my initial interest in the "cleanup" was based more on dividing Wikiversity into useful and useless portions. I am, however, coming around to the idea of simply cleaning it up. --Guy vandegrift (discuss • contribs) 02:09, 11 January 2018 (UTC)
┌─────────────────────────────────┘
Yes, I've been in some contact with WikiEd, but we really should collaborate more with them. They are very focused on wp, specifically getting students to write about topics that most regular contributors don't, like women in science or the history of third world countries. I wholly agree that the Google statistics are a symptom, rather than the problem. However, improving our visibility with search engines could draw more attention and participation on our site leading to growth. The current situation is detrimental to that. --mikeu talk 04:20, 11 January 2018 (UTC)
- I agree, but have inserted a much needed comma , into your nearly flawless prose because I believe they do like women ; ) Guy vandegrift (discuss • contribs) 09:33, 11 January 2018 (UTC)
January 2016
Wikiversity:Year of Science
Announcement
The Wiki Education Foundation is about to launch Wikipedia Year of Science 2016. This could be a great opportunity to expand science resources here at Wikiversity. Please share your thoughts at Wikiversity:Year of Science 2016. --mikeu talk 14:05, 2 January 2016 (UTC)
Update
See projects and events at Wikipedia:Year of Science to get a sense of possible activities that we could work on during the year. --mikeu talk 12:22, 31 January 2016 (UTC)
Welcome and expand considered harmful
The template was called {{Whas}}, which was short for "Welcome Header And Search." There was once a belief here that our greatest weakness was a lack of pages. Thus began a movement to bulk-generate a very large number of stub articles. Most of them only contained a couple of links to Wikipedia and maybe a sentence or two. A very large number were empty, containing only the template and no content. There were a number of people who argued that if we just had enough of these it would be so enticing to new users that they would edit the stubs and flesh them out. Here's the complete listing of edits made to a typical example called Wormhole:
- 2 September 2008
- 2 September 2008
- 10 January 2009
- 17 June 2009
- 15 July 2009
- 25 August 2009
- 15 December 2009
- 27 December 2017 deleted (No educational objectives or discussion in history)
Most of the edits are wikignome maintenance like adding a category or removing a broken link. There are about 100 pages using this template that contain very little except for boilerplate. There are hundreds more that use {{we}} "Welcome and Expand" like Topic:Metaphilosophy. These pages haven't been edited since about 2009. The presence of these pages really didn't do much for the development of content here and it makes searching very difficult. cf w:GOTO Considered Harmful --mikeu talk 02:14, 3 January 2016 (UTC)
- Notes --mikeu talk 16:26, 5 January 2016 (UTC)
- See my comment at the bottom of Wikiversity:Community_Review/User:Wikademia#New_discussion (17 July 2009)
- A bot tagged the empty pages for speedy Wikiversity:Notices_for_custodians/Archive/3#Candidates_for_speedy_deletion (5 September 2009)
- A very small number were eventually turned into stub resources; many still remain devoid of content except for the template search buttons. See pages that link to {{Welcome and expand}} and {{Welcome header and search}}
Wiki participation by experts and academics
Survey
I really need to reexamine Why academics do and do not participate on wikis. There are slides from a presentation describing the results of a expert participation survey but it is a bit lacking in detail. --mikeu talk 23:59, 4 January 2016 (UTC)
A medium for scholarly publication
I found an interesting reference to the use of a wiki as a medium for scholarly publication. I've also been updating User:Mu301/Refs which lists research on or about wiki use. --mikeu talk 00:25, 31 January 2016 (UTC)
- Black, Erik (2008). "Wikipedia and academic peer review: Wikipedia as a recognised medium for scholarly publication?". Online Information Review 32 (1): 73-88. DOI 10.1108/14684520810865994. Retrieved 2016-01-30.
Public humanities
It was a pleasant surprise to (re-)discover The Crafting Freedom Project. It is a wonderful exercise to introduce teachers to the public humanities and involve students in discovering the relevance of history to their own lives. There are at least a couple of resources that fall into the Category:Public humanities but there is no organizational structure to bring them together. I'll need to put some thought into creating an introduction to the topic. I also learned that wikipedia:Public humanities is in sad need of references, so I've decided to adopt the article. --mikeu talk 21:27, 25 January 2016 (UTC)
Pending changes
I'm now a Wikipedia:Reviewer so I'm familiarizing myself with Wikipedia:Pending changes. The FlaggedRevs extension to MediaWiki could be very useful here to give stability to resources created by educators. For example, a grade school teacher may not want to risk a student seeing age-inappropriate vandalism. Another instance would be controversial topics like politics, sexuality, or religion, which are frequent targets of vandalism. Unlike a permalink, the "safe" reviewed page would be the default landing for any visitor until a reviewer approves the recent edits. (A click on the "View history" tab shows edits that are pending review.) Currently I'm watching some biographies that include:
Also w:Milky Way (the galaxy, not the w:chocolate bar). I'm guessing these are "test edit bait" given the prominence of the subjects in elementary education (or interest among students of that age for the last one;) Do school children look up the articles and scribble on them??? Other articles are an obvious source of contention such as w:Climatic Research Unit email controversy and w:Creation–evolution controversy. The system works quite well and is a better tool than semi-protection in some cases. --mikeu talk 22:48, 25 January 2016 (UTC)
An essay on the philosophy and practice of the "wiki way"
I feel like we need to have a conversation about anti-vandalism and the tools to prevent it. There seems to be some misconceptions about the use of page protection and other features.
The basic design of MediaWiki software includes features like Undo and Rollback for occasional vandalism, and the Block for repeated vandalism from a single source. This is the appropriate response in the vast majority of cases. I've had unprotected resources that have only seen 3 vandalism edits in 9 years. The number of productive contributions to the same page from anonymous editors was greater than that. Should that page be protected just because I'm annoyed by pushing the rollback button once per three years? We have Curators, Custodians, and many other members of our community (both at Wikiversity and globally throughout the WMF) who watch our recent changes, look for this activity, and remove it. The system works very well.
Page protection is primarily intended for situations where vandalism is high profile (like the Main Page) or where a single edit could affect a large number of pages (like a heavily used template.) Wikiversity does have some unique needs that differ from other projects. I can see a final accepted paper to one of our journals being "frozen" by page protection, perhaps with a second editable copy that is not marked "reviewed." This could also be accomplished by uploading a PDF file stamped "reviewed" and leaving the wiki page unprotected... There are a variety of ways this could be accomplished. Suggesting page protection as the first, and only, mechanism seems IMHO to embody a lack of imagination.
When I hear contributors casually suggesting page protection on a routine basis I shudder. This runs contrary to our WMF approved mission statement:
." Approved Wikiversity project proposal, 30 July 2006
The Wikiversity Proposal (which btw is indef protected as it is both an historical page and in a sense a kind of legal document) places great emphasis on Learning groups and cautions against excluding anyone from joining these groups. The creation of Wikiversity was conditional on this premise. The WMF core values include the idea that the best way to generate knowledge is to ensure that participation is as broad as possible. You might be able to convince some of the local community that this is a good idea, but trying to sway the members of our governing Foundation that we are on the right track is like handing out panda sandwiches at a PETA convention. Many of them (myself included) are fanatically committed to the radical notion that "Information wants to be free" and that everyone can contribute to creating a world "in which every single person on the planet is given free access to the sum of all human knowledge."[5]
I've been investigating the FlaggedRevs extension to MediaWiki. This solution could be the best of both worlds. Anyone could edit a page, but the edits are held in limbo and not visible by default until a Reviewer approves or rejects the edit. There are a number of details that we need to work out before requesting this extension. There are certain switches that determine default behavior and we would need to define how we want it to work before submitting a request. I would like to suggest that others try out Wikipedia:Reviewing to learn the details of how it works. The page also has instructions on how to request Reviewer status at WP.
One other note is that Page Protection and Flagged Revisions are not an all-or-nothing solution. Both include an expiry time, which should be used if we expand the use of these tools. There is no reason to indef protect a typical lesson beyond the term of a semester or school year if it doesn't repeat the following year. I can see indef protection on quiz questions or an exam study guide as this has real-world consequences. In many other cases a temporary Flagged Revisions setting might be better suited to these tasks. Keep in mind that "indef" means "for all of eternity." In essence, you are protecting the page for a duration of decades or centuries. Is this reasonable? I've seen pages where an instructor swore that indef was necessary only to see them leave the project after 5 years.
On a personal note: as a scientist I find the idea of Evidence-based policy very appealing. Start with an objective look at the problem that we are trying to solve. Is there evidence that it really is a problem? How big of a problem is it? What methods have a proven track record of solving similar problems? Tossing out solutions that are in need of a problem (the shotgun approach) is an inefficient use of our time. --mikeu talk 19:22, 26 January 2016 (UTC)
Comment by User:Guy vandegrift
I am coming around to your viewpoint, and certainly retract the suggestion that we use "page protection" as a selling point to bring in Curators. At the time, I was looking at it from the recruitment angle, which asks, "Is there anything about new Curator status that will bring teachers into Wikiversity". The answer was page protection, and in retrospect, I overplayed that card.
I also explored page protection as a way to introduce a bit of "individualism". After you (User:Mu301) convinced me that page protection was an inappropriate way to achieve this, I decided to create an online journal. Then, I was delighted to see that Wikiversity Journal of Medicine had already pioneered exactly how to create such a journal. The Second Journal of Science has only one protected page, and if the community so requests, I could unprotect that page.
I like your essay, and think the next step is to categorize instances where page protection is required:
- High volume pages where even a few minutes of vandalism would do harm, and where it is plausible that leaving the page unprotected is the higher risk. (I emphasize plausible because I have no evidence in this regard).
- An active course where disgruntled students could use vandalism to hurt their fellow students' education in order to cover their own inadequacies. This is something Wikimedia should take seriously because there is no evidence that this won't happen (except that it hasn't happened to me in the past 2 years).
- Sensitive data where there is no need to ever edit. I am speaking, for example, about this steam table. This and the next item clearly represent weaker claims. In the previous two, harm was done by not protecting. But in this case, the argument is that since there is no reason to edit, we should forbid editing. This is a weaker justification than one that points to actual harm (because I believe that steam tables are a low traffic item).
- Another "debatable" reason to page protect involves large amounts of text that the writer is too busy to watch carefully. I am trying to write an open source Quizbank of exam questions. Let's consider the extreme limit: Suppose, hypothetically that it was necessary for each question to have it's own page. That would place almost a thousand of pages on my watchlist. In the case of Quizbank, the solution is to many questions on each quiz, which can keep my watchlist to a managable size.
It is highly likely that I left out essential examples. For example, both of our Wikiversity Journals (WJM and SJS) store permalinks to the checked (accepted) versions of the submitted article. It would be tempting for a contributing author to make a well-intentioned edit to "upgrade" a checked version with corrections. But routinely allowing such practice could hurt the journal's reputation.
Another example of using page protection against well-intentioned edits involves my current effort to have students improve the quality of my exam questions. I have invited students to submit corrections, not to the current version of the quiz, but on a special subpage devoted to each question. With 65 students making such corrections, it is inevitable that one of them would get confused and edit the actual question. For an example of this, see question 7 of this page-protected quiz, and this sample of a recent student's effort to improve the quiz.
Recommendations:
- We need to carefully vet candidates for Curator status to verify that they understand that routine page protection needs to be avoided.
- Curators and Custodians who page protect should welcome audits of their protection policies. Looking over my pages, it would be possible for me to reduce the number of pages that I protect. I only recently realized that
[[Special:Permalink/####]] permits permalink addresses that are easy to edit. It would be possible for me to unprotect every quiz in Quizbank and secure my versions of them with a single page-protected page full of permalinks. In fact, this index page could be in my userspace. Updates are as simple as updating the permalink "oldid". I would like to first beta-test having 65 student editors on Wikiversity using protected pages before I open everything up to edits.
- Keep in mind, that we at Wikiversity are following the spirit of Wikimedia's "make information free", but at the same time are exploring new ways to do it. It might be necessary to alter the traditional approach to page-protection, just a bit.
--Guy vandegrift (discuss • contribs) 00:38, 27 January 2016 (UTC)
- I'm the fanatical open source zealot that your mother warned you about ;) But, I'm also very open to new ways of doing things. FlaggedRevs is the tool of a Wiki Wizard. Not as clumsy or random as a permalink; an elegant tool for a more civilized age. I'm very interested in discussing ideas as to when and how certain tools can benefit the development of learning resources. I am in agreement that quizzes and raw data really shouldn't need to be edited, and the consequences of a student using incorrect information in an exam are serious. I'll discuss this at length later. --mikeu talk 03:03, 27 January 2016 (UTC)
Evidence that Wikipedia is loath to page protect even when it is "reasonable" to do so
Both your "fundamentalist" open source viewpoint, and my "liberal reformist" take on this issue can be seen in the fact that Wikipedia routinely bypasses the issue with educational dashboards like w:Wikipedia:Wiki Ed/Wright State University/Introduction to Astronomy (Spring 2016). They do not page protect these educational pages. Instead the dashboard routinely uploads a refreshed version from wikiedu.org. There are many pages that use this backdoor approach. See: w:Special:Prefixindex/Wikipedia:Wiki Ed
I did a harmless test edit on my own page, and there really is no page protection. This refusal to protect essentially uneditable pages shows how loath they are to page-protect. I am not fond of this because you need to do "markdown" on the website, which then "marks up" to wikitext, and the markdown is just one more skill I wish I didn't have to learn. My favorite way to page protect is now that extension you propose which allows edits to be first checked and approved (I forget what it's called). --PS: My calling you a "fundamentalist" and myself a "liberal reformer" is meant to be taken very narrowly. I was not speaking about politics, of course.--Guy vandegrift (discuss • contribs) 13:06, 2 February 2016 (UTC)
- I also feel that the FlaggedRevs extension could be an optimum solution. It was specifically created for Wikipedia articles that are not watchlisted by very many people and are at risk of test edits or vandalism going unnoticed for a long while. Many of our older resources fall into this category. I don't see any point in protecting resources that are actively being worked on as any unproductive changes (and they are few and far between) will get quickly noticed and reverted. There are exceptions, of course. Some of the Main Page linked pages are such frequent targets that it is a burden to revert often. In general I am quite surprised to see how rare vandalism is on this site compared to just a few years ago. Wikimedia seems to have taken some significant steps in squelching this activity globally which reduces our burden. Yes, I got the sense of the terms you used ;) --mikeu talk 15:29, 2 February 2016 (UTC)
I agree generally with a low need for protecting resources or talk pages. Spamming of the Victor Hugo quote, apparently induced by a link from our main page and arriving at a rate of about 1 every 3.6 months, was stopped by protection. Our Physics lecture was receiving IP vandalism requiring 3 reverts per hour. This persistence was stopped by indef protection for some months and has not reoccurred with protection removed. Indef is not infinity. From a negotiation point of view, a persistent vandal will use a definite protection expiry as a point of return, unless distracted in some other way. More important are tools for "educational moments" such as w:WP:AGF and not w:WP:BITE'ing the newcomer, to turn uncivil, persistent, or deletionist IPs into positive contributors. I hope this helps. --Marshallsumter (discuss • contribs) 21:21, 1 February 2016 (UTC)
- Thank you for joining the conversation.
- Victor Hugo is part of the Main page learning project/QOTD which I have recently sought to revive. The idea was to create a kind of Honeypot to deflect vandal prone editors to a safe space that was on the watchlist of a number of contributors who volunteered to keep up with the traffic. This lapsed while I was on wiki break as no one else was watching it. The original plan was to engage anons in the way that I've been describing to educate them about what we do and entice them to make positive contributions. I won't claim that the success was overwhelming, but overall I was pleased with the contribs and it wasn't much work. There's no longer a need to protect those QOTD pages as I'm again watching them closely. As a precaution I've redirected (and protected) the mainspace pages to the talk to prevent search engines from indexing test edits or spam.
- Slight point about wording. "3 reverts per hour" implies an average over a sustained period of time. I only see a total of 3 incidents of vandalism in 8 years which averages about 1 revert per 2.3 years. (not counting page protected time.) All 3 edits occurred within an hour indicating that the ip should have been blocked. I would've expected that to stop disruption to the resource without a need for protection. It is disingenuous to claim that the page protection solved the problem when it is obvious why the vandalism stopped; a single day block did the trick without the need for "some months" of protection. There were no vandalism edits here or globally from this ip after the block expired. Many vandal edits, like this example, are "drive by" scribbling. While there are some persistent attempts that I've seen it is more often the exception, than the rule.
- I just don't see compelling evidence in the revision history or logs that support the idea that protection is necessary or useful. The examples that I've looked at indicate otherwise. --mikeu talk 22:48, 1 February 2016 (UTC)
- Just FYI but the Victor Hugo quote resource page (now a redirect to the talk page) is still page protected like my Ice cores resource page (now a redirect) was. --Marshallsumter (discuss • contribs) 03:24, 2 February 2016 (UTC)
- As I have already explained in detail above, the QOTD pages are demonstrated to be at high risk of vandalism. To prevent spam or test edits from getting indexed by search engines I have redirected to the talk page and protected the redirect from editing by anons. The QOTD pages were created specifically to attract vandalism in a central location where it could be dealt with. Ice cores is not linked prominently from the Main Page, so it is not even close to being at the same level of risk, as is evident by looking at the page history. Also, please include an edit summary when using page protection. It makes it difficult for others to review actions later if there is no rationale for why it was done. --mikeu talk 15:21, 2 February 2016 (UTC)
Real-time wiki data
Version 1.0
I'm experimenting with a system for automatically uploading scientific data in near real-time to a wiki page for use in exercises where students analyze the data. The project uses a variety of instruments mounted in weatherproof enclosures on the roof of Ladd Observatory:
- Davis Weather Station; live data at
- Boltek Storm Tracker for detecting lightning strikes; live data at
- SBIG All Sky "meteor" camera for monitoring sky conditions such as cloud cover or haze; live data at
- Unihedron SQM-LE sky brightness meter for measuring urban w:light pollution; live data at
In the past I've manually uploaded some of this data to SkyCam or other pages. Now I'm trying to include dynamic scientific data into lessons. To implement this Mu301Bot runs on the same webserver that collects the data from these instruments and stores the images or generates real-time graphs/maps. I'm currently running a test where hourly samples of sky brightness measurements are added to User:Mu301Bot/nelm when new data is available. I'm also looking into the possibility of formatting the uploaded data points in mw:Extension:Graph format.
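The hourly update loop described above might be sketched roughly as follows. This is not the actual Mu301Bot code: the subpage name is taken from the text, but the function names, the wikitext line format, and the "skip if no new data" rule are my own assumptions for illustration (a real bot would also need an authenticated MediaWiki edit call, omitted here).

```python
import datetime

# Target subpage named in the project description above.
PAGE = "User:Mu301Bot/nelm"

def format_sample(ts: datetime.datetime, mag: float) -> str:
    """Render one sky-brightness sample as a wikitext bullet line."""
    return f"* {ts.strftime('%Y-%m-%d %H:%M')} UTC: {mag:.2f} mag/arcsec^2"

def append_sample(page_text: str, ts: datetime.datetime, mag: float) -> str:
    """Append a sample only when it is new data; otherwise leave the page as-is."""
    line = format_sample(ts, mag)
    if line in page_text:  # no new measurement since the last run
        return page_text
    return page_text.rstrip("\n") + "\n" + line + "\n"

if __name__ == "__main__":
    now = datetime.datetime(2016, 2, 1, 4, 0)
    print(append_sample("== Sky brightness ==", now, 18.73))
```

The "only when new data is available" guard matters because the bot runs on a schedule whether or not the instrument produced a fresh reading.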
It is also possible to add image files from the sky camera automatically, perhaps for a special event like a Lunar eclipse or the May 9, 2016 w:Transit of Mercury. This could be used for a "real-time lab" where remote students work on the data while it is being collected or analyze it afterwards. Current images of the transit of Mercury could also be incorporated dynamically into a wikinews: story or a wikipedia: article.
Another possibility is weather events such as a severe thunderstorm. When a storm is detected the bot would edit a page that interested students have on their watchlist to alert them to a live data upload such as a map of lightning activity or a graph of wind speed during a hurricane. These meteorology uploads would be triggered by a threshold such as the w:Saffir–Simpson hurricane wind scale.
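A threshold trigger like the one just described could look like this sketch. The Saffir-Simpson wind thresholds (in mph) are the standard published values; the function names and the "alert at category 1 or above" rule are assumptions, not part of the project as described.

```python
# Saffir-Simpson hurricane categories by sustained wind speed (mph).
SAFFIR_SIMPSON_MPH = [(157, 5), (130, 4), (111, 3), (96, 2), (74, 1)]

def hurricane_category(wind_mph: float) -> int:
    """Return the Saffir-Simpson category, or 0 below hurricane force."""
    for threshold, category in SAFFIR_SIMPSON_MPH:
        if wind_mph >= threshold:
            return category
    return 0

def should_alert(wind_mph: float, min_category: int = 1) -> bool:
    """Decide whether the bot should edit the watchlisted alert page."""
    return hurricane_category(wind_mph) >= min_category

if __name__ == "__main__":
    print(hurricane_category(120))  # prints 3
```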
There are a number of possibilities that could be implemented. This could be of use to schools or colleges with limited resources to teach lessons using data from instruments that are too expensive to purchase for student use. It could also be used more generally for astronomical or meteorological w:Template:Current events. This is an extension to the Observational astronomy project that I started many years ago. --mikeu talk 19:36, 28 January 2016 (UTC)
Version 1.1
(Live image caption) Last image filename: 00002000.FIT; exposure started: 2018-09-22T05:23:32.639 UTC; exposure time: 10 seconds.
I've rewritten User:Mu301Bot/nelm to include an image that is dynamically updated once per day. I'm not sure what the ramifications are for a File: that will eventually have 365 revision updates per year. I'm not sure if anyone has ever dealt with this issue in Wikimedia before. Another option, for a different purpose, would be to encode the timestamp with a prefix in the filename. But here I want the page to automatically show the latest image and caption without editing the file link. I'm conducting a live test of near real-time uploads of images from a telescope. (Note: Check back tomorrow to see the most current image taken at about 2 a.m. local time. It will either show stars or cloud cover.)
I've also encountered the issue that a page to which data points are appended grows without bound. Currently the data in the subpage Mu301Bot/nelm/data only includes the last 50 measurements and is overwritten each hour, but only when new data is available. A possible solution is subpages of the form PageName/YYYY to create multiple archives by date. The MediaWiki graph extension is complicated. I'll have to rewrite the scripts to parse the data into the correct format.
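The two bookkeeping rules above (keep only the 50 newest measurements on the live subpage, and route older points to per-year archive subpages of the form PageName/YYYY) can be sketched as below; the names are illustrative, not the bot's actual code.

```python
from collections import deque

MAX_LIVE_POINTS = 50  # the live subpage keeps only the newest measurements

def rolling_window(points, limit=MAX_LIVE_POINTS):
    """Given points ordered oldest-first, keep only the newest `limit` of them."""
    return list(deque(points, maxlen=limit))

def archive_page(base: str, year: int) -> str:
    """Per-year archive subpage, e.g. 'Mu301Bot/nelm/data/2016'."""
    return f"{base}/{year}"

if __name__ == "__main__":
    live = rolling_window(range(120))
    print(len(live), live[-1])  # prints: 50 119
    print(archive_page("Mu301Bot/nelm/data", 2016))
```

Using `deque(maxlen=...)` means the oldest points silently fall off the front, which is exactly the "overwritten each hour" behavior described.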
I'll need to be careful with the particulars of lessons that use live data to take into account that there might be an interruption of uploads or long gaps where there is no new data. It is important to phrase the captions to avoid language like "today's image" given that for any page view the data might be stale. It will also be tricky to write lessons where I can't know ahead of time what the correct answer to an exercise is.
I'm also trying something new with my User:Mu301 page which now incorporates the "switch" feature used by {{QOTD}} to rotate a different image once per day. This could be used to feature one of 7 different learning projects that I'm currently working on each day of the week. Currently it just shows a gallery of images that I've contributed to Commons.
I'm excited about the possibilities of near live data in science lessons. We've come a long way since the w:mimeograph handouts of my childhood. Today's generation of students expect more from educational activities and we now have the technology to aid us in generating richer and more ambitious educational content. --mikeu talk 19:14, 1 February 2016 (UTC)
Version 1.2
The local copy of the image has been temporarily deleted to allow a Commons version of the file to appear here live. I've requested the bot flag there and I started running some automated upload tests today. The skies are clear tonight so I'm uploading twice per hour instead of just once per night. --mikeu talk 03:29, 10 February 2016 (UTC)
Version 1.3
The camera is currently offline and needs some work. It might be time to consider replacing it with a more modern camera. The images are showing artifacts such as "hot pixels" probably from exposure to the elements. Overall, I consider the project to be a success. I've learned a lot which will allow me to design a much better system the second time around. --mikeu talk 00:23, 3 January 2018 (UTC)
December 2015
Many of the Observational astronomy resources are in need of update. There are many broken links and unfinished lessons. They are also somewhat lacking in clarity. I'm a bit disappointed that just about the only activity after all these years has been vandalism. I'll need to rethink the participatory aspects of these projects when creating new resources. --mikeu talk 14:52, 27 December 2015 (UTC)
August 2009
- Time to give this learning blog thing another try. There have been many delays, but things are finally in place for me to move forward on the SkyCam project. That will be a high priority for this coming semester. There is also the citizen science project to observe the eclipse of epsilon Aurigae that I'll be working on during the next two years, both on and off wiki. --mikeu talk 00:03, 24 August 2009 (UTC)
- Very interesting results from a citizen science project called Galaxy Zoo. --mikeu talk 17:30, 29 August 2009 (UTC)
January 2009
- I've been nominated for the 2009 WP:DICK of Distinction awards pageant. [6] --mikeu talk 15:37, 6 February 2009 (UTC)
September 2008
- At the bottom of every page there is text that reads: "All text is available under the terms of the GNU Free Documentation License (see Copyrights for details)." The page that gives details, however, is a proposed policy. We might want to change the link to a WMF copyright page until we make our own policy official. It doesn't look good to link to a copyright page that basically says "maybe this is our policy." --mikeu talk 00:04, 23 September 2008 (UTC)
- Thought of the day: consensus and the zeroth law of robotics --mikeu talk 13:48, 24 September 2008 (UTC)
- Philosophical Question #3: Is the individual greater than the community?
- That depends on the nature of the community. In a combat unit fighting for material gains, it is customary to sacrifice a chess piece for the sake of winning the game. In a spiritual community, saving the individual is an overarching goal. In an educational community, the dilemma is comparable to the one faced by Mrs. Zajac in Tracy Kidder's book, Among Schoolchildren. In that story, the teacher spent an enormous amount of time and energy trying to save two problem children (Clarence and Robert). In the end, Mrs. Zajac failed to reach them, and so everyone lost, including the other children in the class who were neglected while Mrs. Zajac spent so much time on a futile effort to reach Clarence and Robert. [7]
- Reminder to self: check the links in Special:Statistics and find the MediaWiki: page to edit it. --mikeu talk 23:39, 26 September 2008 (UTC)
- Reminder to check up on students who appear to be creating a project at wikipedia. --mikeu talk 17:36, 29 September 2008 (UTC)
- The Association of American Colleges and Universities will be holding a conference in Providence called Engaging Science, Advancing Learning: General Education, Majors, and the New Global Century. --mikeu talk 21:17, 29 September 2008 (UTC)
March 2008
- I have been rather busy with real world projects lately and have not done as much editing and content creation here as I would have liked. The projects that I have been working on involve education and outreach through the observatory. There are a number of programs that include teacher training workshops and bringing science into the classrooms of local schools. Once these projects are underway I intend on using wikiversity pages to develop and distribute lesson plans and activities. There will also be some online collaborative learning. Here are some of the projects that will soon be developed: Weather station, new pages related to Observational astronomy, and SkyCam. Hopefully I will be able to start working on these within the next few weeks. Things are starting to come together and will probably develop quickly once they get off the ground. I have also been confirmed as custodian and recently nominated for bureaucrat. --mikeu talk 15:07, 9 March 2008 (UTC)
- Next week I'll be attending a workshop on wiki and education. Here's the description:
Enhancing Student Engagement: the WIKI Collaborative Workspace
"Wiki software is a powerful yet flexible communication tool for collaborative work. A wiki website can be viewed and modified by anybody with a web browser and access to the internet. Its ease and flexibility has resulted in broad adoption. Wikis can address a variety of pedagogical needs such as student involvement, group collaboration, and asynchronous communication. Since wikis reside on the internet, students can access and participate from any location. Faculty and students can engage in collaborative activities that might not be possible in a classroom. The following faculty panel will share with us what they have learned from their experiences and help us formulate best (and worst) practices."
- I am currently testing Unified Login and Flagged Revisions. Please see (and edit) my test page. --mikeu talk 13:55, 1 April 2008 (UTC)
January 2008
- As mentioned in my Dec. 2007 blog I'll be working with teachers in the Rhode Island area as part of an outreach project through Ladd Observatory. I'll try to develop the Wikiversity:School and university projects page as this progresses. I am also trying to coordinate with the Wikipedia projects at w:Wikipedia:School and university projects and w:Wikipedia:WikiProject Classroom coordination plus some notes that I left on individual wikipedian talk pages. The w:Wikiversity page really needs some work. Mirwin and I started User:Mu301/Sandbox to improve the references in that article. --mikeu 16:20, 21 January 2008 (UTC)
- I've started a planning page for the outreach project that I am working on. This will be developed more in the next month or so, and then spun off into mainspace. One component already has a mainspace project coordination page at SkyCam and a domain on the Sandbox Server at --mikeu 16:35, 21 January 2008 (UTC)
- I am now a probationary custodian. Feel free to provide feedback on my talk page or comments at Wikiversity:Candidates for Custodianship/Mu301. See also: Wikiversity:Custodian feedback. --mikeu 16:43, 21 January 2008 (UTC)
February 2007
- The Systemic project now includes a wiki at 207.111.201.70. At the request of one of the admins there I created a Wikiversity stub article that points to the Observational astronomy/Extrasolar planet page here.--mikeu 01:44, 4 February 2007 (UTC)
January 2007
- It has been a while since I posted here. I've been busy making telescope observations for the supernova page (which is still under development.) I also added a supernova related activity section to observational astronomy and created a new page on extrasolar planets. These two pages are now a featured learning resource at Portal:Physical Sciences.--mikeu 18:20, 27 January 2007 (UTC)
- Take a look at the image at right. The bright star near the center goes by the rather unimaginative name of HD 209458. There is something about this star that is quite remarkable. You can't tell by looking at the image, but this star is home to a planet (which goes by the even less imaginative name of HD 209458 b). It is one of more than 200 planets that have been discovered outside of our solar system. These distant planetary systems are the subject of one of the observational astronomy subpages that I recently started. The first activity encourages someone interested in learning about this topic to participate in the systemic research collaboration and join a learning group here. Learning projects like this bring up an interesting question about the role of research here at Wikiversity. Since many of the projects that I'm working on or planning involve some kind of research I'll need to address this question. In the next few days I'll start by writing up some more detailed descriptions of the projects that I have in mind.--mikeu 20:30, 27 January 2007 (UTC)
December 2006
First thoughts (mostly on astronomy learning)
Note: the contents of this section have been copied to Observational astronomy/Planning to encourage participation in the development of these ideas.
- First, a little background... I used to work at a planetarium and I now work at an observatory. One of the problems at an observatory is that a public education program requires clear skies to see objects through the telescope. (There are always clear skies in a planetarium :) I decided to create an activity that uses computers as a sort of virtual observatory. This would not only create a rainy night activity but it would also bring astronomy to a larger group of people who don't have the chance to visit an observatory. I started the Observational astronomy activity to try out these ideas. I'm familiar with other virtual observatory learning activities. (For instance, the Hubble FITS Liberator.) One thing that will distinguish this learning project from others is that it is hosted on a wiki which will allow the students to interact in creating the lessons. I have no idea how that will play out or where it will lead, but I suspect that the results will be interesting.--mikeu
- The idea that I would eventually like to pursue is to allow students to do "real" astronomy. Students usually learn by solving "toy problems" where the only goal is to learn, and the results of solving the problem are then only given to the teacher for the purpose of getting a grade. I'd like to give the participants original data to analyze such that their results are of some use to professional astronomers. This is similar to ongoing projects such as the American Association of Variable Star Observers in which amateur astronomers who own a telescope contribute observations which are then analyzed by professionals. (The National Science Foundation refers to people who do this as "citizen scientists.") However, most of these amateurs are working individually or in small groups that only include those who have reached a certain level where they have the experience and knowledge to make a contribution. I'd like to create an environment where it is easier for someone with no background to get involved and walk them through that first, steep, part of the learning curve.--mikeu
- One example of a citizen science project is Stardust@home. It has a low threshold for getting involved and uses a slick tutorial and interface to train participants and get them started. However, this project is more busy work than a learning experience. The results make a valuable contribution to processing the science data but the only thing the person involved gains is a sense of satisfaction at being a part of the project. They really don't learn much about interstellar dust. My contributions here at Wikiversity are an experiment to determine how to create a more meaningful learning experience while doing real science.--mikeu
- The learning curve for doing astronomical data analysis is very steep for the uninitiated. For example, go through the short tutorial on astrometric calibration with Aladin to see the steps involved in processing a raw telescope image. Is it reasonable to expect someone with no background to go through a complicated process like this as part of a lesson? Probably not, but I plan on creating activities here that will involve doing that.--mikeu
- I'll start by creating some simple lessons that show how to use the software tools that astronomers use to analyze data. The main focus, at first, will be on learning basic concepts in astronomy while getting familiar with these tools. I hope that the wikiversity community can help me learn how to implement the ideas that I have described.--mikeu
(The preceding unsigned comment was added by Mu301 (talk • contribs) 12:50, 27 January 2007)
Wikiversity random thoughts
- Many pages are created from templates, but the templates are so complicated that very few bother to fully fill them out. This leads to the creation of a multitude of pages that contain more boilerplate text than real content.
- A long list of subjects that don't exist is an impediment to growth. It might entice the ambitious few to expand the content, but it probably turns away the majority of people. There is too much clutter to attract new users who would be willing to do casual editing. New users are going to be confused trying to find content that exists, and it imposes a structure in advance for the future growth of what will be covered rather than just letting it happen naturally. These lists need to be checked to ensure that they follow established Wikiversity:Naming conventions.
- I was working on the astronomy pages and had some confusion about namespace, problems with searching and a few other things. See Topic_talk:Astronomy#Cleanup_needed for the details.
- I've seen a couple of pages that have a nice welcome for new users. One is the Introduction at the school of mathematics which has a link to the Topic:School of Mathematics Help Desk.
(The preceding unsigned comment was added by Mu301 (talk • contribs) 22:06, 27 January 2007)
Other
- I decided to change the format of this page. I had originally intended to use this as a scratchspace to jot down my thoughts and then edit them into something more coherent. I was also thinking that this could be a place to collaborate on planning projects. That is different from a blog, and I probably should have created the content outside my user space to encourage participation. This new format will be a log of things that I'm thinking and working on, with links to other pages.--mikeu 13:29, 16 December 2006 (UTC)
- I started to clean up the astronomy related pages. Many of the pages contain a lot of confusing clutter and things are so disorganized that it is difficult to find what little content exists. I added all of the existing pages that I could find to the main Astronomy category. This is not really the best way to organize things, but there is so little content that it will have to do for now. I moved the index of content that does not yet exist from the main page to a new page and added a welcome section. This needs to be expanded. I doubt most new visitors have much idea about what to expect from wikiversity.--mikeu 16:56, 19 December 2006 (UTC)
- My attention is now on creating specific content. Observational astronomy contains a couple of activities for getting started. The focus is on learning basic principles for someone with no background in astronomy. This page requires cleanup to make it more user friendly. Some ideas for expanding the topic are described at Observational astronomy/Planning. The hope is that others will provide input on the direction the project takes. Observational astronomy/Supernova is an observing program to collect and analyze telescope data.--mikeu 15:50, 25 December 2006 (UTC)
Where does it state this?
Interesting however for UWP it is useless... unless Microsoft allows OpenCL GPU access for graphics processing... funny enough while Microsoft has its reasons to keep OpenGL off the platform... they do not help their own API in the same way that OpenGL is now forced to and that is supporting multiple languages [Which mind you neither has OpenGL up until and yet now... still C++ based as far as I know...]
That's not really the same, it's languages specific to Visual computation, can rule C# out of the list... though again, highly unlikely that it would make any use on UWP until Microsoft opens up OpenGL/Vulkan onto it... then perhaps some sort of C# related thing would make sense... for people like me...
Here's the official stream for that milestone for Human Kind
It's public so you can just ignore the sign in request by clicking Not Now, if you don't have an account...
just to chime in, Vulkan is similar to DX12.
I know more about Dx12, but the general consensus is that it takes a lot of time to make it run as fast as DirectX11 and it's not beginner friendly. The Api exposes a lot less stuff, which can be good, but it also means that even stuff like Mipmapping has to be implemented by hand.
Great for experienced devs, but I'm not sure if Monogame is the right target group.
SlimDX supports DX12, MonoGame is using DX11, the same was said of DX11 yet here we are
But I know what you mean as DX12 gives more lower level access too which is probably why it is less dev friendly I guess... plus, DX11 is mature enough to be flexible for simple games I guess...
actually directx11 can realize all things that dx12 does, too. MS even updated dx11 to 11.3 after 12 came out.
Btw, here i am now
Yep, precisely, DX12 was just as they say a rebuild to allow more direct access to the hardware otherwise it basically maintains the same functionality albeit more powerfully...
Awesome! so wish I could do that stuff... can you get it into a sphere? you could go into commercial advertising with that
sphere is a bit more simple, I basically solved that in my deferred engine for volumetric lights (unshadowed), which have spherical fog around
About the SpriteBatch implementation, the transformations (except for the default SpriteBatch matrix and optionally a matrix you passed to SpriteBatch.Begin) happen on CPU. You can see that here:. All Draw calls internally use one of the Set methods in SpriteBatchItem.
There's an interesting PR up that boosts performance by inlining some of the calls that are done internally in SpriteBatch. For example if you only pass a position for the sprite some redundant calculations would be made. Results are a lot better than you'd expect (or at least that was the case for me).
I meant the Vulkan tools are the best way forward to get cross-platform shader compilation. And once we have that set up we can look at implementing compute/geometry/other cool stuff.
I know, but would still need DirectX for Windows naturally Keep in mind, OpenGL and Vulkan do not include drivers for input etc., that is separate, OpenGL ... Khronos's projects are strictly graphical processing... whereas DX is a lot more...
Hey jjag how does spriteBatch actually batch up separate groups of different textures into calls, Im not getting how the batcher chunks up its draws. I see this but i don't get how it works with compare to in order to actually break up drawIndexedPrimitives when items with different textures are to actually be rendered?
var item = _batcher.CreateBatchItem();
item.Texture = texture;
// set SortKey based on SpriteSortMode.
switch ( _sortMode )
{
// Comparison of Texture objects.
case SpriteSortMode.Texture:
item.SortKey = texture.SortingKey;
break;
// Comparison of Depth
case SpriteSortMode.FrontToBack:
item.SortKey = depth;
break;
// Comparison of Depth in reverse
case SpriteSortMode.BackToFront:
item.SortKey = -depth;
break;
}
Or am i totally looking at the idea behind it wrong.
Ah i kinda see ill have to check it out later on.
I was playing around with my little spritebuffer thingy and ran a bit more of a practical test it seems to hold up fairly well even when a bunch of other stuff is going on though there is a lot of garbage and im pretty sure cpu is the only thing slowing it down,
still pushing 12k sprites though the random doesn't update fast enough made a cool picture though and its in motion falling,
@willmotil that looks good, are you using an effect for the colour fading? random colours too?
On another note I have started to notice a problem developing for me...
I have spent the past 6 years, 2 months, 1 week, 5 days or 2265 days reading poorly used English on the web on dev forums and social media... [Mainly focussing on the forums side of things as this is where most of the misuse was occurring... and most of my reading]
It has begun to affect my own English lol, only slightly but it is getting to the point where I have to re-read a sentence again to make sure I:
A- Read it correctlyB- Worked out if they used the correct grammarC- Worked out if the correct version of a term was usedD- Know which is the correct term to be usedE- Just wasted a few seconds of my life figuring it out
So the main culprit here is You're and Your, more You're now as everybody just slaps Your everywhere and as such my brain is beginning to stupefy itself lol
Anyway just noticed it this one time tonight while reading some poorly written governmental material, but I think after venting this I will ignore it for another five years now...
Any new projects people? I am thinking to move along with my web dev soon, just packing to move in about a month or so however... and dealing with some other stuff which is now delaying me...
Hope all is well with you guys
i honestly saw more of that in American High School than on the internet. On reddit etc. many people admit to doing it wrong even though it's their mothertongue... this is especially infuriating
that looks good, are you using an effect for the colour fading? random colours too?
Nop Just straight alpha blending scaling and rotating. The image itself has some transparency in it and the colors im passing sometimes have its alpha set lower.
This is a updated version of the earlier code. The shader is the same empty one used before to bypass spritebatch altogether. The method AddSpriteToBatch(...) is basically a spriteBatch.Draw(...) call. It does it all without any matrices either. Just straight linear math, translations, rotations and projection. It bypasses everything and does it directly, but it only works on a single texture per buffer at least right now.
using System;
using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
namespace MyDirectSpriteBatcher
{ static int windowWidth = 0;
private static)
{
AddSpriteToBatch(destinationRectangle, Color.White, 0f, Vector2.One, 0f);
}
public void AddSpriteToBatch(Rectangle destinationRectangle, Color color, float rotation, Vector2 scale, float depth)
{
// The 2x coefficent which is a mulitpiplier to put the rectangle in the 0 to 2 range.
// With respect to the size of the window width and height.
float cw = 2f / windowWidth;
float ch = 2f / windowHeight;
// Determine the vertice origin for rotation and scaling which i will do in place instead.
Vector2 origin = new Vector2((destinationRectangle.Left + destinationRectangle.Right) * .5f, (destinationRectangle.Top + destinationRectangle.Bottom) * .5f);
// Determines the vertice positions from the rectangle, translates and then scales
// If we really want to just scale from top left to bottom right.
// Then below we may find RB = (...(...) - origin) * scale; then set RT.X = RB.X; and LB.Y = RB.Y;
var LT = (new Vector2(destinationRectangle.Left, destinationRectangle.Top) - origin) * scale;
var LB = (new Vector2(destinationRectangle.Left, destinationRectangle.Bottom) - origin) * scale;
var RT = (new Vector2(destinationRectangle.Right, destinationRectangle.Top) - origin) * scale;
var RB = (new Vector2(destinationRectangle.Right, destinationRectangle.Bottom) - origin) * scale;
// If rotation is specified we will perform that now!
if (rotation != 0)
{
// Sin Cos of rotation
Vector2 q = new Vector2((float)Math.Sin(rotation), (float)Math.Cos(rotation));
// Rotates the vertices from the origin here we make it the center as opposed to how spritebatch does it.
// We could pass a offset here to further translate the origin like spritebatch.
// However it is no longer necessary, this way we can pre transform the position of the rectangle itself.
LT = GetRotatedVector(LT, q);
LB = GetRotatedVector(LB, q);
RT = GetRotatedVector(RT, q);
RB = GetRotatedVector(RB, q);
}
// Translate the vertices from the origin local space back to world space
// Then transform the vertices by projecting them to screen space
var _LT = Project(LT + origin, cw, ch, depth);
var _LB = Project(LB + origin, cw, ch, depth);
var _RT = Project(RT + origin, cw, ch, depth);
var _RB = Project(RB + origin, cw, ch, depth);
// create the vertice quad
spriteVertices[vi_pointer + 0] = CreateVPCTstruct(_LT, color, uvLT);
spriteVertices[vi_pointer + 1] = CreateVPCTstruct(_LB, color, uvLB);
spriteVertices[vi_pointer + 2] = CreateVPCTstruct(_RT, color, uvRT);
spriteVertices[vi_pointer + 3] = CreateVPCTstruct(_RB, color, uvRB);
// create the indexs im not sure this parts not simply redundant.;
}
public Vector3 Project(Vector2 v, float cw, float ch, float depth)
{
// invert the Y positions upon the X axis
return new Vector3(v.X * cw - 1f, v.Y * -ch + 1f, depth);
}
public Vector2 GetRotatedVector(Vector2 v, Vector2 q)
{
return new Vector2(v.X * q.Y - v.Y * q.X, v.X * q.X + v.Y * q.Y);
}
public Vector3 GetRotatedVector(Vector3 v, Vector2 q)
{
return new Vector3
(
v.X * q.Y - v.Y * q.X,
v.X * q.X + v.Y * q.Y,
v.Z
);
}
/// <summary>
/// simplest version
/// </summary>
public void DrawAll(GraphicsDevice gd)
{
if (TriangleDrawCount() > 0)
{ static);
}
private VertexPositionColorTexture CreateVPCTstruct(Vector3 position, Color c, Vector2 uv)
{
return new VertexPositionColorTexture(position, c, uv);
}
}
}
The image itself is solid alpha at the center and fades as it goes outwards. Its in pure white, to work with any color i pass to the method as a parameter.
Check this out exact same lone image as before drawn a ton of times moving.
Same as above but with variably dependent decreasing alpha in the color. That is passed to the method.
It actually looks much cooler when you see it. Because all the soft colors are flashing like soft lightning in the background.
The class that moves the particles is just a footnote to the actual functional methods and classes.
For instance the particles are in a non sorted buffer that basically only moves things on death or creation of a particle. It only really moves the pointer most of the time so no huge re-sort ever occurs but that is more of my own pattern then a class. The class itself is full of junk to make the particles behave like in the picture but... the buffer pattern it uses is the cool part.
public class DeadAliveBuffer
{
// This is my own little invention or pattern if you like quite proud of this one
public static int width = 0;
public static int height = 0;
// technically this is pointless in most cases it can just grow forever.
// it still will function optimally, its always gonna sort O(n) or below.
public int bufferLimit = 32; // default
// we could use a array here or a list if done properly i didnt create a proper swap
/// <summary>
/// Ideally a class that Implements either a base class with a isAlive bool or interface
/// However we can just copy paste the entire class and just use a object of our own type
/// </summary>
public List<IsAliveOrDeadBaseObjectItem> itemList = new List<IsAliveOrDeadBaseObjectItem>();
private int aliveMarker = 0;
public int DeadMarker { get { return aliveMarker + 1; } }
public int AliveMarker { get { return aliveMarker; } }
public DeadAliveBuffer(int bufferlimit)
{
bufferLimit = bufferlimit;
// initialize all the possible items in the buffer
for (int i = 0; i < bufferlimit; i++)
{
itemList.Add(new IsAliveOrDeadBaseObjectItem());
}
aliveMarker = -1;
}
public void CreateDefaultItem(GameTime gameTime)
{
if (DeadMarker < bufferLimit)
{
int index = MakeAliveItemReturnIndex();
// re use a dead item basically overwrite old data mark it alive
itemList[index].ReUse(gameTime);
}
}
public void MakeItemDead(int deadItemIndex)
{
// This is basically the entire sort it is simple.
if (deadItemIndex == aliveMarker)
{
aliveMarker--;
itemList[DeadMarker].isAlive = false;
}
else
{
// should redo this swap its not a very good way to swap it probably makes garbage
IsAliveOrDeadBaseObjectItem A = itemList[aliveMarker];
IsAliveOrDeadBaseObjectItem B = itemList[deadItemIndex];
itemList[deadItemIndex] = A;
itemList[aliveMarker] = B;
aliveMarker--;
itemList[DeadMarker].isAlive = false;
}
}
public void MakeAliveItem()
{
if (DeadMarker < bufferLimit)
{
itemList[DeadMarker].isAlive = true;
aliveMarker++;
}
else
{
// the buffer is completely filled with live objects
// we have two choices expand the buffer or do nothing
}
}
public int MakeAliveItemReturnIndex()
{
if (DeadMarker < bufferLimit)
{
itemList[DeadMarker].isAlive = true;
aliveMarker++;
}
else
{
// the buffer is completely filled with live objects
// we have two choices expand the buffer or do nothing
}
return aliveMarker;
}
public void Update()
{
for (int i = 0; i < DeadMarker; i++)
{
if (itemList[i].isAlive)
{
if (itemList[i].duration < 1 || itemList[i].positionRect.Y > height)
{
MakeItemDead(i);
}
}
if (itemList[i].isAlive)
{
//itemList[i].velocity.Y += .0010f; // gravitasity
//itemList[i].positionRect.X += itemList[i].velocity.X;
//itemList[i].positionRect.Y += itemList[i].velocity.Y;
}
}
}
}
That is especially infuriating... particularly when someone has a degree in English... [The times I have seen this...]
I thought so, was going to ask this
Very slick, coding wise too...
Why is it blurry? because of movement or?
Hey, do any of you dabble in electronics?
No i have a bad habit of shocking myself.
Oh its blured due to stretching in one direction the width. As well as how many there are that have super low alpha nearly invisible, when they overlap it looks like blur. All the streaching and fading occurs at the top 10% and above offscreen. Particularly the slow moving particles get walloped as they stay in that part longer it makes them oblong and they get rotated too.
Also because unlike the solid ball looking image posted earlier. I turned off...graphics.PreferMultiSampling = false;and setBlendstate = BlendState.NonPremultiplied;
So the edges aren't black and they all blend with each other well. That previous solid ball look was a artificial effect of not turning stuff off..The bottom most images are basically the fastest longest living so they they didn't get butchered much till they die. They are all just a transparent white image colored like the bottom red image that's blowing up.
here's what it looks like with a square image.
Nice! smart thinking! so it step changes rather than float change I mean it is more flickering than smooth motion right? reminds me of Chinese Lanterns in the night sky somewhat...
Are you not on Windows 10 or can you screen record somehow? [Windows 10 has built in recording DVR features!]
Just saw your updated picture, now if you added depth shadowing or FOG effect and played with size changes, larger being closer and smaller being further in the distance, it could look really cool... Something I aim to achieve soon...
Hows this for a bug in my secondary test method i messed up the math the rotational origin got screwed up and i got this bug with the same code lol.
Hence forth i will call this bug the wormhole effect bug. | http://community.monogame.net/t/general-conversation-thread/8319?page=10 | CC-MAIN-2017-26 | refinedweb | 2,571 | 55.13 |
Hi all,
I have an java ee application that I am trying to convert from JBoss 4 to JBoss 7.1. I tryed JBoss 7 first but I had too many classloading problems. I suspect that my current problem is related to classloading also.
First let me describe the structure of my app. I have a ear that includes 3 wars and 1 jar with stateless sessions beans:
Each of those war have their own code but also share some classes. One of those classes looks like this:
public class AccessAuditListener implements HttpSessionAttributeListener
{
@EJB(name = "SeasionBean")
private SessionBean sessionBean;
public void attributeAdded(HttpSessionBindingEvent httpSessionBindingEvent)
{
// Do nothing
}
public void attributeRemoved(HttpSessionBindingEvent httpSessionBindingEvent)
{
if("attribute".equalsIgnoreCase(httpSessionBindingEvent.getName()))
{
sessionBean.doSomething(httpSessionBindingEvent.getValue());
}
}
public void attributeReplaced(HttpSessionBindingEvent httpSessionBindingEvent)
{
// Do nothing.
}
}
The problem I have is that the first war that gets deployed has its EJB named "SessionBean" properly injected but subsequent wars do not get any ejb reference injected. It is as if while wars would all have their own classloaders but that they would be linked in a way that common classes only get classloaded (and therefore EJB injected) once. In JBoss 4, there was this really useful configuration you could put in the jboss-web.xml (<class-loading) that made sure all wars were 100% isolated.
My question is how can I get the JBoss 4 behavior without getting into the really messy work of turning ear-subdeployments-isolated to true?
Thank you!
Is the AccessAuditListener declared in the web.xml of each WAR file?
Does the web.xml in each WAR file specify at least version 2.5?
At this point I'm not convinced this is a class loading issue, because each WAR should get it's own instance of the AccessAuditListener, irrespective of which class loader it comes from. | https://developer.jboss.org/thread/197157 | CC-MAIN-2018-39 | refinedweb | 301 | 54.83 |
I’m often in circles of functional programmers, and one very consistent topic is “how object-oriented programming is the root of all evil, and functional programming is nothing like it”. Not necessarily with those words, but to that effect in any case.
And things tend to take an awkward direction whenever I mention that I like object-oriented programming. “Why?”. It gets worse when I say I’m building one. “No, really, why?”. And then it gets confusing when I throw terms like pure and algebraic effects. “No… wait, WHAT?”
I mean, aren’t “purity” and “algebraic effects” concepts from functional programming anyway? What kind of bullshit am I on? Well, let’s put aside the question and the fact that programming paradigms aren’t even that useful of a concept, and let’s look into Crochet for a moment.
What’s Crochet? But, more importantly, why is Crochet? And how is Crochet?
How does one make programming safe?
Imagine the following scenario:
You want to do stuff with computers. In particular, you want to build games, and interactive stories, and interactive music, and interactive art. Well, art in general. This might not be you, but this is Crochet’s target audience—artists and writers who might want to use computers as a creative medium.
Now, because you want to do stuff. And you want to do some fairly complex stuff—art is a lot of work—you're not going to do it all on your own. Rather, you're going to rely on other people's work. You're going to use—gasp—libraries.
Sometimes these libraries are going to be written by the maintainers of Crochet. Crochet does have a standard library (it needs to, but let’s talk about that later). But it’s unrealistic to think that a standard library could provide everything an end-user needs—for starters, I know nothing about music, how am I supposed to design a language for interactive music?
So we need to face the harsh truth that if Crochet gets users, all of them will get things done by downloading random pieces of arbitrary code from obscure places in the internet and running all of them in their machines. The same machines they will later on use for internet banking, online shopping, etc.
I believe that, as tech people, we have some sort of obligation to ensure that the things we produce do their best to protect our users’ security and privacy. Crochet is a powerful programming language that encourages non-professional programmers to download arbitrary programs from obscure places on the internet and run those programs in their machines. (Literally. One of the core tenets of Crochet is enabling a culture of remixing digital, interactive art).
Which means that something needs to be done so users of Crochet can safely run these programs on their computers, even though Crochet is a powerful programming language.
“Wait, Crochet is powerful?”
Fine, NOBODY gets any power
Luckily I did not need to invent anything here.
Sure, Crochet is a powerful programming language. By which I mean that programs in Crochet are able to read files that possibly contain personal data, upload this personal data to random servers on the internet, encrypt files in users’ personal directories and ask for ransoms, watch all keyboard signals and applications being used, and so on, and so forth.
Just like most other programming languages out there.
So the first step is to remove all of that power. Make Crochet a powerless language. Nobody gets to do anything in it. I mean, sure, they can do pure computations, as a treat. But output things on the screen? READ FILES? At best they can spin the fans until the computer gets hot—they can’t observe it.
But a powerless programming language is also useless. So Crochet programs need some power. And some programs may need more power than others.
Now, there are two basic theories of granting power to programs:
Access Control Lists: If I run this program as myself, then the program shall be able to do anything I can, on my behalf.
Capability Security: If I run this program as myself, then the program shall not be able to do anything… unless I delegate some of my powers to it.
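The difference is easier to see in code. Here is a minimal sketch in Python rather than Crochet; the plugin scenario and all of the names are invented for illustration. Under ACLs, authority is ambient to the whole process; under capabilities, a reference to an object is itself the permission.

```python
# Under access-control lists, authority is ambient: any function in the
# process can reach anything the *user* can, so a plugin you call is as
# powerful as you are.
def acl_style_plugin():
    # Nothing in the language stops this; only the OS-level ACL (which
    # grants the whole process the user's rights) is ever consulted.
    return open("/etc/hostname").read()

# Under capability security, code starts with no authority. Holding a
# reference to this object *is* the permission to read one specific file.
class ReadOnlyFile:
    def __init__(self, path):
        self._path = path

    def read(self):
        with open(self._path) as f:
            return f.read()

def capability_style_plugin(cap):
    # The plugin can use exactly the powers it was handed -- here, the
    # ability to read one file -- and nothing else.
    return cap.read()
```

Whoever calls `capability_style_plugin` decides exactly which file, if any, the plugin gets to see.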
So capability security, an old concept that has hardly found any use outside of mobile operating systems—sadly—is the way to go. Using capability security allows us to give the user the power to control how much they trust a program. If they never give a program any file system writing capabilities they don’t need to worry about their files being encrypted by malicious actors.
Well, they still have to worry about their personal data being uploaded to the internet if given read access to all files plus unrestricted network access. But a start is a start.
Programs are not libraries…
Of course, the bigger problem here is that we’re not just encouraging users to run arbitrary programs in their machines. We’re also encouraging them to take arbitrary programs from the internet and put that straight into their own programs.
Should Alice grant file-writing powers to a program she wrote? HECK YES.
…except Alice is using this one component written by Bob in her program, you see. And Bob is kinda shady, idk. Wouldn’t trust them.
So the problem is that, even with capability security, we're still thinking about security in terms of whole programs. As if the entire program should be held to the same standards of scrutiny and enjoy the same powers.
This turns out to be a pretty bad idea.
Luckily, again, I didn’t have to invent anything to solve this, because the Object Oriented community had already solved it… in 1966. Sure pretty much nobody adopted it. And no mainstream language has it. But Java only added lambdas, a concept from 1935, a handful of years ago. So I’m sure that, given another couple of centuries or so, we might…
In all seriousness, though. Object-Capability Security is a very neat idea. It’s also fundamentally incompatible with all mainstream programming languages. And it is fundamentally incompatible with FP as generally implemented and described.
What makes Object-Capability Security (OCS) secure—and also incompatible with all of this—is that the approach it takes to things is, again, to just strip programs of all their power. But we must be able to provide powers to things, and OCS does so by the following mechanisms:
An object is a capability: it is an unforgeable token that grants people power to something.
The object’s methods are the powers it grants. There’s no need to separate the definitions and figure out how to map the token to the powers. And how to get the powers in the hands of the right people. Or how to combine different powers to grant them at once. It’s already there.
Power can be granted by passing references around. The initial object (e.g.: the object with the “main” method) gets all of the powers granted to the program. Every other object gets no power. The initial object decides who gets some of the power, why they get it, and how they get it.
Power can be revoked by passing ephemeral references—that is, references that will at a later point cease to exist. Maybe because it has been used off. Maybe because it was only valid in a certain time window.
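Point (4) is the least familiar, so here is a minimal sketch in Python of the classic "caretaker" pattern from the object-capability literature: a forwarding proxy whose grantor can switch the delegated power off later. (The Logger class is an invented stand-in, not anything from Crochet.)

```python
class Logger:
    # Some power we might want to delegate temporarily.
    def __init__(self):
        self.lines = []

    def log(self, message):
        self.lines.append(message)

def make_caretaker(target):
    """Return (proxy, revoke): the proxy forwards every attribute access
    to `target` until `revoke()` is called; after that, the delegated
    power is simply gone."""
    state = {"target": target}

    class Proxy:
        def __getattr__(self, name):
            if state["target"] is None:
                raise PermissionError("capability was revoked")
            return getattr(state["target"], name)

    def revoke():
        state["target"] = None

    return Proxy(), revoke

logger = Logger()
proxy, revoke = make_caretaker(logger)
proxy.log("allowed")        # works: the power is currently delegated
revoke()
try:
    proxy.log("denied")     # fails: the power has been withdrawn
except PermissionError:
    pass
```

The grantee only ever sees the proxy, so the grantor keeps full control over when the delegation ends.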
In order for all of this to work, you first need to make your language powerless. That is, by default any code in the system is required to not be able to do anything—except for pure computation. Most mainstream programming languages use a global namespace. Most mainstream programming idioms rely on this omnipresent power. There’s no way to apply OCS to them without a huge shift in both how they are designed and how people program in them. Bummer.
But that doesn’t necessarily preclude a functional language to be designed for this form of security mechanism… right?
Well, not necessarily so. However, point (1) is often absent in most functional languages. Functional languages will generally push you towards the idea that every object has some sort of structural equivalence—because they’re just values. Observing differences in values is generally frowned upon because it breaks some fundamental ideas about optimisation and composition.
Functional programming also tends to be huge on reflection. And, of course, not Mirror-based reflection. Almost every mainstream functional language will have pattern matching and then go on to deconstruct any possible kind of object that could ever exist in the language—and also construct them. As it turns out, if you need to harden your code against internal attacks by malicious code in your application, this is not the greatest thing to do. Typed functional languages at least mitigate some of this if they manage to not make all types global.
Still, first-class functions are unforgeable references. They are (if allowed to be compared at all) compared by reference, not by their extensional equivalence. Well, maybe in some obscure theorem prover functions are compared by their extensional equivalence—I don’t know of any, but I’m not huge on provers.
In a sense, first-class functions are actually objects. Not that this is an agreed-upon definition or anything. But they can be seen that way. And seeing them that way can be useful in some particular cases—this is one of them. We could think of first-class functions as capabilities.
But then comes (2). Powers. We have a capability, sure, but how are we going to go about granting powers? Will we make functions available to every piece of code and then add a “please provide your capabilities (functions) as an argument here” to every function call? That’s possible in theory, sure, but it is a terrible interface for secure programming, and it makes points (3) and (4) impractical.
Incidentally, interfaces are extremely important for the "secure" part of "secure programming". Because programming is often done by humans, who will take unsafe shortcuts (or just not use something) if it gets too cumbersome.
Implicit calculus could help, but functions are not a great way of grouping functions. We could, maybe, use modules? You know, little bags of functions. We promote them, pass them around with all of the functions. Maybe allow them to be constructed dynamically. Sounds great? Well, generative first-class modules are objects, so now we’re back to object-oriented programming anyway.
What if we were object-oriented… without objects?
So if Crochet wants to adopt Object-Capability Security—and it does, because I’m not knowledgeable enough about security to invent a new security theory and spend the next four decades proving that it is actually sound—it has to also adopt object orientation. That’s just the way these things are.
The thing about OCS, however, is that it precludes most of the things people associate with object-oriented programming:
Mutating objects? You can't do that. Mutation is a power because it can be observed by attackers. Nobody gets to mutate anything. All functions (methods) have to be pure.
Inheritance? Well, sure, but you can't actually do any of the wacky reflection stuff. At best you get to know that you can treat `x` as `Iterable`. But what is that buying you? Nothing, really.

Dynamic dispatch? Oh, okay, you need this one.
*Ahem*.
So Crochet gets to be an object-oriented language that has to be pure, which means that a lot of the idioms that are common in functional languages will apply here as well: you need tail-calls, you work with recursive algorithms primarily, you mostly do data transformations that are operational, but not place-oriented.
But then Crochet goes on and says: “Let there be no objects.”
Well, not exactly. “Let there be no objects.” In the mainstream sense of “object”. Remember: Crochet needs objects because they’re capabilities. It needs objects to group powers into these little bags that can be conveniently passed around.
The problem, really, is how most Object-Oriented Languages do objects. It doesn’t fit Crochet’s design vision. It doesn’t fit Crochet’s target audience. It doesn’t even fit Crochet’s tools.
Incidentally, but unsurprisingly, tools are a huge part of Crochet's design. But we'll talk about that another time.
Anyway, the problem with objects in most Object-Oriented programming languages is that they are closed. Generally. And they are also dispatching on only one of the arguments. Now, I say “problem”, but I really mean “problem (for Crochet)”. In an ideal world, it may make sense to give people control over what others can and can’t tack onto objects.
Sadly, the idea of fostering a remixing culture hinges on the idea of allowing people to modify and extend anything. Closed objects make this impossible, and open objects with single dispatch make this painful (and impossible)—just look at Ruby’s open classes. And expecting people to modify the actual source code of the definition is a security nightmare, as now there’s no way to update components, and boundaries are much less clear-cut for defining where capabilities should go.
By the way, earlier iterations of Crochet toyed a lot with the idea of contextual programming, where dynamically scoped “contexts” are used to define and control object extension. But it turns out that the lack of global coherence that is inherent to it leads to many confusing situations that I did not want to deal with. A similar issue applies to implicit calculus and parameterised modules.
And so Crochet goes with types (not in the sense Haskell uses types) along with multi-methods. This is very similar to what Julia does, in fact.
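For readers who haven't met the term: a multi-method selects its implementation by looking at the runtime types of all of its arguments, not just the receiver. A toy version of the idea, sketched in Python rather than Crochet or Julia (and with exact-type matching only, no subtype lookup):

```python
# A tiny multiple-dispatch registry: the implementation is chosen by the
# runtime types of *all* arguments, not just the first one.
def multimethod(fn):
    registry = {}

    def register(*types):
        def decorator(impl):
            registry[types] = impl
            return impl
        return decorator

    def dispatch(*args):
        impl = registry.get(tuple(type(a) for a in args))
        if impl is None:
            return fn(*args)        # fall back to the default body
        return impl(*args)

    dispatch.register = register
    return dispatch

class Asteroid: pass
class Ship: pass

@multimethod
def collide(a, b):
    return "generic thud"

@collide.register(Asteroid, Ship)
def _(a, b):
    return "ship takes damage"

@collide.register(Asteroid, Asteroid)
def _(a, b):
    return "asteroids shatter"

collide(Asteroid(), Ship())       # "ship takes damage"
collide(Asteroid(), Asteroid())   # "asteroids shatter"
collide(Ship(), Ship())           # "generic thud"
```

No single object "owns" `collide` here, which is precisely what lets anyone add new cases without touching existing definitions.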
Sometimes we need to adapt things
Let’s look at another simple scenario.
You’re trying to write an interactive fiction. That is, a computer game where you read fiction, and at certain points of it you can interact with the fiction—and the narrative may change as a result of your interaction.
You might write some of the following code in Crochet:
```
action player take: (Item is item) do
  fact Item on: player;
  say: "You reach out for [Item name] and place [Item it] in your pocket. [inventory alongside-list: Item]";
end
```
In Crochet, actions and facts are part of the model simulation system used to define rules for games written in the language (among other things). Here, we have an action that allows the player of this game to interact with it by taking some kind of item from the scenario and placing it on the player’s inventory.
When the player does so, there’s a text displayed saying that the item is now with them, alongside other items they may have taken previously. For example, if the player has taken the car keys, they could see on the screen:
You reach out for the car keys and place them in your pocket.
Now, note that we’ve evaluated “Item name” to “car keys”, “Item it” to “them”, and “inventory alongside-list: Item” to nothing. “_ name”, “_ it”, and “_ alongside-list: _” are all functions (or rather, multi-methods).
Later on the player may choose to pick another item, and they’ll see:
You reach out for the phone and place it in your pocket. It sits comfortably alongside your car keys.
Incidentally, the person who’s writing this interactive fiction did not write the “_ alongside-list: _” function. They’re just using it from another writer who coded these little things for their own fiction and decided to share it with their peers. The current writer gets that functionality “for free”.
Sadly, the person who’s writing this interactive fiction wants it to be read in both English and Portuguese. Now, the original author of “_ alongside-list: _” did not know Portuguese at all—nor ever intended for this function to be used for anything but English. Heck, they never intended for it to be reused at all. It was never designed—it just so happened that they found it useful in their own art, and they thought others might also find it useful.
Note that Crochet expects most reuse to be completely accidental rather than intentional—remember, the target audience of Crochet is not professional programmers; it’s artists, writers, and other creative people who want to use computers as a creative medium. Having accidental reuse be the primary way of reusing code means that most “good reuse techniques” fall flat. Crochet can’t add Traits, it can’t add Type Classes, it can’t add Interfaces. All of these concepts require designers to be intentional about reuse, to think about all of the use cases and edge cases, and to carefully design an interface for their components.
None of those tasks are beyond non-professional programmers. They could certainly achieve it with enough information if they put effort into it. But that’s not their goal. That’s not why they’re writing software. These patterns of reuse will never arise automatically, and quite frankly people will just revert to copy-pasting code, which makes security impossible.
So Crochet takes the “You can write anything. Share anything. Modify anything.” route.
And it tries to make that secure. In this case, this writer has a few options.
One of them is to create a new inventory type for Portuguese inventories that inherits from the original inventory. This way they only need to redefine the functions that include English prose.
Here’s what this code would look like:
singleton inventory-pt is inventory;

command inventory-pt alongside-list: (Item is item) do
  let Items = self items;
  condition
    when Items is-empty => "";
    when Items count < 3 => "[Item it-pt] [Items join-and-pt].";
    when Items count >= 3 => "It's crammed with your [Items join-and-pt].";
  end
end
They also have to define join-and-pt, of course.
command tuple join-and-pt do
  self join-separator: ", " final-separator: ", e ";
end
Now when they use inventory-pt alongside-list:, they would see the text:
Você pega o telefone e o coloca no bolso. Ele faz companhia para suas chaves do carro.
Adaptation by overwriting things
Note that there’s no information actually stored in either the inventory-pt or inventory types. Rather, most information (and certainly all changing information) comes from the global facts database. In this case, the fiction likely has a logical relation declared as follows:
relation item* on: inventory;
Which is to say that there’s a way in which many items can be “on” some inventory, but each item can only be at one inventory at any given time. Relations are path-sensitive, and the * notation means that a particular segment of that path can hold multiple values. The path branches as a tree as you go further down its segments.
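As a rough illustration of what that constraint means in practice, here is a small Python sketch (hypothetical code, not Crochet's implementation): many items can map to one inventory, but asserting a new placement for an item silently retracts the old one.

```python
# Sketch of the relation "item* on: inventory": many items per inventory,
# but each item is on at most one inventory at any given time.
class OnRelation:
    def __init__(self):
        self._owner = {}  # item -> the single inventory it is on

    def assert_on(self, item, inventory):
        # Re-asserting the fact moves the item: the old placement is gone.
        self._owner[item] = inventory

    def items_on(self, inventory):
        return {i for i, owner in self._owner.items() if owner == inventory}

rel = OnRelation()
rel.assert_on("car keys", "player")
rel.assert_on("phone", "player")
rel.assert_on("car keys", "table")  # the keys leave the player's inventory
print(rel.items_on("player"))       # -> {'phone'}
```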
So a more natural way of solving this conundrum here is to rather rely on this global fact database as a provider of the idea of “what language are we using?”, and then just replace the inventory functions to be language-aware.
In this case, there would be some global fact like:
enum language = english, portuguese;
relation language: language;
Which, again, is to say that for this fiction there will be one language active at any given time, and it can be either “english” or “portuguese”.
Tangent: One could also define an effect handler for it. But facts are just easier to use and have more tools available for working with them and debugging them—and they involve zero computation and no notion of continuations.
Then the writer may go on to replace the “_ it” and “inventory alongside-list: _” functions:
open the.other.writer exposing
  item it as _ it-en,
  inventory alongside-list: item as _ alongside-list-en: _;

override command item it =
  match one
    when language: english => self it-en;
    when language: portuguese => self it-pt;
  end;

override command inventory alongside-list: (Item is item) =
  match one
    when language: english => self alongside-list-en: Item;
    when language: portuguese => self alongside-list-pt: Item;
  end;
Of course, it could be argued that making the function dispatch over the language type, such as by creating a new function inventory alongside-list: item for: language, would make it easier to extend. But the point here is that writers have the tools to adapt these little pieces to get their work done, and can then go back to focusing on whatever artistic content they intend to produce.
Their end goal is not to design software libraries.
Tangent: Overrides in Crochet are still a work-in-progress capability, but the core idea is that they work as a controlled adaptation mechanism, where you can replace parts of the system with other parts as needed without touching any of the defining source code.
But libraries you include don’t get this power handed out to them by default (you have to go and say that, yes, Bob’s library is allowed to override this specific function), which prevents the all-too-common “prototype pollution” vulnerabilities in open-extension models such as Ruby’s, Python’s, and JavaScript’s, which in turn means that it’s harder for attackers to subvert the system for remote code execution—at least from this feature, without the user’s knowledge, anyway.
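The gist of that gating could be sketched like this (hypothetical Python, purely illustrative of the idea; Crochet's actual override mechanism is still in flux): an override only takes effect if the host program explicitly granted that package the capability to replace that specific function.

```python
# Sketch: a library's override is rejected unless the host explicitly
# granted that library the capability to replace that exact command.
class CommandTable:
    def __init__(self):
        self._commands = {}
        self._grants = set()  # (package, command) pairs allowed to override

    def define(self, name, fn):
        self._commands[name] = fn

    def grant_override(self, package, name):
        self._grants.add((package, name))

    def override(self, package, name, fn):
        if (package, name) not in self._grants:
            raise PermissionError(f"{package} may not override {name!r}")
        self._commands[name] = fn

    def call(self, name, *args):
        return self._commands[name](*args)

table = CommandTable()
table.define("it", lambda item: "it")
table.grant_override("bobs-library", "it")  # explicit, per-function grant
table.override("bobs-library", "it", lambda item: "them")
print(table.call("it", "car keys"))  # -> them
```

An ungranted package attempting the same override would get a PermissionError instead of silently polluting the shared command table.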
In conclusion
There are many other aspects to Crochet’s security and how it uses ideas from OCS and other concepts that I plan to cover in other articles. But I hope that this short(?) write-up dispels some of the confusion around why Crochet needs to be “object-oriented” in order to be secure.
It all boils down to Crochet’s target audience (non-professional programmers wanting to use computers as a creative medium), and the intended workflows, domains, and modes-of-use that I’ve seen happen (emergent, but not purposefully designed, abstractions, with copy-pasting for reusability and adaptation).
A lot of programming languages and tools are not designed for these users. They’re designed for professional programmers who can take the time to worry about systems and design and proper data-structures and type consistency and—the list goes on. Which is not wrong, we do need those, too, but they’re simply not designed for non-professional programmers, whose “programming” aspect is… completely incidental.
(It also has some of “Quil just wants to annoy functional programming people who go about saying that OOP is terrible” tongue-in-cheek use of the term as a taxonomy, but then that’s pretty much my whole internet presence I guess…)
Further reading
- Programming Paradigms and Beyond
- Shriram Krishnamurthi, Kathi Fisler — Argues for the use of notional machines rather than programming paradigms for understanding and talking about language behaviour. You should read it, and also get The Cambridge Book of Computing Education Research if you can. It was one of my favourite recent reads.
- npm blog about the event-stream incident
- npm — Honestly, I think the npm ecosystem has given us enough mainstream examples of how bad things can be when third-party modules are added to an application—apparently this is now called a supply-chain attack. Vetting every line of code you use is humanly impossible, so we need a different approach that does not require that.
- Mirror-based Reflection
- Gilad Bracha, David Ungar — Reflection breaks pretty much every security property a programming language could hope to provide. Mirror-based reflection allows one to use reflection as a distinct capability, and finally reconcile reflection and security. Sadly, it has been adopted by almost no programming language out there—Dart is possibly the only mainstream one (it has Bracha in the design team), even though JavaScript tried to get mirrors as well… but without removing the non-mirror-based part.
- A Proposal for Simplified, Modern Definition of “Object Oriented”
- William Cook — The term “Object Oriented” isn’t very well-defined. And although Cook’s definition here isn’t widely accepted, I think it’s a useful account of which concepts enable other particular ideas often associated with OOP.
- The Implicit Calculus: A New Foundation for Generic Programming
- Bruno C. D. S. Oliveira, Tom Schrijvers, Wontae Choi, Wonchan Lee, Kwangkeun Yi, Philip Wadler — The idea of the implicit calculus (which you can see in Scala’s implicits) is interesting for thinking about dependency injection in a more principled way, and could be used as a basis for describing powers in a capability-secure system. But without a good way of grouping these powers it doesn’t feel like you could make it practical, at least not without tying it to an IDE, as otherwise the overhead of maintaining annotations would get too cumbersome.
- F-ing Modules
- Andreas Rossberg, Claudio Russo, Derek Dreyer — The ML languages have often had good module systems with parameterisation support (which is needed for both object-capability security and dependency injection), so it seems like you could rely on them for this, if you restrict how they can be instantiated. But, again, generative first-class modules are pretty much objects in the OOP sense.
- The Expression Problem
- Philip Wadler — Though Wadler’s description of the tension between object-based languages and pattern-matching, ADT-based languages, and of extension by non-writers of that code, still rings true for a lot of languages, I think most features combining ideas from FP and OOP have addressed some of it. The problem is the other trade-offs you need to make (e.g.: Type Classes require intentional and up-front design for extension).
- Korz: Simple, Symmetric, Subjective, Context-Oriented Programming
- David Ungar, Harold Ossher, Doug Kimelman — Korz (and other context-oriented languages) address the extension problem by letting users do contextual extensions. This avoids the problem with things like open classes, if you can control the contexts. My previous attempt to do this did get reasonable results until I started running into impossible-to-debug issues with the lack of global coherence. It turns out that having to pass contexts around is too much trouble to be worth it. And having them be dynamically scoped makes it unpredictable.
- The Unreasonable Effectiveness of Multiple Dispatch
- Stefan Karpinski — This is a youtube presentation rather than an article, but the idea of multimethods that Crochet uses is very close to the one Julia uses—with different syntax. Crochet has a few peculiarities in the dispatch and in how the entire soup of commands + overrides end up being presented (and manipulated) by the programmer, though. | https://robotlolita.me/diary/2021/04/why-crochet-is-oop/ | CC-MAIN-2021-21 | refinedweb | 4,490 | 61.97 |
Flex and Java
Integrating Adobe Flex with Java through Remote Services
What is Flex?
Adobe Flex is among the top choices for developers building Rich Internet Applications. The framework is built on top of the Flash Platform and provides a programming model familiar to developers.
The Flex family includes several components:
- Adobe Flex SDK – the free Flex framework available as an open source project,
- Adobe Flex Builder 3 – an Eclipse based development tool,
- BlazeDS – a free open source project for Remoting and Messaging,
- Adobe LiveCycle Data Services ES – enabling RIAs to talk to back-end data and business logic,
- IBM ILOG Elixir – a tool for graphical data-display components.
This article will explain how a client application written in Flex can access your server-side Java application. You will see how straightforward and powerful the integration between Flex and Java can be.
Architecture of a Flex/Java Application
There are several ways of integrating Flex with Java; to give you a clear picture, the diagram below describes the architecture of a Flex/Java application.
Data exchange channels
There are several ways (called channels) to exchange data between the client and the server.
On the server side we have different kinds of services which can be accessed by the client application: REST, SOAP, and the so-called LC Data Services. While REST and SOAP services are server agnostic (you can implement your web services in any programming language), for the Data Services you will need a J2EE application server. You will also need to deploy an open source application called BlazeDS (or the commercial version, called LiveCycle Data Services) and to integrate your existing code into it. Because of space constraints, in this article we will talk only about BlazeDS and only about the basic data services - the remote services.
Getting started
The first step is to download the BlazeDS application from Adobe’s open source repository []. Download the archive containing the binary release; inside you will find the file blazeds.war. The war file is a basic web application containing some jar files and some configuration files.
From here on you have two options: if you are starting a Java application from scratch, you can build on blazeds.war directly; if not, you should integrate the libraries and configuration files from blazeds.war into your existing web application. To find out how to do that, you can read an article about this subject online [].
If you plan to use Flex Builder, the project configuration is much easier []. However, you can use whatever IDE you like.
Building the classic Hello World example
We will start with the Java files and create a simple Java class called Hello:
package test;

public class Hello {
    public Message sayHello(Message message) {
        return new Message("Hello " + message.getBody());
    }
}
and the Message class - it represents our domain object:
package test;

public class Message {
    private String body;

    public Message() {}

    public Message(String body) {
        this.body = body;
    }

    // getters and setters...
}
As you can see, there is a method called sayHello which receives a Message parameter and also returns a Message. Now let's see how we can call this method from Flex.
The first step is to open the configuration file called remoting-config.xml and add the following entry:
<destination id="hello">
    <properties>
        <source>test.Hello</source>
    </properties>
</destination>
This declaration will associate our Java class with an identifier called "hello" - we will use this identifier in the Flex application in order to call methods from the Java class.
We will start with our first ActionScript file called Message.as. It has the same structure as the corresponding Message.java.
package test {
    [RemoteClass(alias="test.Message")]
    public class Message {
        public function Message() {
        }
        public var body:String;
    }
}
You can see the declaration [RemoteClass(alias="test.Message")]. This is a hint for the compiler, letting it map the ActionScript class to the corresponding Java one.
The main Flex application is called helloworld.mxml. The code is below:
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml">
    <mx:Script>
        <![CDATA[
            import mx.rpc.events.ResultEvent;
            import mx.controls.Alert;
            import test.Message;

            private function messageReceived(event:ResultEvent):void {
                var message:Message = event.result as Message;
                Alert.show(message.body);
            }

            private function sendMessage(event:Event):void {
                var message:Message = new Message();
                message.body = inputName.text;
                service.sayHello(message);
            }
        ]]>
    </mx:Script>

    <mx:Panel>
        <mx:VBox>
            <mx:HBox>
                <mx:Label text="Name:"/>
                <mx:TextInput id="inputName"/>
                <mx:Button label="Send" click="sendMessage(event)"/>
            </mx:HBox>
        </mx:VBox>
    </mx:Panel>

    <mx:RemoteObject id="service" destination="hello" result="messageReceived(event)"/>
</mx:Application>
How does the application work?
The application is pretty simple. The GUI contains a text field and a send button. When the user presses the send button, the application calls the sendMessage method. The application also declares a remote object with an identifier, a destination called "hello" (the same as the one from the configuration file remoting-config.xml), and a handler method which is going to be called after the result of the method is returned from Java to Flex. Why do we need this handler? Because all input/output operations in Flex are asynchronous, and you need to register handlers which are going to be invoked at some moment in time.
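Stripped of the Flex specifics, that asynchronous pattern looks like this (a Python sketch for illustration only; this is not how Flex or BlazeDS work internally): the caller registers a handler up front, the call returns immediately, and the handler fires when the result arrives.

```python
import threading

def call_remote(method, argument, on_result):
    # The call returns immediately; the result is delivered to the
    # registered handler later, as with a Flex RemoteObject call.
    def worker():
        on_result(method(argument))
    threading.Thread(target=worker).start()

done = threading.Event()
received = []

def message_received(result):  # plays the role of messageReceived()
    received.append(result)
    done.set()

call_remote(lambda body: "Hello " + body, "World", message_received)
done.wait(timeout=5)
print(received[0])  # -> Hello World
```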
The sendMessage method is pretty straightforward: it calls the sayHello method from the Java class with a Message parameter. That's all: you don't need to do any manual data conversion, and this holds even for very complex domain objects.
How does it work? One method is the AMF binary protocol, the underlying format used by the BlazeDS communication services. The AMF protocol knows how to serialize and deserialize data between Flex and Java using a table of type conversions [], and it also knows how to pack the data efficiently. Domain objects written in Java should have their equivalents in ActionScript (you can use a tool to generate them from the Java sources), and they are passed back and forth through the AMF gateway. In our case the serialized Message object from ActionScript is converted to Message.java one way, and the serialized Java object is converted back to Message.as the other way.
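The alias mechanism can be sketched roughly as follows (illustrative Python using JSON; real AMF is a binary format with a full table of type conversions): the payload carries a class alias, and the receiving side looks the alias up to rebuild a typed object instead of a plain dictionary.

```python
import json

# Sketch of alias-based serialization, as in [RemoteClass(alias="test.Message")]:
# the wire payload carries a class alias, and the receiver looks the alias up
# to rebuild a typed object instead of a raw map of fields.
ALIASES = {}

def remote_class(alias):
    def register(cls):
        ALIASES[alias] = cls
        cls._alias = alias
        return cls
    return register

@remote_class("test.Message")
class Message:
    def __init__(self, body=""):
        self.body = body

def serialize(obj):
    return json.dumps({"alias": obj._alias, "data": vars(obj)})

def deserialize(payload):
    envelope = json.loads(payload)
    cls = ALIASES[envelope["alias"]]
    obj = cls.__new__(cls)
    obj.__dict__.update(envelope["data"])
    return obj

wire = serialize(Message("Hello World"))
message = deserialize(wire)
print(type(message).__name__, message.body)  # -> Message Hello World
```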
The second method is the use of Java reflection to instantiate the Java objects and invoke the methods; we will not detail it here.
The AMF protocol also takes into account the case where you throw exceptions in your Java methods. The exception is serialized as a regular object and sent over the wire to the Flex application.
Find out more
To find out more about how you can integrate Flex into Java projects, please visit []