C# Programming Training Classes in Apple Valley, California
Learn C# Programming in Apple Valley, California and surrounding areas via our hands-on, expert-led courses. All of our classes are offered on an onsite, online, or public instructor-led basis. Here is a list of our current C# Programming related training offerings in Apple Valley, California:
- Object-Oriented Programming in C#
14 December, 2020 - 18 December, 2020
- VMware vSphere 6.7 with ESXi and vCenter
2 November, 2020 - 6 November, 2020
- ENTERPRISE LINUX HIGH AVAILABILITY CLUSTERING
9 November, 2020 - 12 November, 2020
- Docker
Checking to see if a file exists is a two-step process in Python. Simply import the os.path module and invoke the isfile function:

import os.path
os.path.isfile(filename)
#include <deal.II/base/table_indices.h>
A class representing a fixed size array of indices.
It is used in tensorial objects like the TableBase and SymmetricTensor classes to represent a nested choice of indices.
Definition at line 44 of file table_indices.h.
Default constructor. This constructor sets all indices to zero.
Constructor. Initializes the indices stored by this object by the given arguments. This constructor will result in a compiler error if the template argument N is different from the number of the arguments.
Definition at line 116 of file table_indices.h.
Read-only access to the value of the ith index.
Definition at line 128 of file table_indices.h.
Write access to the value of the ith index.
Definition at line 137 of file table_indices.h.
Compare two index fields for equality.
Definition at line 146 of file table_indices.h.
Compare two index fields for inequality.
Definition at line 156 of file table_indices.h.
Sort the indices in ascending order. While this operation is not very useful for Table objects, it is used for the SymmetricTensor class.
Definition at line 164 of file table_indices.h.
Write or read the data of this object to or from a stream for the purpose of serialization.
Definition at line 173 of file table_indices.h.
Output operator for TableIndices objects; reports them in a list like this: [i1,i2,...].
Definition at line 187 of file table_indices.h.
Store the indices in an array.
Definition at line 107 of file table_indices.h.
getsockopt - Get socket options
#include <sys/socket.h>
int getsockopt(
int socket,
int level,
    int option_name,
void *option_value,
socklen_t *option_len );
[XNS4.0] The definition of the getsockopt() function in
XNS4.0 uses a size_t data type instead of a socklen_t data
type as specified in XNS5.0 (the previous definition).
[Tru64 UNIX] The following definition of the getsockopt()
function does not conform to current standards and is supported
only for backward compatibility (see standards(5)):
int getsockopt(
int socket,
int level,
    int option_name,
char *option_value,
int *option_len );
Interfaces documented on this reference page conform to
industry standards as follows:
getsockopt(): XNS4.0, XNS5.0
Refer to the standards(5) reference page for more information
about industry standards and associated tags.
socket
    Specifies the file descriptor for the socket.

level
    Specifies the protocol level at which the option resides. To retrieve options at the socket level, specify the level parameter as SOL_SOCKET. To retrieve options at other levels, supply the appropriate protocol number for the protocol controlling the option. For example, to indicate that an option will be interpreted by the TCP protocol, set level to the protocol number of TCP, as defined in the netinet/in.h header file, or as determined by using the getprotobyname() function.

option_name
    Specifies a single option to be retrieved. The socket level options can be enabled or disabled by the setsockopt() function. The getsockopt() function retrieves information about the following options:

    SO_ACCEPTCONN
        Reports whether socket listening is enabled. This option returns an int value.

    SO_BROADCAST
        Reports whether transmission of broadcast messages is supported. This option returns an int value.

    [Tru64 UNIX] In a cluster, reports whether the socket will use the default cluster alias as its source address.

    [Tru64 UNIX] In a cluster, reports whether the socket can only receive packets addressed to this cluster member.

    [Tru64 UNIX] In a cluster, reports whether the socket must receive packets addressed to a cluster alias and will drop any packets that are not addressed to a cluster alias.

    SO_DEBUG
        Reports whether debugging information is being recorded. This option returns an int value.

    SO_DONTROUTE
        Reports whether outgoing messages should bypass the standard routing facilities. The destination must be on a directly-connected network; messages are directed to the appropriate network interface. The protocol in use determines the effect of this option. (Not recommended, for debugging purposes only.) This option returns an int value.

    SO_ERROR
        Reports information about error status and clears it. This option returns an int value.

    SO_KEEPALIVE
        Reports whether connections are kept active with periodic transmission of messages. If the connected socket fails to respond to these messages, the connection is broken and processes using that socket are notified with a SIGPIPE signal. This option returns an int value.

    SO_LINGER
        Reports whether the socket lingers on a close() function if data is present. This option returns a struct linger value.

    SO_OOBINLINE
        Reports whether the socket leaves received out-of-band data (data marked urgent) in line. This option returns an int value.

    SO_RCVBUF
        Reports receive buffer size information. This option returns an int value.

    SO_RCVLOWAT
        Reports the minimum number of bytes (low-water mark) for socket receive operations. The default value is 1; this option returns an int value.

    SO_RCVTIMEO
        Reports receive time-out information. This option returns a struct timeval value that specifies the amount of time to wait for a receive operation to complete. If a receive operation has blocked for this amount of time without completing, it returns with a partial count or, if no data was received, sets errno to [EWOULDBLOCK].

    Reports whether an attempt to bind the socket to a port in the reserved range (512-1024) will fail if the port is marked static.

    SO_REUSEADDR
        Reports whether the rules used in validating addresses supplied by a bind() function should allow reuse of local addresses. This option returns an int value.

    [Tru64 UNIX] In a cluster, reports whether the socket can reuse a locked cluster alias port.

    SO_SNDBUF
        Reports send buffer size information. This option returns an int value.

    SO_SNDLOWAT
        Reports the minimum number of bytes (low-water mark) for socket transmit operations. Non-blocking transmit operations process no data if flow control does not allow either the send low-water mark value or the entire request (whichever is smaller) to be processed. This option returns an int value.

    SO_SNDTIMEO
        Reports send time-out information. This option returns a struct timeval value that specifies the amount of time to wait for a send operation to complete.

    SO_TYPE
        Reports the socket type. This option returns an int value.

    SO_USELOOPBACK
        Only valid for routing sockets. Reports whether the sender receives a copy of each message. This option returns an int value.
[Tru64 UNIX] Options at other protocol levels
vary in format and name. See the tcp(7) and ip(7)
reference pages for more information on option
names relevant for TCP and IP options respectively.
Note
[Tru64 UNIX] The default values for socket level options like SO_SNDBUF, SO_RCVBUF, SO_SNDLOWAT, and SO_RCVLOWAT are not constant across different protocols and implementations. Use the getsockopt(2) routine to obtain the default values
programmatically.

option_value
    The address of a buffer.

option_len
    Specifies the length of the buffer pointed to by option_value. The option_len parameter initially contains the size of the buffer pointed to by the option_value parameter. On return, the option_len parameter is modified to indicate the actual size of the value returned. If no option value is supplied or returned, the option_value parameter can be 0 (zero).
The getsockopt() function allows an application program to
query socket options. The calling program specifies the
name of the socket, the name of the option, and a place to
store the requested information. The operating system gets
the socket option information from its internal data
structures and passes the requested information back to
the calling program.
Options may exist at multiple protocol levels. They are
always present at the uppermost socket level. When
retrieving socket options, specify the level at which the
option resides and the name of the option.
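This page documents the C interface; as an aside, the same query can be sketched through Python's socket module, which wraps getsockopt() and exposes the SOL_SOCKET level and SO_* option constants described above:

```python
import socket

# Create a TCP socket and query two socket-level options via getsockopt().
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# SO_TYPE reports the socket type as an int (SOCK_STREAM here).
sock_type = s.getsockopt(socket.SOL_SOCKET, socket.SO_TYPE)

# SO_RCVBUF reports the receive buffer size, also as an int.
rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

print(sock_type == socket.SOCK_STREAM)  # True
print(rcvbuf > 0)                       # True
s.close()
```

For options whose value is larger than an int (such as SO_LINGER's struct linger), the Python wrapper takes an extra buffer-length argument and returns raw bytes, mirroring the option_value/option_len pair of the C call.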
Upon successful completion, the getsockopt() function
returns a value of 0 (zero). Otherwise, a value of -1 is
returned, and errno is set to indicate the error.
If the getsockopt() function fails, errno may be set to one of the following values:

[EACCES]
    The calling process does not have appropriate permissions.

[EBADF]
    The socket parameter is not valid.

[EDOM]
    [POSIX] The send and receive timeout values are too large to fit in the timeout fields of the socket structure.

[EFAULT]
    The address pointed to by the option_value parameter is not in a valid (writable) part of the process space, or the option_len parameter is not in a valid part of the process address space.

[EINVAL]
    The option_value or option_len parameter is invalid; or the socket is shut down.

[ENOBUFS]
    Insufficient resources are available in the system to complete the call.

[ENOPROTOOPT]
    The option is unknown.

[ENOSR]
    The available STREAMS resources were insufficient for the operation to complete.

[ENOTSOCK]
    The socket parameter refers to a file, not a socket.

[EOPNOTSUPP]
    [XNS4.0] The operation is not supported by the socket protocol.
Functions: bind(2), close(2), endprotoent(3), getprotobynumber(3), getprotoent(3), setprotoent(3), setsockopt(2), socket(2).
NetworkInformation: ip(7), tcp(7).
Standards: standards(5).
Network Programmer's Guide
getsockopt(2)
Hello, I have been trying to modify this script so it is a linear spline instead of a (Catmull-Rom) Bezier-style spline, like in Pic1, but my results are in Pic2. Would anyone know how to fix this?
(My error is obviously in lines 29 to 39, the calculation of the spline; I know this question is rather simple since it uses basic math.)
-Pic1 (what I am trying to achieve) -Pic2 (the results I am getting)
Here is my code so you know what I have done. I believe my mistake is in the Vector3 ReturnLinearRom function.
using UnityEngine;
using System.Collections.Generic;
public class LinearSpline : MonoBehaviour {
public List<Transform> controlPointsList = new List<Transform>();
public int segCount = 2;
public bool isLooping = true;
void OnDrawGizmos()
{
Gizmos.color = Color.white;
for (int i = 0; i < controlPointsList.Count; i++)
{
Gizmos.DrawWireSphere(controlPointsList[i].position, 0.3f);
}
for (int i = 0; i < controlPointsList.Count; i++)
{
if ((i == 0 || i == controlPointsList.Count - 2 || i == controlPointsList.Count - 1) && !isLooping)
{
continue;
}
DisplayLinearSpline(i);
}
}
Vector3 ReturnLinearRom(float t, Vector3 p0, Vector3 p1, Vector3 p2, Vector3 p3)
{
Vector3 a = p0;
Vector3 b = p1;
Vector3 c = p2;
Vector3 d = p3;
Vector3 pos = a + b + c + d;
return pos;
}
void DisplayLinearSpline(int pos)
{
Vector3 p0 = controlPointsList[ClampListPos(pos - 1)].position;
Vector3 p1 = controlPointsList[pos].position;
Vector3 p2 = controlPointsList[ClampListPos(pos + 1)].position;
Vector3 p3 = controlPointsList[ClampListPos(pos + 2)].position;
Vector3 lastPos = Vector3.zero;
for (int t = 0; t < segCount; t++)
{
float n = (float)t / segCount;
Vector3 newPos = ReturnLinearRom(n, p0, p1, p2, p3);
if (n == 0)
{
lastPos = newPos;
continue;
}
Gizmos.color = Color.white;
Gizmos.DrawLine(lastPos, newPos);
lastPos = newPos;
Gizmos.DrawSphere(newPos, 0.3f);
}
Gizmos.DrawLine(lastPos, p2);
}
int ClampListPos(int pos)
{
if (pos < 0)
{
pos = controlPointsList.Count - 1;
}
if (pos > controlPointsList.Count)
{
pos = 1;
}
else if (pos > controlPointsList.Count - 1)
{
pos = 0;
}
return pos;
}
}
Answer by ldeboer
·
Aug 31, 2016 at 08:59 AM
You realize the original code was drawing a spline in a loop because it needed to draw little line segments to make the curve, so it had to loop. You have no need to do that: you are drawing a line between the control points, so use Gizmos.DrawLine directly.
// Draw line between each control point
for (int i = 0; i < controlPointsList.Count-1; i++) {
Gizmos.DrawLine(controlPointsList[i].position, controlPointsList[i+1].position);
}
// Finally line between first and last control point
Gizmos.DrawLine(controlPointsList[controlPointsList.Count-1].position, controlPointsList[0].position);
I should add that, since a catmull-rom MUST PASS THRU any middle points, you could also just use your catmull-rom code as it was, but set the two middle control points of the spline to the mid point between the two end points, or even the points 1/4 and 3/4 of the way along the line if you wanted nice even little line segments.
Ah, forgot about that part. Anyway, thanks for the help.
But is there a way to do it with a calculation, instead of just drawing the lines to each point?
There is no such thing as a linear spline ... some spline types can be made to draw a line by organizing the control points in a special way. So there is NO CALCULATION called linear spline in the way you are trying to use it as a parametric equation. That is what the loop function inside the catmull-rom code did: it blends the control points.
So your options are
Use a spline like catmull-rom and set the control points from the two end control points
Draw a simple line between the points
Recursively divide the line into small sections
For 1 and 3, what you want is points at set distances along the line, so let's look at that using your control points from the code above.
This is the calculation for the point midway between any two of your control points:
 Vector3 mid = (controlPointsList[i].position + controlPointsList[i+1].position) / 2;
So the 1/4 point would be half way between the first control point and the mid point:
 Vector3 onequarter = (controlPointsList[i].position + mid) / 2;
 Vector3 threequarter = (controlPointsList[i+1].position + mid) / 2;
So you could modify your loop code to recurse the subdivision process, which is called tweening. Generally it would be easier to organize the function in parametric form going from 0 to 1, like so:
xt = (x2-x1)*t + x1 where 0 <= t <= 1
In your code it would look like this
 float t;
 Vector3 tpos;
 tpos.x = (controlPointsList[i+1].position.x - controlPointsList[i].position.x) * t + controlPointsList[i].position.x;
 tpos.y = (controlPointsList[i+1].position.y - controlPointsList[i].position.y) * t + controlPointsList[i].position.y;
 tpos.z = (controlPointsList[i+1].position.z - controlPointsList[i].position.z) * t + controlPointsList[i].position.z;
If you make t = 0.1, tpos will be the point 1/10 of the way along the line; t = 0.25 will be the point a quarter of the way along the line, etc. So you can loop it like your spline code, just changing the t value.
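As a quick sanity check of the parametric form above (sketched here in Python rather than C#, with made-up endpoint values):

```python
def lerp(p1, p2, t):
    """Point t of the way from p1 to p2, componentwise: (p2 - p1) * t + p1."""
    return tuple((b - a) * t + a for a, b in zip(p1, p2))

# Hypothetical control point positions standing in for Transform.position.
p1 = (0.0, 0.0, 0.0)
p2 = (4.0, 8.0, 0.0)

print(lerp(p1, p2, 0.5))   # midpoint: (2.0, 4.0, 0.0)
print(lerp(p1, p2, 0.25))  # quarter point: (1.0, 2.0, 0.0)
```

Stepping t from 0 to 1 in equal increments gives the evenly spaced segment points the loop in the C# code produces.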
OR you use standard catmull-rom spline you already have using these four points
controlPointsList[i].position
onequarter.position
threequarter.position
controlPointsList[i+1].position
The catmull-rom spline is GUARANTEED to be a straight line and the drawn segments will be evenly spaced.
I have been working on some AR training and found a lack of usable VuMark scripts. While I am not a programmer, I put something together that works for a proof of concept I am working on, and figured I should post it here for anyone to use and hopefully save you some time.
The script goes onto an empty game object which I will call the "PickerObject" that is a child of the VuMark. Empty game objects are made with the exact VuMark Id names (I made this using numeric VuMark Ids). These objects are parented to the PickerObject. Anything you then parent to the empty game objects named with the VuMark Ids will become active when the VuMark Id is found.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using Vuforia;
/*
*******Trigger a scene with Matching Vumark Id***********
Place script on a empty game object that is a child of the VuMark.
Create empty game objects with matching VuMark Instance Ids.
Inside those empty game objects, put what you want to be active when the VuMark Id is tracked.
1. Vumark
a. Empty game object with this script (child of VuMark)
i. Empty game object - named same as VuMark Id (child of empty game object with script)
1. Stuff you want to show up when the VuMark Id is found (child of empty game object with VuMark Id name)
i. Empty game object - named same as VuMark Id (child of empty game object with script)
1. Stuff you want to show up when the VuMark Id is found (child of empty game object with VuMark Id name)
You can make as many as you want. The script will look down the list
and if it finds a matching VuMarkId and a child of the object the script is on,
it will make that object active, if not all objects stay inactive
If you find any errors please email me
Rich
*/
public class VuMarkIdToScene : MonoBehaviour {
public VuMarkTarget vumark;
private VuMarkManager mVuMarkManager;
void Start ()
{
mVuMarkManager = TrackerManager.Instance.GetStateManager().GetVuMarkManager();
}
void Update ()
{
foreach (var bhvr in mVuMarkManager.GetActiveBehaviours())
{
vumark = bhvr.VuMarkTarget;
var VuId = vumark.InstanceId;
print ("Found ID number " + VuId);
foreach (Transform child in transform)
{
if (child.name == VuId.ToString())
{
child.gameObject.SetActive (true);
}
else
{
child.gameObject.SetActive(false);
}
}
}
}
}
Thanks a lot for this script ! It works much better than the one I have.
I'm also trying to track 2 vumarks simultaneously but it shows the same vumark ID. But works fine when trying to detect just one.
Would you have a script that enables this ?
Thanks again
The following section contains answers to some frequently asked questions about Dataflow.
General questions
Where can I find additional support?
You can visit Google Cloud Support to obtain a support package for Google Cloud, including Dataflow.
You can use StackOverflow to research your question or to submit a new question. When submitting, please tag your question with google-cloud-dataflow. This group is monitored by members of Google's engineering staff who are happy to answer your questions.
You can also submit questions, feature requests, bug or defect reports, and other feedback on the UserVoice forum.
Is it possible to share data across pipeline instances?
There is no Dataflow-specific cross pipeline communication mechanism for sharing data or processing context between pipelines. You can use durable storage like Cloud Storage or an in-memory cache like App Engine to share data between pipeline instances.
Is there a built-in scheduling mechanism to execute pipelines at given time or interval?
You can automate pipeline execution by:
- Using Cloud Scheduler
- Using Apache Airflow's Dataflow Operator, one of several Google Cloud Operators in a Cloud Composer workflow.
- Running custom (cron) job processes on Compute Engine.
How can I tell what version of the Dataflow SDK is installed/running in my environment?
Installation details depend on your development environment. If you're using Maven, you can have multiple versions of the Dataflow SDK "installed," in one or more local Maven repositories.
Java
To find out what version of the Dataflow SDK that a given pipeline is running, you can look at
the console output when running with
DataflowPipelineRunner or
BlockingDataflowPipelineRunner. The console output will include the Dataflow SDK version information.
Python
To find out what version of the Dataflow SDK that a given pipeline is running, you can look at
the console output when running with
DataflowRunner. The console will contain a message like
the following, which contains the Dataflow SDK version information:
INFO: Executing pipeline on the Dataflow Service, ... Dataflow SDK version: <version>
Interacting with your Cloud Dataflow job
Is it possible to access my job's worker machines (Compute Engine VMs) while my pipeline is running?
You can view the VM instances for a given pipeline by using the Google Cloud Console. From there, you can use SSH to access each instance. However, once your job either completes or fails, the Dataflow service will automatically shut down and clean up the VM instances.
In the Cloud Dataflow Monitoring Interface, why don't I see Reserved CPU Time for my streaming job?
The Dataflow service reports Reserved CPU Time after jobs are completed. For unbounded jobs, this means Reserved CPU time is only reported after jobs have been cancelled or have failed.
In the Cloud Dataflow Monitoring Interface, why are the job state and watermark information unavailable for recently updated streaming jobs?
The Update operation makes several changes that take a few minutes to propagate to the Dataflow Monitoring Interface. Try refreshing the monitoring interface 5 minutes after updating your job.
Why do my custom composite transforms appear expanded in the Dataflow Monitoring Interface?
In your pipeline code, you might have invoked your composite transform as follows:
result = transform.apply(input);
Composite transforms invoked in this manner omit the expected nesting and may thus appear expanded in the Dataflow Monitoring Interface. Your pipeline may also generate warnings or errors about stable unique names at pipeline execution time.
To avoid these issues, make sure you invoke your transforms using the recommended format:
result = input.apply(transform);
Why can't I see my ongoing job's information anymore in the Cloud Dataflow Monitoring Interface, even though it appeared previously?
There is a known issue that currently can affect some Dataflow jobs that have been running for one month or longer. Such jobs might fail to load in the Dataflow Monitoring Interface, or they might show outdated information, even if the job was previously visible.
You can still obtain your job's status in the job list when using the Dataflow Monitoring or Dataflow Command-line Interfaces. However, if this issue is present, you won't be able to view details about your job.
Programming with the Apache Beam SDK for Java
Can I pass additional (out-of-band) data into an existing ParDo operation?
Yes. There are several patterns to follow, depending on your use case:
- You can serialize information as fields in your DoFn subclass.
- Any variables referenced by the methods in an anonymous DoFn will be automatically serialized.
- You can compute data inside DoFn.startBundle().
- You can pass in data via ParDo.withSideInputs.
For more information, see the ParDo documentation, specifically the sections on Creating a DoFn and Side Inputs, as well as the API for Java reference documentation for ParDo.
How are Java exceptions handled in Cloud Dataflow?
Your pipeline may throw exceptions while processing data. If an error occurs while processing a bundle of elements, the Dataflow service will retry the bundle: in batch mode, failed bundles are retried up to 4 times before the job fails; in streaming mode, they are retried indefinitely.
Exceptions in user code (for example, your
DoFn instances) are
reported in the
Dataflow Monitoring Interface.
If you run your pipeline with
BlockingDataflowPipelineRunner, you'll also see
error messages printed in your console or terminal window.
Consider guarding against errors in your code by adding exception handlers. For
example, if you'd like to drop elements that fail some custom input validation
done in a
ParDo, use a try/catch block within your
ParDo to handle the
exception and drop the element. You may also want to use an
Aggregator
to keep track of error counts.
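The guard pattern described above can be sketched outside Beam in plain Python; the element list, validator, and error counter here are illustrative stand-ins, not Dataflow APIs:

```python
def process_bundle(elements, validate):
    """Drop elements that fail validation, counting failures like an error Aggregator."""
    passed, error_count = [], 0
    for element in elements:
        try:
            validate(element)          # may raise ValueError for bad input
            passed.append(element)
        except ValueError:
            error_count += 1           # track the failure instead of failing the bundle
    return passed, error_count

# Example custom validation: keep only non-negative numbers.
def check_non_negative(x):
    if x < 0:
        raise ValueError(x)

good, errors = process_bundle([3, -1, 7, -2], check_non_negative)
print(good, errors)  # [3, 7] 2
```

In a real DoFn the try/catch sits inside the per-element processing method, and the count would be reported through an Aggregator rather than a return value.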
Programming with the Cloud Dataflow SDK for Python
How do I handle
NameErrors?
If you're getting a
NameError when you execute your pipeline using the Dataflow
service but not when you execute locally (i.e. using the
DirectRunner), your
DoFns may be using values in the global namespace that are not available on the
Dataflow worker.
By default, global imports, functions, and variables defined in the main session
are not saved during the serialization of a Dataflow job. If, for
example, your
DoFns are defined in the main file and reference imports and
functions in the global namespace, you can set the
--save_main_session
pipeline option to
True. This will cause the state of the global namespace to
be pickled and loaded on the Dataflow worker.
Notice that if you have objects in your global namespace that cannot be pickled, you will get a pickling error. If the error is regarding a module that should be available in the Python distribution, you can solve this by importing the module locally, where it is used.
For example, instead of:
import re … def myfunc(): # use re module
use:
def myfunc(): import re # use re module
Alternatively, if your
DoFns span multiple files, you should use
a different approach to packaging your workflow and
managing dependencies.
Pipeline I/O
Does the TextIO source and sink support compressed files, such as GZip?
Yes. Dataflow Java can read files compressed with
gzip and
bzip2. See the TextIO documentation for additional
information.
Can I use a regular expression to target specific files with the TextIO source?
Dataflow supports general wildcard patterns; your glob expression
can appear anywhere in the file path. However, Dataflow does not
support recursive wildcards (
**).
Does the TextIO input source support JSON?
Yes. However, for the Dataflow service to be able to parallelize input and output, your source data must be delimited with a line feed.
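For example, newline-delimited JSON (one record per line; the two records here are made up) can be split at line boundaries and read in parallel, whereas a single pretty-printed JSON array cannot:

```python
import json

records = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]  # illustrative records

# Newline-delimited JSON: each line is an independent record, so a
# reader can split the input at any line feed and parallelize.
ndjson = "\n".join(json.dumps(r) for r in records)
print(ndjson)

# Each chunk of lines can be parsed independently.
parsed = [json.loads(line) for line in ndjson.splitlines()]
print(parsed == records)  # True
```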
Why isn't dynamic work rebalancing activating with my custom source?
Dynamic work rebalancing uses the return value of your custom source's
getProgress()
method to activate. The default implementation for
getProgress() returns
null. To ensure auto-scaling activates, make sure your custom source overrides
getProgress() to return an appropriate value.
How do I access BigQuery datasets or Pub/Sub topics or subscriptions owned by a different Google Cloud Platform project (i.e., not the project with which I'm using Cloud Dataflow)?
See Dataflow's Security and Permissions guide for information on how to access BigQuery or Pub/Sub data in a different Google Cloud project than the one with which you're using Dataflow.
Why do I get "rateLimitExceeded" errors when using the BigQuery connector and what should I do about them?
BigQuery has short term quota limits that apply when too many API requests are sent during a short duration. It's possible for your Dataflow pipeline to temporarily exceed such a quota. Whenever this happens, API requests from your Dataflow pipeline to BigQuery might fail, which could result in
rateLimitExceeded errors in worker logs. Note that Dataflow retries such failures, so you can safely ignore these errors. If you believe that your pipeline is significantly impacted due to
rateLimitExceeded errors, please contact Google Cloud Support.
I'm using the BigQuery connector to write to BigQuery using streaming inserts and my write throughput is lower than expected. What can I do to remedy this?
Slow throughput might be due to your pipeline exceeding the available BigQuery streaming insert quota. If this is the case, you should see quota related error messages from BigQuery in the Dataflow worker logs (look for
quotaExceeded errors). If you see such errors, consider setting the BigQuery sink option
ignoreInsertIds() when using the Apache Beam SDK for Java or using the
ignore_insert_ids option when using the Apache Beam SDK for Python to become automatically eligible for a one GB/sec per-project BigQuery streaming insert throughput. For more information on caveats related to automatic message de-duplication, see the BigQuery documentation. To increase the BigQuery streaming insert quota above one GB/s, submit a request through the Cloud Console.
If you do not see quota related errors in worker logs, the issue might be that default bundling or batching related parameters do not provide adequate parallelism for your pipeline to scale. There are several Dataflow BigQuery connector related configurations that you can consider adjusting to achieve the expected performance when writing to BigQuery using streaming inserts. For example, for Apache Beam SDK for Java, adjust
numStreamingKeys to match the maximum number of workers and consider increasing
insertBundleParallelism to configure BigQuery connector to write to BigQuery using more parallel threads. For configurations available in the Apache Beam SDK for Java, see BigQueryPipelineOptions, and for configurations available in the Apache Beam SDK for Python, see the WriteToBigQuery transform.
Streaming
How do I run my pipeline in streaming mode?
You can set the
--streaming flag at the
command line
when you execute your pipeline. You can also set the streaming mode
programmatically
when you construct your pipeline.
What data sources and sinks are supported in streaming mode?
You can read streaming data from Pub/Sub, and you can write streaming data to Pub/Sub or BigQuery.
What are the current limitations of streaming mode?
Dataflow's streaming mode has the following limitations:
- Batch sources are not yet supported in streaming mode.
- The Dataflow service's Automatic Scaling features are supported in beta.
It looks like my streaming pipeline that reads from Pub/Sub is slowing down. What can I do?
Your project may have insufficient Pub/Sub quota.
You can find out if your project has insufficient quota by checking for
429 (Rate limit exceeded) client errors:
- Go to the Google Cloud Console.
- In the menu on the left, select APIs & services.
- In the Search Box, search for Cloud Pub/Sub.
- Click the Usage tab.
- Check Response Codes and look for (4xx) client error codes.
Why isn't my streaming job upscaling properly when I Update my pipeline with a larger pool of workers?
Java
When you update a streaming job, the replacement job cannot scale beyond the --maxNumWorkers value that you specified for your original job.
Python
When you update a streaming job, the replacement job cannot scale beyond the --max_num_workers value that you specified for your original job.
Streaming autoscaling
What should I do if I want a fixed number of workers?
To enable streaming autoscaling, you need to opt in; it's not on by default. The semantics of the current options are not changing, so to keep using a fixed number of workers, you don't need to do anything.
I’m worried autoscaling will increase my bill. How can I limit it?
Java
By specifying
--maxNumWorkers, you limit the scaling range used to process
your job.
Python
By specifying
--max_num_workers, you limit the scaling range used to process
your job.
What is the scaling range for streaming autoscaling pipelines?
Java
Dataflow autoscales between a minimum of --maxNumWorkers/15 (rounded up) and a maximum of --maxNumWorkers workers.
For streaming autoscaling jobs that use Streaming Engine, the minimum number of workers is 1.
Dataflow balances the number of Persistent Disks between the
workers. For example, if your pipeline needs 3 or 4 workers in steady
state, you could set
--maxNumWorkers=15. The pipeline automatically
scales between 1 and 15 workers, using 1, 2, 3, 4, 5, 8, or 15 workers, which
corresponds to 15, 8, 5, 4, 3, 2, or 1 Persistent Disks per worker,
respectively.
--maxNumWorkers can be 1000 at most.
Python_num_workers.
For streaming autoscaling jobs that use Streaming Engine, the minimum number of workers is 1.
Dataflow balances the number of Persistent Disks between the
workers. For example, if your pipeline needs 3 or 4 workers in steady
state, you could set
--max_num_workers=15. The pipeline automatically
scales between 1 and 15 workers, using 1, 2, 3, 4, 5, 8, or 15 workers, which
corresponds to 15, 8, 5, 4, 3, 2, or 1 Persistent Disks per worker,
respectively.
--max_num_workers can be 1000 at most.
What’s the maximum number of workers autoscaling might use?
Java
Dataflow operates within the limits of your project's
Compute Engine instance count quota or
maxNumWorkers, whichever is
lower.
Python
Dataflow operates within the limits of your project's
Compute Engine instance count quota or
max_num_workers, whichever is
lower.
Can I turn off autoscaling on my streaming pipeline?
Java
Yes. Set
--autoscalingAlgorithm=NONE. Update the pipeline with fixed cluster
specifications, as described in the
manual scaling documentation,
where
numWorkers is within the scaling range.
Python
Yes. Set
--autoscaling_algorithm=NONE. Update the pipeline with fixed cluster
specifications, as described in the
manual scaling documentation,
where
num_workers is within the scaling range.
Can I change the scaling range on my streaming pipeline?
Java
Yes, but you cannot do this with Update*. You must
stop your pipeline by using
Cancel
or Drain and redeploy
your pipeline with the new desired
maxNumWorkers.
Python
Yes, but you cannot do this with Update*. You must
stop your pipeline by using
Cancel
or Drain and redeploy
your pipeline with the new desired
max_num_workers.
Setting up your Google Cloud Platform project to use Cloud Dataflow
How do I determine whether the project I'm using with Cloud Dataflow owns a Cloud Storage bucket that I want to read from or write to?
To determine whether your Google Cloud project owns a particular Cloud Storage bucket, you can use the following console command:
gsutil acl get gs://<your-bucket>
The command outputs a JSON string similar to the following:
[ { "entity": "project-owners-123456789", "projectTeam": { "projectNumber": "123456789", "team": "owners" }, "role": "OWNER" }, .... ]
The relevant entries are the ones for which the "role" is owner. The associated
projectNumber tells you which project owns that bucket. If the project number doesn't
match your project’s number, you will need to either:
- Create a new bucket that is owned by your project.
- Give the appropriate accounts access to the bucket.
How do I create a new bucket owned by my Cloud Dataflow project?
To create a new bucket in the Google Cloud project in which you're using Dataflow, you can use the following console command:
gsutil mb -p <Project to own the bucket> <bucket-name>
How do I make a bucket owned by a different project readable or writable for the Google Cloud Platform project I'm using with Cloud Dataflow?
See Dataflow's Security and Permissions guide for information on how your Dataflow pipeline can access Google Cloud resources owned by a different Google Cloud project.
When I try to run my Cloud Dataflow job, I see an error that says "Some Cloud APIs need to be enabled for your project in order for Cloud Dataflow to run this job." What should I do?
To run a Dataflow job, you must enable the following Google Cloud APIs in your project:
- Compute Engine API (Compute Engine)
- Cloud Logging API
- Cloud Storage
- Cloud Storage JSON API
- BigQuery API
- Pub/Sub
- Datastore API
See the Getting Started section on enabling Google Cloud APIs for detailed instructions. | https://cloud.google.com/dataflow/docs/resources/faq | CC-MAIN-2020-45 | en | refinedweb |
I have a hyperlink vbscript that displays a message using msgbox. I am receiving permission denied trying to use msgbox on 10.3.1 clients.
The script still works fine on 10.0 clients.
Anyone know a fix for this problem?
I have a hyperlink vbscript that displays a message using msgbox. I am receiving permission denied trying to use msgbox on 10.3.1 clients.
The script still works fine on 10.0 clients.
Anyone know a fix for this problem?
The script executes fine when the message box line is excluded. The hyperlink is also vbscript and not VB or VBA. MsgBox works when executed from a .vbs file outside of ArcMap.
well, then I am sure you have been through these, but it appears to be a difference between the versions that isn't obvious
Using Hyperlinks—Help | ArcGIS for Desktop
perhaps showing the lines would help
The message box call will cause the script to fail. Even the simplest of scripts fail.
Function OpenLink ( [FACILITY_ID] )
MsgBox "Hello"
End Function
The rest of the script does not matter. Adding MsgBox anywhere causes the error when you click verify and the link does nothing when used.
I ran into a similar problem when writing scripts for the web using vbscript ... apparently in my case the code was server side code which was not valid.... I had to re-write the section for "Client Side"
This server side code fails with Permission denied and continued processing just stops similar to what you are describing
<% function x() msgbox("Hello World") end function %>
Had to re-write to client side:
<HTML> <Body onload= x()> <script language=vbscript> sub x() msgbox("Hello World") end sub </Script> </Body> </HTML>
However I don't know if this still applies this was back in the ASP days "Active Server Pages"
Now I think I understand .. you are trying to use the msgbox in the script areas for the hyperlink definition. You can't....nor can you use alert in javascript.
As always there are ways around it but you would have to create your own hyperlink launcher as an object and you would have to use the createobject command on your custom launcher to employ msgbox.
Better explanation can be found here:
Advanced Hyperlink Functionality
I ended up converting the script to python and using the ctypes library to create the popup. Sadly, 10.0 doesn't have a python option for hyperlinks, so now I have 2 scripts and lyr files to support, one for 10.0 and one for 10.3.
This is the python code for open a folder and displaying a message if it doesnt exist:
import subprocess,os,ctypes def OpenLink ( [Foldername] ): path = '\\\\server\\sharename\\'+[Foldername] if (os.path.isdir(path)): subprocess.call('explorer "'+path+'"', shell=True) else: ctypes.windll.user32.MessageBoxW(0,u'Folder was not found:\n'+path,u'Not Found',0) return
Sucks that they killed MsgBox functionality sometime after 10.0 though.
guessing wildly that vb wasn't installed... it used to be, but it is being slowly nudged out
Introduction to installing and configuring ArcGIS for Desktop—Help | ArcGIS for Desktop | https://community.esri.com/thread/171690 | CC-MAIN-2020-45 | en | refinedweb |
public class FieldPathPayloadSubsectionExtractor extends Object implements PayloadSubsectionExtractor<FieldPathPayloadSubsectionExtractor>
PayloadSubsectionExtractorthat extracts the subsection of the JSON payload identified by a field path.
PayloadDocumentation.beneathPath(String)
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
protected FieldPathPayloadSubsectionExtractor(String fieldPath)
FieldPathPayloadSubsectionExtractorthat will extract the subsection of the JSON payload found at the given
fieldPath. The
fieldPathprefixed with
beneath-with be used as the subsection ID.
fieldPath- the path of the field
protected FieldPathPayloadSubsectionExtractor(String fieldPath, String subsectionId)
FieldPathPayloadSubsectionExtractorthat will extract the subsection of the JSON payload found at the given
fieldPathand that will us the given
subsectionIdto identify the subsection.
fieldPath- the path of the field
subsectionId- the ID of the subsection
public byte[] extractSubsection(byte[] payload, MediaType contentType)
payloadthat has the given
contentType.
extractSubsectionin interface
PayloadSubsectionExtractor<FieldPathPayloadSubsectionExtractor>
payload- the payload
contentType- the content type of the payload
public String getSubsectionId()
getSubsectionIdin interface
PayloadSubsectionExtractor<FieldPathPayloadSubsectionExtractor>
protected String getFieldPath()
public FieldPathPayloadSubsectionExtractor withSubsectionId(String subsectionId)
subsectionId.
withSubsectionIdin interface
PayloadSubsectionExtractor<FieldPathPayloadSubsectionExtractor>
subsectionId- the subsection ID | https://docs.spring.io/spring-restdocs/docs/1.2.1.RELEASE/api/org/springframework/restdocs/payload/FieldPathPayloadSubsectionExtractor.html | CC-MAIN-2020-45 | en | refinedweb |
@dircategory GNU programming tools
@direntry
* Gforth: (gforth).    A fast interpreter for the Forth language.
@end direntry
@comment @setchapternewpage odd
@comment %**end of header (This is for running Texinfo on a region.)

@ifinfo
This file documents Gforth 0.3

Copyright @copyright{} 1995--1997 Free Software Foundation, Inc.

Permission is granted to make and distribute verbatim copies of
this manual provided the copyright notice and this permission notice
are preserved on all copies.

@ignore
Permission is granted to process this file through TeX and print the
results, provided the printed document carries a copying permission
notice identical to this one except for the removal of this paragraph
(this paragraph not being relevant to the printed manual).
@end ignore

@finalout
@titlepage
@sp 10
@center @titlefont{Gforth Manual}
@sp 2
@center for version 0.3
@sp 2
@center Anton Ertl
@center Bernd Paysan
@sp 3
@center This manual is under construction

@comment The following two commands start the copyright page.
@page
@vskip 0pt plus 1filll
Copyright @copyright{} 1995--1997 Free Software Foundation, Inc.

Gforth is a free implementation of ANS Forth available on many
personal machines. This manual corresponds to version 0.3.
@end ifinfo

@menu
* License::
* Goals::                       About the Gforth Project
* Other Books::                 Things you might want to read
* Invoking Gforth::             Starting Gforth
* Words::                       Forth words available in Gforth
* Tools::                       Programming tools
* ANS conformance::             Implementation-defined options etc.
* Model::                       The abstract machine of Gforth
* Integrating Gforth::          Forth as scripting language for applications
* Emacs and Gforth::            The Gforth Mode
* Image Files::                 @code{.fi} files contain compiled code
* Engine::                      The inner interpreter and the primitives
* Bugs::                        How to report them
* Origin::                      Authors and ancestors of Gforth
* Word Index::                  An item for each Forth word
* Concept Index::               A menu covering many topics
@end menu

@node License, Goals, Top, Top

@iftex
@unnumbered Preface
@cindex Preface
This manual documents Gforth. The reader is expected to know
Forth. This manual is primarily a reference manual. @xref{Other Books},
for introductory material.
@end iftex
It may be considered a model, but we have not yet documented 1.1 anton 542: which parts of the model are stable and which parts we are likely to 1.17 anton. 1.1 anton 548: 1.43 anton 549: @node Other Books, Invoking Gforth, Goals, Top 1.1 anton 550: @chapter Other books on ANS Forth 1.43 anton 551: @cindex books on Forth 1.1 anton 552: 553: As the standard is relatively new, there are not many books out yet. It 1.17 anton 554: is not recommended to learn Forth by using Gforth and a book that is 1.1 anton 555: not written for ANS Forth, as you will not know your mistakes from the 556: deviations of the book. 557: 1.43 anton 558: @cindex standard document for ANS Forth 559: @cindex ANS Forth document 1.1 anton 560: There is, of course, the standard, the definite reference if you want to 1.19 anton 1.48 ! anton 571: @*@url{}. 1.1 anton 572: 1.43 anton 1.1 anton 578: introductory book based on a draft version of the standard. It does not 579: cover the whole standard. It also contains interesting background 1.41 anton 580: information (Jack Woehr was in the ANS Forth Technical Committee). It is 1.1 anton 581: not appropriate for complete newbies, but programmers experienced in 582: other languages should find it ok. 583: 1.43 anton 1.1 anton 591: 592: You will usually just say @code{gforth}. In many other cases the default 1.17 anton 593: Gforth image will be invoked like this: 1.1 anton 1.43 anton 612: @cindex -i, command-line option 613: @cindex --image-file, command-line option 1.1 anton 614: @item --image-file @var{file} 1.43 anton 615: @itemx -i @var{file} 1.1 anton 616: Loads the Forth image @var{file} instead of the default 1.43 anton 617: @file{gforth.fi} (@pxref{Image Files}). 1.1 anton 618: 1.43 anton 619: @cindex --path, command-line option 620: @cindex -p, command-line option 1.1 anton 621: @item --path @var{path} 1.43 anton 622: @itemx -p @var{path} 1.39 anton). 
1.1 anton 628: 1.43 anton 629: @cindex --dictionary-size, command-line option 630: @cindex -m, command-line option 631: @cindex @var{size} parameters for command-line options 632: @cindex size of the dictionary and the stacks 1.1 anton 633: @item --dictionary-size @var{size} 1.43 anton 634: @itemx -m @var{size} 1.1 anton: 1.43 anton 642: @cindex --data-stack-size, command-line option 643: @cindex -d, command-line option 1.1 anton 644: @item --data-stack-size @var{size} 1.43 anton 645: @itemx -d @var{size} 1.1 anton 646: Allocate @var{size} space for the data stack instead of using the 647: default specified in the image (typically 16K). 648: 1.43 anton 649: @cindex --return-stack-size, command-line option 650: @cindex -r, command-line option 1.1 anton 651: @item --return-stack-size @var{size} 1.43 anton 652: @itemx -r @var{size} 1.1 anton 653: Allocate @var{size} space for the return stack instead of using the 1.43 anton 654: default specified in the image (typically 15K). 1.1 anton 655: 1.43 anton 656: @cindex --fp-stack-size, command-line option 657: @cindex -f, command-line option 1.1 anton 658: @item --fp-stack-size @var{size} 1.43 anton 659: @itemx -f @var{size} 1.1 anton 660: Allocate @var{size} space for the floating point stack instead of 1.43 anton 661: using the default specified in the image (typically 15.5K). In this case 1.1 anton 662: the unit specifier @code{e} refers to floating point numbers. 663: 1.43 anton 664: @cindex --locals-stack-size, command-line option 665: @cindex -l, command-line option 1.1 anton 666: @item --locals-stack-size @var{size} 1.43 anton 667: @itemx -l @var{size} 1.1 anton 668: Allocate @var{size} space for the locals stack instead of using the 1.43 anton 669: default specified in the image (typically 14.5K). 1.1 anton 670: 1.43 anton}). 
1.1 anton 697: @end table 698: 1.43 anton 699: @cindex loading files at startup 700: @cindex executing code on startup 701: @cindex batch processing with Gforth 1.1 anton 702: As explained above, the image-specific command-line arguments for the 703: default image @file{gforth.fi} consist of a sequence of filenames and 1.41 anton 704: @code{-e @var{forth-code}} options that are interpreted in the sequence 1.1 anton: 1.43 anton 712: @cindex versions, invoking other versions of Gforth 1.22 anton: 1.1 anton: 1.43 anton 727: @node Words, Tools, Invoking Gforth, Top 1.1 anton 728: @chapter Forth Words 1.43 anton 729: @cindex Words 1.1 anton 730: 731: @menu 1.4 anton 732: * Notation:: 733: * Arithmetic:: 734: * Stack Manipulation:: 1.43 anton 735: * Memory:: 1.4 anton 736: * Control Structures:: 737: * Locals:: 738: * Defining Words:: 1.37 anton 739: * Tokens for Words:: 1.4 anton 740: * Wordlists:: 741: * Files:: 742: * Blocks:: 743: * Other I/O:: 744: * Programming Tools:: 1.18 anton 745: * Assembler and Code words:: 1.4 anton 746: * Threading Words:: 1.1 anton 747: @end menu 748: 749: @node Notation, Arithmetic, Words, Words 750: @section Notation 1.43 anton 751: @cindex notation of glossary entries 752: @cindex format of glossary entries 753: @cindex glossary notation format 754: @cindex word glossary entry format 1.1 anton 755: 756: The Forth words are described in this section in the glossary notation 1.43 anton 757: that has become a de-facto standard for Forth texts, i.e., 1.1 anton 758: 1.4 anton 759: @format 1.1 anton 760: @var{word} @var{Stack effect} @var{wordset} @var{pronunciation} 1.4 anton 761: @end format 1.1 anton 762: @var{Description} 763: 764: @table @var 765: @item word 1.43 anton 766: @cindex case insensitivity 1.17 anton 767: The name of the word. BTW, Gforth is case insensitive, so you can 1.14 anton 768: type the words in in lower case (However, @pxref{core-idef}). 
1.1 anton 769: 770: @item Stack effect 1.43 anton 771: @cindex stack effect 1.1 an, 1.17 anton 776: i.e., a stack sequence is written as it is typed in. Note that Gforth 1.1 anton: 1.19 anton: 1.43 anton 790: @cindex pronounciation of words 1.1 anton 791: @item pronunciation 1.43 anton 792: How the word is pronounced. 1.1 anton 793: 1.43 anton 794: @cindex wordset 1.1 anton 1.19 anton. 1.1 anton 806: 807: @item Description 808: A description of the behaviour of the word. 809: @end table 810: 1.43 anton 811: @cindex types of stack items 812: @cindex stack item types 1.4 anton 813: The type of a stack item is specified by the character(s) the name 814: starts with: 1.1 anton 815: 816: @table @code 817: @item f 1.43 anton 818: @cindex @code{f}, stack item type 819: Boolean flags, i.e. @code{false} or @code{true}. 1.1 anton 820: @item c 1.43 anton 821: @cindex @code{c}, stack item type 1.1 anton 822: Char 823: @item w 1.43 anton 824: @cindex @code{w}, stack item type 1.1 anton 825: Cell, can contain an integer or an address 826: @item n 1.43 anton 827: @cindex @code{n}, stack item type 1.1 anton 828: signed integer 829: @item u 1.43 anton 830: @cindex @code{u}, stack item type 1.1 anton 831: unsigned integer 832: @item d 1.43 anton 833: @cindex @code{d}, stack item type 1.1 anton 834: double sized signed integer 835: @item ud 1.43 anton 836: @cindex @code{ud}, stack item type 1.1 anton 837: double sized unsigned integer 838: @item r 1.43 anton 839: @cindex @code{r}, stack item type 1.36 anton 840: Float (on the FP stack) 1.1 anton 841: @item a_ 1.43 anton 842: @cindex @code{a_}, stack item type 1.1 anton 843: Cell-aligned address 844: @item c_ 1.43 anton 845: @cindex @code{c_}, stack item type 1.36 anton 846: Char-aligned address (note that a Char may have two bytes in Windows NT) 1.1 anton 847: @item f_ 1.43 anton 848: @cindex @code{f_}, stack item type 1.1 anton 849: Float-aligned address 850: @item df_ 1.43 anton 851: @cindex @code{df_}, stack item type 1.1 anton 
852: Address aligned for IEEE double precision float 853: @item sf_ 1.43 anton 854: @cindex @code{sf_}, stack item type 1.1 anton 855: Address aligned for IEEE single precision float 856: @item xt 1.43 anton 857: @cindex @code{xt}, stack item type 1.1 anton 858: Execution token, same size as Cell 859: @item wid 1.43 anton 860: @cindex @code{wid}, stack item type 1.1 anton 861: Wordlist ID, same size as Cell 862: @item f83name 1.43 anton 863: @cindex @code{f83name}, stack item type 1.1 anton 864: Pointer to a name structure 1.36 anton 865: @item " 1.43 anton 866: @cindex @code{"}, stack item type 1.36 anton 867: string in the input stream (not the stack). The terminating character is 868: a blank by default. If it is not a blank, it is shown in @code{<>} 869: quotes. 1.1 anton 870: @end table 871: 1.4 anton 872: @node Arithmetic, Stack Manipulation, Notation, Words 1.1 anton 873: @section Arithmetic 1.43 anton 874: @cindex arithmetic words 875: 876: @cindex division with potentially negative operands 1.1 anton 1.4 anton 885: former, @pxref{Mixed precision}). 
886: 887: @menu 888: * Single precision:: 889: * Bitwise operations:: 890: * Mixed precision:: operations with single and double-cell integers 891: * Double precision:: Double-cell integer arithmetic 892: * Floating Point:: 893: @end menu 1.1 anton 894: 1.4 anton 895: @node Single precision, Bitwise operations, Arithmetic, Arithmetic 1.1 anton 896: @subsection Single precision 1.43 anton 897: @cindex single precision arithmetic words 898: 1.1 anton 899: doc-+ 900: doc-- 901: doc-* 902: doc-/ 903: doc-mod 904: doc-/mod 905: doc-negate 906: doc-abs 907: doc-min 908: doc-max 909: 1.4 anton 910: @node Bitwise operations, Mixed precision, Single precision, Arithmetic 1.1 anton 911: @subsection Bitwise operations 1.43 anton 912: @cindex bitwise operation words 913: 1.1 anton 914: doc-and 915: doc-or 916: doc-xor 917: doc-invert 918: doc-2* 919: doc-2/ 920: 1.4 anton 921: @node Mixed precision, Double precision, Bitwise operations, Arithmetic 1.1 anton 922: @subsection Mixed precision 1.43 anton 923: @cindex mixed precision arithmetic words 924: 1.1 anton 925: doc-m+ 926: doc-*/ 927: doc-*/mod 928: doc-m* 929: doc-um* 930: doc-m*/ 931: doc-um/mod 932: doc-fm/mod 933: doc-sm/rem 934: 1.4 anton 935: @node Double precision, Floating Point, Mixed precision, Arithmetic 1.1 anton 936: @subsection Double precision 1.43 anton 937: @cindex double precision arithmetic words 1.16 anton 938: 1.43 anton 939: @cindex double-cell numbers, input format 940: @cindex input format for double-cell numbers 1.16 anton 941: The outer (aka text) interpreter converts numbers containing a dot into 942: a double precision number. Note that only numbers with the dot as last 943: character are standard-conforming. 
944: 1.1 anton 945: doc-d+ 946: doc-d- 947: doc-dnegate 948: doc-dabs 949: doc-dmin 950: doc-dmax 951: 1.4 anton 952: @node Floating Point, , Double precision, Arithmetic 953: @subsection Floating Point 1.43 anton 954: @cindex floating point arithmetic words 1.16 anton 955: 1.43 anton 956: @cindex floating-point numbers, input format 957: @cindex input format for floating-point numbers 1.16 anton 958: The format of floating point numbers recognized by the outer (aka text) 959: interpreter is: a signed decimal number, possibly containing a decimal 960: point (@code{.}), followed by @code{E} or @code{e}, optionally followed 1.41 anton 961: by a signed integer (the exponent). E.g., @code{1e} is the same as 1.35 anton 962: @code{+1.0e+0}. Note that a number without @code{e} 1.16 anton). 1.4 anton 970: 1.43 anton 971: @cindex angles in trigonometric operations 972: @cindex trigonometric operations 1.4 anton 973: Angles in floating point operations are given in radians (a full circle 1.17 anton 974: has 2 pi radians). Note, that Gforth has a separate floating point 1.4 anton 975: stack, but we use the unified notation. 976: 1.43 anton 977: @cindex floating-point arithmetic, pitfalls 1.4 anton 1.11 anton 983: avoid them), you might start with @cite{David Goldberg, What Every 1.6 anton 984: Computer Scientist Should Know About Floating-Point Arithmetic, ACM 985: Computing Surveys 23(1):5@minus{}48, March 1991}. 1.4 anton 1.6 anton 1004: doc-falog 1.4 anton: 1.43 anton 1020: @node Stack Manipulation, Memory, Arithmetic, Words 1.1 anton 1021: @section Stack Manipulation 1.43 anton 1022: @cindex stack manipulation words 1.1 anton 1023: 1.43 anton 1024: @cindex floating-point stack in the standard 1.17 anton 1025: Gforth has a data stack (aka parameter stack) for characters, cells, 1.1 anton 1.4 anton 1035: it. Instead, just say that your program has an environmental dependency 1036: on a separate FP stack. 
1037: 1.43 anton 1038: @cindex return stack and locals 1039: @cindex locals and return stack 1.4 anton 1040: Also, a Forth system is allowed to keep the local variables on the 1.1 anton: 1.4 anton 1047: @menu 1048: * Data stack:: 1049: * Floating point stack:: 1050: * Return stack:: 1051: * Locals stack:: 1052: * Stack pointer manipulation:: 1053: @end menu 1054: 1055: @node Data stack, Floating point stack, Stack Manipulation, Stack Manipulation 1.1 anton 1056: @subsection Data stack 1.43 anton 1057: @cindex data stack manipulation words 1058: @cindex stack manipulations words, data stack 1059: 1.1 anton: 1.4 anton 1079: @node Floating point stack, Return stack, Data stack, Stack Manipulation 1.1 anton 1080: @subsection Floating point stack 1.43 anton 1081: @cindex floating-point stack manipulation words 1082: @cindex stack manipulation words, floating-point stack 1083: 1.1 anton 1084: doc-fdrop 1085: doc-fnip 1086: doc-fdup 1087: doc-fover 1088: doc-ftuck 1089: doc-fswap 1090: doc-frot 1091: 1.4 anton 1092: @node Return stack, Locals stack, Floating point stack, Stack Manipulation 1.1 anton 1093: @subsection Return stack 1.43 anton 1094: @cindex return stack manipulation words 1095: @cindex stack manipulation words, return stack 1096: 1.1 anton 1097: doc->r 1098: doc-r> 1099: doc-r@ 1100: doc-rdrop 1101: doc-2>r 1102: doc-2r> 1103: doc-2r@ 1104: doc-2rdrop 1105: 1.4 anton 1106: @node Locals stack, Stack pointer manipulation, Return stack, Stack Manipulation 1.1 anton 1107: @subsection Locals stack 1108: 1.4 anton 1109: @node Stack pointer manipulation, , Locals stack, Stack Manipulation 1.1 anton 1110: @subsection Stack pointer manipulation 1.43 anton 1111: @cindex stack pointer manipulation words 1112: 1.1 anton 1113: doc-sp@ 1114: doc-sp! 1115: doc-fp@ 1116: doc-fp! 1117: doc-rp@ 1118: doc-rp! 1119: doc-lp@ 1120: doc-lp! 
1121: 1.43 anton 1122: @node Memory, Control Structures, Stack Manipulation, Words 1123: @section Memory 1124: @cindex Memory words 1.1 anton 1125: 1.4 anton 1126: @menu 1.43 anton 1127: * Memory Access:: 1.4 anton 1128: * Address arithmetic:: 1.43 anton 1129: * Memory Blocks:: 1.4 anton 1130: @end menu 1131: 1.43 anton 1132: @node Memory Access, Address arithmetic, Memory, Memory 1133: @subsection Memory Access 1134: @cindex memory access words 1.1 anton: 1.43 anton 1150: @node Address arithmetic, Memory Blocks, Memory Access, Memory 1.1 anton 1151: @subsection Address arithmetic 1.43 anton 1152: @cindex address arithmetic words 1.1 anton: 1.43 anton 1161: @cindex alignment of addresses for types 1.1 anton 1162: ANS Forth also defines words for aligning addresses for specific 1.43 anton 1163: types. Many computers require that accesses to specific data types 1.1 anton 1164: must only occur at specific addresses; e.g., that cells may only be 1165: accessed at addresses divisible by 4. Even if a machine allows unaligned 1166: accesses, it can usually perform aligned accesses faster. 1167: 1.17 anton 1168: For the performance-conscious: alignment operations are usually only 1.1 anton: 1.43 anton 1177: @cindex @code{CREATE} and alignment 1.1 anton 1178: The standard guarantees that addresses returned by @code{CREATE}d words 1.17 anton 1179: are cell-aligned; in addition, Gforth guarantees that these addresses 1.1 anton 1180: are aligned for all purposes. 1181: 1.9 anton 1182: Note that the standard defines a word @code{char}, which has nothing to 1183: do with address arithmetic. 
1184: 1.1 anton 1185: doc-chars 1186: doc-char+ 1187: doc-cells 1188: doc-cell+ 1.43 anton 1189: doc-cell 1.1 anton 1190: doc-align 1191: doc-aligned 1192: doc-floats 1193: doc-float+ 1.43 anton 1194: doc-float 1.1 anton 1.10 anton 1205: doc-maxalign 1206: doc-maxaligned 1207: doc-cfalign 1208: doc-cfaligned 1.1 anton 1209: doc-address-unit-bits 1210: 1.43 anton 1211: @node Memory Blocks, , Address arithmetic, Memory 1212: @subsection Memory Blocks 1213: @cindex memory block words 1.1 anton: 1.43 anton 1226: @node Control Structures, Locals, Memory, Words 1.1 anton 1227: @section Control Structures 1.43 anton 1228: @cindex control structures 1.1 anton 1229: 1230: Control structures in Forth cannot be used in interpret state, only in 1.43 anton. 1.1 anton 1235: 1.4 anton 1236: @menu 1237: * Selection:: 1238: * Simple Loops:: 1239: * Counted Loops:: 1240: * Arbitrary control structures:: 1241: * Calls and returns:: 1242: * Exception Handling:: 1243: @end menu 1244: 1245: @node Selection, Simple Loops, Control Structures, Control Structures 1.1 anton 1246: @subsection Selection 1.43 anton 1247: @cindex selection control structures 1248: @cindex control structures for selection 1.1 anton 1249: 1.43 anton 1250: @cindex @code{IF} control structure 1.1 anton: 1.4 anton 1267: You can use @code{THEN} instead of @code{ENDIF}. Indeed, @code{THEN} is 1.1 anton: 1.31 anton}. 
1.1 anton 1291: 1.43 anton 1292: @cindex @code{CASE} control structure 1.1 anton 1293: @example 1294: @var{n} 1295: CASE 1296: @var{n1} OF @var{code1} ENDOF 1297: @var{n2} OF @var{code2} ENDOF 1.4 anton 1298: @dots{} 1.1 anton: 1.4 anton 1307: @node Simple Loops, Counted Loops, Selection, Control Structures 1.1 anton 1308: @subsection Simple Loops 1.43 anton 1309: @cindex simple loops 1310: @cindex loops without count 1.1 anton 1311: 1.43 anton 1312: @cindex @code{WHILE} loop 1.1 anton 1313: @example 1314: BEGIN 1315: @var{code1} 1316: @var{flag} 1317: WHILE 1318: @var{code2} 1319: REPEAT 1320: @end example 1321: 1322: @var{code1} is executed and @var{flag} is computed. If it is true, 1.43 anton 1323: @var{code2} is executed and the loop is restarted; If @var{flag} is 1324: false, execution continues after the @code{REPEAT}. 1.1 anton 1325: 1.43 anton 1326: @cindex @code{UNTIL} loop 1.1 anton 1327: @example 1328: BEGIN 1329: @var{code} 1330: @var{flag} 1331: UNTIL 1332: @end example 1333: 1334: @var{code} is executed. The loop is restarted if @code{flag} is false. 1335: 1.43 anton 1336: @cindex endless loop 1337: @cindex loops, endless 1.1 anton 1338: @example 1339: BEGIN 1340: @var{code} 1341: AGAIN 1342: @end example 1343: 1344: This is an endless loop. 1345: 1.4 anton 1346: @node Counted Loops, Arbitrary control structures, Simple Loops, Control Structures 1.1 anton 1347: @subsection Counted Loops 1.43 anton 1348: @cindex counted loops 1349: @cindex loops, counted 1350: @cindex @code{DO} loops 1.1 anton: 1.46 anton 1376: doc-i 1377: doc-j 1378: doc-k 1379: 1.1 anton: 1.18 anton 1.30 anton 1397: unsigned loop parameters. 1.18 anton 1398: 1.1 anton 1399: @code{LOOP} can be replaced with @code{@var{n} +LOOP}; this updates the 1400: index by @var{n} instead of by 1. The loop is terminated when the border 1401: between @var{limit-1} and @var{limit} is crossed. E.g.: 1402: 1.18 anton 1403: @code{4 0 +DO i . 
2 +LOOP} prints @code{0 2} 1.1 anton 1404: 1.18 anton 1405: @code{4 1 +DO i . 2 +LOOP} prints @code{1 3} 1.1 anton 1406: 1.43 anton 1407: @cindex negative increment for counted loops 1408: @cindex counted loops with negative increment 1.1 anton 1409: The behaviour of @code{@var{n} +LOOP} is peculiar when @var{n} is negative: 1410: 1.2 anton 1411: @code{-1 0 ?DO i . -1 +LOOP} prints @code{0 -1} 1.1 anton 1412: 1.2 anton 1413: @code{ 0 0 ?DO i . -1 +LOOP} prints nothing 1.1 anton 1414: 1.18 anton.: 1.1 anton 1420: 1.18 anton 1421: @code{-2 0 -DO i . 1 -LOOP} prints @code{0 -1} 1.1 anton 1422: 1.18 anton 1423: @code{-1 0 -DO i . 1 -LOOP} prints @code{0} 1.1 anton 1424: 1.18 anton 1425: @code{ 0 0 -DO i . 1 -LOOP} prints nothing 1.1 anton 1426: 1.30 anton 1427: Unfortunately, @code{+DO}, @code{U+DO}, @code{-DO}, @code{U-DO} and 1428: @code{-LOOP} are not in the ANS Forth standard. However, an 1429: implementation for these words that uses only standard words is provided 1430: in @file{compat/loops.fs}. 1.18 anton. 1.1 anton 1437: 1438: @code{UNLOOP} is used to prepare for an abnormal loop exit, e.g., via 1439: @code{EXIT}. @code{UNLOOP} removes the loop control parameters from the 1440: return stack so @code{EXIT} can get to its return address. 1441: 1.43 anton 1442: @cindex @code{FOR} loops 1.1 anton 1443: Another counted loop is 1444: @example 1445: @var{n} 1446: FOR 1447: @var{body} 1448: NEXT 1449: @end example 1450: This is the preferred loop of native code compiler writers who are too 1.17 anton 1451: lazy to optimize @code{?DO} loops properly. In Gforth, this loop 1.1 anton 1452: iterates @var{n+1} times; @code{i} produces values starting with @var{n} 1453: and ending with 0. Other Forth systems may behave differently, even if 1.30 anton 1454: they support @code{FOR} loops. To avoid problems, don't use @code{FOR} 1455: loops. 
@node Arbitrary control structures, Calls and returns, Counted Loops, Control Structures
@subsection Arbitrary control structures
@cindex control structures, user-defined

@cindex control-flow stack
ANS Forth permits and supports using control structures in a non-nested
way. Information about incomplete control structures is stored on the
control-flow stack. This stack may be implemented on the Forth data
stack, and this is what we have done in Gforth.

@cindex @code{orig}, control-flow stack item
@cindex @code{dest}, control-flow stack item

doc-if
doc-ahead
doc-then
doc-begin
doc-until
doc-again
doc-cs-pick
doc-cs-roll

On many systems control-flow stack items take one word, in Gforth they

doc-else
doc-while
doc-repeat

Gforth adds some more control-structure words:

doc-endif
doc-?dup-if
doc-?dup-0=-if

Counted loop words constitute a separate group of words:

doc-?do
doc-+do
doc-u+do
doc--do
doc-u-do
doc-do
doc-for
doc-loop
doc-+loop
doc--loop
doc-next
doc-leave
doc-?leave
doc-unloop
doc-done

through the definition (@code{LOOP} etc. compile an @code{UNLOOP} on the
fall-through path). Also, you have to ensure that all @code{LEAVE}s are
resolved (by using one of the loop-ending words or @code{DONE}).
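As a sketch of early loop exit (our example, not taken from the manual): @code{LEAVE} terminates the innermost counted loop as soon as it executes, and the loop-ending word resolves it:

@example
10 0 ?DO  i 5 = IF LEAVE THEN  i .  LOOP  \ prints 0 1 2 3 4
@end example

The @code{LEAVE} fires before @code{i .} in the iteration where @code{i} is 5, so 5 itself is not printed.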
Another group of control structure words are

doc-case
doc-endcase
doc-of
doc-endof

@i{case-sys} and @i{of-sys} cannot be processed using @code{cs-pick} and
@code{cs-roll}.

That's much easier to read, isn't it? Of course, @code{REPEAT} and
@code{WHILE} are predefined, so in this example it would not be
necessary to define them.

@node Calls and returns, Exception Handling, Arbitrary control structures, Control Structures
@subsection Calls and returns
@cindex calling a definition
@cindex returning from a definition

A definition can be called simply by writing the name of the
definition. When the end of the definition is reached, it returns. An
earlier return can be forced using

@node Exception Handling, , Calls and returns, Control Structures
@subsection Exception Handling
@cindex Exceptions

doc-catch
doc-throw

@node Locals, Defining Words, Control Structures, Words
@section Locals
@cindex locals

The ideas in this section have also been published in the paper
@cite{Automatic Scoping of Local Variables} by M. Anton Ertl, presented
at EuroForth '94; it is available at
@*@url{}.
@menu
* Gforth locals::
* ANS Forth locals::
@end menu

@node Gforth locals, ANS Forth locals, Locals, Locals
@subsection Gforth locals
@cindex Gforth locals
@cindex locals, Gforth style

@cindex types of locals
@cindex locals types

@cindex flavours of locals
@cindex locals flavours
@cindex value-flavoured locals
@cindex variable-flavoured locals
Gforth currently supports cells (@code{W:}, @code{W^}), doubles

left). E.g., the standard word @code{emit} can be defined in terms of
@code{type} like this:

@example
: emit @{ C^ char* -- @}
    char* 1 type ;
@end example

@cindex default type of locals
@cindex locals, default type

Gforth allows defining locals everywhere in a colon definition. This
poses the following questions:

@menu
* Where are locals visible by name?::
* How long do locals live?::
* Programming Style::
* Implementation::
@end menu

@node Where are locals visible by name?, How long do locals live?, Gforth locals, Gforth locals
@subsubsection Where are locals visible by name?
@cindex locals visibility
@cindex visibility of locals
@cindex scope of locals

rest of this section. If you really must know all the gory details and

doc-unreachable

Another problem with this rule is that at @code{BEGIN}, the compiler

loops).
Perhaps the most insidious example is:
@example
AHEAD
BEGIN
  x
[ 1 CS-ROLL ] THEN
  @{ x @}
@end example

warns the user if it was too optimistic:
@example
IF
  @{ x @}
@end example

@{ x @}

visible after the @code{BEGIN}. However, the user can use
@code{ASSUME-LIVE} to make the compiler assume that the same locals are
visible at the BEGIN as at the point where the top control-flow stack
item was created.

doc-assume-live

E.g.,
@example
@{ x @}

@{ x @}
... 0=
WHILE
  x
REPEAT
@end example

@node How long do locals live?, Programming Style, Where are locals visible by name?, Gforth locals
@subsubsection How long do locals live?
@cindex locals lifetime
@cindex lifetime of locals

@node Programming Style, Implementation, How long do locals live?, Gforth locals
@subsubsection Programming Style
@cindex locals programming style
@cindex programming style, locals

unlikely to become a conscious programming objective. Still, the number
of stack manipulations will be reduced dramatically if local variables
are used liberally (e.g., compare @code{max} in @ref{Gforth locals} with
a traditional implementation of @code{max}).

This shows one potential benefit of locals: making Forth programs more
readable. Of course, this benefit will only be realized if the
programmers continue to honour the principle of factoring instead of
using the added latitude to make the words longer.
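The @code{max} comparison referred to above is elided in this copy of the manual; as a rough sketch (our reconstruction, not the manual's verbatim text), the two styles contrast like this:

@example
\ with Gforth locals: the data flow is named
: max @{ n1 n2 -- n3 @}
    n1 n2 > IF n1 ELSE n2 THEN ;

\ traditional: the data flow is expressed by stack manipulation
: max ( n1 n2 -- n3 )
    2dup < IF swap THEN drop ;
@end example

Both leave the larger of @code{n1} and @code{n2} on the stack; the locals version trades a little locals-stack overhead for readability.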
@cindex single-assignment style for locals

@example
addr1 c@@ addr2 c@@ -
?dup-if
@}
@end example

@example
s1 c@@ s2 c@@ -
?dup-if
  unloop exit
then
s1 char+ s2 char+
loop
2drop
u1 u2 - ;
@end example
Here it is clear from the start that @code{s1} has a different value
in every loop iteration.

@node Implementation, , Programming Style, Gforth locals
@subsubsection Implementation
@cindex locals implementation
@cindex implementation of locals

@cindex locals stack
Gforth uses an extra locals stack. The most compelling reason for

doc-compile-@local
doc-compile-f@local

@cindex wordlist for defining locals
A special feature of Gforth's dictionary is used to implement the
definition of locals without type specifiers: every wordlist (aka
vocabulary) has its own methods for searching
etc. (@pxref{Wordlists}). For the present purpose we defined a wordlist

level at the @var{orig} point to the level after the @code{THEN}. The
first @code{lp+!#} adjusts the locals stack pointer from the current
level to the level at the orig point, so the complete effect is an
adjustment from the current level to the right level after the
@code{THEN}.
@cindex locals information on the control-flow stack
@cindex control-flow stack items, locals information

@node ANS Forth locals, , Gforth locals, Locals
@subsection ANS Forth locals
@cindex locals, ANS Forth style

The ANS Forth locals wordset does not define a syntax for locals, but
words that make it possible to define various syntaxes. One of the
possible syntaxes is a subset of the syntax we used in the Gforth locals

@itemize @bullet
@item
Locals can only be cell-sized values (no type specifiers are allowed).
@item
Locals can be defined only outside control structures.
@item
Locals can interfere with explicit usage of the return stack. For the
exact (and long) rules, see the standard. If you don't use return stack
accessing words in a definition using locals, you will be all right. The
purpose of this rule is to make locals implementation on the return
stack easier.
@item
The whole definition must be in one line.
@end itemize

Locals defined in this way behave like @code{VALUE}s (@xref{Simple
Defining Words}). I.e., they are initialized from the stack. Using their
name produces their value. Their value can be changed using @code{TO}.

Since this syntax is supported by Gforth directly, you need not do
anything to use it. If you want to port a program using this syntax to
another ANS Forth system, use @file{compat/anslocal.fs} to implement the
syntax on the other system.

syntax to make porting to Gforth easy, but do not document it here.
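A definition that stays within the restrictions listed above might look like this (our sketch, not an example from the manual; the name @code{max2} is made up): the locals are declared once, outside any control structure, on a single line, and hold cell-sized values only.

@example
: max2 @{ n1 n2 @}  n1 n2 > IF n1 ELSE n2 THEN ;
3 7 max2 .  \ prints 7
@end example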
@node Defining Words, Tokens for Words, Locals, Words
@section Defining Words
@cindex defining words

@menu
* Simple Defining Words::
* Colon Definitions::
* User-defined Defining Words::
* Supplying names::
* Interpretation and Compilation Semantics::
@end menu

@node Simple Defining Words, Colon Definitions, Defining Words, Defining Words
@subsection Simple Defining Words
@cindex simple defining words
@cindex defining words, simple

@cindex colon definitions

@cindex user-defined defining words
@cindex defining words, user-defined

You can create new defining words simply by wrapping defining-time code
around existing defining words and putting the sequence in a colon
definition.

@cindex @code{CREATE} ... @code{DOES>}
If you want the words defined with your defining words to behave
differently from words defined with standard defining words, you can

@example
: constant ( w "name" -- )
    create ,
DOES> ( -- w )
    @@ ;
@end example

@cindex stack effect of @code{DOES>}-parts
@cindex @code{DOES>}-parts, stack effect

@cindex @code{CREATE} ... @code{DOES>}, applications

You may wonder how to use this feature.
Here are some usage patterns:

@cindex factoring similar colon definitions

@example
: ori, ( reg-target reg-source n -- )
    0 asm-reg-reg-imm ;
: andi, ( reg-target reg-source n -- )
    1 asm-reg-reg-imm ;
@end example

This could be factored with:
@example
: reg-reg-imm ( op-code -- )
    create ,
DOES> ( reg-target reg-source n -- )
    @@ asm-reg-reg-imm ;

0 reg-reg-imm ori,
1 reg-reg-imm andi,
@end example

@cindex currying

@example
    @@ + ;

3 curry+ 3+
-2 curry+ 2-
@end example

@subsubsection The gory details of @code{CREATE..DOES>}
@cindex @code{CREATE} ... @code{DOES>}, details

doc-does>

@cindex @code{DOES>} in a separate definition

@cindex @code{DOES>} in interpretation state

This is equivalent to the standard

@cindex names for defined words
@cindex defining words, name parameter

@cindex defining words, name given in a string

@cindex defining words without name
Sometimes you want to define a word without a name.
You can do this with

doc-noname

@cindex execution token of last defined word

@cindex semantics, interpretation and compilation

@cindex interpretation semantics

@cindex compilation semantics

@cindex execution semantics

@cindex immediate words
You can change the compilation semantics into @code{execute}ing the
execution semantics with

doc-immediate

@cindex compile-only words

doc-interpret/compile:

@cindex state-smart words are a bad idea

@cindex defining words with arbitrary semantics combinations

@node Tokens for Words, Wordlists, Defining Words, Words
@section Tokens for Words
@cindex tokens for words

This section describes the creation and use of tokens that represent
words on the stack (and in data space).

Named words have interpretation and compilation semantics. Unnamed words
just have execution semantics.

@cindex execution token

@cindex code field address
@cindex CFA

@cindex compilation token

You can compile the compilation semantics with @code{postpone,}. I.e.,
@code{COMP' @var{word} POSTPONE,} is equivalent to @code{POSTPONE
@var{word}}.

doc-postpone,

At present, the @var{w} part of a compilation token is an execution
token, and the @var{xt} part represents either @code{execute} or
@code{compile,}.
However, don't rely on that knowledge, unless necessary;
we may introduce unusual compilation tokens in the future (e.g.,
compilation tokens representing the compilation semantics of literals).

@cindex name token
@cindex name field address
@cindex NFA

@node Programming Tools, Assembler and Code words, Other I/O, Words
@section Programming Tools
@cindex programming tools

@menu
* Debugging::  Simple and quick.
* Assertions:: Making your programs self-checking.
@end menu

@node Debugging, Assertions, Programming Tools, Programming Tools
@subsection Debugging
@cindex debugging

The simple debugging aids provided in @file{debugging.fs}
are meant to support a different style of debugging than the
tracing/stepping debuggers used in languages with long turn-around
times.

A much better (faster) way in fast-compiling languages is to add

source level using @kbd{C-x `} (the advantage over a stepping debugger
is that you can step in any direction and you know where the crash has
happened or where the strange data has occurred).

@node Assertions, , Debugging, Programming Tools
@subsection Assertions
@cindex assertions

It is a good idea to make your programs self-checking, in particular, if
you use an assumption (e.g., that a certain field of a data structure is
never zero) that may become wrong during maintenance. Gforth supports
Gforth provides several levels of assertions for

@node Assembler and Code words, Threading Words, Programming Tools, Words
@section Assembler and Code words
@cindex assembler
@cindex code words

Gforth provides some words for defining primitives (words written in
machine code), and for defining the machine-code equivalent of
@code{DOES>}-based defining words. However, the machine-independent
nature of Gforth poses a few problems: First of all, Gforth runs on
several architectures, so it can provide no standard assembler. What's
worse is that the register allocation not only depends on the processor,
but also on the @code{gcc} version and options used.

The words that Gforth offers encapsulate some system dependences (e.g., the

present). You can load them with @code{require code.fs}.

@cindex registers of the inner interpreter

@cindex code words, portable
Another option for implementing normal and defining words efficiently
is: adding the wanted functionality to the source of Gforth. For normal

@node Threading Words, , Assembler and Code words, Words
@section Threading Words
@cindex threading words

@cindex code address
These words provide access to code addresses and other threading stuff
in Gforth (and, possibly, other interpretive Forths). This wordset more
or less abstracts away the differences between direct and indirect
threading (and, for direct threading, the machine dependences). However,
at present this wordset is still incomplete.
It is also pretty low-level;
some day it will hopefully be made unnecessary by an internals wordset
that abstracts implementation details away completely.

doc->code-address
doc->does-code
doc-code-address!
doc-does-code!
doc-does-handler!
doc-/does-handler

The code addresses used by various defining words are produced by the
following words:

doc-docol:
doc-docon:
doc-dovar:
doc-douser:
doc-dodefer:
doc-dofield:

You can recognize words defined by a @code{CREATE}...@code{DOES>} word
with @code{>DOES-CODE}. If the word was defined in that way, the value
returned is different from 0 and identifies the @code{DOES>} used by the
defining word.

@node Tools, ANS conformance, Words, Top
@chapter Tools

@menu
* ANS Report:: Report the words used, sorted by wordset.
@end menu

See also @ref{Emacs and Gforth}.
@node ANS Report, , Tools, Tools
@section @file{ans-report.fs}: Report the words used, sorted by wordset
@cindex @file{ans-report.fs}
@cindex report the words used in your program
@cindex words used in your program

@c ******************************************************************
@node ANS conformance, Model, Tools, Top
@chapter ANS conformance
@cindex ANS conformance of Gforth

To the best of our knowledge, Gforth is an

ANS Forth System
@itemize @bullet
@item providing @code{;CODE}, @code{AHEAD}, @code{ASSEMBLER}, @code{BYE}, @code{CODE}, @code{CS-PICK}, @code{CS-ROLL}, @code{STATE}, @code{[ELSE]}, @code{[IF]}, @code{[THEN]} from the Programming-Tools Extensions word set
@item providing the Search-Order word set
@item providing the Search-Order Extensions word set
@item providing the String word set
@item providing the String Extensions word set (another easy one)
@end itemize

@cindex system documentation

change during the maintenance of Gforth.
@cindex core words, system documentation
@cindex system documentation, core words

@menu
* core-idef::    Implementation Defined Options
* core-ambcond:: Ambiguous Conditions
* core-other::   Other System Documentation
@end menu

@c ---------------------------------------------------------------------
@node core-idef, core-ambcond, The Core Words, The Core Words
@subsection Implementation Defined Options
@c ---------------------------------------------------------------------
@cindex core words, implementation-defined options
@cindex implementation-defined options, core words

@table @i
@item (Cell) aligned addresses:
@cindex cell-aligned addresses
@cindex aligned addresses
processor-dependent. Gforth's alignment words perform natural alignment
(e.g., an address aligned for a datum of size 8 is divisible by
8). Unaligned accesses usually result in a @code{-23 THROW}.

@item @code{EMIT} and non-graphic characters:
@cindex @code{EMIT} and non-graphic characters
@cindex non-graphic characters and @code{EMIT}
The character is output using the C library function (actually, macro)
@code{putc}.

@item character editing of @code{ACCEPT} and @code{EXPECT}:
@cindex character editing of @code{ACCEPT} and @code{EXPECT}
@cindex editing in @code{ACCEPT} and @code{EXPECT}
@cindex @code{ACCEPT}, editing
@cindex @code{EXPECT}, editing

@cindex character set
The character set of your computer and display device. Gforth is
Gforth is 3047: 8-bit-clean (but some other component in your system may make trouble). 3048: 3049: @item Character-aligned address requirements: 1.43 anton 3050: @cindex character-aligned address requirements 1.14 anton 3051: installation-dependent. Currently a character is represented by a C 3052: @code{unsigned char}; in the future we might switch to @code{wchar_t} 3053: (Comments on that requested). 3054: 3055: @item character-set extensions and matching of names: 1.43 anton 3056: @cindex character-set extensions and matching of names 3057: @cindex case sensitivity for name lookup 3058: @cindex name lookup, case sensitivity 3059: @cindex locale and case sensitivity 1.17 anton 3060: Any character except the ASCII NUL charcter can be used in a 1.43 anton 3061: name. Matching is case-insensitive (except in @code{TABLE}s). The 1.36 anton. 1.14 anton 3073: 3074: @item conditions under which control characters match a space delimiter: 1.43 anton 3075: @cindex space delimiters 3076: @cindex control characters as delimiters 1.14 anton: 1.43 anton 3086: @cindex control flow stack, format 1.14 anton 1.43 anton 3096: @cindex digits > 35 1.14 anton 3097: The characters @code{[\]^_'} are the digits with the decimal value 3098: 36@minus{}41. There is no way to input many of the larger digits. 3099: 3100: @item display after input terminates in @code{ACCEPT} and @code{EXPECT}: 1.43 anton 3101: @cindex @code{EXPECT}, display after end of input 3102: @cindex @code{ACCEPT}, display after end of input 1.14 anton 3103: The cursor is moved to the end of the entered string. If the input is 3104: terminated using the @kbd{Return} key, a space is typed. 3105: 3106: @item exception abort sequence of @code{ABORT"}: 1.43 anton 3107: @cindex exception abort sequence of @code{ABORT"} 3108: @cindex @code{ABORT"}, exception abort sequence 1.14 anton 3109: The error string is stored into the variable @code{"error} and a 3110: @code{-2 throw} is performed. 
@item input line terminator:
@cindex input line terminator
@cindex line terminator on input
@cindex newline character on input
For interactive input, @kbd{C-m} (CR) and @kbd{C-j} (LF) terminate
lines. One of these characters is typically produced when you type the
@kbd{Enter} or @kbd{Return} key.

@item maximum size of a counted string:
@cindex maximum size of a counted string
@cindex counted string, maximum size
@code{s" /counted-string" environment? drop .}. Currently 255 characters
on all ports, but this may change.

@item maximum size of a parsed string:
@cindex maximum size of a parsed string
@cindex parsed string, maximum size
Given by the constant @code{/line}. Currently 255 characters.

@item maximum size of a definition name, in characters:
@cindex maximum size of a definition name, in characters
@cindex name, maximum length
31

@item maximum string length for @code{ENVIRONMENT?}, in characters:
@cindex maximum string length for @code{ENVIRONMENT?}, in characters
@cindex @code{ENVIRONMENT?} string length, maximum
31

@item method of selecting the user input device:
@cindex user input device, method of selecting
The user input device is the standard input. There is currently no way to
change it from within Gforth. However, the input can typically be
redirected in the command line that starts Gforth.

@item method of selecting the user output device:
@cindex user output device, method of selecting

@item methods of dictionary compilation:
What are we expected to document here?
@item number of bits in one address unit:
@cindex number of bits in one address unit
@cindex address unit, size in bits
@code{s" address-units-bits" environment? drop .}. 8 in all current
ports.

@item number representation and arithmetic:
@cindex number representation and arithmetic
Processor-dependent. Binary two's complement on all current ports.

@item ranges for integer types:
@cindex ranges for integer types
@cindex integer types, ranges

@cindex read-only data space regions
@cindex data-space, read-only regions
The whole Forth data space is writable.

@item size of buffer at @code{WORD}:
@cindex size of buffer at @code{WORD}
@cindex @code{WORD} buffer size

@item size of one cell in address units:
@cindex cell size
@code{1 cells .}.

@item size of one character in address units:
@cindex char size
@code{1 chars .}. 1 on all current ports.

@item size of the keyboard terminal buffer:
@cindex size of the keyboard terminal buffer
@cindex terminal buffer, size
Varies. You can determine the size at a specific time using @code{lp@@
tib - .}. It is shared with the locals stack and TIBs of files that
include the current file. You can change the amount of space for TIBs
and locals stack at Gforth startup with the command line option
@code{-l}.

@item size of the pictured numeric output buffer:
@cindex size of the pictured numeric output buffer
@cindex pictured numeric output buffer, size
@code{PAD HERE - .}. 104 characters on 32-bit machines. The buffer is
shared with @code{WORD}.
@item size of the scratch area returned by @code{PAD}:
@cindex size of the scratch area returned by @code{PAD}
@cindex @code{PAD} size
The remainder of dictionary space. @code{unused pad here - - .}.

@item system case-sensitivity characteristics:
@cindex case-sensitivity characteristics

@item system prompt:
@cindex system prompt
@cindex prompt
@code{ ok} in interpret state, @code{ compiled} in compile state.

@item division rounding:
@cindex division rounding
installation dependent. @code{s" floored" environment? drop .}. We leave
the choice to @code{gcc} (what to use for @code{/}) and to you (whether
to use @code{fm/mod}, @code{sm/rem} or simply @code{/}).

@item values of @code{STATE} when true:
@cindex @code{STATE} values

typically results in a @code{-55 throw} (Floating-point unidentified
fault), although a @code{-10 throw} (divide by zero) would be more
appropriate.

@item whether the current definition can be found after @t{DOES>}:
@cindex @t{DOES>}, visibility of current definition
No.

@end table

@c ---------------------------------------------------------------------
@node core-ambcond, core-other, core-idef, The Core Words
@subsection Ambiguous conditions
@c ---------------------------------------------------------------------
@cindex core words, ambiguous conditions
@cindex ambiguous conditions, core words

@table @i

@item a name is neither a word nor a number:
@cindex name not found
@cindex Undefined word
@code{-13 throw} (Undefined word). Actually, @code{-13 bounce}, which
preserves the data and FP stack, so you don't lose more work than
necessary.

@item a definition name exceeds the maximum length allowed:
@cindex Word name too long
@code{-19 throw} (Word name too long)

@item addressing a region not inside the various data spaces of the forth system:
@cindex Invalid memory address

@cindex Argument type mismatch

@cindex Interpreting a compile-only word, for @code{'} etc.
@cindex execution token of words with undefined execution semantics
@code{-14 throw} (Interpreting a compile-only word). In some cases, you
get an execution token for @code{compile-only-error} (which performs a
@code{-14 throw} when executed).

@item dividing by zero:
@cindex dividing by zero
@cindex floating point unidentified fault, integer division
@cindex divide by zero
typically results in a @code{-55 throw} (floating point unidentified
fault), although a @code{-10 throw} (divide by zero) would be more
appropriate.

@item insufficient data stack or return stack space:
@cindex insufficient data stack or return stack space

@item insufficient space for loop control parameters:
@cindex insufficient space for loop control parameters
like other return stack overflows.

@item insufficient space in the dictionary:
@cindex insufficient space in the dictionary

@item interpreting a word with undefined interpretation semantics:
@cindex interpreting a word with undefined interpretation semantics
@cindex Interpreting a compile-only word
For some words, we have defined interpretation semantics. For the
others: @code{-14 throw} (Interpreting a compile-only word).
@item modifying the contents of the input buffer or a string literal:
@cindex modifying the contents of the input buffer or a string literal
These are located in writable memory and can be modified.

@item overflow of the pictured numeric output string:
@cindex overflow of the pictured numeric output string
@cindex pictured numeric output string, overflow
Not checked. Runs into the dictionary and destroys it (at least,
partially).

@item parsed string overflow:
@cindex parsed string overflow
@code{PARSE} cannot overflow. @code{WORD} does not check for overflow.

@item producing a result out of range:
@cindex result out of range

@cindex stack empty
@cindex stack underflow
The data stack is checked by the outer (aka text) interpreter after
every word executed. If it has underflowed, a @code{-4 throw} (Stack
underflow) is performed.

@item unexpected end of the input buffer, resulting in an attempt to use a zero-length string as a name:
@cindex unexpected end of the input buffer
@cindex zero-length string as a name
@cindex Attempt to use zero-length string as a name

@cindex @code{>IN} greater than input buffer
The next invocation of a parsing word returns a string with length 0.

@item @code{RECURSE} appears after @code{DOES>}:
@cindex @code{RECURSE} appears after @code{DOES>}
Compiles a recursive call to the defining word, not to the defined word.
@item argument input source different than current input source for @code{RESTORE-INPUT}:
@cindex argument input source different than current input source for @code{RESTORE-INPUT}
@cindex Argument type mismatch, @code{RESTORE-INPUT}
@cindex @code{RESTORE-INPUT}, Argument type mismatch

In the future, Gforth may be able to restore input source specifications
from other than the current input source.

@item data space containing definitions gets de-allocated:
@cindex data space containing definitions gets de-allocated
Deallocation with @code{allot} is not checked. This typically results in
memory access faults or execution of illegal instructions.

@item data space read/write with incorrect alignment:
@cindex data space read/write with incorrect alignment
@cindex alignment faults
@cindex Address alignment exception

@item data space pointer not properly aligned (@code{,}, @code{C,}):
@cindex data space pointer not properly aligned, @code{,}, @code{C,}
Like other alignment errors.

@item less than @var{u}+2 stack items (@code{PICK} and @code{ROLL}):
Like other stack underflows.

@item loop control parameters not available:
@cindex loop control parameters not available
Not checked. The counted loop words simply assume that the top of return
stack items are loop control parameters and behave accordingly.

@item most recent definition does not have a name (@code{IMMEDIATE}):
@cindex most recent definition does not have a name (@code{IMMEDIATE})
@cindex last word was headerless
@code{abort" last word was headerless"}.

@item name not defined by @code{VALUE} used by @code{TO}:
@item name not found (@code{'}, @code{POSTPONE}, @code{[']}, @code{[COMPILE]}):
@cindex name not found (@code{'}, @code{POSTPONE}, @code{[']}, @code{[COMPILE]})
@cindex Undefined word, @code{'}, @code{POSTPONE}, @code{[']}, @code{[COMPILE]}
@code{-13 throw} (Undefined word)

@item parameters are not of the same type (@code{DO}, @code{?DO}, @code{WITHIN}):
@cindex parameters are not of the same type (@code{DO}, @code{?DO}, @code{WITHIN})
Gforth behaves as if they were of the same type. I.e., you can predict
the behaviour by interpreting all parameters as, e.g., signed.

@item @code{POSTPONE} or @code{[COMPILE]} applied to @code{TO}:
@cindex @code{POSTPONE} or @code{[COMPILE]} applied to @code{TO}
Assume @code{: X POSTPONE TO ; IMMEDIATE}. @code{X} performs the
compilation semantics of @code{TO}.

@item String longer than a counted string returned by @code{WORD}:
@cindex String longer than a counted string returned by @code{WORD}
@cindex @code{WORD}, string overflow
Not checked. The string will be ok, but the count will, of course,
contain only the least significant bits of the length.

@item @var{u} greater than or equal to the number of bits in a cell (@code{LSHIFT}, @code{RSHIFT}):
@cindex @code{LSHIFT}, large shift counts
@cindex @code{RSHIFT}, large shift counts
Processor-dependent. Typical behaviours are returning 0 and using only
the low bits of the shift count.

@item word not defined via @code{CREATE}:
@cindex @code{>BODY} of non-@code{CREATE}d words
@code{>BODY} produces the PFA of the word no matter how it was defined.
@cindex @code{DOES>} of non-@code{CREATE}d words

@end table

@c ---------------------------------------------------------------------
@node core-other, , core-ambcond, The Core Words
@subsection Other system documentation
@c ---------------------------------------------------------------------
@cindex other system documentation, core words
@cindex core words, other system documentation

@table @i
@item nonstandard words using @code{PAD}:
@cindex @code{PAD} use by nonstandard words
None.

@item operator's terminal facilities available:
@cindex operator's terminal facilities available
After processing the command line, Gforth goes into interactive mode,
and you can give commands to Gforth interactively. The actual facilities
available depend on how you invoke Gforth.

@item program data space available:
@cindex program data space available
@cindex data space available
@code{UNUSED .} gives the remaining dictionary space. The total
dictionary space can be specified with the @code{-m} switch
(@pxref{Invoking Gforth}) when Gforth starts up.

@item return stack space available:
@cindex return stack space available
You can compute the total return stack space in cells with
@code{s" RETURN-STACK-CELLS" environment? drop .}. You can specify it at
startup time with the @code{-r} switch (@pxref{Invoking Gforth}).

@item stack space available:
@cindex stack space available
You can compute the total data stack space in cells with
@code{s" STACK-CELLS" environment? drop .}. You can specify it at
startup time with the @code{-d} switch (@pxref{Invoking Gforth}).
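The queries above can be combined into a small reporting word (a sketch;
the word name and output formatting are our own choice):

@example
: .resources ( -- )
    ." dictionary: " unused . ." address units unused" cr
    s" STACK-CELLS" environment? if
        ." data stack: " . ." cells" cr then
    s" RETURN-STACK-CELLS" environment? if
        ." return stack: " . ." cells" cr then ;
@end example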
@item system dictionary space required, in address units:
@cindex system dictionary space required, in address units
Type @code{here forthstart - .} after startup. At the time of this
writing, this gives 80080 (bytes) on a 32-bit system.
@end table


@c =====================================================================
@node The optional Block word set, The optional Double Number word set, The Core Words, ANS conformance
@section The optional Block word set
@c =====================================================================
@cindex system documentation, block words
@cindex block words, system documentation

@menu
* block-idef:: Implementation Defined Options
* block-ambcond:: Ambiguous Conditions
* block-other:: Other System Documentation
@end menu


@c ---------------------------------------------------------------------
@node block-idef, block-ambcond, The optional Block word set, The optional Block word set
@subsection Implementation Defined Options
@c ---------------------------------------------------------------------
@cindex implementation-defined options, block words
@cindex block words, implementation-defined options

@table @i
@item the format for display by @code{LIST}:
@cindex @code{LIST} display format
First the screen number is displayed, then 16 lines of 64 characters,
each line preceded by the line number.

@item the length of a line affected by @code{\}:
@cindex length of a line affected by @code{\}
@cindex @code{\}, line length in blocks
64 characters.
@end table


@c ---------------------------------------------------------------------
@node block-ambcond, block-other, block-idef, The optional Block word set
@subsection Ambiguous conditions
@c ---------------------------------------------------------------------
@cindex block words, ambiguous conditions
@cindex ambiguous conditions, block words

@table @i
@item correct block read was not possible:
@cindex block read not possible
Typically results in a @code{throw} of some OS-derived value (between
-512 and -2048). If the blocks file was just not long enough, blanks are
supplied for the missing portion.

@item I/O exception in block transfer:
@cindex I/O exception in block transfer
@cindex block transfer, I/O exception
Typically results in a @code{throw} of some OS-derived value (between
-512 and -2048).

@item invalid block number:
@cindex invalid block number
@cindex block number invalid
@code{-35 throw} (Invalid block number)

@item a program directly alters the contents of @code{BLK}:
@cindex @code{BLK}, altering @code{BLK}

@item no current block buffer (@code{UPDATE}):
@cindex @code{UPDATE}, no current block buffer
@code{UPDATE} has no effect.
@end table

@c ---------------------------------------------------------------------
@node block-other, , block-ambcond, The optional Block word set
@subsection Other system documentation
@c ---------------------------------------------------------------------
@cindex other system documentation, block words
@cindex block words, other system documentation

@c =====================================================================
@node The optional Double Number word set, The optional Exception word set, The optional Block word set, ANS conformance
@section The optional Double Number word set
@c =====================================================================
@cindex system documentation, double words
@cindex double words, system documentation

@menu
* double-ambcond:: Ambiguous Conditions
@end menu


@c ---------------------------------------------------------------------
@node double-ambcond, , The optional Double Number word set, The optional Double Number word set
@subsection Ambiguous conditions
@c ---------------------------------------------------------------------
@cindex double words, ambiguous conditions
@cindex ambiguous conditions, double words

@table @i
@item @var{d} outside of range of @var{n} in @code{D>S}:
@cindex @code{D>S}, @var{d} out of range of @var{n}

@end table

@c =====================================================================
@node The optional Exception word set, The optional Facility word set, The optional Double Number word set, ANS conformance
@section The optional Exception word set
@c =====================================================================
@cindex system documentation, exception words
@cindex exception words, system documentation

@menu
* exception-idef:: Implementation Defined Options
@end menu


@c ---------------------------------------------------------------------
@node exception-idef, , The optional Exception word set, The optional Exception word set
@subsection Implementation Defined Options
@c ---------------------------------------------------------------------
@cindex implementation-defined options, exception words
@cindex exception words, implementation-defined options

@table @i
@item @code{THROW}-codes used in the system:

@end table

@c =====================================================================
@node The optional Facility word set, The optional File-Access word set, The optional Exception word set, ANS conformance
@section The optional Facility word set
@c =====================================================================
@cindex system documentation, facility words
@cindex facility words, system documentation

@menu
* facility-idef:: Implementation Defined Options
* facility-ambcond:: Ambiguous Conditions
@end menu


@c ---------------------------------------------------------------------
@node facility-idef, facility-ambcond, The optional Facility word set, The optional Facility word set
@subsection Implementation Defined Options
@c ---------------------------------------------------------------------
@cindex implementation-defined options, facility words
@cindex facility words, implementation-defined options

@table @i
@item encoding of keyboard events (@code{EKEY}):
@cindex keyboard events, encoding in @code{EKEY}
@cindex @code{EKEY}, encoding of keyboard events
Not yet implemented.

@item duration of a system clock tick:
@cindex duration of a system clock tick
@cindex clock tick duration
System dependent. With respect to @code{MS}, the time is specified in
microseconds. How well the OS and the hardware implement this is
another question.
@item repeatability to be expected from the execution of @code{MS}:
@cindex repeatability to be expected from the execution of @code{MS}
@cindex @code{MS}, repeatability to be expected
System dependent. On Unix, a lot depends on load. If the system is
lightly loaded, and the delay is short enough that Gforth does not get
swapped out, the performance should be acceptable. Under MS-DOS and
other single-tasking systems, it should be good.

@end table


@c ---------------------------------------------------------------------
@node facility-ambcond, , facility-idef, The optional Facility word set
@subsection Ambiguous conditions
@c ---------------------------------------------------------------------
@cindex facility words, ambiguous conditions
@cindex ambiguous conditions, facility words

@table @i
@item @code{AT-XY} can't be performed on user output device:
@cindex @code{AT-XY} can't be performed on user output device
Largely terminal dependent. No range checks are done on the arguments.
@end table

@c =====================================================================
@node The optional File-Access word set, The optional Floating-Point word set, The optional Facility word set, ANS conformance
@section The optional File-Access word set
@c =====================================================================
@cindex system documentation, file words
@cindex file words, system documentation

@menu
* file-idef:: Implementation Defined Options
* file-ambcond:: Ambiguous Conditions
@end menu

@c ---------------------------------------------------------------------
@node file-idef, file-ambcond, The optional File-Access word set, The optional File-Access word set
@subsection Implementation Defined Options
@c ---------------------------------------------------------------------
@cindex implementation-defined options, file words
@cindex file words, implementation-defined options

@table @i
@item file access methods used:
@cindex file access methods used
@code{R/O}, @code{R/W} and @code{BIN} work as you would
expect. @code{W/O} translates into the C file opening mode @code{w} (or
@code{wb}): The file is cleared, if it exists, and created, if it does
not (with both @code{open-file} and @code{create-file}). Under Unix
@code{create-file} creates a file with 666 permissions modified by your
umask.

@item file exceptions:
@cindex file exceptions
The file words do not raise exceptions (except, perhaps, memory access
faults when you pass illegal addresses or file-ids).

@item file line terminator:
@cindex file line terminator
System-dependent. Gforth uses C's newline character as line
terminator. What the actual character code(s) of this are is
system-dependent.

@item file name format:
@cindex file name format
System dependent. Gforth just uses the file name format of your OS.
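As a sketch of the access methods described above (the file name is just
an example):

@example
s" scratch.txt" w/o create-file throw  \ truncates or creates the file
close-file throw
s" scratch.txt" r/o open-file throw    \ reopen it read-only
close-file throw
@end example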
@item information returned by @code{FILE-STATUS}:
@cindex @code{FILE-STATUS}, returned information
@code{FILE-STATUS} returns the most powerful file access mode allowed
for the file: Either @code{R/O}, @code{W/O} or @code{R/W}. If the file
cannot be accessed, @code{R/O BIN} is returned. @code{BIN} is applicable
along with the returned mode.

@item input file state after an exception when including source:
@cindex exception when including source
All files that are left via the exception are closed.

@item @var{ior} values and meaning:
@cindex @var{ior} values and meaning

@item maximum depth of file input nesting:
@cindex maximum depth of file input nesting
@cindex file input nesting, maximum depth
limited by the amount of return stack, locals/TIB stack, and the number
of open files available. This should not give you troubles.

@item maximum size of input line:
@cindex maximum size of input line
@cindex input line size, maximum
@code{/line}. Currently 255.

@item methods of mapping block ranges to files:
@cindex mapping block ranges to files
@cindex files containing blocks
@cindex blocks in files
By default, blocks are accessed in the file @file{blocks.fb} in the
current working directory. The file can be switched with @code{USE}.

@item number of string buffers provided by @code{S"}:
@cindex @code{S"}, number of string buffers
1

@item size of string buffer used by @code{S"}:
@cindex @code{S"}, size of string buffer
@code{/line}. Currently 255.
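Since there is only a single buffer, a second interpreted @code{S"} may
reuse it; a sketch of the consequence:

@example
s" first" s" second"  \ interpreted: both strings may share the buffer
2swap type            \ may no longer print @code{first}
@end example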
@end table

@c ---------------------------------------------------------------------
@node file-ambcond, , file-idef, The optional File-Access word set
@subsection Ambiguous conditions
@c ---------------------------------------------------------------------
@cindex file words, ambiguous conditions
@cindex ambiguous conditions, file words

@table @i
@item attempting to position a file outside its boundaries:
@cindex @code{REPOSITION-FILE}, outside the file's boundaries
@code{REPOSITION-FILE} is performed as usual: Afterwards,
@code{FILE-POSITION} returns the value given to @code{REPOSITION-FILE}.

@item attempting to read from file positions not yet written:
@cindex reading from file positions not yet written
End-of-file, i.e., zero characters are read and no error is reported.

@item @var{file-id} is invalid (@code{INCLUDE-FILE}):
@cindex @code{INCLUDE-FILE}, @var{file-id} is invalid
An appropriate exception may be thrown, but a memory fault or other
problem is more probable.

@item I/O exception reading or closing @var{file-id} (@code{INCLUDE-FILE}, @code{INCLUDED}):
@cindex @code{INCLUDE-FILE}, I/O exception reading or closing @var{file-id}
@cindex @code{INCLUDED}, I/O exception reading or closing @var{file-id}
The @var{ior} produced by the operation that discovered the problem is
thrown.

@item named file cannot be opened (@code{INCLUDED}):
@cindex @code{INCLUDED}, named file cannot be opened
The @var{ior} produced by @code{open-file} is thrown.
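The thrown @var{ior} can be caught like any other exception; a sketch
(the file name is assumed not to exist):

@example
s" no-such-file.fs" ' included catch
. 2drop  \ print the nonzero ior, drop the file-name string below it
@end example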
@item requesting an unmapped block number:
@cindex unmapped block numbers

@item using @code{SOURCE-ID} when @code{BLK} is non-zero:
@cindex @code{SOURCE-ID}, behaviour when @code{BLK} is non-zero

@end table

@c =====================================================================
@node The optional Floating-Point word set, The optional Locals word set, The optional File-Access word set, ANS conformance
@section The optional Floating-Point word set
@c =====================================================================
@cindex system documentation, floating-point words
@cindex floating-point words, system documentation

@menu
* floating-idef:: Implementation Defined Options
* floating-ambcond:: Ambiguous Conditions
@end menu


@c ---------------------------------------------------------------------
@node floating-idef, floating-ambcond, The optional Floating-Point word set, The optional Floating-Point word set
@subsection Implementation Defined Options
@c ---------------------------------------------------------------------
@cindex implementation-defined options, floating-point words
@cindex floating-point words, implementation-defined options

@table @i
@item format and range of floating point numbers:
@cindex format and range of floating point numbers
@cindex floating point numbers, format and range
System-dependent; the @code{double} type of C.

@item results of @code{REPRESENT} when @var{float} is out of range:
@cindex @code{REPRESENT}, results when @var{float} is out of range
System dependent; @code{REPRESENT} is implemented using the C library
function @code{ecvt()} and inherits its behaviour in this respect.
@item rounding or truncation of floating-point numbers:
@cindex rounding of floating-point numbers
@cindex truncation of floating-point numbers
@cindex floating-point numbers, rounding or truncation

@item size of floating-point stack:
@cindex floating-point stack size
@code{s" FLOATING-STACK" environment? drop .} gives the total size of
the floating-point stack (in floats). You can specify this on startup
with the command-line option @code{-f} (@pxref{Invoking Gforth}).

@item width of floating-point stack:
@cindex floating-point stack width
@code{1 floats}.

@end table


@c ---------------------------------------------------------------------
@node floating-ambcond, , floating-idef, The optional Floating-Point word set
@subsection Ambiguous conditions
@c ---------------------------------------------------------------------
@cindex floating-point words, ambiguous conditions
@cindex ambiguous conditions, floating-point words

@table @i
@item @code{df@@} or @code{df!} used with an address that is not double-float aligned:
@cindex @code{df@@} or @code{df!} used with an address that is not double-float aligned
System-dependent. Typically results in a @code{-23 THROW} like other
alignment violations.

@item @code{f@@} or @code{f!} used with an address that is not float aligned:
@cindex @code{f@@} used with an address that is not float aligned
@cindex @code{f!} used with an address that is not float aligned
System-dependent.
Typically results in a @code{-23 THROW} like other
alignment violations.

@item floating-point result out of range:
@cindex floating-point result out of range
System-dependent. Can result in a @code{-55 THROW} (Floating-point
unidentified fault), or can produce a special value representing, e.g.,
Infinity.

@item @code{sf@@} or @code{sf!} used with an address that is not single-float aligned:
@cindex @code{sf@@} or @code{sf!} used with an address that is not single-float aligned
System-dependent. Typically results in an alignment fault like other
alignment violations.

@item @code{BASE} is not decimal (@code{REPRESENT}, @code{F.}, @code{FE.}, @code{FS.}):
@cindex @code{BASE} is not decimal (@code{REPRESENT}, @code{F.}, @code{FE.}, @code{FS.})
The floating-point number is converted into decimal nonetheless.

@item Both arguments are equal to zero (@code{FATAN2}):
@cindex @code{FATAN2}, both arguments are equal to zero
System-dependent. @code{FATAN2} is implemented using the C library
function @code{atan2()}.

@item Using @code{FTAN} on an argument @var{r1} where cos(@var{r1}) is zero:
@cindex @code{FTAN} on an argument @var{r1} where cos(@var{r1}) is zero
System-dependent. Anyway, typically the cos of @var{r1} will not be zero
because of small errors and the tan will be a very large (or very small)
but finite number.

@item @var{d} cannot be represented precisely as a float in @code{D>F}:
@cindex @code{D>F}, @var{d} cannot be represented precisely as a float
The result is rounded to the nearest float.
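For example, with C @code{double}s (53 mantissa bits) a sufficiently
large double-cell integer loses its low bits on conversion (a sketch):

@example
1000000000000000001. d>f f.  \ rounded to the nearest representable float
@end example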
@item dividing by zero:
@cindex dividing by zero, floating-point
@cindex floating-point dividing by zero
@cindex floating-point unidentified fault, FP divide-by-zero
@code{-55 throw} (Floating-point unidentified fault)

@item exponent too big for conversion (@code{DF!}, @code{DF@@}, @code{SF!}, @code{SF@@}):
@cindex exponent too big for conversion (@code{DF!}, @code{DF@@}, @code{SF!}, @code{SF@@})
System dependent. On IEEE-FP based systems the number is converted into
an infinity.

@item @var{float}<1 (@code{FACOSH}):
@cindex @code{FACOSH}, @var{float}<1
@cindex floating-point unidentified fault, @code{FACOSH}
@code{-55 throw} (Floating-point unidentified fault)

@item @var{float}=<-1 (@code{FLNP1}):
@cindex @code{FLNP1}, @var{float}=<-1
@cindex floating-point unidentified fault, @code{FLNP1}
@code{-55 throw} (Floating-point unidentified fault). On IEEE-FP systems
negative infinity is typically produced for @var{float}=-1.

@item @var{float}=<0 (@code{FLN}, @code{FLOG}):
@cindex @code{FLN}, @var{float}=<0
@cindex @code{FLOG}, @var{float}=<0
@cindex floating-point unidentified fault, @code{FLN} or @code{FLOG}
@code{-55 throw} (Floating-point unidentified fault). On IEEE-FP systems
negative infinity is typically produced for @var{float}=0.

@code{-55 throw} (Floating-point unidentified fault). @code{fasinh}
produces values for these inputs on my Linux box (Bug in the C library?)

@code{-55 throw} (Floating-point unidentified fault).
@item integer part of float cannot be represented by @var{d} in @code{F>D}:
@cindex @code{F>D}, integer part of float cannot be represented by @var{d}
@cindex floating-point unidentified fault, @code{F>D}
@code{-55 throw} (Floating-point unidentified fault).

@item string larger than pictured numeric output area (@code{f.}, @code{fe.}, @code{fs.}):
@cindex string larger than pictured numeric output area (@code{f.}, @code{fe.}, @code{fs.})
This does not happen.
@end table

@c =====================================================================
@node The optional Locals word set, The optional Memory-Allocation word set, The optional Floating-Point word set, ANS conformance
@section The optional Locals word set
@c =====================================================================
@cindex system documentation, locals words
@cindex locals words, system documentation

@menu
* locals-idef:: Implementation Defined Options
* locals-ambcond:: Ambiguous Conditions
@end menu


@c ---------------------------------------------------------------------
@node locals-idef, locals-ambcond, The optional Locals word set, The optional Locals word set
@subsection Implementation Defined Options
@c ---------------------------------------------------------------------
@cindex implementation-defined options, locals words
@cindex locals words, implementation-defined options

@table @i
@item maximum number of locals in a definition:
@cindex maximum number of locals in a definition
@cindex locals, maximum number in a definition
@end table


@c ---------------------------------------------------------------------
@node locals-ambcond, , locals-idef, The optional Locals word set
@subsection Ambiguous conditions
@c ---------------------------------------------------------------------
@cindex locals words, ambiguous conditions
@cindex ambiguous conditions, locals words

@table @i
@item executing a named local in interpretation state:

@item @var{name} not defined by @code{VALUE} or @code{(LOCAL)} (@code{TO}):
@cindex name not defined by @code{VALUE} or @code{(LOCAL)} used by @code{TO}
@cindex @code{TO} on non-@code{VALUE}s and non-locals
@cindex Invalid name argument, @code{TO}
@code{-32 throw} (Invalid name argument)

@end table


@c =====================================================================
@node The optional Memory-Allocation word set, The optional Programming-Tools word set, The optional Locals word set, ANS conformance
@section The optional Memory-Allocation word set
@c =====================================================================
@cindex system documentation, memory-allocation words
@cindex memory-allocation words, system documentation

@menu
* memory-idef:: Implementation Defined Options
@end menu


@c ---------------------------------------------------------------------
@node memory-idef, , The optional Memory-Allocation word set, The optional Memory-Allocation word set
@subsection Implementation Defined Options
@c ---------------------------------------------------------------------
@cindex implementation-defined options, memory-allocation words
@cindex memory-allocation words, implementation-defined options

@table @i
@item values and meaning of @var{ior}:
@cindex @var{ior} values and meaning

@end table

@c =====================================================================
@node The optional Programming-Tools word set, The optional Search-Order word set, The optional Memory-Allocation word set, ANS conformance
@section The optional Programming-Tools word set
@c =====================================================================
@cindex system documentation, programming-tools words
@cindex programming-tools words, system documentation

@menu
* programming-idef:: Implementation Defined Options
* programming-ambcond:: Ambiguous Conditions
@end menu


@c ---------------------------------------------------------------------
@node programming-idef, programming-ambcond, The optional Programming-Tools word set, The optional Programming-Tools word set
@subsection Implementation Defined Options
@c ---------------------------------------------------------------------
@cindex implementation-defined options, programming-tools words
@cindex programming-tools words, implementation-defined options

@table @i
@item manner of processing input following @code{;CODE} and @code{CODE}:
the input is processed by the text interpreter, (starting) in interpret
state.

@item search order capability for @code{EDITOR} and @code{ASSEMBLER}:
@cindex @code{ASSEMBLER}, search order capability
The ANS Forth search order word set.
1.15 anton 4151: 4152: @item source and format of display by @code{SEE}: 1.43 anton 4153: @cindex @code{SEE}, source and format of output 1.15 anton 4154: The source for @code{see} is the intermediate code used by the inner 4155: interpreter. The current @code{see} tries to output Forth source code 4156: as well as possible. 4157: 1.14 anton 4158: @end table 4159: 4160: @c --------------------------------------------------------------------- 1.15 anton 4161: @node programming-ambcond, , programming-idef, The optional Programming-Tools word set 1.14 anton 4162: @subsection Ambiguous conditions 4163: @c --------------------------------------------------------------------- 1.43 anton 4164: @cindex programming-tools words, ambiguous conditions 4165: @cindex ambiguous conditions, programming-tools words 1.14 anton 4166: 4167: @table @i 4168: 1.15 anton 4169: @item deleting the compilation wordlist (@code{FORGET}): 1.43 anton 4170: @cindex @code{FORGET}, deleting the compilation wordlist 1.15 anton 4171: Not implemented (yet). 1.14 anton 4172: 1.15 anton 4173: @item fewer than @var{u}+1 items on the control flow stack (@code{CS-PICK}, @code{CS-ROLL}): 1.43 anton 4174: @cindex @code{CS-PICK}, fewer than @var{u}+1 items on the control flow stack 4175: @cindex @code{CS-ROLL}, fewer than @var{u}+1 items on the control flow stack 4176: @cindex control-flow stack underflow 1.15 anton: 1.43 anton 4182: @item @var{name} can't be found (@code{FORGET}): 4183: @cindex @code{FORGET}, @var{name} can't be found 1.15 anton 4184: Not implemented (yet). 1.14 anton 4185: 1.15 anton 4186: @item @var{name} not defined via @code{CREATE}: 1.43 anton 4187: @cindex @code{;CODE}, @var{name} not defined via @code{CREATE} 4188: @code{;CODE} behaves like @code{DOES>} in this respect, i.e., it changes 1.37 anton 4189: the execution semantics of the last defined word no matter how it was 4190: defined. 
1.14 anton 4191: 1.15 anton 4192: @item @code{POSTPONE} applied to @code{[IF]}: 1.43 anton 4193: @cindex @code{POSTPONE} applied to @code{[IF]} 4194: @cindex @code{[IF]} and @code{POSTPONE} 1.15 anton 4195: After defining @code{: X POSTPONE [IF] ; IMMEDIATE}. @code{X} is 4196: equivalent to @code{[IF]}. 1.14 anton 4197: 1.15 anton 4198: @item reaching the end of the input source before matching @code{[ELSE]} or @code{[THEN]}: 1.43 anton 4199: @cindex @code{[IF]}, end of the input source before matching @code{[ELSE]} or @code{[THEN]} 1.15 anton 4200: Continue in the same state of conditional compilation in the next outer 4201: input source. Currently there is no warning to the user about this. 1.14 anton 4202: 1.15 anton 4203: @item removing a needed definition (@code{FORGET}): 1.43 anton 4204: @cindex @code{FORGET}, removing a needed definition 1.15 anton 4205: Not implemented (yet). 1.14 anton 4206: 4207: @end table 4208: 4209: 4210: @c ===================================================================== 1.15 anton 4211: @node The optional Search-Order word set, , The optional Programming-Tools word set, ANS conformance 4212: @section The optional Search-Order word set 1.14 anton 4213: @c ===================================================================== 1.43 anton 4214: @cindex system documentation, search-order words 4215: @cindex search-order words, system documentation 1.14 anton 4216: 4217: @menu 1.15 anton 4218: * search-idef:: Implementation Defined Options 4219: * search-ambcond:: Ambiguous Conditions 1.14 anton 4220: @end menu 4221: 4222: 4223: @c --------------------------------------------------------------------- 1.15 anton 4224: @node search-idef, search-ambcond, The optional Search-Order word set, The optional Search-Order word set 1.14 anton 4225: @subsection Implementation Defined Options 4226: @c --------------------------------------------------------------------- 1.43 anton 4227: @cindex implementation-defined options, search-order words 
4228: @cindex search-order words, implementation-defined options 1.14 anton 4229: 4230: @table @i 1.15 anton 4231: @item maximum number of word lists in search order: 1.43 anton 4232: @cindex maximum number of word lists in search order 4233: @cindex search order, maximum depth 1.15 anton 4234: @code{s" wordlists" environment? drop .}. Currently 16. 4235: 4236: @item minimum search order: 1.43 anton 4237: @cindex minimum search order 4238: @cindex search order, minimum 1.15 anton 4239: @code{root root}. 1.14 anton 4240: 4241: @end table 4242: 4243: @c --------------------------------------------------------------------- 1.15 anton 4244: @node search-ambcond, , search-idef, The optional Search-Order word set 1.14 anton 4245: @subsection Ambiguous conditions 4246: @c --------------------------------------------------------------------- 1.43 anton 4247: @cindex search-order words, ambiguous conditions 4248: @cindex ambiguous conditions, search-order words 1.14 anton 4249: 4250: @table @i 1.15 anton 4251: @item changing the compilation wordlist (during compilation): 1.43 anton 4252: @cindex changing the compilation wordlist (during compilation) 4253: @cindex compilation wordlist, change before definition ends 1.33 anton. 1.14 anton 4259: 1.15 anton 4260: @item search order empty (@code{previous}): 1.43 anton 4261: @cindex @code{previous}, search order empty 4262: @cindex Vocstack empty, @code{previous} 1.15 anton 4263: @code{abort" Vocstack empty"}. 1.14 anton 4264: 1.15 anton 4265: @item too many word lists in search order (@code{also}): 1.43 anton 4266: @cindex @code{also}, too many word lists in search order 4267: @cindex Vocstack full, @code{also} 1.15 anton 4268: @code{abort" Vocstack full"}. 1.14 anton 4269: 4270: @end table 1.13 anton 4271: 1.43 anton 4272: @c *************************************************************** 1.34 anton 4273: @node Model, Integrating Gforth, ANS conformance, Top 4274: @chapter Model 4275: 4276: This chapter has yet to be written. 
It will contain information, on 4277: which internal structures you can rely. 4278: 1.43 anton 4279: @c *************************************************************** 1.34 anton 1.36 anton 4290: importantly, it is not based on ANS Forth, and it is apparently dead 1.34 anton 1.36 anton 4307: variables of the interface with @code{#include <forth.h>}. 1.34 anton 4308: 4309: Types. 1.13 anton 4310: 1.34 anton 1.4 anton 4329: 1.43 anton 4330: @node Emacs and Gforth, Image Files, Integrating Gforth, Top 1.17 anton 4331: @chapter Emacs and Gforth 1.43 anton 4332: @cindex Emacs and Gforth 1.4 anton 4333: 1.43 anton 1.17 anton 4342: Gforth comes with @file{gforth.el}, an improved version of 1.33 anton 4343: @file{forth.el} by Goran Rydqvist (included in the TILE package). The 1.4 anton 4344: improvements are a better (but still not perfect) handling of 4345: indentation. I have also added comment paragraph filling (@kbd{M-q}), 1.8 anton}. 1.4 anton 4351: 1.43 anton 4352: @cindex source location of error or debugging output in Emacs 4353: @cindex error output, finding the source location in Emacs 4354: @cindex debugging output, finding the source location in Emacs 1.17 anton 4355: In addition, Gforth supports Emacs quite well: The source code locations 1.4 anton: 1.43 anton 4363: @cindex @file{TAGS} file 4364: @cindex @file{etags.fs} 4365: @cindex viewing the source of a word in Emacs 1.4 anton 1.17 anton 4370: several tags files at the same time (e.g., one for the Gforth sources 1.28 anton 4371: and one for your program, @pxref{Select Tags Table,,Selecting a Tags 4372: Table,emacs, Emacs Manual}). The TAGS file for the preloaded words is 4373: @file{$(datadir)/gforth/$(VERSION)/TAGS} (e.g., 1.33 anton 4374: @file{/usr/local/share/gforth/0.2.0/TAGS}). 1.4 anton 4375: 1.43 anton 4376: @cindex @file{.emacs} 1.4 anton: 1.43 anton. 1.45 anton 4401: * Fully Relocatable Image Files:: better yet. 1.43 anton 1.45 anton 4412: definitions written in Forth. 
Since the Forth compiler itself belongs to 4413: those definitions, it is not possible to start the system with the 1.43 anton 4414: primitives and the Forth source alone. Therefore we provide the Forth 4415: code as an image file in nearly executable form. At the start of the 1.45 anton 4416: system a C routine loads the image file into memory, optionally 4417: relocates the addresses, then sets up the memory (stacks etc.) according 4418: to information in the image file, and starts executing Forth code. 1.43 anton 1.45 anton 4431: By contrast, our loader performs relocation at image load time. The 4432: loader also has to replace tokens standing for primitive calls with the 4433: appropriate code-field addresses (or code addresses in the case of 4434: direct threading). 1.43 anton 1.44 anton 4453: The only kinds of relocation supported are: adding the same offset to 4454: all cells that represent data addresses; and replacing special tokens 4455: with code addresses or with pieces of machine code. 1.43 anton).} 1.44 anton 1.45 anton 4480: executions tokens of appropriate words (see the definitions of 1.44 anton 4481: @code{docol:} and friends in @file{kernel.fs}). 1.43 anton., 1.45 anton 1.43 anton 4528: 1.45 anton} 1.43 anton 4548: @cindex @file{comp-image.fs} 1.45 anton 4549: @cindex @file{gforth-makeimage} 1.43 anton 4550: 1.45 anton 4551: You will usually use @file{gforth-makeimage}. If you want to create an 4552: image @var{file} that contains everything you would load by invoking 4553: Gforth with @code{gforth @var{options}}, you simply say 1.43 anton 4554: @example 1.45 anton 4555: gforth-makeimage @var{file} @var{options} 1.43 anton 1.45 anton 4563: gforth-makeimage asm.fi asm.fs 1.43 anton 4564: @end example 4565: 1.45 anton: 1.43 anton 4574: 1.45 anton 4575: @example 4576: 78DC BFFFFA50 BFFFFA40 4577: @end example 1.43 anton 4578: 1.45 anton). 1.43 anton 4584: 1.48 ! 
anton} 1.45 anton} 1.43 anton 4603: @cindex cross-compiler 4604: @cindex metacompiler 1.45 anton 4605: 4606: You can also use @code{cross}, a batch compiler that accepts a Forth-like 4607: programming language. This @code{cross} language has to be documented 1.43 anton 4608: yet. 4609: 4610: @cindex target compiler 1.45 anton: 1.43 anton 1.45 anton: 1.43 anton 4632: 4633: @example 1.45 anton 4634: gforth-makeimage gforth.fi -m 1M 1.43 anton 4635: @end example 4636: 4637: In other words, if you want to set the default size for the dictionary 1.45 anton 4638: and the stacks of an image, just invoke @file{gforth-makeimage} with the 4639: appropriate options when creating the image. 1.43 anton 1.48 ! anton}}. 1.43 anton 4668: 1.48 ! anton 4669: doc-#! 1.43 anton 1.45 anton 4683: processing (by default, loading files and evaluating (@code{-e}) strings) 1.43 anton: 1.48 ! anton: 1.43 anton 4715: @c ****************************************************************** 4716: @node Engine, Bugs, Image Files, Top 4717: @chapter Engine 4718: @cindex engine 4719: @cindex virtual machine 1.3 anton 4720: 1.17 anton 4721: Reading this section is not necessary for programming with Gforth. It 1.43 anton 4722: may be helpful for finding your way in the Gforth sources. 1.3 anton 4723: 1.24 anton 1.48 ! anton 4728: @*@url{}. 1.24 anton 4729: 1.4 anton 4730: @menu 4731: * Portability:: 4732: * Threading:: 4733: * Primitives:: 1.17 anton 4734: * Performance:: 1.4 anton 4735: @end menu 4736: 1.43 anton 4737: @node Portability, Threading, Engine, Engine 1.3 anton 4738: @section Portability 1.43 anton 4739: @cindex engine portability 1.3 anton: 1.43 anton 4747: @cindex C, using C for the engine 1.3 anton 1.43 anton 4755: significantly slower. Another problem with C is that it is very 1.3 anton 4756: cumbersome to express double integer arithmetic. 
4757: 1.43 anton 4758: @cindex GNU C for the engine 4759: @cindex long long 1.3 anton, , 1.33 anton 4766: Double-Word Integers, gcc.info, GNU C Manual}) corresponds to Forth's 1.32 anton. 1.3 anton 4777: 4778: Writing in a portable language has the reputation of producing code that 4779: is slower than assembly. For our Forth engine we repeatedly looked at 4780: the code produced by the compiler and eliminated most compiler-induced 1.43 anton 4781: inefficiencies by appropriate changes in the source code. 1.3 anton 4782: 1.43 anton 4783: @cindex explicit register declarations 4784: @cindex --enable-force-reg, configuration flag 4785: @cindex -DFORCE_REG 1.3 anton 1.43 anton. 1.3 anton 4796: 1.43 anton 4797: @node Threading, Primitives, Portability, Engine 1.3 anton 4798: @section Threading 1.43 anton 4799: @cindex inner interpreter implementation 4800: @cindex threaded code implementation 1.3 anton 4801: 1.43 anton 4802: @cindex labels as values 1.3 anton: 1.43 anton 4810: @cindex NEXT, indirect threaded 4811: @cindex indirect threaded inner interpreter 4812: @cindex inner interpreter, indirect threaded 1.3 anton 4813: With this feature an indirect threaded NEXT looks like: 4814: @example 4815: cfa = *ip++; 4816: ca = *cfa; 4817: goto *ca; 4818: @end example 1.43 anton 4819: @cindex instruction pointer 1.3 anton: 1.43 anton 4827: @cindex NEXT, direct threaded 4828: @cindex direct threaded inner interpreter 4829: @cindex inner interpreter, direct threaded 1.3 anton: 1.4 anton 4839: @menu 4840: * Scheduling:: 4841: * Direct or Indirect Threaded?:: 4842: * DOES>:: 4843: @end menu 4844: 4845: @node Scheduling, Direct or Indirect Threaded?, Threading, Threading 1.3 anton 4846: @subsection Scheduling 1.43 anton 4847: @cindex inner interpreter optimization 1.3 anton 1.4 anton: 1.3 anton 4872: @example 4873: n=sp[0]+sp[1]; 4874: sp++; 4875: NEXT_P1; 4876: sp[0]=n; 4877: NEXT_P2; 4878: @end example 1.4 anton 4879: This can be scheduled optimally by the compiler. 
1.3 anton 4880: 4881: This division can be turned off with the switch @code{-DCISC_NEXT}. This 4882: switch is on by default on machines that do not profit from scheduling 4883: (e.g., the 80386), in order to preserve registers. 4884: 1.4 anton 4885: @node Direct or Indirect Threaded?, DOES>, Scheduling, Threading 1.3 anton 4886: @subsection Direct or Indirect Threaded? 1.43 anton 4887: @cindex threading, direct or indirect? 1.3 anton 4888: 1.43 anton 4889: @cindex -DDIRECT_THREADED 1.3 anton: 1.43 anton. 1.3 anton 4910: 1.4 anton 4911: @node DOES>, , Direct or Indirect Threaded?, Threading 1.3 anton 4912: @subsection DOES> 1.43 anton 4913: @cindex @code{DOES>} implementation 4914: 4915: @cindex dodoes routine 4916: @cindex DOES-code 1.3 anton 4917: One of the most complex parts of a Forth engine is @code{dodoes}, i.e., 4918: the chunk of code executed by every word defined by a 4919: @code{CREATE}...@code{DOES>} pair. The main problem here is: How to find 1.43 anton 4920: the Forth code to be executed, i.e. the code after the 4921: @code{DOES>} (the DOES-code)? There are two solutions: 1.3 anton 4922: 4923: In fig-Forth the code field points directly to the dodoes and the 1.43 anton 4924: DOES-code address is stored in the cell after the code address (i.e. at 4925: @code{@var{cfa} cell+}). It may seem that this solution is illegal in 4926: the Forth-79 and all later standards, because in fig-Forth this address 1.3 anton 4927: lies in the body (which is illegal in these standards). However, by 4928: making the code field larger for all words this solution becomes legal 1.43 anton). 1.3 anton 4936: 1.43 anton 4937: @cindex DOES-handler 1.3 anton 4938: The other approach is that the code field points or jumps to the cell 4939: after @code{DOES}. In this variant there is a jump to @code{dodoes} at 1.43 anton. 
1.3 anton 4949: 1.43 anton 4950: @node Primitives, Performance, Threading, Engine 1.3 anton 4951: @section Primitives 1.43 anton 4952: @cindex primitives, implementation 4953: @cindex virtual machine instructions, implementation 1.3 anton 4954: 1.4 anton 4955: @menu 4956: * Automatic Generation:: 4957: * TOS Optimization:: 4958: * Produced code:: 4959: @end menu 4960: 4961: @node Automatic Generation, TOS Optimization, Primitives, Primitives 1.3 anton 4962: @subsection Automatic Generation 1.43 anton 4963: @cindex primitives, automatic generation 1.3 anton 4964: 1.43 anton 4965: @cindex @file{prims2x.fs} 1.3 anton 4966: Since the primitives are implemented in a portable language, there is no 4967: longer any need to minimize the number of primitives. On the contrary, 1.43 anton 4968: having many primitives has an advantage: speed. In order to reduce the 1.3 anton: 1.43 anton 4975: @cindex primitive source format 1.3 anton */ 1.4 anton 5006: @{ 1.3 anton) */ 1.4 anton 5015: @{ 1.3 anton 5016: n = n1+n2; /* C code taken from the source */ 1.4 anton 5017: @} 1.3 anton 5018: NEXT_P1; /* NEXT part 1 */ 5019: TOS = (Cell)n; /* output */ 5020: NEXT_P2; /* NEXT part 2 */ 1.4 anton 5021: @} 1.3 anton: 1.4 anton 5043: @node TOS Optimization, Produced code, Automatic Generation, Primitives 1.3 anton 5044: @subsection TOS Optimization 1.43 anton 5045: @cindex TOS optimization for primitives 5046: @cindex primitives, keeping the TOS in a register 1.3 anton 5047: 5048: An important optimization for stack machine emulators, e.g., Forth 5049: engines, is keeping one or more of the top stack items in 1.4 anton 5050: registers. 
If a word has the stack effect @var{in1}...@var{inx} @code{--} 5051: @var{out1}...@var{outy}, keeping the top @var{n} items in registers 1.34 anton 5052: @itemize @bullet 1.3 anton: 1.43 anton 5060: @cindex -DUSE_TOS 5061: @cindex -DUSE_NO_TOS 1.3 anton: 1.43 anton 5074: @cindex -DUSE_FTOS 5075: @cindex -DUSE_NO_FTOS 1.3 anton: 1.34 anton 5090: @itemize @bullet 1.3 anton 5091: @item In the case of @code{dup ( w -- w w )} the generator must not 5092: eliminate the store to the original location of the item on the stack, 5093: if the TOS optimization is turned on. 1.4 anton 5094: @item Primitives with stack effects of the form @code{--} 5095: @var{out1}...@var{outy} must store the TOS to the stack at the start. 5096: Likewise, primitives with the stack effect @var{in1}...@var{inx} @code{--} 1.3 anton 5097: must load the TOS from the stack at the end. But for the null stack 5098: effect @code{--} no stores or loads should be generated. 5099: @end itemize 5100: 1.4 anton 5101: @node Produced code, , TOS Optimization, Primitives 1.3 anton 5102: @subsection Produced code 1.43 anton 5103: @cindex primitives, assembly code listing 1.3 anton 5104: 1.43 anton 5105: @cindex @file{engine.s} 1.3 anton 5106: To see what assembly code is produced for the primitives on your machine 5107: with your compiler and your flag settings, type @code{make engine.s} and 1.4 anton 5108: look at the resulting file @file{engine.s}. 
1.3 anton 5109: 1.43 anton 5110: @node Performance, , Primitives, Engine 1.17 anton 5111: @section Performance 1.43 anton 5112: @cindex performance of some Forth interpreters 5113: @cindex engine performance 5114: @cindex benchmarking Forth systems 5115: @cindex Gforth performance 1.17 anton: 1.43 anton 5128: @cindex Win32Forth performance 5129: @cindex NT Forth performance 5130: @cindex eforth performance 5131: @cindex ThisForth performance 5132: @cindex PFE performance 5133: @cindex TILE performance 1.17 anton 5134: However, this potential advantage of assembly language implementations 5135: is not necessarily realized in complete Forth systems: We compared 1.26 anton 1.30 anton 5140: language. We also compared Gforth with three systems written in C: 1.32 anton. 1.17 anton 5152: 5153: We used four small benchmarks: the ubiquitous Sieve; bubble-sorting and 5154: matrix multiplication come from the Stanford integer benchmarks and have 5155: been translated into Forth by Martin Fraeman; we used the versions 1.30 anton). 1.17 anton 5161: 5162: @example 1.30 anton 5163: relative Win32- NT eforth This- 5164: time Gforth Forth Forth eforth +opt PFE Forth TILE 1.32 anton 5165: sieve 1.00 1.39 1.14 1.39 0.85 1.58 3.18 8.58 5166: bubble 1.00 1.31 1.41 1.48 0.88 1.50 3.88 1.38 anton 5167: matmul 1.00 1.47 1.35 1.46 0.74 1.58 4.09 5168: fib 1.00 1.52 1.34 1.22 0.86 1.74 2.99 4.30 1.17 anton 1.43 anton 5178: per NEXT (@pxref{Image File Background}). 1.17 anton 5179: 1.26 anton: 1.30 anton 5185: The speedup of Gforth over PFE, ThisForth and TILE can be easily 1.43 anton. 1.17 anton: 1.43 anton} 1.26 anton 1.46 anton 5208: version of Gforth is 2%@minus{}8% slower on a 486 than the direct 5209: threaded version used here. The paper available at 1.48 ! anton 5210: @*@url{}; 1.43 anton 5211: it also contains numbers for some native code systems. You can find a 5212: newer version of these measurements at 1.48 ! anton 5213: @url{}. 
You can 1.43 anton 5214: find numbers for Gforth on various machines in @file{Benchres}. 1.24 anton 5215: 1.43 anton 5216: @node Bugs, Origin, Engine, Top 1.4 anton 5217: @chapter Bugs 1.43 anton 5218: @cindex bug reporting 1.4 anton 5219: 1.17 anton 5220: Known bugs are described in the file BUGS in the Gforth distribution. 5221: 1.24 anton 5222: If you find a bug, please send a bug report to 1.48 ! anton 5223: @email{bug-gforth@@gnu.ai.mit.edu}. A bug report should 1.17 anton 5224: describe the Gforth version used (it is announced at the start of an 5225: interactive Gforth session), the machine and operating system (on Unix 5226: systems you can use @code{uname -a} to produce this information), the 1.43 anton 5227: installation options (send the @file{config.status} file), and a 1.24 anton. 1.17 anton 5232: 5233: For a thorough guide on reporting bugs read @ref{Bug Reporting, , How 5234: to Report Bugs, gcc.info, GNU C Manual}. 5235: 5236: 1.29 anton 5237: @node Origin, Word Index, Bugs, Top 5238: @chapter Authors and Ancestors of Gforth 5239: 5240: @section Authors and Contributors 1.43 anton 5241: @cindex authors of Gforth 5242: @cindex contributors to Gforth 1.29 anton 5243: 5244: The Gforth project was started in mid-1992 by Bernd Paysan and Anton 1.30 1.29 anton 5248: @file{glosgen.fs}, while Stuart Ramsden has been working on automatic 5249: support for calling C libraries. Helpful comments also came from Paul 1.37 anton 5250: Kleinrubatscher, Christian Pirker, Dirk Zoller, Marcel Hendrix, John 1.39 anton 5251: Wavrik, Barrie Stott and Marc de Groot. 1.29 anton 5252: 1.30 anton: 1.29 anton 5258: @section Pedigree 1.43 anton 5259: @cindex Pedigree of Gforth 1.4 anton 5260: 1.17 anton 5261: Gforth descends from BigForth (1993) and fig-Forth. Gforth and PFE (by 1.24 anton 5262: Dirk Zoller) will cross-fertilize each other. Of course, a significant 5263: part of the design of Gforth was prescribed by ANS Forth. 
1.17 anton 5264: 1.23 pazsan 1.24 anton 5271: UltraForth there) in the mid-80s and ported to the Atari ST in 1986. 1.17 anton 5272: 1.34 anton 5273: Henry Laxen and Mike Perry wrote F83 as a model implementation of the 1.17 anton 5274: Forth-83 standard. !! Pedigree? When? 5275: 5276: A team led by Bill Ragsdale implemented fig-Forth on many processors in 1.24 anton 5277: 1979. Robert Selzer and Bill Ragsdale developed the original 5278: implementation of fig-Forth for the 6502 based on microForth. 5279: 5280: The principal architect of microForth was Dean Sanderson. microForth was 1.41 anton 5281: FORTH, Inc.'s first off-the-shelf product. It was developed in 1976 for 1.24 anton 5282: the 1802, and subsequently implemented on the 8080, the 6800 and the 5283: Z80. 1.17 anton 5284: 1.24 anton 5285: All earlier Forth systems were custom-made, usually by Charles Moore, 1.30 anton 5286: who discovered (as he puts it) Forth during the late 60s. The first full 5287: Forth existed in 1971. 1.17 anton: 1.43 anton 5295: @node Word Index, Concept Index, Origin, Top 5296: @unnumbered Word Index 1.4 anton 5297: 1.18 anton 5298: This index is as incomplete as the manual. Each word is listed with 5299: stack effect and wordset. 1.17 anton 5300: 5301: @printindex fn 5302: 1.43 anton. 1.17 anton 5309: 1.43 anton 5310: @printindex cp 1.1 anton 5311: 5312: @contents 5313: @bye 5314: | http://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/Attic/gforth.ds?annotate=1.48;sortby=log;f=h;only_with_tag=MAIN | CC-MAIN-2020-45 | en | refinedweb |
generates SurfaceChanged events which are propagated through the specified callback. If no callback is specified, the system will throw an ArgumentNullException. Generated callbacks are synchronous with this call. Scenes containing multiple SurfaceObservers should consider using different callbacks so that events can be properly routed.
Update is a very expensive call. Its expense scales with the number of observable Surfaces. Thus, constraining your observation volume can help improve performance if you expect to call Update regularly. The engine provides the SpatialMapping.ObserverUpdate profiling tag to allow you to monitor spatial mapping performance.
using UnityEditor;
using UnityEngine;
using UnityEngine.XR.WSA;
using System;

public class ExampleScript : MonoBehaviour
{
    public SurfaceObserver m_Observer;

    void UpdateSurfaceObserver()
    {
        // Update your surface observer to generate onSurfaceChanged callbacks
        m_Observer.Update(SurfaceChangedHandler);
        // all Update callbacks are now complete
    }

    void SurfaceChangedHandler(SurfaceId id, SurfaceChange changeType, Bounds bounds, DateTime updateTime)
    {
        switch (changeType)
        {
            case SurfaceChange.Added:
                // handle Surface adds here
                break;
            case SurfaceChange.Updated:
                // handle Surface updates here
                break;
            case SurfaceChange.Removed:
                // handle Surface removal here
                break;
        }
    }
}
I recently started using Arduino to make my projects. As a designer I love making custom interfaces for my games/interactive projects.
The one problem I came across is that serial communication is quite complicated and prone to bugs, and I wanted a quick and easy solution that would let me use external buttons to control my games.
As I wanted a plug-and-play device that you could use instantly with any computer, I bought an Arduino Leonardo. It's almost identical to an Uno, but with a few differences. The main difference that I'll be using to my advantage for this project is its ability to act as a HID. A HID, or human interface device, is a USB device class that allows your computer to recognize and accept input from keyboards and mice without having to install custom drivers for each device.

Note: you can also use an Uno if you update its firmware, as shown here.
Step 1: Materials
For this project you will need:
1x HID-capable microcontroller (there are a few, like the Arduino Micro, Due & Leonardo; I'll be using the Arduino Leonardo)

1x USB cable to connect the Arduino (for the Leonardo it's USB micro)
3x Arcade buttons (I bought these)
1x solderless breadboard
3x 10k ohm resistors
3x 220 ohm resistors
Jumper wires
You can of course add more buttons, or solder everything to a breadboard to make things more permanent.
Step 2: Prototyping
So, before I bought the arcade buttons I wanted to use, I tested this out with standard push buttons. Wire up the buttons in the standard way; I believe I used 10K ohm resistors.
The programming, thanks to the Leonardo, is pretty simple. You have to include the Keyboard library. I used the Arduino standard example "Keyboard message" as the base for my code.
Now the question is how you want your buttons to work. You basically have two choices: single button presses, or a continuous stream of letters while the button is held. Which one you want really depends on your project.

If you want a single instance of something to happen when a key is pressed, like a jump or an on/off switch, then you would choose the single-push method. With this method you look at the state of the button: is it up or down? Then you compare it to the previous state: was it already up or down? If the previous button state is the same as the current button state, nothing happens. But if the button state changes, as in you press or release the button, something happens. In my code it only types a letter when the button is pushed, not when released, but you could change this.
#include "Keyboard.h"

const int buttonLeft = A0;   // input pin for pushbutton
const int buttonRight = A1;
const int buttonUp = A2;

int previousButtonStateLeft = HIGH;   // for checking the state of a pushButton
int previousButtonStateRight = HIGH;
int previousButtonStateUp = HIGH;

void setup() {
  // make the pushButton pins inputs:
  pinMode(buttonLeft, INPUT);
  pinMode(buttonRight, INPUT);
  pinMode(buttonUp, INPUT);
  // initialize control over the keyboard:
  Keyboard.begin();
}

void loop() {
  // read the pushbutton:
  int buttonStateLeft = digitalRead(buttonLeft);
  // if the button state has changed,
  if ((buttonStateLeft != previousButtonStateLeft)
      // and it's currently pressed:
      && (buttonStateLeft == HIGH)) {
    // type out a message
    Keyboard.print("a");
  }
  // save the current button state for comparison next time:
  previousButtonStateLeft = buttonStateLeft;

  // read the pushbutton:
  int buttonStateRight = digitalRead(buttonRight);
  if ((buttonStateRight != previousButtonStateRight)
      && (buttonStateRight == HIGH)) {
    Keyboard.print("w");
  }
  previousButtonStateRight = buttonStateRight;

  // read the pushbutton:
  int buttonStateUp = digitalRead(buttonUp);
  if ((buttonStateUp != previousButtonStateUp)
      && (buttonStateUp == HIGH)) {
    Keyboard.print("d");
  }
  previousButtonStateUp = buttonStateUp;
}
If you want something to happen continuously as long as the button is held, as you would want for left or right movement, just let it write a letter without checking the previous button state. Do remember to add a small delay to keep it from flooding your computer with keystrokes and to counter any bounce your buttons may have. There are more elegant ways of solving this problem, but this one is easy and quick.
#include "Keyboard.h"

const int buttonLeft = A0;   // input pin for pushbutton
const int buttonRight = A1;
const int buttonUp = A2;

void setup() {
  // make the pushButton pins inputs:
  pinMode(buttonLeft, INPUT);
  pinMode(buttonRight, INPUT);
  pinMode(buttonUp, INPUT);
  // initialize control over the keyboard:
  Keyboard.begin();
}

void loop() {
  // read the pushbutton:
  int buttonStateLeft = digitalRead(buttonLeft);
  if (buttonStateLeft == HIGH) {  // if the button is pressed
    // type out a message
    Keyboard.print("a");
    delay(50);  // delay for bounce & to let your computer catch up
  }

  // read the pushbutton:
  int buttonStateRight = digitalRead(buttonRight);
  if (buttonStateRight == HIGH) {
    Keyboard.print("w");
    delay(50);
  }

  // read the pushbutton:
  int buttonStateUp = digitalRead(buttonUp);
  if (buttonStateUp == HIGH) {
    Keyboard.print("d");
    delay(50);
  }
}
You can always use a mix of both methods, depending on what best suits your needs.
Step 3: Laser Cutting the Case
For the case I used 3 mm mdf, with a 2mm Plexiglas insert. I added the insert as I want to add some LEDs on the inside of the case at a later stage to make it nice and glowy.
I inputted my dimensions to makercase and downloaded the svg file. I opened it up in Illustrator and added the holes where I wanted them. If you don't have Illustrator you could use Inkscape for this step.
You don't need to use a laser cutter of course, as this is a simple box with a few holes in it. It should be easy enough to create it using more traditional power tools (or even hand tools!) I'm just very lazy and had access to a laser cutter.
Step 4: Soldering Arcade Buttons
An arcade button (or mine at least) is comprised of three parts. The plastic casing, the LED holder (with LED in it) and the micro switch. The micro switch is the actual button part of the button and is what you will need to connect to your Arduino. There are three terminals (metal bits that stick out, where you'll solder your wires) on the micro switch. The one on top (or bottom, what you want) is the ground. The other two terminals are the Normal Open (NO) and Normal Closed (NC). NO means that if the switch is pressed it makes a connection. NC means that if the button is pressed it breaks the connection. We will use the NO for this project. I labeled the ground, NO and NC on my micro switch in the pictures.
My buttons are illuminated so I soldered wires to the LED holder. Make sure to color code your wires so that you know which side is the anode and which the cathode (positive and negative sides of the LED).
I soldered header pins onto my wires, to make them easy to use with a solderless breadboard. I just soldered the wire to a header pin and put a bit of heat shrink tubing around to make them more resilient.
Step 5: Stack the Buttons & Connect Them to Your Board
Now it's time to stack your arcade buttons in your case. Remove the locking ring from the plastic casing and stick it through the hole in the case. Thread the locking ring on the other side to secure the button in place. Stick in the LED holder and twist it to lock it in to place. Wiggle in the micro switches (there are little nobs and holes that align with each other to hold it into place).
To connect the switches to the board remove the push buttons you may or may not have added. Connect the wire leading from the ground of the micro switch to the ground of the Arduino and the resistor (where the leg of the push button was). Connect the wire leading from the NO of the micro switch to the 5v of the Arduino.
For the LED wires connect the negative wire to the ground & the positive via a 220OHM resistor to the 5v. If you wire them up like this they will be always on. You could add them in the code and get them to switch on and off in sync with the buttons if you want.
Step 6: Coding Hell
So, now you've attached your fancy new buttons to you old code and suddenly it doesn't work as it should anymore. The letters appear two or three at a time and it doesn't work as it should with simple HTML5 games. Welcome to debounce hell.
First things first. The code we wrote during prototyping? it works fine and is simple, but it's not elegant. If you want to add more buttons you have to copy & paste snippets of code and change all the values inside them. If you forget one of them you enter the bugfixing hell. Detect a theme here? Coding is hell, but a very fun, problem solving hell.
We want pretty, short code. So we'll change all the individual button integers to arrays. This way, if you want to add more buttons you only have to change the button amount, the pins where they are located & their output. We also change the key inputs to ASCII because... it works better?
Now if you're like me you'll write a simple and easy way to use the buttons and it will not work as well as you'd like. So you create new versions (remember kids, incremental back-ups!), try different things, write constantly more complicated code that still doesn't work well and eventually go back to the simple code you wrote hours ago AND notice a small error which instantly fixes everything.
Let me spare you that journey, here's the working code:
Disclaimer: this text was written after hours of coding & bug fixing a very simple code. Please disregard any signs of frustration and focus on the working code posted below ;)
#include "Keyboard.h"<br>#define buttonAmount 3
int buttonPin[] = { A0,A1,A2 }; //Where are the buttons? int asciiLetter[] = { 97, 100, 119}; //Letters in ASCII, here: a,d,w int buttonState[buttonAmount]; //Is the button pushed or not?
void setup() {
for (int i = 0; i < buttonAmount; i++) { //cycle through the array pinMode(buttonPin[i], INPUT); //set all the pins to input } }
void loop() { for (int i = 0; i < buttonAmount; i++) //cycle through the array { buttonState[i] = digitalRead(buttonPin[i]); //What are the buttons doing? if (buttonState[i] == HIGH){ //If the button is pressed Keyboard.press(asciiLetter[i]); //send the corresponding letter } else //if the button is not pressed { Keyboard.release(asciiLetter[i]); //release the letter } }
}
Step 7: Everything Works!
Enjoy your plug & play custom controller!
If you liked this instructable, please consider voting for me in the contest!
Participated in the
Epilog Challenge 9
Discussions | https://www.instructables.com/id/Plug-and-Play-Arcade-Buttons/ | CC-MAIN-2019-43 | en | refinedweb |
Hide Forgot
Created attachment 367545 [details]
console-dump of crash
Description of problem:
Kernel freezes with KVM vms running [usually during startup]
Version-Release number of selected component (if applicable):
kernel-2.6.30.9-90.fc11.x86_64
qemu-0.10.6-9.fc11.x86_64
virt-manager-0.7.0-7.fc11.x86_64
How reproducible:
Happens pretty frequently
Steps to Reproduce:
1. Install F11 with latest patches
2. Install a bunch of VMs [kvm] windows, ubuntu, freebsd, opensolaris
3. set the VMs to start at reboot
Actual results:
kernel panic? with blinking numlock/caps lock at various times.
- sometimes when the VMs are booting
- sometimes when a VM is restarted
Expected results:
no crash
Additional info:
- Ran memtest86 overnight and no errors here.
- I suspect it happens with multiple VMs running - esp with OpenSolaris in the mix.
- Some of these VMs are carried over from F10 - and during this migration
OpenSolaris VM never worked. Recently [perhaps a couple of months back - I reinstalled the OpenSolaris VM]. Also until then, the windows VM was the primary active VM. But since then - I was attempting to run all the VMs [windows,freebsd, ubuntu, opensolaris] simultaneously - and have seen constant crashes.
- so the crashes have been cosntant during the past few kernel updates - and qemu, virt-manager updates.
a picture of one of the kernel dumps is attached
Created attachment 367547 [details]
lspci; cat /proc/meminfo; cat /proc/cpuinfo; dmesg
We really need to see the beginning of that oops report.
I'm not sure how to get the complete stack trace. All I can do is take pics of the console output.
I have the following update since my previous report:
I've been running a single [Windows] VM since then - and it was stable. So the issue is with multiple VMs - usually its triggered during the boot sequence of one of them.
I've upgraded to F12 now - and the crashes persist. I have the oops from 2 different crashes.
Kernel: 2.6.31.6-145.fc12.x86_64
- First one when all 4 VMs are booted together at startup [of host F12]
- Second one - with 3 VMs booted together at startup [without OpenSolaris]. Here the initial boot went fine. But on rebooting one of the VMs [ubuntu] a panic was triggered.
For this panic - I have the stack trace from the begining. I also get the following output on a ssh terminal connection [to the F12 host] from a different machine.
>>>>>>>>>>>>>>>>
[root@maverick ~]#
Message from syslogd@maverick at Dec 2 14:20:01 ...
kernel:general protection fault: 0000 [#1] SMP
Message from syslogd@maverick at Dec 2 14:20:01 ...
kernel:last sysfs file: /sys/kernel/mm/ksm/run
Message from syslogd@maverick at Dec 2 14:20:01 ...
kernel:Stack:
Message from syslogd@maverick at Dec 2 14:20:01 ...
kernel:Call Trace:
Message from syslogd@maverick at Dec 2 14:20:01 ...
kernel: <IRQ>
Message from syslogd@maverick at Dec 2 14:20:01 ...
kernel: <EOI>
Message from syslogd@maverick at Dec 2 14:20:01 ...
kernel:Code: 38 0f b6 d2 48 01 d0 74 30 48 8b 58 28 eb 13 48 89 df e8 03 f9 ff ff 48 89 df e8 cf f8 ff ff 4c 89 e3 48 85 db 74 12 48 8d 7b 78 <4c> 8b 23 e8 fa ee cb ff 85 c0 74 e8 eb d6 5b 41 5c c9 c3 55 48
Message from syslogd@maverick at Dec 2 14:20:01 ...
kernel:Kernel panic - not syncing: Fatal exception in interrupt
asterix:/home/balay>
Created attachment 375591 [details]
crash with 4 VMs started together
Created attachment 375592 [details]
crash with 3 VMs started together, and then one of the VMs was rebooted
Ok - I've disabled ipv6 on this machine [because the stack trace has references to it] - and now the VMs are lot more stable. I've tried a few things - theF12 host hasn't crashed yet.
I'll see if this stays stable [with all the 4VMs running concurrently].
BTW: should have mentioned: I use bridge networking for the VMs [and it is also listed in the stack trace]. So perhaps the combination of bridge networking with ipv6 is the trigger for the crash..
Am seeing something fairly similar, reported in
It seems I can confirm the ipv6 part of the anecdote. This is on a friends AMD box, which doesn't have a VT-d knob in the bios. I can't test myself but the issue is quite reproducible there.
Thanks to a lead from the Fedora Forums I found this bug report that mirrors recent problems I have seen on both Fedora 11 and Fedora 12 64-bit KVM systems that are using bridge networks.
If Autostart is enabled for at least one VM, the systems are hard locking at reboot.
However, if ipv6 is disabled, the host boots normally and the VMs autostart up as normal.
I have disabled ipv6 by editing /etc/modprobe.d/blacklist and adding the line:
install ipv6 /bin/true
If I remove all autostart options and re-enable ipv6, the KVM host starts fine, and the VMs can be manually started without any problems.
Hence, it appears there is a conflict (possibly just for systems using bridge networks) when ipv6 is enabled and VMs are configured to autostart.
Just an update: [after disabling ipv6] The machine now has been stable for the past 2 weeks [even with some reboots of the guest OSes]
[root@maverick ~]# uname -srv
Linux 2.6.31.6-145.fc12.x86_64 #1 SMP Sat Nov 21 15:57:45 EST 2009
[root@maverick ~]# uptime
12:09:12 up 13 days, 19:12, 1 user, load average: 0.31, 0.29, 0.21
I have had a similar experience, although I could get the kernel to crash even when KVM VM machines were not running (and libvirtd was disabled at startup).
Originally thought it was issue with Spanning Tree, but disabling had no effect.
The issue for me involved Windows 7 clients with TCP/IPv6 active on their NIC profile.
As soon as the NIC initialised (or reset for that matter), the Fedora 12 server would freeze.
Disabling TCP/IPv6 driver in Win7 clients resolved issue.
WinXP clients no problem, as TCP/IPv6 not installed.
For safety, turned off any explicit TCP/IPv6 settings on Fedora too, although even with IPV6INIT=no, Fedora still assigned auto IP6 address.
Network config:
HP ProCurve managed switch, with two ports configured in LACP (802.3ad) mode, no VLAN.
Fedora server config:
eth0+eth1 -> bond0 -> br0
/etc/sysconfig/network-scripts/ifcfg-br0:
...
IPV6INIT=no
IPV6_AUTOCONF=no
DHCPV6=no
...
ncftool> dumpxml br0
<?xml version="1.0"?>
<interface type="bridge" name="br0">
<start mode="onboot"/>
<protocol family="ipv4">
<ip address="10.16.182.254" prefix="24"/>
<route gateway="10.16.182.1"/>
</protocol>
<bridge stp="on">
<interface type="bond" name="bond0">
<bond mode="802.3ad">
<miimon freq="100" updelay="100" carrier="ioctl"/>
<interface type="ethernet" name="eth0">
<mac address="00:23:7D:FB:FE:35"/>
</interface>
<interface type="ethernet" name="eth1">
<mac address="00:23:7D:A8:EE:CC"/>
</interface>
</bond>
</interface>
</bridge>
</interface>
ncftool> dumpxml --live br0
<?xml version="1.0"?>
<interface name="br0" type="bridge">
<bridge>
<interface name="bond0" type="bond">
<bond>
<interface name="eth0" type="ethernet">
<mac address="00:23:7d:fb:fe:35"/>
</interface>
<interface name="eth1" type="ethernet">
<mac address="00:23:7d:fb:fe:35"/>
</interface>
</bond>
</interface>
<interface name="vnet0" type="ethernet">
<mac address="e2:30:33:b3:84:78"/>
</interface>
</bridge>
<protocol family="ipv4">
<ip address="10.16.182.254" prefix="24"/>
</protocol>
<protocol family="ipv6">
<ip address="fe80::223:7dff:fefb:fe35" prefix="64"/>
</protocol>
</interface>
I disable IPV6 on the F12 box by doing the following:
- edit /etc/sysconfig/network and add the line
NETWORKING_IPV6=no
- create a file /etc/modprobe.d/disable-ipv6.conf with the line
install ipv6 /bin/true
(In reply to comment #11)
> For safety, turned off any explicit TCP/IPv6 settings on Fedora too, although
> even with IPV6INIT=no, Fedora still assigned auto IP6 address.
I spent the whole weekend learning the netfilter code and SLUB debugging to find this problem:
There should be a patch soon.
*** Bug 545851 has been marked as a duplicate of this bug. ***
Confirmed that the following hack prevents the issue (real fix is being worked on):
void nf_conntrack_destroy(struct nf_conntrack *nfct)
{
void (*destroy)(struct nf_conntrack *);
if ((struct nf_conn *)nfct == &nf_conntrack_untracked) {
printk("JCM: nf_conntrack_destroy: trying to destroy
nf_conntrack_untracked! CONTINUING...\n");
//panic("JCM: nf_conntrack_destroy: trying to destroy
nf_conntrack_untracked!\n");
return; /* refuse to free nf_conntrack_untracked */
}
rcu_read_lock();
destroy = rcu_dereference(nf_ct_destroy);
BUG_ON(destroy == NULL);
destroy(nfct);
rcu_read_unlock();
}
EXPORT_SYMBOL(nf_conntrack_destroy);
The issue is that with multiple namespaces, we wind up decreasing the use count on the untracked static ct to zero and trying to free it, which is bad. Patrick should have a fix tomorrow using per-namespace untracked ct's.
Jon.
This hack is harmless, but in an ideal world we wouldn't try freeing the untracked ct in the first place.
*** Bug 521362 has been marked as a duplicate of this bug. ***
I see Kyle is already making a test kernel with this.
Yeah, builds are in progress on all the targets I think.
*** Bug 520108 has been marked as a duplicate of this bug. ***
This has been fixed and confirmed.
Is this bug fixed only in rawhide?
No, it's been committed to F-11 and F-12 too..
*** Bug 681917 has been marked as a duplicate of this bug. *** | https://bugzilla.redhat.com/show_bug.cgi?id=533087 | CC-MAIN-2019-43 | en | refinedweb |
Hello guys,
I have this problem, I have an Activity with the following code:
public class CreateProperty : Activity, ILocationListener
You could use my Geolocator plugin to call from shared code:
Also the code is here:
Also did you set your permissions correct?
Answers
I think you need to set minTime and minDistance params higher than 0.
I don't think that's really the problem, because I debugged it and the OnLocationChanged method is never fired...
Thank you very much @JamesMontemagno, it is very nice of you to share this code with people who are just getting started.
It works perfectly.
We wanted to point out the Feather's changeover diode (just to the right of the JST jack) and the Lipoly charging circuitry (to the right of the JST jack). There is also a 3.3V regulator on board. While you can get 500mA from it, you can't do it continuously from 5V as it will overheat the regulator. It's fine for, say, powering an ESP8266 WiFi chip or XBee radio though, since the current draw is 'spikey' & sporadic.
If you're running off of a battery, chances are you wanna know what the voltage is at! That way you can tell when the battery needs recharging. Lipoly batteries are 'maxed out' at 4.2V and stick around 3.7V for much of the battery life, then slowly sink down to 3.2V or so before the protection circuitry cuts it off. By measuring the voltage you can quickly tell when you're heading below 3.7V
To make this easy we stuck a double-100K resistor divider on the BAT pin, and connected it to A6, which is not exposed on the Feather breakout.
In Arduino, you can read this pin's voltage, then double it, to get the battery voltage.
#define VBATPIN A6

float measuredvbat = analogRead(VBATPIN);
measuredvbat *= 2;    // we divided by 2, so multiply back
measuredvbat *= 3.3;  // multiply by 3.3V, our reference voltage
measuredvbat /= 1024; // convert to voltage
Serial.print("VBat: "); Serial.println(measuredvbat);
For CircuitPython, we've written a
get_voltage() helper function to do the math for you. All you have to do is call the function, provide the pin and print the results.
import board
from analogio import AnalogIn

vbat_voltage = AnalogIn(board.VOLTAGE_MONITOR)

def get_voltage(pin):
    return (pin.value * 3.3) / 65536 * 2

battery_voltage = get_voltage(vbat_voltage)
print("VBat voltage: {:.2f}".format(battery_voltage))
From: k.hagan_at_[hidden]
Date: 2001-02-13 04:51:57
<pbristow_at_[hidden]> wrote:
>
> So using builtin constants might provide a more accurate result,
> but one that is not as portable (that may be different from
> other processors). So builtin pi, e ... is better, but maybe
> also badder?
I think differences in accuracy are inevitable, even if processors
claim to be IEEE. On x86 and 68k, the FPU has 80 bit registers.
Even if you set the operating precision to 64 or 32, it still isn't
entirely equivalent to an FPU that has true 32 or 64-bit wide
registers, such as (amusingly enough) the SSE registers added in
the Pentium 3 and 4, or the registers in a PowerPC or Alpha.
Intel's 64-bit chip has FP registers even wider still. They are
deliberately a couple of bits wider than *any* memory based
floating point type.
In all these cases, the accuracy that you get out depends on how
the compiler balances memory accesses against register pressure.
The standard allows FP calculations to be carried out at "inflated"
precision anyway, for these kinds of reason, so float calculations
may be done at double precision as specified in "classic C".
In such circumstances, I don't think we have the luxury of calling
the extra accuracy "badder", so we'll just have to call it better.
(The main reason why the x86 provides these "load constant"
instructions is to support rounding modes correctly. The most
accurate representation differs if the rounding mode is towards
+/- infinity. In one case we want the closest value below the
constant, and in the other case we want the closest value above it.
No compile-time constant can reproduce this behaviour, and working
it out at run-time is not terribly fast, at least for the "long
double" case! We probably don't care about this. Jens might, in the
context of his boost interval library, where he needs a pi for
argument reduction, if I remember correctly.)
> This is one reason for providing the constants in crude pre-
> processor #define form for all possible uses, and then providing
> a C++ representation which should be as portable as possible for
> the Standard.
I still don't like the namespace pollution that #define causes.
What was the objection to something like...?
template<class T> struct math_constants /*==namespace*/
{
static T e() { return T(2.718...L); }
};
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2001/02/8790.php | CC-MAIN-2019-43 | en | refinedweb |
From: JOAQUIN LOPEZ MUÑOZ (joaquin_at_[hidden])
Date: 2008-02-03 17:52:38
----- Original Message -----
From: "vicente.botet" <vicente.botet_at_[hidden]>
Date: Sunday, February 3, 2008 12:00 pm
Subject: Re: [boost] [review] Review of Flyweight library started
January 21 and still running!
To: boost_at_[hidden]
> Hi all,
>
> Thank you very much Joaquín, I have learned a lot with your library
> and the review. Thanks also for all the pertinent answers.
You're welcome. Thank you for taking the effort to do such a
thorough review.
[...]
> > * What is your evaluation of the design?
>
> The library has a simple to use interface based on an understandable
> domain-specific language having a good configurability and
> extensibility degree, even if the naming of some policies could be
> improved.
> This easy to use interface hides a more complex design. The relations
> between the different policies are not evident, and even if the
> library provides an elegant way to extend it, it's not evident to
> define such extensions without reading the code. Maybe the
> documentation should include a deeper description of the
> collaboration of the core implementation and the different policies.
Point taken. Will try to improve that part.
[...]
> * performance test with intermodule_holder
> The tests performed on example5 should be extended with a
> intermodule_holder specifier.
I don't see the need: remember the intermodule_holder affects
the static initialization process *only*, so the performance
of flyweight<T,intermodule_holder> is exactly the same as that
of flyweight<T>, save for the startup and shutdown phases of
the program.
> * intermodule_holder could use static_holder_class
> As the intermodule_holder is a kind of workaround for platforms
> where DLL can duplicate static libraries, we can have a better
> performance on the others for free.
>
> On platforms on which static variables are unique inter DLL the
> intermodule_holder specifier type can be a static_holder_class.
> This could be controlled using conditional compilation.
Yes, this is already taken note of.
[...]
> * holder specifier renaming
> The holder's main responsibility is to instantiate a unique
> instance for its internal resources depending on the scope
> of the flyweight class.
>
> So the variation here is the scope. I haven't a better suggestion,
> but in my opinion holder do not reflects its responsibilities.
If someone comes up with a better name I'll be happy to
change it. I admit the current terminology is not terrific.
> * concrete holder specifiers renaming
> I'd RENAME static_holder by module_holder, and intermodule_holder
> by process_holder.
> module_holder reflects much more the scope accessibility intent
> for the users, static_holder talks more about the implementation.
>
> So at the end there will be two holders:
> . module_holder (renaming of static_holder) and
> . process_holder
>
> I don't think that simple_holder is better than static_holder.
> Does simple_holder stands for simple implemented holder?
Yep, this is the idea. As in other terminology issues, I'll
do whatever people agree upon. The problem is that in my experience
names are something people rarely agree upon :-/
> * factory specifier renaming
> Maybe repository could be more adapted to the associated
> responsibilities, store the shared values.
>
> * equality semantic
> It would be great if the library reach to manage with the pathological
> cases without reducing the performances of the usual cases.
> If in order to solve the problem you propose the user overrides the
> operator== with non-& semantics this must be included in the
> documentation.
This is probably what I'll finally do, given the extremely
unlikeliness of the pathological cases.
> * adding a exclusive core specifier.
[...]
> As the interface do not change, this could be for future work.
Yep, thanks for your syntax suggestion, this will definitely
go to the future work section.
> > * What is your evaluation of the implementation?
>
> Quite good.
>
> Only some details
> * Minor correction
> The documentation states that the ..._specifier must be a
> ..._marker, but
> this is not needed for the ..._class
> On the code static_holder_class inherits from holder_marker, and
> hashed_factory_class inherits from factory_marker.
> Is this an error without importance, or there is something behind?
There is something behind :) As you point out, it's only
specifiers that need be marked as such. But as a bonus extra,
the factory classes provided by the library are also specifiers
on their own. How come? You can do so by using MPL lambda
expressions, for instance, the specifier:
// use std::greater rather than the default std::less
set_factory<std::greater<boost::mpl::_2> >
can equivalently be written like this:
set_factory_class<
boost::mpl::_1,boost::mpl::_2,
std::greater<boost::mpl::_2>
>
You can take a look at the tests, which exercise both forms
of specification. I decided not to mention that factory
(and holder) classes can act as specifiers because I think
readers not very acquainted with MPL might easily get confused
(understanding lambda expressions in connection with the
specifiers alone can be already quite a task). This ability used
to be documented at the reference, but I've just checked
and it is no longer there, I'll restore that bit.
> * refcounted_value is a class that could be public. Why it is
> declared in the namespace detail.
It is an implementation detail, the user won't ever be exposed
to it.
>
> * could you explain why detail::recursive_lightweight_mutex is
> needed and boost::detail::lightweight_mutex is not used directly?
Recursive mutexes are needed when constructing composite complexes
with flyweight<>.
[...]
> A more general Boost question. Can boost libraries use the details
> namespace (i.e. private implementation) of the other boost libraries?
No, this shouldn't be done. If some detail of lib A is
deemed interesting for use in lib B, the proper thing to do
is to lift this detail to namespace boost::detail and folder
boost/detail. I've done this in the past with some parts of
Boost.MultiIndex that are now used by other libraries.
> What is needed to promote these shared private implementations to a
> public boost library? Should someone propose the library?
I understand this is what fast track reviews are about.
[...]
> Some more elaborated examples should be included in order to show
> the user how to manage with more complicated examples, for example
> inter-process flyweights. These examples should be a probe of the
> extensibility mechanisms and a check the orthogonality of the
> policies.
I'll try to extend the examples section. As it happens, this is
one of the tasks I've got most difficulties with, since it's
not easy to come up with good example ideas, and I try to keep
them fun and motivating if possible.
> * intermodule_holder specifier
> A reference to a document explaining the problem with platforms not
> ensuring the unicity of static variables when DLL are used will be welcome.
I'm no expert here, but I'd say that platforms without
static var unicity across dynamic modules are the majority rather
than the exception.
> * Reference section
> The reference section was much more hard to follow. I think that the
> interactions between the different classes are very hard to describe
> using the current style.
Well, I know the reference is not the easiest thing to follow,
but it tries to keep the level of formality you find at the
C++ standard. The idea is you go to the reference for exact,
authoritative information. Once you get used to the stiff style,
it gets bearable.
> An implementation section will help to understand how all this works,
> describing the interactions between the core and the policies.
I understand the reader might be curious about the internal
implementation, but is this knowledge really needed to use and
extend the lib? I'd much prefer to concentrate on improving the part
on extending the library without disclosing too much about
how things are assembled internally.
> * Performance section
> I expect a performance section to be included. And how the future
> work will improve this performance.
Yes, this will be done.
>
> * Rationale
> Adding of the map versus set problem explanation.
I've got something prepared to ask Alberto on this one, please
read the following post of mine. | https://lists.boost.org/Archives/boost/2008/02/132939.php | CC-MAIN-2019-43 | en | refinedweb |
Test Run - Fault Injection Testing with TestApi
By James McCaffrey | August 2010
Fault injection testing is the process of deliberately inserting an error into an application under test and then running the application to determine whether the application deals with the error properly. Fault injection testing can take several different forms. In this month’s column, I explain how you can introduce faults into .NET applications at run time using a component of the TestApi library.
The best way for you to see where I’m headed in this column is to take a look at the screenshot in Figure 1. The screenshot shows that I’m performing fault injection testing on a dummy .NET WinForm application named TwoCardPokerGame.exe. A C# program named FaultHarness.exe is running in the command shell. It alters the normal behavior of the application under test so the application throws an exception the third time a user clicks on the button labeled Evaluate. In this situation, the Two Card Poker application does not handle the application exception gracefully and the result is the system-generated message box.
Figure 1 Fault Injection Testing in Action
Let’s take a closer look at this scenario to consider some of the details involved. When FaultHarness.exe is launched from the command shell, behind the scenes the harness prepares profiling code that will intercept the normal code execution of TwoCardPokerGame.exe. This is called the fault injection session.
The fault injection session uses a DLL to start watching for calls to the application’s button2_Click method, which is the event handler for the button labeled Evaluate. The fault injection session has been configured so that the first two times a user clicks on the Evaluate button, the application behaves as coded, but on the third click the fault session causes the application to throw an exception of type System.ApplicationException.
The fault session records session activity and logs a set of files to the test host machine. Notice in Figure 1 that the first two application Deal-Evaluate click pairs work properly, but the third click generated an exception.
In the sections that follow, I’ll briefly describe the dummy Two Card Poker Game application under test, present and explain in detail the code in the FaultHarness.exe program shown in Figure 1, and provide some tips about when the use of fault injection testing is appropriate and when alternative techniques are more suitable. Although the FaultHarness.exe program itself is quite simple and most of the difficult work is performed behind the scenes by the TestApi DLLs, understanding and modifying the code I present here to meet your own testing scenarios requires a solid understanding of the .NET programming environment. That said, even if you’re a .NET beginner, you should be able to follow my explanations without too much difficulty. I’m confident you’ll find the discussion of fault injection an interesting and possibly useful addition to your toolset.
The Application Under Test
My dummy application under test is a simplistic but representative C# WinForm application that simulates a hypothetical card game called Two Card Poker. The application consists of two main components: TwoCardPokerGame.exe provides the UI and TwoCardPokerLib.dll provides the underlying functionality.
To create the game DLL I launched Visual Studio 2008 and selected the C# Class Library template from the File | New Project dialog box. I named the library TwoCardPokerLib. The overall structure of the library is presented in Figure 2. The code for TwoCardPokerLib is too long to present in its entirety in this article. The complete source code for the TwoCardPokerLib library and the FaultHarness fault injection harness is available in the code download that accompanies this article.
using System; namespace TwoCardPokerLib { // ------------------------------------------------- public class Card { private string rank; private string suit; public Card() { this.rank = "A"; // A, 2, 3, . . ,9, T, J, Q, K this.suit = "c"; // c, d, h, s } public Card(string c) { . . . } public Card(int c) { . . . } public override string ToString(){ . . . } public string Rank { . . . } public string Suit { . . . } public static bool Beats(Card c1, Card c2) { . . . } public static bool Ties(Card c1, Card c2) { . . . } } // class Card // ------------------------------------------------- public class Deck { private Card[] cards; private int top; private Random random = null; public Deck() { this.cards = new Card[52]; for (int i = 0; i < 52; ++i) this.cards[i] = new Card(i); this.top = 0; random = new Random(0); } public void Shuffle(){ . . . } public int Count(){ . . . } public override string ToString(){ . . . } public Card[] Deal(int n) { . . . } } // Deck // ------------------------------------------------- public class Hand { private Card card1; // high card private Card card2; // low card public Hand(){ . . . } public Hand(Card c1, Card c2) { . . . } public Hand(string s1, string s2) { . . . } public override string ToString(){ . . . } private bool IsPair() { . . . } private bool IsFlush() { . . . } private bool IsStraight() { . . . } private bool IsStraightFlush(){ . . . } private bool Beats(Hand h) { . . . } private bool Ties(Hand h) { . . . } public int Compare(Hand h) { . . . } public enum HandType { . . . } } // class Hand } // ns TwoCardPokerLib
The Application UI Code
Once I had the underlying TwoCardPokerLib library code finished, I created a dummy UI component. I started a new project in Visual Studio 2008 using the C# WinForm Application template and I named my application TwoCardPokerGame.
Using the Visual Studio designer, I dragged a Label control from the Toolbox collection onto the application design surface, and modified the control’s Text property from “textBox1” to “Two Card Poker.” Next I added two more Label controls (“Your Hand” and “Computer’s Hand”), two TextBox controls, two Button controls (“Deal” and “Evaluate”), and a ListBox control. I didn’t change the default control names of any of the eight controls—textBox1, textBox2, button1 and so on.
Once my design was in place, I double-clicked on the button1 control to have Visual Studio generate an event handler skeleton for the button and load file Form1.cs into the code editor. At this point I right-clicked on the TwoCardPokerGame project in the Solution Explorer window, selected the Add Reference option from the context menu, and pointed to the file TwoCardPokerLib.dll. In Form1.cs, I added a using statement so that I wouldn’t need to fully qualify the class names in the library.
Next, I added four class-scope static objects to my application:
Object h1 is the Hand for the user, and h2 is the Hand for the computer. Then I added some initialization code to the Form constructor:
The Deck constructor creates a deck of 52 cards, in order from the ace of clubs to the king of spades, and the Shuffle method
randomizes the order of the cards in the deck.
Next I added the code logic to the button1_Click method as shown in Figure 3. For each of the two hands, I call the Deck.Deal method to remove two cards from the deck object. Then I pass those two cards to the Hand constructor and display the value of the hand in a TextBox control. Notice that the button1_Click method handles any exception by displaying a message in the ListBox control.
private void button1_Click( object sender, EventArgs e) { try { ++dealNumber; listBox1.Items.Add("Deal # " + dealNumber); Card[] firstPairOfCards = deck.Deal(2); h1 = new Hand(firstPairOfCards[0], firstPairOfCards[1]); textBox1.Text = h1.ToString(); Card[] secondPairOfCards = deck.Deal(2); h2 = new Hand(secondPairOfCards[0], secondPairOfCards[1]); textBox2.Text = h2.ToString(); listBox1.Items.Add(textBox1.Text + " : " + textBox2.Text); } catch (Exception ex) { listBox1.Items.Add(ex.Message); } }
Next, in the Visual Studio designer window I double-clicked on the button2 control to auto-generate the control’s event handler
skeleton. I added some simple code to compare the two Hand objects and display a message in the ListBox control. Notice that the button2_Click method does not directly handle any exceptions:
private void button2_Click( object sender, EventArgs e) { int compResult = h1.Compare(h2); if (compResult == -1) listBox1.Items.Add(" You lose"); else if (compResult == +1) listBox1.Items.Add(" You win"); else if (compResult == 0) listBox1.Items.Add(" You tie"); listBox1.Items.Add("-------------------------"); }
The Fault Injection Harness
Before creating the fault injection harness shown in Figure 1, I downloaded the key DLLs to my test host machine. These DLLs are part of a collection of .NET libraries named TestApi and can be found at testapi.codeplex.com.
The TestApi library is a collection of software-testing-related utilities. Included in the TestApi library is a set of Managed Code Fault Injection APIs. (Read more about them at blogs.msdn.com/b/ivo_manolov/archive/2009/11/25/9928447.aspx.) I downloaded the latest fault injection APIs release, which in my case was version 0.4, and unzipped the download. I will explain what’s in the download and where to place the fault injection binaries shortly.
Version 0.4 supports fault injection testing for applications created using the .NET Framework 3.5. The TestApi library is under active development, so you should check the CodePlex site for updates to the techniques I present in this article. Additionally, you may want to check for updates and tips on the blog of Bill Liu, the primary developer of the TestApi fault injection library, at blogs.msdn.com/b/billliu/.
To create the fault injection harness I started a new project in Visual Studio 2008 and selected the C# Console Application template. I named the application FaultHarness and I added some minimal code to the program template (see Figure 4).
using System; namespace FaultHarness { class Program { static void Main(string[] args) { try { Console.WriteLine("\nBegin TestApi Fault Injection environmnent session\n"); // create fault session, launch application Console.WriteLine("\nEnd TestApi Fault Injection environment session"); } catch (Exception ex) { Console.WriteLine("Fatal: " + ex.Message); } } } // class Program } // ns
I hit the <F5> key to build and run the harness skeleton, which created a \bin\Debug folder in the FaultHarness root folder.
The TestApi download has two key components. The first is TestApiCore.dll, which was located in the Binaries folder of the unzipped download. I copied this DLL into the root directory of the FaultHarness application. Then I right-clicked on the FaultHarness project in the Solution Explorer window, selected Add Reference, and pointed it to TestApiCore.dll. Next, I added a using statement for Microsoft.Test.FaultInjection to the top of my fault harness code so my harness code could directly access the functionality in TestApiCore.dll. I also added a using statement for System.Diagnostics because, as you’ll see shortly, I want to access the Process and ProcessStartInfo classes from that namespace.
The second key component in the fault injection download is a folder named FaultInjectionEngine. This holds 32-bit and 64-bit versions of FaultInjectionEngine.dll. I copied the entire FaultInjectionEngine folder into the folder holding my FaultHarness executable, in my case C:\FaultInjection\FaultHarness\bin\Debug\. The 0.4 version of the fault injection system I was using requires the FaultInjectionEngine folder to be in the same location as the harness executable. Additionally, the system requires that the application under test binaries be located in the same folder as the harness executable, so I copied files TwoCardPokerGame.exe and TwoCardPokerLib.dll into C:\FaultInjection\FaultHarness\bin\Debug\.
To summarize, when using the TestApi fault injection system, a good approach is to generate a skeleton harness and run it so that a harness \bin\Debug directory is created, then place file TestApiCore.dll in the harness root directory, place the FaultInjectionEngine folder in \bin\Debug, and place the application under test binaries (.exe and .dll) in \bin\Debug as well.
Using the TestApi fault injection system requires that you specify the application under test, the method in the application under test that will trigger a fault, the condition that will trigger a fault, and the kind of fault that will be triggered:
string appUnderTest = "TwoCardPokerGame.exe"; string method = "TwoCardPokerGame.Form1.button2_Click(object, System.EventArgs)"; ICondition condition = BuiltInConditions.TriggerEveryOnNthCall(3); IFault fault = BuiltInFaults.ThrowExceptionFault( new ApplicationException( "Application exception thrown by Fault Harness!")); FaultRule rule = new FaultRule(method, condition, fault);
Notice that, because the system requires the application under test to be in the same folder as the harness executable, the name of the application under test executable does not need the path to its location.
Specifying the name of the method that will trigger the injected fault is a common source of trouble for TestApi fault injection beginners. The method name must be fully qualified in the form Namespace.Class.Method(args). My preferred technique is to use the ildasm.exe tool to examine the application under test to help me determine the triggering method’s signature. From the special Visual Studio tools command shell I launch ildasm.exe, point to the application under test, then double-click on the target method. Figure 5 shows an example of using ildasm.exe to examine the signature for the button2_Click method.
Figure 5 Using ILDASM to Examine Method Signatures
When specifying the trigger method signature, you do not use the method return type, and you do not use parameter names. Getting the method signature correct sometimes requires a bit of trial and error. For example, on my first attempt to target button2_Click, I used:
I had to correct it to:
The TestApi download contains a Documentation folder containing a concepts document that provides good guidance on how to correctly construct different kinds of method signatures including constructors, generic methods, properties, and overloaded operators. Here I target a method that’s located in the application under test, but I could have also targeted a method in the underlying TwoCardPokerLib.dll, such as:
After specifying the trigger method, the next step is to specify the condition under which the fault will be injected into the application under test. In my example I used TriggerEveryOnNthCall(3), which as you’ve seen injects a fault every third time the trigger method is called. The TestApi fault injection system has a neat set of trigger conditions including TriggerIfCalledBy(method), TriggerOnEveryCall, and others.
After specifying the trigger condition, the next step is to specify the type of fault that will be injected into the system under test. I used BuiltInFaults.ThrowExceptionFault. In addition to exception faults, the TestApi fault injection system has built-in return type faults that allow you to inject erroneous return values into your application under test at run time. For example, this will cause the trigger method to return a (presumably incorrect) value of -1:
After the fault trigger method, condition, and fault kind have been specified, the next step is to create a new FaultRule and pass that rule to a new FaultSession:
FaultRule rule = new FaultRule(method, condition, fault); Console.WriteLine( "Application under test = " + appUnderTest); Console.WriteLine( "Method to trigger injected runtime fault = " + method); Console.WriteLine( "Condition which will trigger fault = On 3rd call"); Console.WriteLine( "Fault which will be triggered = ApplicationException"); FaultSession session = new FaultSession(rule);
With all the preliminaries in place, the last part of writing the fault harness code is to programmatically launch the application under test in the fault session environment:
When you execute the fault harness, it will launch the application under test in your fault session, with the FaultInjectionEngine.dll watching for situations where the trigger method is called when the trigger condition is true. The tests are performed manually here, but you can also run test automation in a fault session.
While the fault session is running, information about the session is logged into the current directory—that is, the directory that holds the fault harness executable and the application under test executable. You can examine these log files to help resolve any problems that might occur while you’re developing your fault injection harness.
Discussion
The example and explanations I've presented here should get you up and running with creating a fault injection harness for your own application under test. As with any activity that’s part of the software development process, you will have limited resources and you should analyze the costs and benefits of performing fault injection testing. In the case of some applications, the effort required to create fault injection testing may not be worthwhile, but there are many testing scenarios where fault injection testing is critically important. Imagine software that controls a medical device or a flight system. In situations such as these, applications absolutely must be robust and able to correctly handle all kinds of unexpected faults.
There is a certain irony involved with fault injection testing. The idea is that, if you can anticipate the situations when an exception can occur, you can in theory often programmatically guard against that exception and test for the correct behavior of that guarding behavior. However, even in such situations, fault injection testing is useful for generating difficult to create exceptions. Additionally, it’s possible to inject faults that are very difficult to anticipate, such as System.OutOfMemoryException.
Fault injection testing is related to and sometimes confused with mutation testing. In mutation testing, you deliberately insert errors into the system under test, but then execute an existing test suite against the faulty system in order to determine whether the test suite catches the new errors created. Mutation testing is a way to measure test suite effectiveness and ultimately increase test case coverage. As you’ve seen in this article, the primary purpose of fault injection testing is to determine whether the system under test correctly handles errors.: Bill Liu and Paul Newson
Receive the MSDN Flash e-mail newsletter every other week, with news and information personalized to your interests and areas of focus. | https://msdn.microsoft.com/magazine/ff898404.aspx | CC-MAIN-2019-43 | en | refinedweb |
by Radu Raicea
How — and why — you should use Python Generators
Generators have been an important part of Python ever since they were introduced with PEP 255.
Generator functions allow you to declare a function that behaves like an iterator.
They allow programmers to make an iterator in a fast, easy, and clean way.
What’s an iterator, you may ask?.
An iterator is defined by a class that implements the Iterator Protocol. This protocol looks for two methods within the class:
__iter__ and
Whoa, step back. Why would you even want to make iterators?
Saving memory space
Iterators don’t.
Let’s = 1
def __iter__(self): return self
def __next__(self): self.number += 1 if self.number >= self.max: raise StopIteration elif check_prime(self.number): return self.number else: return self.__next__()
Primes is instantiated with a maximum value. If the next prime is greater or equal than the
max, the iterator will raise a
StopIteration exception, which ends the iterator.
When we request the next element in the iterator, it will increment
number by 1 and check if it’s a prime number. If it’s not, it will call
__next__ again until
number is prime. Once it is, the iterator returns the number.
By using an iterator, we’re not creating a list of prime numbers in our memory. Instead, we’re generating the next prime number every time we request for it.
Let’s try it out:
primes = Primes(100000000000)
print(primes)
for x in primes: print(x)
---------
<__main__.Primes object at 0x1021834a8>235711...
Every iteration of the
Primes object calls
__next__ to generate the next prime number.
Iterators can only be iterated over once. If you try to iterate over
primes again, no value will be returned. It will behave like an empty list.
Now that we know what iterators are and how to make one, we’ll move on to generators.
Generators
Recall that generator functions allow us to create iterators in a more simple fashion..
If we transform our
Primes iterator into a generator, it’ll look like this:
def Primes(max): number = 1 while number < max: number += 1 if check_prime(number): yield number
primes = Primes(100000000000)
print(primes)
for x in primes: print(x)
---------
<generator object Primes at 0x10214de08>235711...
Now that’s pretty pythonic! Can we do better?
Yes! We can use Generator Expressions, introduced with PEP 289.
This is the list comprehension equivalent of generators. It works exactly in the same way as a list comprehension, but the expression is surrounded with
() as opposed to
[].
The following expression can replace our generator function above:
primes = (i for i in range(2, 100000000000) if check_prime(i))
print(primes)
for x in primes: print(x)
---------
<generator object <genexpr> at 0x101868e08>235711...
This is the beauty of generators in Python.
In summary…
- Generators allow you to create iterators in a very pythonic manner.
- Iterators allow lazy evaluation, only generating the next element of an iterable object when requested. This is useful for very large data sets.
- Iterators and generators can only be iterated over once.
- Generator Functions are better than Iterators.
- Generator Expressions are better than Iterators (for simple cases only).
You can also check out my explanation of how I used Python to find interesting people to follow on Medium.
For more updates, follow me on Twitter. | https://www.freecodecamp.org/news/how-and-why-you-should-use-python-generators-f6fb56650888/ | CC-MAIN-2019-43 | en | refinedweb |
Python Client for QuadrigaCX
Project description
Introduction
Quadriga is a Python client for Canadian cryptocurrency exchange platform QuadrigaCX. It wraps the exchange’s REST API v2 using requests library.
Announcements
Requirements
- Python 2.7, 3.4, 3.5 or 3.6.
- QuadrigaCX API secret, API key and client ID (the number used for your login).
Installation
To install a stable version from PyPi:
~$ pip install quadriga
To install the latest version directly from GitHub:
~$ pip install -e git+git@github.com:joowani/quadriga.git@master#egg=quadriga
You may need to use sudo depending on your environment.
Getting Started
Here are some usage examples:
from quadriga import QuadrigaClient client = QuadrigaClient( api_key='api_key', api_secret='api_secret', client_id='client_id', ) client.get_balance() # Get the user's account balance client.lookup_order(['order_id']) # Look up one or more orders by ID client.cancel_order('order_id') # Cancel an order by ID client.get_deposit_address('bch') # Get the funding address for BCH client.get_deposit_address('btc') # Get the funding address for BTC client.get_deposit_address('btg') # Get the funding address for BTG client.get_deposit_address('eth') # Get the funding address for ETH client.get_deposit_address('ltc') # Get the funding address for LTC client.withdraw('bch', 1, 'bch_wallet_address') # Withdraw 1 BCH to wallet client.withdraw('btc', 1, 'btc_wallet_address') # Withdraw 1 BTC to wallet client.withdraw('btg', 1, 'btg_wallet_address') # Withdraw 1 BTG to wallet client.withdraw('eth', 1, 'eth_wallet_address') # Withdraw 1 ETH to wallet client.withdraw('ltc', 1, 'ltc_wallet_address') # Withdraw 1 LTC to wallet book = client.book('btc_cad') book.get_ticker() # Get the latest ticker information book.get_user_orders() # Get user's open orders book.get_user_trades() # Get user's trade history book.get_public_orders() # Get public open orders book.get_public_trades() # Get recent public trade history book.buy_market_order(10) # Buy 10 BTC at market price book.buy_limit_order(5, 10) # Buy 5 BTC at limit price of $10 CAD book.sell_market_order(10) # Sell 10 BTC at market price book.sell_limit_order(5, 10) # Sell 5 BTC at limit price of $10 CAD
Donation
If you found this library useful, feel free to donate.
- BTC: 3QG2wSQnXNbGv1y88oHgLXtTabJwxfF8mU
- ETH: 0x1f90a2a456420B38Bdb39086C17e61BF5C377dab
Disclaimer
The author(s) of this project is in no way affiliated with QuadrigaCX, and shall not accept any liability, obligation or responsibility whatsoever for any cost, loss or damage arising from the use of this client. Please use at your own risk.
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/quadriga/ | CC-MAIN-2019-43 | en | refinedweb |
Simple, Pythonic text processing. Sentiment analysis, POS tagging, noun phrase parsing, and more.
Project description
from text.blob import TextBlob! """ blob = TextBlob(zen) # Create a new TextBlob
Part-of-speech and noun phrase tagging sentiment property returns a tuple of the form (polarity, subjectivity) where polarity ranges from -1.0 to 1.0 and subjectivity ranges from 0.0 to 1.0.
blob.sentiment # (0.20, 0.58).
for sentence in blob.sentences: print(sentence) # Beautiful is better than ugly print("---- Starts at index {}, Ends at index {}"\ .format(sentence.start_index, sentence.end_index)) # 0, 30
Get a serialized version of the blob (a list of dicts)
blob.serialized # [{'end_index': 30, # 'noun_phrases': ['beautiful'], # 'raw_sentence': 'Beautiful is better than ugly.', # 'start_index': 0, # 'stripped_sentence': 'beautiful is better than ugly'},
Testing
Run
$ nosetests
to run all tests.
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/textblob/0.1.35/ | CC-MAIN-2019-43 | en | refinedweb |
by Shaun Persad
How you can build your own free, serverless comment box
Contentful’s flexible content modeling goes far beyond blog posts. Here’s how you can leverage Contentful and Netlify to create a nested commenting system that’s easy to moderate and deploy.
The motivation
I find most commenting systems out there to be…lacking. Disqus can often be slow to render, and their user tracking behavior doesn’t have the best reputation. Meanwhile, Facebook’s comments plugin is quite nice, but of course is limited to Facebook users.
What I really wanted was the native speed and approach to nested commenting and moderation taken by sites like Hacker News and Indie Hackers, but I needed a solution that would be portable to multiple projects.
There just didn’t seem to be a great fit out there, so I decided to build my own, with my wish list of features:
- Free
- Low barrier to entry — minimal steps required to submit a comment
- Low maintenance — serverless, to not worry about hosting or scaling
- Easy moderation — use a dashboard to perform CRUD on comments
- Performant — super-fast to appear on the page
- Flexible — users should be able to log in via multiple platforms
- Powerful — comments should have smart formatting features
- High comment quality — users can upvote and downvote comments
- Subscriptions — users can receive notifications when their comments are replied to
Over the course of this series, we will build out a commenting system that incorporates each of the above aspects.
The plan
Our stack will initially include:
- Contentful as a database and moderation dashboard
- AWS Lambda via Netlify as our back-end
- React on the front-end
We will create a React component to serve as our comment box, and supply it with the ability to make an API call to Contentful to fetch comments as necessary. It will also be able to make an API call to our Lambda function to post a comment to Contentful.
Project-wise, our Lambda function will live along-side our front-end code. Both the front-end and back-end will be set up to be continuously deployed via Netlify.
By the way, the above stack is all free! Well, mostly. Unless you’re going to be doing over 10,000 comments, it’s free. Also, I’m not affiliated with any of these companies…I just love their stuff :)
Contentful in 10 seconds
If you’re not already familiar with Contentful and how it works, it’s a “headless” (API-driven) CMS. You’re able to model your content with different fields and field types, and then you create content based on those models. You can build your front-end however you like, and query for your data using their API. It’s super flexible, and their dashboard is quite nice to use. It’s basically the best thing to happen to CMS’s since, well, ever?
I was already using Contentful for my blog posts, so I wondered, could it be viable to host comments as well? I’m happy to report that the answer is yes! However, a few of the items on my wishlist don’t quite work out using just Contentful. But don’t worry, we’ll get there…in the subsequent posts of this series.
We’ll be using Contentful because:
- flexible data modeling
- convenient API
- moderation via a dashboard
- you may already be using it for your website/blog that needs comments
Netlify in 10 seconds
I think Netlify has by far the most enjoyable deployment experience for front-end apps. It links to your GitHub repo and sets you up to continuously deploy a static site to CDN-backed hosting. They also have Netlify Functions, which let you deploy to AWS Lambda without any of the pain of messing around in AWS.
You can get started at their docs, but honestly, their dashboard is so easy to use and understand, I recommend just logging in and poking around.
We’ll be using Netlify because:
- painless AWS Lambda integration
- you may already be using it for your website/blog that needs comments
- If you’re not already using it, you can still deploy the Lambda functions we create to AWS itself
Wait, no “React in 10 seconds”?
I don’t know if 10 seconds is enough to do React justice. If you haven’t yet learned it, you should! But skip the Redux and Flux stuff. Chances are you don’t need any of that (but that’s another topic for another time).
Content modeling in Contentful
Now down to business.
There are two different approaches we could take regarding how we handle our users: authless and logged-in commenting:
- Authless — anyone can leave a comment simply by supplying their name
- Logged-in — only users who are authenticated in some auth system can comment
I prefer logged-in commenting, because in my opinion, the conversations tend to be more civilized. Plus, you tend to avoid spam altogether. On the flipside, the barrier to create a comment is slightly higher.
However, we will start off with authless commenting, because it’s simpler to implement. Once we get our feet wet, we’ll jump into logged-in commenting in Part 2.
Regardless, we’re going to first need to create a content model to represent our comments.
For both authless and logged-in approaches, our Comment content model will remain mostly the same as well, though there will be some later changes to the Author field, as noted below.
The Comment content model
This is the model at the heart of our commenting system. Comments should have four fields:
Body
- The actual body of the comment
- Mark this one as the entry title
- Feel free to also set a maximum and/or minimum value on its length
Author
- A unique identifier representing the user who posted this comment.
- For authless commenting, you’d use short text and fill in the author’s name in this field
- For logged-in commenting, this field will become a reference to the upcoming CommentAuthor model
Subject
- The unique ID of the blog post (or equivalent) that these comments belong to
- It can also be the URL of the page
- For maximum flexibility, I chose not to assume that you’re storing your blog posts in Contentful, or else this would be a reference field instead of short text
ParentComment
- If this comment is a reply to another comment, we’ll reference that comment here
- This field is what enables us to create nested comments
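The ParentComment reference is all the data model needs for threading: Contentful hands back a flat list of comments, and the client folds it into a tree. react-commentbox does this for you, but the idea can be sketched with a small stdlib-only function (the id/parentCommentId shape below is a simplified stand-in for the normalized comment objects we'll build later):

```javascript
// Fold a flat, oldest-first list of comments into a nested tree.
// Each comment needs an `id` and a `parentCommentId` (null for top-level).
function nestComments(flatComments) {
    const byId = {};
    const topLevel = [];

    // First pass: index every comment and give it an empty replies array.
    flatComments.forEach(comment => {
        byId[comment.id] = Object.assign({}, comment, { replies: [] });
    });

    // Second pass: attach each comment to its parent, or to the top level.
    flatComments.forEach(comment => {
        const node = byId[comment.id];
        const parent = comment.parentCommentId ? byId[comment.parentCommentId] : null;
        (parent ? parent.replies : topLevel).push(node);
    });

    return topLevel;
}

const tree = nestComments([
    { id: 'a', parentCommentId: null, body: 'First!' },
    { id: 'b', parentCommentId: 'a', body: 'A reply' },
    { id: 'c', parentCommentId: null, body: 'Another thread' }
]);
// tree[0].replies[0].body === 'A reply'
```

Because comments arrive oldest-first, every parent is indexed before any of its replies is attached, so a single pair of passes is enough.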
Implementing authless commenting
For this implementation, we want the user to enter their name before they are able to post a comment. I recommend doing an initial read-through of the following steps, and then check out the final demo project at the end to see how it all comes together.
Front-end
Now that our Comment model is done, it’s time to create our comment box. The good news is that I’ve already made a generic “comment box” React component. It’s designed as a low-order component, where you wrap a higher-order component around it to handle fetching and creating Contentful comments, and other application-specific business logic.
You can install it and the other required packages via npm:
npm install react-commentbox contentful contentful-management --save
The GitHub repo has a list of every prop you can pass to it, but minimally, we’ll be implementing and passing these:
getComments: a function that returns a promise that resolves to an array of comments, ordered from oldest to newest
normalizeComment: a function that maps your array of comments to objects that the component understands
comment: a function that makes an API call to create a comment, and returns a promise
disabled: set to true when commenting should be disabled
disabledComponent: the component to show when commenting is disabled
Let’s create our higher-level component:
import React from 'react';
import CommentBox from 'react-commentbox';

class MyCommentBox extends React.Component {

    state = {
        authorName: '',
        authorNameIsSet: false
    };

    onChangeAuthorName = (e) => this.setState({
        authorName: e.currentTarget.value
    });

    onSubmitAuthorName = (e) => {

        e.preventDefault();
        this.setState({ authorNameIsSet: true });
    };
}
Notice that the component is in charge of setting the author’s name.
By the way, we’re using the transform-class-properties Babel plugin to avoid tedious constructor setup and function bindings. You don’t need to use it, but it’s quite handy.
Now we need to implement the business-logic props that react-commentbox needs.
We’ll start off by fetching comments from Contentful, and normalizing them:
// fetch our comments from Contentful
getComments = () => {

    return this.props.contentfulClient.getEntries({
        'order': 'sys.createdAt',
        'content_type': 'comment',
        'fields.subject': this.props.subjectId,
    }).then(response => {

        return response.items;

    }).catch(console.error);
};
// turn Contentful entries into objects that react-commentbox expects
normalizeComment = (comment) => {

    const { id, createdAt } = comment.sys;
    const { body, author, parentComment } = comment.fields;

    return {
        id,
        bodyDisplay: body,
        userNameDisplay: author,
        timestampDisplay: createdAt.split('T')[0],
        belongsToAuthor: false,
        parentCommentId: parentComment ? parentComment.sys.id : null
    };
};
Next, we need to make the API call to create comments:
// make an API call to post a commentcomment = (body, parentCommentId = null) => {
return this.props.postData('/create-comment', { body, parentCommentId, authorName: this.state.authorName, subjectId: this.props.subjectId });};
We also need to ask the user for their name before they can comment:
// will be shown when the comment box is initially disableddisabledComponent = (props) => {
return ( <form className="author-name" onSubmit{ this.onSubmitAuthorName } > <input type="text" placeholder="Enter your name to post a comment" value={ this.state.authorName } onChange={ this.onChangeAuthorName } /> <button type="submit">Submit</button> </form> );};
Then, bring it all together in
render, by passing the appropriate props to
react-commentbox:
render() {
return ( <div> <h4>Comments</h4> <CommentBox disabled={ !this.state.authorNameIsSet } getComments={ this.getComments } normalizeComment={ this.normalizeComment } comment={ this.comment } disabledComponent={ this.disabledComponent } /> </div> );};
We’ve also set the
disabled prop to
true while the author's name is not set. This disables the
textarea, and shows the
disabledComponent form we made to get the author's name.
You can view the complete component here.
You may have noticed that our newly created
MyCommentBox also expects a few props itself:
subjectId,
postData, and
contentfulClient.
The
subjectId is simply some unique ID or URL of the blog post (or equivalent entity) that these comments are for.
postData is a function that makes POST ajax calls. Using
fetch, it could look like this:
function postData(url, data) {
return fetch(`.netlify/functions${url}`, { body: JSON.stringify(data), headers: { 'content-type': 'application/json' }, method: 'POST', mode: 'cors' // if your endpoints are on a different domain }).then(response => response.json());}
contentfulClient is an instance of the client you get when using the contentful npm package (so make sure you've installed it):
import { createClient } from 'contentful';const contentfulClient = createClient({ space: 'my-space-id', accessToken: 'my-access-token'});
You can get your space ID in the Contentful dashboard under “Space settings” > “General settings”.
You can get your access token from “Space settings” > “API keys” > “Content delivery/preview tokens” > “Add API Key”.
You can then pass in your props when creating
MyCommentBox, as shown here.
Back-end
We will implement our
/create-comment endpoint as an AWS Lambda function.
Prerequisites
To be able to build, preview, and eventually deploy these functions, we’re going to use the handy netlify-lambda npm package. It lets you write your Lambda functions as regular ES6 functions in a particular source directory, and then it builds them in a Lambda-friendly way and puts them in a destination directory, ready for deployment. Even better, it also allows us to preview these functions by deploying them locally.
So, you’ll need to create a particular source directory to store your function (e.g.
src/lambda), then create a
netlify.toml file in your root directory. Minimally, that file should look like this:
[build] Functions = "lambda"
The above tells
netlify-lambda which directory to put your built functions, meaning it will build the functions in
src/lambda and store them in
./lambda. Also, when it comes time to deploy, Netlify will look in the
./lambda directory to deploy to AWS.
To run your Lambda functions locally, use the following command:
netlify-lambda serve <source directory>
This will allow you to run your functions on{function-name}.
This is the default behavior, but it does not quite match what will happen in production, because it’s running our functions on a different domain from our front-end. In production, our functions will be available on the same domain as our front-end, via the URL
{domain}/.netlify/functions/{function-name}.
To replicate this behavior locally, we need to proxy front-end calls from
/.netlify/functions/{function-name} to{function-name}.
Accomplishing this differs based on your project setup. I will cover two popular setups:
For create-react-app projects, add the following to your
package.json:
"proxy": { "/.netlify/functions": { "target": "", "pathRewrite": { "^/\\.netlify/functions": "" } }}
For Gatsby.js projects, add the following to your
gatsby-config.js:
const proxy = require('http-proxy-middleware');...developMiddleware: app => { app.use( '/.netlify/functions/', proxy({ target: '', pathRewrite: { '/.netlify/functions/': '', } }) );},
For most other projects, you can leverage webpack’s dev server, which has proxy support.
Writing our function
Before we get to writing Lambda-specific code, we will first create a generic function to handle most of our logic. This way, our code remains portable beyond Lambda.
Let’s create a
createComment function:
const contentful = require('contentful-management');const client = contentful.createClient({ accessToken: process.env.CONTENTFUL_CONTENT_MANAGEMENT_ACCESS_TOKEN});
module.exports = function createComment( body, authorName, subjectId, parentCommentId = null) {
return client.getSpace('my-space-id') .then(space => space.getEnvironment('master')) .then(environment => environment.createEntry('comment', { fields: { body: { 'en-US': body }, author: { 'en-US': authorName }, subject: { 'en-US': subjectId }, parentComment: { 'en-US': { sys: { type: 'Link', linkType: 'Entry', id: parentCommentId } } } } })) .then(entry => entry.publish());};
You can put the above function someplace like a
utils directory. It uses the
contentful-management npm package to create and publish a new comment entry, and returns a promise. Notice we've specified our management API key as an environment variable. You definitely do not want to hard-code that one. When deploying to Netlify or anywhere else, be sure to check that your environment variables are set.
You can get your management access token from the Contentful dashboard at “Space settings” > “API keys” > “Content management tokens” > “Generate personal token”.
Now, let’s create our Lambda-specific function:
const createComment = require('../utils/createComment');
exports.handler = function (event, context, callback) {
const { body, authorName, subjectId, parentCommentId } = JSON.parse(event.body);
createComment(body, authorName, subjectId, parentCommentId) .then(entry => callback(null, { headers: { 'Content-Type': 'application/json' }, statusCode: 200, body: JSON.stringify({ message: 'OK' }) })) .catch(callback);};
Put this function in your Lambda source directory, and name the file with the path you’d want the URL to be, e.g.
create-comment.js . This will make your function available at the URL
/.netlify/functions/create-comment.
The big picture
To illustrate our complete front-end and back-end setup thus far, I’ve created a create-react-app project that functions as a readily-deployable, fully-functional example.
Notice that in the example project’s
netlify.toml file, there’s a few more lines that you should add to your own file.
Command tells Netlify what commands to run to build the project.
Publish tells Netlify where to find the static assets ready for deployment once the build is complete. You can read more about this file in Netlify's documentation.
The example project is also easily cloneable and deployable to your own Netlify account via the convenient deploy button in the README.
If you’ve been implementing this in your own project instead, head over to the Netlify dashboard and follow their straightforward instructions to set up your repo to deploy.
Once it’s up and running, you’ll be able to start commenting like a boss.
(Note: this is just a screenshot, so don’t try clicking on it ^_^)
Until next time
In Part 2, we’ll cover implementing logged-in commenting, as well as giving our comment box some super-cool text formatting functionality.
Thanks for reading! — Shaun
Originally published at shaunasaservice.com. | https://www.freecodecamp.org/news/how-you-can-build-your-own-free-serverless-comment-box-dc9d4f366d12/ | CC-MAIN-2019-43 | en | refinedweb |
If you have a program that execute from top to bottom, it will not be responsive and feasible to build complex applications. So .Net Framework offers some classes to create complex applications.
Introduction
If you have a program that executes from top to bottom, it will not be responsive and feasible to build complex applications. So, the .NET Framework offers some classes to create complex applications.
What is threading?
In short, thread is like a virtualized CPU which will help to develop complex applications.
Understanding threading
Suppose, you have a computer which only has one CPU capable of executing only one operation at a time. And, your application has a complex operation. So, in this situation, your application will take too much time. This means that the whole machine will freeze and appear unresponsive. So your application performance will decrease.
For protecting the performance, we will be multithreading in C#.NET. So, we will divide our program into different parts using threading. And you know every application run its own process in Windows. So every process will run in its own thread.
Thread Class
Thread class can be found in System.Threading namespace, using this class you can create a new thread and can manage your program like property, status etc. of your thread.
Example
The following code shows how to create thread in C#.NET
Explanation
You may observe that, I created a new Local Variable thread of ThreadStart delegate and pass loopTask as a parameter to execute it. This loopTask Function has a loop. And we create a new object myThread from Thread Class and pass Local Variable thread to Thread Constructor. And start the Thread using myThread.Start(); and Thread.Sleep(2000); will pause for 2000 milliseconds.
And finally the result will be This code also can be written in a more simple way like this.
In the above code, we are using lambda expression ( => ) for initialization.
Passing Value as Parameter
The Thread constructor has another overload that.
View All | https://www.c-sharpcorner.com/article/multithreading-in-c-sharp-net2/ | CC-MAIN-2019-43 | en | refinedweb |
Hi all,
I'm pleased to announce this personal project to convert a number to text, in spanish, english, catalan and russian.
the aim of this function is to convert numbers into text. It allows a maximum number of 15 digits.
Overview
The translation is done in several languages. The allowed languages are
- es: Spanish
- en: English
- ca: Catalan
- ru: Russian
The function also allows to treat the numbers of 109 (milliards) in English-speaking countries format. See the following link Billion Wikipedia
w ##class(NumberTranslate.NumberTranslate).GetText(123,.tSc) one hundred and twenty-three w ##class(NumberTranslate.NumberTranslate).GetText(123,.tSc,"es") ciento veintitres w ##class(NumberTranslate.NumberTranslate).GetText(123,.tSc,"ca") cent vint-i-tres w ##class(NumberTranslate.NumberTranslate).GetText(123,.tSc,"ru") Сто двадцать три w ##class(NumberTranslate.NumberTranslate).GetText(1000000000,.tSc,"en",1) one billion w ##class(NumberTranslate.NumberTranslate).GetText(1000000000,.tSc,"es",0) mil millones
Please, have a look the project in the following link:
How to install
Open link last Version 1.1.2 CosNumberTranslation_v1.1.2.xml
Right click and select "Save as..."
Download the file .xml
Load from terminal in your namespace (i.e. USER)
USER> do $System.OBJ.Load("c:\temp\CosNumberTranslation_v1.1.2.xml","cs")
check a number
USER> w ##class(NumberTranslate.NumberTranslate).GetText(123,.tSc) one hundred and twenty-three
I hope it is useful for your development.
Best regards,
Francisco López
Hi, Francisco!
Great stuff!
Could it be possible to add other languages support to your solution? E.g. Russian?
я попробую ;)
Que bueno! :)
But I would be happy even if you suggest the way how to contribute to your solution to introduce an another language support.
Hi Evgeny,
I'll publish another article explaining how it works and how other languages are added. I'll have to take etymology and grammar classes to understand how some languages are structured.
Hola Francisco,
You motivated me to do something similar for German.
It's is straightforward .int routine and you are welcome to add the code to your project.
GermanNumberToText
I did it up to 10e21, negatives and unlimited decimals. (except what is cut down due to internal limits)
I tried to catch all the irregular structures of the language like singular/plural, varying genders, upper/lower case
and tried to keep the output readable:
Minus einhundertneunzig Billionen einhundertdrei Millionen zweihundertein Tausend einhunderteins Komma drei neun null drei
For quick copy:
Updated to avoid failover from integer to floating format for large numbers (2018-06-28 16:34 UTC)
;;; convert number as German text
;;; w $$^zahl(-1123.505) >>>> Minus ein Tausend einhundertdreiundzwanzig Komma fünf null fünf
set dec=$p(num,".",2),dec=$s(dec?1.N:$$dec(dec),1:"")
if num=0 quit "null"_dec
set neg=$S(num<0:"Minus ",1:"")
if num=1 set gen=$zcvt(gen,"U") quit neg_"ein"_$case(gen,"W":"e","S":"es","M":"",:"s")
if num<10e23 quit neg_$$trd($p(num,"."))_dec
quit "*** Zahl zu groß ***"
}
;
dec(num) {
set dec=" Komma"
for p=1:1:$l(num) set dec=dec_" "_$$zig($e(num,p))
quit dec
}
zig(num) {
if num<10 quit $li($lb("null","eins","zwei","drei","vier","fünf","sechs","sieben","acht","neun"),num+1)
if num<20 quit $li($lb("zehn","elf","zwölf","dreizehn","vierzehn","fünfzehn","sechzehn","siebzehn","achtzehn","neunzehn"),num-9)
set zig=$e(num,*-1),zn=$e(num,*)
set res=$s(zig=3:"dreißig"
,1:$li($lb(,"zwan",,"vier","fünf","sech","sieb","acht","neun"),zig)_"zig")
if zn set res=$s(zn=1:"ein",1:$$zig(zn))_"und"_res
quit res
}
hun(num) {
set hun=$e(num,*-2),zig=$e(num,*-1,*),res="",m="hundert"
set res=$s(hun=1:"ein"_m
,hun>1:$$zig(hun)_m
,1:"" )
quit $replace(res_$$zig(zig),"null","")
}
ein(res) {
if $e(res,*-3,*)="eins" set res=$e(res,1,*-1)
quit $replace(res,"null","")
}
tsd(num) { ;1,000 10e3
set tsd=$e(num,*-5,*-3),hun=$e(num,*-2,*),res=""
if tsd set res=$$ein($$hun(tsd))_" Tausend "
quit res_$$hun(hun)
}
mio(num) { ;1,000,000 10e6
set mio=$e(num,*-8,*-6),tsd=$e(num,*-5,*),m=" Million"
set res=$s(mio=1:"eine"_m_" "
,mio>1:$$ein($$hun(mio))_m_"en "
,1:"")
quit res_$$tsd(tsd)
}
mrd(num) { ;1,000,000,000 10e9
set mrd=$e(num,*-11,*-9),mio=$e(num,*-8,*),m=" Milliarde"
set res=$s(mrd=1:"eine"_m_" "
,mrd>1:$$ein($$hun(mrd))_m_"n "
,1:"" )
quit res_$$mio(mio)
}
bio(num) { ;1,000,000,000,000 10e12
set bio=$e(num,*-14,*-12),mrd=$e(num,*-11,*),m=" Billion"
set res=$s(bio=1:"eine"_m_" "
,bio>1:$$ein($$hun(bio))_m_"en "
,1:"" )
quit res_$$mrd(mrd)
}
brd(num) { ;1,000,000,000,000,000 10e15
set brd=$e(num,*-17,*-15),bio=$e(num,*-14,*),res="",m=" Billiarde"
set res=$s(brd=1:"eine"_m_" "
,brd>1:$$ein($$hun(brd))_m_"n"_" "
,1:"" )
quit res_$$bio(bio)
}
tri(num) {;1,000,000,000,000,000,000 10e18
set tri=$e(num,*-20,*-18),brd=$e(num,*-17,*),m=" Trillion"
set res=$s(tri=1:"eine"_m_" "
,tri>1:$$ein($$hun(tri))_m_"en"_" "
,1:"" )
quit res_$$brd(brd)
}
trd(num) { ;1,000,000,000,000,000,000,000 10e21
set trd=$e(num,*-23,*-21),tri=$e(num,*-20,*),m=" Trilliarde"
set res=$s(trd=1:"eine"_m_" "
,trd>1:$$ein($$hun(trd))_m_"n"_" "
,1:"" )
quit res_$$tri(tri)
}
Good one!
You can actually use 10e21 in COS code, it's a valid number format.
You are right.
Though due to the internal limits, next time I would avoid \ and # operations in favor of $E() for the next version
As there are some strange effects in handling numerics due to normalization
write $$^zahl("1.3400") >> eins Komma drei vier null null
with large numbers exceeding 64bit integers the logic with integer division \ and modulo #
was causing wrong results. So I changed it to pure string interpretation.
Recommendation:
pass all numbers as strings to escape from numeric normalization dreihunderteins Komma sechs eins zwei drei eins null null
If you feel think this is exaggerated think about banking calculations for countries within low rated currencies.
Hi, Francisco!
Your repo was mirrored to intersystems-community/CosNumberTranslate
Thank you for the helpful app!
Hi Francisco,
I've downloaded the latest version 1.1.1 to add Portuguese-Brazil (pt-br) and Portuguese (pt) from Portugal, there are some differences as in Brazil shortscale notation is used while in Portugal they use longscale notation. There are also slightspelling differences.
The first problem I encountered with versions 1.1 and 1.1.1 was that both were exported from IRIS and Caché currently does not recognize the <Export generator="IRIS" ...> tag and fails to import the file. Editing the XML file and changing IRIS by CACHE solved the import issue.
It would be nice to have the XML exported as Caché so everyone will be able to import it.
There is an issue with some numbers like 12345 that returns an <INVALID OREF> because the OREF "obj" was killed. Large numbers in the trillion range return wrong results like 123456789123456.
I have these issues fixed and have also added the code for Portuguese (pt-br and pt). I'll send the changed classe to you for analysis.
Best Regards,
Ernesto
Hi, Ernesto! This is great!
You can make a Pull Request to the repo - so everyone can review/comment the changes too
Speaking about IRIS export - yes, it exports with <Export generator="IRIS"> which makes it complicated to import into Caché and Ensemble without manually changing the file.
I'd suggest @Francisco López to export releases for IRIS and Caché/Ensemble separately for now and invite @Stefan Wittmann and @Benjamin De Boe to share guidelines what is the best way to develop on IRIS and make it available for Caché/Ensemble too.
UPDATE: You can download via Open Exchange also,... | https://community.intersystems.com/post/translate-number-text | CC-MAIN-2019-51 | en | refinedweb |
I need to offer new users on our system a temporary password that is valid for only 48 hours. This is different than a 60-day password expiration window for existing users' passwords (where a password needs to be changed every 60 days), and is different than a "user expiration date", where you can set a date where the user's account expires and is disabled on that date, and different than the inactivity expiration date where a user becomes active if his account is not used within, say, 30 days.
What I need is a password-inactivity expiration date such that if the user does not log for a first time within the time limit (48 hours), then the user account is disabled - but not expired! - and then must be "reset" to a new password (whereupon the cycle begins again, and the user is disabled withint 48 hours if he doesn't log in and change his password!).
I'd prefer not to use the user expiration date, as this is not accessible when the user logs in (being in Security.Users in %SYS namespace), and would have to be removed separately.
This concept is not quite the same as a Time-based-One-Time-Password, and we don't need two-factor authentication.
Does InterSystems 2016 have this sort of setting, or do I need to put it into a password validation or some kind of login authentication routine?
Thanks for the help,
Laura
A task, of course... I'll have to go through the list of users and check their LastLoginTime/CreateTime, Enabled, etc. For future users who are looking for this, I'll use the SQL procedure Security.Users_Detail().
I did not know about the Security.Scan, but I'll take a look, and see what else I should tack on to it. Good idea to run our custom security task after the SecurityScan.
Thanks! Getting there. And perhaps InterSystems will add this concept in the 2017 (2018?) release.
Laura | https://community.intersystems.com/post/there-temporary-password-concept-cach%C3%A9-2016 | CC-MAIN-2019-51 | en | refinedweb |
.AN. Stoch RSI Waves Resonance paid
It is difficult to imagine trading without an oscillator. Here is my version of the well-known Stochastic RSI.
Indicator has option to display D, H4, H1 waves resonance or H4, H1, M15 for quick scalping trading.
No movement can be direct and eternal - there have always been and will be Waves ... /\/
It is important to see in what phase the daily wave is.
It would be foolish to trade a one-hour wave against the direction of a H4 or older Daily wave,
and on the contrary - it is very reasonable to trade when the waves add up in resonance ... / / /
... resonance moments are highlighted on the indicator by dots ...
these are close to ideal moments for entry .
Source code is not public. You can download the indicator only from the author’s website
P.S. my name is Alex, I'm not a big trader or programmer, but only a very diligent person and try to do good worthwhile things to at least slightly improve my very modest life. Need some support - so I post my work at least for a nominal price, please be understanding.
117 downloads
How to install
using System; using cAlgo.API; using cAlgo.API.Internals; using cAlgo.API.Indicators; using cAlgo.Indicators; namespace cAlgo { [Indicator(IsOverlay = true, TimeZone = TimeZones.UTC, AccessRights = AccessRights.None)] public class fxcoder : Indicator { [Parameter("Author’s website", DefaultValue = "")] public string Parameter { get; set; } // [Output("Main")] // public IndicatorDataSeries Result { get; set; } protected override void Initialize() { Chart.DrawStaticText("$", "Source code is not public. You can download the indicator only from the author’s website", VerticalAlignment.Top, HorizontalAlignment.Center, Color.GhostWhite); } public override void Calculate(int index) { // Calculate value at specified index // Result[index] = ... } } } | https://ctrader.com/algos/indicators/show/2059 | CC-MAIN-2019-51 | en | refinedweb |
Elastic Load Balancing (ELB) is an AWS service that distributes incoming application traffic across your Amazon EC2 backend instances, which may be in different Availability Zones. ELB helps ensure a smooth user experience and provides increased fault tolerance, handling traffic peaks and failed EC2 instances without interruption.
Datadog collects metrics and metadata from all three flavors of Elastic Load Balancers that AWS offers: Application, Classic, and Network Load Balancers.
If you haven’t already, set up the Amazon Web Services integration first.
In the AWS integration tile, ensure that ELB is checked under metric collection. Also check the ApplicationELB checkbox for Application ELB metrics and the NetworkELB checkbox for Network ELB metrics.
Add the following permissions to your Datadog IAM policy to collect Amazon ELB metrics. For more information on ELB policies, review the documentation on the AWS website.
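The concrete permission list is not reproduced in this excerpt. As a non-authoritative sketch, an IAM policy statement for this kind of metric collection typically grants read-only describe actions along the following lines — treat the action wildcard as an assumption and take the authoritative list from the AWS and Datadog documentation:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticloadbalancing:Describe*"
      ],
      "Resource": "*"
    }
  ]
}
```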
Install the Datadog - AWS ELB integration.
To collect your logs, first enable logging on your ELB or ALB. ALB and ELB logs can be written to an AWS S3 bucket and consumed by a Lambda function. For more information, refer to the AWS documentation.
Set the interval to 5 minutes and define your S3 buckets:
Choose the Object Created (All) event type, then click on the Add button.
Once done, go to the Log section of Datadog to start exploring your logs!
Metrics are collected under the following namespaces:
Each of the metrics retrieved from AWS is assigned the same tags that appear in the AWS console, including hostname, security groups, and more.
The AWS Elastic Load Balancing integration does not include any events.
The AWS Elastic Load Balancing integration does not include any service checks.
Need help? Contact Datadog support.
Learn more about how to monitor ELB performance metrics with our series of posts. We detail the key performance metrics, how to collect them, and how to use Datadog to monitor ELB.
Unique Constraints: How to work with the unique constraints bundle?
To use this bundle, first activate it on the server. The description can be found in the server section.
To use the Unique Constraint features on the client side, you need to reference the Raven.Client.UniqueConstraints assembly.
using Raven.Client.UniqueConstraints;
When creating the DocumentStore, you'll need to register the
UniqueConstraintsStoreListener in the store, as follows:
store.RegisterListener(new UniqueConstraintsStoreListener());
To define a unique constraint on a property use the
UniqueConstraint attribute as shown below:
private class User
{
    [UniqueConstraint]
    public string Name { get; set; }

    [UniqueConstraint]
    public string Email { get; set; }

    public string FirstName { get; set; }
    public string LastName { get; set; }
}
Extension methods
The bundle provides two extension methods for
IDocumentSession.
LoadByUniqueConstraint
Allows loading a document by its UniqueConstraint, returning null if the document doesn't exists.
User existingUser = session
    .LoadByUniqueConstraint<User>(x => x.Email, "john@gmail.com");
CheckForUniqueConstraints
Checks a document to see if its constraints are available on the server. It returns a
UniqueConstraintCheckResult containing the loaded docs and properties they are responsible for.
User user = new User
{
    Name = "John",
    Email = "john@gmail.com"
};

UniqueConstraintCheckResult<User> checkResult = session.CheckForUniqueConstraints(user);

// returns whether its constraints are available
if (checkResult.ConstraintsAreFree())
{
    session.Store(user);
}
else
{
    User existingUser = checkResult.DocumentForProperty(x => x.Email);
}
HTTPoxy - Is my Go application affected?
Environment
Red Hat Enterprise Linux 7.x
Issue
This issue applies when using Go in CGI mode. If a Go CGI script uses the "HTTP_PROXY" environment variable to configure an outgoing HTTP proxy for subsequent HTTP requests, or if your script makes use of a module or library exposing this behavior (for example, Go's "http" module), it is possible for all subsequent HTTP traffic stemming from within the Go CGI script to be redirected through an outside proxy under the attacker's control.
Please note that this is only an issue when the affected Go CGI script is deployed on a CGI-enabled HTTP server which provides the contents of the "Proxy" header of an incoming HTTP request via the "HTTP_PROXY" environment variable.
Resolution
Red Hat has issued updates and mitigation guides for HTTP servers, which prevent them from providing the contents of the HTTP "Proxy" header as the "HTTP_PROXY" environment variable. Updating your HTTP server or applying the mitigation will close this vector and prevent exploitation of this flaw.
To prevent the attacker-supplied header from being used, either of the following approaches can be used:
- Configure your Web Application Firewall to remove the "Proxy:" header
- Change your HTTP server configuration to remove the "Proxy:" header before Go scripts are invoked (see other knowledgebase articles linked from the main HTTPoxy article linked below).
- Make the following changes to your program:
Add "os" to the imports if not already present:
import "os"
Add near the top of your "main" function:
os.Unsetenv("HTTP_PROXY")
#include <opencv2/highgui/highgui_winrt.hpp>
Initializes container component that will be used to hold generated window content. | https://docs.opencv.org/master/d6/d2f/group__highgui__winrt.html | CC-MAIN-2019-51 | en | refinedweb |
Python external for controlling the Decibel ScorePlayer canvas mode
Project description
scoreplayer-external
A python module for drawing to the canvas mode of the Decibel ScorePlayer. This allows for OSC messages to be sent to canvas objects in a python-like manner. It requires the python-osc and zeroconf modules.
This is an early version and the documentation is currently incomplete. In the meantime, a paper explaining its use will be available in the proceedings of the 2018 Australian Computer Music Conference.
Basic Usage
First you need to create a scorePlayerExternal object and run the selectServer method.
from scoreplayer_external import scoreObject, scorePlayerExternal
import time
from threading import Event

finished = Event()
external = scorePlayerExternal()
external.selectServer()
canvas = external.connect(onConnect)
finished.wait()
external.shutdown()
This will check the network for any running iPad servers and prompt the user to connect to one. The drawing commands themselves should be placed into the connection handler method that is passed to the external.connect method.
Some sample drawing commands. Stay tuned for more documentation.
def onConnect():
    canvas.clear()
    scroll = canvas.addScroller('scroll', 1, 10, 10, 300, 300, 500, 20.0)
    scroll.loadImage('modulation.png')
    line = canvas.addLayer('line', 1, 20, 10, 5, 300)
    line.setColour(0, 0, 0)
    clef = scroll.addGlyph('clef', 1, 100, 100)
    clef.setGlyphSize(72)
    clef.setGlyph('fClef')
    bunny = canvas.addLayer('bunny', 0, 200, 200, 300, 300)
    bunny.loadImage('distortion.png', 1)
    line2 = canvas.addLine('line2', 0, 400, 400, 500, 500, 2)
    scroll.start()
    scroll.fade(0, 5)
    bunny.move(100, 400, 8)
    time.sleep(5)
    scroll.fade(1, 5)
    time.sleep(2)
    scroll.stop()
    scroll.setScrollerPosition(0)
    line2.setStartPoint(400, 500)
    line2.setColour(255, 0, 0)
    finished.set()
Class TextConsolePage
- java.lang.Object
- org.eclipse.ui.console.TextConsolePage
- All Implemented Interfaces:
EventListener,
IAdaptable,
IPropertyChangeListener,
IPage,
IPageBookViewPage,
IPropertyChangeListener
public class TextConsolePage extends Object implements IPageBookViewPage, IPropertyChangeListener, IAdaptable
A page for a text console.
Clients may contribute actions to the context menu of a text console page using the org.eclipse.ui.popupMenus extension point. The context menu identifier for a text console page is the associated console's type suffixed with .#ContextMenu. When a console does not specify a type, the context menu id is #ContextMenu.
Clients may subclass this class.
- Since:
- 3.1
Constructor Detail
TextConsolePage
public TextConsolePage(TextConsole console, IConsoleView view)
Constructs a text console page for the given console in the given view.
- Parameters:
console - text console
view - console view the page is contained in
Method Detail
createViewer
protected TextConsoleViewer createViewer(Composite parent)
Returns a viewer used to display the contents of this page's console.
- Parameters:
parent - container for the viewer
- Returns:
- a viewer used to display the contents of this page's console
getSite
public IPageSite getSite()
Description copied from interface: IPageBookViewPage
Returns the site for this page. May be null if no site has been set.
- Specified by:
getSite in interface IPageBookViewPage
- Returns:
- the page site or null
init
public void init(IPageSite pageSite) throws PartInitException
Description copied from interface: IPageBookViewPage
Initializes this page with the given page site.
This method is automatically called by the workbench shortly after page construction. It marks the start of the page's lifecycle. Clients must not call this method.
- Specified by:
init in interface IPageBookViewPage
- Parameters:
pageSite - the page site
- Throws:
PartInitException - if this page was not initialized successfully
updateSelectionDependentActions
protected void updateSelectionDependentActions()
Updates selection dependent actions.
createControl
public void createControl(Composite parent)
Creates the SWT control for this page under the given parent control.
Clients should not call this method (the workbench calls this method when it needs to, which may be never).
- Specified by:
createControlin interface
IPage
- Parameters:
parent- the parent control
dispose
public void dispose()
Disposes of this page.
This is the last method called on the IPage. Implementors should clean up any resources associated with the page.
Callers of this method should ensure that the page's control (if it exists) has been disposed before calling this method. However, for backward compatibility, implementors must also ensure that the page's control has been disposed before this method returns.
Note that there is no guarantee that createControl() has been called, so the control may never have been created.
getControl
public Control getControl()
Returns the SWT control for this page.
- Specified by:
getControl in interface IPage
- Returns:
- the SWT control for this page, or null if this page does not have a control
setActionBars
public void setActionBars(IActionBars actionBars)
Allows the page to make contributions to the given action bars. The contributions will be visible when the page is visible.
This method is automatically called shortly after createControl is called.
- Specified by:
setActionBars in interface IPage
- Parameters:
actionBars - the action bars for this page
setFocus
public void setFocus()
Asks this page to take focus within its pagebook view.
createActions
protected void createActions()
Creates actions.
setGlobalAction
protected void setGlobalAction(IActionBars actionBars, String actionID, IAction action)
Configures an action for key bindings.
- Parameters:
actionBars - action bars for this page
actionID - action definition id
action - associated action
getAdapter
public <T> T getAdapter(Class<T> required)
- Parameters:
required - the adapter class to look up
- Returns:
- an object of the given class, or null if this object does not have an adapter for the given class
getConsoleView
protected IConsoleView getConsoleView()
Returns the view this page is contained in.
- Returns:
- the view this page is contained in
getConsole
protected IConsole getConsole()
Returns the console this page is displaying.
- Returns:
- the console this page is displaying
updateAction
protected void updateAction(String actionId)
Updates the global action with the given id.
- Parameters:
actionId - action definition id
contextMenuAboutToShow
protected void contextMenuAboutToShow(IMenuManager menuManager)
Fill the context menu.
- Parameters:
menuManager - menu
configureToolBar
protected void configureToolBar(IToolBarManager mgr)
getViewer
public TextConsoleViewer getViewer()
Returns the viewer contained in this page.
- Returns:
- the viewer contained in this page
setViewer
public void setViewer(TextConsoleViewer viewer)
Sets the viewer contained in this page.
- Parameters:
viewer - text viewer
Code formatter for Scala
Hello all. I have intellij 2019.2 and the following .scalafmt.conf file
version = 2.2.2
maxColumn = 100
includeNoParensInSelectChains = true
However, intellij popup error says
Failed to read /Users/someuser/workspace/someproject/.scalafmt.conf. Invalid fields: includeNoParensInSelectChains
Any ideas?
def foo = Bar(x => ... )
@wanderlustzoe I didn't understand the first code sample. If it's something like
def foo = Bar( x => ... )
then we have an open PR, scalameta/scalafmt#1551, for your request
Hi all, is there a way to create a scalafmt instance without using the classLoader?
val scalafmt = Scalafmt.create(this.getClass.getClassLoader)
I have a small project that generates Scala code as text. I try to format it with the
scalafmt.format method, running the program in an sbt loop (save - compile - run).
The getClassLoader together with sbt creates a memory leak, it seems. From googling, this seems to be a very bad bug that's not easy to get rid of...
Hello everyone :). Is there a way to NOT have newlines for single case-statements as e.g. in:
configValues.toList.traverse { case (k, v) => /*…*/ }
Scalafmt currently always gives me sth. like this:
configValues.toList
  .traverse { case (k, v) => /*..*/ }
Couldn’t find anything in the documentation.
hi,
I'm trying to add scalafmt to an existing project.
I only want to run scalafmt on changed lines (not the complete file). How can I do this?
I tried
scalafmt --mode diff and
scalafmt --mode changed, but they format the complete file.
P.S: I want to add scalafmt to a pre-commit hook.
I have:
verticalMultiline.atDefnSite = true
verticalMultiline.newlineAfterOpenParen = true
verticalMultiline.newlineAfterImplicitKW = true
but I can’t get the newline before the parenthesis that close the implicit parameter list
So this works for a method:
def format(
    code: String,
    age: Int
  )(
    implicit ev: Parser,
    c: Context
  ): String
Is it possible to get this for a class?
class Format(
    code: String,
    age: Int
  )(
    implicit ev: Parser,
    c: Context
  ) extends Foo {
Instead I get this:
class Format(
    code: String,
    age: Int
  )(
    implicit ev: Parser,
    c: Context) extends Foo
Which looks strange
object X { before the first declaration in the object
(the real names are longer but the pattern repeats throughout the codebase)
 object Snapshot {
+  implicit val circeEncoderSnapshot
-    : circe.Encoder[Snapshot] = deriveEncoder[Snapshot]
+    : circe.Encoder[Snapshot] = deriveEncoder[Snapshot]
git diff --shortstat
 1278 files changed, 32562 insertions(+), 20440 deletions(-)
// scalafmt 1.5.1
style = default
maxColumn = 80
continuationIndent.callSite = 2
continuationIndent.defnSite = 4
align = some
align.arrowEnumeratorGenerator = false
align.openParenCallSite = true
align.tokens = [caseArrow]
assumeStandardLibraryStripMargin = false
docstrings = ScalaDoc
newlines.alwaysBeforeTopLevelStatements = true
newlines.sometimesBeforeColonInMethodReturnType = true
binPack.parentConstructors = true
lineEndings = unix
includeCurlyBraceInSelectChains = true
optIn.breakChainOnFirstMethodDot = true
optIn.annotationNewlines = true
newlines.penalizeSingleSelectMultiArgList = true
binPack.literalArgumentLists = true
runner.optimizer.forceConfigStyleOnOffset = 150
rewriteTokens = {"⇒":"=>","←":"<-"}
rewrite.rules = [RedundantBraces, PreferCurlyFors]
trailingCommas = preserve (I think)
ScalafmtConfig and related, but I can't figure out yet how to prevent these diffs
Difference between revisions of "Linux Tools Project/Git"
Revision as of 12:54, 15 August 2011
Branches
Please ensure that you fetch and rebase rather than merge. When new branches are created, please use the ["Rebase" pull strategy]. This is similar to the commandline:
git config branch.<branchname>.rebase true
- Branches created for bug fixes
- prefix the name with the bug # and a 'very' short description (ex. 307258-automake-tabs-to-spaces)
- Release branches
- stable-Major.Minor
- Branches specific to a sub-project
- namespaced (ex. valgrind/remote, lttng/super-awesome-feature)
- Tag each release with vMajor.Minor.Micro
- See Semantic Versioning for more details | http://wiki.eclipse.org/index.php?title=Linux_Tools_Project/Git&diff=264655&oldid=264654 | CC-MAIN-2019-51 | en | refinedweb |
music or video. So, to do this, the largest source of videos in the world was considered, and the functionality to play YouTube videos was added to the app.
Since the app now has two build flavors, corresponding to the FDroid version and the PlayStore version respectively, it had to be ensured that adding the feature to play YouTube videos did not pull any proprietary software into the FDroid version. Google provides a YouTube API that can be used to play videos inside the app itself instead of passing an intent and playing the videos in the YouTube app.
Steps to integrate youtube api in SUSI.AI
The first step was to create an environment variable that stores the API key, so that each developer who wants to test this functionality can just create an API key from the Google API console and put it in the build.gradle file in the line below
def YOUTUBE_API_KEY = System.getenv(‘YOUTUBE_API_KEY’) ?: ‘”YOUR_API_KEY”‘
In the build.gradle file, a buildConfigField named API_KEY was created so that it can be used whenever we need the API key in the code. The buildConfigField was declared for both the release and debug build types as:
The second step is to catch the audio and video action types in the ParseSusiResponseHelper.kt file, which was done by adding two constants, "video_play" and "audio_play", in the Constant class. These actions were easily caught in the app as:
The third step involves making a ViewHolder class for the audio and video play actions. A simple layout was made for this ViewHolder, which can display the thumbnail of the video and has a play button on the thumbnail, clicking which plays the YouTube video. Also, in the ChatFeedRecyclerAdapter we need to specify the actions as audio play and video play and then load the specific ViewHolder for the YouTube videos whenever the response from the server for this particular action is fetched. The YoutubeViewHolder.java file describes the class that displays the thumbnail of the YouTube video whenever a response arrives.
The above method shows how the thumbnail is set for a particular youtube video.
The fourth step is to pass the response from the server in the ChatPresenter to play the video asked for by the user. This is achieved by calling a function declared in the IChatView so that the YouTube video will be played after fetching the response from the server:
The fifth step is a bit complicated. As we know, the app contains two flavors: one for the FDroid version, which contains only open-source libraries and software, and the second for the PlayStore version, which may keep proprietary software. Since the YouTube API is a piece of proprietary software, we cannot include its implementation in the FDroid version, so different methods were devised to play YouTube videos in the two versions. Since we use flavors and want the YouTube API compiled and included only in the PlayStore version, the YouTube dependency was updated as:
This ensures that the library is only included in the playStore version. But at this step another problem is encountered: now the code in the two versions will be different, so an encapsulation method was devised that enables the app to keep separate code in each flavor.
To keep different code for the two variants, directories named fdroid and playStore were created in the src folder, the package name was added, and then a java folder was created in each directory in which the separate code for each flavor was kept. An interface was created in the main directory as a means to provide encapsulation, so that it could be implemented differently in each flavor to provide different functionality. The interface was created as:
An object of the interface was initialised in the ChatActivity, and a separate class named YoutubeVid.kt was made in each flavor, implementing the interface IYoutubeVid. So now, depending on the build variant, the YoutubeVid class of the particular flavor is called, and the video plays according to the sources available in that flavor.
In ChatActivity, the following implementation was followed:
Declaration:
Initialisation:
Call to the overridden function:
Final Output
References
- YouTube Android Player API
- Building apps with product flavors – Tomoaki Imai
- Kotlin style guide
EMU_DCDCInit_TypeDef Struct Reference
DCDC initialization structure.
#include <em_emu.h>
DCDC initialization structure.
Field Documentation
◆ powerConfig
Device external power configuration.
emuPowerConfig_DcdcToDvdd is currently the only supported mode.
◆ dcdcMode
DCDC regulator operating mode in EM0/1.
◆ mVout
Target output voltage (mV).
◆ em01LoadCurrent_mA
Estimated average load current in EM0/1 (mA).
This estimate is also used for EM1 optimization; if EM1 current is expected to be higher than EM0, then this parameter should hold the higher EM1 current.
◆ em234LoadCurrent_uA
Estimated average load current in EM2 (uA).
This estimate is also used for EM3 and 4 optimization; if EM3 or 4 current is expected to be higher than EM2, then this parameter should hold the higher EM3 or 4 current.
◆ maxCurrent_mA
Maximum average DCDC output current (mA).
This can be set to the maximum for the power source, for example the maximum for a battery.
◆ anaPeripheralPower
Select analog peripheral power in DCDC-to-DVDD mode.
◆ reverseCurrentControl
Low-noise reverse current control.
NOTE: this parameter uses special encoding: >= 0 is forced CCM mode where the parameter is used as the reverse current threshold in mA. -1 is encoded as emuDcdcLnHighEfficiencyMode (EFM32 only).
◆ dcdcLnCompCtrl
DCDC Low-noise mode compensator control. | https://docs.silabs.com/gecko-platform/3.0/emlib/api/efr32xg13/struct-e-m-u-d-c-d-c-init-type-def | CC-MAIN-2021-04 | en | refinedweb |
Help:Draft
RationalWiki allows you to create new articles in the Draft namespace. This allows people to work on articles which are not ready to "go live" or "air" in the main RationalWiki namespace.
Anyone can edit a draft article, so if you want to work on an article by yourself, you should create it in your sandbox instead.
Creating[edit]
To create a draft article, you can do a search for "Draft:Article name". If no article exists with that title, it will show up as a redlink (like this: redlink), and you can create a new article.
Moving to draft[edit]
An article can be moved to draft namespace if it is considered not ready for mainspace, for example if it is lacking references, is badly formatted, or is very incomplete. It should be moved using the relocate option, also used to change page titles, as described at Help:Page title.
Once properly edited it can be moved back to mainspace in the same way. If you are unsure if an article is good enough for mainspace, consult a moderator or other experienced user or ask in the Saloon bar.
Existing draft articles[edit]
There are some draft articles kicking around in a state of limbo (like dead babies floating around the Pope's head). Here is a list of all of them. You can search for one using the site search facility.
Warning[edit]
Even though draft articles have a lower standard than regular RationalWiki articles, they should still have the potential of being turned into proper articles. They should not be hopelessly off-topic, and should not be libellous or otherwise naughty articles. | https://rationalwiki.org/wiki/Help:Draft | CC-MAIN-2021-04 | en | refinedweb |
Getting Started with the VIPER Architecture Pattern
In this tutorial, you’ll learn about using the VIPER architecture pattern with SwiftUI and Combine, while building an iOS app that lets users create road trips.
Version
- Swift 5, iOS 13, Xcode 11
The VIPER architectural pattern is an alternative to MVC or MVVM. And while the SwiftUI and Combine frameworks create a powerful combination that makes quick work of building complex UIs and moving data around an app, they also come with their own challenges and opinions about architecture.
It’s a common belief that all of the app logic should now go into a SwiftUI view, but that’s not the case.
VIPER offers an alternative to this scenario and can be used in conjunction with SwiftUI and Combine to help build apps with a clean architecture that effectively separates the different functions and responsibilities required, such as the user interface, business logic, data storage and networking. These are then easier to test, maintain and expand.
In this tutorial, you’ll build an app using the VIPER architecture pattern. The app is also conveniently called VIPER: Visually Interesting Planned Easy Roadtrips. Clever, right? :]
It will allow users to build out road trips by adding waypoints to a route. Along the way, you’ll also learn about SwiftUI and Combine for your iOS projects.
Getting Started
Download the project materials from the Download Materials button at the top or bottom of the tutorial. Open the starter project. This includes some code to get you started:
- The
ContentViewwill launch the app’s other views as you build them.
- There are some helper views in the Functional Views group: one for wrapping the MapKit map view, a special “split image” view, which is used by the
TripListCell. You’ll be adding these to the screen in a little bit.
- In the Entities group, you’ll see the classes related to the data model. Trip and Waypoint will serve later as the Entities of the VIPER architecture. As such, they just hold data and don’t include any functional logic.
- In the Data Sources group, there are the helper functions for saving or loading data.
- Peek ahead if you like in the WaypointModule group. This has a VIPER implementation of the Waypoint editing screen. It’s included with the starter so you can complete the app by the end of this tutorial.
This sample uses Pixabay, a permissively licensed photo-sharing site. To pull images into the app, you’ll need to create a free account and obtain an API key.
Follow the instructions here to create an account. Then, copy your API key into the
apiKey variable found in ImageDataProvider.swift. You can find it in the Pixabay API docs under Search Images.
If you build and run now, you won’t see anything too interesting.
However, by the end of the tutorial, you’ll have a fully functional road-trip planning app.
What is VIPER?
VIPER is an architectural pattern like MVC or MVVM, but it separates the code further by single responsibility. Apple-style MVC motivates developers to put all logic into a
UIViewController subclass. VIPER, like MVVM before it, seeks to fix this problem.
Each of the letters in VIPER stand for a component of the architecture: View, Interactor, Presenter, Entity and Router.
- The View is the user interface. This corresponds to a SwiftUI
View.
- The Interactor is a class that mediates between the presenter and the data. It takes direction from the presenter.
- The Presenter is the “traffic cop” of the architecture, directing data between the view and interactor, taking user actions and calling to router to move the user between views.
- An Entity represents application data.
- The Router handles navigation between screens. That’s different than it is in SwiftUI, where the view shows any new views.
This separation is borne out of “Uncle” Bob Martin’s Clean Architecture paradigm.
When you look at the diagram, you can see there’s a complete path for the data to flow between the view and entities.
SwiftUI has its own opinionated way of doing things. The mapping of VIPER responsibilities onto domain objects will be different if you compare this to tutorials for UIKit apps.
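To make those boundaries concrete, here is a minimal sketch of the roles expressed as Swift protocols. These protocol names are not from the tutorial's project; they only capture who is allowed to talk to whom.

```swift
import SwiftUI

// Illustrative sketch only: the tutorial's classes don't adopt these
// protocols, but they capture the allowed lines of communication.
protocol InteractorRole {            // mediates between presenter and data
  associatedtype Model
  var model: Model { get }           // the only layer that touches entities
}

protocol PresenterRole: ObservableObject {  // directs data between view and interactor
  associatedtype I: InteractorRole
  var interactor: I { get }
}

protocol RouterRole {                // builds destination views for navigation
  associatedtype Destination: View
  func makeDestinationView() -> Destination
}
```

Note that nothing here lets a view reach the model directly; that restriction is the whole point of the pattern.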
Comparing Architectures
People often discuss VIPER with MVC and MVVM, but it is different from those patterns.
MVC, or Model-View-Controller, is the pattern most people associate with 2010’s iOS app architecture. With this approach, you define the View in a storyboard, and the Controller is an associated
UIViewController subclass. The Controller modifies the View, accepts user input and interacts directly with the Model. The Controller bloats with view logic and business logic.
MVVM is a popular architecture that separates the view logic from the business logic in a View Model. The view model interacts with the Model.
The big difference is that a view model, unlike a view controller, only has a one-way reference to the view and to the model. MVVM is a good fit for SwiftUI, and there is a whole tutorial on the topic.
VIPER goes a step further by separating the view logic from the data model logic. Only the presenter talks to the view, and only the interactor talks to the model (entity). The presenter and interactor coordinate with each other. The presenter is concerned with display and user action, and the interactor is concerned with manipulating the data.
Defining an Entity
VIPER is a fun acronym for this architecture, but its order isn’t proscriptive.
The fastest way to get something on screen is to start with the entity. The entity is the data object(s) for the project. In this case, the main entities are Trip, which contains a list of Waypoints, which are the stops in the trip.
The app contains a DataModel class that holds a list of trips. The model uses a JSON file for local persistence, but you can replace this by a remote back end without having to modify any of the UI-level code. That’s one of the advantages of clean architecture: When you change one part — like the persistence layer — it’s isolated from other areas of the code.
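The starter's DataModel code isn't reproduced in this tutorial, but based on how it is used later (a published trips array, pushNewTrip() and save()), a simplified sketch might look like this. The Trip initializer shown is an assumption:

```swift
import Combine

// Simplified sketch of the starter's persistence layer (shape inferred,
// not copied from the starter project).
final class DataModel: ObservableObject {
  @Published var trips: [Trip] = []

  func pushNewTrip() {
    // Assumed initializer; the starter may construct trips differently.
    trips.insert(Trip(name: "New Trip"), at: 0)
  }

  func save() {
    // Persist `trips` to the local JSON file. Swapping this for a remote
    // back end would not touch the presenter or view layers.
  }
}
```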
Adding an Interactor
Create a new Swift File named TripListInteractor.swift.
Add the following code to the file:
class TripListInteractor {
  let model: DataModel

  init (model: DataModel) {
    self.model = model
  }
}
This creates the interactor class and assigns it a
DataModel, which you’ll use later.
Setting Up the Presenter
Now, create a new Swift File named TripListPresenter.swift. This will be for the presenter class. The presenter cares about providing data to the UI and mediating user actions.
Add this code to the file:
import SwiftUI
import Combine

class TripListPresenter: ObservableObject {
  private let interactor: TripListInteractor

  init(interactor: TripListInteractor) {
    self.interactor = interactor
  }
}
This creates a presenter class that has reference to the interactor.
Since it’s the presenter’s job to fill the view with data, you want to expose the list of trips from the data model.
Add a new variable to the class:
@Published var trips: [Trip] = []
This is the list of trips the user will see in the view. By declaring it with the
@Published property wrapper, the view will be able to listen to changes to the property and update itself automatically.
The next step is to synchronize this list with the data model from the interactor. First, add the following helper property:
private var cancellables = Set<AnyCancellable>()
This set is a place to store Combine subscriptions so their lifetime is tied to the class’s. That way, any subscriptions will stay active as long as the presenter is around.
Add the following code to the end of
init(interactor:):
interactor.model.$trips
  .assign(to: \.trips, on: self)
  .store(in: &cancellables)
interactor.model.$trips creates a publisher that tracks changes to the data model’s
trips collection. Its values are assigned to this class’s own
trips collection, creating a link that keeps the presenter’s trips updated when the data model changes.
Finally, this subscription is stored in
cancellables so you can clean it up later.
Building a View
You now need to build out the first View: the trip list view.
Creating a View with a Presenter
Create a new file from the SwiftUI View template and name it TripListView.swift.
Add the following property to
TripListView:
@ObservedObject var presenter: TripListPresenter
This links the presenter to the view. Next, fix the previews by changing the body of
TripListView_Previews.previews to:
let model = DataModel.sample
let interactor = TripListInteractor(model: model)
let presenter = TripListPresenter(interactor: interactor)
return TripListView(presenter: presenter)
Now, replace the content of
TripListView.body with:
List {
  ForEach (presenter.trips, id: \.id) { item in
    TripListCell(trip: item)
      .frame(height: 240)
  }
}
This creates a
List where the presenter’s trips are enumerated, and it generates a pre-supplied
TripListCell for each.
Modifying the Model from the View
So far, you’ve seen data flow from the entity to the interactor through the presenter to populate the view. The VIPER pattern is even more useful when sending user actions back down to manipulate the data model.
To see that, you’ll add a button to create a new trip.
First, add the following to the class in TripListInteractor.swift:
func addNewTrip() { model.pushNewTrip() }
This wraps the model’s
pushNewTrip(), which creates a new
Trip at the top of the trips list.
Then, in TripListPresenter.swift, add this to the class:
func makeAddNewButton() -> some View {
  Button(action: addNewTrip) {
    Image(systemName: "plus")
  }
}

func addNewTrip() {
  interactor.addNewTrip()
}
This creates a button with the system
+ image with an action that calls
addNewTrip(). This forwards the action to the interactor, which manipulates the data model.
Go back to TripListView.swift and add the following after the
List closing brace:
.navigationBarTitle("Roadtrips", displayMode: .inline) .navigationBarItems(trailing: presenter.makeAddNewButton())
This adds the button and a title to the navigation bar. Now modify the
return in
TripListView_Previews as follows:
return NavigationView {
  TripListView(presenter: presenter)
}
This allows you to see the navigation bar in preview mode.
Resume the live preview to see the button.
Seeing It In Action
Now’s a good time to go back and wire up
TripListView to the rest of the application.
Open ContentView.swift. In the body of
view, replace the
VStack with:
TripListView(presenter: TripListPresenter(interactor: TripListInteractor(model: model)))
This creates the view along with its presenter and interactor. Now build and run.
Tapping the + button will add a New Trip to the list.
Deleting a Trip
Users who create trips will probably also want to be able to delete them in case they make a mistake or when the trip is over. Now that you’ve created the data path, adding additional actions to the screen is straightforward.
In
TripListInteractor, add:
func deleteTrip(_ index: IndexSet) { model.trips.remove(atOffsets: index) }
This removes items from the
trips collection in the data model. Because it’s an
@Published property, the UI will automatically update because of its subscription to the changes.
In
TripListPresenter, add:
func deleteTrip(_ index: IndexSet) { interactor.deleteTrip(index) }
This forwards the delete command on to the interactor.
Finally, in
TripListView, add the following after the end brace of the
ForEach:
.onDelete(perform: presenter.deleteTrip)
Adding an
.onDelete to an item in a SwiftUI
List automatically enables the swipe to delete behavior. The action is then sent to the presenter, kicking off the whole chain.
Build and run, and you’ll now be able to remove trips!
Routing to the Detail View
Now’s the time to add in the Router part of VIPER.
A router will allow the user to navigate from the trip list view to the trip detail view. The trip detail view will show a list of the waypoints along with a map of the route.
The user will be able to edit the list of waypoints and the trip name from this screen.
Setting Up the Trip Detail Screens
Before showing the detail screen, you’ll need to create it.
Following the previous example, create two new Swift Files: TripDetailPresenter.swift and TripDetailInteractor.swift and a SwiftUI View named TripDetailView.swift.
Set the contents of
TripDetailInteractor to:
import Combine
import MapKit

class TripDetailInteractor {
  private let trip: Trip
  private let model: DataModel
  let mapInfoProvider: MapDataProvider

  private var cancellables = Set<AnyCancellable>()

  init (trip: Trip, model: DataModel, mapInfoProvider: MapDataProvider) {
    self.trip = trip
    self.mapInfoProvider = mapInfoProvider
    self.model = model
  }
}
This creates a new class for the interactor of the trip detail screen. This interacts with two data sources: an individual
Trip and Map information from MapKit. There’s also a set for the cancellable subscriptions that you’ll add later.
Then, in
TripDetailPresenter, set its contents to:
import SwiftUI
import Combine

class TripDetailPresenter: ObservableObject {
  private let interactor: TripDetailInteractor
  private var cancellables = Set<AnyCancellable>()

  init(interactor: TripDetailInteractor) {
    self.interactor = interactor
  }
}
This creates a stub presenter with a reference for interactor and cancellable set. You’ll build this out in a bit.
In
TripDetailView, add the following property:
@ObservedObject var presenter: TripDetailPresenter
This adds a reference to the presenter in the view.
To get the previews building again, change that stub to:
static var previews: some View {
  let model = DataModel.sample
  let trip = model.trips[1]
  let mapProvider = RealMapDataProvider()
  let presenter = TripDetailPresenter(interactor: TripDetailInteractor(
    trip: trip,
    model: model,
    mapInfoProvider: mapProvider))
  return NavigationView {
    TripDetailView(presenter: presenter)
  }
}
Now the view will build, but the preview is still just “Hello, World!”
Routing
Before building out the detail view, you’ll want to link it to the rest of the app through a router from the trip list.
Create a new Swift File named TripListRouter.swift.
Set its contents to:
import SwiftUI

class TripListRouter {
  func makeDetailView(for trip: Trip, model: DataModel) -> some View {
    let presenter = TripDetailPresenter(interactor: TripDetailInteractor(
      trip: trip,
      model: model,
      mapInfoProvider: RealMapDataProvider()))
    return TripDetailView(presenter: presenter)
  }
}
This class outputs a new
TripDetailView that’s been populated with an interactor and presenter. The router handles transitioning from one screen to another, setting up the classes needed for the next view.
In an imperative UI paradigm — in other words, with UIKit — a router would be responsible for presenting view controllers or activating segues.
SwiftUI declares all of the target views as part of the current view and shows them based on view state. To map VIPER onto SwiftUI, the view is now responsible for showing/hiding of views, the router is a destination view builder, and the presenter coordinates between them.
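For contrast, a UIKit-style router performs the navigation itself rather than building a destination view. A rough sketch, with a hypothetical TripDetailViewController, might look like:

```swift
import UIKit

// Hypothetical UIKit-era router: it pushes view controllers directly
// instead of returning a destination view for SwiftUI to present.
class TripListUIKitRouter {
  weak var navigationController: UINavigationController?

  func routeToDetail(for trip: Trip, model: DataModel) {
    let detail = TripDetailViewController() // assumed view controller
    // ...inject the detail presenter/interactor here, as in the SwiftUI version...
    navigationController?.pushViewController(detail, animated: true)
  }
}
```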
In TripListPresenter.swift, add the router as a property:
private let router = TripListRouter()
You’ve now created the router as part of the presenter.
Next, add this method:
func linkBuilder<Content: View>(
    for trip: Trip,
    @ViewBuilder content: () -> Content
  ) -> some View {
  NavigationLink(
    destination: router.makeDetailView(
      for: trip,
      model: interactor.model)) {
    content()
  }
}
This creates a
NavigationLink to a detail view the router provides. When you place it in a
NavigationView, the link becomes a button that pushes the
destination onto the navigation stack.
The
content block can be any arbitrary SwiftUI view. But in this case, the
TripListView will provide a
TripListCell.
Go to TripListView.swift and change the contents of the
ForEach to:
self.presenter.linkBuilder(for: item) {
  TripListCell(trip: item)
    .frame(height: 240)
}
This uses the
NavigationLink from the presenter, sets the cell as its content and puts it in the list.
Build and run, and now, when the user taps the cell, it will route them to a “Hello World”
TripDetailView.
Finishing Up the Detail View
There are a few trip details you still need to fill out the detail view so the user can see the route and edit the waypoints.
Start by adding a the trip title:
In
TripDetailInteractor, add the following properties:
var tripName: String { trip.name }
var tripNamePublisher: Published<String>.Publisher { trip.$name }
This exposes just the
String version of the trip name and a
Publisher for when that name changes.
Also, add the following:
func setTripName(_ name: String) {
  trip.name = name
}

func save() {
  model.save()
}
The first method allows the presenter to change the trip name, and the second will save the model to the persistence layer.
Now, move onto
TripDetailPresenter. Add the following properties:
@Published var tripName: String = "No name"
let setTripName: Binding<String>
These provide the hooks for the view to read and set the trip name.
Then, add the following to the
init method:
// 1
setTripName = Binding<String>(
  get: { interactor.tripName },
  set: { interactor.setTripName($0) }
)

// 2
interactor.tripNamePublisher
  .assign(to: \.tripName, on: self)
  .store(in: &cancellables)
This code:
- Creates a binding to set the trip name. The
TextFieldwill use this in the view to be able to read and write from the value.
- Assigns the trip name from the interactor’s publisher to the
tripNameproperty of the presenter. This keeps the value synchronized.
Separating the trip name into properties like this allows you to synchronize the value without creating an infinite loop of updates.
Next, add this:
func save() { interactor.save() }
This adds a save feature so the user can save any edited details.
Finally, go to
TripDetailView, and replace the
body with:
var body: some View { VStack { TextField("Trip Name", text: presenter.setTripName) .textFieldStyle(RoundedBorderTextFieldStyle()) .padding([.horizontal]) } .navigationBarTitle(Text(presenter.tripName), displayMode: .inline) .navigationBarItems(trailing: Button("Save", action: presenter.save)) }
The
VStack for now holds a
TextField for editing the trip name. The navigation bar modifiers define the title using the presenter’s published
tripName, so it updates as the user types, and a save button that will persist any changes.
Build and run, and now, you can edit the trip title.
Save after editing the trip name, and the changes will appear after you relaunch the app.
Using a Second Presenter for the Map
Adding additional widgets to a screen will follow the same pattern of:
- Adding functionality to the interactor.
- Bridging the functionality through the presenter.
- Adding the widgets to the view.
Go to
TripDetailInteractor, and add the following properties:
@Published var totalDistance: Measurement<UnitLength> = Measurement(value: 0, unit: .meters) @Published var waypoints: [Waypoint] = [] @Published var directions: [MKRoute] = []
These provide the following information about the waypoints in a trip: the total distance as a
Measurement, the list of waypoints and a list of directions that connect those waypoints.
Then, add the follow subscriptions to the end of
init(trip:model:mapInfoProvider:):
trip.$waypoints .assign(to: \.waypoints, on: self) .store(in: &cancellables) trip.$waypoints .flatMap { mapInfoProvider.totalDistance(for: $0) } .map { Measurement(value: $0, unit: UnitLength.meters) } .assign(to: \.totalDistance, on: self) .store(in: &cancellables) trip.$waypoints .setFailureType(to: Error.self) .flatMap { mapInfoProvider.directions(for: $0) } .catch { _ in Empty<[MKRoute], Never>() } .assign(to: \.directions, on: self) .store(in: &cancellables)
This performs three separate actions based on the changing of the trip’s waypoints.
The first is just a copy to the interactor’s waypoint list. The second uses the
mapInfoProvider to calculate the total distance for all of the waypoints. And the third uses the same data provider to get directions between the waypoints.
The presenter then uses these values to provide information to the user.
Go to
TripDetailPresenter, and add these properties:
@Published var distanceLabel: String = "Calculating..." @Published var waypoints: [Waypoint] = []
The view will use these properties. Wire them up for tracking data changes by adding the following to the end of
init(interactor:):
interactor.$totalDistance .map { "Total Distance: " + MeasurementFormatter().string(from: $0) } .replaceNil(with: "Calculating...") .assign(to: \.distanceLabel, on: self) .store(in: &cancellables) interactor.$waypoints .assign(to: \.waypoints, on: self) .store(in: &cancellables)
The first subscription takes the raw distance from the interactor and formats it for display in the view, and the second just copies over the waypoints.
Considering the Map View
Before heading over to the detail view, consider the map view. This widget is more complicated than the others.
In addition to drawing the geographical features, the app also overlays pins for each point and the route between them.
This calls for its own set of presentation logic. You could use the
TripDetailPresenter, or in this case, create a separate
TripMapViewPresenter. It will reuse the
TripDetailInteractor since it shares the same data model and is a read-only view.
Create a new Swift File named TripMapViewPresenter.swift. Set its contents to:
import MapKit import Combine class TripMapViewPresenter: ObservableObject { @Published var pins: [MKAnnotation] = [] @Published var routes: [MKRoute] = [] let interactor: TripDetailInteractor private var cancellables = Set<AnyCancellable>() init(interactor: TripDetailInteractor) { self.interactor = interactor interactor.$waypoints .map { $0.map { let annotation = MKPointAnnotation() annotation.coordinate = $0.location return annotation } } .assign(to: \.pins, on: self) .store(in: &cancellables) interactor.$directions .assign(to: \.routes, on: self) .store(in: &cancellables) } }
Here, the map presenter exposes two arrays to hold annotations and routes. In
init(interactor:), you map the
waypoints from the interactor to
MKPointAnnotation objects so that they can be displayed as pins on the map. You then copy the
directions to the
routes array.
To use the presenter, create a new SwiftUI View named TripMapView.swift. Set its contents to:
import SwiftUI struct TripMapView: View { @ObservedObject var presenter: TripMapViewPresenter var body: some View { MapView(pins: presenter.pins, routes: presenter.routes) } } #if DEBUG struct TripMapView_Previews: PreviewProvider { static var previews: some View { let model = DataModel.sample let trip = model.trips[0] let interactor = TripDetailInteractor( trip: trip, model: model, mapInfoProvider: RealMapDataProvider()) let presenter = TripMapViewPresenter(interactor: interactor) return VStack { TripMapView(presenter: presenter) } } } #endif
This uses the helper
MapView and supplies it with pins and routes from the presenter. The
previews struct builds the VIPER chain the app needs to preview just the map. Use Live Preview to see the map properly:
To add the map to the app, first add the following method to
TripDetailPresenter:
func makeMapView() -> some View { TripMapView(presenter: TripMapViewPresenter(interactor: interactor)) }
This makes a map view, providing it with its presenter.
Next, open TripDetailView.swift.
Add the following to the
VStack below the
TextField:
presenter.makeMapView() Text(presenter.distanceLabel)
Build and run to see the map on screen:
Editing Waypoints
The final feature is to add waypoint editing so you can make your own trips! You can rearrange the list on the trip detail view. But to create a new waypoint, you’ll need a new view for the user to type in the name.
To get to a new view, you’ll want a Router. Create a new Swift File named TripDetailRouter.swift.
Add this code to the new file:
import SwiftUI class TripDetailRouter { private let mapProvider: MapDataProvider init(mapProvider: MapDataProvider) { self.mapProvider = mapProvider } func makeWaypointView(for waypoint: Waypoint) -> some View { let presenter = WaypointViewPresenter( waypoint: waypoint, interactor: WaypointViewInteractor( waypoint: waypoint, mapInfoProvider: mapProvider)) return WaypointView(presenter: presenter) } }
This creates a
WaypointView that is already set up and ready to go.
With the router on hand, go to TripDetailInteractor.swift, and add the following methods:
func addWaypoint() { trip.addWaypoint() } func moveWaypoint(fromOffsets: IndexSet, toOffset: Int) { trip.waypoints.move(fromOffsets: fromOffsets, toOffset: toOffset) } func deleteWaypoint(atOffsets: IndexSet) { trip.waypoints.remove(atOffsets: atOffsets) } func updateWaypoints() { trip.waypoints = trip.waypoints }
These methods are self descriptive. They add, move, delete, and update waypoints.
Next, expose these to the view through
TripDetailPresenter. In
TripDetailPresenter, add this property:
private let router: TripDetailRouter
This will hold the router. Create it by adding this to the top of
init(interactor:):
self.router = TripDetailRouter(mapProvider: interactor.mapInfoProvider)
This creates the router for use with the waypoint editor. Next, add these methods:
func addWaypoint() { interactor.addWaypoint() } func didMoveWaypoint(fromOffsets: IndexSet, toOffset: Int) { interactor.moveWaypoint(fromOffsets: fromOffsets, toOffset: toOffset) } func didDeleteWaypoint(_ atOffsets: IndexSet) { interactor.deleteWaypoint(atOffsets: atOffsets) } func cell(for waypoint: Waypoint) -> some View { let destination = router.makeWaypointView(for: waypoint) .onDisappear(perform: interactor.updateWaypoints) return NavigationLink(destination: destination) { Text(waypoint.name) } }
The first three are part of the operations on the waypoint. The final method calls the router to get a waypoint view for the waypoint and put it in a
NavigationLink.
Finally, show this to the user in
TripDetailView by adding the following to the
VStack under the
Text:
HStack { Spacer() EditButton() Button(action: presenter.addWaypoint) { Text("Add") } }.padding([.horizontal]) List { ForEach(presenter.waypoints, content: presenter.cell) .onMove(perform: presenter.didMoveWaypoint(fromOffsets:toOffset:)) .onDelete(perform: presenter.didDeleteWaypoint(_:)) }
This adds the following controls to the view:
- An
EditButtonthat puts the list into editing mode so the user can move or delete waypoints.
- An add
Buttonthat uses the presenter to add a new waypoint to the list.
- A
Listthat uses a
ForEachwith the presenter to make a cell for each waypoint. The list defines an
onMoveand
onDeleteaction that enables those edit actions and calls back into the presenter.
Build and run, and you can now customize a trip! Be sure to save any changes.
Making Modules
With VIPER, you can group together the presenter, interactor, view, router and related code into modules.
Traditionally, a module would expose the interfaces for the presenter, interactor and router in a single contract. This doesn’t make a lot of sense with SwiftUI because it’s view forward. Unless you want to package each module as its own framework, you can instead conceptualize modules as groups.
Take TripListView.swift, TripListPresenter.swift, TripListInteractor.swift and TripListRouter.swift and group them together in a group named TripListModule.
Do the same for the detail classes: TripDetailView.swift, TripDetailPresenter.swift, TripDetailInteractor.swift, TripMapViewPresenter.swift, TripMapView.swift, and TripDetailRouter.swift.
Add them to a new group called TripDetailModule.
Modules are a good way to keep the code clean and separated. As a good rule of thumb, a module should be a conceptual screen/feature, and the routers hand the user off between modules.
Where to Go From Here?
Click the Download Materials button at the top or bottom of the tutorial to download the completed project files.
One of the advantages of the separation VIPER endorses is in testability. You can test the interactor so that it can read and manipulate the data model. And you can do all that while independently testing the presenter to change the view and respond to user actions.
Think of it as a fun exercise to try on your own!
Because of the reactive power of Combine and its native support in SwiftUI, you may have noticed that the interactor and presenter layers are relatively thin. They do separate the concerns, but mostly, they’re just passing data through an abstraction layer.
With SwiftUI, it’s a little more natural to collapse the presenter and interactor functionality into a single
ObservableObject that holds most of the view state and interacts directly with the entities.
For an alternate approach, read MVVM with Combine Tutorial for iOS.
We hope you enjoyed this tutorial! If you think of questions or comments, drop them in the discussion below. We’d love to hear about your favorite architecture and what’s changed in the era of SwiftUI. | https://www.raywenderlich.com/8440907-getting-started-with-the-viper-architecture-pattern | CC-MAIN-2021-04 | en | refinedweb |
ROOT macros and shared libraries
A ROOT macro contains pure C++ code, which additionally can contain ROOT classes and other ROOT objects (→ see ROOT classes, data types and global variables). A ROOT macro can consist of simple or multi-line commands, but also of arbitrarily complex class and function definitions.
You can save a ROOT macro in a file and execute it at the ROOT prompt or the system prompt (→ see Creating ROOT macros).
You also can compile a ROOT macro (→ see Compiling ROOT macros).
ROOT provides many tutorials that are available as ROOT macros (→ see ROOT tutorial page).
Creating ROOT macros
The name of the ROOT macro and the file name (without file extension) in which the macro is saved must match.
Create a new file in your preferred text editor.
Use the following general structure for the ROOT macro:
void MacroName() { ... your lines of C++ code code line ends with ; ... }
- Save the file ROOT macro, using the macro name as file name: MacroName.C
Executing ROOT macros
You can execute a ROOT macro:
- at the system prompt,
- at the ROOT prompt,
- by loading it to a ROOT session.
To execute a ROOT macro at the system prompt, type:
root MacroName.C
– or –
To execute a ROOT macro at the ROOT prompt, type:
.x MacroName.C
– or –
To load a ROOT macro to a ROOT session, type (at the ROOT prompt):
.L MacroName.C MacroName()
Note
You can load multiple ROOT macros, as each ROOT macro has a unique name in the ROOT name space.
In addition, you can:
Executing a ROOT macro from a ROOT macro
You can execute a ROOT macro conditionally inside another ROOT macro.
- Call the interpreter TROOT::ProcessLine().
ProcessLine() takes a parameter, which is a pointer to an
int or to a
TInterpreter::EErrorCode to let you access the interpreter error code after an attempt to interpret.
This contains the error as defined in enum
TInterpreter::EErrorCode with
TInterpreter::kSuccess
as being the value for a successful execution.
Example
The example
$ROOTSYS/tutorials/tree/cernstaff.C calls a ROOT macro to build a ROOT file, if it does not exist.
void cernstaff() { if (gSystem->AccessPathName("cernstaff.root")) { gROOT->ProcessLine(".x cernbuild.C"); }
Executing a ROOT macro from the invocation of ROOT
You can pass a macro to ROOT in its invocation.
Example
The exact kind of quoting depends on the used shell. This example works for bash-like shells.
$ root -l -b 'myCode.C("some String", 12)'
Compiling ROOT macros
You can use ACLiC (Compiling Your Code) to compile your code and to build a dictionary and a shared library from your ROOT macro. ACliC is implemented in TSystem::CompileMacro().
When using ACliC, ROOT checks what library really needs to be build and calls your system’s C++ compiler, linker and dictionary generator. Then ROOT loads a native shared library.
ACLiC executes the following steps:
Calls
rootclingto create automatically a dictionary.
For creating a dictionary manually, → see Using rootcling to generate dictionaries manually.
Calls the the system’s C++ compiler to build the shared library.
If there are errors, it calls the C++ compiler to build a dummy executable to clearly report the unresolved symbols.
ACLiC adds the classes and functions declared in included files with the same name as the
ROOT macro files with one of following extensions:
.h,
.hh,
.hpp,
.hxx,
.hPP,
.hXX.
This means that, by default, you cannot combine ROOT macros from different files into one
library by using
#include statements; you will need to compile each ROOT macro separately.
Compiling a ROOT macro with ACLiC
Before you can compile your interpreted ROOT macro, you need to add the include statements for the classes used in the ROOT macro. Only then you can build and load a shared library containing your ROOT macro.
You can compile a ROOT macro with:
default optimizations
optimizations
debug symbols
Compilation ensures that the shared library is rebuilt.
Note
Do not call ACLiC with a ROOT macro that has a function called
main().
To compile a ROOT macro and build a shared library, type:
root[] .L MyScript.C+
The
+ option compiles the code and generates a shared library. The name of the shared library is the filename
where the dot before the extension is replaced by an underscore. In addition, the shared library
extension is added.
Example
On most platforms,
hsimple.cxx will generate
hsimple_cxx.so.
The
+ command rebuilds the library only if the ROOT macro or any of the files it includes
are newer than the library.
When checking the timestamp, ACLiC generates a dependency file, which name is the same as
the library name, just replacing the
so extension by the
d extension.
To compile a ROOT macro with default optimizations, type:
root[] .L MyScript.C++g
To compile a ROOT macro with optimizations, type:
root[] .L MyScript.C++O
To compile a ROOT macro with debug symbols, type:
root[] .L MyScript.C++
Setting the include path
The
$ROOTSYS/include directory is automatically appended to the include path.
To get the include path, type:
root[] .include
To append the include path, type:
root[] .include $HOME/mypackage/include
Append the following line in the ROOT macro to include the include path:
gSystem->AddIncludePath(" -I$HOME/mypackage/include")
To overwrite an existing include path, type:
gSystem->SetIncludePath(" -I$HOME/mypackage/include")
To add a static library that should be used during linking, type:
gSystem->AddLinkedLibs("-L/my/path -l*anylib*");
For adding a shared library, you can load it before you compile the ROOT macros, by
gSystem->Load("mydir/mylib");
Generating dictionaries
A dictionary (“reflection database”) contains information about the types and functions that are available in a library.
With a dictionary you can call functions inside libraries. Dictionaries are also needed to write a class into a ROOT file (→ see ROOT files).
A dictionary consists of a source file, which contains the type information needed by Cling and ROOT’s I/O subsystem. This source file needs to be generated from the library’s headers and then compiled, linked and loaded. Only then does Cling and ROOT know what is inside a library.
There are two ways to generate a dictionary:
using ACLiC
using
rootcling
Using ACLiC to generate dictionaries
With a given header file
MyHeader.h, ACliC automatically generates a dictionary:
root[] .L MyHeader.h+
Using rootcling to generate dictionaries manually
You can manually create a dictionary by using
rootcling:
rootcling -f DictOutput.cxx -c OPTIONS Header1.h Header2.h ... Linkdef.h
DictOutput.cxxSpecifies the output file that will contain the dictionary. It will be accompanied by a header file
DictOutput.h.
OPTIONSare:
Isomething: Adding an include path, so that
rootclingcan find the files included in
Header1.h,
Header2.h, etc.
DSOMETHING: Define a preprocessor macro, which is sometimes needed to parse the header files.
Header1.h Header2.h...: The headers files.
Linkdef.h: Tells
rootcling, which classes should be added to the dictionary, → see Selecting dictionary entries: Linkdef.h.
Note
Dictionaries that are used within the same project must have unique names.
Compiled object files relative to dictionary source files cannot reside in the same library or in two libraries loaded by the same application if the original source files have the same name.
Example
In the first step, a
TEvent and a
TTrack class is defined. Next an event object is created to add tracks to it. The track objects have a pointer to their event. This shows that the I/O system correctly handles circular references.
In the second step, a
TEvent and a
TTrack call are implemented.
After that you can use
rootcling to manually generate a directory. This generates the
eventdict.cxx file.
The TEvent.h header
TTrack.h header
#ifndef __TTrack __ #define __TTrack__ #include "TObject.h" class TEvent; class TTrack : public TObject { private: Int_t fId; // Track sequential id. TEvent *fEvent; // TEvent. };
Implementation of TEvent and TTrack class
TEvent.cxx: #include <iostream.h> #include "TOrdCollection.h" #include "TEvent.h" #include "TTrack.h" ClassImp(TEvent) ... TTrack.cxx: #include <iostream.h> #include "TMath.h" #include "Track.h" #include "Event.h" ClassImp(TTrack) ...
Using rootcling to generate the dictionary
rootcling eventdict.cxx -c TEvent.h TTrack.h
eventdict.cxx - the generated dictionary; } }
Selecting dictionary entries: Linkdef.h
To select which types and functions should go into a dictionary, create a
Linkdef.h file that you use when you call
rootcint manually. The
Linkdef.h file is passed as the last argument to
rootcint. It must end on
Linkdef.h, LinkDef.h, or
linkdef.h. For example,
My_Linkdef.h is correct,
Linkdef_mine.h is not.
The
Linkdef.h file contains directives for
rootcint, for what a dictionary should be created: select the types and functions that will be accessible from the prompt (or in general through CINT) and for I/O.
Preamble: deselection
A
Linkdef.h file starts with the following preamble:
#ifdef __CINT__ #pragma link off all globals; #pragma link off all classes; #pragma link off all functions; #pragma link C++ nestedclasses;
The first line protects the compiler from seeing the
rootcint directives. The
rootcint directives are in the form of
#pragma statements. A
#pragma link of all something says that by default,
rootcint should not generate the dictionary for anything it sees.
The nested classes directive tells
rootcint not to ignore
nestedclasses, this is, classes defined inside classes like here:
class Outer { public: class Inner { public: // we want a dictionary for this one, too! ... }; ... };
Selection
In the next step, tell
rootcint for which objects the dictionary should be generated for:
#pragma link C++ class AliEvent+; #pragma link C++ function StrDup; #pragma link C++ function operator+(const TString&,const TString&); #pragma link C++ global gROOT; #pragma link C++ global gEnv; #pragma link C++ enum EMessageTypes;
Note
**The
+after the class name: This enables an essential feature for
rootcint. It is not a default setting, so you must add
+at the end.
Selection by file name
Sometimes it is easier to say: Create a dictionary for everything defined in the
MyHeader.h file.
Write the following statement into the
Linkdef.h file:
#pragma link C++ defined_in "subdir/MyHeader.h";
Make sure that
subdir/MyHeader.h corresponds to one of the header files that is passed to
rootcint.
Closing
Add the following line at the end of the
Linkdef.h file:
#endif /* __CINT__ */.
Example of a Linkdef.h file
#ifdef __CINT__ #pragma link off all globals; #pragma link off all classes; #pragma link off all functions; #pragma link C++ nestedclasses; #pragma link C++ global gHtml; #pragma link C++ class THtml; #endif
Developing portable ROOT macros
Portable ROOT macros run both with the Cling interpreter and ACLiC (Compiling Your Code).
Therefore, it is recommended not to use the Cling extensions and program around the Cling limitations.
If it is not possible to program around the Cling limitations, use the C preprocessor symbols
defined for Cling and
rootcling:
__CLING__is defined for both ROOT and
rootcling.
__ROOTCLING__(and
__MAKECINT__for backward compatibility) is only defined in rootcling.
Use
!defined(__CLING__) || defined(__ROOTCLING__) to bracket code that needs to be seen by
the compiler and
rootcling, but will be invisible to the interpreter.
– or –
Use
!defined(__CLING__) to bracket code that should be seen only by the compiler and not by Cling nor
rootcling.
Example
Hiding the declaration and initialization of the array
gArray from both Cling and
rootcling:
#if !defined(__CLING__) int gArray[] = { 2, 3, 4}; #endif
Cling and
rootcling will ignore all statements between the
#if !defined (__CLING__)
and
#endif. Because ACLiC calls
rootcling to build a dictionary, the declaration of
gArray will not be included in the dictionary, and consequently,
gArray will not be
available at the command line even if ACLiC is used.
If you want use the ROOT macro in the interpreter, you have to bracket the usage of
gArray
between the
#if’s, since the definition is not visible.
#if !defined(__CLING__) int gArray[] = { 2, 3, 4}; #elif defined(__ROOTCLING__) int gArray[]; #endif
gArray will be visible to
rootcling, but still not visible to Cling. If you use ACLiC,
gArray will be available at the command line.
Included header files
It is recommended to write ROOT macros with all the needed include statements. Only a few header files are not handled correctly by Cling.
You can include following types of headers in the interpreted and compiled mode:
The subset of standard C/C++ headers defined in
$ROOTSYS/Cling/include.
Headers of classes defined in a previously loaded library (including ROOT own library). The defined class must have a name known to ROOT (this is a class with a
ClassDef).
Hiding header files from
rootcling that are necessary for the compiler but optional for the interpreter can lead to a fatal error..
Embedding the rootcling call into a GNU Makefile
Use the following statement to compile and run the code.
.L MyCode.C+
If you need to use a:
libCore.
This rule generates $^ | https://root.cern.ch/manual/interacting_with_shared_libraries/ | CC-MAIN-2021-04 | en | refinedweb |
Hi Richard,
I also had this same problem. I did some reading, and according to Michael
Kay's XSLT Programmer's Reference:
"The xsl:exclude-result-prefixes and exclude-result-prefixes attributes
apply only to namespace nodes copied from the stylesheet using literal
result elements. They do not affect namespace nodes copied from the source
document using <xsl:copy> or <xsl:copy-of>: there is no way of suppressing
these."
Unfortunately, since xslt's will often have a catch-all template matcher to
copy elements it doesn't transform, this comes up quite a bit.).
You could make this more general, and use the serializer's configuration to
declare which namespaces you want to exclude, but excluding all worked well
for us, especially since we were outputting HTML.
Hope that helps!
Harry
-----Original Message-----
From: Reinhard Poetz [mailto:reinhard_poetz@gmx.net]
Sent: Monday, July 01, 2002 7:53 AM
To: cocoon-users@xml.apache.org
Subject: RE: How to remove namespace declarations and prefixes?
> From: Manos Batsis [mailto:m.batsis@bsnet.gr]
> > From: Reinhard Poetz [mailto:reinhard_poetz@gmx.net]
> > Is there a difference in performance - your solution compared
> > to a working
> > "exclude-result-prefixes"-attribute?
>
> Depends on whether you want to add a new stylesheet or modify the
> existing one (if any). While on the second choice (using xsl:element
> with local-name() in all templates that handle elements and attributes)
> performance should not change notisably; essentially this and
> exclude-result-prefixes do the same thing.
>
>
> > Did you try it with Cocoon? If yes, which version do you use?
>
> Nope I haven't.
It seems to me that this is a bug in cocoon.
>
> >
> > My stylesheet:
> >
> > <xsl:stylesheet
> > > xmlns: > xmlns: > xmlns: > xmlns: > xmlns: > xmlns: > xmlns: >
>
> Yes, the above will only remove namespace prefixes bound to
>
> To filter all prefixes out modify the exclude-result-prefixes attribute
> to
ATM it doesn't remove the namespace. I'll report the bug to bugzilla.
Thanks for your help.
Reinhard
---------------------------------------------------------------------> | http://mail-archives.apache.org/mod_mbox/cocoon-users/200207.mbox/%3C85063BBE668FD411944400D0B744267A018638F6@AUSMAIL%3E | CC-MAIN-2021-04 | en | refinedweb |
Build tool – Maven
Overview
GraphQL is a query language for your API, and a server-side runtime for executing queries by using a type system you define for your data.
Creating Application
Step 1 (Create a Spring Boot App):
We can easily create a Spring Boot app by using Spring Initializr (https://start.spring.io).
Just choose the appropriate options and select Web as a dependency in the dependencies section.
After clicking the Generate Project button, your Spring Boot app will be downloaded as a project archive.
Step 2 (Adding Dependencies in the pom):
We will add the GraphQL Java dependencies and the Guava dependency (Guava makes it easy to read resources).
Dependencies will look like:
<dependency>
    <groupId>com.graphql-java</groupId>
    <artifactId>graphql-java</artifactId>
    <version>14.1</version>
</dependency>
<dependency>
    <groupId>com.graphql-java</groupId>
    <artifactId>graphql-java-spring-boot-starter-webmvc</artifactId>
    <version>1.0</version>
</dependency>
<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>23.5-jre</version>
</dependency>
Step 3 (Create a Schema):
Schema:
A GraphQL API has a schema, which defines each field that can be queried.
We will create the schema in src/main/resources and name the file schema.graphqls:
type Query {
    songById(id: ID): Song
}

type Song {
    id: ID
    name: String
    genre: String
    artist: Artist
}

type Artist {
    id: ID
    firstName: String
    lastName: String
}
It is not JSON (even though it looks deliberately similar); it is a GraphQL schema. It defines each field that can be queried or mutated and what type each of those fields is. We are only going to look at the query part in this blog.
In the code, we have defined a type Query at the top, which declares the fields we can query for (in our case, a song with a specific ID, named songById). songById is of type Song, whose fields are defined below it. Inside type Song there is a field artist, which is of another type, Artist, also defined in the schema.
That is how we create a nested schema. There is a lot more to know about schemas, which we will see later.
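For example, with the schema above a client can ask for a song and, in the same request, drill into the nested artist field (song-1 refers to the sample data we will add in the next step):

```graphql
query {
  songById(id: "song-1") {
    id
    name
    genre
    artist {
      firstName
      lastName
    }
  }
}
```

The response mirrors the shape of the query: the client decides exactly which fields come back.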
Step 4 (Fetching Data):
Data Fetchers
A DataFetcher is probably the most important concept in GraphQL Java.
It is an interface with a single method, taking a single argument of type DataFetchingEnvironment:
public interface DataFetcher<T> {
    T get(DataFetchingEnvironment dataFetchingEnvironment) throws Exception;
}
A DataFetcher is used to fetch the data for a field when a query is executed: whenever a query runs, GraphQL calls the appropriate DataFetcher to get the data.
So in our case, we will first create a class that holds some static data and fetches it using DataFetchers.
@Component
public class GraphQLDataFetchers {

    private static List<Map<String, String>> song = Arrays.asList(
            ImmutableMap.of("id", "song-1", "name", "Shape of you",
                    "genre", "Pop", "artist", "artist-1"),
            ImmutableMap.of("id", "song-2", "name", "Closer",
                    "genre", "Electronic/Dance", "artist", "artist-2"),
            ImmutableMap.of("id", "song-3", "name", "Señorita",
                    "genre", "Pop", "artist", "artist-3")
    );

    private static List<Map<String, String>> artist = Arrays.asList(
            ImmutableMap.of("id", "artist-1", "firstName", "Ed", "lastName", "Sheeran"),
            ImmutableMap.of("id", "artist-2", "firstName", "ChainSmokers", "lastName", ""),
            ImmutableMap.of("id", "artist-3", "firstName", "Shawn/Camila", "lastName", "Mendes/Cabello")
    );

    public DataFetcher getSongByIdDataFetcher() {
        return dataFetchingEnvironment -> {
            String songID = dataFetchingEnvironment.getArgument("id");
            return song
                    .stream()
                    .filter(s -> s.get("id").equals(songID))
                    .findFirst()
                    .orElse(null);
        };
    }

    public DataFetcher getArtistByIdDataFetcher() {
        return dataFetchingEnvironment -> {
            // The source here is the song map returned by getSongByIdDataFetcher
            Map<String, String> song = dataFetchingEnvironment.getSource();
            String artistID = song.get("artist");
            return artist
                    .stream()
                    .filter(a -> a.get("id").equals(artistID))
                    .findFirst()
                    .orElse(null);
        };
    }
}
This is the final version of the class that will fetch our data.
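The lookup logic inside each fetcher is just a plain Java stream filter over the in-memory lists. As a standalone sketch (the class and method names here are hypothetical, and no GraphQL types are involved), the same filter-by-id pattern looks like this:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;

public class FetcherSketch {
    // Same shape as the static data in GraphQLDataFetchers
    private static final List<Map<String, String>> SONGS = Arrays.asList(
            Map.of("id", "song-1", "name", "Shape of you"),
            Map.of("id", "song-2", "name", "Closer"));

    // Mirrors what getSongByIdDataFetcher does with the "id" argument
    static Map<String, String> songById(String songId) {
        return SONGS.stream()
                .filter(s -> s.get("id").equals(songId))
                .findFirst()
                .orElse(null); // GraphQL renders a missing song as null
    }

    public static void main(String[] args) {
        System.out.println(songById("song-1").get("name")); // Shape of you
        System.out.println(songById("song-9"));             // null
    }
}
```

In the real fetchers, dataFetchingEnvironment.getArgument("id") supplies the id, and for the nested artist field dataFetchingEnvironment.getSource() supplies the parent song map.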
Step 5 (Create a GraphQL Instance):
Now that we have the schema and the data fetchers, we can create the class that ties them together.
We are going to make a class named GraphQLProvider, in which we create the GraphQL bean, integrate the schema, and wire in the data fetchers:
// requires: import static graphql.schema.idl.TypeRuntimeWiring.newTypeWiring;
@Component
public class GraphQLProvider {

    @Autowired
    private GraphQLDataFetchers graphQLDataFetchers;

    private GraphQL song;

    @Bean
    public GraphQL song() {
        return song;
    }

    @PostConstruct
    public void init() throws IOException {
        URL url = Resources.getResource("schema.graphqls");
        String sdl = Resources.toString(url, Charsets.UTF_8);
        GraphQLSchema graphQLSchema = buildSchema(sdl);
        this.song = GraphQL.newGraphQL(graphQLSchema).build();
    }

    private GraphQLSchema buildSchema(String sdl) {
        TypeDefinitionRegistry typeRegistry = new SchemaParser().parse(sdl);
        RuntimeWiring runtimeWiring = buildWiring();
        SchemaGenerator schemaGenerator = new SchemaGenerator();
        return schemaGenerator.makeExecutableSchema(typeRegistry, runtimeWiring);
    }

    private RuntimeWiring buildWiring() {
        return RuntimeWiring.newRuntimeWiring()
                .type(newTypeWiring("Query")
                        .dataFetcher("songById", graphQLDataFetchers.getSongByIdDataFetcher()))
                .type(newTypeWiring("Song")
                        .dataFetcher("artist", graphQLDataFetchers.getArtistByIdDataFetcher()))
                .build();
    }
}
Here we have used init() method to create a GraphQL instance and a GraphQLSchema to integrating with schema.
We created a bean to expose GraphQLinstance as a Spring Bean via the
GraphQL instance to make our schema available via HTTP on the default url “/graphql”.
What have used buildSchema method which creates the GraphQLSchema instance and wires in code to fetch data.
TypeDefinitionRegistry Type is the parsed version of our schema file. SchemaGerator combines the TypeDefinitionRegistry with RuntimeWiring to actually make our GraphQLSchema. builWiring uses the graphQLDataFetchers bean to actually register two DataFetchers:
– One to retrieve a song with a specific ID
– One to get the artist for a specific song.
graphQL(). The GraphQL Java Spring adapter will use thatgraphQL(). The GraphQL Java Spring adapter will use that
Step 6(Run)
This is all we have to do. Now simply run your service.
We have used post-construct annotation, so it will be POST endpoint
By default your service run on ” “.
You can implement various queries and see how it is working.
Working with Postman
While writing the query in postman will be quite challenging as the schema is not JSON type. So we write like this:
{ "query": "query{songById(id: \"song-1\"){id, name, genre artist{firstName, lastName }}}" }
- whole query should be in single line
- there should be a key named as “query” and it’s value should always start from “query”.
- Inside query first thing would be the writing what we have written in schema. Like, songById(id: “song-1”).
- We will write nested schema as written. i.e, inside “curly braces{}”. Like, {id, name, artist{firstName}}.
That’s pretty much it from the article, full code can be found on the My GitHub Repo , feel free to fork it. If you have any feedback or queries, please do let me know in the comments. Also, if you liked the article, please give me a thumbs up and I will keep writing blogs like this for you in the future as well. Keep reading and Keep coding. | https://blog.knoldus.com/beginners-guide-to-graphql-with-spring-boot/ | CC-MAIN-2021-04 | en | refinedweb |
What Is RAII?
RAII stands for “resource acquisition is initialization.” RAII is a design pattern using C++ code to eliminate resource leaks. Resource leaks happen when a resource that your program acquired is not subsequently released. The most familiar example is a memory leak. Since C++ doesn't have a GC the way C# does, you need to be careful to ensure that dynamically allocated memory is freed. Otherwise, you will leak that memory. Resource leaks can also result in the inability to open a file because the file system thinks it’s already open, the inability to obtain a lock in a multi-threaded program, or the inability to release a COM object.
How Does RAII Work?
RAII works because of three basic facts.
- When an automatic storage duration object goes out of scope, its destructor runs.
- When an exception occurs, all automatic duration objects that have been fully constructed since the last try-block began are destroyed in the reverse order they were created before any catch handler is invoked.
- If you nest try-blocks, and none of the catch handlers of an inner try-block handles that type of exception, then the exception propagates to the outer try-block. All automatic duration objects that have been fully constructed within that outer try-block are then destroyed in reverse creation order before any catch handler is invoked, and so on, until something catches the exception or your program crashes.
RAII helps ensure that you release resources, without exceptions occurring, by simply using automatic storage duration objects that contain the resources. It is similar to the combination of the
System.IDisposable interface along with the using statement in C#. Once execution leaves the current block, whether through successful execution or an exception, the resources are freed.
When it comes to exceptions, a key part to remember is that only fully constructed objects are destroyed. If you receive an exception in the midst of a constructor, and the last try block began outside that constructor, since the object isn't fully constructed, its destructor will not run.
This does not mean its member variables, which are objects, will not be destroyed. Any member variable objects that were fully constructed within the constructor before the exception occurred are fully constructed automatic duration objects. Thus, those member objects will be destroyed the same as any other fully constructed objects.
This is why you should always put dynamic allocations inside either
std::unique_ptr or
std::shared_ptr. Instances of those types become fully constructed objects when the allocation succeeds. Even if the constructor for the object you are creating fails further in, the
std::unique_ptr resources will be freed by its destructor and the
std::shared_ptr resources will have their reference count decremented and will be freed if the count becomes zero.
RAII isn't about shared_ptr and unique_ptr only, of course. It also applies to other resource types, such as a file object, where the acquisition is the opening of the file and the destructor ensures that the file is properly closed. This is a particularly good example since you only need to create that code right just once—when you write the class—rather than again and again, which is what you need to do if you write the close logic every place you have to open a file.
How Do I Use RAII?
RAII use is described by its name: Acquiring a dynamic resource should complete the initialization of an object. If you follow this one-resource-per-object pattern, then it is impossible to wind up with a resource leak. Either you will successfully acquire the resource, in which case the object that encapsulates it will finish construction and be subject to destruction, or the acquisition attempt will fail, in which case you did not acquire the resource; thus, there is no resource to release.
The destructor of an object that encapsulates a resource must release that resource. This, among other things, is one of the important reasons why destructors should never throw exceptions, except those they catch and handle within themselves.
If the destructor threw an uncaught exception, then, to quote Bjarne Stroustrup, “All kinds of bad things are likely to happen because the basic rules of the standard library and the language itself will be violated. Don't do it.”
As he said, don’t do it. Make sure you know what exceptions, if any, everything you call in your destructors could throw so you can ensure that you handle them properly.
Now you might be thinking that if you follow this pattern, you will end up writing a ton of classes. You will occasionally write an extra class here and there, but you aren’t likely to write too many because of smart pointers. Smart pointers are objects too. Most types of dynamic resources can be put into at least one of the existing smart pointer classes. When you put a resource acquisition inside a suitable smart pointer, if the acquisition succeeds, then that smart pointer object will be fully constructed. If an exception occurs, then the smart pointer object’s destructor will be called, and the resource will be freed.
There are several important smart pointer types. Let’s have a look at them.
The
std::unique_ptr Function
The unique pointer,
std::unique_ptr, is designed to hold a pointer to a dynamically allocated object. You should use this type only when you want one pointer to the object to exist. It is a template class that takes a mandatory and an optional template argument. The mandatory argument is the type of the pointer it will hold. For instance
auto result = std::unique_ptr<int>(new int()); will create a unique pointer that contains an int*. The optional argument is the type of deleter. We see how to write a deleter in a coming sample. Typically, you can avoid specifying a deleter since the default_deleter, which is provided for you if no deleter is specified, covers almost every case you can imagine.
A class that has
std::unique_ptr as a member variable cannot have a default copy constructor. Copy semantics are disabled for
std::unique_ptr. If you want a copy constructor in a class that has a unique pointer, you must write it. You should also write an overload for the copy operator. Normally, you want
std::shared_ptr in that case.
However, you might have something like an array of data. You may also want any copy of the class to create a copy of the data as it exists at that time. In that case, a unique pointer with a custom copy constructor could be the right choice.
std::unique_ptr is defined in the <memory> header file.
std::unique_ptr has four member functions of interest.
The get member function returns the stored pointer. If you need to call a function that you need to pass the contained pointer to, use get to retrieve a copy of the pointer.
The release member function also returns the stored pointer, but release invalidates the unique_ptr in the process by replacing the stored pointer with a null pointer. If you have a function where you want to create a dynamic object and then return it, while still maintaining exception safety, use
std:unique_ptr to store the dynamically created object, and then return the result of calling release. This gives you exception safety while allowing you to return the dynamic object without destroying it with the
std::unique_ptr’s destructor when the control exits from the function upon returning the released pointer value at the end.
The swap member function allows two unique pointers to exchange their stored pointers, so if A is holding a pointer to X, and B is holding a pointer to Y, the result of calling
A::swap(B); is that A will now hold a pointer to Y, and B will hold a pointer to X. The deleters for each will also be swapped, so if you have a custom deleter for either or both of the unique pointers, be assured that each will retain its associated deleter.
The reset member function causes the object pointed to by the stored pointer, if any, to be destroyed in most cases. If the current stored pointer is null, then nothing is destroyed. If you pass in a pointer to the object that the current stored pointer points to, then nothing is destroyed. You can choose to pass in a new pointer, nullptr, or to call the function with no parameters. If you pass in a new pointer, then that new object is stored. If you pass in nullptr, then the unique pointer will store null. Calling the function with no parameters is the same as calling it with nullptr.
The
std::shared_ptr Function
The shared pointer,
std::shared_ptr, is designed to hold a pointer to a dynamically allocated object and to keep a reference count for it. It is not magic; if you create two shared pointers and pass them each a pointer to the same object, you will end up with two shared pointers—each with a reference count of 1, not 2. The first one that is destroyed will release the underlying resource, giving catastrophic results when you try to use the other one or when the other one is destroyed and tries to release the already released underlying resource.
To use the shared pointer properly, create one instance with an object pointer and then create all other shared pointers for that object from an existing, valid shared pointer for that object. This ensures a common reference count, so the resource will have a proper lifetime. Let’s look at a quick sample to see the right and wrong ways to create shared_ptr objects.
Sample: SharedPtrSample\SharedPtrSample.cpp
#include <memory> #include <iostream> #include <ostream> #include "../pchar.h" using namespace std; struct TwoInts { TwoInts(void) : A(), B() { } TwoInts(int a, int b) : A(a), B(b) { } int A; int B; }; wostream& operator<<(wostream& stream, TwoInts* v) { stream << v->A << L" " << v->B; return stream; } int _pmain(int /*argc*/, _pchar* /*argv*/[]) { //// Bad: results in double free. //try //{ // TwoInts* p_i = new TwoInts(10, 20); // auto sp1 = shared_ptr<TwoInts>(p_i); // auto sp2 = shared_ptr<TwoInts>(p_i); // p_i = nullptr; // wcout << L"sp1 count is " << sp1.use_count() << L"." << endl << // L"sp2 count is " << sp2.use_count() << L"." << endl; //} //catch(exception& e) //{ // wcout << L"There was an exception." << endl; // wcout << e.what() << endl << endl; //} //catch(...) //{ // wcout << L"There was an exception due to a double free " << // L"because we tried freeing p_i twice!" << endl; //} // This is one right way to create shared_ptrs. { auto sp1 = shared_ptr<TwoInts>(new; } // This is another right way. The std::make_shared function takes the // type as its template argument, and then the argument value(s) to the // constructor you want as its parameters, and it automatically // constructs the object for you. This is usually more memory- // efficient, as the reference count can be stored with the // shared_ptr's pointed-to object at the time of the object's creation. { auto sp1 = make_shared; } return 0; }
std::shared_ptr is defined in the <memory> header file.
std::shared_ptr has five member functions of interest.
The get member function works the same as the std::unique_ptr::get member function.
The use_count member function returns a long, which tells you what the current reference count for the target object is. This does not include weak references.
The unique member function returns a bool, informing you whether this particular shared pointer is the sole owner of the target object.
The swap member function works the same as the
std::unique_ptr::swap member function, with the addition that the reference counts for the resources stay the same.
The reset member function decrements the reference count for the underlying resource and destroys it if the resource count becomes zero. If a pointer to an object is passed in, the shared pointer will store it and begin a new reference count for that pointer. If nullptr is passed in, or if no parameter is passed, then the shared pointer will store null.
The
std::make_shared Function
The
std::make_shared template function is a convenient way to construct an initial
std::shared_ptr. As we saw previously in
SharedPtrSample, you pass the type as the template argument and then simply pass in the arguments, if any, for the desired constructor.
std::make_shared will construct a heap instance of the template argument object type and make it into a
std::shared_ptr. You can then pass that
std::shared_ptr as an argument to the
std::shared_ptr constructor to create more references to that shared object.
ComPtr in WRL for Metro-Style Apps
The Windows Runtime Template Library (WRL) provides a smart pointer named ComPtr within the Microsoft::WRL namespace for use with COM objects in Windows 8 Metro-style applications. The pointer is found in the <wrl/client.h> header, as part of the Windows SDK (minimum version 8.0).
Most of the operating system functionality that you can use in Metro-style applications is exposed by the Windows Runtime (“WinRT”). WinRT objects provide their own automatic reference counting functionality for object creation and destruction. Some system functionality, such as Direct3D, requires you to directly use and manipulate it through classic COM. ComPtr handles COM’s IUnknown-based reference counting for you. It also provides convenient wrappers for QueryInterface and includes other functionality that is useful for smart pointers.
The two member functions you typically use are As to get a different interface for the underlying COM object and Get to take an interface pointer to the underlying COM object that the ComPtr holds (this is the equivalent of
std::unique_ptr::get).
Sometimes you will use Detach, which works the same way as std::unique_ptr::release but has a different name because release in COM implies decrementing the reference count and Detach does not do that.
You might use ReleaseAndGetAddressOf for situations where you have an existing ComPtr that could already hold a COM object and you want to replace it with a new COM object of the same type.
ReleaseAndGetAddressOf does the same thing as the GetAddressOf member function, but it first releases its underlying interface, if any.
Exceptions in C++
Unlike .NET, where all exceptions derive from System.Exception and have guaranteed methods and properties, C++ exceptions are not required to derive from anything; nor are they even required to be class types. In C++, throw L"Hello World!"; is perfectly acceptable to the compiler as is throw 5;. Basically, exceptions can be anything.
That said, many C++ programmers will be unhappy to see an exception that does not derive from
std::exception (found in the <exception> header). Deriving all exceptions from
std::exception provides a way to catch exceptions of unknown type and retrieve information from them via the what member function before re-throwing them.
std::exception::what takes no parameters and returns a
const char* string, which you can view or log so you know what caused the exception.
There is no stack trace—not counting the stack-trace capabilities your debugger provides—with C++ exceptions. Because automatic duration objects within the scope of the try-block that catches the exception are automatically destroyed before the appropriate catch handler, if any, is activated, you do not have the luxury of examining the data that may have caused the exception. All you have to work with initially is the message from the what member function.
If it is easy to recreate the conditions that led to the exception, you can set a breakpoint and rerun the program, allowing you to step through execution of the trouble area and possibly spot the issue. Because that is not always possible, it is important to be as precise as you can with the error message.
When deriving from
std::exception, you should make sure to override the what member function to provide a useful error message that will help you and other developers diagnose what went wrong.
Some programmers use a variant of a rule stating that you should always throw
std::exception-derived exceptions. Remembering that the entry point (main or wmain) returns an integer, these programmers will throw
std::exception-derived exceptions when their code can recover, but will simply throw a well-defined integer value if the failure is unrecoverable. The entry-point code will be wrapped in a try-block that has a catch for an int. The catch handler will return the caught int value. On most systems, a return value of 0 from a program means success. Any other value means failure.
If there is a catastrophic failure, then throwing a well-defined integer value other than 0 can help provide some meaning. Unless you are working on a project where this is the preferred style, you should stick to
std::exception-derived exceptions, since they let programs handle exceptions using a simple logging system to record messages from exceptions not handled, and they perform any cleanup that is safe. Throwing something that doesn’t derive from
std::exception would interfere with these error-logging mechanisms.
One last thing to note is that C#’s finally construct has no equivalent in C++. The RAII idiom, when properly implemented, makes it unnecessary since everything will have been cleaned up.
C++ Standard Library Exceptions
We’ve already discussed
std::exception, but there are more types than that available in the standard library, and there is additional functionality to explore. Let’s look at the functionality from the <exception> header file first.
The
std::terminate function, by default, lets you crash out of any application. It should be used sparingly, since calling it rather than throwing an exception will bypass all normal exception handling mechanisms. If you wish, you can write a custom terminate function without parameters and return values. An example of this will be seen in ExceptionsSample, which is coming.
To set the custom terminate, you call
std::set_terminate and pass it the address of the function. You can change the custom terminate handler at any time; the last function set is what will be called in the event of either a call to
std::terminate or an unhandled exception. The default handler calls the abort function from the <cstdlib> header file.
The <stdexcept> header provides a rudimentary framework for exceptions. It defines two classes that inherit from
std::exception. Those two classes serve as the parent class for several other classes.
The
std::runtime_error class is the parent class for exceptions thrown by the runtime or due to a mistake in a C++ Standard Library function. Its children are the
std::overflow_error class, the
std::range_error class, and the
std::underflow_error class.
The
std::logic_error class is the parent class for exceptions thrown due to programmer error. Its children are the
std::domain_error class, the
std::invalid_argument class, the
std::length_error class, and the
std::out_of_range class.
You can derive from these classes or create your own exception classes. Coming up with a good exception hierarchy is a difficult task. On one hand, you want exceptions that will be specific enough that you can handle all exceptions based on your knowledge at build-time. On the other hand, you do not want an exception class for each error that could occur. Your code would end up bloated and unwieldy, not to mention the waste of time writing catch handlers for every exception class.
Spend time at a whiteboard, or with a pen and paper, or however you want thinking about the exception tree your application should have.
The following sample contains a class called
InvalidArgumentExceptionBase, which is used as the parent of a template class called
InvalidArgumentException. The combination of a base class, which can be caught with one exception handler, and a template class, which allows us to customize the output diagnostics based on the type of the parameter, is one option for balancing between specialization and code bloat.
The template class might seem confusing right now; we will be discussing templates in an upcoming chapter, at which point anything currently unclear should clear up.
Sample: ExceptionsSample\InvalidArgumentException.h
#pragma once #include <exception> #include <stdexcept> #include <string> #include <sstream> namespace CppForCsExceptions { class InvalidArgumentExceptionBase : public std::invalid_argument { public: InvalidArgumentExceptionBase(void) : std::invalid_argument("") { } virtual ~InvalidArgumentExceptionBase(void) throw() { } virtual const char* what(void) const throw() override = 0; }; template <class T> class InvalidArgumentException : public InvalidArgumentExceptionBase { public: inline InvalidArgumentException( const char* className, const char* functionSignature, const char* parameterName, T parameterValue ); inline virtual ~InvalidArgumentException(void) throw(); inline virtual const char* what(void) const throw() override; private: std::string m_whatMessage; }; template<class T> InvalidArgumentException<T>::InvalidArgumentException( const char* className, const char* functionSignature, const char* parameterName, T parameterValue) : InvalidArgumentExceptionBase(), m_whatMessage() { std::stringstream msg; msg << className << "::" << functionSignature << " - parameter '" << parameterName << "' had invalid value '" << parameterValue << "'."; m_whatMessage = std::string(msg.str()); } template<class T> InvalidArgumentException<T>::~InvalidArgumentException(void) throw() { } template<class T> const char* InvalidArgumentException<T>::what(void) const throw() { return m_whatMessage.c_str(); } }
Sample: ExceptionsSample\ExceptionsSample.cpp
#include <iostream> #include <ostream> #include <memory> #include <exception> #include <stdexcept> #include <typeinfo> #include <algorithm> #include <cstdlib> #include "InvalidArgumentException.h" #include "../pchar.h" using namespace CppForCsExceptions; using namespace std; class ThrowClass { public: ThrowClass(void) : m_shouldThrow(false) { wcout << L"Constructing ThrowClass." << endl; } explicit ThrowClass(bool shouldThrow) : m_shouldThrow(shouldThrow) { wcout << L"Constructing ThrowClass. shouldThrow = " << (shouldThrow ? L"true." : L"false.") << endl; if (shouldThrow) { throw InvalidArgumentException<const char*>( "ThrowClass", "ThrowClass(bool shouldThrow)", "shouldThrow", "true" ); } } ~ThrowClass(void) { wcout << L"Destroying ThrowClass." << endl; } const wchar_t* GetShouldThrow(void) const { return (m_shouldThrow ? L"True" : L"False"); } private: bool m_shouldThrow; }; class RegularClass { public: RegularClass(void) { wcout << L"Constructing RegularClass." << endl; } ~RegularClass(void) { wcout << L"Destroying RegularClass." << endl; } }; class ContainStuffClass { public: ContainStuffClass(void) : m_regularClass(new RegularClass()), m_throwClass(new ThrowClass()) { wcout << L"Constructing ContainStuffClass." << endl; } ContainStuffClass(const ContainStuffClass& other) : m_regularClass(new RegularClass(*other.m_regularClass)), m_throwClass(other.m_throwClass) { wcout << L"Copy constructing ContainStuffClass." << endl; } ~ContainStuffClass(void) { wcout << L"Destroying ContainStuffClass." << endl; } const wchar_t* GetString(void) const { return L"I'm a ContainStuffClass."; } private: unique_ptr<RegularClass> m_regularClass; shared_ptr<ThrowClass> m_throwClass; }; void TerminateHandler(void) { wcout << L"Terminating due to unhandled exception." << endl; // If you call abort (from <cstdlib>), the program will exit // abnormally. It will also exit abnormally if you do not call // anything to cause it to exit from this method. 
abort(); //// If you were instead to call exit(0) (also from <cstdlib>), //// then your program would exit as though nothing had //// gone wrong. This is bad because something did go wrong. //// I present this so you know that it is possible for //// a program to throw an uncaught exception and still //// exit in a way that isn't interpreted as a crash, since //// you may need to find out why a program keeps abruptly //// exiting yet isn't crashing. This would be one such cause //// for that. //exit(0); } int _pmain(int /*argc*/, _pchar* /*argv*/[]) { // Set a custom handler for std::terminate. Note that this handler // won't run unless you run it from a command prompt. The debugger // will intercept the unhandled exception and will present you with // debugging options when you run it from Visual Studio. set_terminate(&TerminateHandler); try { ContainStuffClass cSC; wcout << cSC.GetString() << endl; ThrowClass tC(false); wcout << L"tC should throw? " << tC.GetShouldThrow() << endl; tC = ThrowClass(true); wcout << L"tC should throw? " << tC.GetShouldThrow() << endl; } // One downside to using templates for exceptions is that you need a // catch handler for each specialization, unless you have a base // class they all inherit from, that is. To avoid catching // other std::invalid_argument exceptions, we created an abstract // class called InvalidArgumentExceptionBase, which serves solely to // act as the base class for all specializations of // InvalidArgumentException<T>. Now we can catch them all, if desired, // without needing a catch handler for each. If you wanted to, however, // you could still have a handler for a particular specialization. catch (InvalidArgumentExceptionBase& e) { wcout << L"Caught '" << typeid(e).name() << L"'." << endl << L"Message: " << e.what() << endl; } // Catch anything derived from std::exception that doesn’t already // have a specialized handler. Since you don't know what this is, you // should catch it, log it, and re-throw it. 
catch (std::exception& e) { wcout << L"Caught '" << typeid(e).name() << L"'." << endl << L"Message: " << e.what() << endl; // Just a plain throw statement like this is a re-throw. throw; } // This next catch catches everything, regardless of type. Like // catching System.Exception, you should only catch this to // re-throw it. catch (...) { wcout << L"Caught unknown exception type." << endl; throw; } // This will cause our custom terminate handler to run. wcout << L"tC should throw? " << ThrowClass(true).GetShouldThrow() << endl; return 0; }
Though I mention it in the comments, I just wanted to point out again that you will not see the custom terminate function run unless you run this sample from a command prompt. If you run it in Visual Studio, the debugger will intercept the program and orchestrate its own termination after giving you a chance to examine the state to see if you can determine what went wrong. Also, note that this program will always crash. This is by design since it allows you to see the terminate handler in action.
Conclusion
As we saw in this article, RAII helps ensure that you release resources, without exceptions occurring, by simply using automatic storage duration objects that contain the resources. In the next installment of this series, we zoom in on pointers and references.
This lesson represents a chapter from C++ Succinctly, a free eBook from the team at Syncfusion.<< | https://code.tutsplus.com/articles/c-succinctly-resources-acquisition-is-initialization--mobile-22053 | CC-MAIN-2021-04 | en | refinedweb |
Desktop K8S in 2021
Is there a better alternative to Minikube? See some options for Local Kubernetes Clusters if you are developing on a Mac.
For this article, we’ll dig into some of the options for Local Kubernetes Clusters if you are developing on a Mac. When doing microservices development, you will eventually want to test integrated services together, and there are several options available to run these tests:
- Dedicated Clusters – Larger teams typically have dev environments, and you can either run lots of little clusters or one big cluster with lots of namespaces in it. Running clusters 24/7 costs real money that can add up quickly. Also, if you opt for big clusters with namespaces per developer, this is still not a good reflection of staging or production.
- Docker Compose – This is an old standby and one we still use for certain workloads. You can quickly spin up a chain of images and network them together. One big drawback I find to using Docker Compose is that the manifests are not used in my staging and production clusters, so this ends up being work done only for the developer desktop.
- Local Kubernetes Cluster – With a local cluster you can use the exact same manifests that are used in staging and production. Of course you need local hardware that can support the load, but the infrastructure requirements for this have come way down.
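That last point is the whole appeal: the exact same manifest file can be applied to any of the local clusters just by switching kubectl contexts. A minimal sketch (the `k8s/deployment.yaml` path is hypothetical — substitute your own manifest):

```shell
# The same manifest works against any local cluster; only the context changes
kubectl --context minikube apply -f k8s/deployment.yaml
kubectl --context docker-desktop apply -f k8s/deployment.yaml
```

Nothing about the manifest itself is desktop-specific, so it can later be applied unchanged to staging or production.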
Tests were conducted on a 2019 MacBook Pro (Big Sur).
$ kubectx
docker-desktop
microk8s
minikube
rancher-desktop
Minikube
I’m not embarrassed to say that I cut my teeth on minikube. This is the recommended path for onboarding into Kubernetes and has a ton of benefits:
- Much of the standard Kubernetes documentation applies to minikube
- With over 20K stars on GitHub it is one of the most popular repos out there and is incredibly active
- It runs a VM with a single-node cluster, and you can swap out the VM driver if you want to use a different engine
Source: GitHub minikube
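Getting started is a couple of commands. A sketch, assuming Homebrew is installed; the `hyperkit` driver is just one example — minikube supports several others (virtualbox, docker, etc.):

```shell
# Install minikube and start a single-node cluster with a chosen VM driver
brew install minikube
minikube start --driver=hyperkit
# Verify the node is up in the minikube context
kubectl --context minikube get nodes
```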
Microk8s
Several years ago, Canonical released microk8s, their own distribution of Kubernetes. It is available directly in Ubuntu through snap. Because it is designed for running in Linux, this may be a good choice if you prefer to interact with everything over the command line. There are tons of built-in commands and features, and it also has the ability to automatically pull in lots of other open source projects.
- It is built right into Ubuntu which is one of the most popular Linux distros, but it can also be deployed on Mac through brew
- On a Mac it will spin up a VM using Multipass that runs Ubuntu with microk8s inside
- Microk8s includes a bunch of standard add-ons for instance if you want to test out a service mesh or particular kind of ingress
- The instructions all assume you type microk8s before your kubectl commands, so you’ll want to add the context to your config instead
Rancher Desktop
As a new entrant among local Kubernetes clusters, Rancher Desktop takes a completely different approach. Instead of full upstream Kubernetes, it spins up a lightweight k3s cluster under the hood. Rancher has packaged this tool as an Electron app. It runs a thin VM, and images are maintained using KIM (also an experimental project).
- Easily switch between different versions of Kubernetes with a simple drop-down
- Built-in UI for doing things like port-forwarding with a single click
- It uses a surprisingly small amount of resources for an app built on top of electron
- At the time of this article it is still in alpha version, so expect more things to change over time
Source: Ken’s Incredible MacBook Pro
Docker Desktop for Mac
If you are doing development on a Mac and dealing with Dockerfiles, chances are you have Docker Desktop deployed. This is a closed-source project, although you can open issues on GitHub. It is a very active project; Docker pushes out updates regularly. One of the built-in features of Docker Desktop is that you can turn on the included Kubernetes cluster. This works in a unique way:
- It does not use a VM but deploys docker-in-docker across a set of nodes running as docker images themselves, which is neat
- You can run your regular docker ps command to see things that are running inside the cluster
- Docker recently changed their policy so that you must update to new versions unless you are on a paid plan, so this may force an unexpected upgrade
- It is not as easy to switch between Kubernetes distributions; you must "reset" your cluster in Docker Desktop to allow an upgrade
Source: Docker Desktop Documentation
Summary
Fortunately, there are still a ton of options for Kubernetes local development in 2021. Hopefully this inspires you to run some test workloads on a new platform as well.
Published at DZone with permission of Ken Ahrens. See the original article here.
Opinions expressed by DZone contributors are their own. | https://dzone.com/articles/desktop-k8s-in-2021?fromrel=true | CC-MAIN-2021-43 | en | refinedweb |
#include <math.h>

float log10f(float num);
double log10(double num);
long double log10l(long double num);
log10f( ) and log10l( ) were added by C99.
The log10( ) family of functions returns the base 10 logarithm for num. A domain error occurs if num is negative. If num is zero, a range error is possible.
Related functions are log( ) and log2( ). | https://flylib.com/books/en/3.13.1.242/1/ | CC-MAIN-2021-43 | en | refinedweb |
In this article I will talk about one of my favorite features of the Laravel framework. Yes, it's the response macro.
I will share what it is and how we can use this feature to make our responses simpler and reusable.
Let's start it out!
What is response macro?
Response macro is a custom response that you can re-use in your routes or controllers.
When building a REST API, you will commonly use the response() helper to send data back to the user. You also use some variants of the response() helper when you want to tell the user that the requested data was not found. For example, you might use syntax like the following to handle your REST API:
// Syntax when sending response with HTTP no content
return response()->json(null, 204);

// Syntax when sending response with HTTP not found
return response()->json(['message' => 'post not found'], 404);

// Syntax when sending response with HTTP created
return response()->json(['message' => 'register success'], 201);
Now, imagine: what if we could transform those responses into a simpler form with the same functionality?
// Syntax when sending response with HTTP no content
return response()->noContent();

// Syntax when sending response with HTTP not found
return response()->notFound('post not found');

// Syntax when sending response with HTTP created
return response()->created('register success');
It's cool, right? The syntax even tells us explicitly what each response sends to the user.
How to add response macro?
Basically, we just extend the basic features of Laravel's Response object by registering our custom response inside App\Providers\AppServiceProvider.
Open the file app/Providers/AppServiceProvider.php and use Illuminate\Support\Facades\Response (the Response facade). To register the custom response, use Response::macro() inside the boot method. Response::macro() takes two parameters: the custom response name and the implementation. Let's add one of the custom responses above.
<?php

namespace App\Providers;

use Illuminate\Http\JsonResponse;
use Illuminate\Support\Facades\Response;
use Illuminate\Support\ServiceProvider;

class AppServiceProvider extends ServiceProvider
{
    /**
     * Bootstrap any application services.
     *
     * @return void
     */
    public function boot()
    {
        Response::macro('notFound', function ($message) {
            return Response::make(compact('message'), HTTP_NOT_FOUND);
        });
    }
}
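For completeness, the other two macros from the earlier example (noContent and created) can be registered the same way inside the boot method. This is a sketch, not code from the original article; the 204 and 201 status codes are written as plain integers here:

```php
Response::macro('noContent', function () {
    return Response::make(null, 204);
});

Response::macro('created', function ($message) {
    return Response::make(compact('message'), 201);
});
```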
How to use response macro?
Once you have added the response macro, you can use it inside routes or controllers. For example, say you have a PostController.php with a method show inside it.
<?php

public function show(int $id)
{
    $post = Post::find($id);

    if (is_null($post)) {
        $message = sprintf('Post with id %d not found', $id);

        return response()->notFound($message);
    }

    return new PostResource($post);
}
Bonus
Response macros are not only for adding simple custom responses. You can also use a response macro as a transform layer (or service) to add micro functionality.
For example you can add response macro to convert markdown to HTML.
<?php

public function boot()
{
    Response::macro('markdown', function ($raw) {
        // fake markdown converter library
        $md = new Markdown();

        return Response::make(['data' => $md->toHTML($raw)], HTTP_OK);
    });
}
Use it inside your controller.
<?php

public function render(int $id)
{
    $post = Post::find($id);

    if (is_null($post)) {
        $message = sprintf('Post with id %d not found', $id);

        return response()->notFound($message);
    }

    return response()->markdown($post->body);
}
NOTE: When you use a response macro as a transform layer, always remember to never put business logic inside it, such as validation, database operations, etc.
cleanenv alternatives and similar packages
Based on the "Configuration" category.
Alternatively, view cleanenv alternatives based on common mentions on social networks and blogs.
- viper: Go configuration with fangs
- kelseyhightower/envconfig: Golang library for managing configuration data from environment variables
- ini: Package ini provides INI file read and write functionality in Go
- env: Simple lib to parse environment variables to structs
- go-toml: Go library for the TOML file format
- go-arg: Struct-based argument parsing in Go
- koanf: Light weight, extensible, configuration management library for Go. Built in support for JSON, TOML, YAML, env, command line, file, S3 etc. Alternative to viper.
- konfig: Composable, observable and performant config handling for Go for the distributed processing era
- confita: Load configuration in cascade from multiple backends into a struct
- gookit/config
- config: JSON or YAML configuration wrapper with convenient access methods.
- aconfig: Simple, useful and opinionated config loader.
- hjson: Hjson for Go
- store: A dead simple configuration manager for Go applications
- envconfig: Small library to read your configuration from environment variables
- gcfg: read INI-style configuration files into Go structs; supports user-defined types and subsections
- joshbetz/config: 🛠 A configuration library for Go that parses environment variables, JSON files, and reloads automatically on SIGHUP
- goConfig: goconfig uses a struct as input and populates the fields of this struct with parameters from command line, environment variables and configuration file.
- fig: A minimalist Go configuration library
- harvester: Harvest configuration, watch and notify subscriber
- mini: A golang package for parsing ini-style configuration files
- onion: Layer based configuration for golang
- go-aws-ssm: Go package that interfaces with AWS System Manager
- configuro: An opinionated configuration loading framework for Containerized and Cloud-Native applications.
- envcfg: Un-marshaling environment variables to Go structs
- envh: Go helpers to manage environment variables
- gofigure: Go configuration made easy!
- configure: Configure is a Go package that gives you easy configuration of your project through redundancy
- xdg: A cross platform package that follows the XDG Standard
- configuration: Library for setting values to structs' fields from env, flags, files or default tag
- ingo: persistent storage for flags in go
- uConfig: Lightweight, zero-dependency, and extendable configuration management library for Go
- hocon: go implementation of lightbend's HOCON configuration library
- go-up: go-up! A simple configuration library with recursive placeholders resolution and no magic.
- How to use: Your configuration library for your Go programs.
- CONFLATE: Library providing routines to merge and validate JSON, YAML and/or TOML files
- Genv: Genv is a library for Go (golang) that makes it easy to read and use environment variables in your projects. It also allows environment variables to be loaded from the .env file.
- TOML: Instream TOML to JSON encoder
- go-ssm-config: Go utility for loading configuration parameters from AWS SSM (Parameter Store)
- subVars: Substitute environment variables from command line for template driven configuration files.
- envconf: Configure Go applications from the environment
- go-options: :package: Clean APIs for your Go Applications
- mollyDB: A GraphQL configuration file database
- go-env: Golang handling of environment values
- swap: Instantiate/configure structs recursively, based on build environment. (YAML, TOML, JSON and env).
- sprbox: Build-environment aware toolbox factory and agnostic config parser (YAML, TOML, JSON and Environment vars).
- go-ini: automatic mirror of
- go-simple-config: open source for accessing and storing configuration
- typenv: Go minimalist typed environment variables library
- gonfig: Tag based configuration loader from different providers
README
Clean Env
Minimalistic configuration reader
Overview
This is a simple configuration reading tool. It just does the following:
- reads and parses configuration structure from the file
- reads and overwrites configuration structure from environment variables
- writes a detailed variable list to help output
Content
- Installation
- Usage
- Model Format
- Supported types
- Custom Functions
- Supported File Formats
- Integration
- Examples
- Contribution
- Thanks
Installation
To install the package run
go get -u github.com/ilyakaznacheev/cleanenv
Usage
The package is oriented to be simple in use and explicit.
The main idea is to use a structured configuration variable instead of any sort of dynamic set of configuration fields like some libraries do, to avoid unnecessary type conversions and to move the configuration through the program as a simple structure, not as an object with complex behavior.
There are just several actions you can do with this tool, and they are probably the only things you want to do with your config if your application is not too complicated:
- read configuration file
- read environment variables
- read some environment variables again
Read Configuration
You can read a configuration file and environment variables in a single function call.
import "github.com/ilyakaznacheev/cleanenv"

type ConfigDatabase struct {
    Port     string `yaml:"port" env:"PORT" env-default:"5432"`
    Host     string `yaml:"host" env:"HOST" env-default:"localhost"`
    Name     string `yaml:"name" env:"NAME" env-default:"postgres"`
    User     string `yaml:"user" env:"USER" env-default:"user"`
    Password string `yaml:"password" env:"PASSWORD"`
}

var cfg ConfigDatabase

err := cleanenv.ReadConfig("config.yml", &cfg)
if err != nil {
    ...
}
This will do the following:
- parse the configuration file according to the YAML format (the yaml tag in this case);
- read environment variables and overwrite values from the file with the values found in the environment (the env tag);
- if no value was found in the first two steps, the field will be filled with the default value (the env-default tag) if it is set.
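For reference, a config.yml matching the ConfigDatabase structure above might look like the following. This file is illustrative and not shown in the original README; the values are made up:

```yaml
port: "5432"
host: "localhost"
name: "postgres"
user: "user"
password: "supersecret"
```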
Read Environment Variables Only
Sometimes you don't want to use configuration files at all, or you may want to use the .env file format instead. Thus, you can limit yourself to only reading environment variables:
import "github.com/ilyakaznacheev/cleanenv"

type ConfigDatabase struct {
    Port     string `env:"PORT" env-default:"5432"`
    Host     string `env:"HOST" env-default:"localhost"`
    Name     string `env:"NAME" env-default:"postgres"`
    User     string `env:"USER" env-default:"user"`
    Password string `env:"PASSWORD"`
}

var cfg ConfigDatabase

err := cleanenv.ReadEnv(&cfg)
if err != nil {
    ...
}
Update Environment Variables
Some environment variables may change during the application run. To get the new values you need to mark these variables as updatable with the tag env-upd and then run the update function:
import "github.com/ilyakaznacheev/cleanenv"

type ConfigRemote struct {
    Port     string `env:"PORT" env-upd`
    Host     string `env:"HOST" env-upd`
    UserName string `env:"USERNAME"`
}

var cfg ConfigRemote

cleanenv.ReadEnv(&cfg)

// ... some actions in-between

err := cleanenv.UpdateEnv(&cfg)
if err != nil {
    ...
}
Here the remote host and port may change in a distributed system architecture. The fields cfg.Port and cfg.Host can be updated at runtime from the corresponding environment variables. You can update them before the remote service call. The field cfg.UserName will not be changed after the initial read, though.
Description
You can get descriptions of all environment variables to use them in the help documentation.
import "github.com/ilyakaznacheev/cleanenv"

type ConfigServer struct {
    Port string `env:"PORT" env-description:"server port"`
    Host string `env:"HOST" env-description:"server host"`
}

var cfg ConfigServer

help, err := cleanenv.GetDescription(&cfg, nil)
if err != nil {
    ...
}
You will get the following:
Environment variables:
  PORT  server port
  HOST  server host
Model Format
The library uses tags to configure the model of the configuration structure. There are the following tags:
- env="<name>" - environment variable name (e.g. env="PORT");
- env-upd - flag to mark a field as updatable. Run UpdateEnv(&cfg) to refresh updatable variables from the environment;
- env-required - flag to mark a field as required. If set, an error is returned during environment parsing when the field flagged as required is empty (default Go value). The tag env-default is ignored in this case;
- env-default="<value>" - default value. If the field wasn't filled from the environment variable, the default value will be used instead;
- env-separator="<value>" - custom list and map separator. If not set, the default separator "," will be used;
- env-description="<value>" - environment variable description;
- env-layout="<value>" - parsing layout (for types like time.Time);
- env-prefix="<value>" - prefix for all fields of a nested structure (only for nested structures).
Supported types
The following types are supported:
- int (any kind);
- float (any kind);
- string;
- boolean;
- slices (of any other supported type);
- maps (of any other supported type);
- time.Duration;
- time.Time (layout by default is RFC3339, may be overridden by env-layout);
- any type implementing the cleanenv.Setter interface.
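To make the slice and duration handling concrete, here is a small standalone sketch of the kind of parsing a reader like cleanenv performs for list and time.Duration fields. The helper name parseList is illustrative and is not part of cleanenv's API:

```go
package main

import (
	"fmt"
	"os"
	"strings"
	"time"
)

// parseList splits a raw environment value into a slice of trimmed
// strings, using "," as the default separator (cleanenv's default).
func parseList(raw, sep string) []string {
	if sep == "" {
		sep = ","
	}
	parts := strings.Split(raw, sep)
	for i := range parts {
		parts[i] = strings.TrimSpace(parts[i])
	}
	return parts
}

func main() {
	os.Setenv("HOSTS", "a.example.com, b.example.com")
	os.Setenv("TIMEOUT", "1m30s")

	hosts := parseList(os.Getenv("HOSTS"), "")
	timeout, err := time.ParseDuration(os.Getenv("TIMEOUT"))
	if err != nil {
		panic(err)
	}

	fmt.Println(len(hosts)) // 2
	fmt.Println(hosts[1])   // b.example.com
	fmt.Println(timeout)    // 1m30s
}
```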
Custom Functions
To enhance the package's abilities you can use some custom functions.
Custom Value Setter
To make a custom type able to set its value from an environment variable, you need to implement the Setter interface on the field level:
type MyField string

func (f *MyField) SetValue(s string) error {
    if s == "" {
        return fmt.Errorf("field value can't be empty")
    }
    *f = MyField("my field is: " + s)
    return nil
}

type Config struct {
    Field MyField `env:"MY_VALUE"`
}
The SetValue method should implement the conversion logic from string to the custom type.
Custom Value Update
You may need to execute some custom field update logic, e.g. for remote config load.
Thus, you need to implement the Updater interface on the structure level:
type Config struct {
    Field string
}

func (c *Config) Update() error {
    newField, err := SomeCustomUpdate()
    c.Field = newField
    return err
}
Supported File Formats
There are several most popular config file formats supported:
- YAML
- JSON
- TOML
- ENV
- EDN
Integration
The package can be used with many other solutions. To make it more useful, we made some helpers.
Flag
You can use the cleanenv help together with the Golang flag package.
// create some config structure
var cfg config

// create flag set using `flag` package
fset := flag.NewFlagSet("Example", flag.ContinueOnError)

// get config usage with wrapped flag usage
fset.Usage = cleanenv.FUsage(fset.Output(), &cfg, nil, fset.Usage)

fset.Parse(os.Args[1:])
Examples
type Config struct {
    Port string `yaml:"port" env:"PORT" env-default:"8080"`
    Host string `yaml:"host" env:"HOST" env-default:"localhost"`
}

var cfg Config

err := ReadConfig("config.yml", &cfg)
if err != nil {
    ...
}
This code will try to read and parse the configuration file config.yml as the structure is described in the Config structure. Then it will overwrite fields from available environment variables (PORT, HOST).
For more details check the example directory.
Contribution
The tool is open-sourced under the [MIT](LICENSE) license.
If you find an error, want to add something, or want to ask a question, feel free to create an issue and/or make a pull request.
Any contribution is welcome.
Thanks
Big thanks to a project kelseyhightower/envconfig for inspiration.
The logo was made by alexchoffy.
Blog Posts
Clean Configuration Management in Golang.
*Note that all licence references and agreements mentioned in the cleanenv README section above are relevant to that project's source code only. | https://go.libhunt.com/cleanenv-alternatives | CC-MAIN-2021-43 | en | refinedweb |
Based on Guido's opinion that caller and callee should both be marked, I have used keywords 'include' and 'chunk'. I therefore call them "Chunks" and "Includers".
Examples are based on
(1) The common case of a simple resource manager. e.g.
(2) Robert Brewer's Object Relational Mapper which uses several communicating Chunks in the same Includer, and benefits from Includer inheritance.
Note that several cooperating Chunks may use the same name (e.g. old_children) to refer to the same object, even though that object is never mentioned by the Includer.
It is possible for the same code object to be both a Chunk and an Includer. Its own included sub-Chunks also share the top Includer's namespace.
Chunks and Includers must both be written in pure python, because C frames cannot be easily manipulated. They can of course call or be called (as a unit) by extension modules.
I have assumed that Chunks should not take arguments. While arguments are useful ("Which pattern should I match against on this inclusion?"), the same functionality *can* be had by binding a known name in the Includer. When that starts to get awkward, it is a sign that you should be using separate namespaces (and callbacks, or value objects).
"self" and "cls" are just random names to a Chunk, though using them for any but the conventional meaning will be as foolhardy as it is in a method.
Chunks are limited to statement context, as they do not return a value.
Includers must provide a namespace. Therefore a single inclusion will turn the entire nearest enclosing namespace into an Includer. ? Should this be limited to nearest enclosing function or method? I can't think of a good use case for including directly from class definition or module toplevel, except registration. And even then, a metaclass might be better.
Includers may only be used in a statement context, as the Chunks must be specified in a following suite. (It would be possible to skip the suite if all Chunk names are already bound, but I'm not sure that is a good habit to encourage -- so initially forbid it.)
Chunks are defined without a (), in analogy to parentless classes. They are included (called) with a (), so that they can remain first class objects.
Example Usage
=============
def withfile(filename, mode='r'):
    """Close the file as soon as we're done.

    This frees up file handles sooner. This is particularly important
    under Jython, or if you are using files in cyclic structures."""
    openfile = open(filename, mode)
    try:
        include fileproc()  # keyword 'include' prevents XXX_FAST optimization
    finally:
        openfile.close()

chunk nullreader:  # callee Chunk defined for reuse
    for line in openfile:
        pass

withfile("testr.txt"):  # Is this creation of a new block-starter a problem?
    fileproc = nullreader  # Using an external Chunk object

withfile("testw.txt", "w"):
    chunk fileproc:  # Providing an "inline" Chunk
        openfile.write("Line 1")

# If callers must be supported in expression context
#fileproc = nullreader
#withfile("tests.txt")  # Resolve Chunk name from caller's default
#                       # binding, which in this case defaults back
#                       # to the current globals.
# Is this just asking for trouble?
class ORM(object):

    chunk nullchunk:  # The extra processing is not always needed.
        pass
    begin = pre = post = end = nullchunk  # Default to no extra processing

    def __set__(self, unit, value):
        include self.begin()
        if self.coerce:
            value = self.coerce(unit, value)
        oldvalue = unit._properties[self.key]
        if oldvalue != value:
            include self.pre()
            unit._properties[self.key] = value
            include self.post()
        include self.end()

class TriggerORM(ORM):
    chunk pre:
        include super(self, TriggerORM).pre()  # self was bound by __set__
        old_children = self.children()  # inject new variable

    chunk post:
        include super(self, TriggerORM).post()
        for child in self.children():
            if child not in old_children:  # will see pre's binding
                notify_somebody("New child %s" % child)
As Robert Brewer said,
The above is quite ugly written with callbacks (due to excessive argument passing), and is currently fragile when overriding __set__ (due to duplicated code).
How to Implement
----------------
The Includer cannot know which variables a Chunk will use (or inject), so the namespace must remain a dictionary. This precludes use of the XXX_FAST bytecodes. But as Robert pointed out, avoiding another frame creation/destruction will compensate somewhat.
Two new bytecodes will be needed to handle the jump and return to a different bytecode string without setting up or tearing down a new frame. Position in the Includer bytecode will need to be kept in a stack, though it might make sense to use a frame variable instead of the execution stack.
With those two exceptions, the Includer and Chunk are both composed entirely of valid statements that can already be compiled to ordinary bytecode.
-jJ | https://mail.python.org/archives/list/python-dev@python.org/thread/QESV4CYKDGAX7OHD5DE3EZB5KYIUJCBA/?sort=date | CC-MAIN-2021-43 | en | refinedweb |
Introduction: Cyborg Computer Mouse
Many studies suggest that the posture used with a conventional computer mouse can be hazardous. The mouse is a standard piece of computer equipment, and computer users use the mouse almost three times as much as the keyboard. As exposure rates are high, improving upper-extremity posture while using a computer mouse is very important.
For this abstract project we will be making a wearable that allows people to move through a computer screen without the need for external technology. That way we can use the hand's natural movements instead of clicking a device on a horizontal surface. This also allows using screens while standing, making oral presentations more pleasant.
As for the prototype, we will be using the index finger as a joystick, the middle finger for left-clicking, the ring finger for right-clicking and the pinky for turning the device on and off. The thumb will act as the surface the buttons get pressed against. All of this will be added onto a glove.
Supplies
- (x1) Arduino Leonardo
- (x1) Protoboard
- (x1) Joystick module
- (x3) Pushbutton
- (x20±) Wire jumpers
- (x3) Resistors of 1 kΩ
- (x1) Glove
- Sewing kit
- Velcro
- Hot silicone (glue)
- Wire
- Soldering kit
- 3D printed part
Step 1: Set Up the Hardware
We have included a Fritzing sketch for a better understanding of the design. We recommend mounting the components on a protoboard first. That way you can check that everything is working before soldering.
Attachments
Step 2: Upload the Code and Test
Once the connections are made, connect the USB A (M) to micro USB B (M) cable from the computer to the Arduino Leonardo and upload the sketch. Feel free to copy, modify and improve on the sketch.
WARNING: When you use the Mouse.move() command, the Arduino takes over your mouse! Make sure you have control before you use the command. It only works for Arduino Leonardo, Micro or Due
Here is our code for this project:
// Define Pins
#include <Mouse.h>

const int mouseMiddleButton = 2; // input pin for the mouse middle Button
const int startEmulation = 3;    // switch to turn on and off mouse emulation
const int mouseLeftButton = 4;   // input pin for the mouse left Button
const int mouseRightButton = 5;  // input pin for the mouse right Button

int mouseMiddleState = 0;

boolean mouseIsActive = false; // whether or not to control the mouse
int lastSwitchState = LOW;     // previous switch state

void setup() {
  pinMode(startEmulation, INPUT);    // the switch pin
  pinMode(mouseMiddleButton, INPUT); // the middle mouse button pin
  pinMode(mouseLeftButton, INPUT);   // the left mouse button pin
  pinMode(mouseRightButton, INPUT);  // the right mouse button pin
}

void loop() {
  // LEFT
  // read the mouse button and click or not click:
  // if the mouse button is pressed:
  if (digitalRead(mouseLeftButton) == HIGH) {
    // if the mouse is not pressed, press it:
    if (!Mouse.isPressed(MOUSE_LEFT)) {
      Mouse.press(MOUSE_LEFT);
      delay(100); // delay to enable single and double-click
      Mouse.release(MOUSE_LEFT);
    }
  }
  // else the mouse button is not pressed:
  else {
    // if the mouse is pressed, release it:
    if (Mouse.isPressed(MOUSE_LEFT)) {
      Mouse.release(MOUSE_LEFT);
    }
  }

  // RIGHT
  // read the mouse button and click or not click:
  // if the mouse button is pressed:
  if (digitalRead(mouseRightButton) == HIGH) {
    // if the mouse is not pressed, press it:
    if (!Mouse.isPressed(MOUSE_RIGHT)) {
      Mouse.press(MOUSE_RIGHT);
      delay(100); // delay to enable single and double-click
      Mouse.release(MOUSE_RIGHT);
    }
  }
  // else the mouse button is not pressed:
  else {
    // if the mouse is pressed, release it:
    if (Mouse.isPressed(MOUSE_RIGHT)) {
      Mouse.release(MOUSE_RIGHT);
    }
  }

  // MIDDLE
  // read the mouse button and click or not click:
  // if the mouse button is pressed:
  if (digitalRead(mouseMiddleButton) == HIGH) {
    // if the mouse is not pressed, press it:
    if (!Mouse.isPressed(MOUSE_MIDDLE) && mouseMiddleState == 0) {
      Mouse.press(MOUSE_MIDDLE);
      mouseMiddleState = 1; // update the button state
    }
  }
  // else the mouse button is not pressed:
  else {
    // if the mouse is pressed, release it:
    if (Mouse.isPressed(MOUSE_MIDDLE) && mouseMiddleState == 1) {
      Mouse.release(MOUSE_MIDDLE);
      mouseMiddleState = 0;
    }
  }

  delay(responseDelay);
}

/* reads an axis (0 or 1 for x or y) and scales the
   analog input range to a range from 0 to */
int readAxis(int thisAxis) {
  // read the analog input:
  int reading = analogRead(thisAxis);

  // map the reading from the analog input range to the output range:
  reading = map(reading, 0, 1023, 0, cursorSpeed);

  // if the output reading is outside from the
  // rest position threshold, use it:
  int distance = reading - center;

  if (abs(distance) < threshold) {
    distance = 0;
  }

  // return the distance for this axis:
  return distance;
}
Step 3: Mounting the Prototype
The first step is sewing the velcro to the glove: you have to sew four strips of velcro, one to each finger. We sewed on the soft part of the velcro.
Each pushbutton has two wires, one that starts at the respective pin and connects to the positive leg of the button, and another on the negative leg. At the other end of the negative wire we solder the resistors of each button plus the negative wire of the joystick to one last wire, which connects to the GND of the Arduino board. The same parallel connection works for the positive side. (3 buttons and joystick positive leg)
After soldering the jumpers we will put on the hard velcro-strips, so that the wires will get stuck in between. Lastly we thermo-glued the joystick module to a 3D printed piece. Below you can find the .STL file.
Attachments
Step 4: Start Using Your Hand As a Mouse!
Vote for us in the Assistive Tech Contest if you enjoyed the project.
Participated in the
Assistive Tech Contest
Be the First to Share
Recommendations
9 Comments
1 year ago
Great project, but unfortunately it will not work fine.
your connections of fritz and connections in code does not matched..
your Vertical/Y Axis pin of joystick connected to A2 of Arduino, but in code you define it A0.
an extra token in line 1 when include mouse.h
1 year ago
I have a question what protoboard are you using . double sided/single sided and the size. will be a great help. btw im making this for my project so it will be nice if you reply fast. :)
Reply 1 year ago
You can use Leonardo, Micro and Due boards. We usted the Leonardo R3 from the brand KEYESTUDIO (model KS0248).
1 year ago
Your Code is not working with arduino Uno micro.it showed the error on compling.
Reply 1 year ago
Bro Arduino Uno is Not Compatible with Mouse and Keyboard Functions. Only some Arduino Boards like Leonardo are Compatible. You can Google It. Also, with help of Python if you know you can use Arduino Easily. Hope this Answer Sastifies your Question
Reply 1 year ago
Hy Kamal! I had some problems copy/pasting the arduino sketch code into my instructable. Apparently some symbols didn't get pasted right. I have checked it and now it should work right. If I could I would upload the .ino document, but I think they also have a bug with that :(
Add " #include <Mouse.h>; " at the beggining of your code.
Please do not hesitate to contact me if you have any further questions.
1 year ago
Beautiful!
1 year ago
well done!
Tip 1 year ago
Great project. Another idea is to press the middle and ring finger against the palm of your hand. This way one can move the mouse and click it at the same time. | https://www.instructables.com/Cyborg-Computer-Mouse/?utm_source=newsletter&utm_medium=email | CC-MAIN-2021-43 | en | refinedweb |
The story shows an important period in the history of software engineering between the 1990s and the 2000s, which includes the background of the birth of Agile software development, software product line engineering (SPLE) and the eXtreme derivative development process (XDDP).
Most of the actual cost of software development is personnel expenses because it is a human-intensive…
Software testing is an activity as part of verification and validation or software verification and validation.
Read the definitions of verification and validation from the article “Verification and validation” of Wikipedia.
In other words, verification is to check whether or not software or system meets to its requirements, while validation…
If a program has no structure and is chaotic as the below figure shows, the following quality characteristics will be getting worse:
This is called commonly, “a status like spaghetti”.
Such a program can be replaced into a nested program (the below figure shows)……
Traditional and typical kinds of a human resource are a generalist and a specialist. A generalist has literacy, knowledge and skills for all fields. On the other hands, a specialist has a specified knowledge and skills to one field, which also is called I-typed human resource (I型人材) in Japan.
In…
I've just released Pelemay 0.0.6:
A new feature of this release is to support String.replace.
defmodule M do
require Pelemay
import Pelemay
defpelemay do
def string_replace(subject) do
String.replace(&1, "Fizz", "Buzz")
end
def enum_map_string_replace(list) do
list
|> Enum.map(& String.replace(&1, "Fizz", "Buzz"))
end
end
end
This code is 4x faster than…
Thank ElixirConf for giving me another chance to make a presentation at ElixirConf US 2019:
This presentation will be conducted by me and Mr. Hisae, who is a graduate student in my laboratory and a co-author of Hastega. He's a great meta-programmer because he wrote a new feature of……
Call me ZACKY. I'm a researcher of Elixir. My works are including Pelemay, (its old name is Hastega) . | https://zacky1972.medium.com/?source=post_internal_links---------7---------------------------- | CC-MAIN-2021-43 | en | refinedweb |
#include <itkKdTree.h>
data structure for storing k-nearest neighbor search result (k number of Neighbors)
This class stores the instance identifiers and the distance values of k-nearest neighbors. We can also query the farthest neighbor's distance from the query point using the GetLargestDistance method.
Definition at line 581 of file itkKdTree.h.
Constructor
Definition at line 586 of file itkKdTree.h.
Destructor
Returns the distance of the farthest neighbor from the query point
Definition at line 610 of file itkKdTree.h.
Returns the instance identifier of the index-th neighbor among k-neighbors
Definition at line 645 of file itkKdTree.h.
Returns the vector of k-neighbors' instance identifiers
Definition at line 637 of file itkKdTree.h.
Replaces the farthest neighbor's instance identifier and distance value with the id and the distance
Definition at line 618 of file itkKdTree.h.
References itk::NumericTraits< T >::min().
Initialize the internal instance identifier and distance holders with the size, k
Definition at line 598 of file itkKdTree.h.
External storage for the distance values of k-neighbors from the query point. This is a reference to external vector to avoid unnecessary memory copying.
Definition at line 660 of file itkKdTree.h.
The index of the farthest neighbor among k-neighbors
Definition at line 652 of file itkKdTree.h.
Storage for the instance identifiers of k-neighbors
Definition at line 655 of file itkKdTree.h. | https://itk.org/Doxygen/html/classitk_1_1Statistics_1_1KdTree_1_1NearestNeighbors.html | CC-MAIN-2021-43 | en | refinedweb |
I finally have the luxury of time to learn new things, in which I decided to beef up some of my cryptography knowledge. A basic cryptography category in which certain CTFs present is a classic XOR challenge.
Being a person with ZERO knowledge in cryptography, some research were needed. So in a nutshell, XOR is the operation of taking 2 bits, putting in through the XOR operation or also known as
^.
Some XOR rules:
1 ^ 1 = 0
1 ^ 0 = 1
0 ^ 1 = 1
0 ^ 0 = 0
Since XOR works at the bit level, in order to encryt a message like
ATTACK AT DAWN, the message needs to be in bit representation before taking each for a spin in the XOR operator.
As this was just a simple practice, I decided to expeirment using a single key. Take the example of the following.
Message is
ATTACK AT DAWN
Key chosen is
H
So to encrypt that message using XOR, which individual character have to be XOR-ed by the key, therefore a conersion was needed. Thankfully, converting from string to bits was easy, by using the following
bin(ord('a')), then putting it into a list (Using python over here).
Since some of the XOR cipher text is not readable as text, I encoded the cipher text in base64 to make it "transportable". In which, a decode is needed before passing the message through the decryption process.
Here is the code sample code for encryption and decryption:
import base64 #Encryption cipher_bits = [] input_string = 'ATTACK AT DAWN' key = 'H' print "XOR Key: " + key key_bin = (bin(ord(key)))[2:] key_left_zeros = '0' * (8-len(key_bin)) new_key_bin = key_left_zeros + key_bin # print new_key_bin str_bits = [] for ch in input_string: tmp_bit = (bin(ord(ch)))[2:] bit_left_zeros = '0' * (8 - len(tmp_bit)) new_bit = bit_left_zeros + tmp_bit str_bits.append(list(new_bit)) # print str_bits temp_bits = '' for i in range(len(str_bits)): for j in range(len(str_bits[i])): temp_bits += str(int(str_bits[i][j]) ^ int(new_key_bin[j])) cipher_bits.append(list(temp_bits)) temp_bits = '' # print cipher_bits tmp_bits_holder = [] for i in range(len(cipher_bits)): tmp_bits_holder.append(''.join(cipher_bits[i])) # print tmp_bits_holder tmp_cipher_text = '' tmp_ch_holder = '' for i in range(len(tmp_bits_holder)): tmp_ch_holder = chr(int(tmp_bits_holder[i],2)) tmp_cipher_text += tmp_ch_holder tmp_ch_holder = '' new_cipher_text = base64.b64encode(tmp_cipher_text) print "XOR RAW Encrypted" + tmp_cipher_text print "XOR RAW Encrypted Encoded: " + new_cipher_text #Decryption cipher_text = base64.b64decode(new_cipher_text) cipher_str_bits = [] for ch in cipher_text: c_tmp_bit = (bin(ord(ch)))[2:] c_bit_left_zeros = '0' * (8 - len(c_tmp_bit)) c_new_bit = c_bit_left_zeros + c_tmp_bit cipher_str_bits.append(list(c_new_bit)) # print cipher_str_bits c_temp_bits = '' message_bits = [] for i in range(len(cipher_str_bits)): for j in range(len(cipher_str_bits[i])): c_temp_bits += str(int(cipher_str_bits[i][j]) ^ int(new_key_bin[j])) message_bits.append(list(c_temp_bits)) c_temp_bits = '' # print message_bits tmp_message_bolder = [] for bits in message_bits: tmp_message_bolder.append(''.join(bits)) decrypted_message = '' for i in tmp_message_bolder: decrypted_message += chr(int(i,2)) print "XOR Decrypted: " + decrypted_message | https://www.nullsession.pw/xor-xor/ | CC-MAIN-2021-43 | en | refinedweb |
Hi everyone,
In my application (for producing pdf reports), I need to render some
rhtml templates to html files on the disk, because I then push them
through apache fop and css2xslfo to produce a neat (not so little) pdf
out of my html+css combo, ready for print :). That said, the whole
process of calculating the figures, rendering the html and then
producing the pdf takes about 5min on my dev box, and still about 2min
on my production server. Needless to say, I want this to run in a
BackgrounDrb worker (thx for this wonderful plugin btw
Since rendering the html file alone takes up almost 1 minute (intense
db and svg image creation included), I need to put this call to
ActionController#render in a background worker. As there is no way to
access the controller in a backgroundrb worker (as far as i know), I
am looking for a generic way to use ActionPack/View outside of a Rails
controller. Currently I have a rather ugly approach, that lets me
render my templates/partials from anywhere (from my model class atm)
BUT, layouts don’t work, since they seem to be implemented in
ActionController. My current code is as follows:
def render(options, assigns = {})
ActionView::Base.class_eval do require 'rubygems' require 'RMagick' require 'scruffy' require 'application' include ApplicationHelper include Admin::AdminHelper include Admin::ReportsHelper include Admin::SurveyCalculationsHelper end viewer =
ActionView::Base.new(Rails::Configuration.new.view_path, assigns,
Admin::SurveyCalculationsController.new)
viewer.render options
end
As I said, ugly! With all those class_eval includes and requires, a
call to the above render method, correctly renders my partial (along
with local vars passed). However, my partials yield to different
places in the layout, using content_for, so it’s not convenient to
just wrap my partial code with what is now the layout file.
Does anyone know how to do ActionController style render with layout/
partial/localvar support outside of a ActionController instance?
Any help appreciated!
Martin G. | https://www.ruby-forum.com/t/how-can-i-use-actionpack-to-render-rhtml-outside-of-a-contro/86724 | CC-MAIN-2021-43 | en | refinedweb |
CanvasAgg demo¶
This: Retrieve a view on the renderer buffer... canvas.draw() buf = canvas.buffer_rgba() # ... convert to a NumPy array ... X = np.asarray(buf) # ... and pass it to PIL. from PIL import Image im = Image.fromarray(X) # Uncomment this line to display the image using ImageMagick's `display` tool. # im.show()
References¶
The use of the following functions, methods, classes and modules is shown in this example:
import matplotlib matplotlib.backends.backend_agg.FigureCanvasAgg matplotlib.figure.Figure matplotlib.figure.Figure.add_subplot matplotlib.figure.Figure.savefig matplotlib.axes.Axes.plot
Out:
<function Axes.plot at 0x7f154d1994c0>
Keywords: matplotlib code example, codex, python plot, pyplot Gallery generated by Sphinx-Gallery | https://matplotlib.org/3.2.0/gallery/user_interfaces/canvasagg.html | CC-MAIN-2021-43 | en | refinedweb |
If you are a beginner in data science, then you must be confused about which datasets you should use for improving your data science skills. If you have never worked on any dataset before then you should only choose datasets that are meant for beginners. In this article, I will take you through the best datasets for data science beginners that you can use to improve your data science skills.
Best Datasets for Data Science Beginners
There are so many datasets available online for data science beginners. You can also find some of the datasets already available in Python libraries like Scikit-learn and Tensorflow. Below are some of the best datasets for Data Science beginners that you can try one by one.
Iris Dataset:
The Iris dataset is one of the most popular datasets among the data science community. It contains the data about three Iris species; setosa, versicolor, and virginica. This dataset is based on the problem of classification where every iris belongs to one of the three species. So your task here is to classify the species of the flower. You can download this dataset from here, but as this is already included in the Scikit-learn library in Python so you can also import it using the code below:
from sklearn.datasets import load_iris iris_dataset = load_iris()
Titanic Dataset:
Another popular dataset among the data science community for beginners is the Classic Titanic dataset. This dataset contains data on demographic and travel information of Titanic passengers and our goal is to predict the survival of these passengers. This dataset is also based on the classification problem. You can download this dataset from here.
Stock Prices Dataset:
You can use the historical data of stock prices to predict the future prices of a company. You can use the same strategy to predict the prices of bitcoin. Predicting future prices is the problem of regression. To download such datasets follow the steps mentioned below:
- visit Yahoo Finance
- then search for any company, let’s say Apple
- Search for Apple and you will get to see the latest stock prices of Apple
- then click on Historical Prices
- and then click on download.
These steps will help you to download the stock price data for the past 365 days.
MNIST Dataset:
The MNIST dataset is so popular dataset among the data science community that it is also known as the hello world of machine learning. It contains 70,000 small images of handwritten digits where each image is labelled with the digit that it represents. This dataset is already available in the Scikit-learn library in Python. You can import it by using the code below:
from sklearn.datasets import fetch_openml mnist = fetch_openml('mnist_784', version=1)
Summary
So these were some of the best datasets for data science beginners. You can practice so much with these datasets. After working on these datasets you can start working on some more complex problems. You can find so many machine learning projects solved and explained based on complex problems from here. I hope you liked this article on the best datasets for Data Science beginners. Feel free to ask your valuable questions in the comments section below. | https://thecleverprogrammer.com/2021/04/15/best-datasets-for-data-science-beginners/ | CC-MAIN-2021-43 | en | refinedweb |
Selenium Tutorial
- Before You Start
- Introduction to Selenium
- Setup Eclipse
- WebDriver
- Browser and Navigation Commands
- Locators
- Element Identification
- Tables, Checkboxes and Radio Buttons
- Selenium Waits, Alerts and Switch Windows
- Action Class
- TestNG Framework
- Database Connections
- Data Driven Automation Framework
- JUnit Java Framework
- Maven
- Page Object Model (POM)
- Miscellaneous
Java Basics
Java is a high-level, robust, secure, object-oriented programming (OOP) language. It was developed by James Gosling, Mike Sheridan, and Patrick Naughton in 1995 for Sun Microsystems, which was later acquired by Oracle Corporation. Java technology is used to develop standalone, web, enterprise, and mobile applications, as well as games, smart cards, embedded systems, and robotics, for a wide range of environments, from consumer devices to heterogeneous enterprise systems. The Java language, a C-language derivative, has its own structure, syntax rules, and programming paradigm. It helps to create modular programs and reusable code that can run on a variety of platforms, such as Windows, Mac OS, and the various versions of UNIX.

With the advancement of Java, multiple configurations were built to suit various types of platforms, such as J2EE for enterprise applications and J2ME for mobile applications, alongside Java SE, Java EE, and Java ME.
JAVA Components
The Java language’s programming paradigm is based on the concept of OOP, which the language’s features support. Structurally, the Java language starts with packages. A package is the Java language’s namespace mechanism. Within packages are classes, and within classes are methods, variables, and constants. The Java Runtime Environment (JRE) includes the Java Virtual Machine (JVM), code libraries, and components that are necessary for running programs written in the Java language. The JRE is available for multiple platforms and can be freely redistributed with applications, according to the terms of the JRE license, to give the application’s users a platform on which to run your software. The JRE is included in the Java Development Kit (JDK).
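The structural nesting described above can be sketched in code. The package name and the `Circle` class below are hypothetical, chosen only to show where a package declaration, a class, a constant, a variable (field), a constructor, and a method sit relative to one another:

```java
// A package declaration would appear first in the source file, e.g.:
// package com.example.shapes;   (hypothetical package name)

public class Circle {
    public static final double PI = 3.14159; // a constant inside the class

    private double radius;                   // a variable (field)

    public Circle(double radius) {           // a constructor
        this.radius = radius;
    }

    public double area() {                   // a method
        return PI * radius * radius;
    }
}
```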
Java Virtual Machine
A JVM is an abstract computing machine, or virtual machine because it doesn’t physically exist. It is a platform-independent execution environment that converts Java bytecode into machine language and executes it.
How JVM works?
A Java program is written and saved as .java extension. The complier checks the code against the language’s syntax rules and then writes out bytecode in .class files. Bytecode is a set of instructions targeted to run on a JVM. The byte code can run on any platform such as Windows, Linux, Mac OS. Each operating system has different JVM, however the output they produce after execution of bytecode is same across all operating systems. The pictorial representation of compilation and execution of a Java program is:
The Compiler (javac) converts source code (.java file) to the byte code (.class file). The JVM executes the bytecode produced by compiler. At runtime, the JVM reads and interprets .class files and executes the program’s instructions on the native hardware platform for which the JVM was written. The JVM interprets the bytecode just as a CPU would interpret assembly-language instructions.
JVM Architecture
The components of the JVM are explained below:
- ClassLoader: The class loader is a subsystem used for loading class files. It performs three major functions as Loading, Linking, and Initialization.
- Method Area: JVM Method Area stores class structures like metadata, the constant runtime pool, and the code for methods.
- Heap: All the Objects, their related instance variables, and arrays are stored in the heap. This memory is common and shared across multiple threads.
- JVM language Stacks: Java language Stacks store local variables, and it’s partial results. Each thread has its own JVM stack, created simultaneously as the thread is created. A new frame is created whenever a method is invoked, and it is deleted when method invocation process is complete.
- PC Registers: PC registers store the address of the Java Virtual Machine instruction that is currently executing. In Java, each thread has its own separate PC register.
- Native Method Stacks: Native method stacks hold the instruction of native code depends on the native library. It is written in another language instead of Java.
- Execution Engine: It is a type of software used to test hardware, software, or complete systems. The test execution engine never carries any information about the tested product.
- Native Method interface: The Native Method Interface is a programming framework. It allows Java code which is running in a JVM to call by libraries and native applications.
- Native Method Libraries: Native Libraries is a collection of the Native Libraries(C, C++) which are needed by the Execution Engine.
JVM Operations
The JVM performs following operation:
- Loads code
- Verifies code
- Executes code
- Provides runtime environment
Java Development Kit
The Java Development Kit (JDK) is a software development environment used to develop Java applications and applets. The JDK contains the JVM, an interpreter/loader, a compiler, an archiver, and a documentation generator, which together are required to complete the development of a Java application.
JRE Components

The JRE consists of the JVM, the Java class libraries, and other supporting files needed to run Java programs; it does not include development tools such as a compiler or debugger.

Eclipse IDE

Eclipse is a popular open-source Integrated Development Environment (IDE) for Java development. Eclipse handles basic tasks, such as code compilation and debugging, apart from providing an interface for writing and testing code. In addition, Eclipse can be used to organize source code files into projects, compile and test those projects, and store project files in any number of source repositories.
Difference between JVM, JRE and JDK
- JRE: JRE is the environment within which the java virtual machine runs. JRE contains Java virtual Machine(JVM), class libraries, and other files excluding development tools such as compiler and debugger. That means you can run the code in JRE but you can’t develop and compile the code in JRE.
- JVM: JVM runs the program by using class, libraries and files provided by JRE.
- JDK: JDK is a superset of JRE, it contains everything that JRE has along with development tools such as compiler, debugger.
Class Concepts
A class is a user-defined entity from which objects are created. An object can be defined as an instance of a class. An object has an address and takes up space in memory, whereas a class by itself does not. In other words, a class is a blueprint, a set of instructions to build a group of objects that share common properties.

For example, we can think of a class as the blueprint of a building. It contains all the details about the floors, doors, and windows. Based on these descriptions we build the building; the building is the object. Since many buildings can be made from the same description, we can create many objects from one class.
Object
It is a basic unit of Object-Oriented Programming and represents real-life entities. A typical Java object has state (the values held in its fields) and behavior (what its methods do).

If we consider the real world, we can find many objects around us: cars, dogs, humans.
Class Declaration
A Java class declarations can include these components, in order:
- Modifiers: A class can be public or has default access.
- Class name: The name should begin with a initial letter (capitalized by convention).
- Superclass: The name of the class’s parent (superclass), if any, preceded by the keyword extends. A class can only extend (subclass) one parent.
- Interfaces: A comma-separated list of interfaces implemented by the class, if any, preceded by the keyword implements. A class can implement more than one interface.
- Body: The class body surrounded by braces, { }.
Syntax of a class is:
class <class_name> {
    field;
    method;
}
Constructors
Constructors are used for initializing new objects. Fields are variables that provides the state of the class and its objects, and methods are used to implement the behavior of the class and its objects. There are various types of classes that are used in real time applications such as nested classes, anonymous classes, lambda expressions. Each time a new object is created, at least one constructor will be invoked. The main rule of constructors is that they should have the same name as the class. A class can have more than one constructor.
Example
public class Puppy {
    public Puppy() {
    }

    public Puppy(String name) {
        // This constructor has one parameter, name.
    }
}
- Instance variables: Instance variables are declared within a class, outside any method, and are initialized when the class is instantiated. Instance variables can be accessed from inside any method, constructor, or block of that particular class.
- Class variables: Class variables are variables declared within a class, outside any method, with the static keyword; a single copy is shared by all objects of the class.
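The difference can be seen in a small sketch (the Counter class here is hypothetical): each object gets its own copy of an instance variable, while a single static copy is shared across all objects of the class.

```java
public class Counter {
    int instanceCount = 0;        // instance variable: one copy per object
    static int classCount = 0;    // class (static) variable: one shared copy

    public Counter() {
        instanceCount++;          // affects only this object's copy
        classCount++;             // affects the single shared copy
    }

    public static void main(String[] args) {
        Counter a = new Counter();
        Counter b = new Counter();
        System.out.println(a.instanceCount);    // each object's own count is 1
        System.out.println(Counter.classCount); // the shared count reflects both
    }
}
```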
Creating an Object

A class provides the blueprint for objects, so an object is created from a class using the new keyword.
Following is an example of creating an object
public class Puppy {
    public Puppy(String name) {
        // This constructor has one parameter, name.
        System.out.println("Passed Name is :" + name);
    }

    public static void main(String[] args) {
        // The following statement creates the object myPuppy
        Puppy myPuppy = new Puppy("tommy");
    }
}
Output
Passed Name is :tommy
Initializing an object
The new operator instantiates a class by allocating memory for a new object and returning a reference to that memory. The new operator also invokes the class constructor.Example:
// Class Declaration
public class Dog {
    // Instance Variables
    String name;
    String breed;
    int age;
    String color;

    // Constructor Declaration of Class
    public Dog(String name, String breed, int age, String color) {
        this.name = name;
        this.breed = breed;
        this.age = age;
        this.color = color;
    }

    // method 1
    public String getName() {
        return name;
    }

    // method 2
    public String getBreed() {
        return breed;
    }

    // method 3
    public int getAge() {
        return age;
    }

    // method 4
    public String getColor() {
        return color;
    }

    @Override
    public String toString() {
        return ("Hi my name is " + this.getName()
                + ".\nMy breed,age and color are " + this.getBreed()
                + "," + this.getAge() + "," + this.getColor());
    }

    public static void main(String[] args) {
        Dog tuffy = new Dog("tuffy", "papillon", 5, "white");
        System.out.println(tuffy.toString());
    }
}
Output:
Hi my name is tuffy. My breed,age and color are papillon,5,white
Code Explanation:

- The Dog class declares four instance variables (name, breed, age, color) that hold the state of each object.
- The constructor uses the this keyword to copy the parameter values into the instance variables of the object being created.
- The getter methods return the current state, and the overridden toString() method builds a printable description of the object.
- In main(), the new operator creates a Dog object named tuffy and prints its description.
Ways to create object of a class
There are four ways to create objects in java.
Using the new keyword: It is the most common and general way to create an object in Java.
Example:
// creating object of class Test
Test t = new Test();
Using the Class.forName(String className) method: There is a pre-defined class in the java.lang package with the name Class. The forName(String className) method returns the Class object associated with the class with the given string name. We have to give the fully qualified name of the class. Calling the newInstance() method on this Class object returns a new instance of the class with the given string name.
// creating object of public class Test
// consider class Test present in com.p1 package
Test obj = (Test) Class.forName("com.p1.Test").newInstance();
Using the clone() method: the clone() method is present in the Object class. It creates and returns a copy of the object. (The class must implement the Cloneable interface, otherwise clone() throws CloneNotSupportedException.)
// creating object of class Test
Test t1 = new Test();
// creating clone of above object
Test t2 = (Test)t1.clone();
- Deserialization: De-serialization is a technique of reading an object from its saved state in a file. Refer to Serialization/De-Serialization in Java.
FileInputStream file = new FileInputStream(filename);
ObjectInputStream in = new ObjectInputStream(file);
Object obj = in.readObject();
Object Oriented Programming System Concepts
Object-Oriented Programming is a methodology or paradigm to design a program using classes and objects. It is a programming style associated with concepts like class, object, inheritance, encapsulation, abstraction, and polymorphism. Object-oriented languages follow a different programming pattern from structured programming languages like C and COBOL: they combine data and program instructions into objects.
Object
An object is a self-contained entity that contains attributes and behavior. Objects range from fine-grained ones, such as a Number, to coarse-grained ones, such as a FundsTransfer service in a large banking application. An object-based application in Java is based on declaring classes, creating objects from them, and interacting between these objects.
Principles of OOPs
The object-oriented paradigm supports four major principles
Inheritance
In OOP, computer programs are designed in such a way where everything is an object that interacts with one another. Inheritance is one such concept where the properties of one class can be inherited by the other. It helps to reuse the code and establish a relationship between different classes. In Java, there are two classes:
- Parent class (Super or Base class)
- Child class (Subclass or Derived class)
A class which inherits the properties is known as Child Class whereas a class whose properties are inherited is known as Parent class. The biggest advantage of Inheritance is that the code in base class need not be rewritten in the child class. That is the variables and methods of the base class can be used in the child class as well.
Syntax:
class A extends B { }
Here class A is child class and class B is parent class.
Example:
class Faculty {
    String designation = "Faculty";
    String college = "BookWorld";

    void does() {
        System.out.println("Teaching");
    }
}

public class ScienceFaculty extends Faculty {
    String mainSubject = "Science";

    public static void main(String args[]) {
        ScienceFaculty obj = new ScienceFaculty();
        System.out.println(obj.college);
        System.out.println(obj.designation);
        System.out.println(obj.mainSubject);
        obj.does();
    }
}
Output:
BookWorld
Faculty
Science
Teaching
Code Explanation:
- There is a parent class Faculty and a child class ScienceFaculty.
- In the ScienceFaculty class there is no need to write the same code which is already present in the parent class.
- The college name, designation, and does() method are common to all faculties, so the ScienceFaculty class does not need to repeat this code.
- The common data members and methods can be inherited from the Faculty class.
Polymorphism
Polymorphism in Java is a concept by which we can perform a single action in different ways. In other words, polymorphism is the ability of an object to take on many forms. The most common use of polymorphism in OOP occurs when a parent-class reference is used to refer to a child-class object. Polymorphism in Java can be achieved by method overloading and method overriding.
There are two types of polymorphism in java
- Compile-time polymorphism, achieved by method overloading
- Runtime polymorphism, achieved by method overriding
For example, let’s say we have a class Animal that has a method sound(). Since this is a generic class, we can’t give it a specific implementation like Roar, Meow, or Oink.
public class Animal {
    ...
    public void sound() {
        System.out.println("Animal is making a sound");
    }
}
Now let’s say we have two subclasses of Animal: Horse and Cat, each extending the Animal class and providing its own implementation of the same method. It would not make any sense to just call the generic sound() method, as each animal has a different sound. Thus we can say that the action this method performs is based on the type of the object.
Example 1: Runtime Polymorphism

Code of Animal.java:

public class Animal {
    public void sound() {
        System.out.println("Animal is making a sound");
    }
}

Code of Horse.java (the subclass overrides sound(), so the method that runs is chosen at runtime from the actual object type):

public class Horse extends Animal {
    @Override
    public void sound() {
        System.out.println("Neigh");
    }

    public static void main(String args[]) {
        Animal obj = new Horse();
        obj.sound();
    }
}

Output:

Neigh

Example 2: Compile-Time Polymorphism (Method Overloading)

class Overload {
    void demo(int a) {
        System.out.println("a: " + a);
    }

    void demo(int a, int b) {
        System.out.println("a and b: " + a + "," + b);
    }

    double demo(double a) {
        System.out.println("double a: " + a);
        return a * a;
    }
}

class MethodOverloading {
    public static void main(String args[]) {
        Overload obj = new Overload();
        double result;
        obj.demo(10);
        obj.demo(10, 20);
        result = obj.demo(5.5);
        System.out.println("O/P : " + result);
    }
}

Output:

a: 10
a and b: 10,20
double a: 5.5
O/P : 30.25

Code Explanation:

- In the runtime example, the reference obj is of type Animal but points to a Horse object, so the overridden sound() of Horse is executed; the JVM decides which method to call at runtime.
- In the overloading example, the class Overload has three demo() methods with different parameter lists; the compiler decides at compile time which version to call based on the arguments passed.
Abstraction
Abstraction is a process where only relevant data is shown to the user and unnecessary details of an object are hidden. For example, when a user logs in to his bank account online, he enters his user ID and password and presses login; the subsequent back-and-forth information transfer and verification is abstracted away from the user.

Abstraction in object-oriented programming can be achieved in various ways, such as encapsulation and inheritance.
A Java program is also a great example of abstraction. Here java takes care of converting simple statements to machine language and hides the inner implementation details from outer world.
Abstract Keyword (Abstract Classes and Methods)
An abstract class is never instantiated. When a class contains an abstract method, it must be declared as an abstract class. It is used to provide abstraction. Note that an abstract class does not provide 100% abstraction, because it may contain concrete methods as well.
Syntax:
abstract class class_name { }
Example of Abstract class
abstract class A {
    abstract void callme();
}

class B extends A {
    void callme() {
        System.out.println("this is callme.");
    }

    public static void main(String[] args) {
        B b = new B();
        b.callme();
    }
}
Output:
this is callme.
Note the key points about abstract classes:
- Abstract classes are not interfaces. They are different; we will study the difference when we study interfaces.
- An abstract class may or may not have an abstract method. But, if any class has even a single abstract method, then it must be declared abstract.
- Abstract classes can have constructors, member variables and normal methods.
- Abstract classes are never instantiated.
- When you extend an abstract class with abstract method, you must define the abstract method in the child class or make the child class abstract.
An abstract class cannot be instantiated. Below is an example to demonstrate the same.

abstract class Student {
    // concrete (non-abstract) method
    public void name() {
        System.out.println("Name is Adam");
    }

    // concrete (non-abstract) method
    public void marks() {
        System.out.println("Marks scored are 80");
    }

    public static void main(String args[]) {
        Student s1 = new Student(); // Error raised: abstract class cannot be instantiated
    }
}
Output:
Student.java:13: error: Student is abstract; cannot be instantiated
        Student s1 = new Student(); // Error raised: abstract class cannot be instantiated
1 error
Abstract Method
The methods that are declared without any body within an abstract class are called abstract methods. The method’s body, in this case, is defined by its subclass. An abstract method can never be final or static. Any class that extends an abstract class must implement all the abstract methods declared by the superclass.
Syntax:
abstract return_type method_name(); // no body
Abstract method in an abstract class
// abstract class
abstract class Sum {
    /* These two are abstract methods; the child class
     * must implement these methods */
    public abstract int sumOfTwo(int n1, int n2);
    public abstract int sumOfThree(int n1, int n2, int n3);

    // Regular method
    public void disp() {
        System.out.println("Method of class Sum");
    }
}

// Regular class extends abstract class
class Demo extends Sum {
    /* If I don't provide the implementation of these two methods, the
     * program will throw a compilation error. */
    public int sumOfTwo(int num1, int num2) {
        return num1 + num2;
    }

    public int sumOfThree(int num1, int num2, int num3) {
        return num1 + num2 + num3;
    }

    public static void main(String args[]) {
        Sum obj = new Demo();
        System.out.println(obj.sumOfTwo(3, 7));
        System.out.println(obj.sumOfThree(4, 3, 19));
        obj.disp();
    }
}
Output:
10 26 Method of class Sum
Note the key points about abstract method:
- Abstract methods don’t have body, they just have method signature.
- If a class has an abstract method it should be declared abstract, the vice versa is not true, which means an abstract class doesn’t need to have an abstract method compulsory.
- If a regular class extends an abstract class, then the class must have to implement all the abstract methods of abstract parent class or it has to be declared abstract as well.
Encapsulation
Encapsulation means putting together all the variables and the methods into a single unit called Class. The idea behind is to hide how things work and just exposing the requests a user can do. Encapsulation provides the security that keeps data and methods safe from inadvertent changes. Encapsulation can be achieved in Java by:
- Declaring the variables of a class as private.
- Providing public setter and getter methods to modify and view the variables values.
Example:
public class Student { private String name; public String getName() { return name; } public void setName(String name) { this.name = name; } public static void main(String[] args) { } }
Code Explanation:
- There is a class Student which has a private variable name and a getter and setter methods through which the name of a student can get and set.
- Through these methods, any class which wishes to access the name variable has to do it using these getter and setter methods.
String Class
String is a sequence of characters, for e.g. “World” is a string of 5 characters. In Java, string is a constant and cannot be changed once it has been created..
The Java language provides special support for the string concatenation operator (+), and for conversion of other objects to strings. String concatenation is implemented through the String Builder (or String Buffer) class and it’s append method. String conversions are implemented through the method to String, defined by Object and inherited by all classes in Java.
There are two ways to create a String object:
- By string literal: Java String literal is created by using double quotes. For Example: String str=“Hello”;
- By new keyword: Java String is created by using a keyword “new”. For Example: String str=new String(“Hello”);
Java String pool refers to collection of Strings which are stored in heap memory. In this, whenever a new object is created, String pool first checks whether the object is already present in the pool or not. If it is present, then same reference is returned to the variable else new object will be created in the String pool and the respective reference will be returned.
Example:
public class StringDemo { public static void main(String args[]) { char[] helloArray = { 'w', 'o', 'r', 'l', 'd', '.' }; String helloString = new String(helloArray); System.out.println( helloString ); } }
Output:
World.
Summary
- Java is a high level, robust, secured and object-oriented programming (OOP) language
- The Java Runtime Environment (JRE) includes the Java Virtual Machine (JVM), code libraries, and components that are necessary for running programs that are written in the Java language
- The Java Development Kit (JDK) is a software development environment which is used to develop Java applications, Java applets
- Abstraction is a process where only relevant data is shown to the user and unnecessary details of an object are hidden | https://tutorials.ducatindia.com/selenium/before-you-start/ | CC-MAIN-2021-43 | en | refinedweb |
On Wed, 11 Apr 2001 14:44:55 +1000, Peter Donald wrote:
>At 08:48 9/4/01 -0700, David Rees wrote:
>.
>
>I am not sure why you think such a system would be simpler. Aspect based
>systems are meant to be used to give fine grain separation of concerns. How
>the aspects are handled (ie Facilities in my terminology) is not set. In
>essence what you propose is to reclump all aspects into one again and then
>swap out facilities at runtime (ie essentially what Ant1 does with it's
>magic properties/loggers).
>
>This of course fails to provide for large projects who want need the extra
>flexibility to do their own thing. It also doesn't add anything on an
>aspect based system because we could always directly configure facilities
>to provide appropriate fgeatures (ie special ClassLoaderFacility for GUMP
>builds, BlameFacility for Alexandria, DocFacility for AntDoc etc).
>
>So I can't see how it is simpler or more useful for the **users** (though
>it would be simpler for us Ant developers).
>
I think are concepts are the same and its more of an API question
rather than one of aspect orientation. As a big proponent and someone
who has coded aspects into the compiler in those languages that I
could (Smalltalk) I don't think I am arguing against aspects. In fact,
in my experience, most aspect oriented solutions are not visible in
the code at the point where they are used. Instead, they are
installed/uninstalled as part of that classes configuration.
What I am suggested is that context represents this API for
installing/uninstalling aspects. As that they are explicitly supported
in the API.
I see the logfile attribute being on a LogContext element like:
<Context id="detailed">
<LogContext name="current" logfile="log.txt" />
</Context>
and I think you see it as a namespace delimited attribute (right?) on
the task itself:
<Copy log:.
dave | https://mail-archives.eu.apache.org/mod_mbox/ant-dev/200104.mbox/%3Coir7dt8b9qqkd9ofpkqca0a3utrh78dsgv@4ax.com%3E | CC-MAIN-2021-21 | en | refinedweb |
I read somewhere that one of the developers/contributors would like to see an IDE developed in Enigma; this implies to me that it would be possible, and if it's possible, I would love to get involved.
Don't take this the wrong way, but I actually do not agree with the complaints about LateralGM on Windows because of this. Sure LGM may not be designed well for a good plugin, but its problems almost always crop up when used with the ENIGMA plugin on Windows. LateralGM by itself rarely if ever (I've never seen it) segfault on its own all by itself, it's always the ENIGMA plugin crashing it. And this is not just because of LGM's poor plugin architecture it's the generally bad design of the ENIGMA plugin itself.
Robert, I see that you have basically remade LGM 5x over the past few years and it would have been awesome if you actually managed to finish one.
CLI is the easiest way to make a custom IDE compile a game with ENIGMA. That is why I encourage finishing it, because then HitCoder and other people who want to make an IDE wouldn't have to waste time somehow interfacing with ENIGMA. They would just need to make a writer for EGM which is a trivial format.
1) Change room data into XML.
2) Make a format that is basically extracted EGM.
It doesn't really matter what crashes. When the user opens LGM and it segfaults then of course the only thing he can blame is LGM.
On my laptop my plugin fixes made crashes almost non-existent (about 1 crash every 3 hours or so).
I am doing this release to fix the JoshEdit problems so that egofree is able to build LGM again and people can continue to develop it.
Yes it certainly does make it easier to get started, but ideally in a perfect world you still need compileEGMf to pass resources that have been opened but not saved in the IDE. This is the problem GM Studio's IDE has, it always forces you to save your changes before running, which 99% of the time nobody wants to do, Especially me, you make small changes just to test them.
I was going to do that, but clearly if that's all you want then there is GMX. XML is a bloated markup language and Josh already proposed an EYAML format to replace the binary room blobs. Anyway, I don't have time to do it, that's a substantial amount of work. The breaking change is fine though because I have backed up every old plugin and LateralGM version in multiple locations and they aren't hidden either, you can find them easily on the Wiki and I've linked it multiple times. So if you'd like to make the changes to the plugin, knock yourself out it needs done but nobody has time right now.
EGM is actually supposed to support this, it was never finished and I never had the interest in doing it. Also note the above comments regarding LGM still loading the entire project instead of just the resource you are currently editing.
Irrelevant
So push the changes to GitHub.
That is really not a problem as you can just write to a temporary..
And we did have posts with Josh talking about rooms in a non-binary mode. In it I showed how XML is actually not bloated if used correctly. Yet has more features than EYAML proposed. But I honestly don't care in which format it is.
What you said irrelevant.
I don't see how that is a lot of work.
Again, if you have EGM loading then this should be trivial. It should be even easier, as you must SKIP a step. Just don't open the .zip and instead read the folder structure.
This significantly complicates the overall process of passing the resources by requiring them to be loaded into the IDE, written to the disk, then read back from the disk on the other end before being passed from the CLI to compileEGMf.
functionThatSavesInEGM(randomlyGeneratedFileInTemp);system(cliPath+" -egm "+randomlyGeneratedFileInTemp);
Rusky has pointed out to me that XML supports namespaces and some other features I see, so excuse me for being naive originally. I was not aware of this but I think it is still debatable about how verbose XML can be, and I clearly tend to agree that it is. However, XML apparently does have some good data processing and querying features (which I was already aware of), but why exactly does ENIGMA need them? YAML is clearly/technically/colloquially not a markup language and really good for just plain old data, which our rooms basically are. So I would like to know if you had some ideas for utilizing the XML processing/querying features or not, because that is something to consider before making a decision.
I don't know what you want me to say. LGM has taken all of the necessary precautions to make sure the errors are properly reported, and so has the plugin framework. Even a native IDE would require a plugin framework so that it's not limited to only ENIGMA, it's a simple separation of duty.
Are we still not able to debug these compile errors you're having with the plugin or what?
It doesn't complicate anything as the IDE must be able to save EGM anyway and the CLI needs to be able to load it.
functionThatSavesInEGM(randomlyGeneratedFileInTemp);system(cliPath+" -egm "+randomlyGeneratedFileInTemp);nativeCodeThatReadsEGM();cliPassesDataToCompileEGMf();compileParseLinkAndStuffs();
pluginTellsCompileEGMfWhereToFindTheRemainingResources();pluginPassesDataToCompileEGMf();compileEGMfFindsResourcesTheIDEDidNotHaveLoaded();compileParseLinkAndStuffs();
I do agree that XML is more verbose than some alternatives, that is for sure.
I just meant that the excuses like "Plugin is crashing, not LGM" is futile here.
Removing one .ey entry can trow a segfault.
But internally, throughout the whole process, it turns into this...
I don't know, reading the XML is useful when a corruption occurs, that's why we don't want binary blobs. I wouldn't say XML is hard to read, I did HTML programming in junior high school. I don't know how I would have been towards YAML when I was younger. When I first saw it a few years ago in ENIGMA I wasn't sure I understood it, but it wasn't difficult even though it looked foreign. Now that I am older, I definitely prefer it just because it's easier on my eyes. That said, GMX is basically the XML format, we've done YAML in every other part of EGM. So that is basically why I don't support the room format being XML because that is inconsistent. Using YAML makes the EGM format unique from the others, and I really don't think we should do another format just because we want XML. There's not really a good argument at all here especially considering we already have the suitable YAML infrastructure.
The plugin gets a lot of these because it makes a lot of use of threads and tries to do GUI stuff from those threads.
Well maybe making it only use one thread is solution then? | https://enigma-dev.org/forums/index.php?PHPSESSID=bqdk6rsimloh8kasi9sfpqac60&topic=2599.0 | CC-MAIN-2021-21 | en | refinedweb |
Some time ago, my task was to write something like a virtual file system. Of course, I decided to use typed
DataSets because I have already written a framework to work and update them easily. With this technology, it is very easy to display the content of a folder. Relational
DataTables are very great tools for this. That�s all right, but when I saw the result - I died! That was not the look and feel that I had mentioned to my client! So I opened Google and started searching for
TreeView with data binding enabled. Of course, I found something: this (Data Binding TreeView in C#) was pretty but that was not the hierarchy in my mind; this (How to fill hierarchical data into a TreeView using base classes and data providers) was pretty too but I didn't understand why the author doesn't like standard binding. Both were not for me! And as a real Ukrainian man, I decided to write my own!
First of all, we need a method to fill all the data in the tree view. For this, I store references of the data items in the
ArrayList. This gives me an indicator of the items that are not in the tree. I will iterate through the items until the length of this array becomes 0. This realization will throw an exception if some of the items cannot find their places in the try. If you need another realization of this (for example do nothing or store these items on the bottom of the root node) please let me know. I will try to update this article.
ArrayList unsortedNodes = new ArrayList(); //This is list of items that still have no place in tree for (int i = 0; i < this.listManager.Count; i++) { //Fill this list with all items. unsortedNodes.Add(this.CreateNode(this.listManager, i)); } int startCount; //Iterate until list will not empty. while (unsortedNodes.Count > 0) { startCount = unsortedNodes.Count; for (int i = unsortedNodes.Count-1; i >= 0 ; i--) { if (this.TryAddNode((DataTreeViewNode)unsortedNodes[i])) { //Item found its place. unsortedNodes.RemoveAt(i); } } if (startCount == unsortedNodes.Count) { //Throw if nothing was done, in another way this //will continuous loop. throw new ApplicationException("Tree view confused when try to make your data hierarchical."); } } private bool TryAddNode(DataTreeViewNode node) { if (this.IsIDNull(node.ParentID)) { //If parent is null this mean that this is root node. this.AddNode(this.Nodes, node); return true; } else { if (this.items_Identifiers.ContainsKey(node.ParentID)) { //Parent already exists in tree so we can add item to it. TreeNode parentNode = this.items_Identifiers[node.ParentID] as TreeNode; if (parentNode != null) { this.AddNode(parentNode.Nodes, node); return true; } } } //Parent was not found at this point. return false; }
Okay� Now we have our tree view filled with all items. Second one that we need is to respond to external data changes. For this we need to handle the
ListChanged event of the current context.
((IBindingList)this.listManager.List).ListChanged += new ListChangedEventHandler(DataTreeView_ListChanged);
Realization of the handle is very simple.
private void DataTreeView_ListChanged(object sender, ListChangedEventArgs e) { switch(e.ListChangedType) { case ListChangedType.ItemAdded: //Add item here. break; case ListChangedType.ItemChanged: //Change node associated with this item break; case ListChangedType.ItemMoved: //Parent changed. break; case ListChangedType.ItemDeleted: //Item removed break; case ListChangedType.Reset: //This reset all data and control need to refill all data. break; } }
Now our control particularly supports data binding. You are able to see data, it will change synchronously with external data.
You can ask what additional functionality is required? -Oh! this only the start.
So next point: If you change the data source position our control will not change it. Currency manager has a
PositionChanged event. We will use it.
this.listManager.PositionChanged += new EventHandler(listManager_PositionChanged);
At the start point, we added index for positions and nodes according to them. This gives us an easy way to find a node by its position. So this short code will give us the ability to find the selected node by its position.
DataTreeViewNode node = this.items_Positions[this.listManager.Position] as DataTreeViewNode; if (node != null) { this.SelectedNode = node; }
Now you are not able to use this control as parent to your table. Basically all that we need is according to the selection of the node, change position of the context. This is not a problem as we store position of the item in each node. Make the
AfterSelect event:
private void DataTreeView_AfterSelect(object sender, System.Windows.Forms.TreeViewEventArgs e) { DataTreeViewNode node = e.Node as DataTreeViewNode; if (node != null) { //Set position. this.listManager.Position = node.Position; } }
Not a problem! Just use
AfterLabelEdit.
private void DataTreeView_AfterLabelEdit(object sender, System.Windows.Forms.NodeLabelEditEventArgs e) { DataTreeViewNode node = e.Node as DataTreeViewNode; if (node != null) { //This will found appropriate converter for //type and see if it can convert from string. if (this.PrepareValueConvertor() && this.valueConverter.IsValid(e.Label)//Lets converter to check value. ) { //Set property. this.nameProperty.SetValue( this.listManager.List[node.Position], this.valueConverter.ConvertFromString(e.Label) ); this.listManager.EndCurrentEdit(); return; } } //Node text are not editable. e.CancelEdit = true; }
As
DataSource you can use any data that, for example,
DataGrid can use. You can use any type of columns to bind. Basically, only the name column is limited to types that can be converted from
string. This applies only when
EditLabel is
true.
In most cases data must have three columns: Identifier, Name and Identifier of parent row. If you need something like 'FirstName + " " + LastName' as Name field - you can make autocomputed columns in
DataSet.
I am not including image index in binding because I didn't need it. Let me know if you need this functionality. I will update this article.
First bonus is full design time support. Unlike all other data bound trees on �Code Project�, this tree view has all standard designers that, for example,
DataGrid has (of course, with some changes, see
Design namespace). Second one is roundup framework bug with bottom scrollbar in
TreeView (scroll bar is visible all the time, even if it is not needed at all).
That�s all! Enjoy. Visit my blog for the latest news!
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/tree/bindablehierarchicaltree.aspx | crawl-002 | en | refinedweb |
It's hard to imagine that almost a year has gone by since my jab at Resharper's 3.0 lack of support for .NET 3.5. Yesterday I finally got around to installing thew newly released Resharper 4 and I'm more then blown away by some of the new features. Not only does it fully support the new syntax (lambdas, linq, anonymous types and so on), but it offers some nice new features.
The first thing I noticed was that the "Reformat" feature - which i use a lot - has been renamed to "Cleanup Code" and not only does more, but also supports profiles - so different code cleanup profiles can do different things. One thing i haven't figured out yet is how to edit the 2 default profiles
The next thing that surprised me was that Resharper suggested I use object initialization. So given:
task t = new Task();
t.Name = "Test";
and hitting alt-enter, resulted in:
Task t = new Task {Name = "Test"};
Similarly, Resharper suggests using implicit type variable. David already blogged about this - and like him, I also disabled this suggestion. However, if you're with JP on this, you'll certainly appreciate the helpful tip.
One feature I'm on the fence about is their JetBrains.Annotation assembly. With it, you can decorate your members usings JetBrain-specific attributes to provide even better integration. For example, given a method that behaves like string.Format, I can add a StringFormatMethod attribute:
[StringFormatMethod("key")]public void Put(string key, params object[] args) { ... }
This then allows Resharper to provide additional information, so if I do:
Put("testing {0}, {1}, {2}", 1, 2);
Resharper will tell me that {2} doesn't have a matching argument. It's a neat feature, but there's something strange about adding a JetBrain's 'dll to my project.
Generally, I think Resharper's a must-have. If you have an older version and aren't working on 3.5 code, then save your money. However, if you're doing even a little bit of 3.5 programming, then this thing is totally worth it. I have three complains/concerns.
First, I wish more of the windows docked. For example, I wish "Recent Edits" was dockable. While we're on the topic of recent edit, Resharper should look at what e-TextEditor does and provide THAT amazing functionality.
Secondly, each version of Resharper gets progressivley more complex. There are more shortcuts (like the new ctrl-shift-enter) and more configuration. The barrier to entry is starting to get a little high. Although the couple hours you might spend configuring it are quickly made up.
Finally, price. I can't help but feel that, despite the amazing value, upgrading from 3.0 to 4.0 should be less than $100. Maybe I feel that way 'cuz 3.0 was a bit of let-down for me (I know it wasn't for everyone, especially VB.NET developers), and also because I think everyone should use it.
What are you waiting for? Get your free 30 day trial now.
P.S - I downloaded that sucker at 8meg/sec from their Rackspace server - that's insane (rackspace is in Texas, I'm all the way north in Ottawa). And while I still love Rackspace, I'm a far bigger fan of SoftLayer. Same thing but $600/month cheaper, $0.20/gb instead of like $2.00, and amazingly useful iSCSI.
[Advertisement]
ReSharper rules!
What is in e-TextEditor that you like? I have never used it.
There isn't too much in e-TextEditor that I like. It isn't nearly polished enough to justify spending any money on it. The cygwin bundles don't work for me, multi-file search sucks, the UI is inconsistent.
I do use it for all my ruby programming though, but mostly more to try it out than anything else.
There are some features that do stand out. One of which is their "Undo", take a look at this screenshot:
pixeldrama.de/.../undo.png
it actually does branches and what not.
I'd feel bad if someone bought it because of what I said. I actually find it sad how much hype it has gotten..
About the annotation feature: You don't need to add a reference to JetBrain's dll to use it. All you have to do is go to the options page of the annotations, and here is a button there that will copy to your clipboard the code for the attributes they look for. You need to paste this code to a file in one of your projects, and you're set. The attributes they look for only need to be in JetBrain's namespace, but can reside in any dll you want.
Just a note, but if you are using VS2008 and "targeting" .Net 2.0 you can use all of the new C#3.0 language features, and ReSharper 4.0 will happily oblige with all its goodies...
Resharper 4.0 Quick Review:
1. Features, great.
2. Usability, great.
3. Performance terrible.
Still slows my computer to a crawl after all these years. I just can't stand having my IDE not react when I'm trying to type some code into it.
sigh, maybe some day...
@Omer & Tim: Thanks for the notes.
@Bryan: I wonder how resharper is affected by project size vs computer speed. I work on a fairly fast computer, and lately on small to medium projects, so I don't have any particular performance problems. Or maybe I'm just acclimatized to it.
ReSharper 4 is great, if you're not working on ASP.NET projects. Otherwise, it just crashes constantly. 4.0 was a HUGE letdown for me.
Pingback from User links about "texteditor" on iLinkShare
Anyone having a bad taste in their mouth from 4.0, definitely give the new 4.1 a try - the performance and stability have improved tremendously.
I think implicit types are great for complex types, and it's easy to tell resharper only to affect these.
I also wish it had a built in spell checker that worked for string constants, and text within .aspx files, but hey, everyone has their wishlist I guess.
It interfere code typing because it constantly trying to analyse it on the fly. (i think it is overboard. Not every single time, code analyse is needed.)
It always parsing source files for no good reason and it cause the whole system to crawl.
It is a resource hog. | http://codebetter.com/blogs/karlseguin/archive/2008/06/12/resharper-4.aspx | crawl-002 | en | refinedweb |
The same old story: I needed a date-time picker that could display a blank value, and I did not want to have to train my users that a grayed-out date with no check in the box really means there is no date. After looking through the many controls here at CodeProject, I got a bit dismayed. I made my own stab at it using VB and Visual Studio 2008, and I got something I liked without having to do a lot of work.
The base control itself is pretty straightforward: a UserControl containing a MaskedEditBox and a Panel. The MaskedEditBox is masked for a short date, and the user can type in a date or clear the control. Docked on the inside right edge of the MaskedEditBox is a Panel control that acts as a button. If the user clicks on it, an extended version of VS 2008's MonthCalendar control pops up. If the MaskedEditBox has a date, that date will be pre-selected on the pop-up. Selecting a date will copy that date into the MaskedEditBox, or the user can click on the "Clear date" button, which clears the MaskedEditBox. Either choice will close the pop-up. The user also has the option of closing the pop-up without changing the date by clicking on the "Close" button.
The code defining the pop-up is embedded in the control as a private class. Writing it was very straightforward: I created a regular Form, added a MonthCalendar and two Label controls, set various properties to my liking, then copied the InitializeComponent code into the class' New method. This was the result, after cleaning it up a bit:
Protected Class Calendar
    Inherits System.Windows.Forms.Form

    Private MyPicker As NullableDateTimePicker
    Private WithEvents Label1 As Label
    Private WithEvents Label2 As Label
    Private WithEvents MonthCalendar1 As MonthCalendar

    Public Sub New(ByRef Picker As NullableDateTimePicker)
        MyPicker = Picker
        Me.MonthCalendar1 = New MonthCalendar
        Me.Label1 = New Label
        Me.Label2 = New Label
        Me.SuspendLayout()
        '
        'MonthCalendar1
        '
        Me.MonthCalendar1.Location = New Point(0, 0)
        Me.MonthCalendar1.Margin = New Padding(0)
        Me.MonthCalendar1.MaxSelectionCount = 1
        Me.MonthCalendar1.Name = "MonthCalendar1"
        Me.MonthCalendar1.ShowTodayCircle = False
        Me.MonthCalendar1.TabIndex = 0
        '
        'Label1
        '
        Me.Label1.Font = New Font("Microsoft Sans Serif", 8.25!, FontStyle.Underline, GraphicsUnit.Point, CType(0, Byte))
        Me.Label1.ForeColor = SystemColors.HotTrack
        Me.Label1.Location = New Point(2, 163)
        Me.Label1.Name = "Label1"
        Me.Label1.Size = New Size(55, 13)
        Me.Label1.TabIndex = 1
        Me.Label1.Text = "Clear date"
        '
        'Label2
        '
        Me.Label2.AutoSize = True
        Me.Label2.Font = New Font("Microsoft Sans Serif", 8.25!, FontStyle.Underline, GraphicsUnit.Point, CType(0, Byte))
        Me.Label2.ForeColor = System.Drawing.SystemColors.HotTrack
        Me.Label2.Location = New System.Drawing.Point(192, 163)
        Me.Label2.Name = "Label2"
        Me.Label2.Size = New System.Drawing.Size(33, 13)
        Me.Label2.TabIndex = 2
        Me.Label2.Text = "Close"
        '
        'CalendarPopup
        '
        Me.AutoScaleDimensions = New SizeF(6.0!, 13.0!)
        Me.AutoScaleMode = AutoScaleMode.Font
        Me.ClientSize = New Size(228, 184)
        Me.ControlBox = False
        Me.Controls.Add(Me.Label1)
        Me.Controls.Add(Me.Label2)
        Me.Controls.Add(Me.MonthCalendar1)
        Me.FormBorderStyle = FormBorderStyle.FixedToolWindow
        Me.MaximizeBox = False
        Me.MinimizeBox = False
        Me.Name = "CalendarPopup"
        Me.ShowIcon = False
        Me.ShowInTaskbar = False
        Me.StartPosition = FormStartPosition.Manual
        Me.ResumeLayout(False)
    End Sub

    Public Property SelectedDate() As Date?
        Get
            If Information.IsDate(MonthCalendar1.SelectionStart) Then
                Return MonthCalendar1.SelectionStart
            Else
                Return Nothing
            End If
        End Get
        Set(ByVal value As Date?)
            If Information.IsDate(value) Then
                MonthCalendar1.SetDate(Convert.ToDateTime(value))
            Else
                MonthCalendar1.SetDate(DateTime.Now)
            End If
        End Set
    End Property

    Private Sub Label1_Click(ByVal sender As Object, ByVal e As EventArgs) _
            Handles Label1.Click
        'Clear
        MyPicker.Value = Nothing
        Me.Close()
    End Sub

    Private Sub Label2_Click(ByVal sender As Object, ByVal e As EventArgs) _
            Handles Label2.Click
        'Close
        Me.Close()
    End Sub

    Private Sub MonthCalendar1_DateSelected(ByVal sender As Object, _
            ByVal e As DateRangeEventArgs) _
            Handles MonthCalendar1.DateSelected
        MyPicker.Value = e.Start
        Me.Close()
    End Sub
End Class
The NullableDateTimePicker has a private variable, Cal, which serves as the control's instance of the pop-up form. When the control's button is clicked, Cal is instantiated, if necessary, set to either the value in the masked edit box (if it is a valid date) or the current date (if it is not), and displayed as a modal form.
Private Sub Button1_MouseDown(ByVal sender As Object, ByVal e As MouseEventArgs) _
        Handles Button1.MouseDown
    If Cal Is Nothing Then Cal = New Calendar(Me)
    If Cal.Visible Then Exit Sub
    If Information.IsDate(MaskedTextBox1.Text) Then
        Cal.SelectedDate = Convert.ToDateTime(MaskedTextBox1.Text)
    Else
        Cal.SelectedDate = Nothing
    End If
    Cal.Location = Button1.PointToScreen(New Point(0, 19))
    Cal.ShowDialog()
End Sub
Getting the date out of the control is pretty straightforward. I have implemented HasDate, which indicates whether or not the control is displaying a valid date; Text, which returns the Text property of the masked edit box; and Value, which returns a Nullable(Of Date) value (abbreviated in VB 2008 as Date?).
Public ReadOnly Property HasDate() As Boolean
    Get
        'IsDate must be handed the text, not the control itself
        Return Information.IsDate(MaskedTextBox1.Text)
    End Get
End Property

Public Overrides Property Text() As String
    Get
        Return MaskedTextBox1.Text
    End Get
    Set(ByVal value As String)
        MaskedTextBox1.Text = value
    End Set
End Property

Public Property Value() As Date?
    Get
        If Me.HasDate Then
            Return Convert.ToDateTime(MaskedTextBox1.Text)
        Else
            Return Nothing
        End If
    End Get
    Set(ByVal value As Date?)
        If Information.IsDate(value) Then
            MaskedTextBox1.Text = _
                Convert.ToDateTime(value).ToString("MM/dd/yyyy")
        Else
            MaskedTextBox1.Text = ""
        End If
    End Set
End Property
This control is pretty basic, but it does what I need: allows the user to easily select a date, to manually enter a date, and to represent the absence of a date. Making it data-bound should be pretty easy. Properties to change the appearance of the calendar pop-up would also be handy.
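As a quick illustration of that last point, a hypothetical Windows Forms binding might look like the sketch below. The tasksBindingSource and its DueDate column are illustrative names only, not part of the control:

'Hypothetical usage - tasksBindingSource and DueDate are assumed names
Dim picker As New NullableDateTimePicker()
picker.DataBindings.Add("Value", tasksBindingSource, "DueDate", True)

Note that for two-way binding, the control would also need to raise a ValueChanged event so the Binding knows when to push the user's edits back to the data source.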
Note the use of Information.IsDate above. Information is a static class found in the Microsoft.VisualBasic namespace. I wrote the code this way to help C# programmers who might want to translate this control; I don't think C# has anything that is really equivalent to IsDate. To use this function (and some other useful tools), add a reference to the namespace in your project.
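For translators who would rather avoid the Microsoft.VisualBasic reference altogether, the closest framework-level substitute I'm aware of is Date.TryParse (DateTime.TryParse in C#). This is only a sketch of the alternative, not the code the control actually uses:

'Framework-only alternative to Information.IsDate (translates directly to C#)
Dim parsed As Date
If Date.TryParse(MaskedTextBox1.Text, parsed) Then
    'MaskedTextBox1 holds a valid date; parsed now contains it
End If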
As noted above, Visual Basic 2008 has added a shorthand notation for Nullable(Of T) where T is a value type. This is indicated by adding a ? after either the variable name or the variable type in the declaration. Thus, my use of Date? is effectively the same as using Nullable(Of Date); in fact, I believe, either way compiles the same.
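A minimal illustration of the shorthand (not taken from the control's code):

Dim d1 As Date? = Nothing                    'shorthand form
Dim d2 As Nullable(Of Date) = #12/25/2008#   'long form - same type
If d2.HasValue Then
    'prints 12/25/2008 in an en-US locale
    Console.WriteLine(d2.Value.ToString("MM/dd/yyyy"))
End If

Both declarations produce the same Nullable(Of Date) type, with HasValue and Value behaving identically.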
#include <mvIMPACT_acquire.h>
Instances of this class can only be created by instances of the mvIMPACT::acquire::DeviceManager class, as only the mvIMPACT::acquire::DeviceManager has the precise knowledge to do that. As a result, pointers to instances of mvIMPACT::acquire::Device can be obtained via a mvIMPACT::acquire::DeviceManager object only.
A valid pointer to a mvIMPACT::acquire::Device object is needed to construct most of the other objects available in this interface.
Almost every object requiring a valid pointer to a mvIMPACT::acquire::Device object will need the device to be in an initialised state, as the properties provided e.g. by the class mvIMPACT::acquire::CameraSettingsBase are constructed when the device is initialised. To initialise a device this class provides the function Device::open. However, every object that needs an initialised device in order to be constructed successfully will try to open the device if it hasn't been opened before, so the user does not need to call this function explicitly.
IMPORTANT: Whenever the last instance of a DeviceManager object gets destroyed within the program, every remaining device will be closed automatically! Thus every instance of a mvIMPACT::acquire::Device object, and every object created with a pointer to a mvIMPACT::acquire::Device object, will become invalid automatically. Therefore the user has to make sure there is always at least one instance of a mvIMPACT::acquire::DeviceManager object within the current process!
Copy constructor.
Creates a new object from an existing device object. Keep in mind that this new object will provide access to the very same hardware and therefore you might as well use the original reference returned from the mvIMPACT::acquire::DeviceManager. This constructor is only provided for internal reference counting to guarantee correct operation of the objects of this class under all platforms and languages.
Class destructor.
Closes an opened device.
This function closes a previously opened device.
Returns a reference to a helper class to handle user specific data stored in the devices non-volatile memory(if available).
Checks whether this device has a certain capability.
A unique identifier for this device.
A unique identifier for the functionality offered by this device.
Returns the current initialisation status in this process.
If this function returns true, this only states that the current process has not already opened the device in question. A call to mvIMPACT::acquire::Device::open() can still fail because some other process is using this device.
Opens a device.
This function will try to open the device represented by this instance of mvIMPACT::acquire::Device. If this fails for any reason an exception will be thrown. The exception object will contain additional information about the reason for the error.
Calling this function is not really necessary, as each object for accessing other settings of the device or even the function interface need the device to be opened in order to be constructed. Therefore all the constructors for these objects check if the device is open by calling mvIMPACT::acquire::Device::isOpen and open the Device if necessary.
Allows assignments of mvIMPACT::acquire::Device objects.
Assigns a new ID to this device.
To allow any application to distinguish between different devices of the same type the user can assign a unique ID to each device using this function. This ID is currently limited to values between 0 and 250 and is stored in the device's internal memory. This ID is NOT volatile. It will remain stored even if the device is unplugged.
Updates the firmware of the device.
Calling this function will cause the driver to download the firmware version compiled into the driver library into the physical device's EEPROM.
A string property (read-only once the device is open) containing a path to a custom directory for the location of camera description files, etc.
When a custom path is assigned to this property this path will be used to locate certain driver specific files and folders. Under this folder the following structure must be created:
<customDataDirectory> |- CameraFiles // this folder will be searched by frame grabbers for camera description files |- GenICam // this folder will be searched by GenICam compliant devices, that refer to a local GenICam™ description file
If this property is left empty, the default location is used: %ALL USERS%\Documents\MATRIX VISION\mvIMPACT acquire under Windows, or /etc/matrix-vision/mvimpact-acquire under Linux.
An enumerated integer property (read-only) defining the device class this device belongs to.
An integer property (read-only) containing the device ID associated with this device.
A device ID can be used to identify a certain device in the system. The ID is an 8 bit unsigned integer value stored in the device's EEPROM. In order to allow distinct detection of a device via its device ID, the user has to make sure that two devices belonging to the same product family never share the same ID.
A string property (read-only) containing the family name of this device.
An integer property (read-only) containing the firmware version of this device.
An enumerated integer property (read-only) defining user executed hardware update results.
This property e.g. might contain the result of a user executed firmware update. Valid values for this property are defined by the enumeration mvIMPACT::acquire::THWUpdateResult.
An enumerated integer property which can be used to define which interface layout shall be used when the device is opened.
Valid values for this property are defined by the enumeration mvIMPACT::acquire::TDeviceInterfaceLayout.
This feature is available for every device. A device not offering this feature requires a driver update. Always check for the availability of this feature by calling mvIMPACT::acquire::Component::isValid.
An enumerated integer property which can be used to define which previously stored setting to load when the device is opened.
Valid values for this property are defined by the enumeration mvIMPACT::acquire::TDeviceLoadSettings.
A string property (read-only) containing the product name of this device.
A string property (read-only) containing the serial number of this device.
An enumerated integer property (read-only) containing the current state of this device.
The state e.g. tells the user if a USB device is currently unplugged or not. Valid values for this property are defined by the enumeration mvIMPACT::acquire::TDeviceState.
Flex 3 DataGrid Footers.
I've implemented the code you provided to show some total rows on my data grids. I ran into a couple of issues that were overcome by a couple small changes to the code as follows:
listContent.setActualSize(listContent.width, listContent.height - (footerHeight+15));
footer.setActualSize(listContent.width, footerHeight);
footer.move(listContent.x, listContent.y + listContent.heightExcludingOffsets + 15);
Posted by: Jeremiah | March 14, 2008 8:10 AM
This component is fantastic, Very Good
But
if i use delete
paddingTop="0" paddingBottom="0" verticalAlign="middle"
and include
width="100%" height="100%"
in
The Footer of datagrid not display correct
Please test your Datagrid with 138 records
------
if i use this
And update in file
FooterDataGrid.as
in line 19
protected var footerHeight:int = 22;
to
protected var footerHeight:int = 24;
Functional is correct !
----------------
How should I proceed?
---------------------------
Alex responds:
This code is just a prototype and is unsupported. If you have it working, great.
Posted by: MArcio | March 18, 2008 9:35 AM
If i resize collum datagrid and in sequence i use slider to filter datagrid the error ocurred
-------------
TypeError: Error #1009: Cannot access a property or method of a null object reference.
at DataGridFooter/updateDisplayList()[C:\inetpub\wwwroot\webserver\rarus_admin_flex_2\src\DataGridFooter.as:103]
at mx.core::UIComponent/validateDisplayList()[E:\dev\3.0.x\frameworks\projects\framework\src\mx\core\UIComponent.as:6214]
at mx.managers::LayoutManager/validateDisplayList()[E:\dev\3.0.x\frameworks\projects\framework\src\mx\managers\LayoutManager.as:602]
at mx.managers::LayoutManager/doPhasedInstantiation()[E:\dev\3.0.x\frameworks\projects\framework\src\mx\managers\LayoutManager.as:675]
at Function/
at mx.core::UIComponent/callLaterDispatcher2()[E:\dev\3.0.x\frameworks\projects\framework\src\mx\core\UIComponent.as:8460]
at mx.core::UIComponent/callLaterDispatcher()[E:\dev\3.0.x\frameworks\projects\framework\src\mx\core\UIComponent.as:8403]
-------------
Please Help me, how to resolve this ?
----------------------
Alex responds:
in DataGridFooter.as in updateDisplayList, check to see if col is null and break out of the while loop
Posted by: MArcio | March 18, 2008 10:12 AM
if i use itemRenderer error ocurred
TypeError: Error #1009: Cannot access a property or method of a null object reference.
at DataGridFooter/updateDisplayList
-------------
---------------
----------------------------
Alex responds:
Your custon renderer must implement IDropInListItemRenderer
Posted by: MArcio | March 18, 2008 12:31 PM
thanks for that just was asking can’t we using any thing and replacing it on this code
footer.setActualSize(listContent.width, footerHeight);
?
thanks
---------------------------
Alex responds:
Not sure I understand the question. I guess you can make the footer some other size if you want to.
Posted by: KLadofoRA | April 5, 2008 3:26 AM
Hi Alex,
You talked about a version for AdvancedDataGrid developed by an other team. Do you have a link for showing that ? thanks
-----------------
Alex responds:
ADG has SummaryRows. It is developed by another team. I don't know if they've done footer support or not. Try Sameer's blog:
Posted by: romain | May 15, 2008 11:47 AM
useful post.thanks for ur sharing!
Posted by: ggfou | May 18, 2008 2:37 AM
how to enhance debugging in flex :(
---------------
Alex responds:
Debugging works fine for me. If you want specific features that other debuggers have, please file bugs/enhancement requests at bugs.adobe.com/jira
Posted by: akshay | May 19, 2008 11:08 PM
this blog is the AWESOME!
Posted by: labs | June 6, 2008 8:36 PM
Hi, I added some functionality to these footers - horizontal scrolling, column resizing, and locked columns. I also implemented for the Advanced Data Grid. Post here. Thanks for the great post and great starting point.
Posted by: Doug Marttila | June 16, 2008 4:01 PM
Hi Alex, this is a great component. But I have problems with it when I'm using a custom ItemRenderer. In this case the updateDisplayList() function is called permanently and it cause an infinite loop. When the addChild(DisplayObject(renderer)); is removed, it works fine, but the Renderer is not shown (of course). How can I change it? Do you have any ideeas? I need it for my current project.
Thanks, Artur
--------------------------
override invalidateDisplayList() and maybe invalidateSize() and see why it gets called. Usually you're doing something that changes the potential size of the renderer like adding children to it in updateDisplayLIst. You shouldn't do that, but if you must, then block the call to invalidateDisplayList somehow (have it not call super.invalidateDisplayList in those conditions)
Posted by: Artur | July 16, 2008 2:10 AM
thanks for the code again
your posts helps me allot
thanks
Posted by: jbr | July 18, 2008 4:37 PM
If you know some site that supply tutorials about flex, please tell me! and thanks a lot!
-----------------
Alex responds:
Search the web. FlexExamples.com and Lynda.com have stuff. Our documentation folks are sad to hear that their writings aren't sufficient for you.
Posted by: Rick | July 25, 2008 6:28 PM
Hi Alex! I want to use the footer as a toolbar, but I haven't found a way to add a button or a label to it.
Do you have any ideeas how to put a button on this footer?
Help pls. Thanks!
----------------
Alex responds:
You could make a button the renderer for one of the columns. You can also subclass the border and add a toolbar there.
Posted by: Bera Florin | July 29, 2008 4:59 AM
Hi,
thanks for the example.
i´ve copied&pasted the example, but looks like Flex Builder doesn´t recognize the namespaces??
But i got this error in Flex Builder:
"
Could not resolve to a component implementation. loginTest1/src dg.mxml Unknown 1219766177385 1574
"
----------------
Alex responds:
You have to setup xmlns:local in the Application tag. See how I used it in my example
Posted by: Andrew | August 26, 2008 8:59 AM
I'm intending to integrate this solution into my current project which has its own extension of DataGridColumn.
Can you explain why you extend DataGridColumn, **and** then place an instance of the base class as a public property? It's a pattern I haven't seen before, and can't understand why it's necessary to extend the base column class, and then use an instance of the base.
btw, the project I'm working on will be adding the column in actionscript (if this makes any difference)
Thanks.
------------------
Alex responds:
I just did that so you could specify different styles and labelFunctions for the footer. You could just duplicate stuff on the column subclass, but there'd be style name collisions
Posted by: Mark S | September 15, 2008 6:27 AM
Hello there,
thank you for great example.
May i use to project this example?
and what kind of license this example has?
regards
hbell
--------------------------
Alex responds:
There is no license. You can use it however you wish as long as there is no liability back to me.
Posted by: hbell | December 24, 2008 6:01 PM
Does this applicable to editable data grid,where user can change the existing value,correspondingly Ave value need to be updated,Currently when i make the grid editable Ave is not getting calculated.
------------------------
Alex responds:
Yes, there are other events where you'll need to update the footer. ITEM_EDIT_END for editable datagrids and probably on some collection change events as well.
Posted by: Ravi | February 11, 2009 10:10 AM
How do I set the visible property of Grid column and footer?
If I say visible=false at FooterDataGridColumn level the grid column is not visible but the footer column is visible and
visible=false at DataGridColumn level has no effect
Thanks
---------------
Alex responds:
You'll probably have to modify the example to handle hidden columns.
Posted by: Vimal | February 12, 2009 9:36 AM
Hi Alex,
The vertical rows of ADG are not display correctly.At the bottom they are incresed by few pixels.
So the last line of listContent is appearing overlapped by footer. When i make distance between listContent and footer i saw listContent is creating a 1/3row at last.
Posted by: Sachin Dev Tripathi | May 12, 2009 7:22 AM
Hi alex
what is masking?? you have told about this in second para of this page?
is it related to my problem(posted yesterday)
please advice
-sachindevtripathi@gmail.com
---------------------
Alex responds:
See the documentation for flash.display.DisplayObject.mask
Posted by: Sachin Dev Tripathi | May 12, 2009 11:08 PM | http://blogs.adobe.com/aharui/2008/03/flex_3_datagrid_footers.html | crawl-002 | en | refinedweb |
All opinions expressed here constitute my (Jeremy D. Miller's) personal opinion, and do not necessarily represent the opinion of any other organization or person, including (but not limited to) my fellow employees, my employer, its clients or their agents.
The title is a mouthful and accurately implies an alarmingly high jargon to
code ratio, but I just didn't see any way to write this post without straying
into all of these different subjects. When you try to write an explanatory
article you have to walk a tightrope between a sample problem that's simple
enough to work through in no more than a handful of pages and a sample problem
that's just too simplistic to be valuable. I'll leave it up to you to decide
which end of the spectrum this one falls on. I also had a rough time trying to
decide on the best way to order the topics in the narrative. All I can do is
ask you to scan the following headers if something seems to be missing or I'm
lurching ahead.
Last week I had a flood of people follow Martin Fowler's
link into this series. I thought it was pretty cool because most of my UI
patterns material and terminology is transparently based on Fowler's work. I'm
especially glad that Martin didn't mention the fact that I volunteered to help
with the UI patterns writing three years ago then disappeared...
If you missed something, here's the Build your own CAB series in its
entirety.
The end is in sight. In traditional developer style I blew the estimate for
how long "Build your own CAB" would take. I thought all I needed to do was copy
n'paste a bunch of verbiage and code from my DevTeach talks into Live Writer and
that would be that - but my usual wordiness kicked in and I ended up submitting
a completely new talk to DevTeach Vancouver on this subject.
A couple years ago I remember reading Stephen King say that he always found
the Dark Tower books difficult to write as a way of explaining why there was so
much lag between new books, much to my exasperation at the time. I'm obviously
not Stephen King, and there's no line of folks wrapped around the corner of
CodeBetter waiting for the next installment, but I know how Stephen King felt
now. I do promise that the end of this series won't be as
shocking/disappointing/brilliant(?) as the end of the Dark Tower.*
Again, I am not a licensed namer of patterns, and some of the people I've
shown this code to have commented that it's reminiscent of Smalltalk UI's, so
it's a good bet that some of you have already seen or used similar approaches.
That said, I use the term "MicroController" to refer to a controller class that
directs the behavior of a single UI widget. By itself, a MicroController isn't
really that powerful, but like an army of ants, a group of MicroController's
working cooperatively (with some external direction) can accomplish powerful
feats.
Here's a common scenario:
There's nothing in that list that's particularly hard or even unusual, but I
want to explore an alternative approach. Off the top of my head, I've got four
goals in mind for the design of my menu state management:
Much like the screen state machine sample from my
last post, I'm going to try to use a Domain Specific Language (DSL) / Fluent
Interface to express and define the menu behavior. By hiding the mechanics
of menu management behind an abstracted Fluent Interface I'm hoping to compress
the code that governs the menu state to a smaller area of the code. I want to
be able to understand the menu behavior by scanning a cohesive area of the
code. It's my firm contention that this type of readability simply cannot be
accomplished by using the designer to attach bits and pieces of behavior.
Leaning on the designer will scatter the behavior of the screen all over the
place. One of the main reasons I don't like to use the designer or wizards is
because you often can't "see" the code and the way it works.
Before zooming in on the individual components of the solution, let's keep
the man firmly behind the curtain and look at my intended end state. Inside
some screen (probably the main Form) is a piece of code that expresses the
behavior of the menu items like this fragment below:
private void configureMenus()
{
    _menuController.MenuItem(openItem).Executes(CommandNames.Open).IsAlwaysEnabled();
    _menuController.MenuItem(saveItem).Executes(CommandNames.Save)
        .IsAvailableToRoles("BigBoss", "WorkerBee");
    _menuController.MenuItem(executeItem).Executes(CommandNames.Execute)
        .IsAvailableToRoles("BigBoss");
    _menuController.MenuItem(exportItem).Executes(CommandNames.Export);
}
And in each individual screen presenter you might see some additional code to
set screen-specific menu settings like this that would be called upon activating
a different screen:
public class SomeScreenPresenter : IPresenter
{
    public void ConfigureMenu(MenuState state)
    {
        state.Enable(CommandNames.Save, CommandNames.Export);
        state.Enable(CommandNames.Execute).ForRoles("SpecialWorkerBee", "BigBoss");
    }
}
So what's going on here? There isn't a single call in this code to
MenuItem.Enabled or any definition of MenuItem.Click, so it's safe to assume
that there's somebody behind the curtain. So, what is the man behind the
curtain? Before I talk about each piece in detail, here's a rundown of the
various moving parts:
The complete "Build
your own CAB" Table of Contents is now up if you've missed some of the
earlier missives.
Continuing where I left off in Build
your own CAB #14: Managing Menu State with MicroController's, Command's, a Layer
SuperType, some StructureMap Pixie Dust, and a Dollop of Fluent Interface,
I'll show how to build a Fluent Interface API to configure menu state management
in a WinForms application while using as many buzzwords as humanly possible.
Going backwards "Memento" style, the end state is shown in the first post (I had
to split the content because Community Server whined at me).
From PEAA,
a Layer SuperType is
A type that acts as the supertype for all types in its
layer.
In all of the WinForms applications I've worked on with Model View Presenter
the Presenter's have implemented some sort of Layer Supertype interface or base
class. The details differ quite a bit from project to project, but the pattern
seems to always be there. The methods on the common interface usually relate to
setting up screen state or to transitioning between screens. Here's a sample
IPresenter interface that's pretty typical to my projects:
public interface IPresenter
{
    void ConfigureMenu(MenuState state);
    void Activate();
    void Deactivate();
    bool CanClose();
}
The Presenter interface is mostly a set of hook methods for the ApplicationController
to call to set up or tear down a screen. While I'll revisit this topic in much
more detail later, for now let's focus in on the bolded method in the IPresenter
interface above.
It's safe to assume that nearly every screen is going to have a different set
of rules for which menu items are available and valid for user roles. By
implementing a common interface across all screen Presenter's we can establish a
standard way to query a Presenter for its particular menu state. What we need
next is an easy way to transmit the screen specific business rules from each
screen to the menu. The logic and business rules to determine the menu state
really fit
into each Presenter, but we don't want the Presenter to know about the
concrete Menu because we don't want to bind our presentation logic to UI
machinery. We could wrap the Menu in some sort of abstracted IMenu
interface that we could mock while testing the Presenter, but I think there's a
better way. By and large I think that state-based testing is generally easier
than interaction-based testing. In this case I've opted to use a class called
MenuState to configure and transfer screen state from the Presenter to the
Menu. MenuState looks something like this:
public class MenuState
{
    private Dictionary<CommandNames, string[]> _enabledByRoleCommands
        = new Dictionary<CommandNames, string[]>();

    public void Enable(params CommandNames[] names)
    {
        foreach (CommandNames name in names)
        {
            _enabledByRoleCommands.Add(name, new string[0]);
        }
    }

    public EnableByRoleExpression Enable(CommandNames name)
    {
        return new EnableByRoleExpression(name, this);
    }

    public class EnableByRoleExpression
    {
        private readonly CommandNames _names;
        private readonly MenuState _state;

        internal EnableByRoleExpression(CommandNames names, MenuState state)
        {
            _names = names;
            _state = state;
        }

        public void ForRoles(params string[] roles)
        {
            _state._enabledByRoleCommands.Add(_names, roles);
        }
    }

    public bool IsEnabled(CommandNames name)
    {
        return _enabledByRoleCommands.ContainsKey(name);
    }

    public string[] GetRolesFor(CommandNames name)
    {
        if (_enabledByRoleCommands.ContainsKey(name))
        {
            return _enabledByRoleCommands[name];
        }

        return null;
    }
}
Much like the
Notification class from the earlier post on validation, the MenuState class
helps us to keep the coupling between the menu system and each screen to a
minimum. Also like a Notification, creating a MenuState object from the
Presenter makes it relatively easy to unit test the menu state logic in each
Presenter. We'll write interaction tests with mock objects to make sure that
the navigation and screen coordination code is correctly applying the result of
a call to IPresenter.ConfigureMenu(MenuState) method first. After that, we can
concentrate on just testing the result of a call to
IPresenter.ConfigureMenu(MenuState). The steps to unit test are pretty
simple.
On my previous project we created a custom assertion for testing the
MenuState calculation that I thought made the test code fairly descriptive.
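The state-based style is easy to see in miniature. Here's an illustrative, language-neutral sketch in Python rather than C# — the MenuState stand-in and SomeScreenPresenter below are my own illustrations, not code from the post — showing the shape of such a test: build a MenuState, hand it to the presenter, then assert directly on the resulting state, with no mock objects involved.

```python
# Minimal Python stand-in for the article's MenuState class: it records
# which commands a screen enables, and for which roles.
class MenuState:
    def __init__(self):
        self._enabled = {}  # command name -> list of roles ([] means "all roles")

    def enable(self, *commands):
        for command in commands:
            self._enabled[command] = []

    def enable_for_roles(self, command, *roles):
        self._enabled[command] = list(roles)

    def is_enabled(self, command):
        return command in self._enabled

    def roles_for(self, command):
        return self._enabled.get(command)

# A hypothetical presenter whose menu rules we want to verify.
class SomeScreenPresenter:
    def configure_menu(self, state):
        state.enable("Save", "Export")
        state.enable_for_roles("Execute", "SpecialWorkerBee", "BigBoss")

# State-based test: assert on the MenuState the presenter produced.
def test_configure_menu():
    state = MenuState()
    SomeScreenPresenter().configure_menu(state)

    assert state.is_enabled("Save")
    assert state.is_enabled("Export")
    assert state.roles_for("Execute") == ["SpecialWorkerBee", "BigBoss"]
    assert not state.is_enabled("Open")

test_configure_menu()
```

The point of the pattern is visible in the test body: because MenuState is a plain object, verifying the presenter's menu rules needs no mocking of the menu at all.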
Now, let's talk about how to tell a MenuItem what to do. Each MenuItem is
going to do very different things and interact with different services and
modules of the application, but we still want to have a consistent mechanism for
attaching actions to MenuItem's. We could use anonymous delegates (and I'm
doing this quite happily in parts of StoryTeller), but that syntax can quickly
lead to ugly code. Instead, let's adopt a Command pattern approach to wrap up
each unique action.
I think one of the fundamental truths of software development is that every
codebase wants, nay, demands, an ICommand interface like this one:
/// <summary>
/// I think I've had some sort of ICommand interface
/// in almost every codebase I've worked on in the last
/// 5 years
/// </summary>
public interface ICommand
{
    void Execute();
}
Using a Command pattern comes with several advantages. Foremost in my mind
is the ability to detach the action into a small concrete unit divorced from a
particular screen or UI widget that is easy to test in isolation. Each of our
Command classes should be relatively simple to test. The ICommand classes are
likely manipulating and interacting with various services and other parts of the
application. For easy unit testing, we're probably going to use some sort of Test Double to take the
place of these dependencies. I typically use Constructor Injection to attach
the test doubles.
Here's an example command:
public class SaveCommand : ICommand
{
    private readonly IRepository _repository;
    private readonly IEventPublisher _publisher;

    // SaveCommand needs access to the Singleton instance of both
    // IRepository and IEventPublisher. We'll let StructureMap
    // deal with wiring up the dependencies
    public SaveCommand(IRepository repository, IEventPublisher publisher)
    {
        _repository = repository;
        _publisher = publisher;
    }

    public void Execute()
    {
        // Save whatever it is that we're saving
    }
}
Inside the unit test harness for SaveCommand I'll simply use RhinoMocks to
create mock objects for IRepository and IEventPublisher:
[TestFixture]
public class SaveCommandTester
{
    private MockRepository _mocks;
    private IRepository _repository;
    private IEventPublisher _publisher;
    private SaveCommand _command;

    /// <summary>
    /// In this method, set up all of the mock objects,
    /// and construct an instance of SaveCommand using
    /// the two mock objects
    /// </summary>
    [SetUp]
    public void SetUp()
    {
        _mocks = new MockRepository();
        _repository = _mocks.CreateMock<IRepository>();
        _publisher = _mocks.CreateMock<IEventPublisher>();
        _command = new SaveCommand(_repository, _publisher);
    }
}
Assuming that you're comfortable with mock objects, SaveCommand is now
relatively easy to unit test. Of course we're still left with the problem of how
SaveCommand gets the proper instances of IEventPublisher and IRepository in the
real application mode.
If you've got yourself a reference to an ICommand object you know exactly
what to do to make it work. Without knowing the slightest thing about its
internals, you just call Execute() on the ICommand and get out of the way.
Let's stress part of that again. The MenuItem and its associated controllers
don't need to know anything about the internals of an ICommand object, and they
especially don't need to know how to construct and configure an ICommand
object. Looking again at the configuration code, all we do is "tell" each
MenuItem controller the name of an ICommand to run.
Looking back at our SaveCommand object above, we see that it has a dependency
upon both an IEventPublisher and an IRepository interface, but the code above
doesn't need to specify these two things. To make things a little more
complicated, both of these interfaces are probably a stand-in for singleton
concrete instances (I use Robert
C. Martin's Just Create One pattern for "Managed
Singleton's" with StructureMap instead of using traditional Singleton's).
Tracking and attaching dependencies doesn't have to be a terrible chore because
we can use tools like StructureMap to help us out.
The first step is to register or configure the proper instances of the
underlying services with StructureMap in one of the normal ways like this code
below:
public class ServiceRegistry : Registry
{
    protected override void configure()
    {
        BuildInstancesOf<IEventPublisher>()
            .TheDefaultIsConcreteType<EventPublisher>()
            .AsSingletons();
    }
}
Now that we've got our services configured we can turn our attention to the
ICommand classes. When we configure the ICommand objects with StructureMap we
also need to associate the ICommand Type's with the correct CommandNames
(CommandNames is just a strongly typed enumeration; the code is at the very
bottom of the post) instance. I use a separate Registry class for the
ICommand's to put the configuration into a common spot and also to create a
custom syntax specific to registering ICommand's.
public class CommandRegistry : Registry
{
    protected override void configure()
    {
        // Wire up the ICommand's
        Command(CommandNames.Save).Is<SaveCommand>();
        Command(CommandNames.Open).Is<OpenCommand>();
    }
}
For the most part, all I need to do is just say that an instance of
CommandNames on the left is the concrete class on the right. It's important to
associate the ICommand classes with an instance of CommandNames because we're
going to retrieve ICommand's in the controller classes with this code:
ICommand command = ObjectFactory.GetNamedInstance<ICommand>(_command.Name);
This Fluent Interface grammar is just a thin veneer over the StructureMap
configuration API. The grammar is implemented in additional members and an
inner class of the CommandRegistry:
private RegisterCommandExpression Command(CommandNames name)
{
    return new RegisterCommandExpression(name, this);
}

internal class RegisterCommandExpression
{
    private readonly CommandNames _name;
    private readonly CommandRegistry _registry;

    public RegisterCommandExpression(CommandNames name, CommandRegistry registry)
    {
        _name = name;
        _registry = registry;
    }

    public void Is<T>()
    {
        // Register the ICommand type with StructureMap
        _registry.AddInstanceOf<ICommand>().UsingConcreteType<T>().WithName(_name.Name);
    }
}
Wait, you might say. How does the IEventPublisher and IRepository
dependencies get into SaveCommand? We didn't make any kind of definition or
configuration between SaveCommand and its services. The short answer is that we
don't have to do anything else because StructureMap supports "auto wiring" of
dependencies. StructureMap knows what SaveCommand needs by its constructor
function:

public SaveCommand(IRepository repository, IEventPublisher publisher)
If you don't explicitly configure an instance of IRepository/IEventPublisher
for SaveCommand StructureMap will happily substitute the default instance of
both types into the constructor function of SaveCommand. While you can always
take full control of the dependency chaining, I find it very convenient just to
let StructureMap deal with it.
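The auto-wiring mechanism itself is simple enough to sketch: a container inspects a constructor's parameter list and recursively resolves the registered default for each dependency. This toy Python container is NOT StructureMap — it's my own illustration of the idea, reading Python type hints the way StructureMap reads CLR constructor signatures, with the stand-in types (Container, IRepository, InMemoryRepository, and so on) invented for the example.

```python
import inspect

# Toy container illustrating constructor "auto wiring".
class Container:
    def __init__(self):
        self._defaults = {}    # abstraction -> concrete type
        self._singletons = {}  # concrete type -> the one built instance

    def register(self, abstraction, concrete):
        self._defaults[abstraction] = concrete

    def resolve(self, abstraction):
        concrete = self._defaults.get(abstraction, abstraction)
        if concrete in self._singletons:
            return self._singletons[concrete]
        # Inspect the constructor and recursively resolve each annotated
        # parameter -- this is the "auto wiring" step.
        params = inspect.signature(concrete.__init__).parameters
        args = {name: self.resolve(p.annotation)
                for name, p in params.items()
                if name != "self" and p.annotation is not inspect.Parameter.empty}
        instance = concrete(**args)
        self._singletons[concrete] = instance  # "just create one"
        return instance

# Hypothetical stand-ins for the article's service interfaces.
class IRepository: pass
class InMemoryRepository(IRepository): pass
class IEventPublisher: pass
class EventPublisher(IEventPublisher): pass

class SaveCommand:
    def __init__(self, repository: IRepository, publisher: IEventPublisher):
        self.repository = repository
        self.publisher = publisher

container = Container()
container.register(IRepository, InMemoryRepository)
container.register(IEventPublisher, EventPublisher)

# SaveCommand was never registered, yet its dependencies are filled in
# automatically from the registered defaults.
command = container.resolve(SaveCommand)
assert isinstance(command.repository, InMemoryRepository)
assert command.publisher is container.resolve(IEventPublisher)  # shared instance
```

The last two lines show the two behaviors described above: unregistered concrete types get their constructor dependencies substituted automatically, and each service resolves to one shared instance.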
* Come on, I'm not the only person that screamed and threw the book across
the room when Roland ends up back at the very beginning. I'm going to swear off
fantasy books permanently if Rand doesn't win a clear cut victory in book 12
whenever that comes out. Almost 20 years worth of waiting better come with a
really solid ending.
There's a lot of commonality between the menu items. Sure, the individual
actions and rules are different, but there's a finite set of things we need to
do with and to the individual menu items. You could just use the visual
designer to generate one off code for each of the menu items and hard code the
menu on/off rules, but that's going to lead to sheer ugliness. Instead of one
off code, let's create a MicroController class for a single MenuItem.
In a stunning fit of creativity I've named this class MenuItemController. In
this design, MenuItemController has just two responsibilities:
First, let's set up a single MenuItemController and see the code that sets the
Click event.
public class MenuItemController
{
    private readonly MenuItem _item;
    private readonly CommandNames _command;
    private bool _alwaysEnabled = false;
    private List<string> _roles = new List<string>();

    public MenuItemController(MenuItem item, CommandNames command)
    {
        _item = item;
        _command = command;
        _item.Click += new EventHandler(_item_Click);
    }

    private void _item_Click(object sender, EventArgs e)
    {
        ICommand command = ObjectFactory.GetNamedInstance<ICommand>(_command.Name);
        command.Execute();
    }

    // Other methods are below...
}
We construct a MenuItemController first with a MenuItem and a CommandNames
key. The constructor simply sets a field for both values then adds an event
handler to the MenuItem's Click event. Inside the _item_Click() event handler
the MenuItemController simply fetches the named ICommand from StructureMap (the
call to ObjectFactory.GetNamedInstance()) and calls Execute() on the ICommand
that comes back. Great, that's the easy part. Now we can tackle the
responsibility for enabling or disabling the MenuItems.
The MenuItemController class uses three sources of information to make the
enabled determination. The first two sources are optional setters on
MenuItemController.
public bool AlwaysEnabled
{
    get { return _alwaysEnabled; }
    set { _alwaysEnabled = value; }
}

public void AddRoles(string[] roles)
{
    _roles.AddRange(roles);
}
In some cases you have MenuItem/Commands that should be available in all
states. The "AlwaysEnabled" flag on MenuItemController will short circuit any
other logic and force the MenuItem to be enabled. The second determination is
role-based authorization. Our MenuItemController class keeps a list of the
roles that have access to this action. If there are no roles defined, we'll
assume that the command is accessible to all users.
The third piece of information the MicroController uses to determine menu
state is the screen specific rules that are transmitted in the MenuState object
created by each Presenter. The code to enable or disable the internal MenuItem
inside of a MenuItemController is below. The entry point is Enable(MenuState)
at the top.
public void Enable(MenuState state)
{
    _item.Enabled = IsEnabled(state);
}

public bool IsEnabled(MenuState state)
{
    if (AlwaysEnabled)
    {
        return true;
    }

    if (!state.IsEnabled(_command))
    {
        return false;
    }

    return HasRole(state);
}

public bool HasRole(MenuState state)
{
    List<string> roles = new List<string>(_roles);
    roles.AddRange(state.GetRolesFor(_command));

    return hasRole(roles);
}

private static bool hasRole(List<string> roles)
{
    // No roles defined means the command is accessible to all users
    if (roles.Count == 0)
    {
        return true;
    }

    IPrincipal principal = Thread.CurrentPrincipal;
    foreach (string role in roles)
    {
        if (principal.IsInRole(role))
        {
            return true;
        }
    }

    return false;
}
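The Thread.CurrentPrincipal check is easy to exercise in isolation. This standalone snippet — a rewrite of the private hasRole() helper for demonstration, not the post's exact class — swaps in a GenericPrincipal and walks through the empty-list and role-match cases:

```csharp
using System;
using System.Collections.Generic;
using System.Security.Principal;
using System.Threading;

public static class RoleCheckDemo
{
    // Same rules as the post's helper: an empty role list means everyone is allowed
    public static bool HasRole(List<string> roles)
    {
        if (roles.Count == 0) return true;

        IPrincipal principal = Thread.CurrentPrincipal;
        foreach (string role in roles)
        {
            if (principal.IsInRole(role)) return true;
        }
        return false;
    }

    public static void Main()
    {
        // Give the current thread a principal that belongs to the "Admin" role
        Thread.CurrentPrincipal = new GenericPrincipal(
            new GenericIdentity("jeremy"), new[] { "Admin" });

        Console.WriteLine(HasRole(new List<string>()));               // True: no roles defined
        Console.WriteLine(HasRole(new List<string> { "Admin" }));     // True: principal is an Admin
        Console.WriteLine(HasRole(new List<string> { "SuperUser" })); // False: no matching role
    }
}
```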
By themselves, the MicroController classes don't know very much. The
MenuController below aggregates the MenuItemController objects and would
ostensibly give you access to a particular MenuItemController upon demand. In
the SetState(MenuState) method it simply iterates through all of the
MenuItemController objects and calls each individual
MenuItemController.Enable(MenuState) method.
public class MenuController
{
    private Dictionary<CommandNames, MenuItemController> _items =
        new Dictionary<CommandNames, MenuItemController>();

    public void SetState(MenuState state)
    {
        foreach (KeyValuePair<CommandNames, MenuItemController> item in _items)
        {
            item.Value.Enable(state);
        }
    }

    // Other methods to support the Fluent Interface configuration
}
MenuController above also includes the code for the configuration API. I'm
not sure how everyone else is building these things, but I typically use
"Expression" classes that encapsulate the configuration. You can recognize
these things pretty quickly by looking for a lot of "return this;" calls. The
Expression classes are typically pretty dumb. All I do with these classes is
set properties on some sort of inner object that does the actual work. The
MenuItemExpression class below sets properties on a single MenuItemController as
additional methods are called in the configuration. I tend to use inner classes
for the Expressions to get easy access to the private members of the class
being configured. MenuItemExpression is an inner class of MenuController.
public MenuItemExpression MenuItem(MenuItem item)
{
    return new MenuItemExpression(this, item);
}

public class MenuItemExpression
{
    private readonly MenuController _controller;
    private readonly MenuItem _item;
    private MenuItemController _itemController;

    internal MenuItemExpression(MenuController controller, MenuItem item)
    {
        _controller = controller;
        _item = item;
    }

    public MenuItemExpression Executes(CommandNames name)
    {
        _itemController = new MenuItemController(_item, name);
        _controller._items.Add(name, _itemController);

        return this;
    }

    public MenuItemExpression IsAlwaysEnabled()
    {
        _itemController.AlwaysEnabled = true;
        return this;
    }

    public MenuItemExpression IsAvailableToRoles(params string[] roles)
    {
        _itemController.AddRoles(roles);
        return this;
    }
}
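Wiring up a form's menu then becomes a short, declarative block. The ConfigureMenu() method name, the MenuItem fields, and the role names below are invented for illustration:

```csharp
// Hypothetical menu configuration on a form or shell class
private void ConfigureMenu(MenuController controller)
{
    controller.MenuItem(_openItem).Executes(CommandNames.Open).IsAlwaysEnabled();
    controller.MenuItem(_saveItem).Executes(CommandNames.Save);
    controller.MenuItem(_exportItem)
        .Executes(CommandNames.Export)
        .IsAvailableToRoles("Supervisor", "Administrator");
}
```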
I think MicroControllers and Fluent Interfaces go together quite well. The
MicroControllers do the work, but the Fluent Interface API can make the code so
much more readable. What I'm finding is that it does take a bit of upfront work
to get a Fluent Interface put together, but once it's set, it's relatively easy
to work with. There's some Architect Hubris lurking in that statement, perhaps.
I suppose I might caution you to use a Fluent Interface mostly in situations
where you can recoup the upfront investment: where there's lots of reuse, or
where the API pays off because that code changes frequently.
I will write at least one more post on this subject just to present the data
binding replacement we're using on my current project (it sounds nuts, but
actually, I'm feeling pretty good about that right now). I wrote about the
larval stages in My
Crackpot WinForms Idea.
I've received a gratifying number of compliments on this series, but I've
consistently heard a common refrain in the negative -- there's no code to
download and it's not clear how all the pieces fit together. To address that
problem I'll change my direction a bit, but it means that "Build your own CAB"
is going on hiatus for at least a month. I'm going to concentrate my "stuck on
the train" time with getting StoryTeller into a usable state. I ended up
scrapping big pieces of the StoryTeller UI and rebuilding with some of the ideas
that I've developed while writing this series. As soon as it's released, I can
use StoryTeller for more complete examples with code that's freely
available.
I used CommandNames several times in the course of the post but didn't really
explain it. All I've done is create a Java-style strongly typed enumeration for
the command names. StructureMap only understands strings for instance
identification within a family of like instances, so CommandNames exposes a Name
property. I suppose that you could just use an enumeration and do ToString()
on the keys as appropriate, but I chose this approach for some reason that
escapes my mind at the moment. The code for CommandNames is:
public class CommandNames
{
    public static CommandNames Open = new CommandNames("Open");
    public static CommandNames Execute = new CommandNames("Execute");
    public static CommandNames Export = new CommandNames("Export");
    public static CommandNames Save = new CommandNames("Save");

    private readonly string _name;

    private CommandNames(string name)
    {
        _name = name;
    }

    public string Name
    {
        get { return _name; }
    }

    public override bool Equals(object obj)
    {
        if (this == obj) return true;

        CommandNames commandNames = obj as CommandNames;
        if (commandNames == null) return false;

        return Equals(_name, commandNames._name);
    }

    public override int GetHashCode()
    {
        return _name != null ? _name.GetHashCode() : 0;
    }
}
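Because Equals() and GetHashCode() key off the name, CommandNames works cleanly as a dictionary key — which is exactly how MenuController uses it. A quick check (the class is re-stated here, trimmed to two entries, so the snippet compiles on its own):

```csharp
using System;
using System.Collections.Generic;

// Trimmed copy of the post's CommandNames so this snippet stands alone
public class CommandNames
{
    public static CommandNames Open = new CommandNames("Open");
    public static CommandNames Save = new CommandNames("Save");

    private readonly string _name;
    private CommandNames(string name) { _name = name; }

    public string Name { get { return _name; } }

    public override bool Equals(object obj)
    {
        if (this == obj) return true;
        CommandNames other = obj as CommandNames;
        return other != null && Equals(_name, other._name);
    }

    public override int GetHashCode() { return _name != null ? _name.GetHashCode() : 0; }
}

public static class CommandNamesDemo
{
    public static void Main()
    {
        Console.WriteLine(CommandNames.Save.Name);                      // Save
        Console.WriteLine(CommandNames.Save.Equals(CommandNames.Open)); // False

        // The Equals/GetHashCode overrides make the keys behave in a Dictionary
        var labels = new Dictionary<CommandNames, string>();
        labels[CommandNames.Save] = "Save the current document";
        Console.WriteLine(labels[CommandNames.Save]);                   // Save the current document
    }
}
```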
Pingback from Build your own CAB #14: Managing Menu State with MicroController's, Command's, a Layer SuperType, some StructureMap Pixie Dust, and a Dollop of Fluent Interface - Jeremy D. Miller -- The Shade Tree Developer
The entire content is now online as a page on my site at: Build your own CAB #14: Managing Menu State
Interesting articles, but:
en.wikipedia.org/.../Apostrophe
;)
Time for another weekly roundup of news that focuses on .NET and MS development related content: VS 2008
Hi Jeremy,
I've noticed it's been a while since your last post on this subject. Any news on when you plan on finishing this fantastic CAB article set?
@Mark:
It's writer's block and motivation. I'm going to talk to a publisher about the content first before doing anything else.
One way or another, I'll get back to it.
After a bit of a hiatus and a fair amount of pestering, I'm back and ready to continue the "Build
Pingback from The Build Your Own CAB Series Table of Contents - Jeremy D. Miller -- The Shade Tree Developer
To everybody that attended one of my talks at DevTeach this week. All of the materials are now online
For a few months I haven't written anything here (apart, of course, from the previous unplanned post). As you can easily
I have a question about your use of StructureMap to create commands:
ICommand command = ObjectFactory.GetNamedInstance<ICommand>(_command.Name);
command.Execute();
Can this approach work when the command has an argument such as an Id? If the concrete command class takes the argument as a parameter to its constructor, it doesn't seem like StructureMap can create the command instances. If instead the command has a setter for the argument, that setter won't be available through the ICommand interface.
How do you structure your code in this situation?
Thanks,
Ben
How would you control the state of a menu item after the view is shown - i.e. toggle state of a button
AND, how would you control the fact that one button might want to invoke different commands based on different contexts? (i.e. a Save button that saves a client in certain contexts or a "product" in others?)
Thanks! Great series of articles and would love to see a sample using all these concepts!
@David,
You would go to a different model. You could bind the menu items to a Command object that has enabled/visible type properties like WPF for the first.
You could recreate the MenuState object each time the menu needs to change (this is workable btw, because I've done it and it worked out well).
For the button doing multiple things, you can "register" for a menu button by passing in a Lambda for what gets called, which gives you the ability to "point" a menu item at a specific presenter.
Thanks!
So I'm "seeing" the following: Have a "DelegateCommand" class that's got a property that contains the actual delegate that must be executed when the command gets executed.
This new delegate property will be configured / set in the "ConfigureMenu" method?
For the state, I can hold on to the instance of the MenuState Object and change as necessary in the Presenter and fire off an event so that the menu can be updated?
Do you think this can work?
I've also got a question around your "Command" in this article. The way that I want to implement my repositories, is to have a method "Save" that takes as parameter the actual object that needs to be persisted. How would this work in your setup? Would you register the object that needs to be saved with your IoC container and resolve it in the constructor (as you do with the repository)? Or can StructureMap do a lot more than Unity? Please help me with some ideas!
Dawid
Hi Jeremy, I have been reading your series and I must say it has been a very good introduction to MVC; something I haven't really looked into before now.
While I was reading this article I noticed that you seem to be dealing with your menu items as the item to be enabled or disabled; however, in most applications you end up with two menu items providing certain functionality (think a New button on a toolbar and on the File menu). This would mean that you have to put the (almost) identical code for controlling them against both items, which seemed a little odd.
I decided to have a go at implementing my own Command-centric version of this, which can be seen on my website:
If you had the time to have a quick look and see what you think of my version I would be very appreciative.
Thanks
(ps, any chance you can finish off the series | http://codebetter.com/blogs/jeremy.miller/pages/build-your-own-cab-14-managing-menu-state-with-microcontroller-s-command-s-a-layer-supertype-some-structuremap-pixie-dust-and-a-dollop-of-fluent-interface.aspx | crawl-002 | en | refinedweb |
Easily automate Microsoft Outlook via .NET
Takeaway: The interoperability between the .NET platform and Microsoft Office, particularly Outlook, greatly expands developers' options. Learn how to take advantage of Outlook in your next development endeavor.
In recent articles I've covered working with various pieces of the Microsoft Office Suite including Word and Excel. Today, I'll continue the concept by manipulating Microsoft Outlook via Microsoft .NET code.
Working with Outlook
As with other Microsoft Office products, automation goes through the same object model that Visual Basic for Applications (VBA) uses, so a little COM (Component Object Model) knowledge comes in handy. However, the .NET COM interop feature simplifies the process for .NET developers: you can easily use COM objects within a .NET application.
The Outlook object model exposes many features of the Outlook environment. This includes the following functionality:
- Contacts database: Work with the Outlook contacts databases. You may add new contacts, edit existing ones, or integrate with other applications, such as creating a letter in Word from the information in an Outlook contact.
- Calendar: Create, edit, delete, and manipulate individual calendar entries.
- Notes and Tasks: Interact with the notes and tasks portions of Outlook.
- Outlook user interface: Control or manipulate explorers, inspectors, command bars, and so forth.
Microsoft supplies an interop assembly, which ships with Outlook 2003. This assembly, Microsoft.Office.Interop.Outlook.dll, resides in the Global Assembly Cache. To reference it from Visual Studio .NET 2003, open the Add References dialog box, switch to the COM tab, and select Microsoft Outlook 11.0 Object Library.
Outlook object model
Outlook exposes an overwhelming number of objects with associated methods, properties, and so forth. Thankfully, their functions are documented and available with an installation of Microsoft Office. The help files are located on the hard drive (if help is installed) in the following path:
<drive letter>:\Program Files\Microsoft Office\OFFICE11\1033
There will be help files for all Office applications installed. A drawback of the help documentation is that all code examples are presented in VBScript, so you will have to be aware of this difference when applying them to the .NET Framework. With that said, let's take a closer look at individual entities within the Outlook object model. We'll focus on the following items:
- Application: The Outlook application environment. This is the root object necessary to work with Outlook.
- Namespace: The Namespace object represents the messaging service provider. The MAPI namespace is required to gain access to Outlook folders and items.
- MailItem: The MailItem object allows you to work with Outlook mail memos. There are various item objects available to work with the various item types including notes, calendar entries, contacts, and so forth.
- MAPIFolder: If you use Outlook, you're familiar with folders. (I'm often chastised for my organized Outlook inbox, with everything filed in the appropriate folder.) The MAPIFolder object provides a vehicle for working with mail folders.
Let's put these objects to use by creating and sending e-mails via Outlook.
Sending e-mail
In the next example, a simple mail message is created and sent via Outlook. Listing A uses C#, while Listing B shows the example with VB.NET.
A few notes on the code:
- A copy of the mail message is saved to the Drafts folder to demonstrate how placing a message in a folder works. The OlDefaultFolders enumeration lists the default folders included with Outlook.
- The creation of a mail item uses the CreateItem method of the Outlook.Application class. The call to this method requires a cast to the necessary Item type (in this case, MailItem).
- The various properties of a mail message are included as properties of the MailItem class: to, subject, body, and so forth.
- The SaveSentMessageFolder property of the MailItem class allows you to designate where a copy of the message should be saved (it is optional).
- The Send method of the MailItem class sends the message via Outlook.
- Message creation is included in a try/catch block that handles COM exceptions. The COMException class is available in the System.Runtime.InteropServices namespace, so this namespace should be included in the code.
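Since Listings A and B are not reproduced here, the following C# sketch shows the general shape of the code those notes describe. It assumes a project reference to the Microsoft Outlook 11.0 Object Library; the recipient address is a placeholder, and the snippet can only run on a machine with Outlook installed:

```csharp
using System.Runtime.InteropServices;
using Outlook = Microsoft.Office.Interop.Outlook;

public class MailSender
{
    public static void SendMessage()
    {
        try
        {
            Outlook.Application app = new Outlook.Application();

            // CreateItem returns object, so cast to the item type you asked for
            Outlook.MailItem message = (Outlook.MailItem)
                app.CreateItem(Outlook.OlItemType.olMailItem);

            message.To = "someone@example.com";   // placeholder address
            message.Subject = "Test message";
            message.Body = "Sent via Outlook automation.";

            // Optional: keep a copy in the Drafts folder after sending
            Outlook.NameSpace ns = app.GetNamespace("MAPI");
            message.SaveSentMessageFolder = ns.GetDefaultFolder(
                Outlook.OlDefaultFolders.olFolderDrafts);

            message.Send();  // triggers Outlook's security prompt
        }
        catch (COMException ex)
        {
            System.Console.WriteLine("Outlook automation failed: " + ex.Message);
        }
    }
}
```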
Outlook security
Security is an important aspect of Outlook, and allowing external code to create and send mail messages is considered a big security risk. When and if you execute the code in the previous example, you will see an Outlook security dialog box that says "A program is trying to automatically send e-mail on your behalf. Do you want to allow this? If this is unexpected, it may be a virus and you should choose 'No'." The dialog box contains Yes, No, and Help buttons, so the user may cancel the operation with a simple click of the mouse.
Creating an appointment
In the next example, we create an Outlook appointment to appear on the user's calendar. Once again, the base Application object is utilized. In addition, an AppointmentItem is used as opposed to the previously used MailItem since we are creating a calendar entry. Listing C uses C# to demonstrate how you may accomplish this. Listing D provides the example in VB.NET.
A few notes on the code:
- The AppointmentItem class includes various properties for setting up an appointment. In this example, we utilize the Subject (what appears in the calendar), body (visible when an entry is opened), location, start date/time, and end date/time.
- The ReminderSet property of the AppointmentItem class determines if a reminder is displayed to the user. The associated ReminderMinutesBeforeStart property determines when the reminder will be displayed.
- The Importance property of the AppointmentItem class sets the item's importance. Values for the property come from the OlImportance enumeration, which has the following values: olImportanceHigh, olImportanceLow, and olImportanceNormal.
- The BusyStatus property determines how the appointment is displayed on the calendar (i.e., whether the time is blocked off as busy, tentative, free, and so forth). The OlBusyStatus enumeration contains the following list of possible values: olBusy, olFree, olOutOfOffice, and olTentative.
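As with the mail example, Listings C and D are not reproduced here, so this is a hedged C# sketch of the appointment code those notes describe. The subject, location, and times are invented, and it requires Outlook 2003 and its interop assembly:

```csharp
using Outlook = Microsoft.Office.Interop.Outlook;

public class AppointmentMaker
{
    public static void CreateAppointment()
    {
        Outlook.Application app = new Outlook.Application();

        Outlook.AppointmentItem appt = (Outlook.AppointmentItem)
            app.CreateItem(Outlook.OlItemType.olAppointmentItem);

        appt.Subject = "Project status meeting";  // what appears on the calendar
        appt.Body = "Review open issues.";        // visible when the entry is opened
        appt.Location = "Conference room B";
        appt.Start = System.DateTime.Now.AddDays(1);
        appt.End = System.DateTime.Now.AddDays(1).AddHours(1);

        // Pop a reminder 15 minutes before the appointment starts
        appt.ReminderSet = true;
        appt.ReminderMinutesBeforeStart = 15;

        appt.Importance = Outlook.OlImportance.olImportanceHigh;
        appt.BusyStatus = Outlook.OlBusyStatus.olBusy;

        appt.Save();  // writes the entry to the default calendar
    }
}
```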
Another developer option
Microsoft Office is the most popular office productivity suite in the world, and it is a desktop standard for most large organizations. For this reason, it is often available for you to take advantage of in your .NET applications. A Windows Forms application may easily use the mail features of Outlook and, likewise, add calendar appointments or contact database entries with very little code. The interoperability between the .NET platform and Microsoft Office greatly expands the developer's options.
All Articles from Perl.com
Using Amazon S3 from Perl. [Apr 8, 2008]
Memories of 20 Years of Perl
The Perl community just celebrated the 20th anniversary of Perl. Here are some stories from Perl hackers around the world about problems they've solved and memories they've made with the venerable, powerful, and still vital language. [Dec 21, 2007]
Programming is Hard, Let's Go Scripting...
Larry Wall's annual State of the Onion describes the state of Perl, the language and the community. In his 11th address, he discussed the past, present, and future of scripting languages, including the several dimensions of design decisions important to the development of Perl 6. [Dec 6, 2007]
Option and Configuration Processing Made Easy. [Jul 12, 2007]
Lightning Strikes Four Times
Perl lightning articles offer short takes on important subjects. See how Perl can outperform C for 3D programming, how (and why) to express cross-cutting concerns in your programs, and one way of keeping your test counts up-to-date. [Apr 12, 2007]
The Beauty of Perl 6 Parameter Passing. [Mar 1, 2007]
Advanced HTML::Template: Widgets
HTML::Template is a templating module for HTML made powerful by its simplicity. Its minimal set of operations enforces a strict separation between presentation and logic. However, sometimes that minimalism makes templates unwieldy. Philipp Janert demonstrates how to reuse templates smaller than an entire page--and how this simplifies your applications. [Feb 1, 2007]. [Jan 11, 2007]
Using Java Classes in Perl
Java has a huge amount of standard libraries and APIs. Some of them don't have Perl equivalents yet. Fortunately, using Java classes from Perl is easy--with Inline::Java. Andrew Hanenkamp shows you how. [Dec 21, 2006]
Advanced HTML::Template: Filters. [Nov 30, 2006]
Hash Crash Course
Most explanations of hashes use the metaphor of a dictionary. Most real-world code uses hashes for far different purposes. Simon Cozens explores some patterns of hashes for counting, uniqueness, caching, searching, set operations, and dispatching. [Nov 2, 2006]
The State of the Onion 10
In Larry Wall's tenth annual State of the Onion address, he talks about raising children and programming languages and balancing competing tensions and irreconcilable desires. [Sep 21, 2006]
Unraveling Code with the Debugger. [Apr 6, 2006]
Using Ajax from Perl
The recently rediscovered Ajax technique makes the client side of web programming much more useful and pleasant. However, it also means revising your existing web applications to take advantage of this new power. Dominic Mitchell shows how to use CGI::Ajax to give your Perl applications access to this new power. [Mar 2, 2006]
Debugging and Profiling mod_perl Applications
How do you use the debugger on a mod_perl application? How do you profile an application embedded in a web server, with multiple child processes? Don't worry. Where there's Perl, there's a way. Frank Wiles demonstrates how to debug and profile mod_perl applications. [Feb 9, 2006]
More Advancements in Perl Programming. [Jan 26, 2006]
Analyzing HTML with Perl
Kendrew Lau taught HTML development to business students. Grading web pages by hand was tedious--but Perl came to the rescue. Here's how Perl and HTML parsing modules helped make teaching fun again. [Jan 19, 2006]
What Is Perl 6
Perl 6 is the long-awaited rewrite of the venerable Perl programming language. What's the status? What's changing? What's staying the same? Why does Perl need a rewrite anyway? chromatic attempts to answer all of these questions. [Jan 12, 2006]
Lexing Your Data. [Jan 5, 2006]
Document Modeling with Bricolage
Any document-processing application needs to make a model of the documents it expects to process. This can be a time-consuming and error-prone task, especially if you've never done it before. David Wheeler of the Bricolage project shows how to analyze and model documents for his publishing system. Perhaps it can help you. [Nov 23, 2005]
Building E-Commerce Sites with Handel
Building web sites can be tedious--so many parts and pieces are all the same. Have you written enough form processors and shopping carts to last the rest of your life? Now you can get on with the real programming. Christopher H. Laco shows how to use Handel and Catalyst to build a working e-commerce site without actually writing any code. [Nov 17, 2005]
Making Sense of Subroutines
Subroutines are the building blocks of programs. Yet, too many programmers use them ineffectively, whether not making enough of them, naming them poorly, combining too many concepts into one, or any of a dozen other problems. Used properly, they can make your programs shorter, faster, and more maintainable. Rob Kinyon shows the benefits and advanced uses that come from revisiting the basics of subroutines in Perl. [Nov 3, 2005]
Data Munging for Non-Programming Biologists
Scientists often have plenty of data to munge. Non-programmer scientists often have to beg their coders for help or get by doing it themselves. Amir Karger and his colleagues had a different idea. Why not provide them with short, interchangeable Perl recipes to solve small pieces of larger problems? Here's how they built the Scriptome. [Oct 20, 2005]
Making Menus with wxPerl. [Oct 6, 2005]
The State of the Onion 9
In Larry Wall's ninth annual State of the Onion address, he explains Perl 6's Five Year Plan, how Perl programmers are like spies (or vice versa), and how open source can learn from the intelligence community. [Sep 22, 2005]
Perl Needs Better Tools
Perl is a fantastic language for getting your work done. It's flexible, forgiving, malleable, and dynamic. Why shouldn't it have good, powerful tools? Are Perl development tools behind those of other, perhaps less-capable languages? J. Matisse Enzer argues that Perl needs better tools, and explains what those tools should do. [Aug 25, 2005]
This Week in Perl 6, August 17-23, 2005
Matt Fowles summarizes the Perl 6 mailing lists, with p6i seeing the most HLL discussion yet; p6l debating binding, parameters, and primitives; and p6c appreciating pretty graphics. [Aug 25, 2005]
Parsing iCal Data. [Aug 18, 2005]
This Week in Perl 6, Through August 14, 2005
Piers Cawley summarizes the Perl 6 mailing lists with containers and metamodels on the Perl 6 compiler list, metamodel and trait questions on the Perl 6 language list, and opcode changes and test modules on the Perl 6 internals list. [Aug 18, 2005]
Automated GUI Testing. [Aug 11, 2005]
This Week in Perl 6, August 2-9, 2005
Matt Fowles summarizes the Perl 6 mailing lists, with p6i seeing build and platform patches, p6l exploring meta-meta discussions, and p6c enjoying Pugs and PxPerl releases. [Aug 11, 2005]
This Week in Perl 6, through August 2, 2005
Piers Cawley summarizes the Perl 6 mailing lists with PIL discussion on the Perl 6 compiler list, type and container questions on the Perl 6 language list, and a Lua compiler on the Perl 6 internals list. [Aug 8, 2005]
Building a 3D Engine in Perl, Part 4
The ultimate goal of all programming is to be as unproductive as possible--to write games. In part four of a series on building a 3D engine with Perl, Geoff Broadwell explains how to profile your engine, how to improve performance and code with display lists, and how to render text. [Aug 4, 2005]
This Week in Perl 6, July 20-26, 2005
Matt Fowles summarizes the Perl 6 mailing lists, with p6i discussing garbage collection schemes, p6l rethinking object attribute access and plotting GC APIs and access, and p6c reporting problems, documenting PIL, and discussing the grammar. [Jul 28, 2005]
An Introduction to Test::MockDBI. [Jul 21, 2005]
This Week in Perl 6, July 13-19, 2005
Piers Cawley summarizes the Perl 6 mailing lists with Pugs running on a JavaScript engine, GMC plans for Parrot, and typechecking and metamodel discussions about Perl 6. [Jul 21, 2005]
This Week in Perl 6, July 5-12, 2005
Matt Fowles summarizes the Perl 6 mailing lists, with p6l discussing metamodels, MMD, and invocants; p6i handling Leo's new calling conventions; and p6c plotting on retargeting Pugs to different back ends. [Jul 14, 2005]
Building Navigation Menus
Well-designed websites are easy to navigate, with sensible menus, breadcrumb trails, and the information you need within three clicks of where you are. Rather than tediously coding navigation structures by hand, why not consider using a Perl module to generate them for you? Shlomi Fish shows how to use his HTML::Widgets::NavMenu module. [Jul 7, 2005]
This Week in Perl 6, June 29-July 5, 2005
Piers Cawley summarizes the Perl 6 mailing lists with YAPC::NA hackathons, a request for better archives, DBI v2 plans from Tim Bunce, and PGE interoperability questions. [Jul 7, 2005]
Annotating CPAN
Perl has voluminous documentation, both in the core distribution and in thousands of CPAN modules. That doesn't make it all perfect, though, and the amount of documentation can make it daunting to find and recommend changes or clarifications. The Perl Foundation recently sponsored Ivan Tubert-Brohman to fix this; here's how he built AnnoCPAN, an annotation service for module documentation. [Jun 30, 2005]
This Week in Perl 6, June 21-28, 2005
Matt Fowles summarizes the Perl 6 mailing lists with p6c discussing self-hosting options for Perl 6, Parrot segfaults and changes; and AUTOLOAD and self method invocation discussions continuing on p6l. [Jun 30, 2005]
This Week in Perl 6, June 1-7, 2005
Piers Cawley summarizes the Perl 6 mailing lists with Parrot 0.2.1 released, mod_parrot bundled with mod_pugs (or vice versa), an end to the reduce operator debate, and a paean to Parrot lead architect Dan Sugalski. [Jun 9, 2005]
This Week in Perl 6, May 25, 2005-May 31, 2005
Matt Fowles summarizes the Perl 6 mailing lists with Parrot keys, MMD, Tcl, Python discussion, Pugs' continued evolution, introspection, generation, and more Perl 6 meta-programming goodness. [Jun 2, 2005]
This Week in Perl 6, May 18 - 24, 2005
Piers Cawley summarizes the Perl 6 mailing lists with Inline::Pugs bridging the gap, ParTcl coming into existence, and many questions about multimethod dispatch in Perl 6. [May 27, 2005]
Manipulating Word Documents with Perl
Unix hackers love their text editors for plain-text manipulatey goodness--especially Emacs and Vim with their wonderful extension languages (and sometimes Perl bindings). Don't fret, defenestrators-to-be. Andrew Savikas demonstrates how to use Perl for your string-wrangling when you have to suffer through using Word. [May 26, 2005]
This Week in Perl 6, May 3, 2005 - May 17, 2005
Matt Fowles summarizes the Perl 6 mailing lists with Pugs gaining object support, Parrot 0.2.0 released, and Perl 6 going through a reduction (though not in volume). [May 19, 2005]
Build a Wireless Gateway with Perl. [May 19, 2005]
Inside YAPC::NA 2005
One of the success stories of the Perl community is the series of self-organized Yet Another Perl Conferences. This year's North American YAPC is in Toronto in late June. chromatic recently interviewed Richard Dice, the man behind YAPC::NA 2005 to discuss how to put together a conference and what to expect from the conference and its attendant extracurricular activities in lovely Toronto. [May 12, 2005]
Massive Data Aggregation with Perl. [May 5, 2005]
This Week in Perl 6, April 26 - May 3, 2005
Matt Fowles summarizes the Perl 6 mailing lists with Pugs 6.2.2 released, Parrot freezing for a release, and the great debate over invocant naming continuing. [May 5, 2005]
This Week in Perl 6, April 20 - 26, 2005
Piers Cawley summarizes the Perl 6 mailing lists with Pugs 6.2.1 released, more MMD schemes, and big discussions of blocks, invocants, and parameters. [May 2, 2005]
People Behind Perl: brian d foy
brian d foy is a long-time Perl hacker and leader, having founded the Perl Mongers, written and helped to write many useful CPAN modules, and recently founded The Perl Review, which he publishes and edits. Perl.com recently interviewed brian about his work, history, and future plans. [Apr 28, 2005]
Automating Windows Applications with Win32::OLE
Many Windows applications are surprisingly automatable through OLE, COM, DCOM, et cetera. Even better, this automation works through Perl as well. Henry Wasserman walks through the process of discovering how to drive Internet Explorer components to automate web testing from Perl. [Apr 21, 2005]
This Week in Perl 6, April 12 - 19, 2005
Matt Fowles summarizes the Perl 6 mailing lists with Pugs 6.2.0 released, documentation patches, a switch to Subversion, and scope, whitespace, and character class questions. [Apr 21, 2005]
Building Good CPAN Modules
Your code is amazing. It works exactly as you intended. You've decided to give back, to share it with the world by uploading it to the CPAN. Before you do, though, there are a few fiddly details about cross-platform and cross-version compatibility to keep in mind. Rob Kinyon gives several guidelines about writing CPAN modules that will work everywhere they will be useful. [Apr 14, 2005]
This Week in Perl 6, April 4-11, 2005
Piers Cawley summarizes the Perl 6 mailing lists with a new plan for Ponie, a Parrot/Pugs hackathon announcement, and identity tests. [Apr 14, 2005]
Perl Code Kata: Mocking Objects
One problem with many examples of writing test code is that they fake up a nice, perfect, self-contained world and proceed to test it as if real programs weren't occasionally messy. Real programs have to deal with external dependencies and work around odd failures, for example. How do you test that? In this Perl Code Kata, Stevan Little presents exercises in using Test::MockObject to make the messy real world more testable. [Apr 7, 2005]
This Fortnight in Perl 6, March 22 - April 3, 2005
Matt Fowles summarizes the Perl 6 mailing lists with p6l discussing converters and S03 and S29 updates, p6c finding and fixing bugs in Pugs, and p6i cleaning up code and welcoming Chip. [Apr 7, 2005]
More Lightning Articles
Yes, it's the return of Perl Lightning Articles -- short discussions of Perl and programming. This time, learn about Emacs customization with Perl, debugging without adding print statements, testing database-heavy code, and why unbuffering may be a mistake. [Mar 31, 2005]
This Fortnight in Perl 6, March 7 - March 21, 2005
Matt Fowles summarizes the Perl 6 mailing lists with the resurgence of Perl 6 language questions, implementation decisions galore, and a new Parrot chief architect. [Mar 24, 2005]
Symbol Table Manipulation
One of the most dramatic advantages of dynamic languages is that they provide access to the symbol table at run-time, allowing new functions and variables to spring into existence as you need them. Though they're not always the right solution to common problems, they're very powerful and useful in certain circumstances. Phil Crow demonstrates how and when and why to manipulate Perl's symbol table. [Mar 17, 2005]
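As a rough analogue of the technique the article covers, here is a hedged sketch — in Python, purely for illustration, since the article itself works with Perl's package symbol tables — of functions springing into existence at run time by writing into a module's namespace. The accessor names and record fields are invented for the example.

```python
# Sketch: creating functions at run time by writing into a namespace,
# mirroring the kind of symbol-table manipulation the article performs
# on Perl's symbol tables.

def make_accessor(field):
    """Build a getter function for a named field of a dict-like record."""
    def accessor(record):
        return record[field]
    accessor.__name__ = f"get_{field}"
    return accessor

# Install a getter for each field into the module's symbol table.
for _field in ("name", "email"):
    globals()[f"get_{_field}"] = make_accessor(_field)

record = {"name": "Ada", "email": "ada@example.org"}
print(get_name(record))   # this function did not exist until run time
```

As the article cautions for Perl, this power is best reserved for cases where the set of functions genuinely isn't known until run time.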
This Fortnight in Perl 6, Feb. 23 - March 7, 2005
Matt Fowles summarizes the Perl 6 mailing lists with the release of Parrot 0.1.2, lots of Pugs patches, and a plea for off-list summarization help. [Mar 10, 2005]
A Plan for Pugs. [Mar 3, 2005]
This Fortnight in Perl 6, Feb. 9-22, 2005
Matt Fowles summarizes the Perl 6 mailing lists with kudos to Autrijus, still more plans for the Parrot 0.1.2 release, and lots and lots and lots of words about junctions. [Feb 24, 2005]
Perl and Mandrakelinux
Perl is a fantastic tool for system administrators. Why not use it for building administrative applications? That's just what Mandrakelinux does! Mark Stosberg recently interviewed Perl 5.10 pumpking and Mandrake employee Rafael Garcia-Suarez about the use of Perl for graphical applications. [Feb 24, 2005]
Building a 3D Engine in Perl, Part 3
The ultimate goal of all programming is to be as unproductive as possible--to write games. In part three of a series on building a 3D engine with Perl, Geoff Broadwell explains how to manage the viewpoint and how to achieve impressive lighting effects with OpenGL. [Feb 17, 2005]
Perl Code Kata: Testing Databases. [Feb 10, 2005]
This Week in Perl 6, Feb. 1 - 8, 2005
Matt Fowles summarizes the Perl 6 mailing lists with bugfixes, plans for a Parrot 0.1.2 release, and the introduction of Featherweight Perl 6, an actual implementation. [Feb 10, 2005]
Throwing Shapes
Sometimes data processing works best when you separate the application into multiple parts; this is the well-loved client-server model. What goes on between the parts, though? Vladi Belperchinov-Shabanski walks through the design and implementation of a Remote Procedure Call system in Perl. [Feb 3, 2005]
This Fortnight in Perl 6, Jan. 19-31, 2005
Matt Fowles summarizes the Perl 6 mailing lists with more Parrot calling conventions, Perl 6 loop-ending and loop-continuing semantics, and evil thoughts from Luke Palmer. [Feb 3, 2005]
The Phalanx Project
One ancient Greek military invention was the phalanx, a group of soldiers whose overlapping shields protected one another. In the Perl world, the Phalanx project intends to improve the quality of Perl 5, Ponie, and the top CPAN modules. Project founder Andy Lester describes the goals and ambitions. [Jan 20, 2005]
This Week in Perl 6, Jan. 11-18, 2005
Matt Fowles summarizes the Perl 6 mailing lists with idioms, loop counters, method-calling semantics, and the return of Dan Sugalski. [Jan 20, 2005]
An Introduction to Quality Assurance
The libraries and syntax for automated testing are easy to find. The mindset of quality and testability is harder to adopt. Tom McTighe reviews the basic principles of quality assurance that can make the difference between a "working" application and a high-quality application. [Jan 13, 2005]
This Week in Perl 6, January 03 - January 11, 2005
Matt Fowles summarizes the Perl 6 mailing lists with bugfixes, multidimensional data structures, and a new syntax engine. [Jan 13, 2005]
This Fortnight in Perl 6, December 21 - 31 2004
Matt Fowles summarizes the Perl 6 mailing lists with the final summary of 2004. What's on the lists? Patches, design decisions, and lots of theory. [Jan 6, 2005]
Bricolage Configuration Directives
Any serious application has a serious configuration file. The Bricolage content management system is no different. David Wheeler explains the various configuration options that can tune your site to your needs. [Jan 6, 2005]
This Fortnight in Perl 6, December 7-20 2004
Matt Fowles summarizes the Perl 6 mailing lists: the Perl 6 language list discusses hashes, classes, and variables; the Perl 6 Compiler list launches code; and the Parrot list fixes lots and lots of bugs. [Dec 29, 2004]
Building a 3D Engine in Perl, Part 2
The ultimate goal of all programming is to be as unproductive as possible--to write games. In part two of a series on building a 3D engine with Perl, Geoff Broadwell demonstrates animations and event handling. [Dec 29, 2004]
Introducing mod_parrot
mod_perl marries Perl 5 with the Apache web server. What's the plan for Perl 6? mod_parrot--and it may also serve as a base for any language hosted on the Parrot virtual machine. After a brief hiatus, Jeff Horwitz recently resurrected mod_parrot development. Here's the current state, what works, and how to play with it on your own. [Dec 22, 2004]
Perl Code Kata: Testing Imports
Persistently practicing good programming will make you a better programmer. It can be difficult to find small tasks to practice, though. Fear not! Here's a 30-minute exercise to improve your testing abilities and your understanding of module importing and exporting. [Dec 16, 2004]
The Evolution of ePayment Services at UB
Perl is often a workhorse behind the scenes, content to do its job quietly and without fuss. When the State University of New York at Buffalo needed to offer electronic payment services to students, the Department of Computing Services reached for Perl. Jim Brandt describes how Perl (and a little Inline::Java) helped them build just enough code to allow students to pay their bills online. [Dec 9, 2004]
This Fortnight in Perl 6, December 1 - 6 2004
Matt Fowles summarizes the Perl 6 mailing lists: the Perl 6 language list discusses a shiny new syntax update, and the Parrot list discusses what is and isn't up for grabs. [Dec 9, 2004]
This Fortnight in Perl 6, November 16-30 2004
Matt Fowles summarizes the Perl 6 mailing lists, with the introduction of the Parrot Grammar Engine! [Dec 2, 2004]
Building a 3D Engine in Perl
The ultimate goal of all programming is to be as unproductive as possible -- to write games. Why hurt yourself to write in low-level languages, though, when Perl provides all of the tools you need to do it well? Geoff Broadwell demonstrates how to use OpenGL from Perl. [Dec 1, 2004]
Perl Debugger Quick Reference
Perl's debugger is both powerful and somewhat esoteric. This printable excerpt from Richard Foley's Perl Debugger Pocket Reference can help take some of the mystery out of the common commands and put more advanced features within your reach. [Nov 24, 2004]
Cross-Language Remoting with mod_perlservice
Remoting -- sharing data between server and client processes -- is powerful, but writing your own protocols is tedious and difficult. XML-RPC is too simple and SOAP and CORBA are too complex. Isn't there something in the middle, something easier to set up and use? Michael W. Collins introduces mod_perlservice, an Apache httpd module that provides remote services to C, Perl, or Flash clients. [Nov 18, 2004]
This Week in Perl 6, November 9 - 15 2004
Matt Fowles summarizes the Perl 6 mailing lists, with the Parrot list discussing continuations and the unchanging calling conventions, the Perl 6 folks discussing exports, and the Perl 6 Compiler list still strangely quiet. [Nov 18, 2004]
Implementing Flood Control. [Nov 11, 2004]
This Week in Perl 6, November 2 - 8 2004
Matt Fowles summarizes the Perl 6 mailing lists, with the Parrot folks talking about optimizations not to apply yet, the Perl 6 people receiving updated Synopses and Apocalypses, and the Perl 6 Internals team being strangely quiet. [Nov 11, 2004]
Komodo 3.0 Review
ActiveState has recently released version 3.0 of its Komodo IDE, supporting agile languages. Jason Purdy reviews the progress made since the 2.0 release. [Nov 4, 2004]
This Fortnight on Perl 6, October 2004 Part Two
Matt Fowles summarizes two more weeks of the Perl 6 mailing lists in the last half of October. [Nov 4, 2004]
Installing Bricolage
Though CPAN makes it possible to write large and powerful applications, distributing those applications can prove daunting. In the case of the Bricolage content management system, David Wheeler's installation guide will walk you through the process. [Oct 28, 2004]
This Fortnight on Perl 6, October 2004
In a stunning achievement, Matt Fowles makes his debut as the new Perl 6 summarizer, covering all three major mailing lists through the first half of October. [Oct 28, 2004]
Perl Code Kata: Testing Taint
Persistently practicing good programming will make you a better programmer. It can be difficult to find small tasks to practice, though. Fear not! Here's a 30-minute exercise to improve your testing abilities and your understanding of Perl's taint mode. [Oct 21, 2004]
FMTYEWTK About Mass Edits In Perl
Though it's a full-fledged programming language now, Perl still has roots in Unix file editing. A hearty set of command-line switches, options, and shortcuts make it possible to process files quickly, easily, and powerfully. Geoff Broadwell explains far more than you ever wanted to know about it. [Oct 14, 2004]
Why Review Code?
Want to become a better programmer? Read good code! How do you know what's good code and where to start? Luke Schubert demonstrates how to pull ideas out of code by exploring Math::Complex. [Oct 7, 2004]
Don't Be Afraid to Drop the SOAP
Web services may be unfortunately trendy, but having a simple API for other people to use your application is very powerful and useful. Is SOAP the right way to go? Sam Tregar describes an alternate approach drawn from his work on the Bricolage and Krang APIs. [Sep 30, 2004]
This Week on Perl 6, Week Ending 2004-09-24
Piers Cawley has the latest from the Perl 6 mailing lists. The perl6-compiler list discusses rule engine flexibility, the Parrot people discuss the Parrot versions of Forth, Tcl, and Python as well as lexical pads, and the Perl 6 Language list argues about what being in the core really means. [Sep 28, 2004]
Building a Finite State Machine Using DFA::Simple. [Sep 23, 2004]
This Week on Perl 6, Week Ending 2004-09-17
Piers Cawley has the latest from the Perl 6 mailing lists. The perl6-compiler list discusses grammar bootstrapping, the Parrot people debate namespaces again, and the Perl 6 Language list ponders the freshly updated Synopsis 5. [Sep 23, 2004]
Embedded Databases. [Sep 16, 2004]
This Week on Perl 6, Week Ending 2004-09-10
Piers Cawley has the latest from the Perl 6 mailing lists. The perl6-compiler list makes its introduction, the Parrot people argue about configuration and namespaces (and play Minesweeper), and the Perl 6 language list continues to discuss Synopsis 9. [Sep 16, 2004]
Lightning Articles
Got something to say about Perl, but don't want to stretch it out to a full perl.com article? In the tradition of lightning talks, we now present lightning articles! [Sep 9, 2004]
This Week on Perl 6, Week Ending 2004-09-03
Piers Cawley has the latest from the Perl 6 mailing lists. The Parrot folks create some TODOs, hash out math semantics, and discuss cross-compiling. The Perl 6 language folks argue about ranges, roles, pipelines, and PRE and POST hooks. [Sep 9, 2004]
Hacking Perl in Nightclubs
By editing Perl programs on-the-fly, in real-time, Alex McLean is producing some really interesting computer music. He talks to perl.com about how it all works. [Aug 31, 2004]
Content Management with Bricolage
David Wheeler presents an introduction to the Bricolage content management system (CMS). [Aug 27, 2004]
This Week on Perl 6, Week Ending 2004-08-20
Better register spilling, COBOL on Parrot, and a re-examination of lookahead from Apocalypse 12. All this and more in the latest Perl 6 summary. [Aug 26, 2004]
The State of the Onion
Larry Wall's eighth annual State of the Onion address from OSCON 2004 related screensavers to surgery to Perl and the community. [Aug 19, 2004]
Perl Command-Line Options
After looking at special variables, Dave Cross now casts his eye over the impressive range of functionality available from simple command-line options to the Perl interpreter. [Aug 10, 2004]
This Week on Perl 6, Week Ending 2004-07-31
The pie hits! --more-- [Aug 6, 2004]
Giving Lightning Talks
It's conference season, and there's still a chance to sign up for lightning talks. Until now, there were no written rules for giving lightning talks. Mark Fowler explains. [Jul 30, 2004]
Building Applications with POE
In Matt Cashner's second article on POE, he describes how to fit together POE's components into event-driven applications. [Jul 23, 2004]
This Week on Perl 6, Week Ending 2004-07-18
The Piethon benchmark contest is beginning to loom, and the language list discusses how scalars should be interpolated and subscripted. [Jul 23, 2004]
Accessible Software
Jouke Visser demonstrates how to adapt your Perl programs for use by those who have difficulty using a mouse or keyboard. [Jul 15, 2004]
Autopilots in Perl
Jeffrey Goff explains how to connect the X-Plane flight simulator with a Perl console to build new instrument panels, traffic simulators, and even an autopilot in Perl. [Jul 12, 2004]
This Week on Perl 6, Week Ending 2004-07-04
Work begins on a Perl 6 regexp parser, and Unicode manipulation of strings prompts discussion on the language list. [Jul 9, 2004]
This Week on Perl 6, Week Ending 2004-06-27
Getting ready for the Piethon is a major concern, while the language list deals with various ways of modifying and annotating expressions. [Jul 2, 2004]
Application Design with POE
Matt Cashner provides a high-level introduction to POE, the Perl Object Environment, examining the concepts that POE brings to bear when designing long-running Perl applications. [Jul 2, 2004]
This Week on Perl 6, Fortnight Ending 2004-06-21
Arrays and other classes go into the basic Parrot PMC hierarchy, and Dan finally embraces Unicode while perl6-language ... doesn't. [Jun 24, 2004]
Profiling Perl
How do you know what your Perl programs are spending their time doing? How do you know where to start optimizing slow code? The answer to both these questions is "profiling," and Simon Cozens looks at how it's done. [Jun 24, 2004]
Perl's Special Variables
Dave Cross goes back to basics to show how using Perl's special variables can tidy up file-handling code. [Jun 18, 2004]
The Evolution of Perl Email Handling. [Jun 10, 2004]
This Week on Perl 6, Fortnight Ending 2004-06-06
Parrot gets the beginnings of library dynamic loading, and Perl 6 gets a... periodic table? [Jun 9, 2004]
Web Testing with HTTP::Recorder. [Jun 4, 2004]
Return of Quiz of the Week
Mark-Jason Dominus's quiz of the week mailing list is back, and we bring you the questions and solutions for the past week's quizzes. [May 28, 2004]
This Week on Perl 6, Week Ending 2004-05-23
Lots of documentation effort on the Parrot list this week, and some work on the Perl 6 compiler, while on the language list, magical new syntaxes for filling hashes... [May 27, 2004]
An Interview with Allison Randal
Allison is President of the Perl Foundation, and project manager for Perl 6. What does that actually mean? We caught up with her to talk about the Foundation, YAPC, and the Perl 6 effort. [May 21, 2004]
Affrus: An OS X Perl IDE
Affrus is a new IDE from Late Nite Software; Simon puts it through its paces to see how it compares to Komodo and his beloved Unix editors. [May 14, 2004]
This Week on Perl 6, Week Ending 2004-05-09
The native call interface raises questions on the internals list; Piers Cawley has the details on this and everything else from the Perl 6 effort. [May 14, 2004]
Building Testing Libraries
Save time, test more, and use what the CPAN has made available to enhance your development. Casey West demonstrates examples of good techniques when testing Perl-based software. [May 7, 2004]
This Week on Perl 6, Week Ending 2004-05-02
The internals list mulls over strings and multi-method dispatch, while Apocalypse 12 continues to intrigue and entertain the language list. [May 7, 2004]
Rapid Web Application Deployment with Maypole : Part 2
We use Maypole to turn last week's product catalogue into a complete web commerce application. [Apr 29, 2004]
This Week on Perl 6, Week Ending 2004-04-25
Now that Apocalypse 12 is out, what do the Perl 6 developers think about it? Piers has all the details... [Apr 29, 2004]
Rapid Web Application Deployment with Maypole
Maypole is a framework for creating web applications; Simon Cozens explains how to set up database-backed applications extremely rapidly. [Apr 22, 2004]
This Fortnight on Perl 6, Weeks Ending 2004-04-18
Parrot gains the beginnings of some Unicode support, causing much fallout; meanwhile, there's a fight over who gets the backtick operator, and what Perl 6 does to recognize and run Perl 5 code. [Apr 22, 2004]
Apocalypse 12
Larry Wall explains how objects and classes are to work in Perl 6. [Apr 16, 2004]
Using Bloom Filters. [Apr 8, 2004]
This week on Perl 6, week ending 2004-04-04
This week, the vexed question of how assignment should work returns, and Piers tries to make sense of continuations; meanwhile, the language list comes alive on the first of April... [Apr 4, 2004]
Photo Galleries with Mason and Imager
One of the major problems with the plethora of photo gallery software available is that very few of them integrate well with existing sites. Casey West comes up with a new approach using Imager and Mason to fit in with Mason sites. [Apr 1, 2004]
This week on Perl 6, week ending 2004-03-28
The language list is relatively quiet, but the Parrot implementors are haunted by continuations this week. Piers has the full story. [Mar 28, 2004]
Making Dictionaries with Perl
Sean Burke is a linguist who helps save dying languages by creating dictionaries for them. He shows us how he uses Perl to layout and print these dictionaries. [Mar 25, 2004]
This week on Perl 6, week ending 2004-03-21
Concerns about embedding and a new release of Tcl on Parrot occupy the internals mailing list, while the language list experiences some surprise about changes to the hash subscripting syntax. [Mar 21, 2004]
Synopsis 3
In this synopsis, Luke Palmer provides us with an updated view of Perl 6's operators. [Mar 18, 2004]
This week on Perl 6, week ending 2004-03-14
Benchmarks, Ponie and even Ruby drive Parrot development this week, while the language team discuss methods that mutate their objects and properties that call actions on variables. [Mar 14, 2004]
Simple IO Handling with IO::All
Perl module author extraordinaire Brian Ingerson demonstrates his latest creation. IO::All ties together almost all of Perl's IO handling libraries in one neat, cohesive package. [Mar 11, 2004]
This week on Perl 6, week ending 2004-03-07
Work on objects for Parrot continues, while the perl6-internals list gets dragged into a discussion about date/time handling; the & multimatching operator appears, and a question arises about detecting undefined subs on the language list. [Mar 7, 2004]
Distributed Version Control with svk
What started off for Chia-liang Kao as a wrapper around the Subversion version control system rapidly turned into a fully-fledged distributed VCS itself -- all, of course, in Perl. [Mar 4, 2004]
This week on Perl 6, week ending 2004-02-29
More on Parrot's objects, plus some discussion of the Perl 6 release timescale. Will we see Perl 6 this century? [Feb 29, 2004]
Exegesis 7
Damian Conway explains how formatting will work in Perl 6 -- with a twist... [Feb 27, 2004]
This week on Perl 6, week ending 2004-02-22
It had to happen some day - someone wrote obfuscated Parrot assembler. Objects are nearly there, and the fight over "sort" continues. [Feb 22, 2004]
Find What You Want with Plucene
Plucene is a Perl port of the Java Lucene search-engine framework. In this article, we'll look at how to use Plucene to build a search engine to index a web site. [Feb 19, 2004]
This week on Perl 6, week ending 2004-02-15
Parrot gains Data::Dumper, sort and nearly system(), while the language list struggles to agree on the best way to represent multi-level and multi-key sorting. [Feb 15, 2004]
Mail to WAP Gateways
Ever needed to quickly check your email while you're away from your computer? Pete Sergeant devises a way to convert a mailbox into a WAP page for you to easily check over the phone. [Feb 13, 2004]
This week on Perl 6, week ending 2004-02-08
This week, the internals team attack the challenges posed by garbage collection and threading, while the Unicode operators debate rages on over at the language list. Piers Cawley has the details. [Feb 8, 2004]
Siesta Mailing List Manager
Majordomo is past its best, and many Perl Mongers groups rely on ezmlm or Mailman. Why isn't there a decent Perl-based mailing list manager? Simon Wistow and others from London.pm decided to do something about it ... and came up with Siesta. [Feb 5, 2004]
This week on Perl 6, week ending 2004-02-01
Lots of little clean-ups done to Parrot this week, while the Perl 6 language design focuses on vector operations and Unicode operators. [Feb 1, 2004]
How We Wrote the Template Toolkit Book ...
When Dave Cross, Andy Wardley, and Darren Chamberlain got together to write the Perl Template Toolkit book, they decided to write it in Plain Old Documentation. Dave shows us how the Template Toolkit itself transformed that for publication. [Jan 30, 2004]
This week on Perl 6, week ending 2004-01-25
The internals list is concerned with threading and a smattering of other things; the language list debates vector operators and syntax mangling. Piers, as ever, fills us in. [Jan 25, 2004]
Introducing Mac::Glue
Now that Apple computers are all the rage again, we describe how the technically inclined can use Perl to script Mac applications. [Jan 23, 2004]
Maintaining Regular Expressions
It's easy to get lost in complex regular expressions. Aaron Mackey offers a few tips and an ingenious technique to help you keep things straight. [Jan 16, 2004]
This week on Perl 6, week ending 2004-01-11
Parrot fixes to threading, continuations, the JIT and the garbage collector; the Perl 6 language list discusses traits, roles, and, for some reason, the EU Constitution... [Jan 11, 2004]
This week on Perl 6, week ending 2004-01-04
Dan calls for detailed suggestions for the Parrot threading model, and Piers makes up for the lack of activity in the language list by asking a few key players about their New Year hopes for Perl 6. [Jan 4, 2004]
Blosxoms, Bryars and Blikis
How to add a blog, wiki, or some combination of the two to almost anything. [Dec 18, 2003]
This week on Perl 6, week ending 2003-12-07
Objects all round - Parrot gets objects, and there was much rejoicing. Meanwhile, Larry lifts parts of the veil on the Perl 6 object model. Piers Cawley has the details. [Dec 7, 2003]
How Perl Powers the Squeezebox
The Squeezebox is a hardware-based ethernet and wireless MP3 player from Slim Devices; its server is completely written in Perl, and is open and hackable. We talked to the Squeezebox developers about Perl, open source, and third-party Squeezebox hacking. [Dec 5, 2003]
This week on Perl 6, week ending 2003-11-30
The IMCC has a FAQ, the Perl 6 compiler gets updated to this month's reality, and Larry explains some more about the properties system. Piers Cawley, as ever, summarizes. [Nov 30, 2003]
This fortnight on Perl 6, week ending 2003-11-23
Dan returns from LL3 with new ideas, what "multi" really means, and more on the Perl 6 design philosophy - Piers Cawley sums up two weeks on the Perl 6 and Parrot mailing lists. [Nov 23, 2003]
Perl Slurp-Eaze
Uri Guttman demonstrates several different ways to read and write a file in a single operation, a common idiom that's sometimes misused. [Nov 21, 2003]
Solving Puzzles with LM-Solve
A great many puzzles and games, such as Solitaire or Sokoban, are of the form of a "logic maze" -- you move a board or tableau from state to state until you reach the appropriate goal state. Shlomi Fish presents his Games::LMSolve module, which provides a general representation of such games and an algorithm to solve them. [Nov 17, 2003]
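The "logic maze" idea described above — repeatedly moving from state to state until a goal state is reached — is, at heart, a breadth-first search over a state graph. A minimal sketch of that general approach (in Python, for illustration; this is not Games::LMSolve itself, and the toy puzzle is invented):

```python
from collections import deque

def solve_maze(start, is_goal, moves):
    """Breadth-first search over abstract game states.

    start   -- the initial state (must be hashable)
    is_goal -- predicate returning True for a goal state
    moves   -- function mapping a state to its successor states
    Returns the shortest list of states from start to a goal, or None.
    """
    frontier = deque([start])
    parent = {start: None}           # also serves as the visited set
    while frontier:
        state = frontier.popleft()
        if is_goal(state):
            path = []                # walk parent links back to start
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in moves(state):
            if nxt not in parent:    # visit each state only once
                parent[nxt] = state
                frontier.append(nxt)
    return None

# Toy puzzle: reach 10 from 1, where a "move" is +1 or *2.
print(solve_maze(1, lambda s: s == 10,
                 lambda s: [s + 1, s * 2]))   # → [1, 2, 4, 5, 10]
```

Because the solver only sees `start`, `is_goal`, and `moves`, the same loop works for Solitaire, Sokoban, or any other game you can encode as states and transitions — the generality that the module's abstract representation provides.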
This week on Perl 6, week ending 2003-11-09
People try to get PHP working on Parrot, while the perl6-language list thinks about nesting modules inside of modules. And Piers, diligent as ever, summarizes all the action for your benefit. [Nov 9, 2003]
Bringing Java into Perl
Phil Crow explains how to use Java code from inside of Perl, using the Inline::Java module. [Nov 7, 2003]
This week on Perl 6, week ending 2003-11-02
A Hallowe'en release, catering for method calls on empty registers, and Parrot gets an HTTP library. (No, really.) Perl 6 and Parrot news from Piers Cawley. [Nov 2, 2003]
Open Guides
Kake Pugh describes how Perl can help you find good beer in London, and many other places, with the OpenGuides collaborative city guides. [Oct 31, 2003]
This week on Perl 6, week ending 2003-10-26
IMCC becomes more important, how objects can get serialized, and the all-important Infocom interpreter: all the latest Parrot news from Piers. [Oct 26, 2003]
Database Programming with Perl
Simon Cozens introduces the DBI module, the standard way for Perl to talk to relational databases. [Oct 23, 2003]
This week on Perl 6, week ending 2003-10-19
A new Parrot pumpking, Larry returns, and the Perl 6 compiler actually starts gathering steam... All this and more in this week's Perl 6 summary. [Oct 19, 2003]
A Chromosome at a Time with Perl, Part 2
In this second article about using Perl in the bioinformatics realm, James Tisdall, author of Mastering Perl for Bioinformatics, continues his discussion about how references can greatly speed up a subroutine call by avoiding making copies of very large strings. He also shows how to bypass the overhead of subroutine calls entirely and how to quantify the behavior of your code by measuring its speed and space usage. [Oct 15, 2003]
This week on Perl 6, week ending 2003-10-12
The perl6-language list remains eerily silent, and Leo Tötsch… [Oct 12, 2003]
A Refactoring Example
Michael Schwern explains how to use refactoring techniques to make code faster. [Oct 9, 2003]
Identifying Audio Files with MusicBrainz
Paul Mison describes one way to use the MusicBrainz database to find missing information about audio tracks. [Oct 3, 2003]
Adding Search Functionality to Perl Applications
Do you write applications that deal with large quantities of data -- and then find you don't know the best way to bring back the information you want? Aaron Trevena describes some simple, but powerful, ways to search your data with reverse indexes. [Sep 25, 2003]
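A reverse (inverted) index maps each term back to the records that contain it, so a multi-word query becomes a cheap set intersection instead of a full scan. A minimal sketch of the idea (hypothetical code for illustration, not Aaron Trevena's implementation, and in Python rather than Perl):

```python
from collections import defaultdict

def build_index(docs):
    """Map each word to the set of document ids whose text contains it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def search(index, query):
    """Return ids of documents containing every word of the query."""
    sets = [index.get(word, set()) for word in query.lower().split()]
    return set.intersection(*sets) if sets else set()

docs = {1: "the quick brown fox",
        2: "the lazy dog",
        3: "quick dog"}
idx = build_index(docs)
print(search(idx, "quick dog"))   # → {3}
```

The index is built once, up front; each query then touches only the postings for its own words, which is what makes this approach scale to the "large quantities of data" the article is concerned with.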
This week on Perl 6, week ending 2003-09-21
The low-down on the 0.0.11 Parrot release, and some blue-sky thinking about clever optimizations - the latest from the Perl 6 world, thanks to our trusty summarizer. [Sep 21, 2003]
Cooking with Perl, Part 3
In this third and final batch of recipes excerpted from Perl Cookbook, 2nd Edition, you'll find solutions and code examples for extracting HTML table data, templating with HTML::Mason, and making simple changes to elements or text. [Sep 17, 2003]
A Chromosome at a Time with Perl, Part 1
If you're a Perl programmer working in the field of bioinformatics, James Tisdall offers a handful of tricks that will enable you to write code for dealing with large amounts of biological sequence data--in this case, very long strings--while still getting satisfactory speed from the program. James is the author of O'Reilly's upcoming Mastering Perl for Bioinformatics. [Sep 11, 2003]
This week on Perl 6, week ending 2003-09-07
This week in Perl 6, the keyed ops question raises its head again, how to dynamically add singleton methods, and why serialisation of objects is hard. [Sep 7, 2003]
Cooking with Perl, Part 2
Learn how to use SQL without a database server, and how to send attachments in email in two new recipes from the second edition of Perl Cookbook. And check back here in two weeks for new recipes on extracting table data, making simple changes to elements or text, and templating with HTML::Mason. [Sep 3, 2003]
This week on Perl 6, week ending 2003-08-31
Continuation passing style, active data, dump PMCs and absolutely nothing at all on the language list. [Aug 31, 2003]
Using Perl to Enable the Disabled
Some people use Perl to help with data munging, database hacking, and scripting menial tasks. Jouke Visser uses Perl to communicate with his disabled daughter. Here he explains what his pVoice software is and how it works. [Aug 23, 2003]
Cooking with Perl
The new edition of Perl Cookbook is about to hit store shelves, so to trumpet its release, we offer some recipes--new to the second edition--for your sampling pleasure. This week learn how to match nested patterns and how to pretend a string is a file. Check back here in the coming weeks for more new recipes on using SQL without a database server, extracting table data, templating with HTML::Mason, and more. [Aug 21, 2003]
This week on Perl 6, week ending 2003-08-17
Python on Parrot is nearly done, what's to do before Parrot 0.1.0, and when should we start writing about Perl 6? Piers Cawley reports on the perl6 mailing lists. [Aug 17, 2003]
Perl Design Patterns, Part 3
Phil Crow concludes his series on patterns in Perl, with a discussion of patterns and objects. [Aug 15, 2003]
Perl Design Patterns, Part 2
Phil Crow continues his series on how some popular patterns fit into Perl programming. [Aug 7, 2003]
This week on Perl 6, week ending 2003-08-03
Piers Cawley brings us news of PHP, Java, and Python ports to the Parrot VM, and more Exegesis 6 fall-out. [Aug 3, 2003]
Exegesis 6
Apocalypse 6 described the changes to subroutines in Perl 6. Exegesis 6 demonstrates what this will mean to you, the average Perl programmer. Damian Conway explains how the new syntax and semantics make for cleaner, simpler, and more powerful code. [Jul 29, 2003]
Overloading
C++ and Haskell do it, Java and Lisp don't; Python does it, and Ruby is almost built on it. What is the allure of operator overloading, and how does it affect Perl programmers? Dave Cross brings us an introduction to overload.pm and the Perl overloading mechanism. [Jul 22, 2003]
This week on Perl 6, week ending 2003-07-20
Threads, Events, code cleanups, and onions top the list of interesting things discussed on the Perl 6 lists this week. Piers Cawley summarizes. [Jul 20, 2003]
State of the Onion 2003
Larry Wall's annual report on the state of Perl, from OSCON 2003 (the seventh annual Perl conference) in Portland, Oregon in July 2003. In this full length transcript, Larry talks about being unreasonable, unwilling, and impossible. [Jul 16, 2003]
How to Avoid Writing Code. [Jul 15, 2003]
This week on Perl 6, week ending 2003-07-13
Ponie and Perl6::Rules impressed Perl 6 summarizer Piers Cawley this week. Read on to find out why. [Jul 13, 2003]
Integrating mod_perl with Apache 2.1 Authentication
It's a good time to be a programmer. Apache 2.1 and mod_perl 2 make it tremendously easy to customize any part of the Apache request cycle. The more secure but still easy-to-use Digest authentication is now widely supported in web browsers. Geoffrey Young demonstrates how to write a custom handler that handles Basic and Digest authentication with ease. [Jul 8, 2003]
This week on Perl 6, week ending 2003-07-06
Building IMCC as parrot, a better build system, and Perl 6 daydreams (z-code!) were the topics of note on perl6-internals and perl6-language this week, according to OSCON-session dodging summarizer Piers Cawley. [Jul 6, 2003]
Power Regexps, Part II
Simon looks at slightly more advanced features of the Perl regular expression language, including lookahead and lookbehind, extracting multiple matches, and regexp-based modules. [Jul 1, 2003]
Perl 6 Design Philosophy
Perl 6 Essentials is the first book to offer a peek into the next major version of the Perl language. In this excerpt from Chapter 3 of the book, the authors take an in-depth look at some of the most important principles of natural language and their impact on the design decisions made in Perl 6. [Jun 25, 2003]
This week on Perl 6, week ending 2003-06-22
Continuation Passing Shenanigans, evil dlopen() tricks, and controlling method dispatch dominate perl6-internals and perl6-language, according to fearless summarizer Piers Cawley. [Jun 23, 2003]
This week on Perl 6, week ending 2003-06-29
Exceptions, continuations, patches, and reconstituted flying cheeseburgers all dominated discussion on perl6-internals and perl6-language, according to summarizer Piers Cawley. No kidding. [Jun 23, 2003]
Hidden Treasures of the Perl Core, part II
Casey continues to look at some lesser-known modules in the Perl core. [Jun 19, 2003]
This week on Perl 6, week ending 2003-06-15
All the latest from perl6-language, perl6-internals and even the Perl 6 track at YAPC from our regular summariser, Piers Cawley. [Jun 15, 2003]
Perl Design Patterns
The Gang-of-Four Design Patterns book had a huge impact on programming methodologies in the Java and C++ communities, but what do Design Patterns have to say to Perl programmers? Phil Crow examines how some popular patterns fit in to Perl programming. [Jun 13, 2003]
This week on Perl 6, week ending 2003-06-08
IMCC becomes Parrot, continuation passing style, timely destruction YET AGAIN, multi-method dispatch and more. [Jun 8, 2003]
Regexp Power
In this short series of two articles, we'll take a look through some of the less well-known or less understood parts of the regular expression language, and see how they can be used to solve problems with more power and less fuss. [Jun 6, 2003]
This week on Perl 6, week ending 2003-06-01
Much more work on IMCC, method calling syntax in Parrot, coroutines attempt to become threads, compile-time binding, and many more discussions in this week's Perl 6 news - all summaries as usual by the eminent Piers Cawley. [Jun 1, 2003]
Hidden Treasures of the Perl Core
The Perl Core comes with a lot of little modules to help you get your job done. Many of these modules are not well known. Even some of the well known modules have some nice features that are often overlooked. In this article, we'll dive into many of these hidden treasures of the Perl Core. [May 29, 2003]
This week on Perl 6, week ending 2003-05-25
Piers takes a break from traditional English festivities to report on the Perl 6 world: this week, more about timely destruction, the Perl 6 Essentials book, a new layout for PMCs, and a lengthy discussion of coroutines. [May 25, 2003]
Testing mod_perl 2.0
Geoffrey Young examines another area of programming in mod_perl 2.0, testing your mod_perl scripts. [May 22, 2003]
This week on Perl 6, week ending 2003-05-18
Garbage collection versus timely destruction, colors in BASIC, how contexts interact, and whether or not we need a special sigil for objects - it's all talk in the Perl 6 world this week. [May 18, 2003]
CGI::Kwiki
Brian Ingerson's latest Perl module is a new modular implementation of the wiki, a world-editable web site. [May 13, 2003]
This week on Perl 6, week ending 2003-05-11
Perl 6 this week brings us news of managed and unmanaged buffers from Parrot's NCI, stack-walking garbage collection, co-routines, and some really horrible heredoc syntax wrangling. [May 11, 2003]
This week on Perl 6, week ending 2003-05-04
Piers reports on Parrot's calling conventions, the strange case of the boolean type, and much from the Perl 6 lists this week. [May 7, 2003]
2003 Perl Conferences
There are a plethora of Perl conferences on this year; which of them should you go to? I survey the conference scene and make a few recommendations about talks you might want to try and get to see. [May 6, 2003]
This week on Perl 6, week ending 2003-04-27
Memory allocation, NCI, more about types, and changes in the syntax of blocks - all the latest in the Perl 6 world from Piers Cawley. [Apr 27, 2003]
POOL
What do templating, object oriented modules, computational linguistics, Ruby, profiling and oil paintings have in common? They're all part of this introduction to POOL, a tool to make it easier to write Perl modules. [Apr 22, 2003]
This week on Perl 6, week ending 2003-04-20
Piers brings us a summary of much discussion of the proposed Perl 6 type system. [Apr 20, 2003]
Filters in Apache 2.0
Geoffrey Young, co-author of the renowned mod_perl Cookbook, brings us an introduction to Apache mod_perl 2.0, starting with Apache filters. [Apr 17, 2003]
This week on Perl 6, week ending 2003-04-13
Parrot on Win32, roll-backable variables, and properties versus methods. Piers has the scope, uh, scoop. [Apr 13, 2003]
Synopsis 6
Damian Conway and Allison Randal bring you a handy summary of the Perl 6 subroutine and type system, as described in last month's Apocalypse. [Apr 9, 2003]
This week on Perl 6, week ending 2003-04-06
Piers reports on the struggle for documentation, the battle of the threading models, and the victory for equality. [Apr 6, 2003]
Apache::VMonitor - The Visual System and Apache Server Monitor
In Stas' final article in his mod_perl series, we investigate how to monitor the performance and stability of our now fully-tuned mod_perl server using Apache::VMonitor. [Apr 2, 2003]
This week on Perl 6, week ending 2003-03-30
People remain silent about Leo's work, how to do static variables, assignment operators, and more. Piers Cawley has the details. [Mar 30, 2003]
Five Tips for .NET Programming in Perl
One of the most common categories of questions on the SOAP::Lite mailing list is getting Perl SOAP applications to work with .NET services. This article, by Randy J. Ray, coauthor of Programming Web Services with Perl, covers some of the most common traps and considerations that can trip up Perl developers. [Mar 26, 2003]
This week on Perl 6, week ending 2003-03-23
Another PDD, and much discussion of the latest Apocalypse. Piers Cawley, as ever, reports... [Mar 23, 2003]
For Perl Programmers: only
Brian Ingerson's curious new module allows you to specify which version of a module you want Perl to load - and even to install multiple versions at the same time. Let's hear about it from the man himself! [Mar 18, 2003]
This week on Perl 6, week ending 2003-03-16
Parrot 0.10.0 released, the Apocalypse hits, summarizer not quite buried... [Mar 16, 2003]
This week on Perl 6, week ending 2003-03-09
Object specifications and serialization discussion takes over both lists, and Piers narrowly escapes having to summarise the fallout from the Apocalypse already... [Mar 9, 2003]
Apocalypse 6
Larry continues his unfolding of the design of Perl 6 with his latest Apocalypse - this time, how subroutines are defined and called in Perl 6. [Mar 7, 2003]
Improving mod_perl Sites' Performance: Part 8
In the penultimate of Stas Bekman's mod_perl articles, more of those obscure Apache settings which can really speed up your web server. [Mar 4, 2003]
This week on Perl 6, week ending 2003-03-02
IMCC is still a subject of much debate on the perl6-internals list, while tumbleweed drifts through perl6-language. Piers has the details. [Mar 3, 2003]
Genomic Perl
After James Tisdall's "Beginning Perl for Bioinformatics", has Rex Dwyer come up with a "Beginning Bioinformatics for Perl Programmers"? Simon Cozens reviews "Genomic Perl", with some anticipation... [Feb 27, 2003]
This week on Perl 6, week ending 2003-02-23
More from Piers Cawley on Perl 6, IMCC, Parrot's performance and the continuing arrays versus lists saga. [Feb 23, 2003]
Building a Vector Space Search Engine in Perl
Have you ever wondered how search engines work, or how to add one to your program? Maciej Ceglowski walks you through building a simple, fast and effective vector-space search engine. [Feb 19, 2003]
This week on Perl 6, week ending 2003-02-16
Optimizations to the main loops, reflections from the Perl 6 design meeting, arrays versus lists, and much more... [Feb 16, 2003]
Module::Build
Traditionally, modules have been put together with ExtUtils::MakeMaker. Dave Rolsky describes a more modern solution, and in the first of a two-part series, tells us more about it. [Feb 12, 2003]
This week on Perl 6, week ending 2003-02-09
Beating Python, Parrot objects, shortcutting assignment operators, and much more... [Feb 9, 2003]
Improving mod_perl Sites' Performance: Part 7
In this month's episode of Stas Bekman's mod_perl series, more on how settings in your Apache configuration can make or break performance. [Feb 5, 2003]
This week on Perl 6, week ending 2003-02-02
Packfiles, coroutines, secure sandboxes, Damian takes a break, and much more... [Feb 2, 2003]
Embedding Perl in HTML with Mason
HTML::Mason is my favourite toolkit for building pages out of Perl-based components: can Dave Rolsky and Ken Williams do it justice in their new book? I take a look at "Embedding Perl in HTML with Mason" and find it a mixed bag... [Jan 30, 2003]
This week on Perl 6, week ending 2003-01-26
Problems on OS X, targetting Parrot, the pipeline syntax thread refuses to die, and much more... [Jan 26, 2003]
This week on Perl 6, week ending 2003-01-19
Yet more on dead object detection and pipeline syntax, (surprise!) eval in Parrot, Larry and others need gainfultude, and much more... [Jan 19, 2003]
What's new in Perl 5.8.0
It's been nearly six months since the release of Perl 5.8.0 but many people still haven't upgraded to it. Artur Bergman takes a look at some of the new features it provides and describes why you ought to investigate them for yourself. [Jan 16, 2003]
This week on Perl 6, week ending 2003-01-12
More on dead object detection, pipeline syntax, Perl 6 as a macro language, and much more... [Jan 12, 2003]
This week on Perl 6, weeks ending 2003-01-05
Garbage collection, the Ook! language, variables, values and properties, and much more... [Jan 8, 2003]
Improving mod_perl Sites' Performance: Part 6
In this month's episode of Stas Bekman's mod_perl series, how to correctly fork new processes under mod_perl. [Jan 7, 2003]
How Perl Powers Christmas
We've had some fantastic Perl Success Stories in the past, but this one tops them all: How Perl is used in the distribution of millions of Christmas presents every year. [Dec 18, 2002]
Programming with Mason
Dave Rolsky, coauthor of Embedding Perl in HTML with Mason, offers recipes--adapted from those you'll find in his book--for solving typical Web application problems. [Dec 11, 2002]
This week on Perl 6 (12/02-12/08, 2002)
Lots of work on IMCC, string literals, zero-indexed arrays, and much more... [Dec 8, 2002]
Improving mod_perl Sites' Performance: Part 5
Stas Bekman continues his series on optimizing mod_perl by examining more ways of saving on shared memory usage. [Dec 4, 2002]
This week on Perl 6 (11/24-12/01, 2002)
C# and Parrot again, various visions, and, well, not all that much more... [Dec 1, 2002]
Class::DBI
Tony Bowden introduces a brilliantly simple way to interface to a relational database using Perl classes and the Class::DBI module [Nov 27, 2002]
This week on Perl 6 (11/17-11/23, 2002)
C# and Parrot, the status of 0.0.9, invocants and topics, string concatenation and much, much more... [Nov 27, 2002]
This week on Perl 6 (11/10-11/17, 2002)
A quick Perl 6 roadmap, plus some JIT improvements, mysterious coredump fixes, continuations, superpositions, invocants, tests, and programming BASIC and Scheme in Parrot. [Nov 21, 2002]
Managing Bulk DNS Zones with Perl
Chris Josephes describes the challenges to system administrators in maintaining forward and reverse DNS records, and how a clever sysadmin can use Perl to automate this often tedious task. [Nov 20, 2002]
This week on Perl 6 (11/03-11/10, 2002)
Bytecode fingerprinting, on_exit() portability, memory washing, invocant and topic naming syntax, Unicode operators, operators, more operators, the supercomma, perl6-documentation, Schwern throws the Virtual Coffee Mug, and much more... [Nov 15, 2002]
Object Oriented Exception Handling in Perl
Arun Udaya Shankar discusses implementing object-oriented exception handling in Perl, using Error.pm. Also covered are the advantages of using exceptions over traditional error handling mechanisms, basic exception handling with eval {}, and the use of Fatal.pm. [Nov 14, 2002]
This week on Perl 6 (10/28-11/03, 2002)
Bytecode formats, "Simon Cozens versus the world", and more... [Nov 6, 2002]
Writing Perl Modules for CPAN
For many years, the Perl community has extolled the virtues of CPAN and re-usable, modular code. But why hasn't there been anything substantial written on how to achieve it? Sam Tregar redresses the balance, and this month's book review looks at how he got on. [Nov 6, 2002]
This week on Perl 6 (10/20-27, 2002)
C# and Parrot, fun with operators, a license change, and more... [Nov 4, 2002]
On Topic
Allison Randal explains the seemingly strange concept of "topic" in Perl 6 - and finds that it's alive and well in Perl 5 too... [Oct 30, 2002]
The Phrasebook Design Pattern
Have you ever written code that uses two languages in the same program? Whether they be human languages or computer languages, the phrasebook design pattern helps you separate them for more maintainable code. [Oct 22, 2002]
This week on Perl 6 (10/7-14, 2002)
A new pumpking, sprintf, insight from Larry, and more... [Oct 16, 2002]
Radiator
Are you fed up with those who think that commercial applications need to be written in an "enterprise" language like Java or C++? So are we, so we spoke to Mike McCauley at Open System Consultants. [Oct 15, 2002]
A Review of Komodo
Simon Cozens takes a look at ActiveState's latest Komodo release, Komodo 2.0. Will this version of the Perl IDE finally convince the hardened emacs and vi users to switch over? [Oct 9, 2002]
This week on Perl 6 (9/30 - 10/6, 2002)
The getting started guide, interfaces, memory allocation, and more... [Oct 6, 2002]
How Hashes Really Work
We're all used to using hashes, and expect them to just work. But what actually is a hash, when it comes down to it, and how do they work? Abhijit explains! [Oct 1, 2002]
This week on Perl 6 (9/23 - 9/29, 2002)
An IMCC update, the Scheme interpreter, lists and list references again, and a load besides... [Sep 29, 2002]
An AxKit Image Gallery
Continuing our look at AxKit, Barrie demonstrates the use of AxKit on non-XML data: images and operating system calls. [Sep 24, 2002]
This week on Perl 6 (9/16 - 9/22, 2002)
The neverending keys thread, lists versus list references, and a load besides... [Sep 22, 2002]
Embedding Web Servers
Web browsers are ubiquitous these days - it's hard to find a machine without one. To make use of a web browser, you need a web server, and they are simple enough to write that you can stick them almost anywhere. [Sep 18, 2002]
This week on Perl 6 (9/9 - 9/15, 2002)
Goals for the next release, arrays and hashes, hypothetical variables, getting more Parrot hackers, and a load besides... [Sep 15, 2002]
Retire your debugger, log smartly with Log::Log4perl!
Michael Schilli describes a new way of adding logging facilities to your code, with the help of the log4perl module - a port of Java's log4j. [Sep 11, 2002]
Writing CGI Applications with Perl
There are roughly four bazillion books on Perl and CGI available at the moment; one of the most recent is Brent Michalski and Kevin Meltzer's Writing CGI Applications with Perl. Kevin and Brent are long-standing members of the Perl community - can they do justice to this troublesome topic? Find out in this month's book review! [Sep 10, 2002]
This week on Perl 6 (9/1 - 9/8, 2002)
Goals for the next release, arrays and hashes, hypothetical variables, getting more Parrot hackers, and a load besides... [Sep 8, 2002]
Going Up?
Perl 5.8.0 brought stable threading to Perl - but what does it mean and how can we use it? Get a lift with Sam Tregar as he creates a multi-threaded simulation. [Sep 4, 2002]
The Fusion of Perl and Oracle
Andy Duncan, the coauthor of Perl for Oracle DBAs, explains that Perl's symbiosis with the Oracle database helped in constructing the Perl DBA Toolkit. He also ponders what Ayn Rand might have thought of these two strange bedfellows. [Sep 4, 2002]
This week on Perl 6 (8/26 - 9/1, 2002)
More talk of garbage collection, the never-ending keys debate, Parrot 0.0.8, lots and lots about regular expressions, and a good deal more... [Sep 1, 2002]
Mail Filtering
Michael Stevens compares two popular mail filtering tools, both written in Perl: ActiveState's PerlMX, and the open source Mail::Audit. How do they stack up? [Aug 27, 2002]
Exegesis 5
Are Perl 6's regular expressions still messing with your head? Never fear, Damian is here - with another Exegesis explaining what real programs using Perl 6 grammars look like. [Aug 22, 2002]
Web Basics. [Aug 20, 2002]
This week on Perl 6 (week ending 2002-08-18)
Much ado about regexes, a pirate parrot, (it had to happen...) and more... [Aug 18, 2002]
Acme::Comment
One of the most requested features for Perl 6 has been multiline comments; Jos Boumans goes one step further and provides the feature for Perl 5. He describes the current hacks people use to get multiline comments, and explains his Acme::Comment module which now supports 44 different commenting styles. [Aug 13, 2002]
This week on Perl 6 (week ending 2002-08-11)
Arrays, Garbage collection, keys, (again) regular expressions on non-strings, and more... [Aug 11, 2002]
Proxy Objects
How do you manage to have circular references without leaking memory? Matt Sergeant shows how it's done, with the Proxy Object pattern. [Aug 7, 2002]
This week on Perl 6 (week ending 2002-08-04)
More ops, JIT v2, Regex speed and more... [Aug 5, 2002]
Improving mod_perl Sites' Performance: Part 4
Your web server may have plenty of memory, but are you making the best use of it? Stas Bekman explains how to optimize Apache and mod_perl for the most efficient memory use. [Jul 30, 2002]
This week on Perl 6 (week ending 2002-07-21)
0.0.7 is released, looking back to 5.005_03, documentation, MANIFEST and more... [Jul 23, 2002]
Graphics Programming with Perl
Martien Verbruggen has produced a fine book on all elements of handling and creating graphics with Perl - we cast a critical eye over it. [Jul 23, 2002]
Improving mod_perl Sites' Performance: Part 3
This week, Stas Bekman explains how to use the Perl and mod_perl benchmarking and memory measurement tools to perform worthwhile optimizations on mod_perl programs. [Jul 16, 2002]
This Week on Perl 6 (8 - 14 Jul 2002)
Second system effect, IMCC, Perl 6 grammar, and much more... [Jul 14, 2002]
A Test::MockObject Illustrated Example
Test::MockObject gives you a way to create unit tests for object-oriented programs, isolating individual object and method behavior. [Jul 10, 2002]
This week on Perl 6 (24-30 June 2002)
Processes, iterators, fun with the Perl 6 grammar and more... [Jul 2, 2002]
Taglib TMTOWTDI
Continuing our look at AxKit tag libraries, Barrie explains the use of SimpleTaglib and LogicSheets. [Jul 2, 2002]
Synopsis 5
Confused by the last Apocalypse? Allison Randal and Damian Conway explain the changes in a more succinct form. [Jun 26, 2002]
Improving mod_perl Sites' Performance: Part 2
Before making any optimizations to mod_perl applications, it's important to know what you need to be optimizing. Benchmarks are key to this, and Stas Bekman introduces the important tools for mod_perl benchmarking. [Jun 19, 2002]
Where Wizards Fear To Tread
One of the new features coming in Perl 5.8 will be reliable interpreter threading, thanks primarily to the work of Artur Bergman. In this article, he explains what you need to do to make your Perl modules thread-safe. [Jun 11, 2002]
Apocalypse 5
In part 5 of his design for Perl 6, Larry takes a long hard look at regular expressions, and comes up with some interesting ideas... [Jun 4, 2002]
Improving mod_perl Sites' Performance: Part 1
What do we need to think about when optimizing mod_perl applications? Stas Bekman explains how hardware, software and good programming come into play. [May 29, 2002]
Achieving Closure
What's a closure, and why does everyone go on about them? [May 29, 2002]
The Perl You Need To Know - Part 3
Stas Bekman finishes his introduction to the basic Perl skills you need to use mod_perl; this week, globals versus lexicals, modules and packages. [May 15, 2002]
The Perl You Need To Know - Part 2
Stas Bekman continues his mod_perl series by looking at the basic Perl skills you need to use mod_perl; this week, subroutines inside subroutines. [May 7, 2002]
Becoming a CPAN Tester with CPANPLUS
A few weeks ago, Jos Boumans introduced CPANPLUS, his replacement for the CPAN module. In the time since then, development has continued apace, and today's release includes support for automatically testing and reporting bugs in CPAN modules. Autrijus Tang explains how it all works. [Apr 30, 2002]
mod_perl Developer's Cookbook
Geoffrey Young, Paul Lindner and Randy Kobes have produced a new book on mod_perl which claims to teach "tricks, solutions and mod_perl idioms" - how well does it live up to this promise? [Apr 25, 2002]
The Perl You Need To Know
This week, Stas Bekman goes back to basics to explain some Perl topics of interest to his continuing mod_perl series. [Apr 23, 2002]
XSP, Taglibs and Pipelines
In this month's AxKit article, Barrie explains what a "taglib" is, and how to use them to create dynamic pages inside of AxKit. [Apr 16, 2002]
Installing mod_perl without superuser privileges
In his continuing series on mod_perl, Stas Bekman explains how to install a mod_perl-ized Apache on a server even if you don't have root privileges. [Apr 10, 2002]
Exegesis 4
What does the fourth apocalypse really mean to you? A4 explained what control structures would look like in Perl 6; Damian Conway expands on those ideas and presents a complete view of the Perl 6 control flow mechanism. [Apr 2, 2002]
CPANPLUS
For many years the CPAN.pm module has helped people install Perl modules. But it's also been clunky, fragile and amazingly difficult to use programmatically. Jos Boumans introduces CPANPLUS, his project to change all that. [Mar 26, 2002]
mod_perl in 30 minutes
This week, Stas Bekman shows us how to install and configure mod_perl, and how to start accelerating CGI scripts with Apache::Registry. [Mar 22, 2002]
A Perl Hacker's Foray into .NET
We've all heard about Microsoft's .NET project. What is it, and what does it mean for Perl? [Mar 19, 2002]
Introducing AxKit
This is the first in the series of articles by Barrie Slaymaker on setting up and running AxKit. AxKit is a mod_perl application for dynamically transforming XML. In this first article, we focus on getting started with AxKit. [Mar 13, 2002]
This Week on Perl 6 (3 - 9 Mar 2002)
Reworking printf, 0.0.4 imminent, multimethod dispatch, and more... [Mar 12, 2002]
Stopping Spam with SpamAssassin
SpamAssassin and Vipul's Razor are two Perl-based tools that can be used to dramatically reduce the number of junk emails you need to see. [Mar 6, 2002]
These Weeks on Perl 6 (10 Feb - 2 Mar 2002)
Information about the .NET CLR and what it means for Parrot people, how topicalizers work in Perl 6, and a rant about the lack of Parrot Design Documents. [Mar 6, 2002]
Why mod_perl?
In the first of a series of articles from mod_perl guru, Stas Bekman, we begin by taking a look at what mod_perl is and what it can do for us. [Feb 26, 2002]
Preventing Cross-site Scripting Attacks
Paul Lindner, author of the mod_perl Cookbook, explains how to secure our sites against Cross-Site Scripting attacks using mod_perl and Apache::TaintRequest. [Feb 20, 2002]
This Fortnight on Perl 6 (27 Jan - 9 Feb 2002)
The Regexp Engine, Mono, Unicode and more... [Feb 12, 2002]
Visual Perl
Most Perl programmers are die-hard command line freaks, but those coming to Perl on Windows may be used to a more graphical way to edit programs. We asked the lead developer of the new Visual Perl plugin for Microsoft Visual Studio to tell us the advantages of a graphical IDE. [Feb 6, 2002]
Beginning PMCs
Parrot promises to give us support for extensible data types. Parrot Magic Cookie classes are the key to extending Parrot and providing support for other languages, and Jeff Goff shows us how to create them. [Jan 30, 2002]
Finding CGI Scripts
Dave Cross explains what to watch out for when choosing CGI scripts to run on your server, and announces a new best-of-breed project for CGI scripting. [Jan 23, 2002]
This Week on Perl 6 (13 - 19 Jan 2002)
Apocalypse 4, Parrot strings, and more... [Jan 23, 2002]
This Week on Perl 6 (6 - 12 Jan 2002)
Parrot has Regexp support! (And more...) [Jan 17, 2002]
Apocalypse 4
In his latest article explaining the design of Perl 6, Larry Wall tackles the syntax of the language. [Jan 15, 2002]
Creating Custom Widgets
Steve Lidie, coauthor of Mastering Perl/Tk, brings us more wisdom from his Tk experience--this time, explaining how to create your own simple widget classes. [Jan 9, 2002]
This Week on Perl 6 (30 December 2001 - 5 Jan 2002)
Generators, Platform Fixes, Output records, and more [Jan 5, 2002]
Beginning Bioinformatics
James Tisdall's new book is great for biochemists eager to get into the bioinformatics world, but what about us Perl programmers? In this article, we turn the tables, and ask what your average Perl programmer needs to know to get into this exciting new growth area. [Jan 2, 2002]
This Week on Perl 6 (23 - 29 December 2001)
JITs, primitives for output, and the PDGF. [Dec 29, 2001]
This Week on Perl 6 (16 - 22 December 2001)
A JIT Compiler, the PDFG, an I/O layer, and much more [Dec 29, 2001]
Building a Bridge to the Active Directory
Kelvin Param explains how Perl provides the glue between Microsoft's Active Directory and non-Windows clients. [Dec 19, 2001]
This Week on Perl 6 (9 - 15 December 2001)
Slice context, troubles with make, aggregates and more... [Dec 19, 2001]
A Drag-and-Drop Primer for Perl/Tk
This article, by Steve Lidie, coauthor of Mastering Perl/Tk, describes the Perl/Tk drag-and-drop mechanism, often referred to as DND. Steve illustrates DND operations local to a single application, where you can drag items from one Canvas to another. [Dec 11, 2001]
This Week on Perl 6 (2 - 8 December 2001)
Parrot 0.0.3, a FAQ, the execution environment and more... [Dec 10, 2001]
An Introduction to Testing
chromatic explains why writing tests is good for your code, and tells you how to go about it. [Dec 4, 2001]
Request Tracker
Do you ever forget what you're supposed to be doing today? Do you have a million and one projects on the go, but no idea where you're up to with them? I frequently get that, and I don't know how I'd get anything at all done if it wasn't for Request Tracker. Robert Spier explains how to use the open-source Request Tracker application to organise teams working on common projects. [Nov 28, 2001]
Lightweight Languages
Simon Cozens reports from this weekend's Lightweight Languages workshop at the MIT AI labs, where leading language researchers and implementors got together to chat about what they're up to. [Nov 21, 2001]
Parsing Protein Domains with Perl
James Tisdall, author of O'Reilly's Beginning Perl for Bioinformatics, shows biologists how to program in Perl using biological data, with downloadable code examples. [Nov 16, 2001]
Create RSS channels from HTML news sites
Chris Ball shows us how to turn any ordinary news site into a Remote Site Summary web service. [Nov 15, 2001]
Object-Oriented Perl
How do you move from an intermediate Perl programmer to an expert? Understanding object-oriented Perl is one key step along the way. [Nov 7, 2001]
The Lighter Side of CPAN
Alex Gough takes us on a whirlwind tour around the more esoteric and entertaining areas of the Comprehensive Perl Archive Network, and makes some serious points about Perl programming at the same time. [Oct 31, 2001]
Perl 6 : Not Just For Damians
Most of what we've heard about Perl 6 has come from either Larry or Damian. But what do average Perl hackers think about the proposed changes? We asked Piers Cawley for his opinions. [Oct 23, 2001]
This Week on p5p 2001/10/21
What's left before 5.8.0, the POD specification, test-fu and more. [Oct 21, 2001]
Filtering Mail with PerlMx
PerlMx is ActiveState's Perl plug-in for Sendmail; in the first article in a new series, Mike DeGraw-Bertsch shows us how to begin building a mail filter to trap spam. [Oct 10, 2001]
This Week on p5p 2001/10/07
Code cleanups, attributes, tests from chromatic, and more... [Oct 10, 2001]
Exegesis 3
Damian Conway puts Larry's third Apocalypse to work and explains what it means for the budding Perl 6 programmer. [Oct 3, 2001]
Apocalypse 3
Larry Wall brings us the next installment in the unfolding of Perl 6's design. [Oct 2, 2001]
Asymmetric Cryptography in Perl. [Sep 26, 2001]
Parrot : Some Assembly Required. [Sep 18, 2001]
wxPerl: Another GUI for Perl
Jouke Visser brings us a new tutorial on how to use wxPerl to create good-looking GUIs for Perl programs. [Sep 12, 2001]
This Week on Perl 6 (2 - 8 September 2001)
Lexical insertion, lots of new documentation, and more... [Sep 8, 2001]
Changing Hash Behaviour with tie
Hashes are one of the most useful data structures Perl provides, but did you know you can make them even more useful by changing the way they work? Dave Cross shows us how it's done. [Sep 4, 2001]
This Week on p5p 2001/09/03
Michael Schwern, Coderefs in @INC (again), localising things, and more... [Sep 3, 2001]
This Week on Perl 6 (26 August - 1 September 2001)
Parameter passing, the latest on Parrot, finalization and more... [Sep 1, 2001]
Perl Helps The Disabled
How Jon used Perl to help a disabled woman to speak and to better use her computer. Jon's talk received a grand reception, not only for his clever use of Perl, but for a remarkably unselfish application of his skills. [Aug 27, 2001]
This Week on Perl 6 (19 - 25 August 2001)
Closures, more work on the internals, method signatures and more... [Aug 27, 2001]
This Week on p5p 2001/08/27
vstrings, callbacks, coderefs in @INC, and more... [Aug 27, 2001]
Choosing a Templating System
Perrin Harkins takes us on a grand tour of the most popular text and HTML templating systems. [Aug 21, 2001]
This Week on Perl 6 (12 - 18 August 2001)
Modules, work on the internals, language discussion and more... [Aug 21, 2001]
This Week on p5p 2001/08/15
POD specification, Unicode work, threading and more! [Aug 15, 2001]
Yet Another Perl Conference Europe 2001
A review of this year's YAPC::Europe in Amsterdam. [Aug 13, 2001]
This Week in Perl 6 (5 - 11 August 2001)
Damian's slides, more on properties and more [Aug 11, 2001]
This Fortnight In Perl 6 (July 22 - Aug. 4, 2001)
The Perl Conference 5.0 synopsis, ideas from the mailing lists, and more. [Aug 9, 2001]
This Week on p5p 2001/08/07
Subroutine Prototypes, the Great SDK Debate, and much more! [Aug 8, 2001]
People Behind Perl : Artur Bergman
We continue our series on the People Behind Perl with an interview with Artur Bergman, the driving force behind much of the work on Perl's new threading model. While the model was created by Gurusamy Sarathy, Artur's really spent a lot of good time and effort making iThreads usable to the ordinary Perl programmer. Artur tells us about what got him into Perl development, and what he's doing with threads right now. [Aug 1, 2001]
This Week on p5p 2001/07/30
Hash "clamping", a meeting of the perl-5 porters at TPC, and more! [Aug 1, 2001]
Mail Filtering with Mail::Audit
Does your e-mail still get dumped into a single Inbox because you haven't taken the time to figure out the incantations required to make procmail work? Simon Cozens shows how you can easily write mail filters in something you already know: Perl. [Jul 17, 2001]
This Week on p5p 2001/07/16
5.7.2 is out, some threading fixes, and much more. [Jul 16, 2001]
Symmetric Cryptography in Perl
What do you think of when you hear the word "cryptography"? Big expensive computers? Men in black helicopters? PGP or GPG encrypting your mail? Maybe you don't think of Perl. Well, Abhijit Menon-Sen says you should. He's the author of a bunch of the Crypt:: modules, and he explains how to use Perl to keep your secrets... secret. [Jul 10, 2001]
This Week on p5p 2001/07/09
No 5.8.0 yet, numeric hackery, worries about PerlIO and much more. [Jul 9, 2001]
This Fortnight in Perl 6 (17 - 30 June 2001)
A detailed summary of a recent Perl vs. Java battle, a discussion on the internal API for strings, and much more. [Jul 3, 2001]
This Week on p5p 2001/07/02
Module versioning and testing, regex capture-to-variable, and much more. [Jul 2, 2001]
Why Not Translate Perl to C?
Mark-Jason Dominus explains why it might not be any faster to convert your code to a C program rather than let the Perl interpreter execute it. [Jun 27, 2001]
This Week on p5p 2001/06/25
5.7.2 in sight, some threads on regular expressions, and much more. [Jun 25, 2001]
This Week in Perl 6 (10 - 16 June 2001)
Even More on Unicode and Regexes, Multi-Dimensional Arrays and Relational Databases, and much more. [Jun 19, 2001]
This Week on p5p 2001/06/17
Miscellaneous Darwin Updates, hash accessor macros, and much more. [Jun 19, 2001]
The O'Reilly and Perl.com privacy policy [Jun 15, 2001]
Parse::RecDescent Tutorial
Parse::RecDescent is a recursive descent parser generator designed to help Perl programmers who need to deal with any sort of structured data, from configuration files to mail headers to almost anything. It's even been used to parse other programming languages for conversion to Perl. [Jun 13, 2001]
The Beginner's Attitude of Perl: What Attitude?
Robert Kiesling says that the Perl Community's attitude towards new users is common fare for Internet development and compared to other lists Perl is downright civil. [Jun 12, 2001]
This Week on p5p 2001/06/09
Removing dependence on strtol, regex negation, and much more. [Jun 12, 2001]
This Week in Perl 6 (3 - 9 June 2001)
A discussion on the interaction of properties with use strict, continuing arguments surrounding regular expressions, and much more. [Jun 12, 2001]
Having Trouble Printing Code Examples?
Info for those who can't get Perl.com articles to print out correctly [Jun 11, 2001]
About perl.com
[Jun 7, 2001]
This Week in Perl 6 (27 May - 2 June 2001)
Coding Conventions Revisited, Status of the Perl 6 Mailing Lists, and much more. [Jun 4, 2001]
This Week on p5p 2001/06/03
Improving the Perl test suite, Warnings crusade, libnet in the core, and much more. [Jun 4, 2001]
Turning the Tides on Perl's Attitude Toward Beginners
Casey West is taking a stand against elitism in the Perl community and seems to be making progress. He has launched several new services for the Perl beginner that are being enthusiastically received. [May 28, 2001]
This Week on p5p 2001/05/27
Attribute tieing, Test::Harness cleanup, and much more. [May 27, 2001]
Taking Lessons From Traffic Lights
Michael Schwern examines traffic lights and shows what lessons can be applied to the development of Perl 6. [May 22, 2001]
This Month on Perl6 (1 May - 20 May 2001)
Perl 6 Internals, Meta, Language, Docs Released, and much more. [May 21, 2001]
Exegesis 2
Having trouble visualizing how the approved RFCs for Perl 6 will translate into actual Perl code? Damian Conway provides an exegesis to Larry Wall's Apocalypse 2 and reveals what the code will look like. [May 15, 2001]
This Week on p5p 2001/05/13
The to-do list, safe signals, release numbering and much more. [May 13, 2001]
This Week on p5p 2001/05/20
Internationalisation, Legal FAQ, and much more. [May 6, 2001]
This Week on p5p 2001/05/06
iThreads, Relocatable Perl, Module License Registration, and much more. [May 6, 2001]
This Week on p5p 2001/04/29
MacPerl 5.6.1, Licensing Perl modules, and much more. [Apr 29, 2001]
Quick Start Guide with SOAP Part Two
Paul Kulchenko continues his SOAP::Lite guide and shows how to build more complex SOAP servers. [Apr 23, 2001]
This Week on p5p 2001/04/22
Modules in the core, Kwalitee control, and much more. [Apr 22, 2001]
MSXML, It's Not Just for VB Programmers Anymore
Shawn Ribordy puts the tech back into the MSXML parser by using Perl instead of Visual Basic. [Apr 17, 2001]
This Week on p5p 2001/04/15
perlbug Administration, problems with tar, and much more. [Apr 15, 2001]
Designing a Search Engine
Pete Sergeant discusses two elements of designing a search engine: how to store and retrieve data efficiently, and how to parse search terms. [Apr 10, 2001]
This Week on p5p 2001/04/08
Perl 5.6.1 and Perl 5.7.1 Released(!), and much more. [Apr 8, 2001]
Apocalypse 1: The Ugly, the Bad, and the Good
With breathless expectation, the Perl community has been waiting for Larry Wall to reveal how Perl 6 is going to take shape. In the first of a series of "apocalyptic" articles, Larry reveals the ugly, the bad, and the good parts of the Perl 6 design process. [Apr 2, 2001]
This Week on p5p 2001/04/02
Perl and HTML::Parser, Autoloading Errno, Taint testing, and much more. [Apr 2, 2001]
This Week on p5p 2001/04/01
Perl and HTML::Parser, Autoloading Errno, Taint testing, and much more. [Apr 1, 2001]
A Simple Gnome Panel Applet
Build a useful Gnome application in an afternoon! Joe Nasal explains some common techniques, including widget creation, signal handling, timers, and event loops. [Mar 27, 2001]
This Week on p5p 2001/03/26
use Errno is broken, Lexical Warnings, Scalar repeat bug, and much more. [Mar 26, 2001]
This Month on Perl6 (25 Feb--20 Mar 2001)
Internal Data Types, API Conventions, and GC once again. [Mar 21, 2001]
DBI is OK
chromatic makes a case for using DBI and shows how it works well in the same situations as DBIx::Recordset. [Mar 20, 2001]
This Week on p5p 2001/03/19
Robin Houston vs. goto, more POD nits, and much more. [Mar 19, 2001]
Creating Modular Web Pages With EmbPerl
If you have ever wished for an "include" HTML tag to reuse large chunks of HTML, you are in luck. Neil Gunton explains how Embperl solves the problem. [Mar 13, 2001]
This Week on p5p 2001/03/12
Pod questions, patching perly.y, EBCDIC and Unicode, plus more. [Mar 12, 2001]
Writing GUI Applications in Perl/Tk
Nick Temple shows how to program a graphical Point-of-Sale application in Perl, complete with credit card processing. [Mar 6, 2001]
This Week on p5p 2001/03/05
Coderef @INC, More Memory Leak Hunting, and more. [Mar 5, 2001]
This Week on p5p 2001/02/26
Overriding +=, More Big Unicode Wars, and more. [Feb 28, 2001]
DBIx::Recordset VS DBI
Terrance Brannon explains why DBI is the standard database interface for Perl but should not be the interface for most Perl applications requiring database functionality. [Feb 27, 2001]
The e-smith Server and Gateway: a Perl Case Study
Kirrily "Skud" Robert explains the Perl behind the web-based administrator for the e-smith server. [Feb 20, 2001]
This Week on p6p 2001/02/18
A revisit to RFC 88, quality assurance, plus more. [Feb 18, 2001]
Perl 6 Alive and Well! Introducing the perl6-mailing-lists Digest
Perl.com will be supplying you with the P6P digest, covering the latest news on the development of Perl 6. [Feb 14, 2001]
This Week on p5p 2001/02/12
Perl FAQ updates, memory leak plumbing, and more. [Feb 14, 2001]
Pathologically Polluting Perl
Brian Ingerson introduces Inline.pm and CPR; with them you can embed C inside Perl and turn C into a scripting language. [Feb 6, 2001]
This Week on p5p 2001/02/06
Perl 5.6.1 not delayed after all, MacPerl, select() on Win32, and more. [Feb 6, 2001]
This Week on p5p 2001/01/28
5.6.x delayed, the hashing function, PerlIO programming documentation, and more. [Jan 30, 2001]
This Week on p5p 2001/01/21
Safe signals; large file support; pretty-printing and token reporting. [Jan 24, 2001]
Creating Data Output Files Using the Template Toolkit
Dave Cross explains why you should add the Template Toolkit to your installation of Perl and why it is useful for more than just dynamic web pages. [Jan 23, 2001]
A Beginner's Introduction to POE
Interested in event-driven Perl? Dennis Taylor and Jeff Goff show us how to write a simple server daemon using POE, the Perl Object Environment. [Jan 17, 2001]
This Week on p5p 2001/01/14
Unicode is stable! Big performance improvements! Better lvalue subroutine support! [Jan 15, 2001]
Beginners Intro to Perl - Part 6
Doug Sheppard shows us how to activate Perl's built in security features. [Jan 9, 2001]
This Fortnight on p5p 2000/12/31
Unicode miscellany; lvalue functions. [Jan 9, 2001]
This Week on p5p 2000/12/24
5.6.1 trial release; new repository browser; use constant [Dec 27, 2000]
What every Perl programmer needs to know about .NET
A very brief explanation of Microsoft's .NET project and why it's interesting. [Dec 19, 2000]
Beginners Intro to Perl - Part 5
Doug Sheppard discusses object-oriented programming in part five of his series on beginning Perl. [Dec 18, 2000]
This Week on p5p 2000/12/17
More Unicode goodies; better arithmetic; faster object destruction. [Dec 17, 2000]
Why I Hate Advocacy
Are you an effective Perl advocate? Mark Dominus explains why you might be advocating Perl the wrong way. [Dec 12, 2000]
This Week on p5p 2000/12/10
Unicode support almost complete! Long standing destruction order bug fixed! Rejoice! [Dec 11, 2000]
Beginners Intro to Perl - Part 4
Doug Sheppard teaches us CGI programming in part four of his series on beginning Perl. [Dec 6, 2000]
This Week on p5p 2000/12/03
Automatic transliteration of Russian; syntactic oddities; lvalue subs. [Dec 4, 2000]
Programming GNOME Applications with Perl - Part 2
Simon Cozens shows us how to use Perl to develop applications for Gnome, the Unix desktop environment. [Nov 28, 2000]
Red Flags Return
Readers pointed out errors and suggested more improvements to the code in my 'Red Flags' articles. As usual, there's more than one way to do it! [Nov 28, 2000]
This Week on p5p 2000/11/27
Enhancements to for, map, and grep; Unicode on Big Iron; Low-Hanging Fruit. [Nov 27, 2000]
Beginner's Introduction to Perl - Part 3
The third part in a new series that introduces Perl to people who haven't programmed before. This week: Patterns and pattern matching. If you weren't sure how to get started with Perl, here's your chance! [Nov 20, 2000]
This Week on p5p 2000/11/20
Major regex engine enhancements; more about perlio; improved subs.pm. [Nov 20, 2000]
This Week on p5p 2000/11/14
lstat _; more about perlio; integer arithmetic. [Nov 14, 2000]
Program Repair Shop and Red Flags. [Nov 14, 2000]
Beginner's Introduction to Perl - Part 2
The second part in a new series that introduces Perl to people who haven't programmed before. This week: Files and strings. If you weren't sure how to get started with Perl, here's your chance! [Nov 7, 2000]
This Week on p5p 2000/11/07
Errno.pm error numbers; more self-tying; stack exhaustion in the regex engine. [Nov 7, 2000]
Hold the Sackcloth and Ashes
Jarkko Hietaniemi, the Perl release manager, responds to the critique of the Perl 6 RFC process. [Nov 3, 2000]
Critique of the Perl 6 RFC Process
Many of the suggestions put forward during the Perl 6 request-for-comment period revealed a lack of understanding of the internals and limitations of the language. Mark-Jason Dominus offers these criticisms in hopes that future RFCs may avoid the same mistakes -- and the wasted effort. [Oct 31, 2000]
This Week on p5p 2000/10/30
More Unicode; self-tying; Perl's new built-in standard I/O library. [Oct 30, 2000]
Last Chance to Support Damian Conway
As reported earlier, the Yet Another Society (YAS) is putting together a grant to Monash University, Australia. The grant will fund Damian Conway's full-time work on Perl for a year. But the deadline for pledges is the end of the week, and the fund is still short. [Oct 26, 2000]
State of the Onion 2000
Larry Wall's annual report on the state of Perl, from TPC 4.0 (the fourth annual Perl conference) in Monterey in July 2000. In this full length transcript, Larry talks about the need for changes, which has led to the effort to rewrite the language in Perl 6. [Oct 24, 2000]
These Weeks on p5p 2000/10/23
Perl's Unicode model; sfio; regex segfaults; naughty use vars calls.
Beginner's Introduction to Perl
The first part in a new series that introduces Perl to people who haven't programmed before. If you weren't sure how to get started with Perl, here's your chance! [Oct 16, 2000]
Programming GNOME Applications with Perl
Simon Cozens shows us how to use Perl to develop applications for Gnome, the Unix desktop environment. [Oct 16, 2000]
This Week on p5p 2000/10/08
Self-tying is broken; integer and floating-point handling; why unshift is slow. [Oct 8, 2000]
Report from YAPC::Europe
Mark Summerfield tells us what he saw at YAPC::Europe in London last weekend. [Oct 2, 2000]
How Perl Helped Me Win the Office Football Pool
Walt Mankowski shows us how he used Perl to make a few extra bucks at the office. [Oct 2, 2000]
Ilya Regularly Expresses
Ilya Zakharevich, a major contributor to Perl 5, talks about the Perl 6 effort; why he thinks that Perl is not well-suited for text manipulation, and what changes would make it better; whether the Russian education system is effective; and whether Perl 6 is a good idea. [Sep 20, 2000]
Sapphire
Can one person rewrite Perl from scratch? [Sep 19, 2000]
Guide to the Perl 6 Working Groups
Perl 6 discussion and planning are continuing at a furious rate and will probably continue to do so, at least until next month when Larry announces the shape of Perl 6 at the Linux Expo. In the meantime, here's a summary of the main Perl 6 working groups and discussion lists, along with an explanation of what the groups are about. [Sep 5, 2000]
Damian Conway Talks Shop
The author of Object-Oriented Perl talks about the Dark Art of programming, motivations for taking on projects, and the "deification" of technology. [Aug 21, 2000]
Report from the Perl Conference
One conference-goer shares with us his thoughts, experiences and impressions of TPC4. [Aug 21, 2000]
Report on the Perl 6 Announcement
At the Perl conference, Larry announced plans to develop Perl 6, a new implementation of Perl, starting over from scratch. The new Perl will fix many of the social and technical deficiencies of Perl 5. [Jul 25, 2000]
Reports from YAPC 19100
Eleven attendees of Yet Another Perl Conference write in about their experiences in Pittsburgh last month. [Jul 11, 2000]
Choosing a Perl Book
What to look for when choosing from the many Perl books on the market. [Jul 11, 2000]
This Week on p5p 2000/07/09
The Perl bug database; buildtoc; proposed use namespace pragma; a very clever Unicode hack. [Jul 9, 2000]
This Week on p5p 2000/07/02
Lots of Unicode; method lookup optimizations; my __PACKAGE__ $foo. [Jul 2, 2000]
Notes on Doug's Method Lookup Patch
Simon Cozens explains the technical details of a patch that was sent to p5p this month. [Jun 27, 2000]
This Week on p5p 2000/06/25
More method call optimization; tr///CU is dead; Lexical variables and eval; perlhacktut. [Jun 25, 2000]
This Week on p5p 2000/06/18
Method call optimizations; more bytecode; more unicode source files; EPOC port. [Jun 17, 2000]
Return of Program Repair Shop and Red Flags
My other 'red flags' article was so popular that once again I've taken a real program written by a genuine novice and shown how to clean it up and make it better. I show how to recognize some "red flags" that are early warning signs that you might be doing some of the same things wrong in your own programs. [Jun 17, 2000]
Adventures on Perl Whirl 2000
Adam Turoff's report on last week's Perl Whirl cruise to Alaska [Jun 13, 2000]
This Week on p5p 2000/06/11
Unicode byte-order marks in Perl source code; many not-too-difficult bugs for entry-level Perl core hackers. [Jun 13, 2000]
ANSI Standard Perl?
Standardized Perl? Larry Rosler, who put the ANSI in ANSI C, shares his thoughts on how Perl could benefit from standards in this interview with Joe Johnston. [Jun 6, 2000]
This Week on p5p 2000/06/04
Farewell to Ilya Zakharevich; bytecode compilation; optimizations to map. [Jun 4, 2000]
This Week on p5p 2000/05/28
Regex engine alternatives and optimizations; eq and UTF8; Caching of get*by* functions; Array interpolation semantics. [May 28, 2000]
This Week on p5p 2000/05/21
What happened on the perl5-porters mailing list between 15 and 21 May, 2000 [May 21, 2000]
Pod::Parser Notes
Brad Appleton, author of the Pod::Parser module suite, responds to some of the remarks in an earlier perl5-porters mailing list summary. [May 20, 2000]
Perl Meets COBOL
I taught a Perl class to some IBM mainframe programmers whose only experience was in COBOL, and got some surprises. [May 15, 2000]
This Week on p5p 2000/05/14
What happened on the perl5-porters mailing list between 8 and 14 May, 2000 [May 14, 2000]
This Week on p5p 2000/05/07
What happened on the perl5-porters mailing list between 1 and 7 May, 2000 [May 7, 2000]
Program Repair Shop and Red Flags. [May 2, 2000]
This Week on p5p 2000/04/30
What happened on the perl5-porters mailing list between 24 and 30 April, 2000 [Apr 30, 2000]
This Week on p5p 2000/04/23
What happened on the perl5-porters mailing list between 17 and 23 April, 2000 [Apr 23, 2000]
What's New in 5.6.0.
After two years in the making, we look at new features of Perl, including support for UTF-8 Unicode and Internet address constants. [Apr 18, 2000]
POD is not Literate Programming
[Mar 20, 2000]
My Life With Spam: Part 3
In the third part of a tutorial on how to filter spam, Mark-Jason Dominus reveals how he relegates mail to his "losers list, blacklist and whitelist." [Mar 15, 2000]
This Week on p5p 2000/03/05
What happened on the perl5-porters mailing list between 28 February and 5 March, 2000 [Mar 5, 2000]
Ten Perl Myths
Ten things that people like to say about Perl that aren't true. [Feb 23, 2000]
My Life With Spam
In the second part of a tutorial on how to filter spam, Mark-Jason Dominus shows what to do with spam once you've caught it. [Feb 9, 2000]
RSS and You
RSS is an XML application that describes web sites as channels, which can act as feeds to a user's site. Chris Nandor explains how to use RSS in Perl and how he uses it to build portals. [Jan 25, 2000]
In Defense of Coding Standards
Perl programmers may bristle at the idea of coding standards. Fear not: a few simple standards can improve teamwork without crushing creativity. [Jan 12, 2000]
This Week on p5p 1999/12/26
What happened on the perl5-porters mailing list between 20 and 26 December, 1999 [Dec 26, 1999]
Virtual Presentations with Perl
This year, the Philadelphia Perl Mongers had joint remote meetings with Boston.pm and St. Louis.pm using teleconferencing equipment to bring a guest speaker to many places at once. Adam Turoff describes what worked and what didn't, and how you can use this in your own PM groups. [Dec 20, 1999]
This Week on p5p 1999/12/19
What happened on the perl5-porters mailing list between 13 and 19 December, 1999 [Dec 19, 1999]
Happy Birthday Perl!
According to the perlhist man page, Perl was first released twelve years ago, on December 18, 1987. Congratulations to Larry Wall on the occasion of Perl's twelfth birthday! [Dec 18, 1999]
This Week on p5p 1999/12/12
What happened on the perl5-porters mailing list between 6 and 12 December, 1999 [Dec 12, 1999]
This Week on p5p 1999/12/05
What happened on the perl5-porters mailing list between 29 November and 5 December, 1999 [Dec 5, 1999]
Sins of Perl Revisited
Tom Christiansen published the original seven sins in 1996. Where are we now? [Nov 30, 1999]
This Week on p5p 1999/11/28
What happened on the perl5-porters mailing list between 22 and 28 November, 1999 [Nov 28, 1999]
This Week on p5p 1999/11/21
What happened on the perl5-porters mailing list between 15 and 21 November, 1999 [Nov 21, 1999]
Perl as a First Language
Simon Cozens, author of the upcoming Beginning Perl talks about Perl as a language for beginning programmers. [Nov 16, 1999]
This Week on p5p 1999/11/14
What happened on the perl5-porters mailing list between 8 and 14 November, 1999 [Nov 14, 1999]
This Week on p5p 1999/11/07
What happened on the perl5-porters mailing list between 1 and 7 November, 1999 [Nov 7, 1999]
This Week on p5p 1999/10/31
What happened on the perl5-porters mailing list between 25 and 31 October, 1999 [Nov 3, 1999]
A Short Guide to DBI
Here's how to get started using SQL and SQL-driven databases from Perl. [Oct 22, 1999]
Happy Birthday Perl 5!
[Oct 18, 1999]
This Week on p5p 1999/10/17
What happened on the perl5-porters mailing list between 11 and 17 October, 1999 [Oct 17, 1999]
This Week on p5p 1999/10/24
What happened on the perl5-porters mailing list between 18 and 24 October, 1999 [Oct 17, 1999]
Perl/Tk Tutorial
On Perl.com, we are presenting this as part of what we hope will be an ongoing series of articles, titled "Source Illustrated." The presentation by Lee and Brett is a wonderfully concise example of showing annotated code and its result. [Oct 15, 1999]
Topaz: Perl for the 22nd Century
Chip Salzenberg, one of the core developers of Perl, talks about Topaz, a new effort to completely rewrite the internals of Perl in C++. The complete version of his talk (given at the 1999 O'Reilly Open Source Conference) is also available in Real Audio. [Sep 28, 1999]
Open Source Highlights
An open source champion inside Chevron writes about his visit to the Open Source Conference. [Sep 28, 1999]
Bless My Referents
Damian Conway explains how references and referents relate to Perl objects, along with examples of how to use them when building objects. [Sep 16, 1999]
White Camel Awards
An interview with White Camel Award winners Kevin Lenzo and Adam Turoff. [Sep 16, 1999]
3rd State of the Onion
Larry explains the "good chemistry" of the Perl community in his third State of the Onion speech. [Aug 30, 1999]
Perl Recipe of the Day
Each day, we present a new recipe from The Perl Cookbook, the best-selling book by Tom Christiansen and Nathan Torkington. [Aug 26, 1999]
Common Questions About CPAN
Answers to the most common questions asked from cpan@perl.org [Jul 29, 1999]
A New Edition of Perl.com
Welcome to the new edition of Perl.com! We've redesigned the site to make it easier for you to find the information you're looking for. [Jul 15, 1999]
Dispatch from YAPC
Brent was at YAPC -- were you? He reports from this "alternative" Perl conference. [Jun 30, 1999]
White Camel Awards to be Presented at O'Reilly's Perl Conference 3.0
The White Camel awards will be presented to individuals who have made outstanding contributions to Perl Advocacy, Perl User Groups, and the Perl Community at O'Reilly's Perl Conference 3.0 on August 24, 1999. [Jun 28, 1999]
Microsoft to Fund Perl Development
ActiveState Tool Corp. has a new three-year agreement with Microsoft that funds new development of Perl for the Windows platform. [Jun 9, 1999]
Perl and CGI FAQ
This FAQ answers questions for Web developers. [Apr 14, 1999]
Perl, the first postmodern computer language
Larry Wall's talk at Linux World justifies Perl's place in postmodern culture. He says that he included in Perl all the features that were cool and left out all those that sucked. [Mar 9, 1999]
Success in Migrating from C to Perl
How one company migrated from using C to Perl -- and in doing so was able to improve their approach to code design, maintenance and documentation. [Jan 19, 1999]
What the Heck is a Perl Monger?!
Want to start or find a Perl user group in your area? Brent interviews brian d foy, the creator of Perl Mongers to find out just what the Mongers are all about. [Jan 13, 1999]
Y2K Compliance
Is someone asking you to ensure that your Perl code is Y2k compliant? Tom Christiansen gives you some answers, which may not completely please the bureaucrats. [Jan 3, 1999]
XML::Parser Module Enables XML Development in Perl
The new Perl module known as XML::Parser allows Perl programmers building applications to use XML, and provides an efficient, easy way to parse XML documents. [Nov 25, 1998]
A Zero Cost Solution
Creating a task tracking system for $0 in licensing fees, hardware, and software costs. [Nov 17, 1998]
Perl Rescues a Major Corporation
How the author used Perl to create a document management system for a major aerospace corporation and saved the day. [Oct 21, 1998]
A Photographic Journal
Photographs taken at the Perl Conference by Joseph F. Ryan: Perl Programmer at the National Human Genome Research Institute. [Aug 27, 1998]
Perl's Prospects Are Brighter Than Ever
Jon Udell's summary of the Perl Conference. [Aug 26, 1998]
2nd State of the Onion
Larry Wall's keynote address from 1998 Perl Conference. There is also a RealAudio version. [Aug 25, 1998]
The Final Day at The Perl Conference
Brent Michalski winds down his coverage of the Perl Conference. Highlights include: Tom Paquin's Keynote: "Free Software Goes Mainstream" and Tom Christiansen's "Perl Style". [Aug 21, 1998]
Day 3: Core Perl Developers at Work and Play
Recapping Larry Wall's Keynote, The OO Debate, The Internet Quiz Show, The Perl Institue and The Perl Night Life. [Aug 20, 1998]
Day 2: Perl Mongers at The Conference
Another exciting day! Brent talks about the Advanced Perl Fundamentals tutorial, the concept behind the Perl Mongers and the Fireside Chat with Jon Orwant. [Aug 19, 1998]
Day 1 Highlights: Lincoln's Cool Tricks and Regexp
Brent Michalski reports on the highlights of the first day of The Perl Conference. [Aug 18, 1998]
Perl Conference 3.0 -- The Call for Participation
This is a call for papers that demonstrate the incredible diversity of Perl. Selected papers will be presented at the third annual O'Reilly Perl Conference on August 21-24, 1999 at the Doubletree Hotel and Monterey Conference Center in Monterey, California. [Aug 17, 1998]
How Perl Creates Orders For The Air Force
Brent Michalski, while in the Air Force, created a Web-based system written in Perl to simplify the process of ordering new hardware and software. [Jul 22, 1998]
Perl Builder IDE Debuts
A review of Perl Builder, the first integrated development environment (IDE) for Perl. [Jul 22, 1998]
MacPerl Gains Ground
MacPerl gains a foothold on a machine without a command-line interface. Rich Morin of Prime Time Freeware and Matthias Neeracher, the programmer who ported Perl to the Macintosh, talk about what makes MacPerl different. [Jun 3, 1998]
Perl Support for XML Developing
O'Reilly & Associates hosted a recent Perl/XML summit to discuss ways that Perl can support the Extensible Markup Language (XML), a new language for defining document markup and data formats. [Mar 10, 1998]
The Culture of Perl
In this keynote address for the first Perl Conference, Larry Wall talks about the key ideas that influence him and by extension the Perl culture. [Aug 20, 1997]
The Artistic License
This document states the conditions under which a Package may be copied, such that the Copyright Holder maintains some semblance of artistic control over the development of the package [Aug 15, 1997]
- Table of Contents
- BackCover
- Microsoft Exchange Server 2003
- Foreword
- Preface
- Product names
- Omissions
- URLs
- Acknowledgments
- Chapter 1: A Brief History of Exchange
- 1.1 Exchange first generation
- 1.2 Exchange second generation
- 1.3 Exchange third generation
- 1.4 Deploying Exchange 2003
- 1.5 Some things that Microsoft still has to do
- 1.6 Moving on
- Chapter 2: Exchange and the Active Directory
- 2.1 The Active Directory
- 2.2 Preparing the Active Directory for Exchange
- 2.3 Active Directory replication
- 2.4 The Active Directory Connector
- 2.5 The LegacyExchangeDN attribute
- 2.6 DSAccess-Exchange s directory access component
- 2.7 Interaction between Global Catalogs and clients
- 2.8 Exchange and the Active Directory schema
- 2.9 Running Exchange in multiple forests
- 2.10 Active Directory tools
- Chapter 3: Exchange Basics
- 3.2 Access control
- 3.3 Administrative and routing groups
- 3.4 Mailboxes and user accounts
- 3.5 Distribution groups
- 3.6 Query-based distribution groups
- 3.7 Summarizing Exchange basics
- Chapter 4: Outlook-The Client
- 4.1 MAPI-Messaging Application Protocol
- 4.2 Making Outlook a better network client for Exchange
- 4.3 How many clients can I support at the end of a pipe?
- 4.4 Blocking client access
- 4.5 Junk mail processing
- 4.6 The Offline Address Book (OAB)
- 4.7 Freebusy information
- 4.8 Personal folders and offline folder files
- 4.9 Offline folder files
- 4.10 SCANPST-first aid for PSTs and OSTs
- 4.11 Working offline or online
- 4.12 Outlook command-line switches
- Chapter 5: Outlook Web Access
- 5.1 Second-generation OWA
- 5.2 The OWA architecture
- 5.3 Functionality: rich versus reach or premium and basic
- 5.4 Suppressing Web beacons and attachment handling
- 5.5 OWA administration
- 5.6 Exchange s URL namespace
- 5.7 Customizing OWA
- 5.8 OWA firewall access
- 5.9 OWA for all
- Chapter 6: Internet and Other Clients
- 6.1 IMAP4 clients
- 6.2 POP3 clients
- 6.3 LDAP directory access for IMAP4 and POP3 clients
- 6.4 Supporting Apple Macintosh
- 6.5 Supporting UNIX and Linux clients
- 6.6 Exchange Mobile Services
- 6.7 Pocket PC clients
- 6.8 Palm Pilots
- 6.9 Mobile BlackBerries
- 6.10 Sending messages without clients
- 6.11 Client licenses
- Chapter 7: The Store
- 7.1 Structure of the Store
- 7.2 Exchange ACID
- 7.3 EDB database structure
- 7.4 The streaming file
- 7.5 Transaction logs
- 7.6 Store partitioning
- 7.7 Managing storage groups
- 7.8 ESE database errors
- 7.9 Database utilities | http://flylib.com/books/en/4.389.1.1/1/ | CC-MAIN-2017-09 | en | refinedweb |
In .NET 4.0, we have a set of new API's to simplify the process of adding parallelism and concurrency to applications. This set of API's is called the "Task Parallel Library (TPL)" and is located in the System.Threading and System.Threading.Tasks namespaces. In the Table shown below, I have mentioned some of the classes used for Parallel programming in .NET 4.0.
If you are new to Parallel Tasks in .NET, check this article Introduction to .NET Parallel Task
In this article, I have described a scenario where a WPF application is trying to retrieve data using a WCF service and also trying to access data from the local database server. The application dedicates a long running WCF service call to the Task class, so that the call to the service can be made asynchronously. The below diagram explains the scenario:
Step 1: Open VS2010 and create a blank solution, name it as ‘CS_Task_Demo’. In this project, add a new WCF Service application project and name it as ‘WCF40_DataService’. Rename IService1.cs to IService.cs and Service1.svc to Service.svc.
Step 2: Write the following code in IService.cs with ServiceContract, OperationContract and DataContract. The OperationContract method also applies the WebGet attribute so that the WCF service is published as WCF REST service.
Step 3 : Now write the following code in the Service class. This code makes a call to Database and retrieve the Employee records.
Note: The above code uses the Thread.Sleep(10000) which waits for 10 seconds to get the data from the Database. I did this to emulate the effect of the Task class.
Step 4: Change the code of Service.svc as shown below:
Step 5: Make the following changes in Web.Config file which adds the WebHttpBinding for the WCF REST Services.
<protocolMapping>
<add binding="webHttpBinding" scheme="http"/>
</protocolMapping>
Step 6: Publish the WCF service on IIS.
Step 7: In the same solution, add a new WPF project and name it as ‘WPF40_TasksClass’,make sure that the framework for the project as .NET 4.0. Add the following XAML:
Step 8: Open MainPage.xaml.cs and add the below classes:
Step 9: Add the following code in the GetData click event. This code defines a Task object which initiates an Asynchronous operation to the WCF REST Service. This downloads the XML data by making call to WCF service. During this time of the service call, a call to the local database is made and completes the call. Once the service call is over, the data is processed. The code is as below:
The above code creates an instance of the Task class using its Factory property to retrieve a TaskFactory instance that can be used to create task.
Note: Please read comments in the code carefully to understand it.
Step 10: Run the application and click on the ‘Get Data’ button. You will get the data immediately in the DataGrid. This data is fetched from the local sql server database as below:
Click on the ‘OK’ for the message box and wait for some time, you will get data in the DataGrid on the Left hand side which gets the data from the WCF REST service as shown below:
Now if you see the above output, the time for the WCF service call is more than 10 Seconds and the local database call takes 0.2 Seconds.
The entire source code of this article can be downloaded over here | http://www.dotnetcurry.com/wpf/754/using-task-parallel-library-load-data | CC-MAIN-2017-09 | en | refinedweb |
Is there a
find()
You use
std::find from
<algorithm>, which works equally well for
std::list and
std::vector.
std::vector does not have its own search/find function.
#include <list> #include <algorithm> int main() { std::list<int> ilist; ilist.push_back(1); ilist.push_back(2); ilist.push_back(3); std::list<int>::iterator findIter = std::find(ilist.begin(), ilist.end(), 1); }
Note that this works for built-in types like
int as well as standard library types like
std::string by default because they have
operator== provided for them. If you are using using
std::find on a container of a user-defined type, you should overload
operator== to allow
std::find to work properly:
EqualityComparable concept | https://codedump.io/share/IsICPQ0Ak4cp/1/how-to-search-for-an-element-in-an-stl-list | CC-MAIN-2017-09 | en | refinedweb |
Sven K=C3=B6hler <skoehler@...> writes:
> > The 2.6.0 UML patch is available at
> >
>=20
>
Disable CONFIG_HIGHMEM, then it does build.
I have a few more questions:
* what is the status of kernel modules in 2.6 kernels? It doesn't
build without patching the kernel: One issue is some elf relocation
defines being missing (R_386_32, R_386_PC32), making
apply_relocate() fail to build. The other issue is
apply_alternatives() not being defined, making the final link fail.
Trying to fix that, booting and trying to load modules leads to
kernel panic.
* Any known issues with 2.6.1 kernels? Tried to apply the 2.6.0
patch to 2.6.1 and fixup the one reject in arch/um/kernel/irq.c
manually and the fs/proc/task_mmu.c build failure with a patch
fished from this list. But the resulting kernel fails to boot up
the system. I see the init start banner, then it stops. Looking
whats up with gdb shows that the fork syscall seems to hang in a
endless loop, pagefaulting at the same address over and over ...
Gerd
--=20
You have a new virus in /var/mail/kraxel?
Certainly (this is from 2.4.23-1um). Seems like something goes weird with
procfs, but I've no idea why this only happens with slirp (and newer gcc for
that matter).
(gdb) bt
#0 panic (fmt=0x0) at panic.c:58
#1 0xa00d769b in segv (address=8, ip=2685922193, is_write=0, is_user=0,
sc=0x0) at trap_kern.c:144
#2 0xa00d7af5 in segv_handler (sig=11, regs=0xa0350274) at trap_user.c:67
#3 0xa00df411 in sig_handler_common_skas (sig=11, sc_ptr=0x58)
at trap_user.c:33
#4 0xa00d7c05 in sig_handler (sig=0, sc=
{gs = 0, __gsh = 0, fs = 0, __fsh = 0, es = 43, __esh = 0, ds = 43, __dsh = 0, edi = 2687843188, esi = 2688540673, ebp = 2687843100, esp = 2687843028, ebx = 2687843188, edx = 2687843188, ecx = 2687827968, eax = 0, trapno = 14, err = 4, eip = 2684641864, cs = 35, __csh = 0, eflags = 2163202, esp_at_signal = 2687843028, ss = 43, __ssh = 0, fpstate = 0x0, oldmask = 0, cr2 = 8})
at trap_user.c:103
#5 <signal handler called>
#6 0xa0046248 in link_path_walk (name=0xa03fe001 "dev", nd=0xa0353b74)
at namei.c:462
#7 0xa004674e in path_walk (name=0x0, nd=0xa0353b74) at namei.c:659
#8 0xa0046919 in path_lookup (path=0xa03fe000 "/dev", flags=2687843188,
nd=0xa0353b74) at namei.c:748
#9 0xa0047754 in sys_mkdir (pathname=0x0, mode=448) at namei.c:1345
#10 0xa000eafa in prepare_namespace () at init/do_mounts.c:917
#11 0xa000e613 in init (unused=0x0) at init/main.c:580
#12 0xa00d22f9 in run_kernel_thread (fn=0xa000e600 <init>, arg=0x0,
#13 0xa00de930 in new_thread_handler (sig=10) at process_kern.c:70
#14 <signal handler called>
(gdb) i sym 2685922193
kill + 17 in section .text
(gdb) i line *2685922193
Line 155 of "proc_fs.h" starts at address 0xa0178e94 <svc_proc_register+68>
and ends at 0xa01c3913.
(gdb) up 6
#6 0xa0046248 in link_path_walk (name=0xa03fe001 "dev", nd=0xa0353b74)
at namei.c:462
462 inode = nd->dentry->d_inode;
(gdb) print *nd
$1 = {dentry = 0x0, mnt = 0x0, last = {
name = 0x8124 <Address 0x8124 out of bounds>, len = 1, hash = 2686526256},
flags = 16, last_type = 1}
--
- mdz
Alle 00:04, luned=EC 12 gennaio 2004, Jeff Chua ha scritto:
> On Sun, 11 Jan 2004, BlaisorBlade wrote:
> > Also, Jeff, when you compiled LVM, did you compile it as module(as you
> > speak about initrd) or statically?
>
> LVM dm-mod (device mapper) is module.
>
> If I don't load this and not start vgscan, mount lvm devices, I don't see
> klog segmentation fault.
Probably building into the kernel that module would work-around your proble=
m.=20
LVM is loaded at startup before klog, I guess, right?
In fact the bug is likely to be with handling of modules, as they go in a=20
different kernel memory area; *maybe* they are in vmalloc'ed memory.
Bye
=2D-=20
cat <<EOSIGN
Paolo Giarrusso, aka Blaisorblade
Linux Kernel 2.4.23/2.6.0 on an i686; Linux registered user n. 292729
EOSIGN
On 2004-01-13 22:40+1100, Tim Barbour wrote:
> ?
>
I have used an 32bit UML in my AMD64 (pure 64bit system). So it is
doable. I can't remember any hiccups and installation was really
straightforward.
BR, Jani
--
Jani Averbach?
Here what I get:
NET4: Unix domain sockets 1.0/SMP for Linux NET4.0.
Program received signal SIGSEGV, Segmentation fault.
walk_init_root (name=0xf4a33ee8 <Address 0xf4a33ee8 out of bounds>,
nd=0xa08f7b74) at atomic.h:107
107 __asm__ __volatile__(
(gdb) bt
#0 walk_init_root (name=0xf4a33ee8 <Address 0xf4a33ee8 out of bounds>,
nd=0xa08f7b74) at atomic.h:107
(gdb) c
Continuing.
Breakpoint 1, segv (address=4104339216, ip=2685837537, is_write=2, is_user=0,
sc=0xf4a33f10) at trap_kern.c:124
124 if(!is_user && (address >= start_vm) && (address < end_vm)){
(gdb) bt
#0 segv (address=4104339216, ip=2685837537, is_write=2, is_user=0,
sc=0xf4a33f10) at trap_kern.c:124
(gdb) bt
#0 segv (address=4104339216, ip=2685837537, is_write=2, is_user=0,
sc=0xf4a33f10) at trap_kern.c:124
(gdb) c
Continuing.
Breakpoint 2, panic (fmt=0xa08f4000 "") at panic.c:58
58 machine_paniced = 1;
(gdb) bt
#0 panic (fmt=0xa08f4000 "") at panic.c:58
(gdb) c
Continuing.
Kernel panic: Segfault with no mm
Program exited normally.
Cheers,
--
Bill. <ballombe@...>
Imagine a large red swirl here.
Moin Jeff Dike,
> > Is there any possibility of running a 64-bit UML virtual machine on an
> > AMD-64 ?
> It is, and I'm working on it.
i'm thinking about upgrading my oldest UML system to either a
BiAthlon, which is known to work with SKAS, or to wait a few
month more a dual Opteron system to become cheap. I remember
the TT problems of UML running on duals, leading me to install
SKAS. Are there any experiences about running UML on an 64bit
NUMA system ?
Bye Michael
--
mailto:kraehe@... UNA:+.? 'CED+2+:::Linux:2.4.22'UNZ+1' CETERUM CENSEO WINDOWS ESSE DELENDAM
allomber@... said:
> Kernel panic: Segfault with no mm
> Without the eth0=slirp parameter, there are no kernel panic.
Can someone get a stack trace from this?
Jeff
On Mon, Jan 12, 2004 at 10:36:23AM -0800, Matt Zimmerman wrote:
> By the way, the original reason why I started building UML with gcc-2.95 was
> because building with 3.x broke the slirp transport like so:
>
> Kernel panic: read of switch_pipe failed, errno = 9
>
> errno 9 is EBADF. I never did find the real cause of that bug, but it has
> resurfaced now that I am building with gcc 3.3 again to fix the other, worse
> bug. I would be interested to know if anyone else has run into it. More
> information is here:
For what it worth with the current Debian UML package I get
$ linux ubd0=uml root=/dev/ubd0 eth0=slirp|& less
...
Netdevice 0 : SLIRP backend - command line: 'slirp'
mconsole (version 2) initialized on /home/bill/.uml/wksNEk/mconsole
Partition check:
ubda: unknown partition table
Initializing stdio console driver
NET4: Linux TCP/IP 1.0 for NET4.0
IP: routing cache hash table of 512 buckets, 4Kbytes
TCP: Hash tables configured (established 2048 bind 4096)
Linux IP multicast router 0.06 plus PIM-SM
NET4: Unix domain sockets 1.0/SMP for Linux NET4.0.
Kernel panic: Segfault with no mm
Without the eth0=slirp parameter, there are no kernel panic.
So at least the error message has changed.
The host kernel has the skas patch from the Debian package applied.
Cheers,
--
Bill. <ballombe@...>
Imagine a large red swirl here.
trb@... said:
> Is there any possibility of running a 64-bit UML virtual machine on an
> AMD-64 ?
It is, and I'm working on it.
Jeff ?
The UML website has no mention of an AMD-64 port of UML, but I have heard a
claim that it is possible to host a 64-bit UML on an AMD-64.
If anyone can shed light on this I would appreciate it.
Tim
> The 2.6.0 UML patch is available at
>
i get this error:
i applied this patch to clean 2.6.0 sources from kernel.org.
if you need more information just ask. i'm running gentoo 1.4 with a
2.6.1 host kernel. linux 2.4.19 headers are installed in /usr/include,
just in case it matters.
> This patch updates UML to 2.6.0 and pulls in all the changes that have
> accumulated in my 2.4 tree.
I forgot to mention that newer libcs won't boot on 2.6 UMLs until I get
[gs]et_thread_area implemented. Older libcs are fine, as are new libcs on
2.4 UMLs.
Jeff | https://sourceforge.net/p/user-mode-linux/mailman/user-mode-linux-devel/?viewmonth=200401&viewday=13 | CC-MAIN-2017-09 | en | refinedweb |
Next Chapter: Exception Handling
Generators
Introduction
Generators are a simple and powerful possibility to create or to generate iterators. On the surface
they look like functions, but there is both a syntactical and a semantic difference.
Instead of return statements you will find inside of the body of a generator only yield statements,
i.e. one or more yield statements.
Another important feature of generators is that the local variables and the execution start is automatically saved between calls. This is necessary, because unlike an ordinary function successive calls to a generator function don't start execution at the beginning of the function. Instead, the new call to a generator function will resume execution right after the yield statement in the code, where the last call exited. In other words: When the Python interpreter finds a yield statement inside of an iterator generated by a generator, it records the position of this statement and the local variables, and returns from the iterator. The next time this iterator is called, it will resume execution at the line following the previous yield statement. There may be more than one yield statement in the code of a generator or the yield statement might be inside the body of a loop. If there is a return statement in the code of a generator, the execution will stop with a StopIteration exception error if this code is executed by the Python interpreter.
Everything what can be done with a generator can be implemented with a class based iterator as well. But the crucial advantage of generators consists in automatically creating the methods __iter__() and next().
Generators provide a very neat way of producing data which is huge or infinite.
The following is a simple example of a generator, which is capable of producing four city names:
def city_generator(): yield("Konstanz") yield("Zurich") yield("Schaffhausen") yield("Stuttgart")It's possible to create an iterator with this generator, which generates one after the other the four cities Konstanz, Zurich, Schaffhausen and Stuttgart.
>>> from city_generator import city_generator >>> x = city_generator() >>> print x.next() Konstanz >>> print x.next() Zurich >>> print x.next() Schaffhausen >>> print x.next() Stuttgart >>> print x.next() Traceback (most recent call last): File "<stdin>", line 1, in <module> StopIteration >>>As we can see, we have generated an iterator x in the interactive shell. Every call of the method next() returns another city. After the last city, i.e. Stuttgart, has been created, another call of x.next() raises an error, saying that the iteration has stopped, i.e. "StopIteration".
Can we send a reset to an iterator is a frequently asked question, so that it can start the iteration all over again. There is no reset, but it's possible to create another generator. This can be done e.g. by having the statement "x = city_generator()" again.
Thought at first sight the yield statement looks like the return statement of a function, we can see in this example that there is a big difference. If we had a return statement instead of a yield in the previous example, it would be a function. But this function would always return "Konstanz" and never any of the other cities, i.e. Zurich, Schaffhausen or Stuttgart.
Method of Operation
As we have elaborated in the introduction of this chapter, the generators offer a comfortable
method to generate iterators, and that's why they are called generators.
Method of working:
- A generator is called like a function. It's return value is an iterator object. The code of the generator will not be executed in this stage.
- The iterator can be used by calling the next method. The first time the execution starts like a function, i.e. the first line of code within the body of the iterator. The code is executed until a yield statement is reached.
- yield returns the value of the expression, which is following the keyword yield. This is like a function, but Python keeps track of the position of this yield and the state of the local variables is stored for the next call. At the next call, the execution continues with the statement following the yield statement and the variables have the same values as they had in the previous call.
- The iterator is finished, if the generator body is completely worked through or if the program flow encounters a return statement without a value.
The Fibonacci sequence is named after Leonardo of Pisa, who was known as Fibonacci (a contraction of filius Bonacci, "son of Bonaccio"). In his textbook Liber Abaci, which appeared in the year 1202) he had an exercise about the rabbits and their breeding: It starts with a newly-born pair of rabbits, i.e. a male and a female animal. It takes one month until they can mate. At the end of the second month the female gives birth to a new pair of rabbits. Now let's suppose that every female rabbit will bring forth another pair of rabbits every month after the end of the first month. We have to mention that Fibonacci's rabbits never die. They question is how large the population will be after a certain period of time.
This produces a sequence of numbers: 0,1,1,2,3,5,8,13
This sequence can be defined in mathematical terms like this:
Fn = Fn - 1 + Fn - 2
with the seed values:
F0 = 0 and F1 = 1
def fibonacci(n): """Fibonacci numbers generator, first n""" a, b, counter = 0, 1, 0 while True: if (counter > n): return yield a a, b = b, a + b counter += 1 f = fibonacci(5) for x in f: print x, printThe generator above can be used to create the first n Fibonacci numbers, or better (n+1) numbers because the 0th number is also included.
In the next example we show you a version which is capable of returning an endless iterator. We have to take care when we use this iterator that a termination criterium is used:
def fibonacci(): """Fibonacci numbers generator""" a, b = 0, 1 while True: yield a a, b = b, a + b f = fibonacci() counter = 0 for x in f: print x, counter += 1 if (counter > 10): break print
Recursive Generators
Like functions generators can be recursively programmed.
The following example is a generator to create all the permutations of a given list
of items.
For those who don't know what permutations are, we have a short introduction:
Formal Definition:
A permutation is a rearrangement of the elements of an ordered list. In other words: Every arrangement of n elements is called a permutation.
In the following lines we show you all the permutations of the letter a, b and c:
a b c
a c b
b a c
b c a
c a b
c b a
The number of permutations on a set of n elements is given by n!
n! = n*(n-1)*(n-2) ... 2 * 1 The permutation generator can be called with an arbitrary list of objects. The iterator returned by this generator generates all the possible permutations:
def permutations(items): n = len(items) if n==0: yield [] else: for i in range(len(items)): for cc in permutations(items[:i]+items[i+1:]): yield [items[i]]+cc for p in permutations(['r','e','d']): print ''.join(p) for p in permutations(list("game")): print ''.join(p)
A Generator of Generators
The second generator of our Fibonacci sequence example generates an iterator, which can theoretically produce all the Fibonacci numbers, i.e. an infinite number. But you shouldn't try to produce all these numbers, as we would do in the following example:
list(fibonacci())This will show you very fast the limits of your computer.
In most practical applications, we only need the first n elements of an "endless" iterator. We can use another generator, in our example firstn, to create the first n elements of a generator g:
def firstn(g, n): for i in range(n): yield g.next()The following script returns the first 10 elements of the Fibonacci sequence:
#!/usr/bin/env python def fibonacci(): """Ein Fibonacci-Zahlen-Generator""" a, b = 0, 1 while True: yield a a, b = b, a + b def firstn(g, n): for i in range(n): yield g.next() print list(firstn(fibonacci(), 10))
Next Chapter: Exception Handling | http://python-course.eu/generators.php | CC-MAIN-2017-09 | en | refinedweb |
Next Chapter: Global vs. Local Variables
Namespaces and Scopes
Namespaces
Generally speaking, a namespace (sometimes also called a context) is a naming system for making names unique to avoid ambiguity. Everybody knows a namespacing system from daily life, i.e. the naming of people in firstname and familiy name (surname). Another example is
a network: each network device
(workstation, server, printer, ...) needs a unique name and address. Yet another example is the directory
structure of file systems. The same file name can be used in different directories, the files can be
uniquely accessed via the pathnames.
Many programming languages use namespaces or contexts for identifiers. An identifier defined in a namespace is associated with that namespace. This way, the same identifier can be independently defined in multiple namespaces. (Like the same file names in different directories) Programming languages, which support namespaces, may have different rules that determine to which namespace an identifier belongs.
Namespaces in Python are implemented as Python dictionaries, this means it is a mapping from names (keys) to objects (values). The user doesn't have to know this to write a Python program and when using namespaces.
Some namespaces in Python:
- global names of a module
- local names in a function or method invocation
- built-in names: this namespace contains built-in functions (e.g. abs(), cmp(), ...) and built-in exception names
Lifetime of a NamespaceNot every namespace, which may be used in a script or program is accessible (or alive) at any moment during the execution of the script. Namespaces have different lifetimes, because they are often created at different points in time. There is one namespace which is present from beginning to end: The namespace containing the built-in names is created when the Python interpreter starts up, and is never deleted. The global namespace of a module is generated when the module is read in. Module namespaces normally last until the script ends, i.e. the interpreter quits. When a function is called, a local namespace is created for this function. This namespace is deleted either if the function ends, i.e. returns, or if the function raises an exception, which is not dealt with within the function.
ScopesA scope refers to a region of a program where a namespace can be directly accessed, i.e. without using a namespace prefix. In other words: The scope of a name is the area of a program where this name can be unambiguously used, for example inside of a function. A name's namespace is identical to it's scope. Scopes are defined statically, but they are used dynamically.
During program execution there are the following nested scopes available:
-
Next Chapter: Global vs. Local Variables | http://python-course.eu/namespaces.php | CC-MAIN-2017-09 | en | refinedweb |
In this article, I will analyze and compare three rendering algorithms:
- Forward Rendering
- Deferred Shading
- Forward+ (Tiled Forward Rendering)
Table of Contents
- Introduction
- Forward Rendering
- Deferred Shading
- G-Buffer Pass
- Lighting Pass (Guerrilla)
- Lighting Pass (My Implementation)
- Transparent Pass
- Forward+
- Grid Frustums
- Grid Frustums Compute Shader
- Light Culling
- Frustum Culling
- Light Culling Compute Shader
- Final Shading
- Experiment Setup and Performance Results
- Future Considerations
- Conclusion
- Download the Demo
- References
Introduction
Forward rendering works by rasterizing each geometric object in the scene. During shading, a list of lights in the scene is iterated to determine how the geometric object should be lit. This means that every geometric object has to consider every light in the scene. Of course, we can optimize this by discarding geometric objects that are occluded or do not appear in the view frustum of the camera. We can further optimize this technique by discarding lights that are not within the view frustum of the camera. If the range of the lights is known, then we can perform frustum culling on the light volumes before rendering the scene geometry. Object culling and light volume culling provide limited optimizations for this technique and light culling is often not practiced when using a forward rendering pipeline. It is more common to simply limit the number of lights that can affect a scene object. For example, some graphics engines will perform per-pixel lighting with the closest two or three lights and per-vertex lighting on three or four of the next closest lights. In traditional fixed-function rendering pipelines provided by OpenGL and DirectX the number of dynamic lights active in the scene at any time was limited to about eight. Even with modern graphics hardware, forward rendering pipelines are limited to about 100 dynamic scene lights before noticeable frame-rate issues start appearing.
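The "every object considers every light" behavior can be sketched as a simplified cost model (this is illustrative CPU-side pseudocount, not actual shader code; the object and light counts are hypothetical):

```cpp
#include <cassert>
#include <cstddef>

// Forward rendering evaluates every scene light for every rendered object,
// so the lighting work grows as O(objects * lights).
std::size_t forwardLightingEvaluations(std::size_t objectCount, std::size_t lightCount)
{
    std::size_t evaluations = 0;
    for (std::size_t o = 0; o < objectCount; ++o)    // rasterize each object
    {
        for (std::size_t l = 0; l < lightCount; ++l) // consider each light
        {
            ++evaluations;                           // one lighting computation
        }
    }
    return evaluations;
}
```

This is why culling only helps so much: halving either factor still leaves the product growing quickly as light counts rise.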
Deferred shading on the other hand, works by rasterizing all of the scene objects (without lighting) into a series of 2D image buffers that store the geometric information that is required to perform the lighting calculations in a later pass. The information that is stored into the 2D image buffers are:
- screen space depth
- surface normals
- diffuse color
- specular color and specular power
The textures that compose the G-Buffer. Diffuse (top-left), Specular (top-right), Normals (bottom-left), and Depth (bottom-right). The specular power is stored in the alpha channel of the specular texture (top-right).
The combination of these 2D image buffers are referred to as the Geometric Buffer (or G-buffer) [1].
Other information could also be stored into the image buffers if it is required for the lighting calculations that will be performed later, but each G-buffer texture requires at least 8.29 MB of texture memory at full HD (1080p) with 32 bits per pixel.
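The 8.29 MB figure can be checked with a quick calculation (a sketch assuming one 32-bit render target; real G-buffer layouts may use wider formats):

```cpp
#include <cassert>
#include <cstddef>

// Bytes required for one G-buffer render target at the given resolution,
// assuming 32 bits (4 bytes) per pixel.
std::size_t gBufferTextureBytes(std::size_t width, std::size_t height)
{
    const std::size_t bytesPerPixel = 4; // 32 bits per pixel
    return width * height * bytesPerPixel;
}
```

With four such targets (diffuse, specular, normals, depth), the G-buffer consumes roughly 33 MB at 1080p before any additional attributes are added.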
After the G-buffer has been generated, the geometric information can then be used to compute the lighting information in the lighting pass. The lighting pass is performed by rendering each light source as a geometric object in the scene. Each pixel that is touched by the light’s geometric representation is shaded using the desired lighting equation.
The obvious advantage with the deferred shading technique compared to forward rendering is that the expensive lighting calculations are only computed once per light per covered pixel. With modern hardware, the deferred shading technique can handle about 2,500 dynamic scene lights at full HD resolutions (1080p) before frame-rate issues start appearing when rendering only opaque scene objects.
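The "once per light per covered pixel" cost can be contrasted with the forward model from earlier (again a simplified sketch; the per-light coverage numbers are hypothetical):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// In deferred shading each light is evaluated only for the pixels its
// geometric volume covers, rather than for every shaded pixel in the scene.
std::size_t deferredLightingEvaluations(const std::vector<std::size_t>& coveredPixelsPerLight)
{
    std::size_t evaluations = 0;
    for (std::size_t covered : coveredPixelsPerLight)
    {
        evaluations += covered; // one lighting computation per covered pixel
    }
    return evaluations;
}
```

A small point light covering a few hundred pixels costs only a few hundred evaluations, regardless of how much geometry it overlaps, which is what makes thousands of lights feasible.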
One of the disadvantages of using deferred shading is that only opaque objects can be rasterized into the G-buffers. The reason for this is that multiple transparent objects may cover the same screen pixels but it is only possible to store a single value per pixel in the G-buffers. In the lighting pass the depth value, surface normal, diffuse and specular colors are sampled for the current screen pixel that is being lit. Since only a single value from each G-buffer is sampled, transparent objects cannot be supported in the lighting pass. To circumvent this issue, transparent geometry must be rendered using the standard forward rendering technique which limits either the amount of transparent geometry in the scene or the number of dynamic lights in the scene. A scene which consists of only opaque objects can handle about 2000 dynamic lights before frame-rate issues start appearing.
Another disadvantage of deferred shading is that only a single lighting model can be simulated in the lighting pass. This is due to the fact that it is only possible to bind a single pixel shader when rendering the light geometry. This is usually not an issue for pipelines that make use of übershaders as rendering with a single pixel shader is the norm, however if your rendering pipeline takes advantage of several different lighting models implemented in various pixel shaders then it will be problematic to switch your rendering pipeline to use deferred shading.
Forward+ [2][3] (also known as tiled forward shading) [4][5] is a rendering technique that combines forward rendering with tiled light culling to reduce the number of lights that must be considered during shading. Forward+ primarily consists of two stages:
- Light culling
- Forward rendering
Forward+ Lighting. Default Lighting (left), Light heatmap (right). The colors in the heatmap indicate how many lights are affecting the tile. Black tiles contain no lights while blue tiles contain between 1-10 lights. The green tiles contain 20-30 lights.
The first pass of the Forward+ rendering technique uses a uniform grid of tiles in screen space to partition the lights into per-tile lists.
The second pass uses a standard forward rendering pass to shade the objects in the scene but instead of looping over every dynamic light in the scene, the current pixel's screen-space position is used to look up the list of lights in the grid that was computed in the previous pass. The light culling provides a significant performance improvement over the standard forward rendering technique as it greatly reduces the number of redundant lights that must be iterated to correctly light the pixel. Both opaque and transparent geometry can be handled in a similar manner without a significant loss of performance and handling multiple materials and lighting models is natively supported with Forward+.
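The screen-space tile lookup can be sketched as follows (a sketch; the 16×16-pixel tile size is a common choice but an assumption here, not something mandated by the technique):

```cpp
#include <cassert>
#include <cstddef>

const std::size_t TILE_SIZE = 16; // pixels per tile side (a common choice)

// Number of tiles needed to cover one screen dimension (rounded up so a
// partially covered edge still gets a tile).
std::size_t tileCount(std::size_t pixels)
{
    return (pixels + TILE_SIZE - 1) / TILE_SIZE;
}

// Flat index of the tile containing a screen-space pixel. In the shading
// pass, this index is used to fetch the per-tile light list produced by
// the light culling pass.
std::size_t tileIndex(std::size_t px, std::size_t py, std::size_t screenWidth)
{
    return (py / TILE_SIZE) * tileCount(screenWidth) + (px / TILE_SIZE);
}
```

At 1080p with 16×16 tiles this yields a 120×68 grid of 8,160 tiles, so the per-pixel cost of finding the relevant light list is a handful of integer operations.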
Since Forward+ incorporates the standard forward rendering pipeline into its technique, Forward+ can be integrated into existing graphics engines that were initially built using forward rendering. Forward+ does not make use of G-buffers and does not suffer the limitations of deferred shading. Both opaque and transparent geometry can be rendered using Forward+. Using modern graphics hardware, a scene consisting of 5,000 – 6,000 dynamic lights can be rendered in real-time at full HD resolutions (1080p).
In the remainder of this article, I will describe the implementation of these three techniques:
- Forward Rendering
- Deferred Shading
- Forward+ (Tiled Forward Rendering)
I will also show performance statistics under various circumstances to try to determine under which conditions one technique performs better than the others.
Definitions
In the context of this article, it is important to define a few terms so that the rest of the article is easier to understand. If you are familiar with the basic terminology used in graphics programming, you may skip this section.
The scene refers to a nested hierarchy of objects that can be rendered. For example, all of the static objects that can be rendered will be grouped into a scene. Each individual renderable object is referenced in the scene using a scene node. Each scene node references a single renderable object (such as a mesh) and the entire scene can be referenced using the scene’s top-level node called the root node. The connection of scene nodes within the scene is also called a scene graph. Since the root node is also a scene node, scenes can be nested to create more complex scene graphs with both static and dynamic objects.
A pass refers to a single operation that performs one step of a rendering technique. For example, the opaque pass is a pass that iterates over all of the objects in the scene and renders only the opaque objects. The transparent pass will also iterate over all of the objects in the scene but renders only the transparent objects. A pass could also be used for more general operations such as copying GPU resources or dispatching a compute shader.
A technique is the combination of several passes that must be executed in a particular order to implement a rendering algorithm.
A pipeline state refers to the configuration of the rendering pipeline before an object is rendered. A pipeline state object encapsulates the following render state:
- Shaders (vertex, tessellation, geometry, and pixel)
- Rasterizer state (polygon fill mode, culling mode, scissor culling, viewports)
- Blend state
- Depth/Stencil state
- Render target
DirectX 12 introduces a pipeline state object but my definition of the pipeline state varies slightly from the DirectX 12 definition.
Forward rendering refers to a rendering technique that traditionally has only two passes:
- Opaque Pass
- Transparent Pass
The opaque pass will render all opaque objects in the scene ideally sorted front to back (relative to the camera) in order to minimize overdraw. During the opaque pass, no blending needs to be performed.
The transparent pass will render all transparent objects in the scene ideally sorted back to front (relative to the camera) in order to support correct blending. During the transparent pass, alpha blending needs to be enabled to allow for semi-transparent materials to be blended correctly with pixels already rendered to the render target’s color buffer.
During forward rendering, all lighting is performed in the pixel shader together with all other material shading instructions.
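The front-to-back and back-to-front orderings described above are typically implemented by sorting the renderable objects by their view-space depth before submitting draw calls. Here is a minimal C++ sketch of the two sort orders; the RenderItem struct is a hypothetical stand-in for whatever per-object data the engine actually tracks:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical minimal representation of a renderable object.
struct RenderItem
{
    float viewSpaceDepth; // Distance from the camera along the view direction.
    bool  isTransparent;
};

// Opaque geometry: sort front to back to maximize early depth rejection
// and minimize overdraw.
void SortOpaque( std::vector<RenderItem>& items )
{
    std::sort( items.begin(), items.end(),
        []( const RenderItem& a, const RenderItem& b )
        { return a.viewSpaceDepth < b.viewSpaceDepth; } );
}

// Transparent geometry: sort back to front so alpha blending
// composites distant surfaces before nearer ones.
void SortTransparent( std::vector<RenderItem>& items )
{
    std::sort( items.begin(), items.end(),
        []( const RenderItem& a, const RenderItem& b )
        { return a.viewSpaceDepth > b.viewSpaceDepth; } );
}
```

Note that sorting is an optimization for the opaque pass (the depth buffer guarantees correctness either way) but a requirement for correct blending in the transparent pass.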
Deferred shading refers to a rendering technique that consists of three primary passes:
- Geometry Pass
- Lighting Pass
- Transparent Pass
The first pass is the geometry pass which is similar to the opaque pass of the forward rendering technique because only opaque objects are rendered in this pass. The difference is that the geometry pass does not perform any lighting calculations but only outputs the geometric and material data to the G-buffer that was described in the introduction.
In the lighting pass, the geometric volumes that represent the lights are rendered into the scene and the material information stored in the G-buffer is used to compute the lighting for the rasterized pixels.
The final pass is the transparent pass. This pass is identical to the transparent pass of the forward rendering technique. Since deferred shading has no native support for transparent materials, transparent objects have to be rendered in a separate pass that performs lighting using the standard forward rendering method.
Forward+ (also referred to as tiled forward rendering) is a rendering technique that consists of three primary passes:
- Light Culling Pass
- Opaque Pass
- Transparent Pass
As mentioned in the introduction, the light culling pass is responsible for sorting the dynamic lights in the scene into screen space tiles. A light index list is used to indicate which light indices (from the global light list) are overlapping each screen tile. In the light culling pass, two sets of light index lists will be generated:
- Opaque light index list
- Transparent light index list
The opaque light index list is used when rendering opaque geometry and the transparent light index list is used when rendering transparent geometry.
The opaque and transparent passes of the Forward+ rendering technique are identical to that of the standard forward rendering technique but instead of looping over all of the dynamic lights in the scene, only the lights in the current fragment’s screen space tile need to be considered.
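The per-tile lookup amounts to mapping a fragment's screen position to a tile index, then reading only that tile's slice of the light index list. The indexing math can be sketched in C++ as follows; the 16-pixel tile size and the flattened row-major tile layout are common choices used here for illustration, not necessarily the demo's exact layout:

```cpp
#include <cstdint>

// Assumed tile size in pixels (a common choice for tiled rendering).
constexpr uint32_t TILE_SIZE = 16;

// Number of tiles needed to cover a screen dimension (rounds up
// so a partial tile at the edge is still counted).
constexpr uint32_t NumTiles( uint32_t pixels )
{
    return ( pixels + TILE_SIZE - 1 ) / TILE_SIZE;
}

// Flattened 1D tile index for a fragment's screen-space position,
// assuming tiles are stored row-major across the screen.
constexpr uint32_t TileIndex( uint32_t x, uint32_t y, uint32_t screenWidth )
{
    return ( y / TILE_SIZE ) * NumTiles( screenWidth ) + ( x / TILE_SIZE );
}
```

At 1920×1080 with 16-pixel tiles this yields a 120×68 grid of tiles, each with its own opaque and transparent light index lists.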
A light refers to one of the following types of lights:
- Point light
- Spot light
- Directional light
All rendering techniques described in this article have support for these three light types. Area lights are not supported. The point light and the spot light are simulated as emanating from a single point of origin while the directional light is considered to emanate from a point infinitely far away, emitting light everywhere in the same direction. Point lights and spot lights have a limited range after which their intensity falls off to zero. The fall-off of the light's intensity is called attenuation. Point lights are geometrically represented as spheres, spot lights as cones, and directional lights as full-screen quads.
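Attenuation can be modeled many ways; a simple choice that reaches exactly zero at the light's range is a linear falloff, sketched below in C++. This particular formula is an illustration only and is not necessarily the exact attenuation function used in the article's shaders:

```cpp
#include <algorithm>

// Linear falloff from full intensity at the light's position to zero
// at its range. 'distance' is the distance from the light to the
// shaded point; results beyond the range clamp to zero.
float Attenuation( float range, float distance )
{
    return 1.0f - std::min( 1.0f, distance / range );
}
```

The important property for deferred shading and Forward+ is not the exact curve but that the attenuation reaches zero at a finite range, so each light can be bounded by a finite volume.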
Let’s first take a more detailed look at the standard forward rendering technique.
Forward Rendering
Forward rendering is the simplest of the three lighting techniques and the most common technique used to render graphics in games. It is also the most computationally expensive technique for computing lighting and for this reason, it does not allow for a large number of dynamic lights to be used in the scene.
Most graphics engines that use forward rendering will utilize various techniques to simulate many lights in the scene. For example, lightmapping and light probes are methods used to pre-compute the lighting contributions from static lights placed in the scene and store these contributions in textures that are loaded at runtime. Unfortunately, lightmapping and light probes cannot be used to simulate dynamic lights in the scene because the lights that were used to produce the lightmaps are often discarded at runtime.
For this experiment, forward rendering is used as the ground truth to compare the other two rendering techniques. The forward rendering technique is also used to establish a performance baseline that can be used to compare the performance of the other rendering techniques.
Many functions of the forward rendering technique are reused in the deferred and forward+ rendering techniques. For example, the vertex shader used in forward rendering is also used for both deferred shading and forward+ rendering. Also the methods to compute the final lighting and material shading are reused in all rendering techniques.
In the next section, I will describe the implementation of the forward rendering technique.
Vertex Shader
The vertex shader is common to all rendering techniques. In this experiment, only static geometry is supported and there is no skeletal animation or terrain that would require a different vertex shader. The vertex shader is as simple as it can be while supporting the required functionality in the pixel shader such as normal mapping.
Before I show the vertex shader code, I will describe the data structures used by the vertex shader.
struct AppData
{
    float3 position : POSITION;
    float3 tangent  : TANGENT;
    float3 binormal : BINORMAL;
    float3 normal   : NORMAL;
    float2 texCoord : TEXCOORD0;
};
The AppData structure defines the data that is expected to be sent by the application code (for a tutorial on how to pass data from the application to a vertex shader, please refer to my previous article titled Introduction to DirectX 11). For normal mapping, in addition to the normal vector, we also need to send the tangent vector, and optionally the binormal (or bitangent) vector. The tangent and binormal vectors can either be created by the 3D artist when the model is created, or they can be generated by the model importer. In my case, I rely on the Open Asset Import Library [7] to generate the tangents and bitangents if they were not already created by the 3D artist.
In the vertex shader, we also need to know how to transform the object space vectors that are sent by the application into view space which are required by the pixel shader. To do this, we need to send the world, view, and projection matrices to the vertex shader (for a review of the various spaces used in this article, please refer to my previous article titled Coordinate Systems). To store these matrices, I will create a constant buffer that will store the per-object variables needed by the vertex shader.
cbuffer PerObject : register( b0 )
{
    float4x4 ModelViewProjection;
    float4x4 ModelView;
}
Since I don’t need to store the world matrix separately, I precompute the combined model-view and model-view-projection matrices in the application and send these matrices in a single constant buffer to the vertex shader.
The output from the vertex shader (and consequently, the input to the pixel shader) looks like this:
struct VertexShaderOutput
{
    float3 positionVS : TEXCOORD0;    // View space position.
    float2 texCoord   : TEXCOORD1;    // Texture coordinate.
    float3 tangentVS  : TANGENT;      // View space tangent.
    float3 binormalVS : BINORMAL;     // View space binormal.
    float3 normalVS   : NORMAL;       // View space normal.
    float4 position   : SV_POSITION;  // Clip space position.
};
The VertexShaderOutput structure is used to pass the transformed vertex attributes to the pixel shader. The members that are named with a VS postfix indicate that the vector is expressed in view space. I chose to do all of the lighting in view space, as opposed to world space, because it is easier to work in view space coordinates when implementing the deferred shading and forward+ rendering techniques.
The vertex shader is fairly straightforward and minimal. Its only purpose is to transform the object space vectors passed by the application into view space to be used by the pixel shader.
The vertex shader must also compute the clip space position that is consumed by the rasterizer. The SV_POSITION semantic is applied to the output value from the vertex shader to specify that the value is used as the clip space position, but this semantic can also be applied to an input variable of a pixel shader. When SV_POSITION is used as an input semantic to a pixel shader, the value is the position of the pixel in screen space [8]. In both the deferred shading and the forward+ shaders, I will use this semantic to get the screen space position of the current pixel.
VertexShaderOutput VS_main( AppData IN )
{
    VertexShaderOutput OUT;

    OUT.position   = mul( ModelViewProjection, float4( IN.position, 1.0f ) );
    OUT.positionVS = mul( ModelView, float4( IN.position, 1.0f ) ).xyz;
    OUT.tangentVS  = mul( ( float3x3 )ModelView, IN.tangent );
    OUT.binormalVS = mul( ( float3x3 )ModelView, IN.binormal );
    OUT.normalVS   = mul( ( float3x3 )ModelView, IN.normal );
    OUT.texCoord   = IN.texCoord;

    return OUT;
}
You will notice that I am pre-multiplying the input vectors by the matrices. This indicates that the matrices are stored in column-major order by default. Prior to DirectX 10, matrices in HLSL were loaded in row-major order and input vectors were post-multiplied by the matrices. Since DirectX 10, matrices are loaded in column-major order by default. You can change the default order by specifying the row_major type modifier on the matrix variable declarations [9].
Pixel Shader
The pixel shader will compute all of the lighting and shading that is used to determine the final color of a single screen pixel. The lighting equations used in this pixel shader are described in a previous article titled Texturing and Lighting in DirectX 11. If you are not familiar with lighting models, then you should read that article first before continuing.
The pixel shader uses several structures to do its work. The Material struct stores all of the information that describes the surface material of the object being shaded and the Light struct contains all of the parameters that are necessary to describe a light that is placed in the scene.
Material
The Material struct defines all of the properties that are necessary to describe the surface of the object currently being shaded. Since some material properties can also have an associated texture (for example, diffuse textures, specular textures, or normal texture), we will also use the material to indicate if those textures are present on the object.
struct Material
{
    float4  GlobalAmbient;           //--------------------------- ( 16 bytes )
    float4  AmbientColor;            //--------------------------- ( 16 bytes )
    float4  EmissiveColor;           //--------------------------- ( 16 bytes )
    float4  DiffuseColor;            //--------------------------- ( 16 bytes )
    float4  SpecularColor;           //--------------------------- ( 16 bytes )
    // Reflective value.
    float4  Reflectance;             //--------------------------- ( 16 bytes )
    float   Opacity;
    float   SpecularPower;
    // For transparent materials, IOR > 0.
    float   IndexOfRefraction;
    bool    HasAmbientTexture;       //--------------------------- ( 16 bytes )
    bool    HasEmissiveTexture;
    bool    HasDiffuseTexture;
    bool    HasSpecularTexture;
    bool    HasSpecularPowerTexture; //--------------------------- ( 16 bytes )
    bool    HasNormalTexture;
    bool    HasBumpTexture;
    bool    HasOpacityTexture;
    float   BumpIntensity;           //--------------------------- ( 16 bytes )
    float   SpecularScale;
    float   AlphaThreshold;
    float2  Padding;                 //--------------------------- ( 16 bytes )
};  //--------------------------------------------------- ( 16 * 10 = 160 bytes )
The GlobalAmbient term is used to describe the ambient contribution applied to all objects in the scene globally. Technically, this variable should be a global variable (not specific to a single object) but since there is only a single material at a time in the pixel shader, I figured it was a fine place to put it.
The ambient, emissive, diffuse, and specular color values have the same meaning as in my previous article titled Texturing and Lighting in DirectX 11 so I will not explain them in detail here.
The Reflectance component could be used to indicate the amount of reflected color that should be blended with the diffuse color. This would require environment mapping to be implemented which I am not doing in this experiment so this value is not used here.
The Opacity value is used to determine the total opacity of an object. This value can be used to make objects appear transparent. This property is used to render semi-transparent objects in the transparent pass. If the opacity value is less than one (1 being fully opaque and 0 being fully transparent), the object will be considered transparent and will be rendered in the transparent pass instead of the opaque pass.
The SpecularPower variable is used to determine how shiny the object appears. Specular power was described in my previous article titled Texturing and Lighting in DirectX 11 so I won’t repeat it here.
The IndexOfRefraction variable can be applied on objects that should refract light through them. Since refraction requires environment mapping techniques that are not implemented in this experiment, this variable will not be used here.
The Has*Texture variables (HasAmbientTexture through HasOpacityTexture) indicate whether the object being rendered has an associated texture for those properties. If the parameter is true then the corresponding texture will be sampled and the texel will be blended with the corresponding material color value.
The BumpIntensity variable is used to scale the height values from a bump map (not to be confused with normal mapping which does not need to be scaled) in order to soften or accentuate the apparent bumpiness of an object’s surface. In most cases models will use normal maps to add detail to the surface of an object without high tessellation but it is also possible to use a heightmap to do the same thing. If a model has a bump map, the material’s HasBumpTexture property will be set to true and in this case the model will be bump mapped instead of normal mapped.
The SpecularScale variable is used to scale the specular power value that is read from a specular power texture. Since textures usually store values as unsigned normalized values, when sampling from the texture the value is read as a floating-point value in the range of [0..1]. A specular power of 1.0 does not make much sense (as was explained in my previous article titled Texturing and Lighting in DirectX 11) so the specular power value read from the texture will be scaled by SpecularScale before being used for the final lighting computation.
The AlphaThreshold variable can be used to discard pixels whose opacity is below a certain value using the “discard” command in the pixel shader. This can be used with “cut-out” materials where the object does not need to be alpha blended but it should have holes in the object (for example, a chain-link fence).
The Padding variable is used to explicitly add eight bytes of padding to the material struct. Although HLSL will implicitly add this padding to this struct to make sure the size of the struct is a multiple of 16 bytes, explicitly adding the padding makes it clear that the size and alignment of this struct is identical to its C++ counterpart.
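The alignment claim can be checked at compile time on the C++ side. Below is a hypothetical sketch of the mirrored struct, not the demo's actual declaration: DirectX math types are replaced with plain float arrays, and because an HLSL bool occupies 4 bytes, the C++ mirror must use a 4-byte type such as uint32_t rather than C++ bool:

```cpp
#include <cstdint>

// Hypothetical C++ mirror of the HLSL Material struct, using plain
// arrays in place of DirectX math types for illustration.
struct MaterialCpp
{
    float    GlobalAmbient[4];
    float    AmbientColor[4];
    float    EmissiveColor[4];
    float    DiffuseColor[4];
    float    SpecularColor[4];
    float    Reflectance[4];
    float    Opacity;
    float    SpecularPower;
    float    IndexOfRefraction;
    uint32_t HasAmbientTexture;        // HLSL bool is 4 bytes, not C++ bool.
    uint32_t HasEmissiveTexture;
    uint32_t HasDiffuseTexture;
    uint32_t HasSpecularTexture;
    uint32_t HasSpecularPowerTexture;
    uint32_t HasNormalTexture;
    uint32_t HasBumpTexture;
    uint32_t HasOpacityTexture;
    float    BumpIntensity;
    float    SpecularScale;
    float    AlphaThreshold;
    float    Padding[2];               // Explicit 8 bytes of padding.
};

// Every member is 4-byte aligned, so the total is exactly 10 * 16 bytes.
static_assert( sizeof( MaterialCpp ) == 160,
               "Material must be 160 bytes to match the HLSL layout." );
```

A static_assert like this catches layout drift between the C++ and HLSL definitions at compile time instead of producing corrupted material data at runtime.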
The material properties are passed to the pixel shader using a constant buffer.
cbuffer Material : register( b2 )
{
    Material Mat;
};
This constant buffer and buffer register slot assignment is used for all pixel shaders described in this article.
Textures
The materials have support for eight different textures.
- Ambient
- Emissive
- Diffuse
- Specular
- SpecularPower
- Normals
- Bump
- Opacity
Not all scene objects will use all of the texture slots (normal and bump maps are mutually exclusive so they can probably reuse the same texture slot assignment). It is up to the 3D artist to determine which textures will be used by the models in the scene. The application will load the textures that are associated to a material. A texture parameter and an associated texture slot assignment is declared for each of these material properties.
Texture2D AmbientTexture        : register( t0 );
Texture2D EmissiveTexture       : register( t1 );
Texture2D DiffuseTexture        : register( t2 );
Texture2D SpecularTexture       : register( t3 );
Texture2D SpecularPowerTexture  : register( t4 );
Texture2D NormalTexture         : register( t5 );
Texture2D BumpTexture           : register( t6 );
Texture2D OpacityTexture        : register( t7 );
In every pixel shader described in this article, texture slots 0-7 will be reserved for these textures.
Lights
The Light struct stores all the information necessary to define a light in the scene. Spot lights, point lights and directional lights are not separated into different structs and all of the properties necessary to define any of those light types are stored in a single struct.
struct Light
{
    /**
     * Position for point and spot lights (World space).
     */
    float4 PositionWS;   //--------------------------------------( 16 bytes )
    /**
     * Direction for spot and directional lights (World space).
     */
    float4 DirectionWS;  //--------------------------------------( 16 bytes )
    /**
     * Position for point and spot lights (View space).
     */
    float4 PositionVS;   //--------------------------------------( 16 bytes )
    /**
     * Direction for spot and directional lights (View space).
     */
    float4 DirectionVS;  //--------------------------------------( 16 bytes )
    /**
     * Color of the light. Diffuse and specular colors are not separated.
     */
    float4 Color;        //--------------------------------------( 16 bytes )
    /**
     * The half angle of the spotlight cone.
     */
    float  SpotlightAngle;
    /**
     * The range of the light.
     */
    float  Range;
    /**
     * The intensity of the light.
     */
    float  Intensity;
    /**
     * Disable or enable the light.
     */
    bool   Enabled;      //--------------------------------------( 16 bytes )
    /**
     * Is the light selected in the editor?
     */
    bool   Selected;
    /**
     * The type of the light.
     */
    uint   Type;
    float2 Padding;      //--------------------------------------( 16 bytes )
};                       //-------------------------------( 16 * 7 = 112 bytes )
The Position and Direction properties are stored in both world space (with the WS postfix) and in view space (with the VS postfix). Of course, the Position variable only applies to point and spot lights while the Direction variable only applies to spot and directional lights. I store both world space and view space position and direction vectors because I find it easier to work in world space in the application and then convert the world space vectors to view space before uploading the lights array to the GPU. This way I do not need to maintain multiple light lists, at the cost of the additional space required on the GPU. But even 10,000 lights only require 1.12 MB on the GPU, so I figured this was a reasonable sacrifice. That said, minimizing the size of the light structs could have a positive impact on caching on the GPU and improve rendering performance. This is further discussed in the Future Considerations section at the end of this article.
In some lighting models the diffuse and specular lighting contributions are separated. I chose not to separate the diffuse and specular color contributions because it is rare that these values differ. Instead I chose to store both the diffuse and specular lighting contributions in a single variable called Color.
The SpotlightAngle is the half-angle of the spotlight cone expressed in degrees. Working in degrees seems to be more intuitive than working in radians. Of course, the spotlight angle will be converted to radians in the shader when we need to compute the cosine angle of the spotlight and the light vector.
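The degrees-to-radians conversion and the resulting cosine comparison can be sketched in C++ as follows. The function names here are illustrative only; the actual spotlight test appears later in the article's HLSL lighting functions:

```cpp
#include <cmath>

// Convert the spotlight half-angle from degrees to radians.
float Radians( float degrees )
{
    return degrees * 3.14159265358979f / 180.0f;
}

// A point is inside the spotlight cone when the cosine of the angle
// between the spotlight direction and the vector to the point is at
// least cos(half-angle). Larger cosine means smaller angle.
bool InsideSpotCone( float spotAngleDegrees, float cosAngleToPoint )
{
    float minCos = std::cos( Radians( spotAngleDegrees ) );
    return cosAngleToPoint >= minCos;
}
```

Comparing cosines directly avoids an inverse trigonometric function per pixel, which is why the shader converts the angle once rather than computing acos of the dot product.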
The Range variable determines how far away the light will reach and still contribute light to a surface. Although not entirely physically correct (real lights have an attenuation that never actually reaches 0) lights are required to have a finite range to implement the deferred shading and forward+ rendering techniques. The units of this range are scene specific but generally I try to adhere to the 1 unit is 1 meter specification. For point lights, the range is the radius of the sphere that represents the light and for spotlights, the range is the length of the cone that represents the light. Directional lights don’t use range because they are considered to be infinitely far away pointing in the same direction everywhere.
The Intensity variable is used to modulate the computed light contribution. By default, this value is 1 but it can be used to make some lights brighter or more subtle than other lights.
Lights in the scene can be toggled on and off with the Enabled flag. Lights whose Enabled flag is false will be skipped in the shader.
Lights are editable in this demo. A light can be selected by clicking on it in the demo application and its properties can be modified. To indicate that a light is currently selected, the Selected flag will be set to true. When a light is selected in the scene, its visual representation will appear darker (less transparent) to indicate that it is currently selected.
The Type variable is used to indicate which type of light this is. It can have one of the following values:
#define POINT_LIGHT        0
#define SPOT_LIGHT         1
#define DIRECTIONAL_LIGHT  2
Once again the Light struct is explicitly padded with 8 bytes to match the struct layout in C++ and to make the struct explicitly aligned to 16 bytes which is required in HLSL.
The lights array is accessed through a StructuredBuffer. Most lighting shader implementations will use a constant buffer to store the lights array, but constant buffers are limited to 64 KB in size, which means the shader would be limited to about 570 lights before running out of constant memory on the GPU. Structured buffers are stored in texture memory, which is limited only by the amount of texture memory available on the GPU (usually in the GB range on desktop GPUs). Texture memory is also very fast on most GPUs, so storing the lights in a structured buffer did not incur a performance penalty. In fact, on my particular GPU (NVIDIA GeForce GTX 680) I noticed a considerable performance improvement when I moved the lights array to a structured buffer.
StructuredBuffer<Light> Lights : register( t8 );
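The buffer-size arithmetic behind those claims is simple to check. The 64 KB figure is the D3D11 constant buffer limit of 4096 four-component (16-byte) constants; the usable light count in practice is approximate because packing rules and any other constants sharing the buffer also consume space:

```cpp
#include <cstddef>

// Size of one Light struct as laid out for the GPU (see the struct above).
constexpr std::size_t kLightSize = 112;

// D3D11 constant buffers hold at most 4096 four-component constants = 64 KB.
constexpr std::size_t kConstantBufferLimit = 4096 * 16;

// Upper bound on how many 112-byte lights fit in one constant buffer.
constexpr std::size_t kMaxLightsInCBuffer = kConstantBufferLimit / kLightSize;

// Memory needed for 10,000 lights in a structured buffer, in bytes.
constexpr std::size_t kTenThousandLights = 10000 * kLightSize;
```

The raw division gives an upper bound of 585 lights per constant buffer, consistent with the "about 570" usable figure quoted above, while 10,000 lights occupy 1,120,000 bytes (about 1.12 MB) in a structured buffer.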
Pixel Shader Continued
The pixel shader for the forward rendering technique is slightly more complicated than the vertex shader. If you have read my previous article titled Texturing and Lighting in DirectX 11 then you should already be familiar with most of the implementation of this shader, but I will explain it in detail here as it is the basis of all of the rendering algorithms shown in this article.
Materials
First, we need to gather the material properties of the material. If the material has textures associated with its various components, the textures will be sampled before the lighting is computed. After the material properties have been initialized, all of the lights in the scene will be iterated and the lighting contributions will be accumulated and modulated with the material properties to produce the final pixel color.
[earlydepthstencil]
float4 PS_main( VertexShaderOutput IN ) : SV_TARGET
{
    // Everything is in view space.
    float4 eyePos = { 0, 0, 0, 1 };
    Material mat = Mat;
The [earlydepthstencil] attribute before the function indicates that the GPU should take advantage of early depth and stencil culling [10]. This causes the depth/stencil tests to be performed before the pixel shader is executed. This attribute cannot be used on shaders that modify the pixel’s depth value by outputting a value using the SV_Depth semantic. Since this pixel shader only outputs a color value using the SV_TARGET semantic, it can take advantage of early depth/stencil testing to provide a performance improvement when a pixel is rejected. Most GPUs will perform early depth/stencil tests anyway, even without this attribute, and adding it to the pixel shader did not have a noticeable impact on performance, but I decided to keep the attribute anyway.
Since all of the lighting computations will be performed in view space, the eye position (the position of the camera) is always (0, 0, 0). This is a nice side effect of working in view space: the camera’s eye position does not need to be passed as an additional parameter to the shader.
A temporary copy of the material is created because its properties will be modified in the shader if there is an associated texture for the material property. Since the material properties are stored in a constant buffer, it is not possible to directly update the material's properties through the constant buffer uniform variable, so a local temporary must be used.
Diffuse
The first material property we will read is the diffuse color.
    float4 diffuse = mat.DiffuseColor;
    if ( mat.HasDiffuseTexture )
    {
        float4 diffuseTex = DiffuseTexture.Sample( LinearRepeatSampler, IN.texCoord );
        if ( any( diffuse.rgb ) )
        {
            diffuse *= diffuseTex;
        }
        else
        {
            diffuse = diffuseTex;
        }
    }
The default diffuse color is the diffuse color assigned to the material’s DiffuseColor variable. If the material also has a diffuse texture associated with it, then the color from the diffuse texture will be blended with the material’s diffuse color. If the material’s diffuse color is black (0, 0, 0, 0), then the material’s diffuse color will simply be replaced by the color in the diffuse texture. The any HLSL intrinsic function can be used to find out if any of the color components is not zero.
Opacity
The pixel’s alpha value is determined next.
    float alpha = diffuse.a;
    if ( mat.HasOpacityTexture )
    {
        // If the material has an opacity texture, use that to override the diffuse alpha.
        alpha = OpacityTexture.Sample( LinearRepeatSampler, IN.texCoord ).r;
    }
By default, the fragment’s transparency value is determined by the alpha component of the diffuse color. If the material has an opacity texture associated with it, the red component of the opacity texture is used as the alpha value, overriding the alpha value in the diffuse texture. In most cases, opacity textures store only a single channel in the first component of the color that is returned from the Sample method. In order to read from a single-channel texture, we must read from the red channel, not the alpha channel. The alpha channel of a single channel texture will always be 1 so reading the alpha channel from the opacity map (which is most likely a single channel texture) would not provide the value we require.
Ambient and Emissive
The ambient and emissive colors are read in a similar fashion as the diffuse color. The ambient color is also combined with the value of the material’s GlobalAmbient variable.
    float4 ambient = mat.AmbientColor;
    if ( mat.HasAmbientTexture )
    {
        float4 ambientTex = AmbientTexture.Sample( LinearRepeatSampler, IN.texCoord );
        if ( any( ambient.rgb ) )
        {
            ambient *= ambientTex;
        }
        else
        {
            ambient = ambientTex;
        }
    }
    // Combine the global ambient term.
    ambient *= mat.GlobalAmbient;

    float4 emissive = mat.EmissiveColor;
    if ( mat.HasEmissiveTexture )
    {
        float4 emissiveTex = EmissiveTexture.Sample( LinearRepeatSampler, IN.texCoord );
        if ( any( emissive.rgb ) )
        {
            emissive *= emissiveTex;
        }
        else
        {
            emissive = emissiveTex;
        }
    }
Specular Power
Next the specular power is computed.
    if ( mat.HasSpecularPowerTexture )
    {
        mat.SpecularPower = SpecularPowerTexture.Sample( LinearRepeatSampler, IN.texCoord ).r
                            * mat.SpecularScale;
    }
If the material has an associated specular power texture, the red component of the texture is sampled and scaled by the value of the material’s SpecularScale variable. In this case, the value of the SpecularPower variable in the material is replaced with the scaled value from the texture.
Normals
If the material has either an associated normal map or a bump map, normal mapping or bump mapping will be performed to compute the normal vector. If neither a normal map nor a bump map texture is associated with the material, the input normal is used as-is.
    float4 N;

    // Normal mapping
    if ( mat.HasNormalTexture )
    {
        // For scenes with normal mapping, I don't have to invert the binormal.
        float3x3 TBN = float3x3( normalize( IN.tangentVS ),
                                 normalize( IN.binormalVS ),
                                 normalize( IN.normalVS ) );

        N = DoNormalMapping( TBN, NormalTexture, LinearRepeatSampler, IN.texCoord );
    }
    // Bump mapping
    else if ( mat.HasBumpTexture )
    {
        // For most scenes using bump mapping, I have to invert the binormal.
        float3x3 TBN = float3x3( normalize( IN.tangentVS ),
                                 normalize( -IN.binormalVS ),
                                 normalize( IN.normalVS ) );

        N = DoBumpMapping( TBN, BumpTexture, LinearRepeatSampler, IN.texCoord, mat.BumpIntensity );
    }
    // Just use the normal from the model.
    else
    {
        N = normalize( float4( IN.normalVS, 0 ) );
    }
Normal Mapping
The DoNormalMapping function will perform normal mapping from the TBN (tangent, bitangent/binormal, normal) matrix and the normal map.
float3 ExpandNormal( float3 n )
{
    return n * 2.0f - 1.0f;
}

float4 DoNormalMapping( float3x3 TBN, Texture2D tex, sampler s, float2 uv )
{
    float3 normal = tex.Sample( s, uv ).xyz;
    normal = ExpandNormal( normal );

    // Transform normal from tangent space to view space.
    normal = mul( normal, TBN );
    return normalize( float4( normal, 0 ) );
}
Normal mapping is pretty straightforward and is explained in more detail in a previous article titled Normal Mapping so I won’t explain it in detail here. Basically we just need to sample the normal from the normal map, expand the normal into the range [-1..1] and transform it from tangent space into view space by post-multiplying it by the TBN matrix.
Bump Mapping
Bump mapping works in a similar way, except instead of storing the normals directly in the texture, the bumpmap texture stores height values in the range [0..1]. The normal can be generated from the height map by computing the gradient of the height values in both the U and V texture coordinate directions. Taking the cross product of the gradients in each direction gives the normal in texture space. Post-multiplying the resulting normal by the TBN matrix will give the normal in view space. The height values read from the bump map can be scaled to produce more (or less) accentuated bumpiness.
float4 DoBumpMapping( float3x3 TBN, Texture2D tex, sampler s, float2 uv, float bumpScale )
{
    // Sample the heightmap at the current texture coordinate.
    float height = tex.Sample( s, uv ).r * bumpScale;
    // Sample the heightmap in the U texture coordinate direction.
    float heightU = tex.Sample( s, uv, int2( 1, 0 ) ).r * bumpScale;
    // Sample the heightmap in the V texture coordinate direction.
    float heightV = tex.Sample( s, uv, int2( 0, 1 ) ).r * bumpScale;

    float3 p  = { 0, 0, height };
    float3 pU = { 1, 0, heightU };
    float3 pV = { 0, 1, heightV };

    // normal = tangent x bitangent
    float3 normal = cross( normalize( pU - p ), normalize( pV - p ) );

    // Transform normal from tangent space to view space.
    normal = mul( normal, TBN );

    return float4( normal, 0 );
}
If the material does not have an associated normal map or a bump map, the normal vector from the vertex shader output is used directly.
Now we have all of the data that is required to compute the lighting.
Lighting
The lighting calculations for the forward rendering technique are performed in the DoLighting function. This function accepts the following arguments:
- lights: The lights array (as a structured buffer)
- mat: The material properties that were just computed
- eyePos: The position of the camera in view space (which is always (0, 0, 0))
- P: The position of the point being shaded in view space
- N: The normal of the point being shaded in view space.
The DoLighting function returns a LightingResult structure that contains the diffuse and specular lighting contributions from all of the lights in the scene.
// This lighting result is returned by the
// lighting functions for each light type.
struct LightingResult
{
    float4 Diffuse;
    float4 Specular;
};

LightingResult DoLighting( StructuredBuffer<Light> lights, Material mat, float4 eyePos, float4 P, float4 N )
{
    float4 V = normalize( eyePos - P );

    LightingResult totalResult = (LightingResult)0;

    for ( int i = 0; i < NUM_LIGHTS; ++i )
    {
        LightingResult result = (LightingResult)0;

        // Skip lights that are not enabled.
        if ( !lights[i].Enabled ) continue;
        // Skip point and spot lights that are out of range of the point being shaded.
        if ( lights[i].Type != DIRECTIONAL_LIGHT && length( lights[i].PositionVS - P ) > lights[i].Range ) continue;

        switch ( lights[i].Type )
        {
        case DIRECTIONAL_LIGHT:
            {
                result = DoDirectionalLight( lights[i], mat, V, P, N );
            }
            break;
        case POINT_LIGHT:
            {
                result = DoPointLight( lights[i], mat, V, P, N );
            }
            break;
        case SPOT_LIGHT:
            {
                result = DoSpotLight( lights[i], mat, V, P, N );
            }
            break;
        }
        totalResult.Diffuse += result.Diffuse;
        totalResult.Specular += result.Specular;
    }

    return totalResult;
}
The view vector (V) is computed from the eye position and the position of the shaded pixel in view space.
The light buffer is iterated over in the for loop. Since we know that disabled lights and lights that are not within range of the point being shaded won’t contribute any lighting, we can skip those lights. Otherwise, the appropriate lighting function is invoked depending on the type of light.
Each of the various light types will compute their diffuse and specular lighting contributions. Since diffuse and specular lighting is computed in the same way for every light type, I will define functions to compute the diffuse and specular lighting contributions independent of the light type.
Diffuse Lighting
The DoDiffuse function is very simple and only needs to know about the light vector (L) and the surface normal (N).
float4 DoDiffuse( Light light, float4 L, float4 N )
{
    float NdotL = max( dot( N, L ), 0 );
    return light.Color * NdotL;
}
The diffuse lighting is computed by taking the dot product between the light vector (L) and the surface normal (N). The DoDiffuse function expects both of these vectors to be normalized.
The resulting dot product is then multiplied by the color of the light to compute the diffuse contribution of the light.
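The same N·L computation can be sketched outside of HLSL. Below is a minimal pure-Python version for illustration; the helper names and test vectors are made up, and both L and N are assumed to be normalized, just as DoDiffuse expects:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def do_diffuse(light_color, L, N):
    # Lambertian term: clamp(N . L, 0) scaled by the light color.
    # Both L and N are assumed to be normalized.
    n_dot_l = max(dot(N, L), 0.0)
    return [c * n_dot_l for c in light_color]

# Light aligned with the surface normal -> full contribution.
aligned = do_diffuse([1.0, 1.0, 1.0], [0.0, 0.0, 1.0], [0.0, 0.0, 1.0])

# Light at a grazing (90 degree) angle -> no contribution.
grazing = do_diffuse([1.0, 1.0, 1.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0])
```

The `max(…, 0.0)` clamp mirrors the HLSL `max( dot( N, L ), 0 )`; without it, surfaces facing away from the light would receive negative light.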
Next, we’ll compute the specular contribution of the light.
Specular Lighting
The DoSpecular function is used to compute the specular contribution of the light. In addition to the light vector (L) and the surface normal (N), this function also needs the view vector (V) to compute the specular contribution of the light.
float4 DoSpecular( Light light, Material material, float4 V, float4 L, float4 N )
{
    float4 R = normalize( reflect( -L, N ) );
    float RdotV = max( dot( R, V ), 0 );

    return light.Color * pow( RdotV, material.SpecularPower );
}
Since the light vector L is the vector pointing from the point being shaded to the light source, it needs to be negated so that it points from the light source to the point being shaded before we compute the reflection vector. The resulting dot product of the reflection vector (R) and the view vector (V) is raised to the power of the value of the material’s specular power variable and modulated by the color of the light. It’s important to remember that a specular power value in the range (0…1) is not a meaningful specular power value. For a detailed explanation of specular lighting, please refer to my previous article titled Texturing and Lighting in DirectX 11.
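The Phong specular term can likewise be sketched in pure Python. The helper names and test vectors below are made up for illustration; `reflect` implements the same formula as the HLSL intrinsic (i - 2·dot(n, i)·n), and the explicit normalize is skipped because reflection preserves the length of a unit vector:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(i, n):
    # HLSL-style reflect: i - 2 * dot(n, i) * n
    d = dot(n, i)
    return [ic - 2.0 * d * nc for ic, nc in zip(i, n)]

def do_specular(light_color, spec_power, V, L, N):
    # Reflect the incoming light direction (-L) about the normal.
    # R stays unit length when L and N are normalized, so no normalize here.
    R = reflect([-c for c in L], N)
    r_dot_v = max(dot(R, V), 0.0)
    return [c * (r_dot_v ** spec_power) for c in light_color]

# Viewer looking straight down the reflection vector -> full highlight.
full = do_specular([1.0, 1.0, 1.0], 32.0, [0.0, 0.0, 1.0], [0.0, 0.0, 1.0], [0.0, 0.0, 1.0])

# Viewer perpendicular to the reflection vector -> no highlight.
none = do_specular([1.0, 1.0, 1.0], 32.0, [1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 0.0, 1.0])
```

Raising `r_dot_v` to a higher `spec_power` narrows the highlight, which is why a specular power in (0…1) produces nonsensical, overly broad highlights.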
Attenuation
Attenuation is the fall-off of the intensity of the light as the light is further away from the point being shaded. In traditional lighting models the attenuation is computed as the reciprocal of the sum of three attenuation factors multiplied by the distance to the light (as explained in Attenuation):
- Constant attenuation
- Linear attenuation
- Quadratic attenuation
However this method of computing attenuation assumes that the fall-off of the light never reaches zero (lights have an infinite range). For deferred shading and forward+ we must be able to represent the lights in the scene as volumes with finite range so we need to use a different method to compute the attenuation of the light.
One possible method to compute the attenuation of the light is to perform a linear blend from 1.0 when the point is closest to the light and 0.0 if the point is at a distance greater than the range of the light. However a linear fall-off does not look very realistic as attenuation in reality is more similar to the reciprocal of a quadratic function.
I decided to use the smoothstep hlsl intrinsic function which returns a smooth interpolation between a minimum and maximum value.
// Compute the attenuation based on the range of the light.
float DoAttenuation( Light light, float d )
{
    return 1.0f - smoothstep( light.Range * 0.75f, light.Range, d );
}
The smoothstep function will return 0 when the distance to the light (d) is less than ¾ of the range of the light and 1 when the distance to the light is more than the range. Of course we want to reverse this interpolation so we just subtract this value from 1 to get the attenuation we need.
Optionally, we could adjust the smoothness of the light’s attenuation by parameterizing the 0.75f in the equation above. A smoothness factor of 0.0 would result in the intensity of the light remaining 1.0 all the way to the maximum range of the light, while a smoothness of 1.0 would result in the intensity of the light being interpolated through the entire range of the light.
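The attenuation curve, including an optional smoothness parameter, can be sketched in a few lines of Python. The `smoothness` default below is an assumption for illustration (the HLSL shader hard-codes the equivalent of 0.25, i.e. the 0.75f factor), and `smoothstep` follows the standard Hermite formula used by the HLSL intrinsic:

```python
def smoothstep(edge0, edge1, x):
    # Standard Hermite interpolation, as in the HLSL intrinsic.
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def do_attenuation(light_range, d, smoothness=0.25):
    # smoothness = 0 -> hard cutoff exactly at the light's range,
    # smoothness = 1 -> fall-off interpolated over the entire range.
    return 1.0 - smoothstep(light_range * (1.0 - smoothness), light_range, d)
```

With the default smoothness, a light with range 10 keeps full intensity out to a distance of 7.5, fades smoothly between 7.5 and 10, and contributes nothing beyond 10.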
Now let’s combine the diffuse, specular, and attenuation factors to compute the lighting contribution for each light type.
Point Lights
Point lights combine the attenuation, diffuse, and specular values to determine the final contribution of the light.
LightingResult DoPointLight( Light light, Material mat, float4 V, float4 P, float4 N )
{
    LightingResult result;

    float4 L = light.PositionVS - P;
    float distance = length( L );
    L = L / distance;

    float attenuation = DoAttenuation( light, distance );

    result.Diffuse = DoDiffuse( light, L, N ) * attenuation * light.Intensity;
    result.Specular = DoSpecular( light, mat, V, L, N ) * attenuation * light.Intensity;

    return result;
}
The diffuse and specular contributions are scaled by the attenuation and the light intensity factors before being returned from the function.
Spot Lights
In addition to the attenuation factor, spot lights also have a cone angle. In this case, the intensity of the light is scaled by the dot product between the light vector (L) and the direction of the spotlight. If the angle between light vector and the direction of the spotlight is less than the spotlight cone angle, then the point should be lit by the spotlight. Otherwise the spotlight should not contribute any light to the point being shaded. The DoSpotCone function will compute the intensity of the light based on the spotlight cone angle.
float DoSpotCone( Light light, float4 L )
{
    // If the cosine angle of the light's direction
    // vector and the vector from the light source to the point being
    // shaded is less than minCos, then the spotlight contribution will be 0.
    float minCos = cos( radians( light.SpotlightAngle ) );
    // If the cosine angle of the light's direction vector
    // and the vector from the light source to the point being shaded
    // is greater than maxCos, then the spotlight contribution will be 1.
    float maxCos = lerp( minCos, 1, 0.5f );
    float cosAngle = dot( light.DirectionVS, -L );
    // Blend between the minimum and maximum cosine angles.
    return smoothstep( minCos, maxCos, cosAngle );
}
First, the cosine angle of the spotlight cone is computed. If the dot product between the direction of the spotlight and the light vector (L) is less than the min cosine angle then the contribution of the light will be 0. If the dot product is greater than max cosine angle then the contribution of the spotlight will be 1.
It may seem counter-intuitive that the max cosine angle is a smaller angle than the min cosine angle but don’t forget that the cosine of 0° is 1 and the cosine of 90° is 0.
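The spot cone intensity can be checked numerically with a small Python sketch (the angle and cosine values below are made-up illustrations; `lerp(minCos, 1, 0.5)` reduces to the midpoint between minCos and 1):

```python
import math

def smoothstep(edge0, edge1, x):
    # Standard Hermite interpolation, as in the HLSL intrinsic.
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def do_spot_cone(spotlight_angle_deg, cos_angle):
    # cos_angle = dot(spotlight direction, -L), i.e. the cosine of the
    # angle between the spotlight axis and the direction to the point.
    min_cos = math.cos(math.radians(spotlight_angle_deg))
    max_cos = (min_cos + 1.0) * 0.5   # lerp(min_cos, 1, 0.5)
    return smoothstep(min_cos, max_cos, cos_angle)
```

For a 45° cone, a point on the spotlight axis (cos_angle = 1) receives full intensity, a point at 45° off-axis (cos_angle = cos 45° ≈ 0.707) receives none, and points in between fade smoothly, which matches the note above: larger cosines correspond to smaller angles.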
The DoSpotLight function will compute the spotlight contribution similar to that of the point light with the addition of the spotlight cone angle.
LightingResult DoSpotLight( Light light, Material mat, float4 V, float4 P, float4 N )
{
    LightingResult result;

    float4 L = light.PositionVS - P;
    float distance = length( L );
    L = L / distance;

    float attenuation = DoAttenuation( light, distance );
    float spotIntensity = DoSpotCone( light, L );

    result.Diffuse = DoDiffuse( light, L, N ) * attenuation * spotIntensity * light.Intensity;
    result.Specular = DoSpecular( light, mat, V, L, N ) * attenuation * spotIntensity * light.Intensity;

    return result;
}
Directional Lights
Directional lights are the simplest light type because they do not attenuate over the distance to the point being shaded.
LightingResult DoDirectionalLight( Light light, Material mat, float4 V, float4 P, float4 N )
{
    LightingResult result;

    float4 L = normalize( -light.DirectionVS );

    result.Diffuse = DoDiffuse( light, L, N ) * light.Intensity;
    result.Specular = DoSpecular( light, mat, V, L, N ) * light.Intensity;

    return result;
}
Final Shading
Now that we have the material properties and the summed lighting contributions of all of the lights in the scene, we can combine them to perform the final shading.
    float4 P = float4( IN.positionVS, 1 );

    LightingResult lit = DoLighting( Lights, mat, eyePos, P, N );

    diffuse *= float4( lit.Diffuse.rgb, 1.0f ); // Discard the alpha value from the lighting calculations.

    float4 specular = 0;
    if ( mat.SpecularPower > 1.0f ) // If specular power is too low, don't use it.
    {
        specular = mat.SpecularColor;
        if ( mat.HasSpecularTexture )
        {
            float4 specularTex = SpecularTexture.Sample( LinearRepeatSampler, IN.texCoord );
            if ( any( specular.rgb ) )
            {
                specular *= specularTex;
            }
            else
            {
                specular = specularTex;
            }
        }
        specular *= lit.Specular;
    }

    return float4( ( ambient + emissive + diffuse + specular ).rgb, alpha * mat.Opacity );
}
The lighting contribution is computed using the DoLighting function that was just described.

The material’s diffuse color is then modulated by the light’s diffuse contribution.
If the material’s specular power is not greater than 1.0, it will not be considered for final shading. Some artists will assign a specular power less than 1 if a material does not have a specular shine. In this case we just ignore the specular contribution and the material is considered diffuse only (Lambertian reflectance only). Otherwise, if the material has a specular color texture associated with it, it will be sampled and combined with the material’s specular color before it is modulated with the light’s specular contribution.
The final pixel color is the sum of the ambient, emissive, diffuse and specular components. The opacity of the pixel is determined by the alpha value that was determined earlier in the pixel shader.
Deferred Shading
The deferred shading technique consists of three passes:
- G-buffer pass
- Lighting pass
- Transparent pass
The g-buffer pass will fill the g-buffer textures that were described in the introduction. The lighting pass will render each light source as a geometric object and compute the lighting for covered pixels. The transparent pass will render transparent scene objects using the standard forward rendering technique.
G-Buffer Pass
The first pass of the deferred shading technique will generate the G-buffer textures. I will first describe the layout of the G-buffers.
G-Buffer Layout
The layout of the G-buffer could be the subject of an entire article on this website. The layout I chose for this demonstration is based on simplicity and necessity. It is not the most efficient G-buffer layout, as some data could be better packed into smaller buffers. There has been some discussion on packing attributes in the G-buffers, but I did not perform any analysis regarding the effects of using various packing methods.
The attributes that need to be stored in the G-buffers are:
- Depth/Stencil
- Light Accumulation
- Diffuse
- Specular
- Normals
Depth/Stencil Buffer
The Depth/Stencil texture is stored as 32 bits per pixel with 24 bits for the depth value as an unsigned normalized value (UNORM) and 8 bits for the stencil value as an unsigned integer (UINT). The texture resource for the depth buffer is created using the R24G8_TYPELESS texture format and the depth/stencil view is created with the D24_UNORM_S8_UINT texture format. When accessing the depth buffer in the pixel shader, the shader resource view is created using the R24_UNORM_X8_TYPELESS texture format since the stencil value is unused.
The Depth/Stencil buffer will be attached to the output merger stage and is not written by the G-buffer pixel shader. The depth values resulting from rasterizing the output of the vertex shader are written directly to the depth/stencil buffer.
Light Accumulation Buffer
The light accumulation buffer is used to store the final result of the lighting pass. This is the same buffer as the back buffer of the screen. If your G-buffer textures are the same dimension as your screen, there is no need to allocate an additional buffer for the light accumulation buffer and the back buffer of the screen can be used directly.
The light accumulation buffer is stored as a 32-bit 4-component unsigned normalized texture using the R8G8B8A8_UNORM texture format for both the texture resource and the shader resource view.
After the G-buffer pass, the light accumulation buffer initially only stores the ambient and emissive terms of the lighting equation. The image above was brightened considerably to make the scene more visible.
You may also notice that only the fully opaque objects in the scene are rendered. Deferred shading does not support transparent objects so only the opaque objects are rendered in the G-buffer pass.
As an optimization, you may also want to accumulate directional lights in the G-buffer pass and skip directional lights in the lighting pass. Since directional lights are rendered as full-screen quads in the lighting pass, accumulating them in the g-buffer pass may save some shader cycles if fill-rate is an issue. I’m not taking advantage of this optimization in this experiment because that would require storing directional lights in a separate buffer which is inconsistent with the way the forward and forward+ pixel shaders handle lighting.
Diffuse Buffer
The diffuse buffer is stored as a 32-bit 4-component unsigned normalized (UNORM) texture. Since only opaque objects are rendered in deferred shading, there is no need for the alpha channel in this buffer and it remains unused in this experiment. Both the texture resource and the shader resource view use the R8G8B8A8_UNORM texture format.
The above image shows the result of the diffuse buffer after the G-buffer pass.
Specular Buffer
Similar to the light accumulation and the diffuse buffers, the specular color buffer is stored as a 32-bit 4-component unsigned normalized texture using the R8G8B8A8_UNORM format. The red, green, and blue channels are used to store the specular color while the alpha channel is used to store the specular power. The specular power value is usually expressed in the range (1…256] (or higher) but it needs to be packed into the range [0…1] to be stored in the texture. To pack the specular power into the texture, I use the method described in a presentation given by Michiel van der Leeuw titled “Deferred Rendering in Killzone 2” [13]. In that presentation he uses the following equation to pack the specular power value:

packedSpecularPower = log2( specularPower ) / 10.5

This function allows for packing of specular power values in the range [1…1448.15] (since 2^10.5 ≈ 1448.15) and provides good precision for values in the normal specular range (1…256]. The graph below shows the progression of the packed specular value.
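The packing applied in the G-buffer pass (log2( specularPower ) / 10.5f) and the unpacking applied in the lighting pass (exp2( specular.a * 10.5f )) form an exact round trip, which can be verified with a few lines of Python (ignoring the quantization to 8 bits that the texture format introduces):

```python
import math

def pack_specular_power(p):
    # Matches the G-buffer pass: log2( specularPower ) / 10.5
    return math.log2(p) / 10.5

def unpack_specular_power(packed):
    # Matches the lighting pass: exp2( packed * 10.5 )
    return 2.0 ** (packed * 10.5)
```

A specular power of 1 packs to 0.0, the maximum representable power of about 1448.15 packs to roughly 1.0, and common values such as 256 land at log2(256)/10.5 ≈ 0.762, leaving most of the [0…1] range for the usual (1…256] powers.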
The result of packing specular power. The horizontal axis shows the original specular power and the vertical axis shows the packed specular power.
And the result of the specular buffer after the G-buffer pass looks like this.
Normal Buffer
The view space normals are stored in a 128-bit 4-component floating point buffer using the R32G32B32A32_FLOAT texture format. A normal buffer of this size is not really necessary and I could probably have packed the X and Y components of the normal into a 32-bit 2-component half-precision floating point buffer and recomputed the z-component in the lighting pass. For this experiment, I favored precision and simplicity over efficiency and since my GPU is not constrained by texture memory I used the largest possible buffer with the highest precision.
It would be worthwhile to investigate other texture formats for the normal buffer and analyze the quality versus performance tradeoffs. My hypothesis is that using a smaller texture format (for example R16G16_FLOAT) for the normal buffer would produce similar quality results while providing improved performance.
The image above shows the result of the normal buffer after the G-buffer pass.
Layout Summary
The total G-buffer layout is summarized in the table below.

Buffer              | Texture Format     | Bits/pixel | Contents
Depth/Stencil       | D24_UNORM_S8_UINT  | 32         | 24-bit depth, 8-bit stencil
Light Accumulation  | R8G8B8A8_UNORM     | 32         | Ambient + emissive (and final lit result)
Diffuse             | R8G8B8A8_UNORM     | 32         | Diffuse albedo (alpha unused)
Specular            | R8G8B8A8_UNORM     | 32         | Specular color (RGB), packed specular power (A)
Normal              | R32G32B32A32_FLOAT | 128        | View space normal

Layout of the G-buffer.
Pixel Shader
The pixel shader for the G-buffer pass is very similar to the pixel shader for the forward renderer. The primary difference is that no lighting calculations are performed in the G-buffer pass. Collecting the material properties is identical to the forward rendering technique, so I will not repeat that part of the shader code here.

To output the G-buffer data to the textures, each G-buffer texture is bound to a render target output using the PixelShaderOutput structure.
struct PixelShaderOutput
{
    float4 LightAccumulation : SV_Target0;
    float4 Diffuse           : SV_Target1;
    float4 Specular          : SV_Target2;
    float4 NormalVS          : SV_Target3;
};
Since the depth/stencil buffer is bound to the output-merger stage, we don’t need to output the depth value from the pixel shader.
Now let’s fill the G-buffer textures in the pixel shader.
[earlydepthstencil]
PixelShaderOutput PS_Geometry( VertexShaderOutput IN )
{
    PixelShaderOutput OUT;

    // Get emissive, ambient, diffuse, specular and normal values
    // in the same way as the forward rendering pixel shader.
    // The source code is not shown here for the sake of brevity.

    OUT.LightAccumulation = ( ambient + emissive );
    OUT.Diffuse = diffuse;
    OUT.Specular = float4( specular.rgb, log2( specularPower ) / 10.5f );
    OUT.NormalVS = N;

    return OUT;
}
Once all of the material properties have been retrieved, we only need to save the properties to the appropriate render target. The source code to read all of the material properties has been skipped for brevity. You can download the source code at the end of this article to see the complete pixel shader.
With the G-buffers filled, we can compute the final shading in the light pass. In the next sections, I will describe the method used by Guerrilla in Killzone 2 and I will also describe the implementation I used and explain why I used a different method.
Lighting Pass (Guerrilla)
The primary source of inspiration for the lighting pass of the deferred shading technique that I am using in this experiment comes from a presentation called “Deferred Rendering in Killzone 2” presented by Michiel van der Leeuw at the Sony Computer Entertainment Graphics Seminar at Palo Alto, California in August 2007 [13]. In Michiel’s presentation, he describes the lighting pass in four phases:
- Clear stencil buffer to 0,
- Mark pixels in front of the far light boundary,
- Count number of lit pixels inside the light volume,
- Shade the lit pixels
I will briefly describe the last three steps. I will then present the method I chose to use to implement the lighting pass of the deferred shading technique and explain why I chose a different method than what was explained in Michiel’s presentation.
Determine Lit Pixels
According to Michiel’s presentation, in order to determine which pixels are lit, you first need to render the back faces of the light volume and mark the pixels that are in front of the far light boundary. Then count the number of pixels that are behind the front faces of the light volume. And finally, shade the pixels that are marked and behind the front faces of the light volume.
Mark Pixels
In the first phase, the pixels that are in front of the back faces of the light volume will be marked in the stencil buffer. To do this, you must first clear the stencil buffer to 0 then configure the pipeline state with the following settings:
- Bind only the vertex shader (no pixel shader is required)
- Bind only the depth/stencil buffer to the output merger stage (since no pixel shader is bound, there is no need for a color buffer)
- Rasterizer State:
- Set cull mode to FRONT to render only the back faces of the light volume
- Depth/Stencil State:
- Enable depth testing
- Disable depth writes
- Set the depth function to GREATER_EQUAL
- Enable stencil operations
- Set stencil reference to 1
- Set stencil function to ALWAYS
- Set stencil operation to REPLACE on depth pass.
And render the light volume. The image below shows the effect of this operation.
The dotted line of the light volume is culled and only the back facing polygons are rendered. The green volumes show where the stencil buffer will be marked with the stencil reference value. The next step is to count the pixels inside the light volume.
Count Pixels
The next phase is to count the number of pixels that were both marked in the previous phase and are inside the light volume. This is done by rendering the front faces of the light volume and counting the number of pixels that are both stencil marked in the previous phase and behind the front faces of the light volume. In this case, the pipeline state should be configured with:
- Bind only the vertex shader (no pixel shader is required)
- Bind only the depth/stencil buffer to the output merger stage (since no pixel shader is bound, there is no need for a color buffer)
- Rasterizer State:
- Set cull mode to BACK to render only the front faces of the light volume
- Depth/Stencil State:
- Enable depth testing
- Disable depth writes
- Set the depth function to LESS_EQUAL so that only pixels behind the front faces of the light volume pass
- Enable stencil operations
- Set stencil reference to 1
- Set stencil function to EQUAL so that only pixels marked in the previous phase are counted
- Set stencil operation to KEEP
And render the light volume again with an occlusion pixel query to count the number of pixels that pass both the depth and stencil operations. The image below shows the effect of this operation.
Render front faces of light volume. Count pixels that are marked and behind the front faces of the light volume.
The red volume in the image shows the pixels that would be counted in this phase.
If the number of pixels rasterized is below a certain threshold, then the shading step can be skipped. If the number of rasterized pixels is above a certain threshold then the pixels need to be shaded.
Shade Pixels
The final step according to Michiel’s method is to shade the pixels that are inside the light volume. To do this the configuration of the pipeline state should be identical to the pipeline configuration of the count pixels phase with the addition of enabling additive blending, binding a pixel shader and attaching a color buffer to the output merger stage.
- Bind both vertex and pixel shaders
- Bind depth/stencil and light accumulation buffer to the output merger stage
- Configure the rasterizer and depth/stencil state identically to the count pixels phase
- Blend State:
- Enable additive blending
The result should be that only the pixels that are contained within the light volume are shaded.
Lighting Pass (My Implementation)
The problem with the lighting pass described in Michiel’s presentation is that the pixel query operation will almost certainly cause a stall while the CPU waits for the GPU query results to be returned. The stall can be avoided if the query results from the previous frame (or the previous two frames) are used instead of the query results from the current frame, relying on temporal coherence [15]. This would require multiple query objects to be created for each light source because query objects cannot be reused while they must remain persistent across multiple frames.
Since I am not doing shadow mapping in my implementation there was no apparent need to perform the pixel occlusion query that is described in Michiel’s presentation thus avoiding the potential stalls that are incurred from the query operation.
The other problem with the method described in Michiel’s presentation is that if the eye is inside the light volume then no pixels will be counted or shaded in the count pixels and shade pixels phases.
When the eye is inside the light volume, the front faces of the light volume will be clipped by the view frustum.
The green volume shown in the image represents the pixels of the stencil buffer that were marked in the first phase. There is no red volume showing the pixels that were shaded because the front faces of the light volume are clipped by the view frustum. I tried to find a way around this issue by disabling depth clipping but this only prevents clipping of pixels in front of the viewer (pixels behind the eye are still clipped).
To solve this problem, I reversed Michiel’s method:
- Clear stencil buffer to 1,
- Unmark pixels in front of the near light boundary,
- Shade pixels that are in front of the far light boundary
I will explain the last two steps of my implementation and describe the method used to shade the pixels.
Unmark Pixels
In the first phase of my implementation we need to unmark all of the pixels that are in front of the front faces of the light’s geometric volume. This ensures that pixels that occlude the light volume are not shaded in the next phase. To do this, the stencil buffer is first cleared to 1 (marking all pixels), and then the pixels in front of the front faces of the light volume are unmarked. The configuration of the pipeline state would look like this:
- Bind only the vertex shader (no pixel shader is required)
- Bind only the depth/stencil buffer to the output merger stage (since no pixel shader is bound, there is no need for a color buffer)
- Rasterizer State:
- Set cull mode to BACK to render only the front faces of the light volume
- Depth/Stencil State:
- Enable depth testing
- Disable depth writes
- Set the depth function to GREATER
- Enable stencil operations
- Set stencil function to ALWAYS
- Set stencil operation to DECR_SAT on depth pass.
And render the light volume. The image below shows the result of this operation.
Unmark pixels in the stencil buffer where the pixel is in front of the front faces of the light volume.
Setting the stencil operation to DECR_SAT will decrement and clamp the value in the stencil buffer to 0 if the depth test passes. The green volume shows where the stencil buffer will be decremented to 0. Consequently, if the eye is inside the light volume, all pixels will still be marked in the stencil buffer because the front faces of the light volume would be clipped by the viewing frustum and no pixels would be unmarked.
In the next phase the pixels in front of the back faces of the light volume will be shaded.
Shade Pixels
In this phase the pixels that are both in front of the back faces of the light volume and not unmarked in the previous phase will be shaded. In this case, the configuration of the pipeline state would look like this:
- Bind both vertex and pixel shaders
- Bind depth/stencil and light accumulation buffer to the output merger stage
- Configure the Rasterizer State:
- Set cull mode to FRONT to render only the back faces of the light volume
- Disable depth clipping
- Depth/Stencil State:
- Enable depth testing
- Disable depth writes
- Set the depth function to GREATER
You may have noticed that I also disable depth clipping in the rasterizer state for this phase. Doing this will ensure that if any part of the light volume exceeds the far clipping plane, it will not be clipped.
The image below shows the result of this operation.
The red volume shows pixels that will be shaded in this phase. This implementation will properly shade pixels even if the viewer is inside the light volume. In the second phase, only pixels that are both in front of the back faces of the light volume and not unmarked in the previous phase will be shaded.
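The stencil logic of the two phases can be illustrated with a toy per-pixel simulation. This is plain CPU code for illustration only, not how the GPU executes it; the clipped-front-face case (eye inside the volume) is modeled with a flag, and depth values grow with distance from the eye:

```python
def deferred_light_stencil(scene_depth, front_depth, back_depth, eye_inside=False):
    # Phase 0: the stencil buffer is cleared to 1 (all pixels marked).
    stencil = 1

    # Phase 1 (unmark): render front faces with depth func GREATER and
    # DECR_SAT on depth pass. A front-face fragment passes when it lies
    # behind the scene pixel, i.e. the scene pixel is in front of the
    # front face. When the eye is inside the volume the front faces are
    # clipped away, so nothing is unmarked.
    if not eye_inside and front_depth > scene_depth:
        stencil = max(stencil - 1, 0)  # DECR_SAT clamps at 0

    # Phase 2 (shade): render back faces with depth func GREATER; shade
    # pixels that are still marked and in front of the back faces.
    return stencil == 1 and back_depth > scene_depth
```

Running the four interesting cases shows the expected behavior: only scene points between the front and back faces are shaded, and a point in front of the back face is still shaded when the eye sits inside the volume.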
Next I’ll describe the pixel shader that is used to implement the deferred lighting pass.
Pixel Shader
The pixel shader is only bound during the shade pixels phase described above. It will fetch the texture data from the G-buffers and use it to shade the pixel using the same lighting model that was described in the Forward Rendering section.
Since all of our lighting calculations are performed in view space, we need to compute the view space position of the current pixel.
We will use the screen space position and the value in the depth buffer to compute the view space position of the current pixel. To do this, we will use the ClipToView function to convert clip space coordinates to view space and the ScreenToView function to convert screen coordinates to view space.
In order to facilitate these functions, we need to know the screen dimensions and the inverse projection matrix of the camera which should be passed to the shader from the application in a constant buffer.
// Parameters required to convert screen space coordinates to view space.
cbuffer ScreenToViewParams : register( b3 )
{
    float4x4 InverseProjection;
    float2 ScreenDimensions;
}
And to convert the screen space coordinates to clip space we need to scale and shift the screen space coordinates into clip space then transform the clip space coordinate into view space by multiplying the clip space coordinate by the inverse of the projection matrix.
// Convert clip space coordinates to view space.
float4 ClipToView( float4 clip )
{
    // View space position.
    float4 view = mul( InverseProjection, clip );
    // Perspective projection.
    view = view / view.w;

    return view;
}

// Convert screen space coordinates to view space.
float4 ScreenToView( float4 screen )
{
    // Convert to normalized texture coordinates.
    float2 texCoord = screen.xy / ScreenDimensions;

    // Convert to clip space.
    float4 clip = float4( float2( texCoord.x, 1.0f - texCoord.y ) * 2.0f - 1.0f, screen.z, screen.w );

    return ClipToView( clip );
}
First, we need to normalize the screen coordinates by dividing them by the screen dimensions. This will convert the screen coordinates that are expressed in the range ([0…SCREEN_WIDTH], [0…SCREEN_HEIGHT]) into the range ([0…1], [0..1]).
In DirectX, the screen origin (0, 0) is the top-left side of the screen and the screen’s y-coordinate increases from top to bottom. This is the opposite direction than the y-coordinate in clip space so we need to flip the y-coordinate in normalized screen space to get it in the range ([0…1], [1…0]). Then we need to scale the normalized screen coordinate by 2 to get it in the range ([0…2], [2…0]) and shift it by -1 to get it in the range ([-1…1], [1…-1]).
Now that we have the clip space position of the current pixel, we can use the ClipToView function to convert it into view space. This is done by multiplying the clip space coordinate by the inverse of the camera’s projection matrix and dividing by the w component to remove the perspective projection.
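The same transformation can be verified outside of HLSL. The sketch below re-implements ScreenToView in pure Python and round-trips a view-space point through a projection; the camera parameters, screen size, and the left-handed D3D-style projection matrix are made-up illustrations (the real shader receives InverseProjection and ScreenDimensions from the application):

```python
import math

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def mat_inverse(m):
    # Gauss-Jordan elimination with partial pivoting; fine for a 4x4.
    a = [row[:] + [1.0 if i == j else 0.0 for j in range(4)] for i, row in enumerate(m)]
    for i in range(4):
        p = max(range(i, 4), key=lambda r: abs(a[r][i]))
        a[i], a[p] = a[p], a[i]
        d = a[i][i]
        a[i] = [x / d for x in a[i]]
        for r in range(4):
            if r != i:
                f = a[r][i]
                a[r] = [x - f * y for x, y in zip(a[r], a[i])]
    return [row[4:] for row in a]

def perspective_lh(fovy_deg, aspect, zn, zf):
    # D3D-style left-handed projection, depth mapped to [0..1]
    # (column-vector convention: clip = P * view).
    f = 1.0 / math.tan(math.radians(fovy_deg) * 0.5)
    return [[f / aspect, 0.0, 0.0, 0.0],
            [0.0, f, 0.0, 0.0],
            [0.0, 0.0, zf / (zf - zn), -zn * zf / (zf - zn)],
            [0.0, 0.0, 1.0, 0.0]]

def screen_to_view(screen_xy, depth, screen_dim, inv_proj):
    # Normalize to [0..1], flip y, expand to [-1..1] clip space ...
    tx = screen_xy[0] / screen_dim[0]
    ty = screen_xy[1] / screen_dim[1]
    clip = [tx * 2.0 - 1.0, (1.0 - ty) * 2.0 - 1.0, depth, 1.0]
    # ... then undo the projection and the perspective divide.
    view = mat_vec(inv_proj, clip)
    return [c / view[3] for c in view]

# Round trip: project a view-space point to the screen, then recover it.
W, H = 1280.0, 720.0
proj = perspective_lh(90.0, W / H, 0.1, 100.0)
view_pt = [1.0, 2.0, 10.0, 1.0]
clip = mat_vec(proj, view_pt)
ndc = [c / clip[3] for c in clip]
screen = [(ndc[0] * 0.5 + 0.5) * W, (1.0 - (ndc[1] * 0.5 + 0.5)) * H]
recovered = screen_to_view(screen, ndc[2], (W, H), mat_inverse(proj))
```

The y-flip in `screen_to_view` mirrors the shader’s `1.0f - texCoord.y`, and the final division by `view[3]` is the perspective divide performed in ClipToView.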
Now let’s put this function to use in our shader.
[earlydepthstencil]
float4 PS_DeferredLighting( VertexShaderOutput IN ) : SV_Target
{
    // Everything is in view space.
    float4 eyePos = { 0, 0, 0, 1 };

    int2 texCoord = IN.position.xy;
    float depth = DepthTextureVS.Load( int3( texCoord, 0 ) ).r;

    float4 P = ScreenToView( float4( texCoord, depth, 1.0f ) );
The input structure to the deferred lighting pixel shader is identical to the output of the vertex shader including the position parameter that is bound to the SV_Position system value semantic. When used in a pixel shader, the value of the parameter bound to the SV_Position semantic will be the screen space position of the current pixel being rendered. We can use this value and the value from the depth buffer to compute the view space position.
Since the G-buffer textures have the same dimensions as the screen for the lighting pass, we can use the Texture2D.Load [16] method to fetch the texel from each of the G-buffer textures. The texture coordinate of the Texture2D.Load method is an int3 where the x and y components are the U and V texture coordinates in non-normalized screen coordinates and the z component is the mipmap level to sample. When sampling the G-buffer textures, we always want to sample mipmap level 0 (the most detailed mipmap level). Sampling from a lower mipmap level would cause the textures to appear blocky, and if no mipmaps have been generated for the G-buffer textures, sampling from a lower mipmap level will return black texels. The Texture2D.Load method does not perform any texture filtering when sampling the texture, making it faster than the Texture2D.Sample method when using linear filtering.
Once we have the screen space position and the depth value, we can use the ScreenToView function to convert the screen space position to view space.
Before we can compute the lighting, we need to sample the other components from the G-buffer textures.
    // View vector
    float4 V = normalize( eyePos - P );

    float4 diffuse = DiffuseTextureVS.Load( int3( texCoord, 0 ) );
    float4 specular = SpecularTextureVS.Load( int3( texCoord, 0 ) );
    float4 N = NormalTextureVS.Load( int3( texCoord, 0 ) );

    // Unpack the specular power from the alpha component of the specular color.
    float specularPower = exp2( specular.a * 10.5f );
On line 179 the specular power is unpacked from the alpha channel of the specular color using the inverse of the operation that was used to pack it in the specular texture in the G-buffer pass.
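The pack/unpack pair is easy to verify numerically. In this sketch, the packing formula log2(power) / 10.5 is inferred as the inverse of the exp2( a * 10.5 ) unpack shown above — the G-buffer pass's exact packing code may differ slightly:

```cpp
#include <cassert>
#include <cmath>

// Pack a specular power into a [0,1] value for the G-buffer alpha channel.
// (Inferred inverse of the unpack used in the lighting pass.)
float PackSpecularPower(float specularPower)
{
    return std::log2(specularPower) / 10.5f;
}

// Inverse used in the lighting pass (matches the HLSL exp2( specular.a * 10.5f )).
float UnpackSpecularPower(float packed)
{
    return std::exp2(packed * 10.5f);
}
```

A specular power of 1 packs to 0, and the round trip recovers the original value to within floating point precision; the usable range tops out around exp2(10.5) ≈ 1448.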
In order to retrieve the correct light properties, we need to know the index of the current light in the light buffer. For this, we will pass the light index of the current light in a constant buffer.
cbuffer LightIndexBuffer : register( b4 )
{
    // The index of the light in the Lights array.
    uint LightIndex;
}
And retrieve the light properties from the light list and compute the final shading.
    Light light = Lights[LightIndex];

    Material mat = (Material)0;
    mat.DiffuseColor = diffuse;
    mat.SpecularColor = specular;
    mat.SpecularPower = specularPower;

    LightingResult lit = (LightingResult)0;

    switch ( light.Type )
    {
    case DIRECTIONAL_LIGHT:
        lit = DoDirectionalLight( light, mat, V, P, N );
        break;
    case POINT_LIGHT:
        lit = DoPointLight( light, mat, V, P, N );
        break;
    case SPOT_LIGHT:
        lit = DoSpotLight( light, mat, V, P, N );
        break;
    }

    return ( diffuse * lit.Diffuse ) + ( specular * lit.Specular );
}
You may notice that we don’t need to check if the light is enabled in the shader like we did in the forward rendering shader. If the light is not enabled, the light volume should not be rendered by the application.
We also don’t need to check if the light is in range of the current pixel since the pixel shader should not be invoked on pixels that are out of range of the light.
The lighting functions were already explained in the section on forward rendering so they won’t be explained here again.
On line 203, the diffuse and specular terms are combined and returned from the shader. The ambient and emissive terms were already computed in the light accumulation buffer during the G-buffer shader. With additive blending enabled, all of the lighting terms will be summed correctly to compute final shading.
In the final pass, we need to render transparent objects.
Transparent Pass
The transparent pass for the deferred shading technique is identical to the forward rendering technique with alpha blending enabled. There is no new information to provide here. We will reflect on the performance of the transparent pass in the results section described later.
Now let’s take a look at the final technique that will be explained in this article; Forward+.
Forward+
Forward+ improves upon regular forward rendering by first determining which lights overlap which areas in screen space. During the shading phase, only the lights that are potentially overlapping the current fragment need to be considered. I use the term “potentially” because the technique used to determine overlapping lights is not completely accurate, as I will explain later.
The Forward+ technique consists primarily of these three passes:
- Light culling
- Opaque pass
- Transparent pass
In the light culling pass, each light in the scene is sorted into screen space tiles.
In the opaque pass, the light list generated from the light culling pass is used to compute the lighting for opaque geometry. In this pass, not all lights need to be considered for lighting; only the lights that were previously sorted into the current fragment’s screen space tile need to be considered when computing the lighting.
The transparent pass is similar to the opaque pass except the light list used for computing lighting is slightly different. I will explain the difference between the light list for the opaque pass and the transparent pass in the following sections.
Grid Frustums
Before light culling can occur, we need to compute the culling frustums that will be used to cull the lights into the screen space tiles. Since the culling frustums are expressed in view space, they only need to be recomputed if the dimension of the grid changes (for example, if the screen is resized) or the size of a tile changes. I will explain the basis of how the frustum planes for a tile are defined.
The screen is divided into a number of square tiles. I will refer to all of the screen tiles as the light grid. We need to specify a size for each tile. The size defines both the vertical and horizontal size of a single tile. The tile size should not be chosen arbitrarily; it should be chosen so that each tile can be computed by a single thread group in a DirectX compute shader [17]. The number of threads in a thread group should be a multiple of 64 (to take advantage of dual warp schedulers available on modern GPUs) and cannot exceed 1024 threads per thread group. Likely candidates for the dimensions of the thread group are:
- 8×8 (64 threads per thread group)
- 16×16 (256 threads per thread group)
- 32×32 (1024 threads per thread group)
For now, let’s assume that the thread group has a dimension of 16×16 threads. In this case, each tile for our light grid has a dimension of 16×16 screen pixels.
The image above shows a partial grid of 16×16 thread groups. Each thread group is divided by the thick black lines and the threads within a thread group are divided by the thin black lines. A tile used for light culling is also divided in the same way.
If we were to view the tiles at an oblique angle, we can visualize the culling frustum that we need to compute.
The above image shows that the camera’s position (eye) is the origin of the frustum and the corner points of the tile denote the frustum corners. With this information, we can compute the planes of the tile frustum.
A view frustum is composed of six planes, but to perform the light culling we want to pre-compute the four side planes for the frustum. The computation of the near and far frustum planes will be deferred until the light culling phase.
To compute the left, right, top, and bottom frustum planes we will use the following algorithm:
- Compute the four corner points of the current tile in screen space.
- Transform the screen space corner points to the far clipping plane in view space.
- Build the frustum planes from the eye position and two other corner points.
- Store the computed frustum in a RWStructuredBuffer.
A plane can be computed if we know three points that lie on the plane [18]. If we number the corner points of a tile, as shown in the above image, we can compute the frustum planes using the eye position and two other corner points in view space.
For example, we can use the following points to compute the frustum planes assuming a counter-clockwise winding order:
- Left Plane: Eye, Bottom-Left (2), Top-Left (0)
- Right Plane: Eye, Top-Right (1), Bottom-Right (3)
- Top Plane: Eye, Top-Left (0), Top-Right (1)
- Bottom Plane: Eye, Bottom-Right (3), Bottom-Left (2)
If we know three non-collinear points p0, p1, and p2 that lie in the plane (as shown in the above image), we can compute the normal n to the plane [18]:

n = (p1 − p0) × (p2 − p0)
If n is normalized, then a given point p that lies on the plane can be used to compute the signed distance d from the origin to the plane:

d = n · p
This is referred to as the constant-normal form of the plane [18] and can also be expressed as

n · X = d

Where X is any point that lies in the plane and d = n · p, given that p is a known point in the plane.
In the HLSL shader, we can define a plane by its unit normal N and its distance d to the origin.
struct Plane
{
    float3 N;   // Plane normal.
    float  d;   // Distance to origin.
};
Given three non-collinear counter-clockwise points that lie in the plane, we can compute the plane using the ComputePlane function in HLSL.
// Compute a plane from 3 noncollinear points that form a triangle.
// This equation assumes a right-handed (counter-clockwise winding order)
// coordinate system to determine the direction of the plane normal.
Plane ComputePlane( float3 p0, float3 p1, float3 p2 )
{
    Plane plane;

    float3 v0 = p1 - p0;
    float3 v2 = p2 - p0;

    plane.N = normalize( cross( v0, v2 ) );

    // Compute the distance to the origin using p0.
    plane.d = dot( plane.N, p0 );

    return plane;
}
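A host-side version of the same construction is handy for sanity checking the winding convention. This is a sketch — the minimal float3 type and operators here are stand-ins, not the demo's math library:

```cpp
#include <cassert>
#include <cmath>

struct float3 { float x, y, z; };

float3 operator-(float3 a, float3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
float  dot(float3 a, float3 b)       { return a.x * b.x + a.y * b.y + a.z * b.z; }
float3 cross(float3 a, float3 b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
float3 normalize(float3 v)
{
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

struct Plane { float3 N; float d; };

// Same construction as the HLSL ComputePlane above:
// N = normalize((p1 - p0) x (p2 - p0)), d = N . p0.
Plane ComputePlane(float3 p0, float3 p1, float3 p2)
{
    Plane plane;
    plane.N = normalize(cross(p1 - p0, p2 - p0));
    plane.d = dot(plane.N, p0);
    return plane;
}
```

For the counter-clockwise triangle (0,0,0), (1,0,0), (0,1,0) this yields the +z normal (0, 0, 1) with d = 0, confirming the right-handed winding convention.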
And a frustum is defined as a structure of four planes.
// Four planes of a view frustum (in view space).
// The planes are:
//  * Left,
//  * Right,
//  * Top,
//  * Bottom.
// The back and/or front planes can be computed from depth values in the
// light culling compute shader.
struct Frustum
{
    Plane planes[4];   // left, right, top, bottom frustum planes.
};
To precompute the grid frustums we need to invoke a compute shader kernel for each tile in the grid. For example, if the screen resolution is 1280×720 and the light grid is partitioned into 16×16 tiles, we need to compute 80×45 (3,600) frustums. If a thread group contains 16×16 (256) threads we need to dispatch 5×2.8125 thread groups to compute all of the frustums. Of course we can’t dispatch partial thread groups so we need to round up to the nearest whole number when dispatching the compute shader. In this case, we will dispatch 5×3 (15) thread groups each with 16×16 (256) threads and in the compute shader we must make sure that we simply ignore threads that are out of the screen bounds.
The above image shows the thread groups that will be invoked to generate the tile frustums assuming a 16×16 thread group. The thick black lines denote the thread group boundary and the thin black lines represent the threads in a thread group. The blue threads represent threads that will be used to compute a tile frustum and the red threads should simply skip the frustum tile computations because they extend past the size of the screen.
We can use the following formula to determine the dimension of the dispatch:

T = ( ceil( w / B ), ceil( h / B ), 1 )
G = ( ceil( T.x / B ), ceil( T.y / B ), 1 )

Where T is the total number of threads that will be dispatched, w is the screen width in pixels, h is the screen height in pixels, B is the size of the thread group (in our example, this is 16) and G is the number of thread groups to execute.
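The dispatch size computation translates directly to host code. A sketch — the helper names (DivideByMultiple, ComputeGridFrustumsDispatch) are mine, not from the demo:

```cpp
#include <cassert>
#include <cstdint>

// Integer ceiling division: how many groups of `divisor` cover `value`.
constexpr uint32_t DivideByMultiple(uint32_t value, uint32_t divisor)
{
    return (value + divisor - 1) / divisor;
}

struct Dispatch { uint32_t x, y, z; };

// One thread per light-grid tile, BLOCK_SIZE x BLOCK_SIZE threads per group.
constexpr Dispatch ComputeGridFrustumsDispatch(uint32_t screenW, uint32_t screenH,
                                               uint32_t blockSize)
{
    // Total threads needed: one per tile of the light grid.
    uint32_t tilesX = DivideByMultiple(screenW, blockSize);
    uint32_t tilesY = DivideByMultiple(screenH, blockSize);

    // Round the thread count up to whole thread groups.
    return { DivideByMultiple(tilesX, blockSize),
             DivideByMultiple(tilesY, blockSize), 1 };
}
```

For the 1280×720 example with a block size of 16, this gives an 80×45 tile grid and a 5×3×1 dispatch, matching the numbers above.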
With this information we can dispatch the compute shader that will be used to precompute the grid frustums.
Grid Frustums Compute Shader
By default, the size of a thread group for the compute shader will be 16×16 threads but the application can define a different block size during shader compilation.
#ifndef BLOCK_SIZE
#pragma message( "BLOCK_SIZE undefined. Default to 16." )
#define BLOCK_SIZE 16 // should be defined by the application.
#endif
And we’ll define a common structure to store the common compute shader input variables.
struct ComputeShaderInput
{
    uint3 groupID          : SV_GroupID;          // 3D index of the thread group in the dispatch.
    uint3 groupThreadID    : SV_GroupThreadID;    // 3D index of local thread ID in a thread group.
    uint3 dispatchThreadID : SV_DispatchThreadID; // 3D index of global thread ID in the dispatch.
    uint  groupIndex       : SV_GroupIndex;       // Flattened local index of the thread within a thread group.
};
See [10] for a list of the system value semantics that are available as inputs to a compute shader.
In addition to the system values that are provided by HLSL, we also need to know the total number of threads and the total number of thread groups in the current dispatch. Unfortunately HLSL does not provide system value semantics for these properties. We will store the required values in a constant buffer called DispatchParams.
// Global variables
cbuffer DispatchParams : register( b4 )
{
    // Number of groups dispatched. (This parameter is not available as an HLSL system value!)
    uint3 numThreadGroups;
//  uint  padding   // implicit padding to 16 bytes.

    // Total number of threads dispatched. (Also not available as an HLSL system value!)
    // Note: This value may be less than the actual number of threads executed
    // if the screen size is not evenly divisible by the block size.
    uint3 numThreads;
//  uint  padding   // implicit padding to 16 bytes.
}
The value of the numThreads variable can be used to ensure that a thread in the dispatch is not used if it is out of bounds of the screen as described earlier.
To store the result of the computed grid frustums, we also need to create a structured buffer that is large enough to store one frustum per tile. This buffer will be bound to the out_Frustums RWStructuredBuffer variable using an unordered access view.
// View space frustums for the grid cells.
RWStructuredBuffer<Frustum> out_Frustums : register( u0 );
Tile Corners in Screen Space
In the compute shader, the first thing we need to do is determine the screen space points of the corners of the tile frustum using the current thread’s global ID in the dispatch.
// A kernel to compute frustums for the grid.
// This kernel is executed once per grid cell. Each thread
// computes a frustum for a grid cell.
[numthreads( BLOCK_SIZE, BLOCK_SIZE, 1 )]
void CS_ComputeFrustums( ComputeShaderInput IN )
{
    // View space eye position is always at the origin.
    const float3 eyePos = float3( 0, 0, 0 );

    // Compute the 4 corner points on the far clipping plane to use as the
    // frustum vertices.
    float4 screenSpace[4];
    // Top left point
    screenSpace[0] = float4( IN.dispatchThreadID.xy * BLOCK_SIZE, -1.0f, 1.0f );
    // Top right point
    screenSpace[1] = float4( float2( IN.dispatchThreadID.x + 1, IN.dispatchThreadID.y ) * BLOCK_SIZE, -1.0f, 1.0f );
    // Bottom left point
    screenSpace[2] = float4( float2( IN.dispatchThreadID.x, IN.dispatchThreadID.y + 1 ) * BLOCK_SIZE, -1.0f, 1.0f );
    // Bottom right point
    screenSpace[3] = float4( float2( IN.dispatchThreadID.x + 1, IN.dispatchThreadID.y + 1 ) * BLOCK_SIZE, -1.0f, 1.0f );
To convert the global thread ID to the screen space position, we simply multiply by the size of a tile in the light grid. The z-component of the screen space position is -1 because I am using a right-handed coordinate system which has the camera looking in the -z axis in view space. If you are using a left-handed coordinate system, you should use 1 for the z-component. This gives us the screen space positions of the tile corners at the far clipping plane.
Tile Corners in View Space
Next we need to convert the screen space positions into view space using the ScreenToView function that was described in the section about the deferred rendering pixel shader.
    float3 viewSpace[4];
    // Now convert the screen space points to view space
    for ( int i = 0; i < 4; i++ )
    {
        viewSpace[i] = ScreenToView( screenSpace[i] ).xyz;
    }
Compute Frustum Planes
Using the view space positions of the tile corners, we can build the frustum planes.
    // Now build the frustum planes from the view space points
    Frustum frustum;

    // Left plane
    frustum.planes[0] = ComputePlane( eyePos, viewSpace[2], viewSpace[0] );
    // Right plane
    frustum.planes[1] = ComputePlane( eyePos, viewSpace[1], viewSpace[3] );
    // Top plane
    frustum.planes[2] = ComputePlane( eyePos, viewSpace[0], viewSpace[1] );
    // Bottom plane
    frustum.planes[3] = ComputePlane( eyePos, viewSpace[3], viewSpace[2] );
Store Grid Frustums
And finally we need to write the frustum to global memory. We must be careful that we don’t access an array element that is out of bounds of the allocated frustum buffer.
    // Store the computed frustum in global memory (if our thread ID is in bounds of the grid).
    if ( IN.dispatchThreadID.x < numThreads.x && IN.dispatchThreadID.y < numThreads.y )
    {
        uint index = IN.dispatchThreadID.x + ( IN.dispatchThreadID.y * numThreads.x );
        out_Frustums[index] = frustum;
    }
}
Now that we have the precomputed grid frustums, we can use them in the light culling compute shader.
Light Culling
The next step of the Forward+ rendering technique is to cull the lights using the grid frustums that were computed in the previous section. The grid frustums only need to be recomputed if the dimensions of the grid change (for example, when the screen is resized) or the size of the tiles changes, but the light culling phase must occur in every frame in which the camera moves, a light moves, or an object changes in a way that affects the contents of the depth buffer. Since any one of these events can occur at any time, it is generally safest to perform light culling each and every frame.
The basic algorithm for performing light culling is as follows:
- Compute the min and max depth values in view space for the tile
- Cull the lights and record the lights into a light index list
- Copy the light index list into global memory
Compute Min/Max Depth Values
The first step of the algorithm is to compute the minimum and maximum depth values per tile of the light grid. The minimum and maximum depth values will be used to compute the near and far planes for our culling frustum.
The image above shows an example scene. The blue objects represent opaque objects in the scene. The yellow objects represent light sources and the shaded gray areas represent the tile frustums that are computed from the minimum and maximum depth values per tile. The green lines represent the tile boundaries for the light grid. The tiles are numbered 1-7 from top to bottom and the opaque objects are numbered 1-5 and the lights are numbered 1-4.
The first tile has a maximum depth value of 1 (in projected clip space) because there are some pixels that are not covered by opaque geometry. In this case, the culling frustum is very large and may contain lights that don’t affect the geometry. For example, light 1 is contained within tile 1 but light 1 does not affect any geometry. At geometry boundaries, the culling frustum could potentially be very large and may contain lights that don’t affect any geometry.
The minimum and maximum depth values in tile 2 are the same because object 2 is directly facing the camera and fills the entire tile. This won’t be a problem as we will see later when we perform the actual clipping of the light volume.
Object 3 fully occludes light 3 and thus will not be considered when shading any fragments.
The above image depicts the minimum and maximum depth values per tile for opaque geometry. For transparent geometry, we can only clip light volumes that are behind the maximum depth planes, but we must consider all lights that are in front of all opaque geometry. The reason for this is that when performing the depth pre-pass step to generate the depth texture which is used to determine the minimum and maximum depths per tile, we cannot render transparent geometry into the depth buffer. If we did, then we would not correctly light opaque geometry that is behind transparent geometry. The solution to this problem is described in an article titled “Tiled Forward Shading” by Markus Billeter, Ola Olsson, and Ulf Assarsson [4]. In the light culling compute shader, two light lists will be generated. The first light list contains only the lights that are affecting opaque geometry. The second light list contains only the lights that could affect transparent geometry. When performing final shading on opaque geometry then I will send the first list and when rendering transparent geometry, I will send the second list to the fragment shader.
Before I discuss the light culling compute shader, I will discuss the method that is used to build the light lists in the compute shader.
Light List Data Structure
The data structure that is used to store the per-tile light lists is described in the paper titled “Tiled Shading” from Ola Olsson and Ulf Assarsson [5]. Ola and Ulf describe a data structure in two parts. The first part is the light grid, which is a 2D grid that stores an offset and a count of values stored in a light index list. This technique is similar to an index buffer, which refers to the indices of vertices in a vertex buffer.
The size of the light grid is based on the number of screen tiles that are used for light culling. The size of the light index list is based on the expected average number of overlapping lights per tile. For example, a screen resolution of 1280×720 and a tile size of 16×16 results in an 80×45 (3,600) light grid. Assuming an average of 200 lights per tile, this requires a light index list of 720,000 indices. Each light index costs 4 bytes (for a 32-bit unsigned integer), so the light list consumes 2.88 MB of GPU memory. Since we need a separate list for transparent and opaque geometry, this consumes a total of 5.76 MB. Although 200 lights may be an overestimation of the average number of overlapping lights per tile, the storage usage is not outrageous.
To generate the light grid and the light index list, a group-shared light index list is first generated in the compute shader. A global light index list counter is used to keep track of the current index into the global light index list. The global light index counter is atomically incremented so that no two thread groups can use the same range in the global light index list. Once the thread group has “reserved” space in the global light index list, the group-shared light index list is copied to the global light index list.
The following pseudo code demonstrates this technique.
function CullLights( L, C, G, I )
  Input:  A set L of n lights.
  Input:  A counter C of the current index into the global light index list.
  Input:  A 2D grid G of index offset and count in the global light index list.
  Input:  A list I of global light index list.
  Output: A 2D grid G with the current tiles offset and light count.
  Output: A list I with the current tiles overlapping light indices appended to it.

1.  let t be the index of the current tile   ; t is the 2D index of the tile.
2.  let i be a local light index list        ; i is a local light index list.
3.  let f <- Frustum(t)                      ; f is the frustum for the current tile.
4.  for l in L                               ; Iterate the lights in the light list.
5.      if Cull( l, f )                      ; Cull the light against the tile frustum.
6.          AppendLight( l, i )              ; Append the light to the local light index list.
7.  c <- AtomicInc( C, i.count )             ; Atomically increment the current index of the
                                             ; global light index list by the number of lights
                                             ; overlapping the current tile and store the
                                             ; original index in c.
8.  G(t) <- ( c, i.count )                   ; Store the offset and light count in the light grid.
9.  I(c) <- i                                ; Store the local light index list into the global
                                             ; light index list.
On the first three lines, the index of the current tile in the grid is defined as t. The local light index list is defined as i and the tile frustum that is used to perform light culling for the current tile is defined as f.
Lines 4, 5, and 6 loop through the global light list and cull the lights against the current tile’s culling frustum. If the light is inside the frustum, the light index is added to the local light index list.
On line 7 the current index in the global light index list is incremented by the number of lights that are contained in the local light index list. The original value of the global light index list counter before being incremented is stored in the local counter variable c.
On line 8, the light grid G is updated with the current tile’s offset and count into the global light index list.
And finally, on line 9 the local light index list is copied to the global light index list.
The light grid and the global light index list are then used in the fragment shader to perform final shading.
Frustum Culling
To perform frustum culling on the light volumes, two frustum culling methods will be presented:
- Frustum-Sphere culling for point lights
- Frustum-Cone culling for spot lights
The culling algorithm for spheres is fairly straightforward. The culling algorithm for cones is slightly more complicated. First I will describe the frustum-sphere algorithm and then I will describe the cone-culling algorithm.
Frustum-Sphere Culling
We have already seen the definition of the culling frustum in the previous section titled Compute Grid Frustums. A sphere is defined as a center point in view space, and a radius.
struct Sphere
{
    float3 c;   // Center point.
    float  r;   // Radius.
};
A sphere is considered to be “inside” a plane if it is fully contained in the negative half-space of the plane. If a sphere is completely “inside” any of the frustum planes then it is outside of the frustum.
We can use the following formula to determine the signed distance l of a sphere from a plane [18]:

l = (n · c) − d

Where l is the signed distance from the sphere to the plane, c is the center point of the sphere, n is the unit normal to the plane, and d is the distance from the plane to the origin. If l is less than −r, where r is the radius of the sphere, then we know that the sphere is fully contained in the negative half-space of the plane.
// Check to see if a sphere is fully behind (inside the negative halfspace of) a plane.
// Source: Real-time collision detection, Christer Ericson (2005)
bool SphereInsidePlane( Sphere sphere, Plane plane )
{
    return dot( plane.N, sphere.c ) - plane.d < -sphere.r;
}
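The same predicate can be checked on the CPU. A sketch — the minimal float3 type here is a stand-in for the shader types:

```cpp
#include <cassert>

struct float3 { float x, y, z; };
float dot(float3 a, float3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Sphere { float3 c; float r; };
struct Plane  { float3 N; float d; };

// Same predicate as the HLSL SphereInsidePlane above: true only when the
// sphere is fully contained in the negative half-space of the plane,
// i.e. when the signed distance (N . c) - d is less than -r.
bool SphereInsidePlane(Sphere sphere, Plane plane)
{
    return dot(plane.N, sphere.c) - plane.d < -sphere.r;
}
```

Against the plane z = 0 with normal (0, 0, 1), a unit sphere centered at z = −2 is fully behind the plane, while one centered at z = −0.5 straddles it and is not culled.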
Then we can iteratively apply the SphereInsidePlane function to determine if the sphere is contained inside the culling frustum.
// Check to see if a light is partially contained within the frustum.
bool SphereInsideFrustum( Sphere sphere, Frustum frustum, float zNear, float zFar )
{
    bool result = true;

    // First check depth
    // Note: Here, the view vector points in the -Z axis so the
    // far depth value will be approaching -infinity.
    if ( sphere.c.z - sphere.r > zNear || sphere.c.z + sphere.r < zFar )
    {
        result = false;
    }

    // Then check frustum planes
    for ( int i = 0; i < 4 && result; i++ )
    {
        if ( SphereInsidePlane( sphere, frustum.planes[i] ) )
        {
            result = false;
        }
    }

    return result;
}
Since the sphere is described in view space, we can quickly determine if the light should be culled based on its z-position and the distance to the near and far clipping planes. If the sphere is either fully in front of the near clipping plane, or fully behind the far clipping plane, then the light can be discarded. Otherwise we have to check if the light is within the bounds of the culling frustum.
The SphereInsideFrustum function assumes a right-handed coordinate system with the camera looking towards the negative z axis. In this case, the far plane is approaching negative infinity, so we have to check if the sphere is farther away (more negative than the far plane). For a left-handed coordinate system, the zNear and zFar variables should be swapped on line 268.
Frustum-Cone Culling
To perform frustum-cone culling, I will use the technique described by Christer Ericson in his book titled "Real-Time Collision Detection" [18]. A cone can be defined by its tip T, a normalized direction vector d, the height of the cone h, and the radius of the base r.
T is the tip of the cone, d is the direction, h is the height and r is the radius of the base of the cone.
In HLSL the cone is defined as
struct Cone
{
    float3 T;   // Cone tip.
    float  h;   // Height of the cone.
    float3 d;   // Direction of the cone.
    float  r;   // bottom radius of the cone.
};
To test if a cone is completely contained in the negative half-space of a plane, only two points need to be tested.
- The tip T of the cone
- The point Q on the base of the cone that is farthest from the plane in the direction opposite the plane normal n
If both of these points are contained in the negative half-space of any of the frustum planes, then the cone can be culled.
To determine the point Q that is farthest from the plane in the direction opposite the plane normal n, we first compute an intermediate vector

m = (n × d) × d

which lies in the base of the cone (it is perpendicular to the cone axis d) and points away from the positive half-space of the plane. Q is then obtained by stepping from the tip T along the cone axis d a distance h and then along the base of the cone, away from the positive half-space of the plane, a distance r:

Q = T + h d − r m
If n × d is zero, then the cone axis d is parallel to the plane normal n and m will be a zero vector. This special case does not need to be handled explicitly because in that case the equation reduces to:

Q = T + h d

Which results in the correct point that needs to be tested.
With points T and Q computed, we can test whether both points are in the negative half-space of the plane. If they are, we can conclude that the light can be culled. To test whether a point X is in the negative half-space of the plane, we can use the following equation:

l = (n · X) − d

Where l is the signed distance from the point to the plane and X is the point to be tested. If l is negative, then the point is contained in the negative half-space of the plane.
In HLSL, the function PointInsidePlane is used to test if a point is inside the negative half-space of a plane.
// Check to see if a point is fully behind (inside the negative halfspace of) a plane.
bool PointInsidePlane( float3 p, Plane plane )
{
    return dot( plane.N, p ) - plane.d < 0;
}
And the ConeInsidePlane function is used to test if a cone is fully contained in the negative half-space of a plane.
// Check to see if a cone is fully behind (inside the negative halfspace of) a plane.
// Source: Real-time collision detection, Christer Ericson (2005)
bool ConeInsidePlane( Cone cone, Plane plane )
{
    // Compute the farthest point on the end of the cone to the positive space of the plane.
    float3 m = cross( cross( plane.N, cone.d ), cone.d );
    float3 Q = cone.T + cone.d * cone.h - m * cone.r;

    // The cone is in the negative halfspace of the plane if both
    // the tip of the cone and the farthest point on the end of the cone to the
    // positive halfspace of the plane are both inside the negative halfspace
    // of the plane.
    return PointInsidePlane( cone.T, plane ) && PointInsidePlane( Q, plane );
}
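The cone test can also be exercised on the CPU. A sketch — the minimal float3 type and operators are stand-ins; as in the HLSL above, m is left unnormalized so that the case where d is parallel to N degenerates gracefully to Q = T + h·d:

```cpp
#include <cassert>

struct float3 { float x, y, z; };
float3 operator+(float3 a, float3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
float3 operator-(float3 a, float3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
float3 operator*(float3 a, float s)  { return { a.x * s, a.y * s, a.z * s }; }
float  dot(float3 a, float3 b)       { return a.x * b.x + a.y * b.y + a.z * b.z; }
float3 cross(float3 a, float3 b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

struct Plane { float3 N; float d; };
struct Cone  { float3 T; float h; float3 d; float r; };

bool PointInsidePlane(float3 p, Plane plane)
{
    return dot(plane.N, p) - plane.d < 0;
}

// Same test as the HLSL ConeInsidePlane above: the cone is culled when both
// the tip T and the farthest base point Q lie in the negative half-space.
bool ConeInsidePlane(Cone cone, Plane plane)
{
    float3 m = cross(cross(plane.N, cone.d), cone.d);
    float3 Q = cone.T + cone.d * cone.h - m * cone.r;
    return PointInsidePlane(cone.T, plane) && PointInsidePlane(Q, plane);
}
```

Against the plane x = 0 with normal (1, 0, 0), a cone with its tip at x = −5 pointing along +z lies entirely in the negative half-space and is culled, while a cone with its tip at x = 0.5 is not.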
The ConeInsideFrustum function is used to test if the cone is contained within the clipping frustum. This function will return true if the cone is inside the frustum or false if it is fully contained in the negative half-space of any of the clipping planes.
bool ConeInsideFrustum( Cone cone, Frustum frustum, float zNear, float zFar )
{
    bool result = true;

    Plane nearPlane = { float3( 0, 0, -1 ), -zNear };
    Plane farPlane = { float3( 0, 0, 1 ), zFar };

    // First check the near and far clipping planes.
    if ( ConeInsidePlane( cone, nearPlane ) || ConeInsidePlane( cone, farPlane ) )
    {
        result = false;
    }

    // Then check frustum planes
    for ( int i = 0; i < 4 && result; i++ )
    {
        if ( ConeInsidePlane( cone, frustum.planes[i] ) )
        {
            result = false;
        }
    }

    return result;
}
First we check if the cone is clipped by the near or far clipping planes. Otherwise we have to check the four planes of the culling frustum. If the cone is in the negative half-space of any of the clipping planes, the function will return false.
Now we can put this together to define the light culling compute shader.
Light Culling Compute Shader
The purpose of the light culling compute shader is to update the global light index list and the light grid that is required by the fragment shader. Two lists need to be updated per frame:
- Light index list for opaque geometry
- Light index list for transparent geometry
To differentiate between the two lists in the HLSL compute shader, I will use the prefix "o_" to refer to the opaque lists and "t_" to refer to transparent lists. Both lists will be updated in the light culling compute shader.
First we will declare the resources that are required by the light culling compute shader.
// The depth from the screen space texture.
Texture2D DepthTextureVS : register( t3 );
// Precomputed frustums for the grid.
StructuredBuffer<Frustum> in_Frustums : register( t9 );
In order to read the depth values that are generated during the depth pre-pass, the resulting depth texture will need to be sent to the light culling compute shader. The DepthTextureVS texture contains the result of the depth pre-pass.
The in_Frustums is the structured buffer that was computed in the compute frustums compute shader and was described in the section titled Grid Frustums Compute Shader.
We also need to keep track of the index into the global light index lists.
// Global counter for current index into the light index list.
// "o_" prefix indicates light lists for opaque geometry while
// "t_" prefix indicates light lists for transparent geometry.
RWStructuredBuffer<uint> o_LightIndexCounter : register( u1 );
RWStructuredBuffer<uint> t_LightIndexCounter : register( u2 );
The o_LightIndexCounter is the current index of the global light index list for opaque geometry and the t_LightIndexCounter is the current index of the global light index list for transparent geometry.
Although the light index counters are of type RWStructuredBuffer these buffers only contain a single unsigned integer at index 0.
// Light index lists and light grids.
RWStructuredBuffer<uint> o_LightIndexList : register( u3 );
RWStructuredBuffer<uint> t_LightIndexList : register( u4 );
RWTexture2D<uint2> o_LightGrid : register( u5 );
RWTexture2D<uint2> t_LightGrid : register( u6 );
The light index lists are stored as a 1D array of unsigned integers but the light grids are stored as 2D textures where each "texel" is a 2-component unsigned integer vector. The light grid texture is created using the R32G32_UINT format.
To store the min and max depth values per tile, we need to declare some group-shared variables. Atomic functions will be used to make sure that only one thread at a time can change the min/max depth values but unfortunately, shader model 5.0 does not provide atomic functions for floating-point values. To circumvent this limitation, the depth values will be stored as unsigned integers in group-shared memory which will be atomically compared and updated per thread.
groupshared uint uMinDepth;
groupshared uint uMaxDepth;
Since the frustum used to perform culling will be the same frustum for all threads in a group, it makes sense to keep only one copy of the frustum for all threads in a group. Only thread 0 in the group will need to copy the frustum from the global memory buffer and we also reduce the amount of local register memory required per thread.
groupshared Frustum GroupFrustum;
We also need to declare group-shared variables to create the temporary light lists. We will need a separate list for opaque and transparent geometry.
// Opaque geometry light lists.
groupshared uint o_LightCount;
groupshared uint o_LightIndexStartOffset;
groupshared uint o_LightList[1024];

// Transparent geometry light lists.
groupshared uint t_LightCount;
groupshared uint t_LightIndexStartOffset;
groupshared uint t_LightList[1024];
The LightCount will keep track of the number of lights that are intersecting the current tile frustum.
The LightIndexStartOffset is the offset into the global light index list. This index will be written to the light grid and is used as the starting offset when copying the local light index list to global light index list.
The local light index list will allow us to store as many as 1024 lights in a single tile. This maximum value will almost never be reached (at least it shouldn't be!). Keep in mind that when we allocated storage for the global light list, we accounted for an average of 200 lights per tile. It is possible that there are some tiles that contain more than 200 lights (as long as it is not more than 1024) and some tiles that contain less than 200 lights but we expect the average to be about 200 lights per tile. As previously mentioned, the estimate of an average of 200 lights per tile is probably an overestimation but since GPU memory is not a limiting constraint for this project, I can afford to be liberal with my estimations.
To update the local light counter and the light list, I will define a helper function called AppendLight. Unfortunately I have not yet figured out how to pass group-shared variables as arguments to a function so for now I will define two versions of the same function. One version of the function is used to update the light index list for opaque geometry and the other version is for transparent geometry.
// Add the light to the visible light list for opaque geometry.
void o_AppendLight( uint lightIndex )
{
    uint index; // Index into the visible lights array.
    InterlockedAdd( o_LightCount, 1, index );
    if ( index < 1024 )
    {
        o_LightList[index] = lightIndex;
    }
}

// Add the light to the visible light list for transparent geometry.
void t_AppendLight( uint lightIndex )
{
    uint index; // Index into the visible lights array.
    InterlockedAdd( t_LightCount, 1, index );
    if ( index < 1024 )
    {
        t_LightList[index] = lightIndex;
    }
}
The InterlockedAdd function guarantees that the group-shared light count variable is only updated by a single thread at a time. This way we avoid any race conditions that may occur when multiple threads try to increment the group-shared light count at the same time.
The value of the light count before it is incremented is stored in the index local variable and used to update the light index in the group-shared light index list.
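The append pattern can be mimicked on the CPU. This Python sketch (illustrative only; on the GPU the increment itself is atomic and happens in hardware via InterlockedAdd) shows the pre-increment semantics that hand each caller a unique slot:

```python
class LightList:
    """CPU mimic of a group-shared light list (illustrative only)."""
    def __init__(self, capacity=1024):
        self.count = 0
        self.lights = [0] * capacity

    def append(self, light_index):
        # Like InterlockedAdd: 'index' receives the value *before* the
        # increment, so each caller gets a unique slot.
        index = self.count
        self.count += 1
        if index < len(self.lights):
            self.lights[index] = light_index

lst = LightList()
for light in (7, 42, 3):
    lst.append(light)
assert lst.count == 3
assert lst.lights[:3] == [7, 42, 3]
```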
The method to compute the minimum and maximum depth range per tile is taken from the presentation titled "DirectX 11 Rendering in Battlefield 3" by Johan Andersson in 2011 [3] and "Tiled Shading" by Ola Olsson and Ulf Assarsson [5].
The first thing we will do in the light culling compute shader is read the depth value for the current thread. Each thread in the thread group will sample the depth buffer only once for the current thread and thus all threads in a group will sample all depth values for a single tile.
// Implementation of light culling compute shader is based on the presentation
// "DirectX 11 Rendering in Battlefield 3" (2011) by Johan Andersson, DICE.
// Retrieved from:
// Retrieved: July 13, 2015
// And "Forward+: A Step Toward Film-Style Shading in Real Time", Takahiro Harada (2012)
// published in "GPU Pro 4", Chapter 5 (2013) Taylor & Francis Group, LLC.
[numthreads( BLOCK_SIZE, BLOCK_SIZE, 1 )]
void CS_main( ComputeShaderInput IN )
{
    // Calculate min & max depth in threadgroup / tile.
    int2 texCoord = IN.dispatchThreadID.xy;
    float fDepth = DepthTextureVS.Load( int3( texCoord, 0 ) ).r;

    uint uDepth = asuint( fDepth );
Since we can only perform atomic operations on integers, on line 100 we reinterpret the bits of the floating-point depth as an unsigned integer. Since we expect all depth values in the depth map to be stored in the range [0...1] (that is, all positive depth values), reinterpreting the float as an int will still allow us to correctly perform comparisons on these values. As long as we don't try to perform any arithmetic operations on the unsigned integer depth values, we will get the correct minimum and maximum values.
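The reason this works can be checked on the CPU. The following Python sketch (illustrative; asuint is reimplemented here with the struct module) confirms that unsigned-integer ordering matches floating-point ordering for non-negative values:

```python
import struct

def asuint(f):
    # Reinterpret the bits of a 32-bit float as an unsigned int,
    # mirroring HLSL's asuint().
    return struct.unpack('<I', struct.pack('<f', f))[0]

# For non-negative IEEE-754 floats the bit pattern increases monotonically
# with the value, so unsigned-integer comparisons give the same ordering.
depths = [0.73, 0.0, 1.0, 0.125, 0.999, 0.5]
assert sorted(depths, key=asuint) == sorted(depths)
assert asuint(0.25) < asuint(0.75)
```

This guarantee breaks down for negative floats (their sign bit makes them compare as very large unsigned integers), which is why the [0...1] depth range assumption matters.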
    if ( IN.groupIndex == 0 ) // Avoid contention by other threads in the group.
    {
        uMinDepth = 0xffffffff;
        uMaxDepth = 0;
        o_LightCount = 0;
        t_LightCount = 0;
        GroupFrustum = in_Frustums[IN.groupID.x + ( IN.groupID.y * numThreadGroups.x )];
    }

    GroupMemoryBarrierWithGroupSync();
Since we are setting group-shared variables, only one thread in the group needs to set them. In fact the HLSL compiler will generate a race-condition error if we don't restrict the writing of these variables to a single thread in the group.
To make sure that every thread in the group has reached the same point in the compute shader, we invoke the GroupMemoryBarrierWithGroupSync function. This ensures that any writes to group shared memory have completed and the thread execution for all threads in a group have reached this point.
Next, we'll determine the minimum and maximum depth values for the current tile.
    InterlockedMin( uMinDepth, uDepth );
    InterlockedMax( uMaxDepth, uDepth );

    GroupMemoryBarrierWithGroupSync();
The InterlockedMin and InterlockedMax methods are used to atomically update the uMinDepth and uMaxDepth group-shared variables based on the current thread's depth value.
We again need to use the GroupMemoryBarrierWithGroupSync function to ensure all writes to group-shared memory have been committed and all threads in the group have reached this point in the compute shader.
After the minimum and maximum depth values for the current tile have been found, we can reinterpret the unsigned integer back to a float so that we can use it to compute the view space clipping planes for the current tile.
    float fMinDepth = asfloat( uMinDepth );
    float fMaxDepth = asfloat( uMaxDepth );

    // Convert depth values to view space.
    float minDepthVS = ClipToView( float4( 0, 0, fMinDepth, 1 ) ).z;
    float maxDepthVS = ClipToView( float4( 0, 0, fMaxDepth, 1 ) ).z;
    float nearClipVS = ClipToView( float4( 0, 0, 0, 1 ) ).z;

    // Clipping plane for minimum depth value
    // (used for testing lights within the bounds of opaque geometry).
    Plane minPlane = { float3( 0, 0, -1 ), -minDepthVS };
On line 118 the minimum and maximum depth values need to be reinterpreted from unsigned integers back to floating-point values so that they can be used to compute the correct points in view space.
The view space depth values are computed using the ClipToView function and extracting the z component of the position in view space. We only need these values to compute the near and far clipping planes in view space so we only need to know the distance from the viewer.
When culling lights for transparent geometry, we don't want to use the minimum depth value from the depth map. Instead we will clip the lights using the camera's near clipping plane. In this case, we will use the nearClipVS value which is the distance to the camera's near clipping plane in view space.
Since I'm using a right-handed coordinate system with the camera pointing towards the negative z axis in view space, the minimum depth clipping plane is computed with a normal n = (0, 0, -1) pointing in the direction of the negative z axis, and the distance to the origin is d = -minDepthVS. We can verify that this is correct by using the constant-normal form of a plane:

n · X = d

By substituting n = (0, 0, -1), X = (0, 0, minDepthVS) and d = -minDepthVS we get:

(0, 0, -1) · (0, 0, minDepthVS) = -minDepthVS

which implies that X = (0, 0, minDepthVS) is a point on the minimum depth clipping plane.
    // Cull lights
    // Each thread in a group will cull 1 light until all lights have been culled.
    for ( uint i = IN.groupIndex; i < NUM_LIGHTS; i += BLOCK_SIZE * BLOCK_SIZE )
    {
        if ( Lights[i].Enabled )
        {
            Light light = Lights[i];
If every thread in the thread group checks one light in the global light list at the same time, then we can check 16x16 (256) lights per iteration of the for-loop defined on line 132. The loop starts with i = IN.groupIndex and i is incremented by BLOCK_SIZE * BLOCK_SIZE (256) for each iteration of the loop. This implies that for NUM_LIGHTS > 256, each thread in the thread group will check every 256th light until all lights have been checked.
- Thread 0 checks: { 0, 256, 512, 768, ... }
- Thread 1 checks: { 1, 257, 513, 769, ... }
- Thread 2 checks: { 2, 258, 514, 770, ... }
- ...
- Thread 255 checks: { 255, 511, 767, 1023, ... }
For 10,000 lights, the for loop only needs 40 iterations (per thread) to check all lights for a tile.
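The interleaved iteration pattern can be sketched on the CPU (Python, illustrative only):

```python
BLOCK_SIZE = 16
THREADS = BLOCK_SIZE * BLOCK_SIZE   # 256 threads per tile
NUM_LIGHTS = 10000

# Each thread starts at its group index and strides by the group size,
# exactly like the for-loop in the compute shader.
assignments = {t: list(range(t, NUM_LIGHTS, THREADS)) for t in range(THREADS)}

# Every light is checked exactly once across the whole group...
checked = sorted(i for lights in assignments.values() for i in lights)
assert checked == list(range(NUM_LIGHTS))

# ...and no thread needs more than ceil(10000 / 256) = 40 iterations.
assert max(len(v) for v in assignments.values()) == 40
```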
First we'll check point lights using the SphereInsideFrustum function that was defined earlier.
            switch ( light.Type )
            {
            case POINT_LIGHT:
            {
                Sphere sphere = { light.PositionVS.xyz, light.Range };
                if ( SphereInsideFrustum( sphere, GroupFrustum, nearClipVS, maxDepthVS ) )
                {
                    // Add light to light list for transparent geometry.
                    t_AppendLight( i );

                    if ( !SphereInsidePlane( sphere, minPlane ) )
                    {
                        // Add light to light list for opaque geometry.
                        o_AppendLight( i );
                    }
                }
            }
            break;
On line 142 a sphere is defined using the position and range of the light.
First we check if the light is within the tile frustum using the near clipping plane of the camera and the maximum depth read from the depth buffer. If the light volume is in this range, it is added to the light index list for transparent geometry.
To check if the light should be added to the global light index list for opaque geometry, we only need to check the minimum depth clipping plane that was previously defined on line 128. If the light is within the culling frustum for transparent geometry and in front of the minimum depth clipping plane, the index of the light is added to the light index list for opaque geometry.
Next, we'll check spot lights.
            case SPOT_LIGHT:
            {
                float coneRadius = tan( radians( light.SpotlightAngle ) ) * light.Range;
                Cone cone = { light.PositionVS.xyz, light.Range, light.DirectionVS.xyz, coneRadius };
                if ( ConeInsideFrustum( cone, GroupFrustum, nearClipVS, maxDepthVS ) )
                {
                    // Add light to light list for transparent geometry.
                    t_AppendLight( i );

                    if ( !ConeInsidePlane( cone, minPlane ) )
                    {
                        // Add light to light list for opaque geometry.
                        o_AppendLight( i );
                    }
                }
            }
            break;
Checking cones is almost identical to checking spheres so I won't go into any detail here. The radius of the base of the spotlight cone is not stored with the light so it needs to be calculated for the ConeInsideFrustum function. To compute the radius of the base of the cone, we can use the tangent of the spotlight angle multiplied by the height of the cone.
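The base-radius computation is simple trigonometry and can be sanity-checked on the CPU (Python, illustrative; the function name is mine):

```python
import math

def spot_cone_radius(spotlight_angle_deg, light_range):
    # Radius of the cone's base = tan(half-angle) * height,
    # where the cone's height is the light's range.
    return math.tan(math.radians(spotlight_angle_deg)) * light_range

# A spotlight with a 45-degree half-angle has a base radius equal to its range.
assert abs(spot_cone_radius(45.0, 10.0) - 10.0) < 1e-9
```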
And finally we need to check directional lights. This is by far the easiest part of this function.
            case DIRECTIONAL_LIGHT:
            {
                // Directional lights always get added to our light list.
                // (Hopefully there are not too many directional lights!)
                t_AppendLight( i );
                o_AppendLight( i );
            }
            break;
            }
        }
    }

    // Wait till all threads in group have caught up.
    GroupMemoryBarrierWithGroupSync();
There is no way to reliably cull directional lights so if we encounter a directional light, we have no choice but to add its index to the light index list.
To ensure that all threads in the thread group have recorded their lights to the group-shared light index list, we will invoke the GroupMemoryBarrierWithGroupSync function to synchronize all threads in the group.
After we have added all non-culled lights to the group-shared light index lists we need to copy it to the global light index list. First, we'll update the global light index list counter.
    // Update global memory with visible light buffer.
    // First update the light grid (only thread 0 in group needs to do this)
    if ( IN.groupIndex == 0 )
    {
        // Update light grid for opaque geometry.
        InterlockedAdd( o_LightIndexCounter[0], o_LightCount, o_LightIndexStartOffset );
        o_LightGrid[IN.groupID.xy] = uint2( o_LightIndexStartOffset, o_LightCount );

        // Update light grid for transparent geometry.
        InterlockedAdd( t_LightIndexCounter[0], t_LightCount, t_LightIndexStartOffset );
        t_LightGrid[IN.groupID.xy] = uint2( t_LightIndexStartOffset, t_LightCount );
    }

    GroupMemoryBarrierWithGroupSync();
We will once again use the InterlockedAdd function to increment the global light index list counter by the number of lights that were appended to the group-shared light index list. On lines 194 and 198 the light grid is updated with the offset and light count of the global light index list.
To avoid race conditions, only the first thread in the thread group will be used to update the global memory.
On line 201, all threads in the thread group must be synced again before we can update the global light index list.
    // Now update the light index list (all threads).
    // For opaque geometry.
    for ( i = IN.groupIndex; i < o_LightCount; i += BLOCK_SIZE * BLOCK_SIZE )
    {
        o_LightIndexList[o_LightIndexStartOffset + i] = o_LightList[i];
    }
    // For transparent geometry.
    for ( i = IN.groupIndex; i < t_LightCount; i += BLOCK_SIZE * BLOCK_SIZE )
    {
        t_LightIndexList[t_LightIndexStartOffset + i] = t_LightList[i];
    }
To update the opaque and transparent global light index lists, we will allow all threads to write a single index into the light index list using a similar method that was used to iterate the light list on lines 132-183 shown previously.
At this point both the light grid and the global light index list contain the necessary data to be used by the pixel shader to perform final shading.
Final Shading
The last part of the Forward+ rendering technique is final shading. This method is no different from the standard forward rendering technique that was discussed in the section titled Forward Rendering - Pixel Shader except that instead of looping through the entire global light list, we use the light index list that was generated in the light culling phase.
In addition to the properties that were described in the section about standard forward rendering, the Forward+ pixel shader also needs to take the light index list and the light grid that was generated in the light culling phase.
StructuredBuffer<uint> LightIndexList : register( t9 );
Texture2D<uint2> LightGrid : register( t10 );
When rendering opaque geometry, you must take care to bind the light index list and light grid for opaque geometry, and when rendering transparent geometry, the light index list and light grid for transparent geometry. Of course this seems obvious, but the only differentiating factor for the final shading pixel shader is the light index list and light grid that are bound to the pixel shader stage.
[earlydepthstencil]
float4 PS_main( VertexShaderOutput IN ) : SV_TARGET
{
    // Compute ambient, emissive, diffuse, specular, and normal
    // similar to standard forward rendering.
    // That code is omitted here for brevity.

    // Get the index of the current pixel in the light grid.
    uint2 tileIndex = uint2( floor( IN.position.xy / BLOCK_SIZE ) );

    // Get the start position and offset of the light in the light index list.
    uint startOffset = LightGrid[tileIndex].x;
    uint lightCount = LightGrid[tileIndex].y;

    LightingResult lit = (LightingResult)0; // DoLighting( Lights, mat, eyePos, P, N );

    for ( uint i = 0; i < lightCount; i++ )
    {
        uint lightIndex = LightIndexList[startOffset + i];
        Light light = Lights[lightIndex];

        LightingResult result = (LightingResult)0;

        switch ( light.Type )
        {
        case DIRECTIONAL_LIGHT:
        {
            result = DoDirectionalLight( light, mat, V, P, N );
        }
        break;
        case POINT_LIGHT:
        {
            result = DoPointLight( light, mat, V, P, N );
        }
        break;
        case SPOT_LIGHT:
        {
            result = DoSpotLight( light, mat, V, P, N );
        }
        break;
        }
        lit.Diffuse += result.Diffuse;
        lit.Specular += result.Specular;
    }

    diffuse *= float4( lit.Diffuse.rgb, 1.0f ); // Discard the alpha value from the lighting calculations.
    specular *= lit.Specular;

    return float4( ( ambient + emissive + diffuse + specular ).rgb, alpha * mat.Opacity );
}
Most of the code for this pixel shader is identical to that of the forward rendering pixel shader so it is omitted here for brevity. The primary concept here is shown on line 298 where the tile index into the light grid is computed from the screen space position. Using the tile index, the start offset and light count are read from the light grid on lines 301 and 302.
The for-loop defined on line 306 loops over the light count, reads the light's index from the light index list, and uses that index to retrieve the light from the global light list.
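The tile lookup can be mirrored on the CPU. This Python sketch (illustrative only; tile_index, lights_for_pixel, and the example grid data are all made up for the purpose of the demonstration) shows how a pixel finds its lights:

```python
BLOCK_SIZE = 16

def tile_index(screen_x, screen_y):
    # Map a pixel's screen-space position to its tile in the light grid.
    return (int(screen_x // BLOCK_SIZE), int(screen_y // BLOCK_SIZE))

def lights_for_pixel(light_grid, light_index_list, screen_x, screen_y):
    # The light grid stores (start offset, light count) per tile; the
    # offset indexes into the flat global light index list.
    start_offset, light_count = light_grid[tile_index(screen_x, screen_y)]
    return [light_index_list[start_offset + i] for i in range(light_count)]

# Hypothetical data: tile (0,0) sees lights 5 and 9; tile (1,0) sees light 9.
light_grid = {(0, 0): (0, 2), (1, 0): (2, 1)}
light_index_list = [5, 9, 9]
assert lights_for_pixel(light_grid, light_index_list, 8.5, 3.0) == [5, 9]
assert lights_for_pixel(light_grid, light_index_list, 17.0, 3.0) == [9]
```

Note that a light's index may appear in the index list once per tile it overlaps (light 9 above), which is exactly why the grid stores per-tile offsets into a shared flat list.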
Now let's see how the performance of the various methods compare.
Experiment Setup and Performance Results
To measure the performance of the various rendering techniques, I used the Crytek Sponza scene [11] on an NVIDIA GeForce GTX 680 GPU at a screen resolution of 1280x720. The camera was placed close to the world origin and the lights were animated to rotate in a circle around the world origin.
I tested each rendering technique using two scenarios:
- Large lights with a range of 35-40 units
- Small lights with a range of 1-2 units
Having a few (2-3) large lights in the scene is a realistic scenario (for example key light, fill light, and back light [25]). These lights may be shadow casters that set the mood and create the ambiance of the scene. Having many (more than 5) large lights that fill the screen is not necessarily a realistic scenario, but I wanted to see how the various techniques scaled when using large, screen-filling lights.
Having many small lights is a more realistic scenario that might be commonly used in games. Many small lights can be used to simulate area lights or bounced lighting effects similar to the effects of global illumination algorithms that are usually only simulated using light maps or light probes as described in the section titled Forward Rendering.
Although the demo supports directional lights I did not test the performance of rendering using directional lights. Directional lights are large screen filling lights that are similar to lights having a range of 35-40 units (the first scenario).
In both scenarios, lights were randomly placed within the bounds of the scene. The Sponza scene was scaled down so that its bounds were approximately 30 units in the X and Z axes and 15 units in the Y axis.
Each graph displays a set of curves that represent the various phases of the rendering technique. The horizontal axis represents the number of lights in the scene and the vertical axis represents the running time measured in milliseconds. Each graph also displays a minimum and maximum threshold. The minimum threshold is displayed as a green horizontal line and represents the ideal frame rate of 60 frames per second (FPS), or 16.6 ms. The maximum threshold is displayed as a red horizontal line and represents the lowest acceptable frame rate of 30 FPS, or 33.3 ms.
Forward Rendering Performance
Let us first analyze the performance of the forward rendering technique using large lights.
Large Lights
The graph below shows the performance results of the forward rendering technique using large lights.
The graph displays the two primary phases of the forward rendering technique. The purple curve shows the opaque pass and the dark red curve shows the transparent pass. The orange line shows the total time to render the scene.
As can be seen by this graph, rendering opaque geometry takes the most amount of time and increases exponentially as the number of lights increases. The time to render transparent geometry also increases exponentially but there is much less transparent geometry in the scene than opaque geometry so the increase seems more gradual.
Even with very large lights, standard forward rendering is able to render 64 dynamic lights while still keeping the frame time below the maximum threshold of 33.3 ms (30 FPS). With more than 512 lights, the frame time becomes immeasurably high.
From this we can conclude that if the scene contains more than 64 large visible lights, you may want to consider using a different rendering technique than forward rendering.
Small Lights
Forward rendering performs better when the scene contains many small lights. In this case, the rendering technique can handle twice as many lights while still maintaining acceptable performance. After more than 1024 lights, the frame time was so high, it was no longer worth measuring.
We see again that the most amount of time is spent rendering opaque geometry which is not surprising. The trends for both large and small lights are similar but when using small lights, we can create twice as many lights while achieving acceptable frame-rates.
Next I'll analyze the performance of the deferred rendering technique.
Deferred Rendering Performance
The same experiment was repeated but this time using the deferred rendering technique. Let's first analyze the performance of using large screen-filling lights.
Large Lights
The graph below shows the performance results of deferred rendering using large lights.
Rendering large lights using deferred rendering proved to be only marginally better than forward rendering. Since rendering transparent geometry uses the exact same code path as the forward rendering technique, the performance of rendering transparent geometry using forward versus deferred rendering is virtually identical. As expected, there is no performance benefit when rendering transparent geometry.
The marginal performance benefit of rendering opaque geometry using deferred rendering is primarily due to the reduced number of redundant lighting computations that forward rendering performs on occluded geometry. Redundant lighting computations that are performed when using forward rendering can be mitigated by using a depth pre-pass which would allow for early z-testing to reject fragments before performing expensive lighting calculations. Deferred rendering implicitly benefits from early z-testing and stencil operations that are not performed during forward rendering.
Small Lights
The graph below shows the performance results of deferred rendering using small lights.
The graph shows that deferred rendering is capable of rendering 512 small dynamic lights while still maintaining acceptable frame rates. In this case the time to render transparent geometry greatly exceeds that of rendering opaque geometry. If rendering only opaque objects, then the deferred rendering technique is capable of rendering 2048 lights while keeping the frame time below the minimum threshold of 16.6 ms (60 FPS). Rendering transparent geometry exceeds the maximum threshold after about 700 lights.
Forward Plus Performance
The same experiment was repeated once again using tiled forward rendering. First we will analyze the performance characteristics using large lights.
Large Lights
The graph below shows the performance results of tiled forward rendering using large scene lights.
The graph shows that tiled forward rendering is not well suited for rendering scenes with many large lights. Rendering 512 screen filling lights in the scene caused issues because the demo only accounts for having an average of 200 lights per tile. With 512 large lights the 200 light average was exceeded and many tiles simply appeared black.
Using large lights, the light culling phase never exceeded 1 ms but the opaque pass and the transparent pass quickly exceeded the maximum frame-rate threshold of 30 FPS.
Small Lights
The graph shows the performance of tiled forward rendering using small lights.
Forward plus really shines when using many small lights. In this case we see that the light culling phase (orange line) is the primary bottleneck of the rendering technique. Even with over 16,000 lights, rendering opaque (blue line) and transparent (purple line) geometry fall below the minimum threshold to achieve a desired frame-rate of 60 FPS. The majority of the frame time is consumed by the light culling phase.
Now let's see how the three techniques compare against each other.
Techniques Compared
First we'll look at how the three techniques compare when using large lights.
Large Lights
The graph below shows the performance of the three rendering techniques when using large lights.
As expected, forward rendering is the most expensive rendering algorithm when rendering large lights. Deferred rendering and tiled forward rendering are comparable in performance. Even if we disregard rendering transparent geometry in the scene, deferred rendering and tiled forward rendering have similar performance characteristics.
If we consider scenes with only a few large lights, there are still no discernible performance benefits between forward, deferred, or forward plus rendering.
If we consider the memory footprint required to perform forward rendering versus deferred rendering versus tiled forward rendering then traditional forward rendering has the smallest memory usage.
Regardless of the number of lights in the scene, deferred rendering requires about four bytes of GPU memory per pixel per additional G-buffer render target. Tiled forward rendering requires additional GPU storage for the light index list and the light grid which must be stored even when the scene contains only a few dynamic lights.
- Deferred Rendering (Diffuse, Specular, Normal @ 1280x720): +11 MB
- Tiled Forward Rendering (Light Index List, Light Grid @ 1280x720): +5.76 MB
The additional storage requirements for deferred rendering are based on an additional three full-screen buffers at 32 bits (4 bytes) per pixel. The depth/stencil buffer and the light accumulation buffer are not considered additional storage because standard forward rendering uses these buffers as well.

The additional storage requirements for tiled forward rendering are based on two light index lists that have enough storage for an average of 200 lights per tile and two 80x45 light grids that store a 2-component unsigned integer per grid cell.
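The figures above can be reproduced with a bit of arithmetic (Python, illustrative only):

```python
width, height = 1280, 720

# Deferred: three extra full-screen G-buffer targets at 32 bits (4 bytes) per pixel.
gbuffer_bytes = 3 * width * height * 4
assert gbuffer_bytes == 11_059_200            # ~11 MB

# Forward+: two light index lists (average 200 lights per tile, 4-byte uint
# per index) plus two 80x45 light grids (8-byte uint2 per tile).
tiles_x, tiles_y = width // 16, height // 16  # 80 x 45 tiles
index_list_bytes = 2 * tiles_x * tiles_y * 200 * 4
light_grid_bytes = 2 * tiles_x * tiles_y * 8
assert index_list_bytes == 5_760_000          # 5.76 MB
assert light_grid_bytes == 57_600             # a comparatively tiny 57.6 KB
```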
If GPU storage is a rare commodity for the target platform and there is no need for many lights in the scene, traditional forward rendering is still the best choice.
Small Lights
The graph below shows the performance of the three rendering techniques when using small lights.
In the case of small lights, tiled forward rendering clearly comes out as the winner in terms of rendering times. Up until somewhere around 128 lights, deferred and tiled forward rendering are comparable in performance but quickly diverge when the scene contains many dynamic lights. Also we must consider the fact that a large portion of the deferred rendering technique is consumed by rendering transparent objects. If transparent objects are not a requirement, then deferred rendering may be a viable option.
Even with small lights, deferred rendering requires many more draw calls to render the geometry of the light volumes. Using deferred rendering, each light volume must be rendered at least twice: the first draw call updates the stencil buffer and the second performs the lighting equations. If the graphics platform is very sensitive to excessive draw calls, then deferred rendering may not be the best choice.
Similar to the scenario with large lights, when rendering only a few lights in the scene then all three techniques have similar performance characteristics. In this case, we must consider the additional memory requirements that are imposed by deferred and tiled forward rendering. Again, if GPU memory is scarce and there is no need for many dynamic lights in the scene then standard forward rendering may be a viable solution.
Future Considerations
While working on this project I have identified several issues that would benefit from consideration in the future.
- General Issues:
- Size of the light structure
- Forward Rendering:
- Depth pre-pass
- View frustum culling of visible lights
- Deferred Rendering:
- Optimize G-buffers
- Rendering of directional lights
- Tiled Forward Rendering
- Improve light culling
General Considerations
For each of the rendering techniques used in this demo there is only a single global light list which stores directional, point, and spot lights in a single data structure. In order to store all of the properties necessary to perform correct lighting, each individual light structure requires 160 bytes of GPU memory. If we only store the absolute minimum amount of information needed to describe a light source, we could take advantage of improved caching of the light data and potentially improve rendering performance across all rendering techniques. This may require additional data structures that store only the information needed by the compute or fragment shader, or separate lists for directional, spot, and point lights so that no redundant information is stored for a given light type.
Forward Rendering
This implementation of the forward rendering technique makes no attempt to optimize the forward rendering pipeline. Culling lights against the view frustum would be a reasonable method to improve the rendering performance of the forward renderer.
Performing a depth pre-pass as the first step of the forward rendering technique would allow us to take advantage of early z-testing to eliminate redundant lighting calculations.
Deferred Rendering
When creating the implementation for the deferred rendering technique, I did not spend much time evaluating the performance of deferred rendering dependent on the format of the G-buffer textures used. The layout of the G-buffer was chosen for simplicity and ease of use. For example, the G-buffer texture to store view space normals uses a 4-component 32-bit floating-point buffer. Storing this render target as a 2-component 16-bit fixed-point buffer would not only reduce the buffer size by 75%, it would also improve texture caching. The only change that would need to be made to the shader is the method used to pack and unpack the normal data in the buffer. To pack the normal into the G-buffer, we would only need to cast the normalized 32-bit floating-point x and y values of the normal into 16-bit floating point values and store them in the render target. To unpack the normals in the lighting pass, we could read the 16-bit components from the buffer and compute the z-component of the normal by applying the following formula:
z = sqrt(1 - x^2 - y^2)

This would result in the z-component of the normal always being positive, in the range [0, 1]. This is usually not a problem since the normals are always stored in view space, and if the normal's z-component were negative it would be back-facing, and back-facing polygons should be culled anyway.
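As a quick sanity check of that reconstruction, here is a small numeric sketch in plain Python (not shader code; the sample normal is made up):

```python
# Reconstruct the z-component of a unit-length view-space normal from
# its stored x and y components.
import math

def unpack_normal(x, y):
    # z follows from the unit-length constraint x^2 + y^2 + z^2 = 1.
    # The clamp guards against tiny precision errors in the stored values.
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)

# A front-facing unit normal (0.6, 0.0, 0.8): storing only x and y
# still lets us recover z.
print(unpack_normal(0.6, 0.0))  # approximately (0.6, 0.0, 0.8)
```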
Another potential area of improvement for the deferred renderer is the handling of directional lights. Currently the implementation renders directional lights as full-screen quads in the lighting pass. This may not be the best approach as even a few directional lights will cause severe overdraw and could become a problem on fill-rate bound hardware. To mitigate this issue, we could move the lighting computations for directional lights into the G-buffer pass and accumulate the lighting contributions from directional lights into the light accumulation buffer similar to how ambient and emissive terms are being applied.
This technique could be further improved by performing a depth-prepass before the G-buffer pass to allow for early z-testing to remove redundant lighting calculations.
One of the advantages of using deferred rendering is that shadow maps can be recycled because only a single light is being rendered in the lighting pass at a time so only one shadow map needs to be allocated. Moving the lighting calculations for directional lights to the G-buffer pass would require that any shadow maps used by the directional lights need to be available before the G-buffer pass. This is only a problem if there are a lot of shadow casting directional lights in the scene. If using a lot of shadow-casting directional lights, this method of performing lighting computations of directional lights in the G-buffer pass may not be feasible.
Tiled Forward Rendering
As can be seen from the experiment results, the light culling stage takes a considerable amount of time to perform. If the performance of the light culling phase could be improved then we could gain an overall performance improvement of the tiled forward rendering technique. Perhaps we could perform an early culling step that eliminates lights that are not in the viewing frustum. This would require creating another compute shader that performs view frustum culling against all lights in the scene but instead of culling all lights against 3,600 frustums, only the view frustum needs to be checked. This way, each thread in the dispatch would only need to check a very small subset of the lights against the view frustum. After culling the lights against the larger view frustum, the per-tile light culling compute shader would only have to check the lights that are contained in the view frustum.
Another improvement to the light culling phase may be achievable using sparse octrees to store a light list at each node of the octree. A node is split if it exceeds some maximum light-count threshold. Nodes that don't contain any lights can be removed from the octree and would not need to be considered during final rendering.
DirectX 12 introduces Volume Tiled Resources [20] which could be used to implement the sparse octree. Nodes in the octree that don't have any lights would not need any backing memory. I'm not exactly sure how this would be implemented but it may be worth investigating.
Another area of improvement for the tiled forward rendering technique would be to improve the accuracy of the light culling. Frustum culling could result in a light being considered to be contained within a tile when in fact no part of the light volume is contained in the tile.
As can be seen in the above image, a point light is highlighted with a red circle. The blue tiles in the image show which tiles detect that the circle is contained within the frustum of the tile. Of course the tiles inside the red circle should detect the point light but the tiles at the corners are false positives. This happens because the sphere cannot be totally rejected by any plane of the tile's frustum.
If we zoom-in to the top-left tile (highlighted green in the video above) we can inspect the top, left, bottom, and right frustum planes of the tile. If you play the video you will see that the sphere is partially contained in all four of the tile's frustum planes and thus the light cannot be culled.
In a GDC 2015 presentation [21], Gareth Thomas presents several methods to improve the accuracy of tile-based compute rendering. He suggests using parallel reduction instead of atomic min/max functions in the light culling compute shader. His performance analysis shows that he was able to achieve an 11-14 percent performance increase by using parallel reduction instead of atomic min/max.
In order to improve the accuracy of the light culling, Gareth suggests using an axis-aligned bounding box (AABB) to approximate the tile frustum. Using AABB's to approximate the size of the tile frustum proves to be a successful method for reducing the number of false positives without incurring an expensive intersection test. To perform the sphere-AABB intersection test, Gareth suggests using a very simple algorithm described by James Arvo in the first edition of the Graphics Gems series [22].
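As a sketch of how that test works (plain Python rather than shader code), Arvo's method accumulates, axis by axis, the squared distance from the sphere's center to the box and compares it against the squared radius:

```python
def sphere_intersects_aabb(center, radius, box_min, box_max):
    # Squared distance from the sphere's center to the closest point
    # on (or inside) the box, accumulated one axis at a time.
    d2 = 0.0
    for c, lo, hi in zip(center, box_min, box_max):
        if c < lo:
            d2 += (lo - c) ** 2
        elif c > hi:
            d2 += (c - hi) ** 2
    return d2 <= radius * radius

# A unit sphere at the origin overlaps a box that starts just inside it...
print(sphere_intersects_aabb((0, 0, 0), 1.0, (0.5, -1, -1), (2, 1, 1)))  # True
# ...but not a box that is far away on every axis.
print(sphere_intersects_aabb((0, 0, 0), 1.0, (2, 2, 2), (3, 3, 3)))      # False
```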
Another issue with tile-based light culling using the min/max depth bounds occurs in tiles with large depth discontinuities, for example when foreground geometry only partially overlaps a tile.
The blue and green tiles contain very few lights. In this case the minimum and maximum depth values are in close proximity. The red tiles indicate that the tile contains many lights due to a large depth disparity. In Gareth Thomas's presentation [21] he suggests splitting the frustum in two halves and computing minimum and maximum depth values for each half of the split frustum. This implies that the light culling algorithm must perform twice as much work per tile but his performance analysis shows that total frame time is reduced by about 10 - 12 percent using this technique.
A more interesting performance optimization is a method called Clustered Shading presented by Ola Olsson, Markus Billeter, and Ulf Assarsson in their paper titled "Clustered Deferred and Forward Shading" [23]. Their method groups view samples with similar properties (3D position and normals) into clusters. Lights in the scene are assigned to clusters and the per-cluster light lists are used in final shading. In their paper, they claim to be able to handle one million light sources while maintaining real-time frame-rates.
Other space-partitioning algorithms may also prove to be successful at improving the performance of tile-based compute shaders, for example the use of Binary Space Partitioning (BSP) trees to split lights into the leaves of a binary tree. When performing final shading, only the lights in the leaf nodes of the BSP tree where the fragment exists need to be considered for lighting.
Another possible data structure that could be used to reduce redundant lighting calculations is a sparse voxel octree as described by Cyril Crassin and Simon Green in OpenGL insights [24]. Instead of using the octree to store material information, the data structure is used to store the light index lists of lights contained in each node. During final shading, the light index lists are queried from the octree depending on the 3D position of the fragment.
Conclusion
In this article I described the implementation of three rendering techniques:
- Forward Rendering
- Deferred Rendering
- Tiled Forward (Forward+) Rendering
I have shown that traditional forward rendering is well suited for scenarios which require support for multiple shading models and semi-transparent objects. Forward rendering is also well suited for scenes that have only a few dynamic lights. The analysis shows that scenes that contain fewer than 100 dynamic lights still perform reasonably well on commercial hardware. Forward rendering also has a low memory footprint when multiple shadow maps are not required. When GPU memory is scarce and support for many dynamic lights is not a requirement (for example on mobile or embedded devices), traditional forward rendering may be the best choice.
Deferred rendering is best suited for scenarios that don't have a requirement for multiple shading models or semi-transparent objects but do have a requirement of many dynamic scene lights. Deferred rendering is well suited for many shadow casting lights because a single shadow map can be shared between successive lights in the lighting pass. Deferred rendering is not well suited for devices with limited GPU memory. Amongst the three rendering techniques, deferred rendering has the largest memory footprint requiring an additional 4 bytes per pixel per G-buffer texture (~3.7 MB per texture at a screen resolution of 1280x720).
Tiled forward rendering has a small initial overhead required to dispatch the light culling compute shader, but the performance of tiled forward rendering with many dynamic lights quickly surpasses the performance of both forward and deferred rendering. Tiled forward rendering requires a small amount of additional memory. Approximately 5.7 MB of additional storage is required to store the light index list and light grid using 16x16 tiles at a screen resolution of 1280x720. Tiled forward rendering requires that the target platform has support for compute shaders. It is possible to perform the light culling on the CPU and pass the light index list and light grid to the pixel shader in the case that compute shaders are not available, but the performance trade-off might negate the benefit of performing light culling in the first place.
Tiled forward shading supports both multi-material and semi-transparent materials natively (using two light index lists) and both opaque and semi-transparent materials can benefit from the performance gains offered by tiled forward shading.
Although tiled forward shading may seem like the answer to life, the universe and everything (actually, 42 is), there are improvements that can be made to this technique. Clustered deferred rendering [23] should be able to perform even better at the expense of additional memory requirements. Perhaps the memory requirements of clustered deferred rendering could be mitigated by the use of sparse volume textures [20] but that has yet to be seen.
Download the Demo
The source code (including pre-built executables) can be downloaded using the link below. The zip file is almost 1 GB in size and contains all of the pre-built 3rd-party libraries and the Crytek Sponza scene [11].
References
[1] T. Saito and T. Takahashi, 'Comprehensible rendering of 3-D shapes', ACM SIGGRAPH Computer Graphics, vol. 24, no. 4, pp. 197-206, 1990.
[2] T. Harada, J. McKee and J. Yang, 'Forward+: Bringing Deferred Lighting to the Next Level', Computer Graphics Forum, vol. 0, no. 0, pp. 1-4, 2012.
[3] T. Harada, J. McKee and J. Yang, 'Forward+: A Step Toward Film-Style Shading in Real Time', in GPU Pro 4, 1st ed., W. Engel, Ed. Boca Raton, Florida, USA: CRC Press, 2013, pp. 115-135.
[4] M. Billeter, O. Olsson and U. Assarsson, 'Tiled Forward Shading', in GPU Pro 4, 1st ed., W. Engel, Ed. Boca Raton, Florida, USA: CRC Press, 2013, pp. 99-114.
[5] O. Olsson and U. Assarsson, 'Tiled Shading', Journal of Graphics, GPU, and Game Tools, vol. 15, no. 4, pp. 235-251, 2011.
[6] Unity Technologies, 'Unity - Manual: Light Probes', Docs.unity3d.com, 2015. [Online]. Available:. [Accessed: 04- Aug- 2015].
[7] Assimp.sourceforge.net, 'Open Asset Import Library', 2015. [Online]. Available:. [Accessed: 10- Aug- 2015].
[8] Msdn.microsoft.com, 'Semantics (Windows)', 2015. [Online]. Available:. [Accessed: 10- Aug- 2015].
[9] Msdn.microsoft.com, 'Variable Syntax (Windows)', 2015. [Online]. Available:. [Accessed: 10- Aug- 2015].
[10] Msdn.microsoft.com, 'earlydepthstencil (Windows)', 2015. [Online]. Available:. [Accessed: 11- Aug- 2015].
[11] Crytek.com, 'Crytek3 Downloads', 2015. [Online]. Available:. [Accessed: 12- Aug- 2015].
[12] Graphics.cs.williams.edu, 'Computer Graphics Data - Meshes', 2015. [Online]. Available:. [Accessed: 12- Aug- 2015].
[13] M. van der Leeuw, 'Deferred Rendering in Killzone 2', SCE Graphics Seminar, Palo Alto, California, 2007.
[14] Msdn.microsoft.com, 'D3D11_DEPTH_STENCIL_DESC structure (Windows)', 2015. [Online]. Available:. [Accessed: 13- Aug- 2015].
[15] Electron9.phys.utk.edu, 'Coherence', 2015. [Online]. Available:. [Accessed: 14- Aug- 2015].
[16] Msdn.microsoft.com, 'Load (DirectX HLSL Texture Object) (Windows)', 2015. [Online]. Available:. [Accessed: 14- Aug- 2015].
[17] Msdn.microsoft.com, 'Compute Shader Overview (Windows)', 2015. [Online]. Available:. [Accessed: 04- Sep- 2015].
[18] C. Ericson, Real-time collision detection. Amsterdam: Elsevier, 2005.
[19] J. Andersson, 'DirectX 11 Rendering in Battlefield 3', 2011.
[20] Msdn.microsoft.com, 'Volume Tiled Resources (Windows)', 2015. [Online]. Available:. [Accessed: 29- Sep- 2015].
[21] G. Thomas, 'Advancements in Tiled-Based Compute Rendering', San Francisco, California, USA, 2015.
[22] J. Arvo, 'A Simple Method for Box-Sphere Intersection Testing', in Graphics Gems, 1st ed., A. Glassner, Ed. Academic Press, 1990.
[23] O. Olsson, M. Billeter and U. Assarsson, 'Clustered Deferred and Forward Shading', High Performance Graphics, 2012.
[24] C. Crassin and S. Green, 'Octree-Based Sparse Voxelization Using the GPU Hardware Rasterizer', in OpenGL Insights, 1st ed., P. Cozzi and C. Riccio, Ed. CRC Press, 2012, p. Chapter 22.
[25] Mediacollege.com, 'Three Point Lighting', 2015. [Online]. Available:. [Accessed: 02- Oct- 2015].
great article
Thanks Peter. I worked a long time on this article.
Hi, I have read the article and have a question about the clip-space z coordinate in this code:
screenSpace[0] = float4( IN.dispatchThreadID.xy * BLOCK_SIZE, -1.0f, 1.0f );
// Top right point
screenSpace[1] = float4( float2( IN.dispatchThreadID.x + 1, IN.dispatchThreadID.y ) * BLOCK_SIZE, -1.0f, 1.0f );
… to construct the frustum's far plane, and then call the ScreenToView function. Why is the clip-space z for the far plane -1 rather than 1 in a right-handed system, when the near plane is mapped to -1 and the far plane to 1? I'm confused by this.
thank you
The -1 is the z-coordinate in “clip-space” (or normalized device coordinate space) that will be converted to the “far clip plane” in view space.
Since I’m working with a right-handed coordinate system, the resulting far plane in view space is in the -Z axis.
Is it possible to get a printer friendly (like PDF) version of this so I can read it better?
Try this:
Hi, how does this forward plus technique compare to the forward plus sample code from the AMD SDK?
The forward+ technique from AMD is identical to this technique. There are some variations that were explored by Takahiro Harada like 2.5D (A 2.5D Culling for Forward+ (SIGGRAPH ASIA 2012)) that I did not research. But my implementation is similar to Harada’s implementation of Forward+ (which is also similar to the implementation of Ola Olsson and Ulf Assarsson – Tiled Shading (2011).
This is an amazing piece of work, thank you! There aren’t many (any?) other tutorial-style descriptions of forward+ rendering elsewhere, so this is really valuable.
This is an outstanding paper, truly!
Given the impressive competition put forth by Forward+ (w/ tiles), can you speculate on possible reasons why it has taken a backseat to deferred rendering implementations?
Epic article! As a blogger I can understand how much effort you put into it. And I can say this is one of the best works on the subject. You definitely have a talent – your explanations are very clear. I know you have another blog where you share dx12 findings () so I’m really waiting for a full article about new api :).
Nikita,
Thanks for your feedback. I have been writing short blog posts on Blogger () but have been neglecting them lately due to workload. I do plan on adding some new entries soon about using dynamic descriptor heaps (GPU-visible heaps that hold descriptors for GPU resources).
Keep an eye out for new posts!
Great work, you’ve really put your heart into writing this article. Thank you! | http://www.3dgep.com/forward-plus/ | CC-MAIN-2017-09 | en | refinedweb |
Mozilla::DOM::Node
Mozilla::DOM::Node is a wrapper around an instance of Mozilla's nsIDOMNode interface. This class inherits from Supports.
The nsIDOMNode interface is the primary datatype for the entire Document Object Model. It represents a single node in the document tree. For more information on this interface please see http://…/TR/DOM-Level-2-Core/
Pass this to QueryInterface.
A Mozilla::DOM::NamedNodeMap containing the attributes of this node (if it is an Element) or null otherwise.
In list context, returns a list of Mozilla::DOM::Attr, instead. (I considered returning a hash ($attr->GetName => $attr->GetValue), but then you couldn't set the attributes.)
Returns whether this node (if it is an element) has any attributes.
A Mozilla::DOM::NodeList that contains all children of this node. If there are no children, this is a NodeList containing no nodes.
In list context, this returns a list of Mozilla::DOM::Node, instead.
This is a convenience method to allow easy determination of whether a node has any children.
The first child of this node. If there is no such node, this returns null.
The last child of this node. If there is no such node, this returns null.
The node immediately preceding this node. If there is no such node, this returns null.
The node immediately following this node. If there is no such node, this returns null.
The name of this node, depending on its type:

- Attr: the name of the attribute
- CDATASection: #cdata-section
- Comment: #comment
- Document: #document
- DocumentFragment: #document-fragment
- DocumentType: the document type name
- Element: the tag name
- Entity: the entity name
- EntityReference: the name of the entity referenced
- Notation: the name of the notation
- ProcessingInstruction: the target
- Text: #text
Matches one of the following constants, which you can export with
use Mozilla::DOM::Node qw(:types), or export them individually.
The node is a Mozilla::DOM::Attr.
The node is a Mozilla::DOM::CDATASection.
The node is a Mozilla::DOM::Comment.
The node is a Mozilla::DOM::Document.
The node is a Mozilla::DOM::DocumentType.
The node is a Mozilla::DOM::DocumentFragment.
The node is a Mozilla::DOM::Element.
The node is a Mozilla::DOM::EntityReference.
The node is a Mozilla::DOM::Entity.
The node is a Mozilla::DOM::Notation.
The node is a Mozilla::DOM::ProcessingInstruction.
The node is a Mozilla::DOM::Text.
The value of this node, depending on its type:

- Attr: the value of the attribute
- CDATASection: the content of the CDATA section
- Comment: the content of the comment
- Document: null
- DocumentFragment: null
- DocumentType: null
- Element: null
- Entity: null
- EntityReference: null
- Notation: null
- ProcessingInstruction: the entire content excluding the target
- Text: the content of the text node
- $value (string)
The Mozilla::DOM::Document object associated with this node.
Returns the local part of the qualified name of this node. For nodes of any type other than ELEMENT_NODE and ATTRIBUTE_NODE and nodes created with a DOM Level 1 method, such as createElement from the Document interface, this is always null..
The namespace prefix of this node, or null if it is unspecified.
For nodes of any type other than ELEMENT_NODE and ATTRIBUTE_NODE and nodes created with a DOM Level 1 method, such as createElement from the Document interface, this is always null.
Note that setting this attribute, when permitted, changes the nodeName attribute, which holds the qualified name, as well as the tagName and name attributes of the Element and Attr interfaces, when applicable.
Note also that changing the prefix of an attribute that is known to have a default value, does not make a new attribute with the default value and the original prefix appear, since the namespaceURI and localName do not change.
- $aPrefix (string)
Tests whether the DOM implementation implements a specific feature and that feature is supported by this node.
- $feature (string)
The name of the feature to test. This is the same name which can be passed to the method hasFeature on DOMImplementation.
- $version (string)
This is the version number of the feature to test. In Level 2, version 1, this is the string "2.0". If the version is not specified, supporting any version of the feature will cause the method to return true.
Adds the node newChildNode to the end of the list of children of this node. If the newChild is already in the tree, it is first removed.
- $newChild (Mozilla::DOM::Node)
The node to add. If it is a DocumentFragment object, the entire contents of the document fragment are moved into the child list of this node.
Returns a duplicate of this node. (See DOM 1 spec for details.)
- $deep (boolean)
If true, recursively clone the subtree under the specified node; if false, clone only the node itself (and its attributes, if it is an Element).
Inserts the node newChild before the existing child node refChild. (See the DOM 2 spec for details.)
- $newChild (Mozilla::DOM::Node)
The node to insert.
- $refChild (Mozilla::DOM::Node)
The reference node, i.e., the node before which the new node must be inserted.
Removes the child node indicated by oldChild from the list of children, and returns it.
- $oldChild (Mozilla::DOM::Node)
Replaces the child node oldChild with newChild in the list of children, and returns the oldChild node.
If newChild is a DocumentFragment object, oldChild is replaced by all of the DocumentFragment children, which are inserted in the same order. If the newChild is already in the tree, it is first removed.
- $newChild (Mozilla::DOM::Node)
The new node to put in the child list.
- $oldChild (Mozilla::DOM::Node)
The node being replaced in the list.
See DOM 2 spec for details.
This software is licensed under the LGPL. See Mozilla::DOM for a full notice. | http://search.cpan.org/dist/Mozilla-DOM/lib/Mozilla/DOM/Node.pod | CC-MAIN-2017-09 | en | refinedweb |
Programmers have always found it difficult to read a big file (5-10 GB) line by line in Java. But there are various ways that can help read a larger file line by line without using much of the memory on the computer.
Using the Buffer Reader and readline() method one can read the data faster even on the computer having less amount of memory.
In the example of Java Read line by line below, a BufferedReader object is created that reads from the file line by line until the readLine() method returns null.
The returning of null by readLine() method of BufferedReader indicates the end of the file.
Let's say you want to read a file named 'Read file line by line in Java.txt' that contains some text. Now the first thing the program has to do is find the file. For this we use the FileInputStream class.
FileInputStream class extends the java.io.InputStream. It reads a file in bytes.
BufferedReader class extends java.io.Reader. It reads characters, arrays, and lines from an input stream. Its constructor takes input stream as input. It is important to specify the size of the buffer or it will use the default size which is very large.
We have used the readLine() method of the BufferedReader class, which reads a complete line and returns it as a String.
The other methods that can be used are:
The read() method of the BufferedReader class reads a single character from the input stream.

The read(char[] cbuf, int off, int len) method of the BufferedReader class reads characters into a portion of an array.
DataInputStream Class extends java.io.FilterInputStream. It reads primitive data types from an underlying input stream.
Whenever a read request is made of the Reader, a corresponding read request is made of the underlying character or byte stream. Hence, most of the time a BufferedReader is wrapped around a Reader such as FileReader or InputStreamReader.

The Reader buffers the input from the file. If the reads were not buffered, each invocation of read() or readLine() would read bytes from the file, convert them into characters, and return them, which can be inefficient.

To release the file descriptor at the end, close the stream as shown in the example (inputStream.close()). The output of reading the file line by line in Java is printed to the console.
Example of reading a file line by line using BufferedReader:

package ReadFile;

import java.io.BufferedReader;
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;

public class ReadFileLineByLine {
    public static void main(String[] args) throws IOException {
        FileInputStream stream = new FileInputStream("c://readFileLineByLineInJava.txt");
        DataInputStream inputStream = new DataInputStream(stream);
        BufferedReader br = new BufferedReader(new InputStreamReader(inputStream));
        String string;
        while ((string = br.readLine()) != null) {
            System.out.println(string);
        }
        inputStream.close();
    }
}
Read file line by line in Java using Scanner Class:
Besides BufferedReader, the Scanner class of the Java API is another way to read a file line by line in Java. It uses the hasNextLine() and nextLine() methods to read the contents of a file line by line.

The benefit of using the Scanner class is that it has more utilities, such as nextInt() and nextLong(). Hence there is no need to convert a string to an integer, as Scanner can read numbers directly from the file.
Example of reading a file line by line using the Scanner class:
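The original example program is not reproduced on this page, so the following is a small self-contained sketch (the class name and sample data are made up for illustration). It writes a temporary file and then reads it back line by line with Scanner:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Scanner;

public class ReadWithScanner {

    // Read every line of the given file using Scanner's
    // hasNextLine()/nextLine() pair.
    static List<String> readLines(Path path) throws IOException {
        List<String> lines = new ArrayList<>();
        try (Scanner scanner = new Scanner(path)) {
            while (scanner.hasNextLine()) {
                lines.add(scanner.nextLine());
            }
        }
        return lines;
    }

    public static void main(String[] args) throws IOException {
        // Create a throwaway file so the example is runnable anywhere.
        Path tmp = Files.createTempFile("scanner-demo", ".txt");
        Files.write(tmp, Arrays.asList("first line", "second line", "42"));

        for (String line : readLines(tmp)) {
            System.out.println(line);
        }
        // A Scanner could also read the 42 above directly with nextInt(),
        // without any string-to-integer conversion.
        Files.delete(tmp);
    }
}
```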
I have 3 files and want to print lines that are a combination of the same line from each file. The files can have any number of lines. How can I iterate over three files in parallel?
protocol.txt
http
ftp
sftp
website.txt

facebook
yahoo
gmail

port.txt

23
45
56
Protocol 'http' for website 'facebook' with port '23'
Protocol 'ftp' for website 'yahoo' with port '45'
Protocol 'sftp' for website 'gmail' with port '56'
from time import sleep

with open ("C:/Users/Desktop/3 files read/protocol.txt", 'r') as test:
    for line in test:
        with open ("C:/Users/Desktop/3 files read/website.txt", 'r') as test1:
            for line1 in test1:
                with open ("C:/Users/Desktop/3 files read/port.txt", 'r') as test2:
                    for line2 in test2:
                        print "Protocol (%s) for website (%s) with port (%d)" % line, line1, line2
Here is my version:
import os.path

directory = "C:/Users/balu/Desktop/3 files read"

with open(os.path.join(directory, "protocol.txt"), 'r') as f1,\
     open(os.path.join(directory, "website.txt"), 'r') as f2,\
     open(os.path.join(directory, "port.txt"), 'r') as f3:
    for l1, l2, l3 in zip(f1, f2, f3):
        print "Protocol %s for website %s with port %d" % (l1.rstrip(), l2.rstrip(), int(l3))
I used the directory variable to simplify the code. Notice that I joined the path elements using os.path.join(), which is safer than just putting a directory separator there.

Using zip(), we iterate through the three file objects. Using zip() means that the loop will exit on the file with the fewest lines, if they are uneven. If you cannot guarantee that they all have the same number of lines, then you might need to put an extra check in there.
By the way, at least some of this information is in the etc/services file. | https://codedump.io/share/KmVWOcwiuVoI/1/how-to-pick-each-line-from-3-files-in-parllel-and-print-in-single-line-using-python | CC-MAIN-2017-09 | en | refinedweb |
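Since zip() stops at the shortest file, lines in a longer file are silently dropped. Here is a hedged Python 3 sketch (in Python 2 the function is itertools.izip_longest; the file names, sample data, and the '?' placeholder are illustrative) that pads missing lines instead:

```python
import os
import tempfile
from itertools import zip_longest

def combine(protocol_path, website_path, port_path, missing="?"):
    """Build one formatted line per row across the three files,
    padding with `missing` when a file runs out of lines."""
    lines = []
    with open(protocol_path) as f1, \
         open(website_path) as f2, \
         open(port_path) as f3:
        for proto, site, port in zip_longest(f1, f2, f3, fillvalue=missing):
            lines.append("Protocol '%s' for website '%s' with port '%s'"
                         % (proto.strip(), site.strip(), port.strip()))
    return lines

if __name__ == "__main__":
    # Throwaway sample files so the sketch is runnable anywhere.
    d = tempfile.mkdtemp()
    samples = {"protocol.txt": ["http", "ftp"],
               "website.txt": ["yahoo", "gmail", "extra"],
               "port.txt": ["23", "45"]}
    for name, rows in samples.items():
        with open(os.path.join(d, name), "w") as f:
            f.write("\n".join(rows))
    for line in combine(os.path.join(d, "protocol.txt"),
                        os.path.join(d, "website.txt"),
                        os.path.join(d, "port.txt")):
        print(line)
```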
Opened 5 months ago
Closed 5 months ago
Last modified 5 months ago
#13440 closed bug (duplicate)
putStr doesn't have the same behavior in GHCi and after compilation with GHC
Description
Hi,
main :: IO () main = do { putStr "hello"; x <- getChar; -- or x <- getLine; putStr "x = "; print x; }
The program runs both in GHCi and after compilation with GHC. In GHCi it works fine, but after compilation it does not. In GHCi the first line of code is evaluated first, while after compilation the second line is evaluated first. So in GHCi we have as a result:
hellom x = 'm'
and after compilation we have as a result:
m hellox = 'm'
After compilation, putStr is not evaluated first; getChar (or getLine) is. In GHCi the behavior is correct, but after compilation the behavior is strange.
Change History (5)
comment:1 Changed 5 months ago by
comment:2 Changed 5 months ago by
For RyanGlScott:
yes, I am on Windows. I use a standard Windows console (cmd.exe)
I also use Linux Debian 8 under VirtualBox.
I have just tested this program on Linux and here are the results.
With GHCi the result is:
*Main> main hellomx = 'm' *Main>
It seems much better than GHCi on Windows.
After compiling, the result is the same as Windows.
vanto@debian:~/sourcehs$./test m hellox = 'm'
On the other hand, I used runghc to test the program on Windows and Linux and here are the results:
with Windows
the result is the same as compilation on Windows
c:\sourcehs>runghc test.hs m hellox = 'm'
with Linux
the result is the same as GHCi on Windows
vanto@debian:~/sourcehs$ runghc test.hs hellom x = 'm'
Hope this help!
comment:3 Changed 5 months ago by
On Linux I use GHC version: 8.0.2
comment:4 Changed 5 months ago by
As it turns out, this is a duplicate of a long-standing bug in GHCi, #2189. Fixing that bug will probably require rewriting the whole IO manager to use native Win32 IO (see #11394), but luckily, someone is working on this.
Until then, I can offer you two workarounds.
- If you want to have a stronger guarantee that "hello" will be printed first, you can use hFlush stdout to force this:
import System.IO main :: IO () main = do { putStr "hello"; hFlush stdout; x <- getChar; -- or x <- getLine; putStr "x = "; print x; }
- Alternatively, you can try a different buffering strategy. By default, stdout's buffering mode is NoBuffering (which should, in theory, mean that all output is immediately printed to the screen, were it not for #2189). But you can change the buffering mode to something else:
import System.IO main :: IO () main = do { hSetBuffering stdout $ BlockBuffering $ Just 1; putStr "hello"; x <- getChar; -- or x <- getLine; putStr "x = "; print x; }
This does buffer the output, but only 1 character at a time.
I've experimentally verified that both of these workarounds work on my Windows machine, on both cmd.exe and MSYS2.
Are you on Windows? If so, what console are you using? I know there are several issues regarding input/output buffering which might explain the discrepancies you're seeing. | https://ghc.haskell.org/trac/ghc/ticket/13440 | CC-MAIN-2017-34 | en | refinedweb |
Getting Past Hello World in Angular 2
By Jason
So you’ve been through the basic Angular 2 application and now you want a bit more. If you’ve been reading about Angular 2 you’ve undoubtedly seen what might look like really odd syntax in templates. You may have heard something about an overhaul to dependency injection. Or maybe some of the features of ES7 (or ES2016, the version of JavaScript planned to come out next year) such as Decorators and Observables.
More from this author
This post will give you a quick introduction to these concepts and how they apply to your Angular 2 applications. We won’t dive deep into these topics yet, but look for later posts that cover each area in more detail.
So, let’s begin at the beginning–setting up an environment.
Environment Setup
Angular2 source is written in TypeScript, a superset of JavaScript that allows for type definitions to be applied to your JavaScript code (you can find more information about TypeScript at).
In order to use TypeScript, we need to install the compiler which runs on top of Node.js. I’m using Node 0.12.6 and NPM 2.9.1. TypeScript should be installed automatically by NPM, but you can also install TypeScript globally using:
npm install –g typescript # Test using tsc -v
We are also using Gulp to build the project. If you don’t have Gulp installed globally, use this command:
npm install -g gulp-cli
Once you have Node installed, use these instructions to set up and start your simple application:
git clone cd second-angular-app npm install gulp go
In a browser, navigate to (or the deployed version at). You should see an extremely basic application like this:
This is a simple application that takes comma-separated ticker symbols as input and retrieves data including price and the stock name, displaying them in a simple list. This lacks styling, but we can use it to demonstrate a few ways to think about Angular 2 applications.
Components
Angular 2 applications are built using components. But what is a component? How do we set up a component in Angular 2?
At a high level, a component is a set of functionalities grouped together including a view, styles (if applicable) and a controller class that manages the functionality of the component. If you’re familiar with Angular 1, a component is basically a directive with a template and a controller. In fact, in Angular 2 a component is just a special type of directive encapsulating a reusable UI building block (minimally a controller class and a view).
We will look at the pieces that make up a component by examining the StockSearch component. In our simple application, the StockSearch component displays the input box and includes a child component called StockList that renders the data returned from the API.
Decorators
Decorators are a new feature in ES7 and you will see them identified in source code by a leading “@” symbol. Decorators are used to provide Angular with metadata about a class (and sometimes methods or properties) to they can be wired into the Angular framework. Decorators always come before the class or property they are decorating, so you will see a codeblock like this:
This block uses the component decorator to tell Angular that
MyClass is an Angular component.
Note: For an in-depth discussion of decorators and annotations (a special type of decorator used in Angular 2) see Pascal Precht‘s post on the difference between annotations and decorators.
Examining a Component
Let’s get into some code. Below is the StockSearch component, responsible for taking user input, calling a service to find stock data, and passing the data to the child StockList component. We will use this sample to talk about key parts of building an Angular 2 application:
import {Component, View} from 'angular2/core' import {StockList} from './stockList' import {StocksService} from '../services/stocks' @Component({ selector: 'StockSearch', providers: [StocksService] }) @View({ template: ` <section> <h3>Stock Price & Name Lookup:</h3> <form (submit)="doSearch()"> <input [(ngModel)]="searchText"/> </form> <StockList [stocks]="stocks"></StockList> </section> `, directives: [StockList] }) export class StockSearch { searchText: string; stocks: Object[]; constructor(public stockService:StocksService) {} doSearch() { this.stockService.snapshot(this.searchText).subscribe( (data) => {this.stocks= data}, (err) => {console.log('error!', err)} ); } }
(Screenshots in this post are from Visual Studio Code in Mac.)
While I’m relatively new to TypeScript, I’ve been enjoying it while writing Angular2 applications. TypeScript’s ability to present type information in your IDE or text editor is extremely helpful, especially when working with something new like Angular2.
I’ve been pleasantly surprised with how well Visual Studio Code integrates with TypeScript. In fact, it’s written in TypeScript and can parse it by default (no plugins required). When coding in Angular2 I found myself referring to the documentation frequently until I found the “Peek Definition” feature. This feature allows you to highlight any variable and use a keyboard shortcut (Opt-F12 on Mac) to look at the source of that variable. Since Angular2’s documentation exists in the source, I find I rarely need to go to the online documentation. Using Peek Definition on the Component decorator looks like this:
import statements
The import statements at the top of the file bring in dependencies used by this component. When you create a component, you will almost always use
import {Component, View} from 'angular2/core at the top. This line brings in the component and view decorators.
Angular 2 also requires that you explicitly tell the framework about any children components (or directives) you want to use. So the next line
(import {StockList} from './stockList') pulls in the StockList component.
Similarly, we identify any services we will need. In this case we want to be able to request data using the StockService, so the last import pulls it in.
selector and template
The
selector and
template properties passed to the component and view decorators are the only two configuration options required to build an Angular 2 component. Selector tells Angular how to find this component in HTML and it understands CSS selector syntax.
Unlike regular HTML, Angular 2 templates are case sensitive. So where in Angular 1 we used hyphens to make camelCase into kebab-case, we don’t need to do this in Angular 2. This means we can now use uppercase letters to start our component selectors, which distinguishes them from standard HTML. So:
selector: 'StockSearch' // matches <StockSearch></StockSearch> selector: '.stockSearch' // matches <div class="stockSearch"> selector: '[stockSearch]' // matches <div stockSearch>
When the Angular compiler comes across HTML that matches one of these selectors, it creates an instance of the component class and renders the contents of the template property. You can use either
template to create an inline template, or
templateUrl to pass a URL, which contains the HTML template.
Dependency Injection (“providers” and “directives”)
If you’re familiar with Angular 1 you know it uses dependency injection to pass things like services, factories and values into other services, factories and controllers. Angular 2 also has dependency injection, but this version is a bit more robust.
In the case of the component above, the class (controller) is called StockSearch. StockSearch needs to use StockService to make a call to get stock data. Consider the following abbreviated code snippet:
import{StocksService} from '../services/stocks' @Component({ selector: 'StockSearch', providers: [StocksService] }) export class StockSearch { constructor(public stockService:StocksService) {} }
As with other classes, the
import statement makes the
StocksService class available. We then pass it into the
providers property passed to the
Component decorator, which alerts Angular that we want to use dependency injection to create an instance of the
StocksService when it’s passed to the StockSearch constructor.
The constructor looks pretty bare-bones, but there’s actually quite a bit happening in this single line.
public keyword
The public keyword tells TypeScript to put the
stockService(camelCase, as opposed to the class name which is PascalCase) variable onto the instance of the StockSearch component. StockSearch’s methods will reference it as
this.stockService.
Type declaration
After
public stockService we have
:StocksService, which tells the compiler that the variable
stockService will be an instance of the
StockService class. Because we used the component decorator, type declarations on the constructor cause Angular’s dependency injection module to create an instance of the StocksService class and pass it to the StockSearch constructor. If you’re familiar with Angular 1, an analogous line of code would look like this:
One key difference between the Angular 1 and Angular 2 dependency injection systems is that in Angular 1 there’s just one large global namespace for all dependencies in your app. If you register a controller by calling
app.controller('MyController', …) in two different places in your code, the second one loaded will overwrite the first. (Note that this overwrite issue doesn’t apply to directives in Angular 1. Registering a directive by calling
app.directive('myDirective', …) in multiple places will add to the behavior of the earlier definition, not overwrite it.)
In Angular 2 the confusion is resolved by explicitly defining which directives are used in each view. So
directives: [StockList] tells Angular to look for the StockList component inside our view. Without that property defined, Angular wouldn’t do anything with the “
<StockList>” HTML tag.
Properties and Events
Looking at the
template passed to the View decorator, we see a new syntax where some attributes are surrounded by parentheses, some by brackets, and some by both.
Parentheses
Surrounding an attribute with parentheses tells Angular to listen for an event by the attribute name. So
<form (submit) = "doSearch()"> tells Angular to listen for the
submit event on the
form component, and when a submit occurs the
doSearch method of the current component should be run (in this case,
StockSearch is the current component).
Square Brackets
Surrounding an attribute with square brackets tells Angular to parse the attribute value as an expression and assign the result of the expression to a property on the target component. So
<StockList [stocks] = "stocks"></StockList> will look for a stocks variable on the current component (StockSearch) and pass its value to the StockList component in a property also named “stocks.” In Angular 1 this required configuration using the Directive Definition Object, but looking just at the HTML there was no indication of how the attribute would be used.
<!-- Angular 2 we know [stocks] causes "stocks" to parsed and passed to StockList component --> <StockList[stocks] = "stocks"></StockList> <!-- Angular 1 doesn't use brakcets. Looking just at the HTML we don't know how the directive is using the stocks attribute --> <stock-list</stock-list>
Square Brackets and Parentheses
This is a special syntax available to the
ngModel directive. It tells Angular to create a two-way binding. So
<input [(ngModel)] = "searchText"> wires up an input element where text entered into the input gets applied to the
searchText property of the StockSearch component. Similarly, changes to
this.searchText inside the StockSearch instance cause an update to the value of the input.
<!-- Angular 2 requires both event and attribute bindings to be explicitly clear we want 2-way binding --> <input [(ngModel)] = "searchText"> <!-- Angular 1 looks simpler without the [(...)]. --> <input ng-
Services
A more in-depth explanation of services will be the subject of a future post, but our application defines a StockService that’s used to make an HTTP call to retrieve data. So let’s take a look at the service and walk through the pieces:
//a simple service import {Injectable} from 'angular2/core'; import {Http, URLSearchParams} from 'angular2/http'; @Injectable() export class StocksService { // TS shortcut "public" to put http on this constructor(public http:Http) {} snapshot(symbols:string): any { let params = new URLSearchParams(); params.set('symbols', symbols); return this.http.get("/api/snapshot", {search: params}) .map(res => res.json()) // convert to JSON .map(x => x.filter(y => y.name)); // Remove stocks w/no name } }
@Injectable() decorator
In Angular 2 we use the Injectable decorator to let Angular know a class should be registered with dependency injection. Without this, the providers property we used in the StockSearch component wouldn’t have worked, and dependency injection wouldn’t have created an instance of the service to pass into the controller. Therefore, if you want a class to be registered with dependency injection, use the Injectable decorator.
HTTP service
We’ll deep-dive into Angular 2’s HTTP library in the future, but the high-level summary is that it lets you call all your basic HTTP methods. Similar to the way we made StockService available to the StockSearch constructor, we are adding “http” on the “this” object using
public http:Http.
If you take a look at the “snapshot” method, you’ll see we call
/api/snapshot, then pass a configuration object with the search params. This is pretty straightforward, but at the bottom pay attention to
.map(...). In Angular 1 (as in most modern HTTP libraries), an HTTP call returns a “promise.” But in Angular 2 we get an Observable.
An observable is like a promise but it can be resolved multiple times. There’s a lot to know about observables and we’ll cover them in upcoming posts, but for more information on observables, set aside some time to go through this post and exercises in order to get up to speed.
Tree of Components
We’ve looked at one component and a service it accesses. Angular 2 applications are built by constructing a tree of components, starting with a root application component. The application component should contain very little logic, as the primary purpose is to lay out the top-level pieces of your application.
//our root app component import {Component, View} from 'angular2/core' import {StockSearch} from './components/stockSearch'; @Component({ selector: 'App' }) @View({ template: ' <header> <h2>Second Angular 2 App</h2> </header> <StockSearch></StockSearch>', directives: [StockSearch] }) export class AppComponent {}
Now that we are familiar with the syntax of wiring up a component, we can see that the template outputs the header, followed by the StockSearch component. StockSearch is in the directive list, so Angular renders the component when it comes across the
<StockSearch> tag.
That’s about all there is to an application component; it is simply the root of our application. There can be only a single root in any Angular 2 application, and the final piece of the puzzle is telling Angular to look for it.
Bootstrapping our Application
In an Angular 2 application, we need to tell Angular when to start up. This is done using the
bootstrap method, passing in our
AppComponent along with other module dependencies. For those familiar with Angular 1, this is similar to the main
angular.module(name, [dependencies...]) construct.
//our root app component import {bootstrap} from 'angular2/platform/browser'; import {HTTP_PROVIDERS} from 'angular2/http'; import 'rxjs/Rx'; import {AppComponent} from './app'; bootstrap(AppComponent, [HTTP_PROVIDERS]) .catch(err =>console.error(err));
Notice the second argument is
[HTTP_PROVIDERS]. This is a special variable that references the classes defining Angular 2’s HTTP functionality, and it’s needed to let Angular know we can use the HTTP classes in our application. The Angular 1 equivalent looks something like this:
// JavaScript code to set up the app angular.module('App', ['ngResource']); <!-- Corresponding HTML tag pointing to "App" module --> <div ng-</div>
Also notice we have
import 'rxjs/Rx' at the top. This pulls in the RxJs library so we get access to methods on Observables. Without adding this line we wouldn’t be able to run
.map() on the return from
http.get() method since the returned Observable wouldn’t have the map method available.
Once the application is bootstrapped, Angular looks for our root component in our markup. Looking back at the
AppComponent, the selector is set to app, so Angular will be looking for an element
<app> and will render the
AppComponent there:
<!DOCTYPE html> <html> <head> <title>Basic Angular 2 Application Demonstration</title> </head> <body> <App> loading... </App> <script src="/lib/angular2-polyfills.js"></script> <script src="/lib/system.js"></script> <script src="/lib/Rx.js"></script> <script src="/lib/angular2.dev.js"></script> <script src="/lib/http.dev.js"></script> <script> System.config({ //packages defines our app package packages: {'app': {defaultExtension: 'js'}} }); System.import('app/bootstrap') .catch(console.error.bind(console)); </script> </body> </html>
Summary
When considering Angular 2 applications, think about components and their children. At any point in the application you are examining some component, and that component knows what its children are. It’s a top-down approach where at each level you define compartmentalized functionality that eventually leads to a complete application. | https://www.sitepoint.com/getting-past-hello-world-angular-2/ | CC-MAIN-2017-34 | en | refinedweb |
Jun 25, 2008 11:00 AM|TheEagle|LINK
Hi,
My website is working fine.I added a deployment project.When I biuld the website(which cause the deployment project to be biulded too) I get the error:
aspnet_merge.exe exited with code 1.
I don't know how to find what cause the error to solve it.Could any one help?My company want the website to be deployed today but I couldn't because of this error.Please help me as fast as possible.
All-Star
16800 Points
Jun 25, 2008 12:13 PM|Jeev|LINK
see this post
In all probability its usually because of naming collisions eg similarly named page in 2 different folders and both of them being in the same namespace
Jun 25, 2008 12:54 PM|hongping|LINK
You could try running the command manually and add a "-errorstack" flag which might also yield more information on the error.
Jun 25, 2008 04:30 PM|hongping|LINK
You can take a look at the output window after you build the Web Deployment Project. It would have the commands ran for aspnet_compiler and aspnet_merge.
------ Rebuild All started: Project: WebSite3_deploy, Configuration: Debug Any CPU ------
if exist ".\TempBuildDir\" rd /s /q ".\TempBuildDir\"
D:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\aspnet_compiler.exe -v /WebSite3 -p e:\bugs\WebSite3 -u -f -c -d .\TempBuildDir\
Running aspnet_merge.exe.
D:\Program Files\Microsoft SDKs\Windows\v6.0A\bin\aspnet_merge.exe .\TempBuildDir -o WebSite3_deploy -a -debug -copyattrs
Successfully merged '.\TempBuildDir'.
You can try re-running those commands in a Visual Studio command prompt. For aspnet_merge, you could try adding the "errorstack" option.
5 replies
Last post Jun 25, 2008 04:30 PM by hongping | https://forums.asp.net/t/1280702.aspx?aspnet_merge+exe+exited+with+code+1 | CC-MAIN-2017-34 | en | refinedweb |
Viewflow REST¶
PRO Only
viewflow.rest package provides a flow implementation with the REST interface.
- /flows/ - List of available flows
- /processes/ - List of processes
- /tasks/ - List of tasks from all flows
Per flow API:
- /flows/<flow_label>/ - Flow description
- /flows/<flow_label>/chart/ - SVG flow visualization
- /processes/<flow_label>/ - List of flow instances
- /tasks/<flow_label>/ - List of flow tasks
Each flow node could have own API set,
See also the for the Swagger documentation of each endpoint.
Quick start¶
viewflow.rest depends on djangorestframework>=3.6 and django-rest-swagger>=2.1
Start with adding viewflow, viewflow.rest and dependencies to the INSTALLED_APPS settings
INSTALLED_APPS = [ ..., 'viewflow', 'viewflow.rest', 'rest_framework', 'rest_framework_swagger', ]
Flows could be defined in the <app>/flows.py. Here is the same HelloFlow as in the Quick start but with REST interface.
from viewflow import rest from viewflow.base import this, Flow from viewflow.rest import flow, views from . import models class HelloRestFlow(Flow): process_class = models.HelloRestProcess start = flow.Start( views.CreateProcessView, fields=['text'] ).Permission( auto_create=True ).Next(this.approve) approve = flow.View( views.UpdateProcessView, fields=['approved'], task_description="Message approvement required", task_result_summary="Messsage was {{ process.approved|yesno:'Approved,Rejected' }}" ).Permission( auto_create=True ).Next(this.check_approve) check_approve = flow.If( cond=lambda act: act.process.approved ).Then( this.send ).Else( this.end ) send = flow.Handler( this.send_message ).Next(this.end) end = flow.End() def send_message(self, activation): print(activation.process.text)
In case if you use viewflow.frontend you can register flow in it, and get nice rendered SWAGGER api spec.
from viewflow import rest @rest.register class HelloWorldFlow(Flow): ...
Don’t forged to enable the frontend, see Quick start for details.
Without frontend you need directly include flowset urls in the django’s URL config.
from django.conf.urls import url, include from viewflow.rest.viewset import FlowViewSet from .flows import HelloWorldFlow hello_urls = FlowViewSet(HelloWorldFlow).urls urlpatterns = [ url(r'^workflow/api/helloworld/', include(hello_urls, namespace='helloworld')) ] | http://docs.viewflow.io/viewflow_rest.html | CC-MAIN-2017-34 | en | refinedweb |
Identify the character you want to be at the front of the string after the rotation. Then divide the string into two halves such that this character is the first character in the second half. Reverse each half in place, then reverse the resulting string.
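That recipe can be sketched in a few lines of Python; this version does a left rotation by m, so the character s[m] ends up at the front (the function name and the direction are my choices, not part of the write-up above):

```python
def rotate_left(s, m):
    """Rotate s left by m: reverse the two halves, then reverse the whole string."""
    m %= len(s)
    a, b = s[:m], s[m:]               # b[0] is the character we want at the front
    return (a[::-1] + b[::-1])[::-1]  # reverse(reverse(a) + reverse(b)) == b + a

print(rotate_left("abcdefg", 2))      # -> "cdefgab"
```

The same three-reversal trick works in place on a mutable buffer, which is why it is the usual O(n)-time, O(1)-space answer.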
There is another O(n) solution. It has a better constant, but may or may not perform better in practice due to cache locality issues.
I call it the "two handed" algorithm, because you need "two hands" to do it. :-)
Basically, you pick up the characters in position 0 and position m, then you put down the character from position 0 into position m. Now holding only the character originally in position m, you pick up the character from position 2m and put down the new character for that position. When you reach a multiple of m that is past the end of the string, you wrap around to the beginning. Repeat until you are back at position 0.
If m and n were co-prime, you're already done. If not, then you need to repeat with positions 1, m+1, 2m+1, .., and then 2, m+2, 2m+2, .., and so on. You can stop at GCD(n,m), since this position will have been reached by the first loop starting at position 0.
Pseudocode:
num_cycles = GCD(n, m)
cycle_length = n / num_cycles

for cycle = 0 to num_cycles - 1
    index = cycle
    value = string[index]
    do
        index = (index + m) mod n
        -- Note: The following can be implemented as a swap.
        next_value = string[index]
        string[index] = value
        value = next_value
    loop while index <> cycle
end for
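The pseudocode above translates to Python almost line for line; this sketch is my rendering (the in-place swap idiom and the rotate_right name are choices, not part of the pseudocode):

```python
from math import gcd

def rotate_right(a, m):
    """Juggling / 'two handed' rotation: each element moves m slots forward, in place."""
    n = len(a)
    m %= n
    if m == 0:
        return a
    for cycle in range(gcd(n, m)):
        index = cycle
        value = a[index]
        while True:
            index = (index + m) % n
            a[index], value = value, a[index]  # put down, pick up: one swap
            if index == cycle:
                break
    return a

print(rotate_right(list(range(6)), 2))  # -> [4, 5, 0, 1, 2, 3]
```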
There's a simpler, more intuitive solution, which is also more efficient. The trick is to carry one character "in hand", drop it d positions ahead, and pick up whatever was there, repeating until you arrive back at the start.

Here's a python snippet (note that, as written, it only works when gcd(len(string), d) == 1, since the walk has to visit every position in a single cycle):

def rotate_string(string, d):
    chars = list(string)  # strings are immutable in Python, so work on a list
    old_char = chars[0]
    current_index = 0
    for i in range(len(chars)):
        target_index = (current_index + d) % len(chars)
        temp_char = chars[target_index]
        chars[target_index] = old_char
        current_index = target_index
        old_char = temp_char
    return ''.join(chars)
Here is a simpler solution that is also one-pass with n swaps. Start with 0 and place the element where it should be (i -> i+m) with a swap. Repeat until you'd wrap around. You've now moved all of the string into place but the last m pieces. These are almost in place, but have been shifted n % m (remainder) places. So if necessary, shift them back recursively using the same algorithm.
I guess if m << n this version should have fairly nice cache-properties, be easy to unroll and work well enough with tail recursion optimization.
In Python:
def swap(a, i, j):
    a[i], a[j] = a[j], a[i]

def rotate(a, m, start=0):
    n = len(a) - start
    m = m % n
    if m != 0:
        for i in range(start, start + n - m):
            swap(a, i, i + m)
        rotate(a, m - n % m, start + n - m)

a = range(10)
rotate(a, 3)
print a
This rotation problem has been part of algorithmic folklore since the 60s/70s. The solution provided by jliszka showed up as early as 1971 in a text editor written by Ken Thompson.
It is interesting because it is similar to the problem of swapping adjacent regions of memory.
jliszka's solution is probably the "best" solution. It is very simple and easy to implement. It is also both space and time efficient in practice.
logiclrd's solution is probably the worst. Algorithmically it has the best constant, but in practice it suffers poorly due to cache locality. Running some benchmarks on a 1.83GHz Intel dual core, it takes twice as long to run compared to the simple reversal algorithm.
Another algorithm that tends to be even faster than the reversal one is to use a recursive swap like this:
Given s, we want to rotate by m. This is equivalent to the problem s = ab, where we want to swap a and b, and a has length m. Split b into bl and br, writing ab as a bl br, where br has the same length as a. Swap a and br to get br bl a. Now we have the subproblem to solve, swapping br and bl. This is the same format as the original problem and leads to a recursive solution.
This algorithm can be converted to an iterative version, and is described in Gries's Science of Programming (1981). Benchmarking on a 1.83GHz Intel dual core shows it is about twice as fast as the reversal algorithm.
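That iterative version (usually called the Gries-Mills block swap) can be sketched in Python as follows; this is my rendering, not code taken from either book. The two unswapped block sizes shrink the way Euclid's algorithm does until they are equal:

```python
def swap_blocks(a, i, j, k):
    # exchange a[i:i+k] and a[j:j+k] element by element
    for off in range(k):
        a[i + off], a[j + off] = a[j + off], a[i + off]

def rotate_left(a, m):
    """Rotate list a left by m via repeated block swaps (Gries-Mills)."""
    n = len(a)
    m %= n
    if m == 0:
        return a
    i, j = m, n - m        # sizes of the leading and trailing blocks still to swap
    p = m                  # fixed boundary between the two original blocks
    while i != j:
        if i > j:
            swap_blocks(a, p - i, p, j)          # trailing block is smaller
            i -= j
        else:
            swap_blocks(a, p - i, p + j - i, i)  # leading block is smaller
            j -= i
    swap_blocks(a, p - i, p, i)                  # blocks now equal: one final swap
    return a

print(rotate_left(list(range(10)), 3))  # -> [3, 4, 5, 6, 7, 8, 9, 0, 1, 2]
```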
All of these algorithms are also discussed in John Bentley's Programming Pearls.
Dan Diephouse wrote:
> But I *want* people to to be able to override the method down stream.
> Why is that so evil?
An ex-colleague of mine once spent three days trying to track down a
networking issue; the symptom he was faced with was an intermittent
closing of his socket connection.
The code he was looking at was similar to the following (from memory and
not compiled)
public abstract class A
{
    private OutputStream os;

    A()
    {
        os = getOutputStream();
    }

    protected abstract OutputStream getOutputStream();
}

public class B extends A
{
    private Socket socket = null;

    protected OutputStream getOutputStream()
    {
        socket = ....;
        return socket.getOutputStream();
    }
}
It took me 5 minutes to realise what was happening, and most of that
was listening to his description of the problem :-)
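The same trap is easy to reproduce in Python, where the ordering is explicit instead of being hidden in Java's field-initialisation rules (all of the names below are invented stand-ins, not the real code):

```python
class A:
    def __init__(self):
        # base constructor calls an overridable method
        self.stream = self.get_output_stream()

class B(A):
    def __init__(self):
        super().__init__()  # runs A.__init__, which sets self.sock via the override...
        self.sock = None    # ...and then this "initialiser" wipes the reference again

    def get_output_stream(self):
        self.sock = object()  # stand-in for opening a real socket
        return ("stream", id(self.sock))

b = B()
print(b.sock)  # -> None: the only reference to the "socket" is gone
```

Once the only reference to the open socket has been wiped, nothing keeps it alive, so sooner or later it is garbage-collected and finalised, which would explain an intermittent rather than immediate disconnect.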
It may be that you are aware of the implications of the design, as am I,
but there are plenty of people around who are not. I have come across
this type of 'fault' in numerous engagements as a contractor, often made
people who claimed to be experienced.
The simplest way of handling it is always, IMHO, not to let it happen in
the first place. If the rope isn't there then they can't hang
themselves :-)
But that's just my opinion :-)
Kev | http://mail-archives.apache.org/mod_mbox/cxf-dev/200610.mbox/%3C45338EFA.5000505@jboss.com%3E | CC-MAIN-2018-30 | en | refinedweb |
Hi Dan,
Thanks for your response.
Yes, it's Xander Bakker's script.
The link to the script is:
I am a beginner to Python/ArcPy. I tried, but I think I am missing things like extracting values to points, positioning them along the profile line, and plotting the z values on the profile in a loop.
I have amended the script at
lines 11, 53, 75, and 86,
and I got this error:
zp.append() # want to append raster values with loop
TypeError: append() takes exactly one argument (0 given)
The amended script is:
import arcpy
import os
import matplotlib.pyplot as plt
import numpy
import math

# settings for datasources
linesPath = "lines"
rasterPath = "raster"
pointPath = "points"  # point shapefile

# settings for output profiles PNG's
profileFolder = r"\test_arcpy\profile"
profilePrefix = "ArcPy_"
profilePostfix = "_new.png"
fldName = "hm"

# standard NoData mapping
NoDataValue = -9999

# describe raster
inDesc = arcpy.Describe(rasterPath)
rasMeanCellHeight = inDesc.MeanCellHeight
rasMeanCellWidth = inDesc.MeanCellWidth

# search cursor
fldShape = "SHAPE@"
flds = [fldShape, fldName]
with arcpy.da.SearchCursor(linesPath, flds) as curs:
    for row in curs:
        feat = row[0]
        profileName = row[1]

        # read extent of feature and create point for lower left corner
        extent = feat.extent
        print "Processing: {0}".format(profileName)
        pntLL = arcpy.Point(extent.XMin, extent.YMin)

        # determine number of rows and cols to extract from raster for this feature
        width = extent.width
        height = extent.height
        cols = int(width / rasMeanCellWidth)
        rows = int(height / rasMeanCellHeight)

        # create Numpy array for extent of feature
        arrNP = arcpy.RasterToNumPyArray(rasterPath, pntLL, cols, rows, NoDataValue)

        # create empty arrays for distance (d) and altitude (z) and NAP line (zv)
        d = []
        z = []
        zv = []
        zp = []  # for the point shapefile

        # loop through polyline and extract a point each meter
        for dist in range(0, int(feat.getLength("PLANAR"))):
            XYpointGeom = feat.positionAlongLine(dist, False)
            XYpoint = XYpointGeom.firstPoint

            # translate XYpoint to row, col (needs additional checking)
            c = int((XYpoint.X - pntLL.X) / rasMeanCellWidth)
            r2 = int((XYpoint.Y - pntLL.Y) / rasMeanCellHeight)
            r = rows - r2
            if c >= cols:
                c = cols - 1
            if r >= rows:
                r = rows - 1

            # extract value from raster and handle NoData
            zVal = arrNP[r, c]
            if not zVal == NoDataValue:
                d.append(dist)
                z.append(arrNP[r, c])
                zv.append(0)
                zp.append()  # want to append raster values with loop

        # define output profile filename and figure size settings
        elevPNG = profileFolder + os.sep + profilePrefix + profileName + profilePostfix
        fig = plt.figure(figsize=(10, 3.5))  # inches

        # plot profile, define styles
        plt.plot(d, z, 'r', linewidth=0.75)
        plt.plot(d, z, 'ro', alpha=0.3, markersize=3)
        plt.plot(d, zv, 'k--', linewidth=0.5)
        plt.plot(d, zp, 'g', alpha=0.3, markersize=3)
        plt.xlabel('Distance from start')
        plt.ylabel('Elevation')
        plt.title('Profile {0} using Python matplotlib'.format(profileName))

        # change font size
        plt.rcParams.update({'font.size': 8})

        # save graph as PNG
        fig.savefig(elevPNG, dpi=300)

        # delete some objects
        del arrNP, XYpointGeom, XYpoint

print "PNGs stored in: {0}".format(profileFolder)
Glad to hear that the script is working (at least for part of what you want to achieve). I assume that the zp list will need to be filled with data to be able to add it to the plot. Since zp will have only a few values, you will need to determine when the values have to be added.

Are the reference values located exactly on the lines? What is the source for these points? To plot the points, the (x, y) locations need to be translated to a distance from the start of the line.
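For the translation step: as far as I know, arcpy polylines offer measureOnLine() (and queryPointAndDistance()) for exactly this. As a library-free sketch of the idea, assuming the line is available as a plain list of (x, y) vertices:

```python
import math

def measure_on_line(vertices, px, py):
    """Distance from the line's start to the projection of (px, py) onto it."""
    best_d2, best_measure = float("inf"), 0.0
    walked = 0.0                       # length covered before the current segment
    for (x1, y1), (x2, y2) in zip(vertices, vertices[1:]):
        dx, dy = x2 - x1, y2 - y1
        seg = math.hypot(dx, dy)
        if seg == 0.0:
            continue
        # clamp the projection parameter so the foot stays on the segment
        t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / (seg * seg)))
        qx, qy = x1 + t * dx, y1 + t * dy
        d2 = (px - qx) ** 2 + (py - qy) ** 2
        if d2 < best_d2:
            best_d2, best_measure = d2, walked + t * seg
        walked += seg
    return best_measure

print(measure_on_line([(0, 0), (10, 0), (10, 5)], 10, 2))  # -> 12.0
```

Once every reference point has been reduced to a (measure, value) pair, zp can hold those pairs and a separate plt.plot over the measures and values will place the green markers at the right spots on the profile.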
Hi Xander,
Thank you for your response.
Yes, the script is working perfectly.
Yes, the reference values (points shapefile) are located exactly on the lines.
The source for these points is the same raster (which was used for the lines).
You are right, the zp[] list needs to be filled.
If you need more information, don't hesitate to ask.
Is this Xander Bakker's code?
If so, a link or proper formatting would be nice
Code Formatting... the basics++
What were the results of the test?
What have you tried? | https://community.esri.com/message/779475-re-add-points-in-the-surface-profile-with-lables-with-arcpy-and-matplotlib?commentID=779475 | CC-MAIN-2018-30 | en | refinedweb |
This C Program calculates the area of a parallelogram. The formula used in this program is Area = b * a, where b is the length of any base and a is the corresponding altitude.

Here is the source code of the C Program to Find the Area of a Parallelogram. The C program is successfully compiled and run on a Linux system. The program output is also shown below.
/*
 * C Program to Find Area of Parallelogram
 */
#include <stdio.h>

int main()
{
    float base, altitude;
    float area;

    printf("Enter base and altitude of the given Parallelogram: \n ");
    scanf("%f%f", &base, &altitude);
    area = base * altitude;
    printf("Area of Parallelogram is: %.3f\n", area);
    return 0;
}
Output:

$ cc pgm27.c
$ a.out
Enter base and altitude of the given Parallelogram:
17 19
Area of Parallelogram is: 323.000
Log message:
Update nut to 2.7.4 (from 2.6.5).
Notable changes:
* GPLv2 or GPLv3 as license;
* C++ language addition;
* upsdrvctl moved from libexec/ to sbin/ (thanks systemd), reflect this change
in rc.d script;
* the timeout patch (patch-ab) has been implemented differently upstream,
but an error message was left out; that part is now carried in
patch-snmp-error-msg.c;
* tiff buildlink3 for ups-nut-cgi required (in addition to gd).
Changelog: (lengthy -- if truncated please refer to):
2016-03-09 Arnaud Quette <arnaud.quette@free.fr>
* configure.ac: Fix autoreconf on Debian For some reason, Automake
doesn't search the current directory correctly when searching for
helper scripts, when 'nut' is running as a git-submodule, as it is
the case with the website repository
* configure.ac: update version to 2.7.4
* drivers/apc-ats-mib.c: snmp-ups: add APC ATS input.source
* drivers/snmp-ups.c: snmp-ups: fix the matching OID tests For both
sysOID and classic methods, we used to test one of the two OIDs
provided in the mib2nut structures. However, these two OIDs
(oid_pwr_status and oid_auto_check) tend to be redundant and
confusing. Replace these matching by an extraction of
{ups,device}.model
* drivers/nut-libfreeipmi.c: nut-ipmipsu: fix compilation warnings
* NEWS, UPGRADING, docs/FAQ.txt, docs/net-protocol.txt, docs/new-
drivers.txt, docs/nut-names.txt, docs/nutdrv_qx-subdrivers.txt,
docs/packager-guide.txt, docs/snmp-subdrivers.txt, lib/README,
scripts/augeas/README: Fix spelling and typo errors Following the
spell-checking scope expansion, do a spell-checking pass
* docs/documentation.txt: Update documentation as per the new devices
scope
* docs/features.txt, docs/user-manual.txt: Update documentation as
per the new devices scope
* docs/Makefile.am: Expand spell-checking scope Add more documents,
outside of the docs/ directory Closes:
2016-03-08 Arnaud Quette <arnaud.quette@free.fr>
* configure.ac, tools/Makefile.am, tools/driver-list-format.sh: Check
driver.list[.in] format at make dist time Instead of checking
driver.list.in at configure time, move the checking and
modification into a script that is called at make dist time. The
script can also be called manually, and will try to fix both
driver.list.in and driver.list
2016-03-07 Arnaud Quette <arnaud.quette@free.fr>
* drivers/powerware-mib.c: Remove an erroneous test This was made
test the enumerated values registration in snmp-ups, and should not
have been committed
* data/driver.list.in: HCL: APC ATS AP7724 supported by snmp-ups
These Automatic Transfer Switch should be supported by snmp-ups,
however this was not tested at all Reference:
* NEWS, UPGRADING: Update for release 2.7.4
* drivers/Makefile.am, drivers/apc-ats-mib.c, drivers/apc-ats-mib.h,
drivers/snmp-ups.c: snmp-ups: support APC Automatic Transfer Switch
Following the recent extension of NUT scope and variable namespace,
to support Automatic Transfer Switch (ATS), implement SNMP support
for APC ATS (with help from "maaboo" through github) Reference:
2016-03-04 Arnaud Quette <arnaud.quette@free.fr>
* drivers/baytech-mib.c, drivers/bestpower-mib.c, drivers/compaq-
mib.c, drivers/cyberpower-mib.c, drivers/eaton-mib.c, drivers/mge-
mib.c, drivers/netvision-mib.c, drivers/powerware-mib.c, drivers
/raritan-pdu-mib.c: snmp-ups: fix mib2nut structures Non existent
OIDs, for testing MIB selection, must be expressed as NULL and not
as empty string ("") for the algorithm to work
2016-03-03 Daniele Pezzini <hyouko@gmail.com>
* docs/download.txt: docs: update several download links
2016-03-03 Arnaud Quette <arnaud.quette@free.fr>
* docs/man/upsrw.txt, docs/net-protocol.txt: Clarification on NUMBER
type float values Clarify a bit more documentation on how to
express float values, when using upsrw. That is to say, using
decimal (base 10) english-based representation, so using a dot, in
non-scientific notation. So hexadecimal, exponents, and comma for
thousands separator are forbidden
* clients/upsrw.c, docs/net-protocol.txt, server/netget.c: Prefer
NUMBER to NUMERIC for variable type As per discussion on the
Github pull request, NUMBER would be more suitable than NUMERIC
2015-11-22 Daniele Pezzini <hyouko@gmail.com>
* drivers/nutdrv_qx.c: nutdrv_qx: increase timeouts in 'sgs' USB
subdriver Apparently the previously used timeouts in the 'sgs' USB
subdriver were not always enough, so increase them.
2015-11-11 Daniele Pezzini <hyouko@gmail.com>
* data/driver.list.in: HCL: various TS Shara UPSes supported by
nutdrv_qx Protocol: 'megatec' USB subdriver: 'sgs'
* drivers/nutdrv_qx.c: nutdrv_qx: make sure 'sgs' USB subdriver uses
only what it reads Since, in 'sgs' USB subdriver, we read only 8
bytes at a time and we expect the first byte to tell us the length
of the data that follows, make sure we don't use more than what we
read from the device in case the first byte is not what we expect
it to be.
2015-03-04 Daniele Pezzini <hyouko@gmail.com>
* drivers/nutdrv_qx.c: nutdrv_qx: increase verbosity of 'sgs' USB
subdriver In 'sgs' USB subdriver: - be more verbose when
debugging, - always print the return code when dealing with an
error.
2014-01-31 Daniele Pezzini <hyouko@gmail.com>
* docs/man/nutdrv_qx.txt: nutdrv_qx: update man page for new 'sgs'
USB subdriver
2014-01-31 Ronaldo Yamada <rhyamada@gmail.com>
* drivers/nutdrv_qx.c: nutdrv_qx: add new 'sgs' USB subdriver to
support TS Shara units
2016-03-02 Arnaud Quette <arnaud.quette@free.fr>
* data/driver.list.in: HCL: added Eaton Powerware 9125-5000g
Supported with the additional USB card, with the bcmxcp_usb driver
* docs/man/upsrw.txt, docs/net-protocol.txt: Clarification on NUMERIC
type float values Clarify documentation on how to express float
values, when using upsrw. That is to say, using decimal english-
based representation, so using a dot
* drivers/mge-xml.c: netxml-ups: fix Eaton XML published data Some
raw protocol data were wrongly published, and are now commented.
Also add some R/W flags to ambient thresholds Closes:
* tools/nut-scanner/nut-scanner.c: nut-scanner: fix thread attachment
Add a test to have the right thread waiting for the scan to be
complete (patch from Michal Hlavinka, Red Hat)
* configure.ac, tools/nut-scanner/nutscan-init.c, tools/nut-
scanner/scan_avahi.c, tools/nut-scanner/scan_ipmi.c, tools/nut-
scanner/scan_nut.c, tools/nut-scanner/scan_snmp.c, tools/nut-
scanner/scan_usb.c, tools/nut-scanner/scan_xml_http.c: nut-scanner:
don't depend on development libraries nut-scanner was previously
trying to use directly libXXX.so (libusb-0.1, libfreeipmi,
libnetsnmp, libavahi-client, libneon, libupsclient}. However, these
files are generally provided by the development packages. nut-
scanner now tries to look at some known paths, including the one
provided through --libdir, to find the correct libraries Closes:
2016-03-01 Arnaud Quette <arnaud.quette@free.fr>
* clients/upsrw.c, docs/net-protocol.txt, server/netget.c: Default to
type NUMERIC for variables Any variable that is not STRING, RANGE
or ENUM is just a simple numeric value. The protocol documentation
(net-protocol.txt) was previously stating that "The default <type>,
when omitted, is integer." which was not fully true, since a
variable could also be a float. Hence, the wording was changed to
advertise this, and that each driver is then responsible for
handling values as either integer or float. Moreover, instead of
returning a TYPE "UNKNOWN", return "NUMERIC", which is more
suitable, and aligned with the NUT protocol specification
* tools/nut-snmpinfo.py: SNMP subdriver generator: fix output
formatting
* tools/nut-snmpinfo.py: SNMP subdriver generator: discard commented
lines Discard any commented mib2nut_info_t declaration, which
should thus not be taken into account
2016-02-26 Arnaud Quette <arnaud.quette@free.fr>
* data/driver.list.in, drivers/Makefile.am, drivers/eaton-ats-mib.c,
drivers/eaton-ats-mib.h, drivers/snmp-ups.c: snmp-ups: support
Eaton Automatic Transfer Switch Following the recent extension of
NUT scope and variable namespace, to support Automatic Transfer
Switch (ATS), implement SNMP support for Eaton ATS. Note that this
device can also be supported through Eaton XML/PDC (XML over HTTP)
protocol, supported by the NUT netxml-ups driver
* data/cmdvartab, docs/nut-names.txt: Extend namespace for Automatic
Transfer Switch (ATS) Extend NUT namespace to support a new type
of power device: ATS - Automatic Transfer Switch. These devices
are used to setup 2 power systems, such as UPS, to power a single
power supply system, and be able to automatically transfer between
the input sources in case of failure of the primary one. The added
variable are for now limited to 'input.source' and
'input.source.preferred', but may be extended if needed
2016-02-25 C Fraire <cfraire@me.com>
* docs/scheduling.txt: Fix docs location of upssched to sbin
2016-02-25 Arnaud Quette <arnaud.quette@free.fr>
* scripts/subdriver/gen-snmp-subdriver.sh: snmp-ups: add the last
missing element in the structure
* drivers/apc-mib.c, drivers/bestpower-mib.c, drivers/compaq-mib.c,
drivers/cyberpower-mib.c, drivers/delta_ups-mib.c, drivers/huawei-
mib.c, drivers/ietf-mib.c, drivers/mge-mib.c, drivers/netvision-
mib.c, drivers/powerware-mib.c, drivers/xppc-mib.c,
scripts/subdriver/gen-snmp-subdriver.sh: snmp-ups: fix values
lookup terminating element The terminating element should really
be NULL, and not the string "NULL", as it was originally done, back
in 2002
* drivers/snmp-ups.c: snmp-ups: revert order of the NULL/"NULL" test
Fix a segfault when doing first the string comparison test
* drivers/snmp-ups.c: snmp-ups: register values enumerations
Whenever there is a values lookup structure for read/write data,
push the values as enumerations for upsrw
* drivers/snmp-ups.c: snmp-ups: try to lookup values for numeric
elements Numeric elements can also use the value resolution
mechanism
* drivers/snmp-ups.c: snmp-ups: counter test sysOID with a test OID
Some devices have buggy sysOID exposed. Allow to counter test
another OID, to be able to select between different mapping
structures
2016-02-24 Arnaud Quette <arnaud.quette@free.fr>
* scripts/subdriver/gen-snmp-subdriver.sh: SNMP subdriver creation
script: allow sysOID override Allow to use -s to override buggy
sysOID in some device FW. In this case, the sysOID entry in the
mib2nut structure should be set to NULL
2016-02-11 Arnaud Quette <arnaud.quette@free.fr>
* drivers/raritan-pdu-mib.c: snmp-ups: fix macaddr support for
Raritan PDU Raritan MIB was fixed to expose macaddr on
device.macaddr instead of ups.macaddr
* drivers/baytech-mib.c: snmp-ups: fix macaddr support for Baytech
PDU Baytech MIB was fixed to expose macaddr on device.macaddr
instead of ups.macaddr
* drivers/eaton-mib.c: snmp-ups: fix and complete macaddr support for
Eaton Eaton G2 and G3 can now expose the MAC address of the
device, using device.macaddr. Eaton G1 Aphel was fixed to expose
this data on device.macaddr instead of ups.macaddr
* drivers/snmp-ups.c: snmp-ups: add support for hexadecimal octet
strings
* drivers/snmp-ups.c: snmp-ups: fallback for classic MIB detection
If the sysOID matching has failed, then snmp-ups uses ups.model to
get an OID to test. In case ups.model is not available, fallback to
trying to use device.model instead
* docs/images/nut_layering.png, docs/images/nut_layering.svg: Refresh
and complete NUT architecture diagram
2016-02-08 Arnaud Quette <arnaud.quette@free.fr>
* drivers/powerware-mib.c: snmp-ups: extend Eaton 3ph outputSource
values map Add the new status values for xupsOutputSource
(.1.3.6.1.4.1.534.1.4.5.0), that maps to both ups.status and
ups.type
2016-02-03 Arnaud Quette <arnaud.quette@free.fr>
* drivers/powerware-mib.c: snmp-ups: improve support for Eaton 3ph
Improve support for temperature and humidity data, including: -
ups.temperature now available - fixing ambient.temperature
(previously pointing at a wrong OID) - ambient.humidity now
available - the following settings now available: *
ups.temperature.low * ups.temperature.high * ambient.humidity.high
* ambient.humidity.low * ambient.temperature.high *
ambient.temperature.low
2016-02-01 Daniele Pezzini <hyouko@gmail.com>
* data/driver.list.in: HCL: various APCUPSD-controlled APC devices
via apcupsd-ups Originally reported by GitHub user @Thermionix.
Reference:
2016-01-31 Charles Lepple <clepple+nut@gmail.com>
* docs/man/nutdrv_atcl_usb.txt: man/nutdrv_atcl_usb: point to
nutdrv_qx (fuji) for 0001:0000 Also update best guess for the USB-
to-serial converter situation.
* docs/FAQ.txt: FAQ: udevadm for fixing permissions
2016-01-30 Charles Lepple <clepple+nut@gmail.com>
* drivers/nut-libfreeipmi.c: FreeIPMI: do not split function
arguments with a conditional Alternate approach to suggestion by
Romero B. de S. Malaquias Closes:
2016-01-24 Charles Lepple <clepple+nut@gmail.com>
* docs/config-notes.txt: Documentation: fix formatting Put syntax
examples in verbatim mode, and remove spaces from ends of lines.
* drivers/apc-hid.c: usbhid-ups: handle missing USB strings in APC
code Closes:
Might fix: … ug/1483615
2016-01-23 Charles Lepple <clepple+nut@gmail.com>
* data/driver.list.in: HCL: added NHS Laser Senoidal 5000VA Source: … devel/7123
Closes:
2016-01-14 Arnaud Quette <arnaud.quette@free.fr>
* drivers/snmp-ups.c: snmp-ups: fix staleness detection With some
ePDUs or devices using template for outlet and outlet.group,
communication loss with the device were not detected, due to the
handling mechanism. Simply skipping commands for templates, after
the init time, is sufficient to avoid this issue
2016-01-05 Arnaud Quette <arnaud.quette@free.fr>
* drivers/snmp-ups.c: snmp-ups: improve stale communication recovery
Disable the 10 iterations to retry communicating with stale device.
This was leading up to 10 x 30 seconds, i.e. 5 minutes, before being able
to get data again
* docs/new-drivers.txt, docs/nut-names.txt: Document
battery.charger.status This will in time replace the historic CHRG
and DISCHRG flags published in ups.status. Closes
2016-01-03 Charles Lepple <clepple+nut@gmail.com>
* data/driver.list.in: HCL: Sweex model P220 via blazer_usb
Reference:
2016-01-04 Arnaud Quette <arnaud.quette@free.fr>
* drivers/ietf-mib.c, drivers/ietf-mib.h, drivers/snmp-ups.c: snmp-
ups: add support for Tripplite units on IETF mib These devices
expose ".1.3.6.1.4.1.850.1", which could be supported through this
specific MIB. For now, just link that to the IETF MIB, to provide
a first level of support Reference:
2015-12-30 Arnaud Quette <arnaud.quette@free.fr>
* configure.ac: First stab at checking driver.list.in format
2015-12-29 Charles Lepple <clepple+nut@gmail.com>
* scripts/upower/95-upower-hid.rules: upower: update for AEG
2015-12-29 Arnaud Quette <arnaud.quette@free.fr>
* drivers/powercom.c: Fix the processing of output voltage for KIN
units The processing of output voltage requires to also take into
account the line voltage, as reported by Patrik Dufresne. This may
still need some further adjustments Reference:
* drivers/powercom.c: Fix the processing of input voltage for KIN
units The processing of input voltage requires to also take into
account the line voltage, as reported by Patrik Dufresne. Also bump
the driver version to 0.16, since 0.15 was already used, but not
set Reference:
* drivers/mge-hid.c: Fix letter case for AEG USB VendorID The letter
case of this VendorID may be important for generated files, such as
the udev ones (reported by Charles Lepple)
2015-12-28 Arnaud Quette <arnaud.quette@free.fr>
* data/driver.list.in, drivers/mge-hid.c: HCL: AEG PROTECT B / NAS
supported by usbhid-ups Reference:
2015-12-17 Daniele Pezzini <hyouko@gmail.com>
* data/driver.list.in: HCL: Legrand Keor Multiplug supported by
nutdrv_qx Reference:
2015-12-09 Andrey Jr. Melnikov <temnota.am@gmail.com>
* drivers/bcmxcp_usb.c: Don't call usb_close() after reset
2015-12-08 Andrey Jr. Melnikov <temnota.am@gmail.com>
* drivers/bcmxcp_usb.c: Call usb_reset() when driver unable to claim
device
* drivers/bcmxcp.h, drivers/bcmxcp_usb.c: Refactor get_answer()
routine, make it properly deal with multi-packets responses. Lower
stack usage.
2015-07-27 Daniele Pezzini <hyouko@gmail.com>
* common/common.c, common/str.c, drivers/bcmxcp.c, drivers/blazer.c,
drivers/blazer_ser.c, drivers/blazer_usb.c, drivers/libhid.c,
drivers/mge-xml.c, drivers/nutdrv_qx.c, drivers/powerp-bin.c,
drivers/powerp-txt.c, drivers/powerpanel.c, drivers/tripplitesu.c,
drivers/upscode2.c, include/common.h, include/str.h, server/upsd.c,
tools/nut-scanner/scan_usb.c: common: consolidate some string-
related functions Move *trim*() functions from common to str.
Prepend the 'str_' common prefix. Bailout early if string is NULL
or empty. In *trim_m() functions, make sure the string containing
characters to be removed is not NULL and bailout early if empty.
Add new str_trim[_m]() functions to remove both leading and
trailing characters at the same time. Update all the tree
accordingly; versioning where appropriate.
* common/Makefile.am, common/str.c, include/Makefile.am,
include/common.h, include/str.h: common: add some string-related
functions
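A minimal sketch of what such a both-ends trim helper can look like, following the behavior the entries describe (bail out early on NULL or empty input, strip a set of characters from both ends, in place). This body is illustrative only, not the actual common/str.c code:

```c
#include <string.h>

/* Strip any of the characters in 'junk' from both ends of 's',
 * in place.  Bails out early on NULL/empty input, as the changelog
 * describes.  Illustrative sketch, not the real str_trim_m(). */
static char *trim_m(char *s, const char *junk)
{
    size_t len, start;

    if (s == NULL || *s == '\0' || junk == NULL || *junk == '\0')
        return s;

    /* trailing characters */
    len = strlen(s);
    while (len > 0 && strchr(junk, s[len - 1]) != NULL)
        s[--len] = '\0';

    /* leading characters */
    start = strspn(s, junk);
    if (start > 0)
        memmove(s, s + start, len - start + 1);

    return s;
}
```

For example, trim_m(buf, " ") turns "  hello  " into "hello" in place.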
2015-11-10 Charles Lepple <clepple+nut@gmail.com>
* data/driver.list.in: HCL: Electrys UPS 2500 (nutdrv_qx and
blazer_ser) Closes
* data/driver.list.in: HCL: Eaton E Series DX UPS 1-20 kVA uses
blazer_ser Closes
2015-11-09 Arnaud Quette <arnaud.quette@free.fr>
* drivers/eaton-mib.c: snmp-ups: add number of outlets in Eaton ePDU
groups
* docs/nut-names.txt: Add a variable for the number of outlets in a
group Added 'outlet.group.n.count' which provides the number of
outlets in the outlet group 'n'
2015-11-06 Daniele Pezzini <hyouko@gmail.com>
* drivers/nutdrv_qx_voltronic-qs.c: nutdrv_qx: update 'voltronic-qs'
subdriver Since, for devices supported by 'voltronic-qs'
subdriver, in reality: - invalid commands or queries are echoed
back, - accepted commands are followed by action without any
further reply, update the subdriver interface accordingly. Also: -
change slightly the way we publish protocol as ups.firmware.aux, -
update F's reply examples and some info_type (ratings;
output.frequency) in QX to NUT table to reflect reality, - increase
version number.
2015-10-19 Daniele Pezzini <hyouko@gmail.com>
* drivers/nutdrv_qx_voltronic-qs-hex.c: nutdrv_qx: improve 'T'
protocol support in 'voltronic-qs-hex' subdriver Since the last
byte of the reply to the QS query (before the trailing CR) of
devices that implement the 'T' protocol holds in reality ratings
information (nominal output frequency/voltage and nominal battery
voltage) in its bits, change the 'voltronic-qs-hex' subdriver
accordingly. Also: - change slightly the way we publish protocol as
ups.firmware.aux, - increase version number.
* drivers/nutdrv_qx_voltronic-qs-hex.c: nutdrv_qx: simplify
{in,out}put voltage conversion in 'voltronic-qs-hex' In
'voltronic-qs-hex' subdriver, instead of calculating separately the
fractional and integer part of input and output voltage, do it at
once. Also, increase version number.
* drivers/nutdrv_qx_voltronic-qs-hex.c: nutdrv_qx: improve protocol
identification in 'voltronic-qs-hex' Since 'V' protocol, in
reality, never happens to use the encoded version of the reply to
the QS query, but it always uses the plain version already
implemented in 'voltronic-qs' subdriver, remove it from the
identification process of 'voltronic-qs-hex' subdriver. Also,
remove some non-significant entries from the testing table and
increase version number.
* drivers/nutdrv_qx_voltronic-qs-hex.c: nutdrv_qx: harmonize
declarations/definitions in 'voltronic-qs-hex' In 'voltronic-qs-
hex' subdriver, the scope of support functions is limited to the
subdriver as rightly stated in forward declarations, so correct
their definitions to reflect that. Also, increase version number.
2015-10-09 Arnaud Quette <arnaud.quette@free.fr>
* docs/nut-qa.txt: Reference Black Duck OpenHUB in QA documentation
Closes networkupstools/nut#192
2015-10-08 Arnaud Quette <arnaud.quette@free.fr>
* drivers/snmp-ups.c: snmp-ups: also use __func__ for additional
traces
* drivers/powerware-mib.c: powerware-mib: more comments for RFC
device.event Add more comments on the need to RFC device.event for
some data that are currently published under ups.alarm
* drivers/powerware-mib.c: snmp-ups: improve Eaton 3-phase UPS alarms
reporting Eaton 3phase UPS, using the Powerware MIB, can expose
many new alarms. Also use the standard driver "X.YY" versioning,
and bump subdriver release to "0.85"
* drivers/snmp-ups.c, drivers/snmp-ups.h: snmp-ups: fix and improve
the ups.alarms mechanism This mechanism allows to walk a subtree
(array) of alarms, composed of OID references. The object
referenced should not be accessible, but rather when present, this
means that the alarm condition is TRUE. Both ups.status and/or
ups.alarm values can be provided
* drivers/snmp-ups.c: snmp-ups: fix on some snprintf calls Some
snprintf calls are using dynamically allocated variables, which
doesn't work with sizeof
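The pitfall behind this fix is that sizeof on a pointer yields the pointer's size, not the allocation's; only a true array gives the buffer capacity. A minimal illustration (not the actual snmp-ups code):

```c
#include <stdio.h>
#include <stdlib.h>

/* With a true array, sizeof yields the buffer capacity; with a
 * pointer to a dynamically allocated buffer it yields only
 * sizeof(char *), so snprintf would silently truncate.
 * Illustration only. */
static size_t bound_from_array(void)
{
    char buf[64];

    snprintf(buf, sizeof(buf), "%s", "correct: sizeof sees 64");
    return sizeof(buf);                 /* 64: the real capacity */
}

static size_t bound_from_pointer(void)
{
    char *buf = malloc(64);
    size_t seen;

    if (buf == NULL)
        return 0;
    seen = sizeof(buf);                 /* sizeof(char *), not 64! */
    snprintf(buf, 64, "%s", "fix: pass the allocation size");
    free(buf);
    return seen;
}
```

The fix is simply to pass the allocation size explicitly, as the second helper does.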
* drivers/snmp-ups.c: snmp-ups: use __func__ in debug messages
* drivers/snmp-ups.c: snmp-ups: nut_snmp_get_oid() returns TRUE on
success
* drivers/snmp-ups.c: snmp-ups: only use snprintf calls instead of
sprintf
* drivers/eaton-mib.c, drivers/snmp-ups.c: snmp-ups: simplify
handling of other alarms outlet, outlet groups and phase alarms
are now using a simplified approach that does not require specific
lookup structure to adapt alarm messages. This applies to Eaton
ePDU G2/G3
2015-09-22 Arnaud Quette <arnaud.quette@free.fr>
* drivers/snmp-ups.c: snmp-ups: fix a typo error in debug message
Unknown is spelled with an ending N (reported by Evgeny "Jim"
Klimov, from Eaton)
* drivers/snmp-ups.c: snmp-ups: optimize phase number extraction
efficiency Since we know that we are processing an alarm for a
phase "Lx", don't use strchr, but simply index (reported by Evgeny
"Jim" Klimov, from Eaton)
* docs/nut-names.txt, drivers/eaton-mib.c: snmp-ups: use dash-
separator for out-of-range For the sake of coherence with other
status relative to thresholds, "out of range" frequency status now
also use dash as separator, instead of space
* drivers/eaton-mib.c: Fix a spelling error in comments
* drivers/eaton-mib.c: snmp-ups: fix a typo error on Eaton ePDU G2/G3
MIB Critical is really spelled critical, and not cricital, as used
in the various status thresholds value-lookup structures (reported
by Evgeny "Jim" Klimov, from Eaton)
* data/cmdvartab: Mention the unit for ambient humidity information
Add an explicit mention that ambient information related to
humidity use the "(percent)" unit
* data/cmdvartab, docs/nut-names.txt: Mention the unit for input
voltage information Add an explicit mention that input information
related to voltage use the "Volts (V)" unit
* data/cmdvartab: Mention the unit for ambient temperature
information Add an explicit mention that ambient information
related to temperature use the "degrees C" unit
2015-09-18 Arnaud Quette <arnaud.quette@free.fr>
* drivers/eaton-mib.c: snmp-ups: add outlet group identifier for
Eaton ePDU Eaton ePDU can now publish the parent group of each
outlet
* docs/nut-names.txt: Extend outlet collection namespace with group
ID An outlet can now publish the group to which it belongs to
* drivers/snmp-ups.c: snmp-ups: complete nut_snmp_get_{str,int}
These methods now allow to get the value of an OID returned by the
source OID (as for the sysOID). In case of failure (non existent
OID), the functions return the last part of the returned OID (ex:
1.2.3 => 3)
* drivers/snmp-ups.c: snmp-ups: create a nut_snmp_get_oid() function
This method allows to get the value of an OID which is also an OID
(as for the sysOID), without trying to get the value of the pointed
OID. This will allow to use nut_snmp_get_{int,str}() the get the
value of the pointed OID
2015-09-17 Arnaud Quette <arnaud.quette@free.fr>
* drivers/eaton-mib.c: snmp-ups: outlet groups type handling for
Eaton ePDU Eaton ePDU can now publish the type of outlet group
* docs/nut-names.txt: Extend outlet group collection namespace with
type The type of outlet group can now be published, part of the
new outlet.group data collection
* drivers/eaton-mib.c: snmp-ups: outlet groups commands for Eaton
ePDU Eaton ePDU can now handle commands outlet groups, including
on, off and reboot (cycle)
* drivers/snmp-ups.c: snmp-ups: fix commands handling for outlet
groups The su_instcmd() function of snmp-ups is now adapted to
support outlet groups
* drivers/eaton-mib.c: Advanced outlets groups alarm handling for
Eaton ePDU Eaton ePDU can now handle alarms on outlets groups, for
voltage and current, relative to the configured thresholds
* drivers/snmp-ups.c: snmp-ups: improvements for outlet groups and
alarms Improve the code for general template management, including
outlets and outlets groups for now, and add alarm management for
outlet groups, the same way as for outlets
2015-09-16 Arnaud Quette <arnaud.quette@free.fr>
* drivers/snmp-ups.c: snmp-ups: fix set variable for outlet groups
The setvar() function of snmp-ups is now adapted to support outlet
groups
* drivers/eaton-mib.c: snmp-ups: outlet groups handling for Eaton
ePDU Eaton ePDU can now handle outlet groups, including voltage
and current (with thresholds and status relative to the configured
thresholds), along with power and realpower. A subsequent commit
will address the alarms, settings and commands. Bump subdriver
version to 0.30
* drivers/snmp-ups.c: snmp-ups: update debug message The template
guestimation function name was changed, but the debug message was
left with the old function name
2015-09-15 Arnaud Quette <arnaud.quette@free.fr>
* docs/nut-names.txt: Extend NUT namespace with outlet.group
collection A new data collection, called "outlet.group", is now
available. It provides grouped management for a set of outlets. The
same principles and data than the outlet collection apply to
outlet.group
* drivers/snmp-ups.c, drivers/snmp-ups.h: snmp-ups: adapt template
mechanisms for outlet groups The template handling mechanisms,
originally created for outlets, is now adapted to also manage
outlet groups
2015-09-14 root <root@arno-zbook15.euro.ad.etn.com>
* docs/nut-names.txt: Add a note on the outlet.count variable
2015-09-14 Arnaud Quette <arnaud.quette@free.fr>
* drivers/eaton-mib.c: snmp-ups: add nominal input current for Eaton
ePDU snmp-ups now provides input.[Lx.]current.nominal for Eaton
ePDU G2/G3, both for 1phase and 3phase
* drivers/eaton-mib.c: snmp-ups: better input.power handling for
Eaton ePDUs Improve the way we declare and process input.power, as
previously done for input.realpower, in order to address the
variations between Eaton ePDUs G2 and G3
2015-09-11 Arnaud Quette <arnaud.quette@free.fr>
* drivers/eaton-mib.c: snmp-ups: publish part number for Eaton ePDU
device.part was standardized in NUT namespace, so enable the
declaration for Eaton ePDU
* drivers/eaton-mib.c: snmp-ups: 3-phase alarm handling for Eaton
ePDU Eaton ePDU can now handle alarms on 3-phase, currently
limited to voltage and current, relative to the configured
thresholds
* drivers/snmp-ups.c: snmp-ups: implement 3-phase alarm handling
snmp-ups now allows to publish 3-phase alarms in ups.alarm, the
same way as with outlet. Declaration of such alarms are done using
"Lx.alarm". info_lkp_t structures messages are shared templates
with outlets, and use the string formats to include the context
(outlet or phase) and the number (of the outlet or phase) in alarm
messages. These alarms are then published in "ups.alarm", with the
standard mechanism for alarm publication
* docs/nut-names.txt: Extend 3-phase collection namespace with alarms
3-phase data collection now allows to specify alarms, the same way
as with the outlet collection ("outlet.n.alarm"), but using
"Lx.alarm" (for example "L1.alarm"). These alarms are then
published in "ups.alarm", with the standard mechanism for alarm
publication
* drivers/eaton-mib.c: Advanced threshold handling for Eaton 3-phase
ePDU Eaton ePDU can now handle warning and critical thresholds
settings and status for input voltage and current on 3-phase units.
Alarms are however still to be implemented
* docs/nut-names.txt: Extend 3-phase collection namespace with
threshold 3-phase data collection now allows to specify low / high
warning and critical thresholds for voltage and current. Status
relative to the thresholds also exist for these data
2015-09-07 Arnaud Quette <arnaud.quette@free.fr>
* drivers/snmp-ups.c, drivers/snmp-ups.h: snmp-ups: fix loss of
precision when setting values su_setvar() was losing precision
when converting and casting the provided values to send to the SNMP
agent. As an example, with an OID in millivolt (multiplier set to
0.001), when providing 238 (V) using upsrw, the value sent to the
SNMP agent was 237999, so leaking 0.1 volt
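The class of bug described here is easy to reproduce: a decimal multiplier such as 0.001 has no exact binary representation, so dividing in single precision and truncating with a cast can land one unit short. A hedged sketch with hypothetical helpers (the exact conversion path in snmp-ups may differ):

```c
/* Convert a value in volts to an SNMP integer using a decimal
 * multiplier (e.g. 0.001 for millivolts).  0.001 is not exactly
 * representable in binary floating point, so plain truncation can
 * land one unit short.  Hypothetical helpers, written to illustrate
 * the bug class -- not the actual snmp-ups code. */
static long scaled_truncated(float value, float multiplier)
{
    return (long)(value / multiplier);          /* 238 V -> 237999 */
}

static long scaled_rounded(float value, float multiplier)
{
    return (long)(value / multiplier + 0.5f);   /* 238 V -> 238000 */
}
```

With value = 238.0f and multiplier = 0.001f, the truncating version yields 237999 while the rounding version yields the intended 238000.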
2015-09-04 Arnaud Quette <arnaud.quette@free.fr>
* drivers/dstate.c: Extend ups.alarm internal buffer to 1024 chars
Currently, ups.alarm can hold up to 256 chars to expose alarms.
With the recent outlet alarms handling addition, the buffer may
quickly be too small. Thus, increase to 1024, which may still not
be sufficient but already provides a bit more room
* drivers/eaton-mib.c: snmp-ups: outlet alarm handling for Eaton ePDU
Eaton ePDU can now handle alarms on outlets, currently limited to
outlet voltage and current, relative to the configured thresholds
* drivers/snmp-ups.c: snmp-ups: implement outlets / PDU alarm
handling snmp-ups now allows to publish outlets and PDU alarms in
ups.alarm, the same way as with ups.status. Declaration of such
alarms are done using the outlet template mechanism
("outlet.%i.alarm"). info_lkp_t structures messages can also use
the string formats to include the outlet number in alarm messages.
These alarms are then published in "ups.alarm", with the standard
mechanism for alarm publication
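The template mechanism described above boils down to instantiating a "%i" format string per outlet, for both the variable name and the alarm message. A minimal sketch with hypothetical names (the real driver walks its info table and feeds the usual alarm publication path):

```c
#include <stdio.h>

/* Expand an "outlet.%i.alarm"-style template for one outlet: the %i
 * placeholder receives the outlet number in both the variable name
 * and the alarm message.  Illustration of the mechanism only. */
static void instantiate_outlet_alarm(int outlet,
                                     char *varname, size_t varlen,
                                     char *message, size_t msglen)
{
    snprintf(varname, varlen, "outlet.%i.alarm", outlet);
    snprintf(message, msglen, "outlet %i voltage above threshold!", outlet);
}
```

For outlet 2 this produces the variable name "outlet.2.alarm" and the message "outlet 2 voltage above threshold!".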
* docs/nut-names.txt: Extend outlet collection namespace with alarms
Outlet data collection now allows to specify alarms, using the
template definitions ("outlet.n.alarm"). These alarms are then
published in "ups.alarm", with the standard mechanism for alarm
publication
2015-09-02 Arnaud Quette <arnaud.quette@free.fr>
* drivers/eaton-mib.c: snmp-ups: outlet threshold handling for Eaton
ePDU Eaton ePDU can now handle warning and critical thresholds
settings and status for outlet voltage and current
* docs/nut-names.txt: Extend outlet collection namespace with
threshold Outlet data collection now allows to specify low / high
warning and critical thresholds for voltage and current. Status
relative to the thresholds also exist for these data
2015-09-01 Arnaud Quette <arnaud.quette@free.fr>
* drivers/eaton-mib.c: snmp-ups: alarms handling for Eaton ePDU
Eaton ePDU can now publish alarms, related to input status
(voltage, frequency and current) and ambient status (temperature
and humidity). Note that alarms are still published under
ups.alarms, though these should belong to either pdu.alarms or
better device.alarms
* drivers/eaton-mib.c: Advanced input threshold handling for Eaton
ePDU Eaton ePDU can now handle warning and critical thresholds
settings and status for input voltage and current, along with the
frequency status
* data/cmdvartab, docs/nut-names.txt: Extend input collection
namespace with threshold Input data collection now allows to
specify low / high warning and critical thresholds for voltage and
current. Status relative to the thresholds also exist for these
data, and for the frequency
2015-08-31 Arnaud Quette <arnaud.quette@free.fr>
* drivers/eaton-mib.c: snmp-ups: ambient dry contacts support for
Eaton ePDU Eaton ambient modules, connected on ePDU, now publish
the status of the connected dry contacts sensors
* data/cmdvartab, docs/nut-names.txt: Extend ambient collection
namespace with dry contacts Ambient data collection now allow to
specify dry contacts sensor status
* drivers/eaton-mib.c: snmp-ups: fix Eaton Pulizzi Switched PDU
multiplier As per the previous commit, to well handle integer RW
variables
* drivers/eaton-mib.c: snmp-ups: ambient threshold handling for Eaton
ePDU Eaton ePDU can now handle warning and critical thresholds and
status for both humidity and temperature
* data/cmdvartab, docs/nut-names.txt: Extend ambient collection
namespace with threshold Ambient data collection now allows
specifying warning and critical thresholds
* drivers/eaton-mib.c: snmp-ups: publish presence of Eaton ambient
sensor Publish the actual presence of ambient sensor for Eaton
ePDU G2 and G3
* data/cmdvartab, docs/nut-names.txt: Publish the actual presence of
an ambient sensor A new data was created (ambient.present) to
publish the actual presence of an ambient sensor
2015-10-06 Charles Lepple <clepple+nut@gmail.com>
* data/driver.list.in: HCL: Asium P700, Micropower LCD 1000 and Eaton
5E1100iUSB
2015-10-06 Daniele Pezzini <hyouko@gmail.com>
* data/driver.list.in: HCL: LYONN CTB-800V supported by nutdrv_qx
Protocol: 'voltronic-qs-hex' Reference:
2015-08-22 Mariano <marianojan@users.noreply.github.com>
* drivers/nutdrv_qx_voltronic-qs-hex.c: nutdrv_qx: add support for
LYONN CTB-800V Small protocol validation change in 'voltronic-qs-
hex' subdriver to add support for the protocol used by the LYONN
CTB-800V UPS.
2015-09-28 Arnaud Quette <arnaud.quette@free.fr>
* docs/new-drivers.txt: Fix spacing error
2015-09-22 Charles Lepple <clepple+nut@gmail.com>
* drivers/solis.c, drivers/solis.h: solis: remove additional warnings
The "Waiting" flag is always zero, and several other variables were
not used.
* drivers/solis.c, drivers/solis.h: solis: clean up warnings Comment
out unused constants, and add 'static' and 'const' wherever
possible.
2015-09-20 Charles Lepple <clepple+nut@gmail.com>
* drivers/Makefile.am, drivers/solis.c: solis: math fixes As
mentioned here:
2015-09-19 bsalvador <bruno.salvador@gmail.com>
* drivers/solis.c, drivers/solis.h: solis: patch for Microsol Back-
Ups BZ1200-BR patch for correct reading for Microsol Back-Ups
BZ1200-BR (rebased onto solis_debug branch, and cleaned up
whitespace. -- CFL) Closes and Closes
2015-09-16 Arnaud Quette <arnaud.quette@free.fr>
* data/driver.list.in, docs/man/snmp-ups.txt, drivers/powerware-
mib.c, drivers/powerware-mib.h, drivers/snmp-ups.c: snmp-ups: add
Eaton Power Xpert Gateway UPS Card This newer generation of SNMP
card is used for BladeUPS or other UPS, and is serving the same
XUPS MIB, as in the "pw" subdriver
* scripts/subdriver/gen-snmp-subdriver.sh: Update SNMP subdriver
generation script Complete the documentation by adding some notes
and examples; fix the MIBs directories list and the "keep
temporary files" option
2015-09-11 Arnaud Quette <arnaud.quette@free.fr>
* drivers/snmp-ups.c: Improve log/debug output trace
2015-09-08 Charles Lepple <clepple+nut@gmail.com>
* drivers/solis.c: solis: resync with end-of-packet character (0.64)
Suggested by @rpvelloso in
ssues/231#issuecomment-134795747 Note that the driver could
possibly get out-of-sync after initial detection.
2015-09-07 Charles Lepple <clepple+nut@gmail.com>
* docs/man/macosx-ups.txt, drivers/macosx-ups.c: macosx-ups:
gracefully handle disconnection of UPS Tested on 10.9.5 and
10.10.5. Returns "data stale" when UPS disappears.
2015-09-07 Arnaud Quette <arnaud.quette@free.fr>
* drivers/powerware-mib.c: Bump Powerware SNMP subdriver version
2015-09-04 Charles Lepple <clepple+nut@gmail.com>
* Makefile.am, docs/configure.txt, docs/new-clients.txt, tools/nut-
scanner/README: doc: correct remaining `--with-lib` references
Credit: Paul Vermeer
2015-09-01 Arnaud Quette <arnaud.quette@free.fr>
* drivers/snmp-ups.h: Minor updates to TODO comments
* drivers/snmp-ups.c: Implement ups.alarm for SNMP snmp-ups now
allows publishing alarms in ups.alarm, the same way as with
ups.status
2015-08-31 Arnaud Quette <arnaud.quette@free.fr>
* drivers/snmp-ups.c: Proper handling of integer RW variables RW
variables were previously supposed to always be strings. Thus, the
multiplier (using the info_len field) was not applied. Also allow
setting float values, not only integers
* drivers/snmp-ups.c, drivers/snmp-ups.h: Fix default SNMP retries
and timeout The previous patch was using the default values from
Net-SNMP, which are set to -1. When the user was not providing
overridden values, this was causing the driver to be unable to
establish the communication with the device. The default values are
now fixed, as documented (i.e. 5 retries and a timeout of 1
second). Also bump the driver version to 0.74
* docs/man/ups.conf.txt, drivers/dstate.c: Make more obvious the
socket write failure Document the errors that require the use of
the 'synchronous' flag. Also use debug level 1 instead of 2 for the
debug message
2015-08-23 Charles Lepple <clepple+nut@gmail.com>
* drivers/solis.c: solis: Add upsdebug*() and upslogx() calls for
diagnostics
2015-08-18 Kenny Root <kenny@the-b.org>
* drivers/powerware-mib.c: Add ups.start.auto for Powerware SNMP Use
the IETF UPS MIB to indicate to Powerware devices that it should
restart when mains power is applied.
* drivers/powerware-mib.c: Fix some indentation problems in PowerWare
SNMP
* drivers/powerware-mib.c: Add shutdown.return for Powerware SNMP
The Powerware MIB supports the concept of shutting down with a
delay and then returning when line power is restored. The delay is
set to 0 seconds currently.
* drivers/powerware-mib.c: Add load.{off,on}.delay for Powerware SNMP
The commands to shut down with delay have existed since the first
version of the Powerware MIB so add the newer commands
"load.off.delay" and "load.on.delay" to aid in shutdown scripts.
2015-08-07 Arnaud Quette <arnaud.quette@free.fr>
* drivers/dummy-ups.c, drivers/dummy-ups.h: Fix dummy-ups for
external value changes dummy-ups allows changing the values of the
published variables through the standard upsrw tool. This method
is handy for scripting value changes, in a controlled way, compared
to the dynamic version (using the TIMER keyword in .dev files),
which changes the values in an uncontrolled way. Bump driver
version to 0.14
* m4/nut_check_libnss.m4: Fully check for a working Mozilla NSS
Rework the NSS tests so that just having runtime libraries
installed is not enough. Moreover, since GNU libc6 also provides a
nss.h header, the test now checks for both nss.h and ssl.h Closes
networkupstools/nut#184
* docs/download.txt: Fix Red Hat / Fedora packages repository URL
2015-08-03 Tomas Halman <TomasHalman@eaton.com>
* clients/nutclient.cpp: Problem: nutclient library sometimes reads
socket closed by server. Solution: proper read return value
evaluation
2015-08-04 Arnaud Quette <arnaud.quette@free.fr>
* tools/nut-scanner/scan_snmp.c: Fix a crash on a 2nd call to
libnutscan on behalf of Tomas Halman, from Eaton Opensource Team
2015-07-24 Nash Kaminski <nashkaminski@gmail.com>
* drivers/tripplitesu.c: tripplitesu: Fix initialization when
tripplite firmware is buggy With some Tripplite SU1000RT2U (and
possibly more) UPSes, a firmware bug causes a malformed response to
the very first command that is sent after the serial port is opened
following a warm or cold boot of the system. My theory is that this
is related to either the RS232 data lines or handshaking lines being
pulled high once the server's UART is powered; however, I have not
determined precisely whether it is the data line being pulled high
or the handshaking lines being asserted. I have been able to
consistently reproduce the issue where the driver fails to start on
the first attempt after a cold/warm boot across 3 different machines
and 2 SU1000RT2U UPSes. To work around this, the initial enumeration
is repeated a 2nd time after 300 ms (to allow all garbage data to
arrive) if the first attempt fails, which allows the driver to
consistently start up successfully on the 1st attempt.
Closes networkupstools/nut#220
2015-07-24 Tim Smith <tsmith84@gmail.com>
* INSTALL.nut: Spelling fixes Spelling fixes and capitalization of
SUSE
2015-07-23 Arnaud Quette <arnaud.quette@free.fr>
* scripts/augeas/nutupsconf.aug.tpl: Update Augeas lens for ups.conf
Add the various missing global directives and ups fields
2015-07-20 Daniele Pezzini <hyouko@gmail.com>
* data/driver.list.in: HCL: fix case and spacing
2015-07-18 Daniele Pezzini <hyouko@gmail.com>
* drivers/nutdrv_qx.c: nutdrv_qx: when targeting 'UPS No Ack'
consider also the trailing CR In 'fabula' and 'krauler' USB
subdrivers, take into account also the trailing CR in the reply
while looking for a 'UPS No Ack'.
* drivers/nutdrv_qx.c: nutdrv_qx: stay true to return code in
'fabula' USB subdriver In 'fabula' USB subdriver, when reading
'UPS No Ack' from device, since we already mimic a timeout, also
empty the reply.
2015-07-11 Charles Lepple <clepple+nut@gmail.com>
* data/driver.list.in: HCL: Fideltronic INIGO Viper 1200 supported by
nutdrv_qx
2015-07-02 Charles Lepple <clepple+nut@gmail.com>
* drivers/usbhid-ups.c: usbhid-ups: bump version to 0.41 Both the
eaton_dual_reportdesc and usbhid_ups_input_vs_feature branches
claimed version 0.40, so let's disambiguate the merged version.
2015-07-02 Arnaud Quette <arnaud.quette@free.fr>
* drivers/libhid.c: Add a debug trace for the number of HID objects
found
* drivers/hidtypes.h: Fix testing typo MAX_REPORT is really 500 (HID
objects), not 50!
* drivers/hidparser.c: Report when there are further unprocessed HID
objects Following the last commits, and especially the MAX_REPORT
one, warn whenever there are remaining HID objects that were not
processed. This may serve
* drivers/hidtypes.h: Increase the maximum number of HID objects The
previous value (300) was causing a trim of the remaining objects.
Increase the value to 500, which should give a bit of time
* drivers/libshut.c, drivers/libshut.h, drivers/libusb.c, drivers
/usb-common.h, drivers/usbhid-ups.c: Add support for Eaton dual HID
report descriptor All devices use HID descriptor at index 0.
However, some newer Eaton units have a light HID descriptor at
index 0, and the full version is at index 1 (in which case,
bcdDevice == 0x0202). This dual report descriptor approach is due
to the fact that the main report descriptor is now too heavy, and
causes some BIOSes to hang. A light version is thus provided at the
default index, solving these BIOS issues
2015-06-27 Charles Lepple <clepple+nut@gmail.com>
* drivers/macosx-ups.c: macosx-ups: fix for 10.10 (Yosemite); v1.1
In OS X 10.9 and earlier, IOPSGetPowerSourcesInfo() returned a
CFDictionary. In 10.10 it returns a CFArray. Programmers are
supposed to use IOPSGetPowerSourceDescription() to gloss over this
distinction. However, this does not make it easy to distinguish
between a laptop battery and an UPS. So the "port" driver option no
longer has any effect.
umentation/IOKit/Reference/IOPowerSources_header_reference/#//apple
_ref/c/func/IOPSGetPowerSourceDescription
2015-06-22 Arnaud Quette <arnaud.quette@free.fr>
* scripts/upower/95-upower-hid.rules, tools/nut-usbinfo.pl: Update
UPower HID rules and generator
2015-06-11 Charles Lepple <clepple+nut@gmail.com>
* drivers/usbhid-ups.c: usbhid-ups.c: fall back to HID Input type if
not a Feature
2015-06-07 Charles Lepple <clepple+nut@gmail.com>
* drivers/tripplite-hid.c: tripplite-hid.c: device.part is static
(version 0.82)
2015-06-04 Daniele Pezzini <hyouko@gmail.com>
* drivers/nutdrv_qx.c: nutdrv_qx: make sure processed item's
boundaries are not wrong
2015-04-26 Nick Mayerhofer <nick.mayerhofer@enchant.at>
* docs/nutdrv_qx-subdrivers.txt, drivers/nutdrv_qx.c,
drivers/nutdrv_qx.h: nutdrv_qx: improve documentation for some
methods
2015-06-04 Daniele Pezzini <hyouko@gmail.com>
* docs/nutdrv_qx-subdrivers.txt, drivers/nutdrv_qx.c,
drivers/nutdrv_qx.h: nutdrv_qx: remove redundant comments and
update docs
2015-04-28 Nick Mayerhofer <nick.mayerhofer@enchant.at>
* drivers/nutdrv_qx_voltronic.c: nutdrv_qx: move var declaration in
'voltronic' subdriver Move variable declaration to fulfill
condition '3.3. Portability' of the developer guide. Bump version.
* drivers/libhid.c: libhid: replace "flush loop" with memset Move to
the C way of setting memory (memset), replacing a for loop with a
few anti-patterns in it: - for (...; ; i++) - for (...; i <
MAGIC_NUMBER; ...) - for (...) array[i] = 0
* nutdrv_qx: give subdrivers a last chance
to process the command Add (and document) a new function
('preprocess_command()') to preprocess the command to be sent to
the device, just before the actual sending and, in case of instant
commands/setvars, after the 'preprocess()' function has been
triggered (if appropriate). As an example, this function can be
useful to add to all commands (both queries and instant
commands/setvars) a CRC or to fill the command of a query with some
data. Also, in qx_process(), address buf size vs item->answer size
earlier. Update all subdrivers accordingly, bump versions.
2015-06-01 Arnaud Quette <arnaud.quette@free.fr>
* docs/man/snmp-ups.txt, drivers/snmp-ups.c, drivers/snmp-ups.h:
Provide access to Net-SNMP timeout and retries Two new extra
arguments are now available to allow overriding the Net-SNMP number
of retries (snmp_retries) and timeout per retry (snmp_timeout).
These respectively map to snmpcmd "-r retries" and "-t timeout"
2015-05-29 Arnaud Quette <arnaud.quette@free.fr>
* scripts/upower/95-upower-hid.rules: Update UPower HID rules
* tools/nut-usbinfo.pl: Fix UPower device matching for recent kernels
As per the UPower patch below referenced, hiddev* devices now have
class "usbmisc", rather than "usb". See … 95-upower-
hid.rules?id=9f31068707fc79744961cea7258b0eb262effbf1
2015-05-28 Arnaud Quette <arnaud.quette@free.fr>
* tools/nut-scanner/nut-scan.h, tools/nut-scanner/nut-scanner.c,
tools/nut-scanner/nutscan-device.c, tools/nut-scanner/nutscan-
device.h, tools/nut-scanner/nutscan-display.c, tools/nut-scanner
/nutscan-init.c, tools/nut-scanner/nutscan-init.h, tools/nut-
scanner/nutscan-ip.c, tools/nut-scanner/nutscan-ip.h, tools/nut-
scanner/nutscan-serial.c, tools/nut-scanner/nutscan-serial.h, tools
/nut-scanner/scan_avahi.c, tools/nut-scanner/scan_eaton_serial.c,
tools/nut-scanner/scan_ipmi.c, tools/nut-scanner/scan_nut.c, tools
/nut-scanner/scan_snmp.c, tools/nut-scanner/scan_usb.c, tools/nut-
scanner/scan_xml_http.c: Fix legal information on source-code
headers Copyright and author were not mentioned as they should be.
Most of the nut-scanner copyright belongs to EATON, apart from a
few parts. File descriptions are now also in Doxygen format
* nutdrv_qx: make preprocessed value's
size_t a const There's no need to intervene on the passed-to-the-
function value of a preprocessed value's size_t, so clarify it is a
const. Update all subdrivers accordingly, bump versions.
* drivers/nutdrv_qx.c: nutdrv_qx: make sure an answer is not reused
if preprocess_answer() fails If an item's preprocess_answer()
function fails, the answer should not be considered valid and
inherited by the following items with the same command. Therefore,
on failure, clear the answer so that the following items are forced
to query the device and preprocess the answer anew, if appropriate.
2015-05-13 Arnaud Quette <arnaud.quette@free.fr>
* docs/download.txt: Update NUT packages for Windows to 2.6.5-6
2015-05-07 Arnaud Quette <arnaud.quette@free.fr>
* scripts/systemd/nut-server.service.in: Restore systemd relationship
with nut-driver service The Requires directive from nut-server to
nut-driver was previously removed, since it was preventing upsd
from starting whenever one or more drivers, among several, were
failing to start. Use the Wants directive, a weaker version of
Requires, which will start upsd even if the nut-driver unit fails
to start. Closes
2015-04-23 Arnaud Quette <arnaud.quette@free.fr>
* Makefile.am: Cleanup GPG signature before generation
2015-04-22 Arnaud Quette <arnaud.quette@free.fr>
* configure.ac: bump version back to 2.7.3.1
* configure.ac: Restore version 2.7.3 for release
* docs/security.txt: Missing link reference update The filename of
the previous GPG release key was not updated, leading it to point
to the current release key
2015-04-08 Nick Mayerhofer <nick.mayerhofer@enchant.at>
* docs/nutdrv_qx-subdrivers.txt, drivers/nutdrv_qx.c,
drivers/nutdrv_qx.h: nutdrv_qx: clarify docs/inline comments
2015-04-16 Arnaud Quette <arnaud.quette@free.fr>
* configure.ac: bump version to 2.7.3.1
2015-04-15 Arnaud Quette <arnaud.quette@free.fr>
* configure.ac: update version to 2.7.3
* docs/security.txt: Update release signature verification The
release manager key has changed. Update the documentation to
reflect it, while keeping what is necessary for checking the
previous releases
* docs/download.txt: Fix formatting issue
* NEWS, UPGRADING: Final update for release 2.7.3 Complete the
release information for NUT 2.7.3
* docs/maintainer-guide.txt: Store some comments for later
processing
2015-04-10 Arnaud Quette <arnaud.quette@free.fr>
* drivers/mge-hid.c: Improve Eaton ABM support for USB/HID units As
per clarifications from David G. Miller (Eaton ABM expert) and
customers' requests, when ABM is enabled, we now publish both of the
following as per the ABM information: - the 5 status bits
{charging, discharging, floating, resting, off} under
battery.charger.status - the 2 historical status bits {CHRG,
DISCHRG} under ups.status When ABM is disabled, we just publish the
2 historical status bits {CHRG, DISCHRG} under ups.status, as per
UPS.PowerSummary.PresentStatus.{Charging,Discharging}, as done
previously
2015-04-02 Arnaud Quette <arnaud.quette@free.fr>
* conf/ups.conf.sample, docs/man/ups.conf.txt, drivers/dstate.c,
drivers/dstate.h, drivers/main.c: Improve synchronous driver flag
implementation The previous commit was suffering from a number of
issues. The present commit fixes these, along with adding more
documentation and a better, more understandable implementation.
Thanks to Daniele Pezzini for the thorough review. Closes:
2015-04-01 Arnaud Quette <arnaud.quette@free.fr>
* docs/man/ups.conf.txt, drivers/dstate.c, drivers/dstate.h,
drivers/main.c: Implement synchronous driver flag As per issue
#197, NUT drivers work by default in asynchronous mode. By enabling the 'synchronous'
flag, the driver will wait for data to be consumed by upsd, prior
to publishing more. This can be enabled either globally or per
driver.
2015-04-07 Arnaud Quette <arnaud.quette@free.fr>
* scripts/systemd/nut-server.service.in: Do not Require systemd nut-
driver for nut-server Comment out Requires=nut-driver.service in
the nut-server systemd unit file. Thus we don't require drivers to
be successfully started! This was a change of behavior compared to
SysV init, and could prevent accessing successfully started
drivers, or at least auditing a system
Closes:
2015-04-04 Charles Lepple <clepple+nut@gmail.com>
* UPGRADING: UPGRADING: mention SSL permissions (#199)
* docs/security.txt: NSS SSL documentation Addresses new behavior as
part of the NSS forking fix (#199). Formatting and wording fixed as
well.
2015-04-04 Émilien Kia <emilien.kia@gmail.com>
* server/upsd.c: Initialize SSL after daemonize and downgrade to
user. Fix issue #190 - upsd: NSS SSL only working in debug mode
2015-04-02 Arnaud Quette <arnaud.quette@free.fr>
* drivers/eaton-mib.c: Better input.realpower handling for Eaton
ePDUs G2/G3 Improve the way we declare and process
input.realpower, in order to address the variations between Eaton
ePDUs G2 and G3
2015-03-19 Arnaud Quette <arnaud.quette@free.fr>
* docs/nut-names.txt: Document new variables and commands addition
The variables and commands that were added were not described in
the NUT namespace document. These are: input.transfer.delay -
battery.energysave.load - battery.energysave.delay -
battery.charger.status - outlet.1.shutdown.return -
outlet.2.shutdown.return
* drivers/bcmxcp.c: Fix the letter case of ABM and outlets status
For more coherence with NUT status publication, these statuses are
now lower case
* drivers/bcmxcp.c: Add missing Author
2014-10-10 gavrilov-i <gavrilov-i@users.noreply.github.com>
* data/cmdvartab, drivers/bcmxcp.c, drivers/bcmxcp.h: drivers/bcmxcp:
advanced features Closes: #158 Added setvar function exec result
parsing Add command to turn load on after shutdown.stayoff and
shutdown.return. Outlet control changed. Outlet control via
commands "outlet.n.load.on/off" like in other drivers. Variable
outlet.n.status now only for reading. Some code changes in
outlet.n.shutdown.return command - now supporting more than 3
outlets (up to 9). Add descriptions to new and some old variables
and commands. Add "bypass.start" command, for enabling bypass. For
returning in On-Line mode exec "load.on" command. Additional
checks of UPS vars. Now add zero var only if it could be changed.
2015-04-01 Arnaud Quette <arnaud.quette@free.fr>
* drivers/eaton-mib.c: Workaround input.{power,realpower} for Eaton
ePDUs Add variable declarations to handle missing
input.{power,realpower} on Eaton ePDUs G2 and G3 1phase. On 3phase,
these variables point at SNMP OIDs that sum up the 3 phases'
information. These OIDs should also be present on 1phase, however
it's actually not the case. So simply duplicate the L1 declaration
2015-03-31 Arnaud Quette <arnaud.quette@free.fr>
* drivers/powerware-mib.c: Implement battery.charger.status for Eaton
SNMP This new official variable now replaces the historic
'vendor.specific.abmstatus', as per other similar implementations
(in usbhid-ups and bcmxcp drivers)
2015-03-27 Arnaud Quette <arnaud.quette@free.fr>
* drivers/mge-hid.c: Implement Eaton ABM support for USB/HID units
Add support for Eaton Advanced Battery Monitoring, for USB/HID
units. Information is provided through the new
battery.charger.status. For now, at least, when ABM is enabled, the
historic CHRG and DISCHRG flags are not published anymore in
ups.status
2015-03-26 Stuart Henderson <stu@spacehopper.org>
* data/driver.list.in, docs/man/snmp-ups.txt, docs/snmp-
subdrivers.txt, drivers/Makefile.am, drivers/huawei-mib.c, drivers
/huawei-mib.h, drivers/snmp-ups.c: snmp-ups: new subdriver for
Huawei "Hi, the [commit] below adds a new subdriver for snmp-ups
to support Huawei UPS, based on an observed walk from a UPS5000-E
with a few bits filled in from the MIBs (copy at)."-
root.php?message_id=slrnmh6npf.tg7.stu%40naiad.spacehopper.org
2015-03-25 Daniele Pezzini <hyouko@gmail.com>
* drivers/nutdrv_qx_voltronic.c: nutdrv_qx: add support in
'voltronic' subdriver for P13 protocol
2015-03-24 Arnaud Quette <arnaud.quette@free.fr>
* drivers/mge-hid.c: Complementary Energy Saving data for Eaton USB
devices Add a 2nd HID path for battery.energysave.delay. Depending
on the exact device model, different implementations may be used
2015-03-22 Daniele Pezzini <hyouko@gmail.com>
* NEWS: nutdrv_qx: update NEWS about new 'fuji' USB subdriver
2015-03-21 Daniele Pezzini <hyouko@gmail.com>
* drivers/nutdrv_qx.c, drivers/nutdrv_qx.h: nutdrv_qx: typedef
testing_t only if TESTING is #defined First reported by GitHub
user @nickma82
* docs/man/nutdrv_qx.txt, docs/nutdrv_qx-subdrivers.txt: nutdrv_qx:
document 'voltronic-qs-hex' subdriver in man pages
* docs/man/nutdrv_qx.txt: add 'ignoresab' flag to support bogus devices Some
UPSes incorrectly report the 'Shutdown Active' bit (7th bit of the
'status byte') as always on (=1), consequently making the driver
believe the UPS is nearing a shutdown (and, as a result, ups.status
always contains FSD). To work around this issue, add a new
'ignoresab' flag that makes the driver do just what its name says
(IGNORE Status Active Bit), skipping the related item in qx2nut
tables. References: -
/nut-upsdev/2015-March/006896.html -
2015-03-11 Arnaud Quette <arnaud.quette@free.fr>
* data/cmdvartab, docs/nut-names.txt: Add some new variable names,
related to ePDUs Add new variables names, related to ePDUs, such
as input.*.load, input.*.realpower and input.*.power
* drivers/eaton-mib.c: Minor update to comments
2015-02-04 Arnaud Quette <arnaud.quette@free.fr>
* drivers/eaton-mib.c: Fix outlet.{power,realpower} data mapping
According to the new mapping using the input collection, these two
data mapping were targeting at the wrong OIDs.
2015-02-03 Arnaud Quette <arnaud.quette@free.fr>
* drivers/eaton-mib.c: Fix and complete a bit Eaton ePDUs support
Add some new data mapping to improve support for Eaton ePDUs. This
commit includes some new NUT data names that require approval
before being merged
2015-03-19 Arnaud Quette <arnaud.quette@free.fr>
* data/cmdvartab, docs/nut-names.txt, drivers/mge-hid.c: Add more
Energy Saving features for Eaton USB devices Add two new Energy
Saving features: - battery.energysave.delay: to configure the delay
before switching off the UPS if running on battery and load level
low (in minutes) - battery.energysave.realpower: to switch off the
UPS if running on battery and power consumption on UPS output is
lower than this value (expressed in Watts). Note that documentation
in nut-names.txt and cmdvartab was limited to difference with an
upcoming branch merge, that will add the others
* drivers/mge-hid.c: Align Energy Saving variable names Change
ups.load.energysave to battery.energysave.load, to be coherent with
the latest commit made in the bcmxcp driver
2015-03-10 Arnaud Quette <arnaud.quette@free.fr>
* data/cmdvartab, drivers/mge-hid.c: Add a new EnergySaving threshold
for Eaton UPSs Add 'ups.load.energysave' parameter, to enable
energy saving when the power consumption on the UPS output drops
below this value (in percent). This new variable however requires
going through the NUT RFC process to get approved
2015-03-19 Arnaud Quette <arnaud.quette@free.fr>
* tools/Makefile.am: Also distribute nut-ddl-dump.sh helper script
2015-03-18 Daniele Pezzini <hyouko@gmail.com>
* data/driver.list.in: HCL: EUROCASE EA200N 2000VA supported by
nutdrv_qx Protocol: 'megatec' USB subdriver: 'fuji' Reference:
http://thread.gmane.org/gmane.comp.monitoring.nut.user/8808/focus=9081
* drivers/nutdrv_qx_bestups.c, drivers/nutdrv_qx_blazer-common.c: nutdrv_qx: remove redundancy in blazer-common-dependent subdrivers
Since main nutdrv_qx driver already sets an alarm when FSD arises
(see nutdrv_qx.c>ups_alarm_set()), there is no need to do so in the
various subdrivers. So, in order to prevent a duplicated alarm
message, remove all unneeded code from the affected subdrivers (all
the ones that depend on nutdrv_qx_blazer-common).
2015-03-17 Daniele Pezzini <hyouko@gmail.com>
* data/driver.list.in: HCL: update Mecer ME-1000-WTU (supported by
nutdrv_qx) Tested by @sliverc (Oliver Sauder) on NUT 2.7.1
Reference:
2015-03-16 Daniele Pezzini <hyouko@gmail.com>
* docs/man/nutdrv_qx.txt: nutdrv_qx: document USB subdrivers'
glitches
* drivers/nutdrv_qx.c: nutdrv_qx: add workaround in 'fuji' subdriver
to support all shutdown.returns As 'fuji' subdriver discards all
the commands of more than 3 characters, in order to support 'SnRm'
shutdown.returns (and hence the standard 'S.5R0003' shutdown.return
with DEFAULT_{ON,OFF}DELAYs) map 'SnRm' shutdown.returns to the
corresponding 'Sn' commands, meanwhile ignoring ups.delay.start and
making the UPS turn on the load as soon as power is back.
* drivers/nutdrv_qx.c: nutdrv_qx: fix command handling in 'fuji'
subdriver Devices supported by the 'fuji' subdriver only allow one
8-byte interrupt as a command/query: make the subdriver discard (and
echo back) all the too long commands.
2014-11-08 Daniele Pezzini <hyouko@gmail.com>
* docs/man/nutdrv_qx.txt: nutdrv_qx: update man for the new 'fabula'
and 'fuji' USB subdrivers
2014-06-26 Daniele Pezzini <hyouko@gmail.com>
* drivers/nutdrv_qx.c: nutdrv_qx: add new 'fuji' USB subdriver Add a
new USB subdriver ('fuji') to support models manufactured by Fuji
(and others) and accompanied by UPSmart2000I software.
2015-03-15 Charles Lepple <clepple+nut@gmail.com>
* docs/developers.txt, docs/new-drivers.txt: doc: document build
dependencies, etc. Closes:
* docs/nutdrv_qx-subdrivers.txt: doc: fold a few long preformatted
lines in nutdrv_qx developer guide
* docs/FAQ.txt: docs: FAQ update This addresses several issues: * * and Closes:
* docs/man/Makefile.am: docs/man: provide additional detail for
missing asciidoc/a2x error
* configure.ac: configure: indicate required version of
Asciidoc/A2X/dblatex Still doesn't address data-only packages like
docbook-xsl, so leaving this issue open. Reference:
* docs/Makefile.am, docs/man/Makefile.am: Pass --nonet to xsltproc
This prevents xsltproc from downloading DocBook XSL files for each
step in the documentation build process. Reference: Still need to
document what to do if the build fails.
2015-03-10 Arnaud Quette <arnaud.quette@free.fr>
* docs/documentation.txt: Reference DDL on the Documentation page
Add a reference to the NUT Devices Dumps Library (DDL) on the
Documentation page, both for the website and the distributed
documentation. There are separate references, to distinguish the
DDL interest from a user and a developer point of view
2015-03-06 Arnaud Quette <arnaud.quette@free.fr>
* tools/nut-ddl-dump.sh: First stab at a helper script to generate
device dumps This preliminary version only generates .dev (static)
dump files. However, a merge with nut-recorder.sh, which generates
.seq files (dynamic simulation) is to be considered, along with an
improved version for the newer .nds format
2015-02-24 Charles Lepple <clepple+nut@gmail.com>
* data/driver.list.in: HCL: CPS Value 1500ELCD-RU @ 2.6.3 Source:-
root.php?message_id=1423241134.6830.8.camel%40ignatev
* data/driver.list.in: HCL: JAWAN JW-UPSLC02 with blazer_usb @ 2.7.2
Source: … T404%2dEAS
8312A94DDAF0FAD4B7702BA52A0%40phx.gbl
2015-02-22 Charles Lepple <clepple+nut@gmail.com>
* scripts/python/app/NUT-Monitor, scripts/python/app/gui-1.3.glade:
NUT-Monitor: updated version to 1.3.1
2015-02-14 Charles Lepple <clepple+nut@gmail.com>
* NEWS, UPGRADING: NEWS/UPGRADING for 2.7.3
2015-02-14 Daniele Pezzini <hyouko@gmail.com>
* docs/man/nutdrv_qx.txt: nutdrv_qx: specify 'bestups' ranges in man
pages
2015-01-03 Daniele Pezzini <hyouko@gmail.com>
* drivers/nutdrv_qx_bestups.c: nutdrv_qx: bestups - add support for
'M' query
2014-11-02 Daniele Pezzini <hyouko@gmail.com>
* docs/man/nutdrv_qx.txt, docs/nutdrv_qx-subdrivers.txt: nutdrv_qx:
update man pages for new 'bestups' subdriver
* drivers/Makefile.am, drivers/nutdrv_qx.c,
drivers/nutdrv_qx_bestups.c, drivers/nutdrv_qx_bestups.h:
nutdrv_qx: add BestUPS subdriver (protocol=bestups) A subdriver
using Best Power/Sola Australia protocol as described in Based also on
bestups.c and meant to eventually replace it.
2015-02-14 Michael Fincham <michael.fincham@catalyst.net.nz>
* scripts/python/app/NUT-Monitor: Correct unsafe permissions on
~/.nut-monitor (Debian #777706) fix-permissions-on-start.debdiff
from … =777706#24
Closes:
2015-02-14 Michal Soltys <soltys@ziu.info>
* drivers/apcsmart.c: apcsmart: fix SEGV in apc_getcaps() ups ...
2015-02-13 Michal Soltys <soltys@ziu.info>
* drivers/apcsmart.c, drivers/apcsmart.h: apcsmart: fix command set
parsing for protocol version 4 The issue was discovered with
Smart-UPS RT 10000 XL by surr, see When protocol
version is 4, the command set query returns a string with an
additional section after another '.'. This patch updates the code
to handle such strings as well.
2015-02-10 Charles Lepple <clepple+nut@gmail.com>
* configure.ac: configure.ac: bump version to 2.7.2.6 for snapshots
* scripts/upower/95-upower-hid.rules: upower: regenerate for Powercom
PID 0001 (PR #121)
* configure.ac: configure.ac: add bug report URL Should be
compatible with Autoconf 2.59 and newer.
2015-02-04 Arnaud Quette <arnaud.quette@free.fr>
* drivers/netvision-mib.c: Improve support for on-battery detection
Add support for upsAlarmOnBattery OID, to better detect on-battery
events (reported by Henning Fehrmann)
* drivers/libhid.c, drivers/libhid.h: Fix compilation warning related
to sign comparison
2015-01-31 Ryan Underwood <nemesis@icequake.net>
* drivers/apc-hid.c, scripts/upower/95-upower-hid.rules: Add a
product ID for APC AP9584 USB kit. Resolves
networkupstools/nut#181
2015-01-12 Charles Lepple <clepple+nut@gmail.com>
* scripts/upower/95-upower-hid.rules: upower: regenerate rules file
for OpenUPS PID 0xd005
* Makefile.am: Add systemd unit dir fix for 'make distcheck'
2015-01-11 Sergey Kvachonok <ravenexp@gmail.com>
* configure.ac: Undo ${systemdsystemunitdir} mangling. Running sed
's/\/lib/\${libdir}/' destroys any ${systemdsystemunitdir} values
that don't start with '/lib' e.g. '/usr/lib64/systemd/system'
becomes '/usr/usr/lib6464/systemd/system'. If a local installation
prefix is needed, use an appropriately prefixed
--with-systemdsystemunitdir='' parameter instead.
2015-01-02 Charles Lepple <clepple+nut@gmail.com>
* docs/cables.txt, docs/config-notes.txt, docs/configure.txt,
docs/features.txt, docs/history.txt, docs/man/nutdrv_qx.txt, docs
/nut-names.txt, docs/scheduling.txt, docs/security.txt: docs: typo
fixes
2015-01-01 Charles Lepple <clepple+nut@gmail.com>
* docs/cables.txt: docs: MGE NMC pinout Closes
* docs/cables.txt: docs: Best Power cable pinout Closes
* INSTALL.nut: docs: clarify group ownership of directory in
INSTALL.nut Closes
* docs/man/dummy-ups.txt: docs: dummy-ups repeater mode requires `@`
in port name Also reworded parts of the man page.
2014-12-17 bsalvador <bruno.salvador@gmail.com>
* drivers/solis.c: Update solis.c to force ScanReceivePack()
2014-12-12 Andy Juniper <ajuniper@freeuk.com>
* clients/upslog.c, docs/man/upslog.txt: upslog: break out of sleep
on SIGUSR1 and log immediately Reference:
/find-root.php?message_id=54863D44.3000902%40freeuk.com
2014-11-25 Charles Lepple <clepple+nut@gmail.com>
* data/driver.list.in: HCL: additional NHS models
2014-11-17 Charles Lepple <clepple+nut@gmail.com>
* data/driver.list.in: HCL: NHS Sistemas de Energia: Expert C
Isolador series Source:
e_id=CADe06rfE5MA%3dyWDZzofPsC7TOgGOU4TRSoi67uMXedymA9L7ow%40mail.g
mail.com
2014-11-07 Arnaud Quette <arnaud.quette@free.fr>
* scripts/subdriver/gen-snmp-subdriver.sh: Various minor fixes to the
SNMP subdriver generator
2014-11-06 Charles Lepple <clepple+nut@gmail.com>
* drivers/openups-hid.c: openups-hid: Fix scale factors for 0xd005
(0.4) Previous commit had extra scale factors applied.
* drivers/openups-hid.c, drivers/openups-hid.h: openups-hid: voltage
scale factors based on product IDs
* drivers/openups-hid.c: openups-hid: remove a const; this will
require more thought. The USB matching routines should have their
parameters marked as "const" to indicate that they do not modify
the matching tables, but that will require more invasive changes.
Roll this back for now.
2014-11-05 Charles Lepple <clepple+nut@gmail.com>
* drivers/openups-hid.c: openups-hid: const and float/double fixups
(0.2)
* drivers/openups-hid.c: openups-hid: add USB ProductID d005 for
OpenUPS2
2014-11-05 Arnaud Quette <arnaud.quette@free.fr>
* Makefile.am: Store the git start point as a variable For
ChangeLog, we now store the git start point (older reference) in a
separate variable, to make the process clearer
2014-10-31 Charles Lepple <clepple+nut@gmail.com>
* docs/download.txt: Update VMware ESXi package link (from René
Garcia)
2014-10-29 Charles Lepple <clepple+nut@gmail.com>
* scripts/upower/95-upower-hid.rules: upower: Update Belkin and
Liebert rules Follow-up to issue #159.
* drivers/belkin-hid.c, drivers/liebert-hid.c: usbhid-ups: comments
describing Belkin/Liebert/Phoenixtec situation Follow-up to issue
#159.
* data/driver.list.in: HCL: Rucelf UPOII-3000-96-EL supported by
blazer_ser Manufacturer: Closes:
2014-10-28 Elio Parisi <E.Parisi@riello-ups.com>
* drivers/riello_usb.c: riello_usb: explicitly claim USB interface
Reference: … =7731ed2f9
8014b8a90e695a06d077970%40AM3PR07MB289.eurprd07.prod.outlook.com
and … bug=738122
2014-10-20 Elio Parisi <E.Parisi@riello-ups.com>
* drivers/riello_usb.c: riello_usb: timeouts and error handling
(0.03) Small changes in riello_usb.c that solved some problems with
managing transmission errors between the Raspberry Pi and Riello
UPSes (thanks to Fredrik Öberg): introduce a timeout when reading
UPS data in cypress_command; enhanced error-code handling … 574bcbaf5c
dd95523e0b68%40AM3PR07MB289.eurprd07.prod.outlook.com
* drivers/riello_ser.c: riello_ser: enhanced error-code handling
(0.03)
2014-10-20 Charles Lepple <clepple+nut@gmail.com>
* docs/.gitignore: docs: docinfo.xml is now auto-generated
2014-10-20 Nik Soggia <nut@niksoggia.it>
* drivers/Makefile.am: missing -lm in drivers/Makefile.am Both
bcmxcp and bcmxcp_usb use ldexp(), so both need `-lm`.-
root.php?message_id=544515BA.4060804%40niksoggia.it
2014-10-10 Paul Chavent <paul.chavent@onera.fr>
* drivers/belkin-hid.c: drivers: add Liebert GXT3 device.
* drivers/main.c: drivers: fix a possible memory leak in argument
parsing when a user option is passed.
2014-09-30 Michal Soltys <soltys@ziu.info>
* drivers/apcsmart.c: apcsmart: increase passes in setvar_enum()
The current 6 passes are not enough for bigger units - especially if
we swap to the value directly preceding the current setting.
2014-09-27 Daniele Pezzini <hyouko@gmail.com>
* configure.ac, docs/Makefile.am, docs/docinfo.xml,
docs/docinfo.xml.in: docs: add NUT version number/date in PDF
documents Reference:
* docs/chunked.xsl, docs/common.xsl, docs/xhtml.xsl: docs: move
DocBook options common to html stylesheets to common.xsl
* docs/Makefile.am, docs/common.xsl: docs: add NUT version
number/date into footer of HTML pages Reference:
2014-09-28 Arnaud Quette <arnaud.quette@free.fr>
* docs/configure.txt: Clarify a bit more Avahi build requirements
* tools/nut-scanner/Makefile.am: Don't reference subdir-object with
$(top_srcdir) Replace references to objects in separate
directories that were using $(top_srcdir) by the expanded version
'../../'. The variable was otherwise part of the path, resulting in
build failures. This completes commit f8abb9b Closes
networkupstools/nut#155
* configure.ac: Explicitly use subdir-objects in automake init
Closes networkupstools/nut#155
2014-09-27 Arnaud Quette <arnaud.quette@free.fr>
* configure.ac, m4/nut_check_asciidoc.m4: Also check for source-
highlight at configure time source-highlight is used for
documentation generation. It is, however, optional, so we just check
for the sake of completeness
2014-09-27 Arnaud Quette <arno@arno-zbook15.euro.ad.etn.com>
* docs/man/Makefile.am, docs/man/asciidoc.conf: Add NUT version
number into footer of HTML man pages Override AsciiDoc default for
footer-txt to include NUT version number into footer of HTML man
pages. This commit addresses the 2nd point of
networkupstools/nut#150
2014-09-26 Charles Lepple <clepple+nut@gmail.com>
* drivers/tripplite_usb.c: tripplite_usb: set input.voltage.nominal
back to 230V (0.30) Keeps the input.voltage and output.voltage
scaling from 0.28 Discussion: … .user/8719
2014-09-26 Arnaud Quette <arnaud.quette@free.fr>
* conf/upsmon.conf.sample.in, docs/man/nut.conf.txt,
docs/man/upsmon.conf.txt, docs/packager-guide.txt: Replace outdated
references to shutdown.txt shutdown.txt was merged into config-
notes.txt during the AsciiDoc conversion of the whole documentation
and website. This content is now available in the docs/config-
notes.txt file, section [[UPS_shutdown]] "Configuring automatic
shutdowns for low battery events"
* conf/upsmon.conf.sample.in: Fix default value of POWERDOWNFLAG
POWERDOWNFLAG path changed from the hard-coded value /etc/killpower
to the build-time generated @CONFPATH@/killpower. This resulted in
an unexpected value '/etc/nut/killpower', at least on Debian.
(reported by Laurent Bigonville) Closes networkupstools/nut#74
2014-09-25 Daniele Pezzini <hyouko@gmail.com>
* drivers/nutdrv_qx.c: nutdrv_qx: move to ltrim_m()/rtrim_m()
functions
* common/common.c, include/common.h: Add ltrim_m()/rtrim_m()
functions to trim several chars at the same time Also, make
ltrim() / rtrim() wrappers around ltrim_m() / rtrim_m().
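The wrapper arrangement described in this entry can be sketched as follows (an illustrative approximation: the function names follow the entry, but the bodies are not NUT's actual common.c code):

```c
#include <string.h>

/* Trim any of the characters in 'chars' from the start of 'str',
 * modifying the input string in place. */
static char *ltrim_m(char *str, const char *chars)
{
	size_t	skip;

	if (!str || !chars)
		return str;

	skip = strspn(str, chars);
	if (skip)
		memmove(str, str + skip, strlen(str + skip) + 1);

	return str;
}

/* Trim any of the characters in 'chars' from the end of 'str'. */
static char *rtrim_m(char *str, const char *chars)
{
	size_t	len;

	if (!str || !chars)
		return str;

	for (len = strlen(str); len > 0 && strchr(chars, str[len - 1]); )
		str[--len] = '\0';

	return str;
}

/* The single-character versions become thin wrappers. */
static char *ltrim(char *str, const char c)
{
	char	chars[2] = { c, '\0' };
	return ltrim_m(str, chars);
}

static char *rtrim(char *str, const char c)
{
	char	chars[2] = { c, '\0' };
	return rtrim_m(str, chars);
}
```

This keeps a single trimming implementation while preserving the older ltrim()/rtrim() call sites.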
* common/common.c: Make ltrim() modify the input string Also, always
check string length in both ltrim() and rtrim(). Reference:
2014-09-25 Charles Lepple <clepple+nut@gmail.com>
* data/driver.list.in: HCL: sort Tripp Lite models by name, then
increasing power I know this doesn't allow the cell merging code
to do as much, but this should make it easier to find models.
* data/driver.list.in: HCL: add Tripp Lite OMNIVSINT800
(tripplite_usb) Source: … .user/8713
* drivers/tripplite_usb.c: tripplite_usb: scale min/max voltages for
SMART protocol (0.29) Observed in a dump file from driver version
0.11. Scale input.voltage.minimum and input.voltage.maximum the
same way as other voltages.
* drivers/tripplite_usb.c: tripplite_usb: fix voltage scaling for
240V/1001 (0.28) Reported by Dave Williams: … .user/8713 The
input.voltage and output.voltage scaling for Protocol 1001 did not
factor in the input_voltage_scaled value.
2014-09-24 Arnaud Quette <arnaud.quette@free.fr>
* scripts/python/app/nut-monitor.appdata.xml: Fix compliance of NUT-
Monitor FreeDesktop AppData file Following the upstream update (by
David Goncalves), update the screenshots width and height to
conform to AppData specification: Closes
networkupstools/nut#127
2014-05-18 Charles Lepple <clepple+nut@gmail.com>
* drivers/genericups.c: genericups: log cable type overrides as they
are parsed. Fixes networkupstools/nut#28. Better than nothing, but
without a unit to test against, I don't want to make any more
intrusive changes.
2014-09-17 Arnaud Quette <arnaud.quette@free.fr>
* docs/nut-qa.txt: Minor update and completion. Use the new Debian
package tracker URL and add the Red Hat / Fedora bug tracker
2014-09-15 Daniele Pezzini <hyouko@gmail.com>
* data/driver.list.in: HCL: add devices supported by nutdrv_qx -
Fideltronik LUPUS 500 USB Protocol: 'megatec' USB subdriver:
'fabula' Reference:-
upsuser/2014-June/009059.html - FTUPS FT-1000BS(T) / Voltronic
Power Apex 1KVA Protocol: 'voltronic-qs-hex' USB devices -> USB
subdriver: 'cypress' - FTUPS FT-1000BS / Voltronic Power Imperial
1KVA Protocol: 'voltronic-qs' USB devices -> USB subdriver:
'cypress'
2014-07-11 Daniele Pezzini <hyouko@gmail.com>
* drivers/nutdrv_qx.c: nutdrv_qx: improve the USB matching procedure
Consider also the iManufacturer/iProduct strings when checking
devices (if subdriver is not specified) to assign the right
subdriver in case the VID:PID couple is not specific enough.
2014-06-30 Daniele Pezzini <hyouko@gmail.com>
* drivers/nutdrv_qx.c: nutdrv_qx: add new 'fabula' USB subdriver Add
a new USB subdriver ('fabula') to support models
manufactured/rebranded by Fideltronik and accompanied by
UPSilon2000 software. Reference:-
upsuser/2014-June/009059.html
2014-09-03 Daniele Pezzini <hyouko@gmail.com>
* drivers/Makefile.am, drivers/nutdrv_qx.c, drivers
/nutdrv_qx_voltronic-qs-hex.c, drivers/nutdrv_qx_voltronic-qs-
hex.h: nutdrv_qx: add Voltronic-QS-Hex subdriver
(protocol=voltronic-qs-hex). A subdriver using a protocol specific
to UPSes manufactured by Voltronic Power, partially hex-encoded
(e.g. the 'QS' reply) and supporting some megatec commands.
* docs/nutdrv_qx-subdrivers.txt: nutdrv_qx: update docs about added
support for more complex UPS answers
* drivers/nutdrv_qx.c, drivers/nutdrv_qx_voltronic.c,
drivers/nutdrv_qx_zinto.c:
nutdrv_qx: add basic support for more complex UPS answers Add
support (also in 'TESTING' mode) for '\0' chars in raw UPS answers
and the ability to preprocess answers before anything else (e.g.:
for CRC, decoding, ...). Increase verbosity of USB subdrivers and
serial communication. Always print also the return code when
dealing with an error. Update all subdrivers accordingly, bump
versions.
2014-09-08 Charles Lepple <clepple+nut@gmail.com>
* docs/documentation.txt: docs: Add link to Roger Price's openSUSE
writeup
2014-09-04 Daniele Pezzini <hyouko@gmail.com>
* drivers/nutdrv_qx.c: nutdrv_qx: prevent a vicious loop when
unexpected answers happen If a 'QX_FLAG_QUICK_POLL' item gets an
unexpected (non-empty) answer and, after returning from
'qx_ups_walk()', it is not followed by at least one item using a
different 'command', the driver will loop endlessly using the same
'broken' answer instead of trying to get a new one from the UPS. To
solve this issue, make sure to have an empty 'previous_item' when
starting 'qx_ups_walk()'. Also, bail out of 'qx_ups_walk()' when a
'QX_FLAG_QUICK_POLL' item can't be preprocessed properly through
'ups_infoval_set()'.
2014-09-04 Charles Lepple <clepple+nut@gmail.com>
* docs/man/asem.txt: docs: recommend I2C bus name for asem driver
2014-09-03 Arnaud Quette <arnaud.quette@free.fr>
* docs/man/nut-scanner.txt: Fix typo error in nut-scanner doc. The
example network range scanned when using 192.168.0.0/25 is actually
192.168.0.0 to 192.168.0.127 (i.e. not .128) as previously stated
(reported by Evgeny 'Jim' Klimov) Closes networkupstools/nut#144
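For reference, the subnet arithmetic behind the corrected range: a /25 prefix leaves 7 host bits, i.e. 128 addresses, so the scanned block runs from .0 through .127. A minimal sketch (an illustrative helper, not part of nut-scanner):

```c
#include <stdint.h>

/* Return the last IPv4 address of a CIDR block, e.g. for
 * 192.168.0.0/25 this is 192.168.0.127 (not .128). */
static uint32_t cidr_last_address(uint32_t network, unsigned prefix)
{
	unsigned	host_bits = 32 - prefix;
	uint32_t	host_mask;

	/* Avoid undefined behaviour when shifting by 32 (prefix == 0). */
	host_mask = (host_bits >= 32) ? 0xFFFFFFFFu : ((1u << host_bits) - 1u);

	return network | host_mask;
}
```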
2014-09-02 Charles Lepple <clepple+nut@gmail.com>
* data/driver.list.in, drivers/belkin-hid.c: HCL: Belkin Regulator
PRO-USB 050d:0f51 (0.17)
2014-08-22 Charles Lepple <clepple+nut@gmail.com>
* data/driver.list.in: HCL: Mecer ME-100-WTU with blazer_usb USB
VID:PID = 0665:5161 Tested by @silvec (Oliver Sauder) on NUT 2.6.3
Reference:
2014-08-19 Charles Lepple <clepple+nut@gmail.com>
* docs/documentation.txt: docs: update links for two articles
2014-08-17 Charles Lepple <clepple+nut@gmail.com>
* tools/nut-usbinfo.pl: nut-usbinfo: fix FreeBSD devd.conf to use
$cdev
* scripts/upower/95-upower-hid.rules: 95-upower-hid.rules: updated by
nut-usbinfo.pl URL updated in previous commit.
* tools/nut-usbinfo.pl: nut-usbinfo: change link from Alioth SVN to
GitHub
* scripts/udev/.gitignore: udev: ignore 62-nut-usbups.rules Follow-
up commit to networkupstools/nut#140
2014-08-17 Yann E. MORIN <yann.morin.1998@free.fr>
* conf/Makefile.am: conf/: fix parallel install Do not reference the
upsmon.conf.sample twice, otherwise install, with a high number of
make jobs, may fail, like so:
s/256/2567e13cd5bc702bc3a38a1d6fc8e34022cc7db5/build-end.log ---
This is not a rare occurrence, as my testing managed to trigger the
issue in about 1 test out of 10 on average, on a not-so-fast
machine.
2014-08-16 Charles Lepple <clepple+nut@gmail.com>
* tools/nut-usbinfo.pl: nut-usbinfo: ignore *.orig files
2014-08-14 Émilien Kia <emilien.kia@gmail.com>
* configure.ac, docs/man/Makefile.am, m4/nut_check_asciidoc.m4: Test
presence of xmllint for manpages doc generation.
* configure.ac, docs/man/Makefile.am, m4/nut_check_asciidoc.m4: Test
presence of xsltproc for manpages doc generation.
2014-08-04 Charles Lepple <clepple+nut@gmail.com>
* docs/man/ups.conf.txt, drivers/libusb.c: Remove redundant
usb_set_altinterface(), unless user requests it Adds flag/value to
USB driver options. Closes networkupstools/nut#138
2014-08-09 Charles Lepple <clepple+nut@gmail.com>
* drivers/blazer_usb.c, drivers/libusb.c, drivers/nutdrv_qx.c,
drivers/tripplite_usb.c, drivers/usb-common.h, drivers/usbhid-
ups.c: libusb.c: consolidate USB-related addvar() calls
* drivers/cps-hid.c: usbhid-ups (CPS): determine battery.voltage
scale factor at runtime If the battery.voltage reading is greater
than 1.4x battery.voltage.nominal, apply a scale factor of 2/3 to
bring the voltage back in line. Closes networkupstools/nut#142
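The runtime check described above can be sketched roughly like this (hypothetical helper name; the actual cps-hid code is structured differently):

```c
/* If the reported battery voltage is implausibly high compared to the
 * nominal value (more than 1.4x), assume the device reports on a
 * different scale and apply a 2/3 correction factor. */
static double adjust_battery_voltage(double reported, double nominal)
{
	if (nominal > 0 && reported > 1.4 * nominal)
		return reported * 2.0 / 3.0;

	return reported;
}
```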
2014-08-08 Arnaud Quette <arnaud.quette@free.fr>
* docs/man/nut-scanner.txt: Fix a typo error on "-B" option
(reported by Evgeny 'Jim' Klimov)
2014-08-05 Arnaud Quette <arnaud.quette@free.fr>
* scripts/python/Makefile.am: Distribute FreeDesktop AppData file for
NUT Monitor. The FreeDesktop AppData file for NUT Monitor was not
distributed while waiting for approval
2014-08-01 Arnaud Quette <arnaud.quette@free.fr>
* scripts/udev/Makefile.am, scripts/udev/README: Fix USB permission
issues related to Linux / udev Rename udev rules file to 62-nut-
usbups.rules, to prevent NUT USB privileges from being overwritten
Closes #140
* docs/cables.txt: Fix typo error on Eaton / MGE USB-RJ45 cable
2014-07-14 Charles Lepple <clepple+nut@gmail.com>
* scripts/Aix/.gitignore: Ignore generated AIX spec file
2014-07-14 Giuseppe Corbelli <giuseppe.corbelli@copanitalia.com>
* AUTHORS, docs/man/asem.txt: asem: additional documentation … devel/6741
2014-07-13 Charles Lepple <clepple+nut@gmail.com>
* docs/man/upscli_get.txt: upscli_get(): mention SIGPIPE handling
Closes: #132
* data/driver.list.in: HCL: distinguish between Tripp Lite old and
new protocol 3005
2014-07-13 Arnaud Quette <arnaud.quette@free.fr>
* scripts/python/app/nut-monitor.appdata.xml: Complete FreeDesktop
AppData file for NUT Monitor As per Richard Hughes comments, in
#127, complete the description field
2014-07-12 Arnaud Quette <arnaud.quette@free.fr>
* scripts/Aix/nut-aix.spec.in: Minor adjustments as per Github
* docs/configure.txt: Add missing documentation for configure option
The new asem driver introduced --with-linux_i2c, for which
documentation was missing in configure documentation
2014-07-11 Charles Lepple <clepple+nut@gmail.com>
* NEWS, data/driver.list.in, docs/man/Makefile.am, docs/man/asem.txt,
docs/man/index.txt: asem: documentation
2014-07-07 Giuseppe Corbelli <giuseppe.corbelli@copanitalia.com>
* configure.ac, data/driver.list.in, drivers/Makefile.am,
drivers/asem.c: Support for ASEM UPS on Linux/i2c Patch from … devel/6723
Thread:-
root.php?message_id=53A83FCB.1080808%40copanitalia.com Builds on
Ubuntu 12.10 and 14.04; requires libi2c-dev
2014-07-05 Charles Lepple <clepple+nut@gmail.com>
* drivers/tripplite_usb.c: tripplite_usb: fix typos in bin2d() and
control_outlet() (0.27)
* drivers/tripplite_usb.c: tripplite_usb: control_outlet() for
protocol 3005 (0.26)
* drivers/tripplite_usb.c: tripplite_usb: Additional 3005 protocol
support (0.25)
iwyG8HCfg%2dQqxcwhnm1Yo0z0F0BLyOPCYX%2d4yMMFg8sB4QQ%40mail.gmail.co
m
* drivers/tripplite_usb.c: tripplite_usb: basic support for 3005
binary protocol (0.24) Based on logs from SMART500RT1U
2014-07-04 vesnn <metanoite@rambler.ru>
* drivers/powercom.c: Update powercom.c to fix Powercom Imperial
initialization for models since 2009 with a USB interface.
2014-06-23 Arnaud Quette <arnaud.quette@free.fr>
* scripts/python/app/nut-monitor.appdata.xml: Create a FreeDesktop
AppData file for NUT Monitor. AppData files provide users with long
descriptions, screenshots and other useful information about an
application. This will mainly serve Software Center-like
applications
2014-06-19 Charles Lepple <clepple+nut@gmail.com>
* data/driver.list.in: HCL: Lacerda New Orion 800VA with blazer_usb … TUMYU03QbP
T8JxEtLd38mvwfTMMhZqS%3d%2diGpdvJDA%40mail.gmail.com
2014-06-17 Charles Lepple <clepple+nut@gmail.com>
* data/driver.list.in: HCL: add APC-Microsol entry for solis
* drivers/solis.c: solis: silence clang warnings about extra
parentheses Since we're in the neighborhood (#133)... Typically,
the idiom is either: if ( a == b ) for equality checking, or: if
( ( a = b ) ) for assignment with a comparison.
* drivers/solis.c, drivers/solis.h: solis: eliminate fixed-length
string buffer for model name The new APC model name overflows the
buffer. (#133)
2014-06-16 bsalvador <bruno.salvador@gmail.com>
* drivers/solis.c: Update on solis.c to add more support to Back-UPS
1200BR
* drivers/solis.c: Update solis.c to support Microsol-APC Unit.
Added support to Back-UPS 1200BR (Microsol-APC) unit.
2014-06-15 Charles Lepple <clepple+nut@gmail.com>
* docs/man/upscli_get.txt, docs/man/upscli_list_next.txt,
docs/man/upscli_list_start.txt: docs: synchronize upscli_*
numq/numa with header There were a few leftover signed int
parameters in the man pages, but the headers and implementation use
'unsigned int'. Closes:
* docs/man/solis.txt: docs: mention APC in Microsol driver man page
* docs/man/apcsmart.txt: docs: point APC Microsol users from apcsmart
to solis Also make some of the formatting and grammar self-
consistent.
* drivers/solis.c: solis: recognize APC BZ1200-BR and BZ2200BI-BR
(0.62) Patch suggested by Bruno Salvador for BZ1200-BR, and also
tested by Douglas A. Augusto on BZ2200BI-BR. Reference: * … 0-br-back-
ups-rs-1200va-600w-bivolt-115-nt.20247/ *
/find-root.php?message_id=CACu22%2d3Nn2R%3dQQe9uy%5fPXHRduaPaFgCp2S
w4ra57Ow2qDQcOJQ%40mail.gmail.com
2014-06-08 Arnaud Quette <arnaud.quette@free.fr>
* scripts/subdriver/gen-snmp-subdriver.sh: Inline documentation fixes
2014-06-03 george <rpubaddr0@gmail.com>
* scripts/python/module/PyNUT.py: Fixed version description.
* scripts/python/module/PyNUT.py: Added author information, bumped
version. According to the semantic versioning scheme
(), adding features that do not break backwards
compatibility with previous releases means that the minor version
number should be incremented.
* scripts/python/module/PyNUT.py: Change format of raise keyword.
Fixes PyNUT Python 3 compatibility.
* scripts/python/module/PyNUT.py: PyNUT: Create a custom exception
class. This maintains backwards compatibility, and allows calling
programs to use "except PyNUTError" instead of "except Exception"
when using PyNUT methods. See the referenced discussion for more
information.
* scripts/python/module/PyNUT.py: Fix error when raising without an
Exception. Raising without a valid exception is invalid:
>>> raise
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: exceptions must be old-style classes or derived
from BaseException, not NoneType
>>> raise Exception
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
Exception
Changing this to "raise Exception" fixes this problem.
2014-06-01 Charles Lepple <clepple+nut@gmail.com>
* docs/man/tripplite_usb.txt, drivers/tripplite_usb.c: tripplite_usb:
last tweaks, for now. Initialize bv_12V to a dummy value, since
gcc can't see that it is used in the union of both conditionals
where it is set. Also, align the documentation with the strange
definition of empty used by the Tripp Lite state-of-charge
approximation.
* drivers/tripplite_usb.c: tripplite_usb: silence warning (0.23)
Pedantic, to be sure, but someone might try the driver with a
protocol not listed, and sure enough, bv_12V won't be initialized.
* docs/man/tripplite_usb.txt, drivers/tripplite_usb.c: tripplite_usb:
expose battery_min/_max as variables (0.22)
/find-
root.php?message_id=21370.36829.817425.464627%40godel.bruda.ca
2014-05-27 Charles Lepple <clepple+nut@gmail.com>
* data/driver.list.in: HCL: GRAFENTHAL PR-3000-HS supported by snmp-
ups Tested with 2.6.5-3 (0.68) on Windows (IETF MIB 1.4). Some
NUT variables are zero - further testing may be needed. Reference: … BA4A4E8ADF
76BC3AD3E568581EC753%40MS03.MACLE.DE
2014-05-23 Charles Lepple <clepple+nut@gmail.com>
* drivers/tripplite_usb.c: tripplite_usb: use dv/dq charge
calculation for all models (0.21)
2014-05-20 Andrew Burdo <zeezooz@gmail.com>
* drivers/powercom-hid.c: Add comments for some values.
* drivers/usbhid-ups.c: Reuse variable.
* drivers/usbhid-ups.c: Add default case.
2014-05-18 Charles Lepple <clepple+nut@gmail.com>
* scripts/upower/95-upower-hid.rules: upower: regenerate rules file
USB VID:PID = 10af:0004 This dependency graph makes my head spin.
* configure.ac: configure.ac: version to 2.7.2.5 for snapshots
2014-05-13 Daniele Pezzini <hyouko@gmail.com>
* drivers/compaq-mib.c: compaq-mib: comment out no-longer-used
items, as per 31827d5faa86377efb7a92b7aec322cc4c7a275f
2014-05-03 Daniele Pezzini <hyouko@gmail.com>
* docs/download.txt: docs: add Void Linux in download/Binary packages
Reference:
2014-05-03 Charles Lepple <clepple+nut@gmail.com>
* docs/Makefile.am: docs: add mge-usb-rj45.jpg to distribution
2014-05-02 Arnaud Quette <arnaud.quette@free.fr>
* docs/images/cables/mge-usb-rj45.jpg: Add MGE information on USB-
RJ45 cable. The illustration matching the previous commit was still
needed in the nut repository, and not on the nut-website on
* docs/cables.txt: Add MGE information on USB-RJ45 cable. This
information was provided by MGE years ago, and was waiting for
counter-testing. Martin De Graaf - Loyer has now fixed this. Note
that the matching illustration will be committed to the new
nut-website repository
2014-04-29 Andrew Burdo <zeezooz@gmail.com>
* drivers/powercom-hid.c, drivers/usbhid-ups.c: Bump versions.
* data/driver.list.in, docs/man/usbhid-ups.txt: Update documentation.
* drivers/usbhid-ups.c: Reconnect on interrupt read error.
* drivers/libhid.c, drivers/libhid.h, drivers/powercom-hid.c, drivers
/usbhid-ups.c: Reading from the interrupt pipe implies that you use
INPUT flagged objects.
* drivers/powercom-hid.c: Remove erroneous status.
* drivers/powercom-hid.c: Comment non-compliant variables.
2014-04-17 Andrew Burdo <zeezooz@gmail.com>
* drivers/libhid.c, drivers/libhid.h, drivers/powercom-hid.c, drivers
/usbhid-ups.c: Add support for 0d9f:0001 (USB HID, Powercom).
2014-04-17 Arnaud Quette <arnaud.quette@free.fr>
* NEWS, UPGRADING, configure.ac: Update for release 2.7.2 Complete
the release information for NUT 2.7.2
2014-04-17 Stephen J. Butler <stephen.butler@gmail.com>
* drivers/tripplite-hid.c: Scale for SMART1500LCDT
2014-04-07 Arnaud Quette <arnaud.quette@free.fr>
* drivers/compaq-mib.c: Fix erroneous status in HP/Compaq SNMP MIB
Using the most recent HP firmware (1.76), erroneous on-battery
statuses were reported. Also disable an erroneous low-battery
definition (pointing nowhere), while waiting for actual
improvements (report and patch from Philippe Andersson ; Closes
networkupstools/nut#117)
2014-04-06 Daniele Pezzini <hyouko@gmail.com>
* drivers/mge-xml.c: mge-xml: fix compile-time warnings, versioning
2014-04-05 Charles Lepple <clepple+nut@gmail.com>
* data/driver.list.in: HCL: Numeric Digital 800 plus USB VID:PID =
0665:5161 Reference: networkupstools/nut#115 (blazer_usb @ 2.6.4;
waiting for confirmation with nutdrv_qx)
* data/driver.list.in: HCL: Eaton Powerware 3105 supported by
bcmxcp_usb Closes networkupstools/nut#117
* data/driver.list.in, drivers/belkin-hid.c: usbhid-ups/belkin-hid:
add support for Emerson Network Power Liebert PSI 1440 USB VID:PID
= 10af:0004 … .user/8479
2014-04-05 Arnaud Quette <arnaud.quette@free.fr>
* drivers/al175.c: Fix data format warnings on all architectures
Complete commit 7daa0feb6ed4f1c29bfe14c8e491ba198a4ba643, and
actually fix some of the warnings related to data format. Also bump
al175 driver revision
* clients/Makefile.am: Update libupsclient library version
information Following the recent export of libcommon functions in
libupsclient, update the library version information to 4:0:0
2014-04-04 Arnaud Quette <arnaud.quette@free.fr>
* drivers/al175.c: Fix data format warnings. Fix a few warnings
related to data format in debug code
* clients/Makefile.am: Add libnutclient library version information
Add the missing LDFLAGS for adding version information
2014-03-21 Arnaud Quette <arnaud.quette@free.fr>
* data/driver.list.in: [HCL] CABAC UPS-1700DV2 supported by
blazer_usb Reported by jammin84 Closes #113
* clients/Makefile.am, common/Makefile.am: Link libupsclient with
libcommon Fix undefined references related to functions of
libcommon. This issue was reported on Debian: (patch from Matthias Klose; Closes
Github issue #73)
2014-03-18 Charles Lepple <clepple+nut@gmail.com>
* data/driver.list.in: [HCL] Digitus DN-170014 supported by
richcomm_usb Reference:
/nut-upsdev/2014-March/006695.html -or--
root.php?message_id=CADq9dvWMx0xBz9XXkVKXCre4ox%2d2kSeHtD7LW39eEDH1
RCY8sQ%40mail.gmail.com
2014-03-05 Charles Lepple <clepple+nut@gmail.com>
* UPGRADING: Added note about --enable-option-checking=fatal Closes
#99 (really)
2014-03-05 Émilien Kia <emilien.kia@gmail.com>
* scripts/Aix/nut-aix.spec.in: Make the web source path independent
from the specific version.
* scripts/Aix/nut-aix.spec.in: Use configure-dependent variables
instead of statically defined ones for user and group.
* configure.ac: Use $target_cpu instead of calling uname to
determine the CPU type. Fixes cross-compilation.
2013-06-14 Vaclav Krpec <VaclavKrpec@Eaton.com>
* scripts/Aix/nut-aix.spec.in, scripts/Aix/nut.init: AIX: packaging &
init script improvements (cherry picked from commit
ce195e3a2eff1abbd8e192f4d3e278017d7ffb21)
2013-06-12 Vaclav Krpec <VaclavKrpec@Eaton.com>
* scripts/Aix/nut.init: Fixed client startup detection (cherry
picked from commit 23df5e811cc9008bfa0a37bd174b59890a3760a6)
* scripts/Aix/nut-aix.spec.in: Fixed AIX RPM specfile (cherry picked
from commit 11ba37bf36dcda0398c8c62fab838dd00e54c5db)
2013-06-11 Vaclav Krpec <VaclavKrpec@Eaton.com>
* scripts/Aix/nut-aix.spec.in: Allow libneon-based XML driver &
scanning for AIX (cherry picked from commit
4c2e89ec584b2015b22f4599d1571c26f2f94e3d)
2013-06-10 Vaclav Krpec <VaclavKrpec@Eaton.com>
* clients/Makefile.am: Fix of AIX-specific parseconf linking bug
Added dummy do_upsconf_args to binaries that use libcommon to
satisfy the linker. libcommon links libparseconf, which calls
do_upsconf_args supplied from above as an implementation-specific
routine. (cherry picked from commit
0078f9383d3a7af4f3edfed6c78de387a12c6b2b)
2013-04-25 Vaclav Krpec <VaclavKrpec@Eaton.com>
* clients/Makefile.am, clients/upsclient.c, configure.ac:
libupsclient: NUT scanning on AIX bugfix. 1/ A similar bug to the
one in Solaris exists in AIX itself: non-blocking connect may return
-1 while errno == 0. It shall be treated as EINPROGRESS. 2/ Linking
of libupsclient.so on AIX requires libcommon, otherwise scanning for
NUT crashes with SIGSEGV on unresolved upslogx (cherry picked from
commit 16177f99bc995852bb86d2183958f24f11993632)
2013-03-13 Vaclav Krpec <VaclavKrpec@Eaton.com>
* Makefile.am: AIX packages: make package does the trick (cherry
picked from commit 1d25bd2868339decace5b3028c834746f2824670)
2013-03-12 Vaclav Krpec <VaclavKrpec@Eaton.com>
* scripts/Aix/nut-aix.spec.in, scripts/Aix/nut.init: AIX packaging:
nut-client uninstall bugfix. Clean package uninstallation
(lost/forgotten commit) (cherry picked from commit
f6dd1aec5d2157a3ba3654621fa8e2ac88b060f9)
2013-03-08 Vaclav Krpec <VaclavKrpec@Eaton.com>
* clients/upsclient.c, configure.ac: Solaris/i386: non-blocking
connect WA (cherry picked from commit
d2b466b9ee5402074ccbf7f2967433350affdbcc)
2013-03-04 Vaclav Krpec <VaclavKrpec@Eaton.com>
* Makefile.am, configure.ac, scripts/Aix/nut-aix.spec.in,
scripts/Aix/nut.init: AIX packaging AIX init script and RPM spec.
file added (cherry picked from commit
3851525edcb417f96a5d1c12fb786b85095b54d4)
2014-03-03 Charles Lepple <clepple+nut@gmail.com>
* data/driver.list.in: HCL: various updates * Closes
*-
upsdev/2013-November/006564.html *-
root.php?message_id=50D9D460.1080408%40gmail.com
*
bPz%2bRVPVXJyFLj0HErh1ZOtm5tk8b6n5Nd5kSk0g%40mail.gmail.com *-
root.php?message_id=528EC53C.9000801%40me.com
* docs/nut-qa.txt: NUT QA document: updated and reworded
* docs/nut-qa.txt: NUT QA document: CR->LF
* docs/FAQ.txt: FAQ: minor updates Update the bestfortress entry,
fix the mythicbeasts URL, and reword a few entries.
2014-02-13 Charles Lepple <clepple+nut@gmail.com>
* docs/man/upsimage.cgi.txt: upsimage.cgi(8): update GD homepage
2014-03-03 Émilien Kia <emilien.kia@gmail.com>
* drivers/nutdrv_qx.h: Detect if TRUE (and FALSE) are already defined
and define bool_t accordingly.
2014-02-27 Arnaud Quette <arnaud.quette@free.fr>
* UPGRADING: Add a note on Hardware Abstraction Layer removal
* INSTALL.nut, autogen.sh, configure.ac, docs/Makefile.am,
docs/configure.txt, docs/developers.txt, docs/features.txt,
docs/macros.txt, docs/new-drivers.txt, docs/nut-hal.txt, docs
/packager-guide.txt, drivers/Makefile.am, drivers/dstate-hal.c,
drivers/dstate-hal.h, drivers/main-hal.c, drivers/main-hal.h,
m4/nut_check_libhal.m4, m4/nut_config_libhal.m4: Remove the
remaining HAL files and references Remove the remaining build
rules, source code and documentation related to the FreeDesktop
Hardware Abstraction Layer (HAL) support. For the record, with this
HAL implementation, NUT drivers were sending data over DBus
(Closes: #99)
2014-02-24 Charles Lepple <clepple+nut@gmail.com>
* drivers/blazer_usb.c, drivers/libusb.c, drivers/nutdrv_qx.c,
drivers/riello_usb.c, drivers/usbhid-ups.c: OpenBSD ports tree
patches for EPROTO Closes networkupstools/nut#44
2014-02-26 Arnaud Quette <arnaud.quette@free.fr>
* scripts/Makefile.am, scripts/README, scripts/hal/.gitignore,
scripts/hal/Makefile.am, tools/nut-usbinfo.pl: Remove the
generation of HAL support files Remove the code supporting the
generation of HAL FDI file. This is the first commit of a set to
address Github issue #99
* drivers/snmp-ups.c: Fix snmp-ups segmentation fault A basic sanity
check was missing in the core code of snmp-ups, causing a driver
crash under some specific circumstances, at driver initialisation
time. Hence, this does not affect production systems
* README, UPGRADING, docs/FAQ.txt, docs/config-notes.txt,
drivers/Makefile.am, scripts/Solaris/nut.in,
scripts/Solaris/postinstall.in, scripts/Solaris/preremove.in,
scripts/systemd/nut-driver.service.in,
scripts/systemd/nutshutdown.in: Closes #96: Install upsdrvctl to
$prefix/sbin Install upsdrvctl to $prefix/sbin rather than
$driverexec. upsdrvctl has been historically standing beside the
drivers. It now resides in the system binaries ($prefix/sbin)
directory
2014-02-25 Arnaud Quette <arnaud.quette@free.fr>
* drivers/mge-hid.c: Add improved support for Eaton 5P Add the
	necessary hooks to improve support for the Eaton 5P range. This
includes post-processing of the model name, along with handling
rules for battery voltage (actual and nominal)
2014-02-19 Daniele Pezzini <hyouko@gmail.com>
* docs/Makefile.am, docs/chunked.xsl, docs/common.xsl,
docs/xhtml.xsl: docs: prevent smartphones from being too smart
(docbook) Add HTML <meta> tag to not auto-create telephone number
links on mobile browsers also in docbook processed documents.
Reference: XSL
files source: -
/docbook-xsl/common.xsl - … r/docbook-
xsl/xhtml.xsl -
/docbook-xsl/chunked.xsl
* docs/man/asciidoc.conf: docs: prevent smartphones from being too
smart Add HTML <meta> tag to not auto-create telephone number
links on mobile browsers. Reference:
2014-02-15 Arnaud Quette <arnaud.quette@free.fr>
* docs/acknowledgements.txt: Update NUT team membership for Daniele
	Pezzini Daniele Pezzini is now a NUT senior developer
2014-02-14 Arnaud Quette <arnaud.quette@free.fr>
* docs/acknowledgements.txt, docs/website/news.txt: Formalizing the
end of the relationship with Eaton The situation of the
relationship with Eaton has evolved, and since 2011 Eaton does not
	support NUT anymore. This may still evolve in the future, but for
	now, please no longer assume that buying Eaton products will
	provide you with official support from Eaton, or a better level of
	device support in NUT.
2014-02-14 Charles Lepple <clepple+nut@gmail.com>
* Makefile.am: devd: use staging directory for distcheck
* drivers/Makefile.am, drivers/snmp-ups.c, drivers/xppc-mib.c,
drivers/xppc-mib.h: snmp-ups: add XPPC-MIB for Tripp Lite
SU10KRT3/1X
* scripts/subdriver/gen-snmp-subdriver.sh: gen-snmp-subdriver.sh:
documentation updates
2014-02-10 Charles Lepple <clepple+nut@gmail.com>
* scripts/subdriver/gen-snmp-subdriver.sh: gen-snmp-subdriver.sh: fix
option typos * Use '-M' for MIB directories, to match snmpwalk and
the help text. * Add space before '-c' in snmpwalk (not sure how
this worked before)
* scripts/Makefile.am: cosmetic: Indent scripts/Makefile.am
EXTRA_DIST continuation lines
* scripts/Makefile.am: Add gen-snmp-subdriver.sh to distribution
tarball
2014-02-14 Arnaud Quette <arnaud.quette@free.fr>
* docs/acknowledgements.txt: Update NUT team membership for
Frédéric Bohe Frederic Bohe, NUT senior developer and Eaton
contractor from 2009 to 2013, is now a retired member. Thanks for
all the hard work on the Windows port, nut-scanner, Unix packaging,
support, ... Also update the developers membership page, from
Alioth to GitHub
2013-02-24 Charles Lepple <clepple+nut@gmail.com>
* autogen.sh, configure.ac, scripts/Makefile.am,
scripts/devd/.gitignore, scripts/devd/Makefile.am,
scripts/devd/README, tools/nut-usbinfo.pl: FreeBSD: generate
devd.conf files for USB UPSes This adds a --with-devd-dir=PATH
option to ./configure, which defaults to /usr/local/etc/devd (or
/etc/devd, whichever is found first). Unlike udev, there does not
seem to be a way to re-trigger rules at runtime. This means you
will likely need to unplug and replug your UPS after installing the
new nut-usb.conf file.
2014-02-13 Arnaud Quette <arnaud.quette@free.fr>
* .gitignore, server/.gitignore: Minor completion to gitignore files
Add a few more exotic targets, related to debug or official
distribution
2014-02-11 Daniele Pezzini <hyouko@gmail.com>
* .gitignore, clients/.gitignore, common/.gitignore, conf/.gitignore,
data/.gitignore, data/html/.gitignore, docs/.gitignore,
docs/man/.gitignore, docs/website/.gitignore,
docs/website/scripts/.gitignore, drivers/.gitignore,
include/.gitignore, lib/.gitignore, m4/.gitignore,
scripts/.gitignore, scripts/HP-UX/.gitignore,
scripts/Solaris/.gitignore, scripts/augeas/.gitignore,
scripts/avahi/.gitignore, scripts/hal/.gitignore,
scripts/hotplug/.gitignore, scripts/python/.gitignore,
scripts/systemd/.gitignore, scripts/udev/.gitignore,
scripts/ufw/.gitignore, server/.gitignore, tests/.gitignore,
tools/.gitignore, tools/nut-scanner/.gitignore: Simplify gitignore
files Remove redundancies and old/svn things. Limit the scope
wherever it makes sense. Ignore all cscope files and test logs.
Make ignoring generated files easier to maintain.
2014-02-11 Charles Lepple <clepple+nut@gmail.com>
* drivers/libshut.c: libshut: partially revert PnP/RTS change
Reported by Baruch Even. It is unclear how this will work after
running nut-scanner, but it is more important to keep the drivers
working. Reference: 65db105 /
2013-09-24T08:18:00Z!fredericbohe@eaton.com Closes:
networkupstools/nut#91
2014-02-09 Daniele Pezzini <hyouko@gmail.com>
* docs/man/nutdrv_qx.txt: nutdrv_qx: update manpage for the newly
supported Voltronic Power P98 units
* drivers/nutdrv_qx.c, drivers/nutdrv_qx_mecer.c,
drivers/nutdrv_qx_mecer.h: nutdrv_qx: improve support for
	'(ACK/(NAK' and Voltronic Power P98 UPSes In the 'mecer' subdriver's
	claim function, try to get the protocol (QPI, for Voltronic Power
devices) used by the UPS: - supported devices are Voltronic Power's
P98 units - if the UPS doesn't support the QPI command, use its
reply to identify whether it uses '(ACK\r'/'(NAK\r' replies This
way we can catch '(ACK/(NAK' devices, while previously the 'mecer'
subdriver was 'hidden' by the 'megatec' (echo back/'ACK/NAK') one.
Plus Q1 units with 'ACK'/'NAK' replies or echoing back not
supported and rejected commands are no longer wrongly 'claimed' by
the 'mecer' subdriver.
2014-02-03 Daniele Pezzini <hyouko@gmail.com>
* docs/.gitignore, docs/Makefile.am, docs/documentation.txt: docs:
build PDF also for cables.txt
2014-02-02 Daniele Pezzini <hyouko@gmail.com>
* Makefile.am, configure.ac, docs/.gitignore, docs/Makefile.am,
docs/man/.gitignore, docs/man/Makefile.am, docs/stable-hcl.txt,
docs/user-manual.txt, docs/website/.gitignore,
docs/website/Makefile.am, docs/website/css/ie-overrides.css,
docs/website/css/web-layout.css,
docs/website/css/xhtml11-quirks.css, docs/website/css/xhtml11.css,
docs/website/faviconut.ico, docs/website/faviconut.png,
docs/website/news.txt, docs/website/old-news.txt,
docs/website/projects.txt, docs/website/scripts/.gitignore,
docs/website/scripts/filter_png.js, docs/website/scripts/jquery.js,
docs/website/scripts/nut_jquery.js, docs/website/scripts/toc.js,
docs/website/ups-protocols.txt, docs/website/web-layout.conf,
docs/website/website.txt, tools/Makefile.am, tools/nut-hclinfo.py:
website: move to a standalone website
2014-01-18 Daniele Pezzini <hyouko@gmail.com>
* docs/net-protocol.txt: docs: fix a couple of asciidoc errors in
net-protocols.txt
* server/netlist.c: net-protocol: fix closing line of LIST RANGE
2014-01-16 Charles Lepple <clepple+nut@gmail.com>
* drivers/nutdrv_atcl_usb.c: nutdrv_atcl_usb: fix permissions-based
crash, and enable vendor variable (1.1)
2014-01-13 Charles Lepple <clepple+nut@gmail.com>
* .gitignore: Ignore cscope.out
* docs/man/.gitignore, docs/man/nutdrv_atcl_usb.txt,
drivers/nutdrv_atcl_usb.c: nutdrv_atcl_usb: documentation and
logging (v1.0)
2014-01-11 Charles Lepple <clepple+nut@gmail.com>
* drivers/apc-mib.c: snmp-ups: APC SmartBoost and SmartTrim are OL
SmartBoost and SmartTrim are voltage regulation functions that
prevent the UPS from using the battery during brownouts and
overvoltages, so the BOOST and TRIM states are also mapped to OL.
Reference: … devel/6583
* data/driver.list.in: [HCL] MicroDowell B.Box LP 500: genericups
type 7 Closes networkupstools/nut#83 From @lxp: UPS shutdown
only works when on-battery and has a delay of about 1min until
	execution (something between 50sec and 1min 30sec on mine).
References: … art-0.html
2014-01-11 Daniele Pezzini <hyouko@gmail.com>
* drivers/blazer_ser.c, drivers/blazer_usb.c: blazer: fix man page
references
2014-01-11 Charles Lepple <clepple+nut@gmail.com>
* docs/man/nutdrv_atcl_usb.txt, drivers/nutdrv_atcl_usb.c:
nutdrv_atcl: match iManufacturer (vendor) string
* docs/man/snmp-ups.txt: snmp-ups: update and edit documentation
2014-01-11 Daniele Pezzini <hyouko@gmail.com>
* data/driver.list.in: HCL: add Atlantis Land/Voltronic Power units
supported by nutdrv_qx
* drivers/nutdrv_qx_blazer-common.c, drivers/nutdrv_qx_blazer-
common.h: nutdrv_qx: fix nutdrv_qx_blazer-common.{c,h} header
* docs/man/nutdrv_qx.txt, docs/nutdrv_qx-subdrivers.txt: nutdrv_qx:
update manuals for new 'voltronic-qs' subdriver
2013-12-05 Daniele Pezzini <hyouko@gmail.com>
* drivers/Makefile.am, drivers/nutdrv_qx.c, drivers
/nutdrv_qx_voltronic-qs.c, drivers/nutdrv_qx_voltronic-qs.h:
nutdrv_qx: add Voltronic-QS subdriver (nutdrv_qx protocol
=voltronic-qs) A subdriver using a protocol, specific to UPSes
manufactured by Voltronic Power, based on the 'mustek' one (i.e.
'QS').
2014-01-01 Charles Lepple <clepple+nut@gmail.com>
* drivers/nutdrv_atcl_usb.c: nutdrv_atcl_usb: adjusted logging and
retries (v0.02)
* data/driver.list.in, docs/man/Makefile.am, docs/man/index.txt,
docs/man/nutdrv_atcl_usb.txt: nutdrv_atcl_usb: man page and HCL
entries
2013-12-31 Charles Lepple <clepple+nut@gmail.com>
* drivers/.gitignore, drivers/Makefile.am, drivers/nutdrv_atcl_usb.c,
tools/nut-usbinfo.pl: nutdrv_atcl_usb: 'ATCL FOR UPS' new driver
Reference:-
root.php?message_id=%3c52B4C54E.1050106%40ariwainer.com.ar%3e
* drivers/libusb.c, drivers/usb-common.h: Move USB_TIMEOUT to usb-
common.h
2013-12-31 Laurent Bigonville <bigon@bigon.be>
* .gitignore, INSTALL, INSTALL.nut, Makefile.am, docs/FAQ.txt,
docs/Makefile.am, docs/configure.txt, docs/packager-guide.txt, docs
/user-manual.txt: Rename INSTALL to INSTALL.nut Rename it to
	INSTALL.nut so autoreconf will not try to overwrite it. In Debian,
	tools like dh_autoreconf call autoreconf with -f, which overwrites
	the INSTALL file.
2013-12-27 Charles Lepple <clepple+nut@gmail.com>
* scripts/subdriver/gen-usbhid-subdriver.sh: usbhid-ups: fix call to
is_usb_device_supported() The is_usb_device_supported() function
now takes a USBDevice_t* instead of a pair of USB ID values.
2013-12-22 Florian Bruhin <nut@the-compiler.org>
* data/driver.list.in, docs/man/powercom.txt, drivers/powercom.c: Add
OptiUPS VS 575C support to PowerCom Reference:-
root.php?message_id=%3c20131126085646.GM28832%40lupin%3e
2013-12-11 Denis Yantarev <denis.yantarev@gmail.com>
* drivers/blazer_usb.c, drivers/nutdrv_qx.c: Fixed incorrectly
reported Ippon response length
2013-11-30 Daniele Pezzini <hyouko@gmail.com>
* docs/man/nutdrv_qx.txt, drivers/nutdrv_qx.c, drivers
/nutdrv_qx_blazer-common.c, drivers/nutdrv_qx_blazer-common.h,
drivers/nutdrv_qx_megatec-old.c, drivers/nutdrv_qx_mustek.c,
drivers/nutdrv_qx_q1.c: nutdrv_qx: fix 'megatec/old' and 'mustek'
subdrivers' claim functions Address, for 'megatec/old' and
'mustek' subdrivers, the same problem fixed in commit
720975f4de910b270ba705a7f2981c2ee33ca2eb for Q1-based ones: - Make
the claim function of 'megatec/old' and 'mustek' subdrivers not
	poll the UPS for 'vendor' information as it is not really
	needed to set these protocols apart from the other ones (i.e. the
'status' poll is specific enough, at the time of writing). - Move
common 'light' claim function to nutdrv_qx_blazer-common.{c,h}. -
Update manual. - Versioning.
2013-11-24 Daniele Pezzini <hyouko@gmail.com>
* docs/nutdrv_qx-subdrivers.txt: nutdrv_qx: improve developer manual
Get rid of useless tables. Fix minor errors/typos.
* drivers/nutdrv_qx.c: nutdrv_qx: versioning
* docs/man/nutdrv_qx.txt, docs/nutdrv_qx-subdrivers.txt: nutdrv_qx:
update manuals for new Q1 subdriver and improve readability
2013-11-23 Daniele Pezzini <hyouko@gmail.com>
* drivers/Makefile.am, drivers/nutdrv_qx.c, drivers/nutdrv_qx_q1.c,
drivers/nutdrv_qx_q1.h: nutdrv_qx: add new 'fallback' Q1 subdriver
Add new 'Q1' subdriver. This subdriver implements the same protocol
as the one used by the 'megatec' subdriver minus the vendor (I) and
ratings (F) queries. In the claim function: - it doesn't even try
	to get 'vendor' information (I) - it checks only status (Q1),
through 'input.voltage' variable Therefore it should be able to
work even if the UPS doesn't support vendor/ratings *and* the user
doesn't use the 'novendor'/'norating' flags, as long as: - the UPS
	replies a Q1-compliant answer (i.e. not necessarily filled with all
of the Q1-required data, but at least of the right length and with
not available data filled with some replacement character) - the
UPS reports a valid input.voltage (used in the claim function) -
the UPS reports valid status bits (1st, 2nd, 3rd, 6th, 7th are the
mandatory ones) This commit reintroduces a functionality of the
blazer subdrivers that was lost because now, in order to tell
whether a device is supported by a subdriver or not, if the user
doesn't call the driver with the 'novendor' flag, both the status
(Q1) and the vendor (I/FW?) queries are needed (that's to better
discern the subdrivers). Reference:-
upsuser/2013-November/008692.html
2013-11-23 Charles Lepple <clepple+nut@gmail.com>
* configure.ac, configure.in: Rename configure.in to configure.ac
autoconf has been warning about this for a while - let's fix it
before too many branches get created with the old name.
* configure.in: configure.in: bump version to 2.7.1.5 Some packaging
systems don't like the -pre# system.
2013-11-21 Laurent Bigonville <bigon@bigon.be>
* docs/man/ups.conf.txt, docs/man/upsdrvctl.txt, drivers/upsdrvctl.c:
Provide retry options for upsdrvctl and driver(s) As recently seen
in Debian (bugs #694717 and #677143), it may be required to have
	upsdrvctl retry starting the driver in case of failure. More
	specifically, a mix of init systems (SysV and systemd), udev and USB
device(s) can result in the /dev entry not being available at
driver startup, thus resulting in a general failure to start NUT.
This commit provides at least a way to overcome this issue. A more
suitable solution will require more work on NUT design. This
	patch is based on Arnaud Quette's proposal
2013-11-20 Arnaud Quette <arnaud.quette@free.fr>
* Makefile.am: Maintainers targets: distribution signature / hashes
Create some handy targets to ease and automate release publication
2013-11-19 Charles Lepple <clepple+nut@gmail.com>
* configure.in: configure: update version to 2.7.1
* docs/website/news.txt: news: add 2.7.1 release
* Makefile.am: ChangeLog: use full path to generator script
* docs/website/projects.txt: website: update related project links
2013-11-18 Arnaud Quette <arnaud.quette@free.fr>
* NEWS: Minor reordering of the news
2013-11-18 Kirill Smelkov <kirr@mns.spb.ru>
* MAINTAINERS, docs/man/.gitignore, docs/man/Makefile.am,
docs/man/al175.txt, docs/man/index.txt, docs/man/nutupsdrv.txt,
docs/new-drivers.txt, drivers/Makefile.am, drivers/al175.c: al175:
updated driver, please restore it Back in 2005 I was young and
idealistic, that's why you finally marked al175 as 'broken', but
now I understand your points (some) and that in NUT you need good
portability. So this time I've checked that al175 compiles with
	CC="gcc -std=c89 -pedantic", and CC="gcc -std=c99 -pedantic" Also,
I've tried to clean-up the driver based on feedback from 2009, but
unfortunately I no longer have hardware to test and will not have
any in foreseable future, so the driver was reworked to meet the
project code quality criteria, without testing on real hardware.
Some bugs may have crept in. Changes since last posting in 2009:
- patch rebased on top of current master (v2.6.5-400-g214c442); -
added reference to COMLI communication protocol document; - status
decode errors go to log, instead of setting non-conformant status
	like "?T", "?OOST", etc. For such errors a new loglevel is
allocated; - "High Battery" status is back; - converted tracing
macros to direct use of upsdebugx and numbers 1,2,3,4 for loglevels
as requested (but now lines got longer because of explicit __func__
usage); - lowered usage of other macros (e.g. REVERSE_BITS
inlined); - alarm(3) is not used anymore - instead whole I/O
transaction time budget is maintained manually; - man page
converted to asciidoc and supported variables list is merged into
it; - upsdebug_ascii moved to common.c and to separate patch.
~~~~ Changes since al175 was removed from NUT tree in 2008: -
alloca was eliminated through the help of automatic variables -
debugging/tracing were reworked to (almost always) use NUT builtins
- al175 now uses 3 debug levels for (1=user-level info, 2=protocol
debugging, 3=I/O tracing) - rechecked. … opers.html and
	applied where appropriate Also > This driver does not support
upsdrv_shutdown(), which makes > it not very useful in a real world
application. This alone > warrants 'experimental' status, but for
the below mentioned > reasons (to name a few), it's flagged
'broken' instead. Yes, at present shutdown is not supported, and
unfortunately now I don't have AL175 hardware at hand, so that I
can't write it and verify the implementation. I've marked the
driver as DRV_EXPERIMENTAL, although it was tested by us as part of
our systems to work OK for more than three years in production
environment on ships (and we don't need shutdown there -- in
critical situations the system has to operate as long as possible,
	until the battery is empty) Also, all of the previous issues
listed below are now fixed in this al175 version: - ‘return’
with a value, in function returning void (2x) - anonymous variadic
macros were introduced in C99 - C++ style comments are not allowed
in ISO C90 - ISO C forbids braced-groups within expressions (5x) -
ISO C90 forbids specifying subobject to initialize (16x) - ISO C99
requires rest arguments to be used (18x) Yes, "All the world is
not an x86 Linux box," and I've tried to make all the world happy.
Please apply. Thanks, Kirill.
* common/common.c, docs/developers.txt, include/common.h: common:
upsdebug_ascii() - to dump a message in ascii For debugging ASCII-
based protocols with control characters (e.g. COMLI) it is handy to
dump messages not in hex, but in ascii with human readable codes.
Add utility function to do it.
2013-11-17 Charles Lepple <clepple+nut@gmail.com>
* docs/man/.gitignore, drivers/.gitignore: apcupsd-ups: ignore
generated files
* drivers/apcupsd-ups.c: apcupsd-ups: fix cut-n-paste error
* drivers/apcupsd-ups.c: apcupsd-ups 0.04: use O_NONBLOCK instead of
FIONBIO
* NEWS, docs/man/index.txt: apcupsd-ups: add NEWS and man page link
* UPGRADING: Mention upsrw output change.
* docs/man/nut-recorder.txt: Reword nut-recorder man page
* UPGRADING: UPGRADING: link to man pages for changed drivers
* configure.in: Bump version to 2.7.1-pre2
* NEWS, UPGRADING: Update NEWS and UPGRADING for 2.7.1 Closes:
networkupstools#37
* data/driver.list.in: HCL: StarPower PCF-800VA Reported by Don.
Reference: … =%3cCAPO%2
bLDnApF3ALNfp%5fwaVpHqSuJ9sajKCKXPXLLsAWUWww7Of%3dw%40mail.gmail.co
m%3e
* data/driver.list.in: HCL: Atlantis Land A03-P551(V1.2) supported by
blazer_usb Reported by Giovanni Panozzo. Reference:-
root.php?message_id=%3c51B76B0C.1080109%40panozzo.it%3e Note that
blazer_usb will eventually be replaced by nutdrv_qx.
* clients/nutclient.h, clients/upsclient.c,
conf/upsmon.conf.sample.in, configure.in, docs/FAQ.txt,
docs/man/libnutclient.txt, docs/man/libnutclient_general.txt,
docs/man/upsmon.conf.txt, docs/security.txt, drivers/powerman-
pdu.c, server/netssl.c: Replace 'connexion' with 'connection' in
English contexts Also reworded a few phrases surrounding the
replacements.
* docs/man/.gitignore: asciidoc: ignore all generated blazer*.html
files
* data/driver.list.in: HCL: update CyberPower entries, including
CP900AVR Reported by Craig Duttweiler Reference:-
root.php?message_id=%3c51295F86.4080601%40twistedsanity.net%3e
* docs/stable-hcl.txt: GitHub issues can also be used to report HCL
updates
* docs/website/projects.txt: Update links to related projects
* docs/download.txt: Update download page * Re-added link to
Buildbot snapshot generator * Updated a few links
2013-11-13 Daniele Pezzini <hyouko@gmail.com>
* docs/man/.gitignore, drivers/.gitignore: Add nutdrv_qx to
.gitignore files and remove voltronic from them
2013-11-12 Charles Lepple <clepple+nut@gmail.com>
* docs/man/Makefile.am: a2x: use --destination-dir This option seems
to work now. Previously, Asciidoc source files were copied to the
destination directory, but this did not account for included files.
2013-11-12 Daniele Pezzini <hyouko@gmail.com>
* docs/man/nutdrv_qx.txt: nutdrv_qx: fix cross links in manpage
Remove links to voltronic manuals. Fix links to blazer manuals.
2013-11-12 Charles Lepple <clepple+nut@gmail.com>
* .gitignore: git: ignore test-driver, and sort ignores list test-
driver is apparently part of automake, generated for libcpp unit
tests.
2013-11-10 Charles Lepple <clepple+nut@gmail.com>
* docs/man/Makefile.am: Include blazer-common.txt in built tarball
2013-11-10 Daniele Pezzini <hyouko@gmail.com>
* clients/upsrw.c: upsrw: publish also the maximum length of STRING
rw variables
2013-11-09 Daniele Pezzini <hyouko@gmail.com>
* docs/website/scripts/nut_jquery.js: HCL: Improve readability of
nut_jquery.js
* docs/website/scripts/nut_jquery.js: HCL: make support-level filter
show items with a 'higher or equal' level Reference:
b.com/networkupstools/nut/issues/48#issuecomment-28134135
* data/driver.list.in: nutdrv_qx: readd HCL's items lost with the
revert of the voltronic merge
* data/driver.list.in: nutdrv_qx: remove superfluous indications from
the HCL
* data/driver.list.in, docs/Makefile.am, docs/blzr-subdrivers.txt,
docs/man/Makefile.am, docs/man/blzr.txt, docs/man/index.txt,
docs/man/nutdrv_qx.txt, docs/man/nutupsdrv.txt, docs/new-
drivers.txt, docs/nutdrv_qx-subdrivers.txt, drivers/Makefile.am,
drivers/blzr.c, drivers/blzr.h, drivers/blzr_blazer-common.c,
drivers/blzr_blazer-common.h, drivers/blzr_mecer.c,
drivers/blzr_me/nutdrv_qx.c, drivers/nutdrv_qx.h,
drivers/nutdrv_qx_blazer-common.c, drivers/nutdrv_qx_blazer-
common.h, drivers/nutdrv_qx_mecer.c, drivers/nutdrv_qx_mecer.h,
drivers/nutdrv_qx_megatec-old.c, drivers/nutdrv_qx_megatec-old.h,
drivers/nutdrv_qx_megatec.c, drivers/nutdrv_qx_megatec.h,
drivers/nutdrv_qx_mustek.c, drivers/nutdrv_qx_mustek.h,
drivers/nutdrv_qx_voltronic.c, drivers/nutdrv_qx_voltronic.h,
drivers/nutdrv_qx_zinto.c, drivers/nutdrv_qx_zinto.h, tools/nut-
usbinfo.pl: nutdrv_qx: rename 'blzr' driver to 'nutdrv_qx'
Reference:-
upsdev/2013-November/006555.html
* docs/stable-hcl.txt, docs/website/css/web-layout.css: Address Issue
#48 (text-based browsers) Reference:
tools/nut/issues/48#issuecomment-28107101
2013-11-08 Arnaud Quette <arnaud.quette@free.fr>
* docs/website/projects.txt: Cleanup NUT related projects
2013-10-25 Daniele Pezzini <hyouko@gmail.com>
* drivers/blazer.c: blazer: Support UPSes that reply '(ACK' when an
instant command succeeds
* drivers/blazer.c: blazer: Fix a discrepancy in the handling of
instant commands Check if the reply we got back from the UPS is
'ACK' also for the commands stored in the array.
2013-10-17 Daniele Pezzini <hyouko@gmail.com>
* docs/man/blazer-common.txt: blazer: Cosmetic changes
* drivers/blazer_ser.c, drivers/blazer_usb.c: blazer: Fix
blazer_{ser,usb} + TESTING Those things are useless when TESTING
is defined
* docs/man/blazer-common.txt: blazer: Fix user manuals
{Serial,USB}-specific sections belong to 'Extra arguments' section
* drivers/blazer_ser.c, drivers/blazer_usb.c: blazer: Versioning
* drivers/blazer.c: blazer: Add more log infos in instcmd
2013-10-16 Daniele Pezzini <hyouko@gmail.com>
* docs/man/Makefile.am, docs/man/blazer-common.txt,
docs/man/blazer.txt, docs/man/blazer_ser.txt,
docs/man/blazer_usb.txt, docs/man/index.txt,
docs/man/nutupsdrv.txt: blazer: Fix {usb,ser} manual Split the old
blazer manual in two manuals named after their executables with a
common source.
* docs/man/blazer.txt: blazer: Fix user manual Fix minor errors Add
ranges Fix test.battery.start (i.e. minutes instead of seconds)
* drivers/blazer.c: blazer: Fix shutdown sequence Split stop pending
shutdown and shutdown itself so that if we have problems stopping
the shutdown (e.g. there's no shutdown pending and the UPS, because
of that, echoes back the command) we can still shutdown the UPS.
* drivers/blazer.c: blazer: Fix minor error in battery guesstimation
We need both battery.voltage.low and battery.voltage.high to
'guesstimate' the battery charge
* drivers/blazer.c: blazer: Fix test.battery.start T00 doesn't make
any sense: the range should be 01-99 minutes
* drivers/blazer.c: blazer: Fix shutdown.return 'SnR0000' is meant
to put the UPS down and not return 'Sn' should be used instead when
ondelay is 0
* drivers/blazer.c: blazer: Fix shutdown delay 'offdelay' as used by
this driver is meant to be in the .2-.9 (12..54 seconds) and 01-10
(60..600 seconds) range.
2013-11-03 Charles Lepple <clepple+nut@gmail.com>
* data/driver.list.in, docs/man/.gitignore, docs/man/Makefile.am,
docs/man/index.txt, docs/man/voltronic_ser.txt,
docs/man/voltronic_usb.txt, drivers/Makefile.am,
drivers/voltronic.c, drivers/voltronic.h, drivers/voltronic_ser.c,
drivers/voltronic_usb.c, tools/nut-usbinfo.pl: Revert "Merge branch
'voltronic-driver'" This reverts commit
de07fc7f5e7f68b91507b2bf3d4d3b92b774c3ed, reversing changes made to
a074844f88ca352780dd881b5fa3c435832d165e. The voltronic
	functionality will be a subdriver of the new blazer driver.
2013-11-04 Daniele Pezzini <hyouko@gmail.com>
* drivers/blzr_voltronic.c: blzr: Fix log message
* drivers/blzr_voltronic.c: blzr: Fix compile-time error Reference:-
upsdev/2013-November/006549.html
2013-10-25 Daniele Pezzini <hyouko@gmail.com>
* drivers/blzr.c: blzr: Cosmetic changes
* drivers/blzr_megatec-old.c, drivers/blzr_megatec.c,
drivers/blzr_mustek.c, drivers/blzr_zinto.c: blzr: Remove
duplicates in the testing struct
2013-11-04 Daniele Pezzini <hyouko@gmail.com>
* drivers/blzr_blazer-common.c, drivers/blzr_blazer-common.h: blzr:
Fix blzr_blazer-common.{c,h} header comments
2013-10-25 Daniele Pezzini <hyouko@gmail.com>
* docs/blzr-subdrivers.txt, docs/man/blzr.txt, drivers/Makefile.am,
drivers/blzr.c, drivers/blzr_mecer.c, drivers/blzr_mecer.h: blzr:
	Add Mecer subdriver (blzr protocol=mecer) A subdriver covering an
idiom similar to the one used by the megatec subdriver, but with
these peculiarities: - if a command/query is rejected or invalid,
the UPS will reply '(NAK\r' - if a command succeeds, the UPS will
reply '(ACK\r'
2013-10-17 Daniele Pezzini <hyouko@gmail.com>
* docs/blzr-subdrivers.txt: blzr: Improve developer manual Add note
on how to group items in blzr2nut array.
* drivers/blzr_voltronic.c: blzr: Fix switch/case Forgot to break at
the end of the case
* docs/man/blzr.txt, drivers/blzr.c, drivers/blzr_voltronic.c: blzr:
Cosmetic changes
2013-10-16 Daniele Pezzini <hyouko@gmail.com>
* data/driver.list.in, docs/Makefile.am, docs/blzr-subdrivers.txt,
docs/man/Makefile.am, docs/man/blzr.txt, docs/man/index.txt,
docs/man/nutupsdrv.txt, docs/new-drivers.txt, drivers/Makefile.am,
drivers/blzr.c, drivers/blzr.h, drivers/blzr_blazer-common.c,
drivers/blzr_blazer/dstate-hal.c, drivers/dstate-hal.h,
tools/nut-usbinfo.pl: blzr: New driver 'blzr' New driver for Q*
UPSes. Based on blazer, usbhid-ups and voltronic driver. This
might address Issue #25
2013-11-04 Charles Lepple <clepple+nut@gmail.com>
* include/Makefile.am: nut_include.h: fail gracefully if git fails
Fix proposed by Jim Klimov.
2013-11-03 Charles Lepple <clepple+nut@gmail.com>
* docs/stable-hcl.txt: HCL: typos
* data/driver.list.in, docs/stable-hcl.txt: HCL: minor cleanup
Remove a duplicate Tripp Lite entry, and add a missing "a".
* docs/stable-hcl.txt: HCL documentation: reword
* data/driver.list.in, docs/acknowledgements.txt: HCL: incorporate
Tripp Lite test results Source: … .user/8173
* docs/Makefile.am, docs/website/Makefile.am: HCL: additional
dependencies Apparently still not complete, though.
* docs/website/scripts/nut_jquery.js: HCL JavaScript: make key case-
insensitive Also special-case the spelling change for Tripp Lite.
TODO: make the value matching case-insensitive as well.
* docs/website/scripts/nut_jquery.js: HCL JavaScript: update the USB-
matching code Slightly more accurate, but later on we should
really track the connection type as a first-class attribute for
each entry in the HCL. Matching the driver name is brittle.
* docs/website/scripts/nut_jquery.js, tools/nut-hclinfo.py: HCL
generation: don't combine driver names The Python and JavaScript
code for generating the HCL was combining adjacent drivers even
when the support level was different. This clutters up the driver
list a bit, but presents a more accurate picture of support levels.
2013-10-28 Michal Soltys <soltys@ziu.info>
* docs/man/apcsmart.txt: apcsmart: minor man update A short note
	about the availability of apcsmart-old.
* docs/man/apcsmart.txt, drivers/apcsmart.c, drivers/apcsmart.h:
apcsmart: string/comment/text trivial changes
2013-10-27 Charles Lepple <clepple+nut@gmail.com>
* tools/nut-scanner/nutscan-device.c: [nut-scanner] Remove unused
variable
2013-10-18 Vaclav Krpec <VaclavKrpec@Eaton.com>
* tools/nut-scanner/nutscan-device.c, tools/nut-scanner/nutscan-
device.h, tools/nut-scanner/nutscan-display.c, tools/nut-
scanner/scan_nut.c: Nutscan fix and enhancement Closes #60 (GitHub
Pull Request via fbohe)
* docs/man/netxml-ups.txt, drivers/mge-xml.c, drivers/mge-xml.h,
drivers/netxml-ups.c: netxml: added RW access, fixed FSD/shutdown
duration bugs, etc. * Fixed bugs in resolution of FSD condition
and computation of shutdown duration. * Added System.* UPS
variables. * Enabled RW access to appropriate UPS variables. *
	Added UPS variable value converters. * Added support for XML
protocol v3 {GET|SET}_OBJECT query implementing getvar and setvar
routines. * netxml driver man page updated to include info about
the driver-specific configuration parameters. Closes #59 (GitHub
pull request: "Enhancement for netxml driver") Pull request by:
Frédéric BOHE <fredericbohe@eaton.com>
* clients/upsc.c, clients/upscmd.c, clients/upslog.c,
clients/upsrw.c: Fix AIX linkage of do_upsconf_args() Closes #58
(GitHub pull request "Fix AIX build") (cherry picked from commit
5fc7518f97d2738d791c3c77f2257d05e3a9da3b)
2013-10-26 Charles Lepple <clepple+nut@gmail.com>
* configure.in: Define _REENTRANT for all Solaris and AIX platforms.
This is essentially the final commit in pull request #39.
2013-10-24 Frédéric BOHE <fredericbohe@eaton.com>
* drivers/mge-hid.c: Fix wrong OFF status reported when on battery.
UPS.BatterySystem.Charger.PresentStatus.Used is not related to UPS
outputs being on or off but rather to the charger being on or off.
2013-10-16 Daniele Pezzini <hyouko@gmail.com>
* scripts/python/Makefile.am, scripts/python/app/gui-1.3.glade,
.../app/locale/it/LC_MESSAGES/NUT-Monitor.mo,
scripts/python/app/locale/it/it.po, scripts/python/app/nut-
	monitor.desktop: Add Italian translation
* scripts/python/app/locale/fr/fr.po: Add source of French
translation
* scripts/python/app/gui-1.3.glade.h, scripts/python/app/locale/NUT-
Monitor.pot: Add translation sources
2013-10-16 Frédéric BOHE <fredericbohe@eaton.com>
* drivers/powerware-mib.c, drivers/snmp-ups.c, drivers/snmp-ups.h:
Fix Low Battery detection with ConnectUPS cards The low battery
	OID itself cannot be read directly. The low battery alarm OID appears
in an alarm array.
2013-10-02 Arnaud Quette <arnaud.quette@free.fr>
* data/driver.list.in: [HCL] Add support for Eaton 5S Add Eaton 5S
(USB ID 0x0463:0xffff) to the list of usbhid-ups supported models
(reported by Matt Ivie)
2013-10-02 Frédéric BOHE <fredericbohe@eaton.com>
* data/driver.list.in: [HCL] update Eaton UPS
2013-09-30 Frédéric BOHE <fredericbohe@eaton.com>
* drivers/libshut.c: Increment driver revision
2013-09-28 Alf Høgemark <alf@i100.no>
* drivers/bcmxcp.c: bcmxcp: Fix handling of date and time format. The
date and time bytes are packed BCD, so they must be properly decoded.
The check for the Julian or Month:Day format was wrong. Info on the
format taken from
P_Rev_C1_Public_021309.pdf
* drivers/bcmxcp.c, drivers/bcmxcp.h: bcmxcp: Add mapping for some
more meters and one more command Add mapping for
PW_SET_TIME_AND_DATE command. Add mapping for input.bypass.voltage,
input.bypass.L1-N.voltage, input.bypass.L2-N.voltage,
input.bypass.L3-N.voltage. Add mapping for input.bypass.frequency.
Add mapping for ups.power.nominal if provided as meter, it was
previously only set on init. Change mapping for ups.realpower for
single phase. Tested on Eaton PW9130.
* drivers/bcmxcp.c: bcmxcp: Remove newline on debug output for
outlets
2013-09-24 Frédéric BOHE <fredericbohe@eaton.com>
* drivers/libshut.c, tools/nut-scanner/scan_eaton_serial.c: Change
RTS init level for PnP devices Setting RTS line to 1 disturbs
communication with some devices using serial plug and play feature.
So we need to initialize it to 0.
2013-09-07 Arnaud Quette <arnaud.quette@free.fr>
* data/driver.list.in: Add support for Forza FX-1500LCD Add Forza
FX-1500LCD (USB ID 0x0665:0x5161) to the list of blazer_usb
supported models (reported by Gabor Tjong A Hung)
* data/driver.list.in: Add Schneider APC AP9630 SNMP management card
Add Schneider APC AP9630 SNMP management card to the list of snmp-
ups supported models. Note that it requires the option
"privProtocol=AES" to work (reported by Tim Rice)
* drivers/.gitignore: Git ignore drivers/voltronic_{ser,usb} Add
drivers/voltronic_{ser,usb} to the list of Git ignored files
2013-08-28 Charles Lepple <clepple+nut@gmail.com>
* packaging/RedHat/.gitignore, packaging/debian/.gitignore,
packaging/mandriva/.gitignore, packaging/opensuse/.gitignore:
Remove .gitignore files from long-gone packaging directory.
2013-08-28 Daniele Pezzini <hyouko@gmail.com>
* docs/website/css/web-layout.css: Improve CSS readability
* docs/stable-hcl.txt, docs/website/css/web-layout.css: Address Issue
#48 Move legend out of filters' block. (HTML+CSS)
2013-08-10 Alf Høgemark <alf@i100.no>
* drivers/bcmxcp.c: bcmxcp: Add instcmd for system test capabilities
based on what UPS support
2013-08-09 Alf Høgemark <alf@i100.no>
* drivers/bcmxcp.c, drivers/bcmxcp.h: bcmxcp: Code restructure,
declare variables at top of methods. After re-reading the code
style guide, compiling with -pedantic produced some warnings, so
variable declarations were moved to the top of methods.
* drivers/bcmxcp.c, drivers/bcmxcp_io.h, drivers/bcmxcp_ser.c,
drivers/bcmxcp_usb.c: bcmxcp: Reformat code, remove tabs in the
middle of lines. No code changes After re-reading the developer
code style guide, use spaces and not tabs in the middle of lines to
align text
* drivers/bcmxcp.h: bcmxcp: Reformat code, remove tabs in the middle
of lines. No code changes After re-reading the developer code
style guide, use spaces and not tabs in the middle of lines to
align text
* drivers/bcmxcp.c: bcmxcp: Refactor code, use if-else if rather than
4 if statements
2013-08-08 Alf Høgemark <alf@i100.no>
* drivers/bcmxcp.c: bcmxcp: Add parameter to nut_find_infoval to
control debug output We do not want debug output if
nut_find_infoval does not find a mapped value in all cases. For
example, when a command byte is not mapped to an instcmd, we do not
want debug output.
* drivers/bcmxcp.c, drivers/bcmxcp.h: bcmxcp: Remove
PW_UPDATE_POWER_SOURCE_STATUS_COMMAND, it seems very unlikely to be
used
* drivers/bcmxcp.h: bcmxcp: Cosmetic changes constant definitions. No
code changes.
* drivers/bcmxcp.c, drivers/bcmxcp.h: bcmxcp: Use command map to
control which instcmd are supported Use the command map info
retrieved from UPS to list all commands supported by the UPS at
debug level 2. Use the info from command map to set up which
instcmd the UPS supports.
* drivers/bcmxcp.c: bcmxcp: Use info_lkp_t structure for mapping
topology info Make code simpler by using the info_lkp_t structure
for mapping value from topology block to text presented to user as
ups.description
* drivers/bcmxcp.c: bcmxcp: Cosmetic commentary fixes and remove some
empty lines. No code changes
* drivers/bcmxcp.c: bcmxcp: Output unsupported alarms on debug level
3, not level 2 The supported alarms in the alarm map are output at
debug level 2. The unsupported alarms should be output at debug
level 3, as they are not that interesting. Also remove the debug
output of an empty line after the table heading line for the meter
map and alarm map.
* drivers/bcmxcp.c: bcmxcp: Refactor code for setting which alarms
are supported, to avoid code duplication Refactor the code which
checks the alarm map for supported alarms, by making a new method
which checks the alarm bit to see if the alarm is supported.
* drivers/bcmxcp.c: bcmxcp: Only include ups.serial and device.part
if they have a value. Only set info about ups.serial and
device.part if the UPS actually reports useful info for these.
Remove the handling of space characters as meaning string
termination for ups.serial; this is not done for the part number,
and according to the bcmxcp spec these are both 16-byte ASCII text
messages. Move Nominal output frequency handling up, placing it
just below Nominal output voltage.
2013-08-04 Charles Lepple <clepple+nut@gmail.com>
* docs/man/index.txt, docs/man/voltronic_ser.txt,
docs/man/voltronic_usb.txt: voltronic* documentation updates - Add
to man page index - Reword a few sections - Fix typos - Comment out
USB section in voltronic_ser.txt Long-term, we should probably
figure out a better way to maintain two parallel driver pages like
this. The blazer man page is the same for both, with .so links for
the man pages, but then you have USB info in a serial driver page.
For now, voltronic_usb.txt is just a copy of voltronic_ser.txt with
a few _ser-to-_usb replacements.
2013-08-01 Daniele Pezzini <hyouko@gmail.com>
* drivers/voltronic.c: Get rid of 'god.knows' variables
2013-07-26 Arnaud Quette <arnaud.quette@free.fr>
* drivers/powercom-hid.c: Forgotten subdriver version bump
2013-07-25 Arnaud Quette <arnaud.quette@free.fr>
* docs/man/upsc.txt, docs/man/upscmd.txt, docs/man/upsrw.txt:
Complete upsclient commands usage note Add a note for scripting
usage, for upsc, upscmd and upsrw, to state the obvious: only
consider the output from stdout for data requested. stderr may
contain error messages, which can disrupt your script execution.
Address the second task and closes Github issue #30
* clients/upsclient.c: Fix a minor regression in upsclient output
NSS support has introduced a minor regression in upsclient output.
Clients such as upsc, upscmd and upsrw were particularly affected.
This patch restores a default behavior similar to prior versions.
However, "-v" option remains to be implemented. Address the first
task of Github issue #30
2013-07-24 Charles Lepple <clepple+nut@gmail.com>
* docs/website/web-layout.conf: Add GitHub link to website sidebar
2013-07-24 Frédéric BOHE <fredericbohe@eaton.com>
* configure.in: Fix wrong errno reported by connect on Solaris
Closes issue #43
* clients/upsclient.c: Fix connect in a multi-threaded environment on
AIX Closes issue #42
2013-07-23 Frédéric BOHE <fredericbohe@eaton.com>
* clients/upsclient.c: Fix nut-scanner crash on nut server scan.
upscli_sslinit calls upscli_readline, which might call
upscli_disconnect in case of error. upscli_disconnect frees
ups->host and sets it to NULL, so it is illegal to use ups->host
after a call to upscli_sslinit.
2013-07-23 Charles Lepple <clepple+nut@gmail.com>
* clients/Makefile.am: Revert "Fix connect in multi-threaded
environment on Solaris" This reverts the previous commit. It
overwrites the CFLAGS which specifies one of the key include
directories.
2013-07-23 Frédéric BOHE <fredericbohe@eaton.com>
* clients/Makefile.am: Fix connect in a multi-threaded environment on
Solaris
* tools/nut-scanner/nutscan-device.c, tools/nut-scanner/nutscan-
device.h, tools/nut-scanner/scan_avahi.c, tools/nut-
scanner/scan_eaton_serial.c, tools/nut-scanner/scan_ipmi.c, tools
/nut-scanner/scan_nut.c, tools/nut-scanner/scan_snmp.c, tools/nut-
scanner/scan_usb.c, tools/nut-scanner/scan_xml_http.c: [nut-
scanner] Make sure to return the first device of the list.
2013-07-22 Charles Lepple <clepple+nut@gmail.com>
* docs/download.txt: Download information: reference Git
* configure.in: Bump NUT version to 2.7.1-pre1
2013-07-21 Charles Lepple <clepple+nut@gmail.com>
* include/Makefile.am: nut_version.h: trim tag characters through
first slash
2013-04-27 Charles Lepple <clepple+nut@gmail.com>
* include/Makefile.am: nut_version.h: remove SVN plumbing This
should eliminate the "Unversioned directory" message. The source of
the version information is also listed in nut_version.h Closes
Github issue #15
2013-07-16 Sven Putteneers <sven.putteneers@gmail.com>
* scripts/python/app/NUT-Monitor: NUT-Monitor: parse battery.runtime
as float Without this patch, I get a flood of "invalid literal for
int() with base 10: '28500.00'" errors.-
root.php?message_id=%3c51E54B99.9030908%40gmail.com%3e
2013-07-10 Arnaud Quette <arnaud.quette@free.fr>
* docs/website/news.txt, docs/website/projects.txt: Reference walNUT
Gnome Shell extension
2013-07-09 Charles Lepple <clepple+nut@gmail.com>
* drivers/riello_ser.c, drivers/riello_usb.c: riello: suppress some
warnings about %lu versus %u
2013-07-09 Elio Parisi <E.Parisi@riello-ups.com>
* drivers/riello.h, drivers/riello_ser.c, drivers/riello_usb.c:
riello: whitespace fixes, and read nominal values only once Bumped
driver versions to 0.02
2013-07-07 Alf Høgemark <alf@i100.no>
* drivers/bcmxcp.c, drivers/bcmxcp.h: bcmxcp: Add support for reading
topology map and setting ups.description based on it
* drivers/bcmxcp.c: bcmxcp: Initialize variables in
calculate_ups_load method
* drivers/bcmxcp.c: bcmxcp: Output more info about hardware
capabilities in debug mode Add some more debug output on driver
init, to let us know what the hardware supports. Outputs the length
of the alarm history log, the topology block length and the maximum
supported command length.
* drivers/bcmxcp.c, drivers/bcmxcp.h: bcmxcp: Add mapping for
input.quality to meters
* drivers/bcmxcp.c: bcmxcp: Add Alf Høgemark as one of the authors
for the driver
* drivers/bcmxcp.c: bcmxcp: Only calculate ups.load if the UPS does
not report it directly If the UPS does not report a meter mapped
to ups.load, we try to calculate the ups.load, but we do not
calculate it if the UPS can report the ups.load directly
* drivers/bcmxcp.c: bcmxcp: Use defined constants in setvar, and
handle BCMXCP_RETURN_ACCEPTED_PARAMETER_ADJUST Use the defined
constants from header file, instead of magic numbers in setvar
method. Add handling of BCMXCP_RETURN_ACCEPTED_PARAMETER_ADJUST.
Report upsdrv_comm_good on successful execution of setvar to UPS.
* drivers/bcmxcp.c, drivers/bcmxcp.h: bcmxcp: Add handling of
BCMXCP_RETURN_ACCEPTED_PARAMETER_ADJUST and others in ACK block
Add support for handling more return statuses when executing
commands, the most important being
BCMXCP_RETURN_ACCEPTED_PARAMETER_ADJUST, which means the command
was executed. The others added all handle cases where the command
was not executed, but you now get a more detailed log entry as to
why it was not executed.
* drivers/bcmxcp.c, drivers/bcmxcp.h: bcmxcp: Add mapping for
output.L<phase>.power to meters Not tested on hardware, due to
lack of hardware supporting it
* drivers/bcmxcp.c, drivers/bcmxcp.h: bcmxcp: Add mapping for
battery.current.total to meters Not tested on hardware, due to
lack of hardware supporting it
2013-07-06 Alf Høgemark <alf@i100.no>
* drivers/bcmxcp.c, drivers/bcmxcp.h: bcmxcp: Add mapping for
input.realpower to meters input.realpower is not listed in-
guide.chunked/apas01.html, but other drivers use it. Not tested on
hardware, due to lack of hardware supporting it
* drivers/bcmxcp.c, drivers/bcmxcp.h: bcmxcp: Add mapping for
ambient.1.temperature to meters Not tested on hardware, due to
lack of hardware supporting it
* drivers/bcmxcp.c, drivers/bcmxcp.h: bcmxcp: Add mapping for
input.power to meters input.power is not listed in-
guide.chunked/apas01.html, but other drivers use it. Not tested on
hardware, due to lack of hardware supporting it
* drivers/bcmxcp.c, drivers/bcmxcp.h: bcmxcp: Add mapping for
output.powerfactor and input.powerfactor to meters
input.powerfactor is not listed in-
guide.chunked/apas01.html, so a bit unsure if this should be added.
Not tested on hardware, due to lack of hardware supporting it
* drivers/bcmxcp.c, drivers/bcmxcp.h: bcmxcp: Add mapping for
output.L<phase>.power.percent to meters Not tested on hardware,
due to lack of hardware supporting it
* drivers/bcmxcp.c, drivers/bcmxcp.h: bcmxcp: map ups.date and
ups.time to meters. Not tested on hardware, due to lack of
hardware supporting it
* drivers/bcmxcp.h: bcmxcp: Comment which meter map constants are
mapped to nut variables
2013-07-05 Alf Høgemark <alf@i100.no>
* drivers/bcmxcp.c, drivers/bcmxcp.h: bcmxcp: Add constants for all
bcmxcp meter map, and replace magic numbers with constants Take
all the bcmxcp meter map entries defined in
protocols/eaton/XCP_Meter_Map_021309.pdf and put them into the
bcmxcp.h file. Update the bcmxcp.c file, replacing magic numbers
for meter map by using the corresponding defined constant.
2013-07-04 Alf Høgemark <alf@i100.no>
* drivers/bcmxcp.c: bcmxcp: Let decode_instcmd_exec also handle short
read from UPS
* drivers/bcmxcp.c: bcmxcp: Add test.panel.start instcmd support
* drivers/bcmxcp.c: bcmxcp: Use one function to decode command
execution status in all places To avoid duplicating the logic
which checks the status of command execution at UPS, add a new
function which contains the check, and use that function whenever
we send a command to UPS and get status back from UPS.
* drivers/bcmxcp.c, drivers/bcmxcp.h: bcmxcp: Define byte for
choosing which system test to run in header file
* drivers/bcmxcp.c: bcmxcp: Fix outlet number for
outlet.x.shutdown.return if more than 2 outlets
* drivers/bcmxcp.c: bcmxcp: Let upsdrv_shutdown call instcmd for
shutting down To avoid code duplication between upsdrv_shutdown
and instcmd, let the upsdrv_shutdown method first try to issue a
shutdown.return instcmd, and then proceed with shutdown.stayoff if
the shutdown.return failed. This seems to be in line with what the
usbhid driver does.
* drivers/bcmxcp.c: bcmxcp: report upsdrv_comm_good at successful
execution of instcmd
* drivers/bcmxcp.c: bcmxcp: Return more specific error codes from
instcmd Use the available STAT_INSTCMD_FAILED and
STAT_INSTCMD_INVALID as return value from the instcmd method when
applicable, instead of always returning STAT_INSTCMD_UNKNOWN or -1.
2013-07-03 Alf Høgemark <alf@i100.no>
* drivers/bcmxcp.c, drivers/bcmxcp.h: bcmxcp: use command map if
supplied. If the UPS supplies a command map, use it to control what
commands we register with dstate_addcmd. If the UPS does not supply
a command map, we register the default commands with dstate_addcmd.
* data/cmdvartab, drivers/bcmxcp.c: bcmxcp: cosmetic: make changes by
Prachi Gandhi more coherent with rest of driver
* drivers/bcmxcp.c: bcmxcp: Fix method name outputted in debug
message Reference: … devel/6458
Whitespace was addressed in previous commit (clepple)
2013-07-03 Charles Lepple <clepple+nut@gmail.com>
* drivers/bcmxcp.c: bcmxcp: indentation fixes (no code changes)
2013-07-03 Alf Høgemark <alf@i100.no>
* drivers/bcmxcp.c: bcmxcp: add ups.load and battery.voltage.low
Adapted slightly for bcmxcp branch (original patch was against
master). Bump driver version to 0.28 as well. (clepple) Reference: … devel/6460
2013-06-18 Daniele Pezzini <hyouko@gmail.com>
* drivers/voltronic.c: Add unknown/unused and commented capability
entries Might be useful for future versions.
* data/driver.list.in: Add devices to HCL
* drivers/voltronic.c, drivers/voltronic_ser.c,
drivers/voltronic_usb.c: Fix warning flag + versioning Some UPSes
seem to reply with a \0 just before the end of the warning flag
(obtained with QWS); as a consequence, the string in C is 1
char shorter than expected (the \r is not within the string). ->
Fix voltronic_warning function. Increase driver versions.
* drivers/voltronic.c, drivers/voltronic_ser.c,
drivers/voltronic_usb.c: Fix shutdown.return + versioning Fix
shutdown.return when ondelay = 0 -> split between offdelay < 60 and
offdelay > 60. Increase driver versions.
2013-06-17 Daniele Pezzini <hyouko@gmail.com>
* docs/man/voltronic_ser.txt, docs/man/voltronic_usb.txt: Correct
typos @shutdown.{return,stayoff}
* drivers/voltronic.c, drivers/voltronic_ser.c,
drivers/voltronic_usb.c: Improve shutdown sequence + versioning
Split shutdown and stopping of pending shutdowns so that if there's
no shutdown pending and the UPS doesn't accept a shutdown.stop in
this situation (i.e. it replies '(NAK') the shutdown procedure
doesn't get halted. Increase version number of drivers.
* data/driver.list.in: Correct typos & add software reference in HCL
* docs/man/voltronic_ser.txt, docs/man/voltronic_usb.txt: Improve
docs layout
* drivers/voltronic_usb.c: Add USBDevice_t structure
* drivers/voltronic_usb.c: Add comment so that autogen rules have the
right comment
* drivers/voltronic_ser.c, drivers/voltronic_usb.c: Correct manpage
references
2013-05-14 Bo Kersey <bo@vircio.com>
* drivers/bestfcom.c: bestfcom: Use fc.idealbvolts for calculating
percent charge. Ref: … .user/7891
2013-05-13 Michal Soltys <soltys@ziu.info>
* docs/nut-names.txt: Add device.uptime to nut-names.txt Also fix
one typo.
* docs/man/apcsmart.txt, drivers/apcsmart.c: apcsmart: allow users to
select non-canonical tty mode The main reason behind this addition
is windows compatibility, see … .user/7762 IGNCR
has been readded earlier in commit
20c52bee77fa0b3ea3c7f8bec25afd103b7ff4a2 - this might be enough to
handle windows behavior, but if it's not the case - using non
canonical processing (same as is present in apcsmart-old) should
solve any pressing issues.
* drivers/apcsmart_tabs.c: apcsmart: add device.uptime to vartab
2013-04-26 Michal Soltys <soltys@ziu.info>
* drivers/apcsmart.c, drivers/apcsmart_tabs.h: apcsmart: remove
APC_DEPR flag APC_{MULTI, PRESENT} are both sufficient for
handling 1:n and n:1 relations
2013-04-22 Michal Soltys <soltys@ziu.info>
* drivers/apcsmart.c: apcsmart: expand APC_MULTI to apc:nut 1:n cases
2013-04-15 Michal Soltys <soltys@ziu.info>
* drivers/apcsmart.c, drivers/apcsmart_tabs.c: apcsmart: change
approach to 2 digit compatibility entries As reported in … .user/7762 - 2
digit values reported through 'b' are really >255V voltage values.
So we match the whole 00 - FF set as a single (fake) compat entry.
2013-04-16 Michal Soltys <soltys@ziu.info>
* drivers/apcsmart.c: apcsmart: remove strchr() check from
legacy_verify() As vartab doesn't contain characters from
APC_UNR_CMDS.
2013-04-15 Michal Soltys <soltys@ziu.info>
* drivers/apcsmart.h: apcsmart: re-add CR to ignore sets Despite
icanon mode, Windows (supposedly) is incapable of ignoring CR in a
fashion analogous to the IGNCR flag. See … .user/7762 for
rationale.
* drivers/apcsmart_tabs.c: apcsmart: add regex format to
ambient.0.temperature 'T' might (on older units) also mean "ups
uptime", so we want to distinguish that case gracefully. The
formats are: uptime: 000.0 temp: 00.00
2013-05-03 Andrew Avdeev <andrew.avdeev@gmail.com>
* drivers/powercom-hid.c: PowerCOM BNT-1000AP HID instant commands
Adds a few vendor-specific HID mappings for PowerCOM. Instant
commands supported on UPS [pcm]: beeper.disable - Disable the UPS
beeper beeper.enable - Enable the UPS beeper beeper.toggle - Toggle
the UPS beeper load.off - Turn off the load immediately load.on -
Turn on the load immediately shutdown.return - Turn off the load
and return when power is back shutdown.stayoff - Turn off the load
and remain off test.battery.start.quick - Start a quick battery
test … devel/6435
2013-04-25 Christian Wiese <christian.wiese@securepoint.de>
* tools/nut-scanner/Makefile.am, tools/nut-scanner/scan_usb.c: nut-
scanner: fix scan_usb to remove trailing spaces from output strings
This patch uses rtrim() from libcommon to remove trailing spaces
from serialnumber, device_name and vendor_name. see:
2013-04-18 Arnaud Quette <arnaud.quette@free.fr>
* docs/new-drivers.txt: Add a reference to the SNMP subdrivers
chapter
2013-04-15 Michal Soltys <soltys@ziu.info>
* drivers/apcsmart.c, drivers/apcsmart.h, drivers/apcsmart_tabs.c,
drivers/apcsmart_tabs.h: apcsmart: move variable regex matching
into vartab This also allows us to properly validate (in the near
future) cases where a single apc variable can match multiple nut
variables. Other changes: - adjust rexhlp() to follow 0 for false
and non-0 for true, like in the rest of the functions - remove
valid_cmd() as rexhlp() can be used directly with formats in the
table; furthermore the warning (in case of failure) could be
confusing when we add nut:apc n:1 case
2013-04-11 Arnaud Quette <arnaud.quette@free.fr>
* drivers/libusb.c: Set USB timeout to 5 seconds Set the low level
USB timeout back to the standard 5 seconds. This was set to 4
seconds, for performance reasons, but is now causing issues with
some devices (reported by Stefan "stevenbg", GitHub issue #23)
2013-04-10 Charles Lepple <clepple+nut@gmail.com>
* docs/man/.gitignore: Ignore voltronic_* generated documentation
* drivers/voltronic_usb.c: voltronic_usb: switch to new
is_usb_device_supported() syntax
* tools/nut-usbinfo.pl: Add voltronic_usb driver to USB info
extractor tool
2013-04-10 Daniele Pezzini <hyouko@gmail.com>
* data/driver.list.in, docs/man/Makefile.am,
docs/man/voltronic_ser.txt, docs/man/voltronic_usb.txt,
drivers/Makefile.am, drivers/voltronic.c, drivers/voltronic.h,
drivers/voltronic_ser.c, drivers/voltronic_usb.c: New drivers:
voltronic_ser/voltronic_usb Reference: … devel/6418
2013-04-10 Elio Parisi <E.Parisi@riello-ups.com>
* drivers/riello.c, drivers/riello.h, drivers/riello_ser.c,
drivers/riello_usb.c: Riello drivers: fix memset() arguments, and
use stdint.h Reference: … devel/6417
2013-04-04 Émilien Kia <emilien.kia@gmail.com>
* configure.in, docs/new-clients.txt, scripts/Makefile.am,
scripts/README, scripts/java/.gitignore, scripts/java/Makefile.am,
scripts/java/README, scripts/java/jNut/.gitignore,
scripts/java/jNut/README, scripts/java/jNut/pom.xml,
.../main/java/org/networkupstools/jnut/Client.java,
.../java/org/networkupstools/jnut/Command.java,
.../main/java/org/networkupstools/jnut/Device.java,
.../org/networkupstools/jnut/NutException.java,
.../java/org/networkupstools/jnut/Scanner.java,
.../org/networkupstools/jnut/StringLineSocket.java,
.../java/org/networkupstools/jnut/Variable.java,
.../java/org/networkupstools/jnut/ClientTest.java,
.../java/org/networkupstools/jnut/ScannerTest.java,
scripts/java/jNutList/README, scripts/java/jNutList/pom.xml,
.../java/org/networkupstools/jnutlist/AppList.java,
scripts/java/jNutWebAPI/README, scripts/java/jNutWebAPI/pom.xml,
.../jnutwebapi/NutRestProvider.java,
.../jnutwebapi/RestWSApplication.java,
.../jnutwebapi/ScannerProvider.java, .../jNutWebAPI/src/main/webapp
/WEB-INF/web.xml: Remove java related files (jNut) which will be
moved to a separated repository. See issues: - -
2013-03-26 Alex Lov <alex@alexlov.com>
* drivers/ietf-mib.c: Fix OID for input.bypass.voltage in ietf-mib.c
Oops, forgot to fix one
* drivers/ietf-mib.c: Fix OIDs for bypass group in ietf-mib.c
Reference For bypass
voltage, current and power
2013-03-13 Arnaud Quette <arnaud.quette@free.fr>
* docs/FAQ.txt: Add a FAQ entry for supported but not working USB UPS
2013-03-10 Charles Lepple <clepple+nut@gmail.com>
* scripts/upower/95-upower-hid.rules: upower: update generated rules
file
2013-03-09 Charles Lepple <clepple+nut@gmail.com>
* docs/download.txt: Update VMware ESXi package link (from René
Garcia)
2013-02-28 Charles Lepple <clepple+nut@gmail.com>
* Makefile.am, tools/gitlog2changelog.py: Issue #4: Specify starting
commit to gitlog2changelog.py
2013-02-27 Michal Soltys <soltys@ziu.info>
* drivers/apcsmart_tabs.c: apcsmart: add old APC 600I compatibility
entry Though without 'T' - until we handle situations when single
nut variable is able to correspond to more than one apc var.
Tested-by: Markus Pruehs <apc@markus.pruehs.com>
2013-02-26 Charles Lepple <clepple+nut@gmail.com>
* tools/gitlog2changelog.py: git-changelog: really fixes #4 (missing
entries) The script was discarding any commits which happened to
include the word 'commit'.
* tools/gitlog2changelog.py: git-changelog: remove re.* calls for
simple string matching
2013-02-25 Charles Lepple <clepple+nut@gmail.com>
* Makefile.am, tools/Makefile.am: git-changelog: Fix list of
distributed files
* tools/gitlog2changelog.py: Fixes issue #4: ChangeLog now includes
single-file commits.
* Makefile.am, tools/gitlog2changelog.py: Issue #4: generate
ChangeLog from git logs This seems to generate long ChangeLog
entries in the format we had with svn2cl, but some commits appear
to be missing.
* tools/gitlog2changelog.py: Import gitlog2changelog.py (2008-12-27) … 701ef2ed00
2aa9e718d4146e#scripts/gitlog2changelog.py
2013-02-25 Émilien Kia <emilien.kia@gmail.com>
* clients/nutclient.cpp, clients/nutclient.h: Add comparison operator
for the nut::Device class. Make std::set<nut::Device> work and not
drop devices anymore.
2012-11-02 Charles Lepple <clepple+nut@gmail.com>
* README, scripts/upower/95-upower-hid.rules: apcupsd-ups: link to
man page from README Patch by Arnaud:
acker/index.php?func=detail&aid=313846&group_id=30602&atid=411544
2012-10-30 Charles Lepple <clepple+nut@gmail.com>
* docs/man/apcupsd-ups.txt: apcupsd-ups: Update man page with
variables and units
2012-09-28 Charles Lepple <clepple+nut@gmail.com>
* drivers/apcupsd-ups.c, drivers/apcupsd-ups.h: apcupsd-ups:
Additional variables
* drivers/apcupsd-ups.c, drivers/apcupsd-ups.h: apcupsd-ups: Remove
multiplier from ups.load
2012-09-27 Charles Lepple <clepple+nut@gmail.com>
* docs/man/apcupsd-ups.txt, drivers/apcupsd-ups.c: apcupsd-ups:
miscellaneous cleanup
2012-09-27 Andreas Steinmetz
* docs/man/Makefile.am, docs/man/apcupsd-ups.txt,
drivers/Makefile.am, drivers/apcupsd-ups.c, drivers/apcupsd-ups.h:
apcupsd client driver
ail&atid=411544&aid=313846&group_id=30602
2013-02-23 Charles Lepple <clepple+nut@gmail.com>
* drivers/bcmxcp.c: bcmxcp: remove unused variable
2013-02-16 Charles Lepple <clepple+nut@gmail.com>
* docs/website/news.txt: News: Git conversion
* docs/developers.txt: Update developer documentation for Git
repository
2013-02-21 Arnaud Quette <arnaud.quette@free.fr>
* .gitignore, scripts/HP-UX/.gitignore: Git ignored files completion
2013-02-21 Michal Soltys <soltys@ziu.info>
* drivers/apcsmart_tabs.c: apcsmart: minor fixups to compat. tables
* drivers/apcsmart.c: apcsmart: verify/setup fixups legacy_verify()
- check against commands we always ignore oldapcsetup() - extra
2013-02-17 Charles Lepple <clepple+nut@gmail.com>
* tools/git-svn.authors, tools/svn2cl.authors: Remove obsolete
authors files.
2013-02-16 Arnaud Quette <arnaud.quette@free.fr>
* .gitignore: Git ignored files completion
2013-02-06 Frederic Bohe <fbohe-guest@alioth.debian.org>
* scripts/Solaris/nut.in, scripts/Solaris/postinstall.in: [Solaris]
Fix postinstall user/group, and service start * Fix postinstall
user/group detection/creation. * Fix service start depending on the
mode, and add poweroff command.
2013-02-04 Emilien Kia <kiae.dev@gmail.com>
* include/proto.h: Move __cplusplus/extern "C" begin block earlier to
fix an ifdef problem when included in real C++ code.
2013-02-01 Frederic Bohe <fbohe-guest@alioth.debian.org>
* configure.in, scripts/HP-UX/makedepot.sh, scripts/HP-UX/nut.psf.in,
scripts/HP-UX/postinstall.in: [HP-UX]: add postinstall script for
installing service files.
2013-02-01 Arnaud Quette <arnaud.quette@free.fr>
* docs/man/snmp-ups.txt: Update SNMP driver documentation Mentioning
'mib' is not needed anymore since NUT 2.6.2. Also mention 'v3' as
an allowed value for 'snmp_version' (reported by Tim Rice)
2013-01-29 Michal Soltys <msoltyspl-guest@alioth.debian.org>
* drivers/apcsmart.c, drivers/apcsmart.h, drivers/apcsmart_tabs.c,
drivers/apcsmart_tabs.h: apcsmart: implement #311678 (multiple
values per variable) This is a bit more general than the original
request. All variables that return multiple comma-separated
values are added as *.N.* where 1 <= N <= APC_PACK_MAX; the
variables are stored with temporary name *.0.* in apcsmart_tabs.c,
but only at least 1 and at most 4 are added per update run
(superfluous - if any - are removed), with *.0.* placeholder being
ignored. We assume that the particular variables cannot belong to
the capability set at the same time (as reported by user) -
otherwise we will need a bit more complex handling, including
updates to all setvar functions.
* drivers/apcsmart.c, drivers/apcsmart.h: apcsmart: update logging
logic / apc_read() - logging This mostly adds a few macros that
implicitly use or pass the caller's name (and in case of hard errors,
line number). This allows removal of a few "failed" / "succeeded"
lines (which in practice don't really happen), for example there is
no need for: upslogx(LOG_ERR, "preread_data: apc_write failed");
as any hard error will be reported by apc_write() internally,
providing the place and line number it was called at. Similarly,
some upslogx / upsdebugx calls were wrapped in analogous macros to
provide caller's name automatically. Debug levels (-D) were
adjusted to require only one letter. - apc_read() It's been a bit
more scrutinized: - filling up full caller's buffer is considered
an error; shouldn't happen unless the ups is somehow damaged or
some model is capable of returning more than 512 bytes in one read
(current max I witnessed is around 270 bytes during capability
read) - timeout reads (whether it's allowed or not) cannot really
have any non-0 count, though sanity check could theoretically be
useful in non-canonical mode; commented out code was added for
reference
* drivers/apcsmart.c: apcsmart: enhance prtchr() So it can handle 4
returns with static pointers.
* drivers/apcsmart.c: apcsmart: allow timeout on read in smartmode()
As this function is used to "nudge" ups, we should expect it to
time out. Also avoids extra log spam.
2013-01-29 Frederic Bohe <fbohe-guest@alioth.debian.org>
* scripts/Solaris/postinstall.in: Use variables to generate Solaris
postinstall script.
* scripts/Solaris/postinstall.in: Enhance the Solaris post install
script.
2013-01-22 Arnaud Quette <arnaud.quette@free.fr>
* docs/man/Makefile.am: Remove duplicate entries for Eaton serial
scan
* include/Makefile.am, tools/nut-scanner/Makefile.am: List missing
header files to be distributed nutscan-serial.h and nut_platform.h
were missing from the distribution
* scripts/Solaris/.gitignore: Subversion ignored files completion
Mark Solaris generated packaging files as Subversion ignored (no
functional changes)
* docs/man/.gitignore, docs/man/Makefile.am, docs/man/index.txt,
docs/man/nut-scanner.txt,
docs/man/nutscan_get_serial_ports_list.txt,
docs/man/nutscan_scan_avahi.txt,
docs/man/nutscan_scan_eaton_serial.txt,
docs/man/nutscan_scan_ipmi.txt, docs/man/nutscan_scan_nut.txt,
docs/man/nutscan_scan_snmp.txt, docs/man/nutscan_scan_usb.txt,
docs/man/nutscan_scan_xml_http.txt, include/nut_platform.h, tools
/nut-scanner/Makefile.am, tools/nut-scanner/nut-scan.h, tools/nut-
scanner/nut-scanner.c, tools/nut-scanner/nutscan-device.h, tools
/nut-scanner/nutscan-display.c, tools/nut-scanner/nutscan-serial.c,
tools/nut-scanner/nutscan-serial.h, tools/nut-
scanner/scan_eaton_serial.c, tools/nut-scanner/scan_nut.c: Add nut-
scanner support for Eaton serial units nut-scanner and libnutscan
now provide, respectively, an option and functions to detect Eaton
serial devices. The following protocols are supported: SHUT, XCP
and Q1 (patch from Frederic Bohe, with parts from Arnaud Quette,
both for Eaton)
* configure.in: Fix for pthread on HP-UX pthread is compiled on a
stub when -lpthread is not explicitly added. This commit is a
duplicate of [[SVN:3801]], from Frederic Bohe (for Eaton)
* drivers/bcmxcp_ser.c: Change baud-rates ordering for auto-detection
* docs/new-drivers.txt, drivers/serial.c, drivers/serial.h: Add non-
fatal versions of ser_open / ser_set_speed
2013-01-21 Frederic Bohe <fbohe-guest@alioth.debian.org>
* scripts/Solaris/nut.in, scripts/Solaris/postinstall.in: Allow
start/stop of NUT from Solaris packages
2013-01-13 Emilien Kia <kiae.dev@gmail.com>
* clients/cgilib.h, clients/status.h, clients/upsimagearg.h,
clients/upslog.h, clients/upsmon.h, clients/upssched.h,
clients/upsstats.h, include/common.h, include/extstate.h,
include/proto.h, include/state.h, include/upsconf.h, server/conf.h,
server/desc.h, server/netcmds.h, server/netget.h,
server/netinstcmd.h, server/netlist.h, server/netmisc.h,
server/netset.h, server/netssl.h, server/netuser.h,
server/sstate.h, server/stype.h, server/upsd.h, server/upstype.h,
server/user-data.h, server/user.h, tools/nut-scanner/nut-scan.h,
tools/nut-scanner/nutscan-device.h, tools/nut-scanner/nutscan-
init.h, tools/nut-scanner/nutscan-ip.h: Protect header files for
C++ inclusion.
2012-12-19 Arnaud Quette <arnaud.quette@free.fr>
* data/driver.list.in: HCL: Add support for Lyonn CTB-1200 Add Lyonn
CTB-1200 (USB ID 0x0665:0x5161) to the list of blazer_usb supported
models (reported by Martin Sarsale)
* docs/stable-hcl.txt: Clarify expected report for shutdown testing
State explicitly that, for now, a statement that the user has
actually tested the shutdown procedure successfully is enough
(report from Martin Sarsale)
2012-12-19 Frederic Bohe <fbohe-guest@alioth.debian.org>
* scripts/HP-UX/makedepot.sh, scripts/HP-UX/nut.psf.in: Use installed
binaries to create package
2012-12-18 Arnaud Quette <arnaud.quette@free.fr>
* drivers/delta_ups-mib.c: Fix a typo error and current multiplier
factor
* data/driver.list.in, drivers/Makefile.am, drivers/delta_ups-mib.c,
drivers/delta_ups-mib.h, drivers/snmp-ups.c: Support for DeltaUPS
MIB and Socomec Netys RT 1/1 Add preliminary SNMP support for a
new MIB: DeltaUPS MIB, with sysOID ".1.3.6.1.4.1.2254.2.4". The
first known supported devices are Socomec Netys RT 1/1, equipped
with Netvision SNMP card
2012-12-18 Michal Soltys <msoltyspl-guest@alioth.debian.org>
* drivers/apcsmart.c: apcsmart: add update_info() No need for almost
identical update_info_normal() and update_info_all()
* drivers/apcsmart.c: apcsmart: two fixups In poll_data(): we are
not supposed to set variable after its (formally impossible)
removal In upsdrv_shutdown(): wrong comparison
2012-12-14 Arnaud Quette <arnaud.quette@free.fr>
* data/driver.list.in, drivers/powerp-txt.c, drivers/powerpanel.c:
Add support for CyberPower OL3000RMXL2U Add CyberPower
OL3000RMXL2U serial support to the powerpanel driver, text protocol
version (Alioth patch #313910, from Timothy Pearson)
2012-12-13 Arnaud Quette <arnaud.quette@free.fr>
* docs/Makefile.am, docs/new-drivers.txt, docs/snmp-subdrivers.txt,
scripts/subdriver/gen-snmp-subdriver.sh: Helper script to create
SNMP subdrivers stubs Created a new shell script
(scripts/subdriver/gen-snmp-subdriver.sh) to automatically create a
"stub" subdriver. This will make it a lot easier and quicker to
create subdrivers for snmp-ups. A new documentation chapter has
also been added ("How to make a new subdriver to support another
SNMP device")
* drivers/tripplite-hid.c: Add support for newer TrippLite
Smart1500LCD Add newer TrippLite Smart1500LCD (USB ID
0x09ae:0x3016) to the list of usbhid-ups supported models (reported
by Steve Salier)
2012-12-12 Frederic Bohe <fbohe-guest@alioth.debian.org>
* drivers/mge-hid.c: Fix crash with debug level greater or equal to 2
2012-12-10 Michal Soltys <msoltyspl-guest@alioth.debian.org>
* drivers/apcsmart.c: apcsmart: add prtchr() helper Add prtchr()
helper and simplify reporting when we check whether some APC
cmd/var character is or isn't printable.
* drivers/apcsmart_tabs.c: apcsmart: apc_cmdtab[] fixup Earlier
commit that adjusted regex checks, also changed cmd fields for all
instant commands handled by custom functions. We cannot do that, as
they are not detected as supported this way.
2012-12-08 Arnaud Quette <arnaud.quette@free.fr>
* drivers/mge-utalk.c, drivers/mge-utalk.h: Change Martin Loyer's
mail address As per Martin's request.
* drivers/mge-utalk.c: Improve mge-utalk general behavior Make two
adjustments to improve the general behavior: first, send the double
"Z" prior to "Si" command. Second, inter-commands delay \
has been
increased to comply with the specification
2012-12-08 Michal Soltys <msoltyspl-guest@alioth.debian.org>
* drivers/apcsmart.c: apcsmart: serial related stuff a bit more
strict Also: - apc_flush() now loops with >0 condition (otherwise
errored apc_read() might cause inf loop) - ser_comm_good/fail()
were kind of missing in write wrappers
* drivers/apcsmart.c, drivers/apcsmart.h: apcsmart:
sdlist/sdtype/advorder changes - verify 'advorder' with regex -
remove unused defines - as the user is directed towards man page
either way (and without it numbers are kind of meaningless), drop
SDMAX
* drivers/apcsmart.c: apcsmart: setup port after variable
sanitization in upsdrv_initups()
* drivers/apcsmart.c: apcsmart: cleanup dstate ok/stale calls
* drivers/apcsmart.c: apcsmart: getbaseinfo() fixup In extremely
unlikely case of failing write, report it up and act accordingly.
* drivers/apcsmart.c, drivers/apcsmart.h, drivers/apcsmart_tabs.c,
drivers/apcsmart_tabs.h: apcsmart: adjust regex logic A bit
simpler / tighter now.
* drivers/apcsmart.c: apcsmart: add var_string_setup() In theory
deprecate_vars() should also consider APC_STRING variables. In
practice - we don't have any variables that are both APC_MULTI and
APC_STRING - but it's more correct this way, so let's do it.
* drivers/apcsmart.c: apcsmart: cosmetics - code shuffling,
lines of help directing to man page
* drivers/apcsmart.c: apcsmart: simplify query_ups() /
proto_verification() Both functions rely now on common variable
verification function.
* drivers/apcsmart.c: apcsmart: add var_verify() The function will
be used in subsequent commit for common verification.
* drivers/apcsmart.c: apcsmart: shuffle two functions query_ups()
and oldapcsetup()
* drivers/apcsmart.c: apcsmart: simplify query_ups() This commit
changes query_ups() function and makes it rely on the same
deprecate_vars() logic that protocol_verify() requires. We can
shorten the code a bit now, and it allows us to do more
simplifications in subsequent commits.
* drivers/apcsmart.c: apcsmart: add functions informing about
[un]supported cmds/vars In unified fashion, instead of each
protocol-verification related function doing it on its own.
* drivers/apcsmart.c, drivers/apcsmart.h, drivers/apcsmart_tabs.c,
drivers/apcsmart_tabs.h: apcsmart: minor tidying up comments,
trivial changes, code shuffling ...
* drivers/apcsmart.c: apcsmart: remove unused field 'def' from cchar
In apc_ser_diff() reporting differences between
tcgetattr/tcsetattr, 'def' field was unused (along with related
defines).
2012-12-06 Frederic Bohe <fbohe-guest@alioth.debian.org>
* tools/nut-scanner/nut-scanner.c: Fix nut-scanner compilation
without pthread
2012-12-02 Charles Lepple <clepple+nut@gmail.com>
* drivers/riello_usb.c: riello_usb.c: eliminate uninitialized
variable
2012-11-29 Arnaud Quette <arnaud.quette@free.fr>
* conf/.gitignore: Subversion ignored files completion Mark
upsmon.conf.sample as Subversion ignored, since it is now generated
from a .in template file (no functional changes)
* conf/Makefile.am, conf/upsmon.conf.sample,
conf/upsmon.conf.sample.in, configure.in: Adapt upsmon.conf sample
to use configured values The sample upsmon.conf provided now
adapts RUN_AS_USER value, and NOTIFYCMD / POWERDOWNFLAG base path
to the user configured values
2012-11-28 Arnaud Quette <arnaud.quette@free.fr>
* drivers/riello.c, drivers/riello_ser.c, drivers/riello_usb.c: Minor
improvements to Riello drivers Fix ups.power.nominal name in
Riello drivers, and its value for GPSER protocol(riello_ser).
device.mfr was also changed in both drivers, and revisions were
bumped to 0.02 (patch from Elio Parisi, Riello)
* data/driver.list.in, drivers/Makefile.am, drivers/openups-hid.c,
drivers/openups-hid.h, drivers/usbhid-ups.c, scripts/upower/95
-upower-hid.rules: Official support for Minibox openUPS Intelligent
UPS Add a new usbhid-ups subdriver to handle Minibox openUPS
Intelligent UPS (USB ID 0x04d8:0xd004) (patch from Nicu Pavel,
Mini-Box.Com)
2012-11-28 Charles Lepple <clepple+nut@gmail.com>
* conf/upsmon.conf.sample, docs/man/upsmon.conf.txt: Update
references to pager.txt
2012-11-27 Frederic Bohe <fbohe-guest@alioth.debian.org>
* data/driver.list.in, docs/man/genericups.txt: Add information about
Eaton Management Card Contact
2012-11-25 Arnaud Quette <arnaud.quette@free.fr>
* drivers/riello.c, drivers/riello.h, drivers/riello_ser.c,
drivers/riello_usb.c: Minor improvements to Riello drivers Fix
functions and variables names to use English language. Also fix
warnings reported by Mac OS X Buildbot and Charles Lepple (patch
from Elio Parisi, Riello)
2012-11-21 Arnaud Quette <arnaud.quette@free.fr>
* docs/acknowledgements.txt: Complete Acknowledgements with a Riello
entry Riello deserves a dedicated entry in the Supporting UPS
manufacturers, for having provided protocols information and
drivers implementations
* docs/man/.gitignore: Subversion ignored files completion Mark
riello_ser and riello_usb HTML manpages as Subversion ignored (no
functional changes)
* data/driver.list.in, docs/man/.gitignore, docs/man/Makefile.am,
docs/man/riello_ser.txt, docs/man/riello_usb.txt,
drivers/.gitignore, drivers/Makefile.am, drivers/riello.c,
drivers/riello.h, drivers/riello_ser.c, drivers/riello_usb.c, tools
/nut-usbinfo.pl: Official support for Riello serial and USB devices
Add two new drivers, riello_ser and riello_usb, to support the
whole ranges of Riello devices: IDG, IPG, WPG, NPW, NDG, DVT, DVR,
DVD, VST, VSD, SEP, SDH, SDL, SPW, SPT, MCT, MST, MCM, MCT, MHT,
MPT and MPM. This completes the official Riello protocols
publication, that happened in May 2012 (developed by Elio Parisi,
from Riello)
2012-11-20 Arnaud Quette <arnaud.quette@free.fr>
* clients/upsclient.h, server/nut_ctype.h: Fix NSS include directives
The current NSS include directives (nss/nss.h) were incorrect.
These were failing on Redhat systems, and working on some others
because of the default include path (reported by Michal Hlavinka,
from Redhat)
2012-11-19 Arnaud Quette <arnaud.quette@free.fr>
* data/driver.list.in: HCL: Add support for Aviem Power RT
1000-3000VA Add Aviem Systems - Aviem Power RT 1000-3000VA to the
list of blazer_ser supported models (reported by Michael
Dobrovitsky)
2012-11-19 Emilien Kia <kiae.dev@gmail.com>
* tools/nut-scanner/nutscan-device.c: Fix a memory leak in scanner.
2012-11-13 Arnaud Quette <arnaud.quette@free.fr>
* docs/man/.gitignore: Complete the list of Subversion ignored files
Commit r3778 was missing generated HTML files (no functional
changes)
* docs/man/Makefile.am: Fix installation of libnutclient manual pages
Commit r3777 fixed the test target, but libnutclient manual pages
were not actually installed
* docs/man/.gitignore, lib/.gitignore: Complete the list of
Subversion ignored files The merge of NSS and libnutclient
branches have left some new generated files (no functional changes)
* docs/man/libnutclient_commands.txt,
docs/man/libnutclient_devices.txt,
docs/man/libnutclient_general.txt, docs/man/libnutclient_misc.txt,
docs/man/libnutclient_tcp.txt, docs/man/libnutclient_variables.txt:
Fix Buildbot failures on previous commit (man pages) The merge of
the libnutclient branch caused a failure of the 'distcheck-light'
test target. Manual pages documentation in this branch uses a
mechanism to generate multiple manpages from one source file. This
was however conflicting with a Makefile rule, that requires the
generated file to have the same name as the source file. Applies
the same principle by adding the content of the Header section to
the NAME commands list. Also fix a typo error in the Header section
of libnutclient_devices
2012-11-13 Emilien Kia <kiae.dev@gmail.com>
* clients/Makefile.am, clients/nutclient.cpp, clients/nutclient.h,
configure.in, docs/man/Makefile.am, docs/man/index.txt,
docs/man/libnutclient.txt, docs/man/libnutclient_commands.txt,
docs/man/libnutclient_devices.txt,
docs/man/libnutclient_general.txt, docs/man/libnutclient_misc.txt,
docs/man/libnutclient_tcp.txt, docs/man/libnutclient_variables.txt,
docs/new-clients.txt, lib/Makefile.am, lib/README,
lib/libnutclient.pc.in: Merge libnutclient (libcpp) branch Pull
Request #2: "High level C and C++ libnutclient" from . Hand-merged into SVN trunk
from commit: 701cc571f4f8578e9c82b13c1e9eab509a41cd7f
2012-11-08 Frederic Bohe <fbohe-guest@alioth.debian.org>
* docs/man/usbhid-ups.txt, drivers/usbhid-ups.c: Add a command line
to usbhid-ups to activate the max_report tweak.
* drivers/apc-hid.c, drivers/libhid.c: Fix tweak for APC Back-UPS
since it seems to break Back-UPS 700 connectivity (reported by
Denis Serov). Adding some more comments on UPS which need and
which do not need the tweak. Refactored the detection code.
2012-11-07 Arnaud Quette <arnaud.quette@free.fr>
* scripts/subdriver/gen-usbhid-subdriver.sh: Fix USB HID subdriver
generation tool This tool has not been updated since timestamps
were added to driver debug traces. It was thus producing erroneous
results (reported by Nicu Pavel)
2012-11-01 Arnaud Quette <arnaud.quette@free.fr>
* drivers/snmp-ups.c: Fix a crash on outlets management snmp-ups was
crashing when the number of outlets was equal to zero
2012-10-31 Arnaud Quette <arnaud.quette@free.fr>
* docs/man/.gitignore, docs/man/blazer.txt: Fix blazer manual pages
generation to generate blazer, blazer_ser and blazer_usb manual
pages. The same manual page is now distributed and available under
these 3 names (warning to packagers)
* docs/man/Makefile.am, docs/man/asciidoc.conf: Fix manpage
refmiscinfo attributes and multiple NAME refmiscinfo attributes
were previously specified through asciidoc.conf. This approach
prevented from specifying and generating multiple manual pages from
a single source. Moreover, manversion (pointing NUT version) was
mistyped, and thus omitted. Makefile rules now directly define
refmiscinfo through attributes, and allow to solve the long
standing blazer / blazer_ser / blazer_usb related issue, and the
upcoming libnutclient one
2012-10-19 Arnaud Quette <arnaud.quette@free.fr>
* drivers/cps-hid.c, drivers/idowell-hid.c, scripts/subdriver/gen-
usbhid-subdriver.sh: Replace missing occurrences in previous commit
* docs/hid-subdrivers.txt, drivers/libhid.c, scripts/Makefile.am,
scripts/subdriver/gen-usbhid-subdriver.sh, scripts/subdriver/path-
to-subdriver.sh: Rename usbhid subdriver generation script This
script was previously named path-to-subdriver.sh, which was not
meaningful enough. The renaming to gen-usbhid-subdriver.sh also
makes sense with a potential gen-snmp-subdriver.sh
* data/driver.list.in: HCL: Add support for Apollo 850VA Add Apollo
850VA (USB ID 0x0665:0x5161) to the list of blazer_usb supported
models (reported by Mike Raath)
2012-10-15 Arnaud Quette <arnaud.quette@free.fr>
* configure.in, scripts/systemd/nut-driver.service.in: Fix driver
path in systemd driver unit The driver path, in nut-
driver.service, was not expanded correctly (reported by Marc
Rechté)
2012-10-15 Michal Soltys <msoltyspl-guest@alioth.debian.org>
* data/driver.list.in: HCL: add info about new APC models Info about
new SMT, SMX and SURTD models which require additional card for
"legacy" smart protocol.
2012-10-15 Arnaud Quette <arnaud.quette@free.fr>
* configure.in: Only fail if SSL was explicitly requested
Configuration should not abort if neither OpenSSL nor Mozilla NSS
has been found, and if SSL was not explicitly requested by the
user. This fixes the Buildbot compilation failure on Aix (build
#206)
2012-10-12 Charles Lepple <clepple+nut@gmail.com>
* tools/git-svn.authors, tools/svn2cl.authors: Update Emilien Kia's SVN author mappings
2012-10-11 Arnaud Quette <arnaud.quette@free.fr>
* docs/Makefile.am: Fix Solaris compilation failure
2012-10-10 Arnaud Quette <arnaud.quette@free.fr>
* README: Spell check fix (test)
* .gitignore, configure.in, docs/.gitignore, docs/Makefile.am, docs
/nut-qa.txt, docs/nut.dict: Spell checking framework implementation
Implement a framework to spell check documentation source files,
using Aspell. This includes an interactive build target (make
spellcheck-interactive), and an automated one (make spellcheck),
mainly for QA / Buildbot purposes. Note that a base NUT dictionary
is also available (docs/nut.dict), providing a glossary of terms
related to power devices and management
* drivers/tripplite_usb.c: Remove POD ("Plain Old Documentation")
With the approval of the author (Charles Lepple), remove POD
("Plain Old Documentation"). This embedded documentation was
redundant, and is probably out of date, with respect to the
AsciiDoc version
* drivers/powercom.c, drivers/powercom.h, drivers/upscode2.c: Remove
unnecessary RCS $Id lines
2012-10-05 Arnaud Quette <arnaud.quette@free.fr>
* tools/nut-scanner/nut-scan.h: Fix compilation error Define
IPMI_PRIVILEGE_LEVEL_ADMIN value, in case FreeIPMI is not available
2012-10-04 Arnaud Quette <arnaud.quette@free.fr>
* docs/man/nut-scanner.txt, drivers/nut-ipmipsu.c, tools/nut-scanner
/nut-scan.h, tools/nut-scanner/nut-scanner.c, tools/nut-
scanner/scan_ipmi.c: Support power supplies scan over the network
nut-scanner can now scan for power supplies with IPMI over LAN.
This is currently limited to IPMI 1.5 only
2012-10-03 Arnaud Quette <arnaud.quette@free.fr>
* docs/acknowledgements.txt: Update acknowledgements
2012-09-28 Charles Lepple <clepple+nut@gmail.com>
* drivers/.gitignore: Cleanup of svn:ignore list in drivers/ (no code
change)
2012-09-27 Charles Lepple <clepple+nut@gmail.com>
* tools/git-svn.authors, tools/svn2cl.authors: Welcome, Václav! (SVN
username mappings)
2012-09-21 Arnaud Quette <arnaud.quette@free.fr>
* docs/nut-qa.txt: Update the link to the Ubuntu QRT script
2012-09-19 Arnaud Quette <arnaud.quette@free.fr>
* drivers/nut-libfreeipmi.c, m4/nut_check_libfreeipmi.m4, tools/nut-
scanner/scan_ipmi.c: Support for FreeIPMI 1.1.x and 1.2.x (#2)
Prepare for supporting API changes in FreeIPMI 1.1.x and 1.2.x.
This 2nd patch, which completes [[SVN:3675]], addresses FRU API
changes, and removes code redundancy. This code has been tested
with FreeIPMI 0.8.12 and the latest [[FreeIPMI SVN]] trunk r9505
(reported as 1.2.0.beta2 by pkgconfig)
* docs/download.txt, docs/website/news.txt: Update Windows package
publications for 2.6.5-3
2012-09-17 Arnaud Quette <arnaud.quette@free.fr>
* docs/download.txt, docs/website/news.txt: Update Windows package
publications for 2.6.5-2
2012-09-12 Arnaud Quette <arnaud.quette@free.fr>
* drivers/bcmxcp_usb.c: Fix data reception loop The new data
reception algorithm was trying to get more data than it should
(patch from Rich Wrenn)
2012-09-10 Frederic Bohe <fbohe-guest@alioth.debian.org>
* drivers/apc-hid.c, drivers/apc-hid.h, drivers/libhid.c: Add a tweak
for APC Back UPS ES APC Back UPS ES have a buggy firmware which
overflows on ReportID 0x0c, i.e.
UPS.PowerSummary.RemainingCapacity. This results in battery.charge
not being exposed and endless reconnections on systems with libusb
reporting EOVERFLOW. And it results on a failure to init the driver
for systems with libusb not reporting EOVERFLOW but EIO (i.e. on
Windows).
* tools/nut-scanner/nut-scanner.c: [nut-scanner] Fix a crash when no
start IP is provided.
* drivers/apc-hid.c, drivers/bcmxcp_usb.c, drivers/belkin-hid.c,
drivers/blazer_usb.c, drivers/cps-hid.c, drivers/idowell-hid.c,
drivers/liebert-hid.c, drivers/mge-hid.c, drivers/powercom-hid.c,
drivers/richcomm_usb.c, drivers/tripplite-hid.c,
drivers/tripplite_usb.c, drivers/usb-common.c, drivers/usb-
common.h: Extend USB device support check (from Arnaud Quette) Use
USBDevice_t structure in is_usb_device_supported(), instead of
direct VendorID and ProductID. This allows to pass it to the
specific processing handler for broader check
2012-09-07 Leo Arias <elopio-guest@alioth.debian.org>
* conf/nut.conf.sample: Update nut.conf.sample (grammar and
documentation)
d=411544&aid=313762&group_id=30602
2012-08-14 Arnaud Quette <arnaud.quette@free.fr>
* NEWS, UPGRADING, configure.in, data/driver.list.in,
docs/Makefile.am, docs/configure.txt, docs/documentation.txt,
docs/download.txt, docs/images/eaton-logo.png,
docs/images/hostedby.png, docs/images/simple.png,
docs/man/.gitignore, docs/man/Makefile.am, docs/man/index.txt,
docs/man/macosx-ups.txt, docs/man/mge-shut.txt,
docs/man/nutscan.txt, docs/man/nutscan_scan_avahi.txt,
docs/man/powercom.txt, docs/man/skel.txt, docs/nut-names.txt,
docs/website/.gitignore, docs/website/Makefile.am, docs/website/css
/web-layout.css, docs/website/news.txt, docs/website/old-news.txt,
docs/website/projects.txt, docs/website/web-layout.conf,
drivers/.gitignore, drivers/Makefile.am, drivers/macosx-ups.c,
drivers/mge-hid.c, drivers/powercom-hid.c, drivers/skel.c, drivers
/usbhid-ups.c, drivers/usbhid-ups.h, m4/nut_check_libltdl.m4: Merge
from trunk [[SVN:3679]] to [[SVN:3718]] to ssl-nss-port
2012-08-09 Arnaud Quette <arnaud.quette@free.fr>
* docs/website/.gitignore, docs/website/Makefile.am,
docs/website/news.txt, docs/website/old-news.txt: Integrate
archived news
* docs/nut-names.txt, drivers/mge-hid.c: Add shutdown ability switch
to Eaton units Eaton HID units (using usbhid-ups or [new,old]mge-
shut) were missing a data mapping to allow the change of the
shutdown ability switch. The result was that the UPS was not
powered off, even if all the protocol commands were sent (reported
by Daniel O'Connor)
* docs/download.txt, docs/website/news.txt: Update Windows package
publications for 2.6.5-1
* docs/man/.gitignore, docs/man/index.txt: Added macosx-ups manual
page to the index Also add generated groff and HTML contents to
the list of Subversion ignored files
Log message:
Recursive bumps for fontconfig and libzip dependency changes.
Log message:
Recursive revbump from graphics/libwebp
Log message:
Revbump after graphics/gd update
Log message:
Recursive revbump from multimedia/libvpx
Log message:
Recursive revbump from pkgsrc/multimedia/libvpx.
Log message:
recursive bump from graphics/gd shlib major bump.
Log message:
Try to fix the fallout caused by the fix for PR pkg/47882. Part 3:
Recursively bump package revisions again after the "freetype2" and
"fontconfig" handling was fixed. | https://pkgsrc.se/sysutils/ups-nut-cgi | CC-MAIN-2020-10 | en | refinedweb |
Python wrapper for NI Oscilloscopes
Project description
pyniscope provides a Python package, niscope, that wraps the NI-SCOPE driver software for Python using ctypes. The package is tested with NI-SCOPE library version 3.3.2, using PCI-5122 cards on Windows XP and PXI-5114 on Windows 7.
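Since the wrapper is built on ctypes, each underlying driver call returns an integer status code. NI drivers conventionally return 0 for success, negative values for errors and positive values for warnings (treat that convention, and every name below, as an assumption for illustration rather than the package's actual API). A minimal status-checking helper in the style such wrappers typically use:

```python
class ScopeError(Exception):
    """Raised when a driver call reports a negative (error) status."""


def check_status(code):
    # Assumed NI convention: 0 = success, < 0 = error, > 0 = warning.
    if code < 0:
        raise ScopeError("driver call failed with status %d" % code)
    return code
```

A ctypes wrapper would then guard every foreign call with it, e.g. `check_status(lib.niScope_InitiateAcquisition(handle))` (function name hypothetical).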
Basic usage
Example:
import matplotlib.pyplot as plt
import niscope

scope = niscope.Scope()
scope.ConfigureHorizontalTiming()
scope.ConfigureVertical()
scope.ConfigureTrigger("Edge", TRIGGER_SOURCE.EXTERNAL, 2.5, SLOPE.POSITIVE, 0, 0)
scope.InitiateAcquisition()
raw_input("Enter")  # Python 2: block until the user confirms the acquisition
data = scope.Fetch()
scope.close()
plt.plot(data)
plt.show()
Requirements
The National Instruments NI-SCOPE drivers are required. If you do not have a physical NI scope, it is possible to test pyniscope by installing a simulated instrument in NI Measurement & Automation Explorer (search ni.com for NI-SCOPE). Pyniscope was originally tested on Linux with NI-SCOPE 3.1, NI-KAL 2.1, and NI-VISA 5.0.
HTTP stream from camera delayed by a long time
Hi all,
I have a raspberry pi camera with "Motion" operating at
localhost:8081, which I have access to through my browser simply by including this HTML element in a page:
<iframe src="my.pi.IP.add:8081" *params*></iframe>, and this works absolutely great.
After the fact, I decided I needed to get the "profile" across a frame for data-processing reasons and, since I already have other data coming from a Python server running Tornado, I decided to include python-opencv and use that to analyse the frames.
I initialise the video with:
def init_video():
    print("Opening video port")
    # Set up video stream
    video_port = "8081"
    stream = "localhost:%s/frame.mjpeg" % video_port
    capture = cv2.VideoCapture(stream)
    return capture
Within the tornado ioloop I have a scheduled function which sends data to the client every 50ms or so, to update numeric indicators on a display, and again this works flawlessly. I also have the following function as part of the scheduler (which takes
capture as an argument):
def intensity_profiles(capture):
    # capture a frame from the stream and take a profile across it
    ret, frame = capture.read()
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # middle row of the frame: shape[0] is the row count, and
    # integer division keeps the index an int
    x_profile = grey[grey.shape[0] // 2, :]
    return x_profile
And this works, but there is a BIG delay between the stream shown in the iframe (which updates close to real time) and the
x_profile data which is coming out of this function. If I include a line in
intensity_profiles() which prints the value of one pixel, covering the camera with my hand, I counted the seconds until I saw a difference in the output and it was between 10 and 30 seconds delay.
I don't know where this delay comes from, since both the iframe and the cv2 capture are pointed at the exact same URI for the motion server and so should be receiving frames at roughly the same time, right?
In any case, this is really annoying for a ~real time application viewing the profiles (maximum desired lag ~2 seconds or less) so does anyone know what I should do differently? | https://answers.opencv.org/question/129971/http-stream-from-camera-delayed-by-a-long-time/ | CC-MAIN-2020-10 | en | refinedweb |
error in dispatch
Hello,
I connect to IB and add two symbols (for each symbol, 1-min and 3-min bars) to the store; four data feeds in total. However, I get this error very often (though not always):
13-Mar-17 15:25:03 ERROR Exception in message dispatch. Handler 'historicalData' for 'historicalData'
Traceback (most recent call last):
  File "C:\Users\Reza\Anaconda3\lib\site-packages\ib\opt\dispatcher.py", line 44, in __call__
    results.append(listener(message))
  File "C:\Users\Reza\trading\libs\backtrader\backtrader\stores\ibstore.py", line 920, in historicalData
    q = self.qs[tickerId]
KeyError: 16777221
13-Mar-17 15:25:03 ERROR Exception in message dispatch. Handler 'historicalData' for 'historicalData'
Traceback (most recent call last):
  File "C:\Users\Reza\Anaconda3\lib\site-packages\ib\opt\dispatcher.py", line 44, in __call__
    results.append(listener(message))
@rastegarr said in error in dispatch:
in historicalData
    q = self.qs[tickerId]
KeyError: 16777221
This is the relevant part. It means that an incoming event for a historical data download with
tickerId=16777221 came in and the system is not expecting that.
The
tickerId values are purged from the queues expecting data when the historical download is finished or when the historical data is canceled.
Some extra information is needed to understand why the
tickerId is no longer there if a download is actually happening.
@backtrader I updated to the latest version of backtrader and I no longer get that dispatcher error; however, I'm getting a different issue. The code is very simple:
from __future__ import (absolute_import, division, print_function, unicode_literals)

import backtrader as bt
import time


class BuyOnGapStrategy(bt.Strategy):

    def log(self, txt, doprint=False):
        if doprint:
            print('%s' % (txt))

    def __init__(self):
        self.log('******************* Strategy Created *********************', doprint=True)

    def notify_data(self, data, status, *args, **kwargs):
        # CONNECTED, DISCONNECTED, CONNBROKEN, NOTSUBSCRIBED, DELAYED, LIVE
        self.log('DATA NOTIF: %s, %s' % (data._getstatusname(status), data._dataname), doprint=True)

    def notify_store(self, msg, *args, **kwargs):
        self.log('-> STORE NOTIF: %s' % (msg), doprint=True)

    def notify_order(self, order):
        pass

    def notify_trade(self, trade):
        pass

    def next(self):
        for indx in range(0, len(self.datas), 2):
            datax = self.datas[indx]
            datax2 = self.datas[indx + 1]
            if datax is not None and len(datax.datetime) > 0:
                self.log("Sym %s, Time %s, 1-min Close %.2f" % (datax._dataname, datax.datetime.time(), datax.close[0]), doprint=True)
            if datax2 is not None and len(datax2.datetime) > 0:
                self.log("Sym %s, Time %s, 3-min Close %.2f" % (datax2._dataname, datax2.datetime.time(), datax2.close[0]), doprint=True)


if __name__ == '__main__':
    all_syms = [...]  # the symbol list was truncated in the original post

    storekwargs = dict(
        host="127.0.0.1",
        port=4001,
        clientId=35,
        timeoffset=True,
        reconnect=True,
        timeout=10,
        notifyall=False,
        _debug=False
    )

    ibstore = bt.stores.IBStore(**storekwargs)
    cerebro = bt.Cerebro(exactbars=1)
    cerebro.setbroker(ibstore.getbroker())

    datakwargs = dict(
        timeframe=bt.TimeFrame.Minutes,
        compression=1,
        qcheck=0.5,
        historical=False,
        backfill_start=True,
        backfill=True,
        latethrough=True
    )

    for symbol in all_syms:
        ...  # the data-feed creation / resampling calls were truncated in the original post

    # Add the strategy
    cerebro.addstrategy(BuyOnGapStrategy)
    cerebro.run()
Here is the message I get:
Server Version: 76
TWS Time at connection: 20170317 11:58:19 CST
******************* Strategy Created *********************
-> STORE NOTIF: <error id=-1, errorCode=2104, errorMsg=Market data farm connection is OK:usfarm>
-> STORE NOTIF: <error id=-1, errorCode=2106, errorMsg=HMDS data farm connection is OK:ushmds>
DATA NOTIF: DELAYED, YHOO-STK-SMART-USD
DATA NOTIF: DELAYED, CTXS-STK-SMART-USD
DATA NOTIF: DELAYED, ADSK-STK-SMART-USD
DATA NOTIF: DELAYED, ETFC-STK-SMART-USD
DATA NOTIF: DELAYED, DISCA-STK-SMART-USD
DATA NOTIF: DELAYED, QCOM-STK-SMART-USD
DATA NOTIF: DELAYED, ADS-STK-SMART-USD
DATA NOTIF: DELAYED, PVH-STK-SMART-USD
DATA NOTIF: DELAYED, AMG-STK-SMART-USD
DATA NOTIF: DELAYED, CMA-STK-SMART-USD
Traceback (most recent call last):
  File "ib_test.py", line 84, in <module>
    cerebro.run()
  File "C:\Users\Reza\trading\libs\backtrader\backtrader\cerebro.py", line 794, in run
    runstrat = self.runstrategies(iterstrat)
  File "C:\Users\Reza\trading\libs\backtrader\backtrader\cerebro.py", line 924, in runstrategies
    self._runnext(runstrats)
  File "C:\Users\Reza\trading\libs\backtrader\backtrader\cerebro.py", line 1240, in _runnext
    strat._next()
  File "C:\Users\Reza\trading\libs\backtrader\backtrader\strategy.py", line 296, in _next
    super(Strategy, self)._next()
  File "C:\Users\Reza\trading\libs\backtrader\backtrader\lineiterator.py", line 240, in _next
    clock_len = self._clk_update()
  File "C:\Users\Reza\trading\libs\backtrader\backtrader\strategy.py", line 285, in _clk_update
    newdlens = [len(d) for d in self.datas]
  File "C:\Users\Reza\trading\libs\backtrader\backtrader\strategy.py", line 285, in <listcomp>
    newdlens = [len(d) for d in self.datas]
  File "C:\Users\Reza\trading\libs\backtrader\backtrader\lineseries.py", line 432, in __len__
    return len(self.lines)
  File "C:\Users\Reza\trading\libs\backtrader\backtrader\lineseries.py", line 199, in __len__
    return len(self.lines[0])
ValueError: __len__() should return >= 0
- backtrader administrators last edited by backtrader
@rastegarr said in error in dispatch:
The recommendation is to create independent data feeds and then resample them individually.
This is for sure bound to generate a pacing violation in the communication with Interactive Brokers, which is going to prevent historical data download for many of the symbols (and each symbol is being downloaded twice).
Hence
ValueError: __len__() should return >= 0
At least one (probably many) of the data feeds has downloaded absolutely nothing, and this has generated a synchronization problem.
@backtrader Thanks backtrader!
- How can I get only last 30 minutes of the historical data so this way I won't violate the IB download constraints?
- Since I have about 190 symbols, how can I have refill (in the beginning and upon the disconnect) without violating pacing violation?
Thanks for your help!
@rastegarr said in error in dispatch:
First, use a data feed for each resampling instance you wish, to avoid having duplicate historical requests which are later resampled by the platform. A pacing violation (given the amount of symbols) is also probably the cause of the original error.
- How can I get only last 30 minutes of the historical data so this way I won't violate the IB download constraints?
It's not considered in the platform.
backtrader is not meant to account for the pacing violations from IB. It tries to automatically get just the amount of data for a single symbol which fits into a request. Reducing the size of a single request will not reduce the amount of requests.
- Since I have about 190 symbols, how can I have refill (in the beginning and upon the disconnect) without violating pacing violation?
By downloading offline, disabling the normal backfilling and using
backfill_from, to give you pass a data feed which is reading from the on-disk stored data.
This of course doesn't allow backfilling after a disconnect/reconnect cycle. That would again hit a pacing limit and trigger a pacing violation with 190 requests (x2, since you are inputting each symbol twice).
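The download-once pattern being recommended is independent of backtrader itself. A minimal sketch of the idea (all names here — `cached_history`, `fetch`, the CSV content — are made up for illustration, not backtrader API):

```python
# Generic "download once, serve from disk afterwards" helper. `fetch` stands
# in for whatever actually requests history from the data provider.
import os
import tempfile

def cached_history(symbol, fetch, cache_dir):
    os.makedirs(cache_dir, exist_ok=True)
    path = os.path.join(cache_dir, symbol + '.csv')
    if not os.path.exists(path):          # only the first run hits the provider
        with open(path, 'w') as f:
            f.write(fetch(symbol))
    with open(path) as f:                 # restarts read the disk copy instead
        return f.read()

calls = []
def fake_fetch(symbol):                   # stand-in for the real downloader
    calls.append(symbol)
    return 'timestamp,close\n2017-03-17 11:58,100.0\n'

cache_dir = tempfile.mkdtemp()
first = cached_history('YHOO', fake_fetch, cache_dir)
second = cached_history('YHOO', fake_fetch, cache_dir)
print(len(calls))    # 1 -- the provider was only asked once, not once per run
```

With 190 symbols this keeps the provider traffic to one request per symbol ever, rather than per restart.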
Interactive Brokers sees itself as a broker and not as a data provider, hence the pacing violations and the recommendation by IB itself to use a real data provider if such scenarios are needed.
@backtrader Thanks, for your informative comments. What company would you recommend as a minute-bar data provider that is compatible with backtrader? I'd still like to use IB as the broker since my business partner likes them due to their cost structure.
If you can write it down to disk, anything will be compatible.
If you are looking for live minute data with backfilling, the only other such source which is implemented is VisualChart. This has the drawback that it only runs under Windows and may be out of your area of influence.
Communication over COM also has some small issues and data download may have to be restarted (COM is not one of the strongest aspects in Python).
- pablomarin last edited by pablomarin
@backtrader is it possible to put a sleep(15) somewhere in the code to overcome this limitation instead of downloading offline and use backfill_from? when requesting the historical data for many symbols, a sleep(15) between each request might do the trick. Let us know.
You can also shoot yourself in the knee and then see if it hurts. The problem is the data provider and the amount of symbols requested from it. Blocking a thread 15 seconds to try to overcome a structural problem is not the way.
Feel free to do it yourself. | https://community.backtrader.com/topic/247/error-in-dispatch/10 | CC-MAIN-2020-10 | en | refinedweb |
How to kill a script?
I have a script running in an infinite loop and I can't stop it short of closing Pythonista. On a PC I'd hit <CTL>C. Of course, that's not an option on an iPad. What's the routine? Thanks.
- KnightExcalibur
I asked a similar question a few days ago. If you are using scene, I quote pulbrich in his response:
- pulbrich posted 4 days ago
In the Game class you can include a
def stop(self):
...
method. It gets called automatically when a scene is stopped (by tapping the “x” button).
If this is a script that is running in the console, I imagine it would require a different approach. I'll leave that to the experienced guys:)
@KnightExcalibur Thanks, but I'm not using scene.
@mikael I can see the X at the top right, but when I tap on it nothing happens. Should have said that originally. I should add that I have the same problem in IDLE on my PC running the same script, so I assume it's a Python issue and not a Pythonista issue.
Don't create infinite loops.
The X in the console basically issues a KeyboardInterrupt. But if your loop uses try/except all, you won't be able to cancel it. Be sure to always only catch the exceptions specific to your code.
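The hierarchy behind this advice can be checked directly — KeyboardInterrupt (what the X effectively raises) and SystemExit (what sys.exit() raises) derive from BaseException rather than Exception, so `except Exception` lets them through while a bare `except:` swallows them:

```python
# KeyboardInterrupt and SystemExit sit beside Exception under BaseException,
# so `except Exception` does not swallow them, while a bare `except:` does.
print(issubclass(KeyboardInterrupt, Exception))      # False
print(issubclass(SystemExit, Exception))             # False
print(issubclass(KeyboardInterrupt, BaseException))  # True

def broad_handler():
    try:
        raise KeyboardInterrupt          # what tapping the console's X does
    except Exception:
        return 'swallowed'               # never reached for KeyboardInterrupt

try:
    broad_handler()
    stopped = False
except KeyboardInterrupt:
    stopped = True                       # the interrupt propagated out
print(stopped)   # True -- the script can actually be cancelled
```

This is also why a bare `except:` (or `except BaseException`) can make a script appear unkillable and neutralize sys.exit().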
Don't create infinite loops.
Well, I need to repeatedly request input from the user. How else can I do that except something like while(True)? I also use try/except since the input should only be a number and the user may enter a letter or something else by mistake. The user can enter 4 to exit, and that function issues a sys.exit(), but that has no effect.
@Involute this script can be stopped by the x at top right
while True:
    try:
        t = int(input())
        print(t)
    except KeyboardInterrupt:
        break
    except Exception as e:
        #print(e)
        print('input is not integer')
except Exception is also a little questionable -- you are probably getting a specific exception, catch that one.
while True is probably okay, just make sure there is a path to exit the loop.
Multi-profile supported, flexible config library
Project description
octoconf
Multi-profile supported, flexible config library for Python 2 & 3.
Features
- Allow multiple config profiles in one YAML file
- Allow include multiple YAML files
- Overridable profile selection from code for special use-cases (e.g. config for testing)
- Inheritable config profiles, merged together as dictionaries (native YAML anchors are also available)
- Can use variables in the config file
Installation
pip install octoconf
Config format
An octoconf config file is pure YAML file with some reserved keywords:
- USED_CONFIG>: <node_name> in the file root
- you can specify the name of the default config profile
- <INCLUDE: <yml_path(s)> in the file root
- this octoconf file(s) will be included
- <BASE: <node_name> in the 2nd level
- this will be used for making (merge-based) inheritance between profiles
The profile nodes should be on 1st level!
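As an illustration of what a merge-based `<BASE:` inheritance has to do (this is just the idea sketched in plain Python, not octoconf's actual implementation):

```python
# Recursive dict merge, the essence of profile inheritance: child values win,
# nested dicts are merged key by key.
def merge(base, override):
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge(merged[key], value)
        else:
            merged[key] = value
    return merged

production = {'App': {'TITLE': 'Foobar', 'DEBUG': False},
              'Flask': {'PORT': 80}}
user = {'App': {'TITLE': 'Amazing Foobar'}}   # a profile with <BASE: production

profile = merge(production, user)
print(profile['App'])    # {'TITLE': 'Amazing Foobar', 'DEBUG': False}
print(profile['Flask'])  # {'PORT': 80}
```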
Usage
- You can load config from string with loads():
import octoconf

config = octoconf.loads(yaml_string)
print(config)
- Or directly from StringIO (e.g. from file) with load():
import octoconf

with open('config.yml') as fd:
    config = octoconf.load(fd)
print(config)
Please check the features docs for an explanation of octoconf’s features.
Example YAML files
USED_CONFIG>: UserConfig

<INCLUDE: vendor.defaults.yml

# This config overrides the production preset (from vendor.defaults.yml file)
UserConfig:
  <BASE: ProductionConfig
  App:
    TITLE: "Amazing Foobar"
  Flask:
    SQLALCHEMY_DATABASE_URI: "sqlite:///${SERVER}/app.sqlite"
For more examples, please check the examples directory.
Bugs
Bugs or suggestions? Visit the issue tracker.
Pytorch implementation of the learning rate range test
Project description
PyTorch learning rate finder
A PyTorch implementation of the learning rate range test detailed in Cyclical Learning Rates for Training Neural Networks by Leslie N. Smith and the tweaked version used by fastai.
The learning rate range test is a test that provides valuable information about the optimal learning rate. During a pre-training run, the learning rate is increased linearly or exponentially between two boundaries. The low initial learning rate allows the network to start converging and as the learning rate is increased it will eventually be too large and the network will diverge.
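The exponential variant of that schedule is easy to sketch on its own (illustrative only — the library computes its schedule internally):

```python
# Exponential learning-rate schedule of a range test: num_iter values spaced
# multiplicatively between start_lr and end_lr.
def exp_schedule(start_lr, end_lr, num_iter):
    ratio = end_lr / start_lr
    return [start_lr * ratio ** (i / (num_iter - 1)) for i in range(num_iter)]

lrs = exp_schedule(1e-7, 100, 100)
print(lrs[0])              # 1e-07
print(round(lrs[-1], 6))   # 100.0 (up to floating-point error)
```

The low end lets the network start converging; as the values grow past the useful range the loss diverges, which is what the plots below visualize.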
Typically, a good static learning rate can be found half-way on the descending loss curve. In the plot below that would be lr = 0.002.
For cyclical learning rates (also detailed in Leslie Smith's paper) where the learning rate is cycled between two boundaries (start_lr, end_lr), the author advises the point at which the loss starts descending and the point at which the loss stops descending or becomes ragged for start_lr and end_lr respectively. In the plot below, start_lr = 0.0002 and end_lr = 0.2.
Installation
Python 2.7 and above:
pip install torch-lr-finder
Install with the support of mixed precision training (requires Python 3, see also this section):
pip install torch-lr-finder -v --global-option="amp"
Implementation details and usage
Tweaked version from fastai
Increases the learning rate in an exponential manner and computes the training loss for each learning rate.
lr_finder.plot() plots the training loss versus logarithmic learning rate.
from torch_lr_finder import LRFinder

model = ...
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-7, weight_decay=1e-2)
lr_finder = LRFinder(model, optimizer, criterion, device="cuda")
lr_finder.range_test(trainloader, end_lr=100, num_iter=100)
lr_finder.plot()  # to inspect the loss-learning rate graph
lr_finder.reset()  # to reset the model and optimizer to their initial state
Leslie Smith's approach
Increases the learning rate linearly and computes the evaluation loss for each learning rate.
lr_finder.plot() plots the evaluation loss versus learning rate.
This approach typically produces more precise curves because the evaluation loss is more susceptible to divergence but it takes significantly longer to perform the test, especially if the evaluation dataset is large.
from torch_lr_finder import LRFinder

model = ...
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.1, weight_decay=1e-2)
lr_finder = LRFinder(model, optimizer, criterion, device="cuda")
lr_finder.range_test(trainloader, val_loader=val_loader, end_lr=1, num_iter=100, step_mode="linear")
lr_finder.plot(log_lr=False)
lr_finder.reset()
Notes
- Examples for CIFAR10 and MNIST can be found in the examples folder.
- The optimizer passed to LRFinder should not have an LRScheduler attached to it.
- LRFinder.range_test() will change the model weights and the optimizer parameters. Both can be restored to their initial state with LRFinder.reset().
- The learning rate and loss history can be accessed through lr_finder.history. This will return a dictionary with lr and loss keys.
- When using step_mode="linear" the learning rate range should be within the same order of magnitude.
Additional support for training
Gradient accumulation
You can set the accumulation_steps parameter in LRFinder.range_test() with a proper value to perform gradient accumulation:
from torch.utils.data import DataLoader
from torch_lr_finder import LRFinder

desired_batch_size, real_batch_size = 32, 4
accumulation_steps = desired_batch_size // real_batch_size

dataset = ...

# Beware of the `batch_size` used by `DataLoader`
trainloader = DataLoader(dataset, batch_size=real_batch_size, shuffle=True)

model = ...
criterion = ...
optimizer = ...

lr_finder = LRFinder(model, optimizer, criterion, device="cuda")
lr_finder.range_test(trainloader, end_lr=10, num_iter=100, step_mode="exp", accumulation_steps=accumulation_steps)
lr_finder.plot()
lr_finder.reset()
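What accumulation_steps buys can be illustrated without any torch code at all — the gradients of several small batches are scaled and summed before one (virtual) optimizer step (a toy model, not the library's implementation):

```python
# Emulating a batch of size 4 with two batches of size 2: scale each small
# batch's gradient by 1/accumulation_steps and sum before stepping.
def accumulated_steps(batch_grads, accumulation_steps):
    steps, acc = [], 0.0
    for i, g in enumerate(batch_grads, start=1):
        acc += g / accumulation_steps       # per-small-batch scaling
        if i % accumulation_steps == 0:     # time for the real optimizer step
            steps.append(acc)
            acc = 0.0
    return steps

print(accumulated_steps([1.0, 3.0, 5.0, 7.0], accumulation_steps=2))
# [2.0, 6.0] -- identical to the big-batch means (1+3)/2 and (5+7)/2
```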
Mixed precision training
Currently, we use apex as the dependency for mixed precision training. To enable mixed precision training, you just need to call amp.initialize() before running LRFinder. e.g.
from torch_lr_finder import LRFinder
from apex import amp

# Add this line before running `LRFinder`
model, optimizer = amp.initialize(model, optimizer, opt_level='O1')

lr_finder = LRFinder(model, optimizer, criterion, device='cuda')
lr_finder.range_test(trainloader, end_lr=10, num_iter=100, step_mode='exp')
lr_finder.plot()
lr_finder.reset()
Note that the benefit of mixed precision training requires an NVIDIA GPU with tensor cores (see also: NVIDIA/apex #297)
Besides, you can try to set torch.backends.cudnn.benchmark = True to improve the training speed. (but it won't work for some cases, you should use it at your own risk)
unittest-based test runner with Ant/JUnit like XML reporting.
Project description
unittest-xml-reporting (aka xmlrunner)
A unittest test runner that can save test results to XML files in xUnit format. The files can be consumed by a wide range of tools, such as build systems, IDEs and continuous integration servers.
Schema
There are many schemas with minor differences.
We use one that is compatible with the Jenkins xUnit plugin; a copy is available under tests/vendor/jenkins/xunit-plugin/junit-10.xsd (see attached license).
- Jenkins (junit-10.xsd), xunit plugin (2014-2018); please note the latest versions (2.2.4 and above) are not backwards compatible
You may also find these resources useful:
- Jenkins (junit-10.xsd), xunit plugin 2.2.4+
- JUnit-Schema (JUnit.xsd)
- Windyroad (JUnit.xsd)
- a gist (Jenkins xUnit test result schema)
Things that are somewhat broken
Python 3 has the concept of sub-tests for a unittest.TestCase; this doesn't map well to an existing xUnit concept, so you won't find it in the schema. What that means is that you lose some granularity in the reports for sub-tests.
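For orientation, the testsuite/testcase shape those schemas describe is small enough to sketch with nothing but the standard library (simplified — real schemas require more attributes than shown here):

```python
# A minimal report in the JUnit/xUnit shape: a <testsuite> wrapping
# <testcase> elements, with <failure> children for failed cases.
import xml.etree.ElementTree as ET

suite = ET.Element('testsuite', name='example', tests='2', failures='1')
ET.SubElement(suite, 'testcase', classname='demo.TestDemo', name='test_ok')
bad = ET.SubElement(suite, 'testcase', classname='demo.TestDemo', name='test_bad')
ET.SubElement(bad, 'failure', message='assert 1 == 2')

report = ET.tostring(suite, encoding='unicode')
print(report)
```

This is the structure that build servers and IDEs parse out of the files XMLTestRunner writes.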
Requirements
- Python 3.5+
- Please note Python 2.7 end-of-life was in Jan 2020, last version supporting 2.7 was 2.5.2
- Please note Python 3.4 end-of-life was in Mar 2019, last version supporting 3.4 was 2.5.2
- Please note Python 2.6 end-of-life was in Oct 2013, last version supporting 2.6 was 1.14.0
Installation
The easiest way to install unittest-xml-reporting is via Pip:
$ pip install unittest-xml-reporting
If you use Git and want to get the latest development version:
$ git clone
$ cd unittest-xml-reporting
$ sudo python setup.py install
Or get the latest development version as a tarball:
$ wget
$ unzip master.zip
$ cd unittest-xml-reporting
$ sudo python setup.py install
Or you can manually download the latest released version from PyPI.
Command-line
python -m xmlrunner [options]
python -m xmlrunner discover [options]

# help
python -m xmlrunner -h
e.g.
python -m xmlrunner discover -t ~/mycode/tests -o /tmp/build/junit-reports
Usage
The script below, adapted from the unittest documentation, shows how to use XMLTestRunner in a very simple way. In fact, the only difference between this script and the original one is the last line:
import random
import unittest
import xmlrunner

class TestSequenceFunctions(unittest.TestCase):
    def setUp(self):
        self.seq = list(range(10))

    @unittest.skip("demonstrating skipping")
    def test_skipped(self):
        self.fail("shouldn't happen")

if __name__ == '__main__':
    unittest.main(
        testRunner=xmlrunner.XMLTestRunner(output='test-reports'),
        # these make sure that some options that are not applicable
        # remain hidden from the help menu.
        failfast=False, buffer=False, catchbreak=False)
Reporting to a single file
if __name__ == '__main__':
    with open('/path/to/results.xml', 'wb') as output:
        unittest.main(
            testRunner=xmlrunner.XMLTestRunner(output=output),
            failfast=False, buffer=False, catchbreak=False)
Doctest support
The XMLTestRunner can also be used to report on docstring-style tests.
import doctest
import xmlrunner

def twice(n):
    """
    >>> twice(5)
    10
    """
    return 2 * n

class Multiplicator(object):
    def threetimes(self, n):
        """
        >>> Multiplicator().threetimes(5)
        15
        """
        return 3 * n

if __name__ == "__main__":
    suite = doctest.DocTestSuite()
    xmlrunner.XMLTestRunner().run(suite)
Django support
In order to plug XMLTestRunner into a Django project, add the following to your settings.py:
TEST_RUNNER = 'xmlrunner.extra.djangotestrunner.XMLTestRunner'
Also, the following settings are provided so you can fine tune the reports:
Contributing
We are always looking for good contributions, so please just fork the repository and send pull requests (with tests!).
If you would like write access to the repository, or become a maintainer, feel free to get in touch.
Testing changes with tox
Please use tox to test your changes before sending a pull request. You can find more information about tox in its documentation.
$ pip install tox

# basic sanity test, friendly output
$ tox -e pytest

# all combinations
$ tox
Re: DB API 2.0 and transactions
- From: Magnus Lycka <lycka@xxxxxxxxx>
- Date: Mon, 13 Jun 2005 12:23:48 +0200
I'm CC:ing this to D'Arcy J.M. Cain. (See comp.lang.python for prequel D'Arcy.)
Christopher J. Bottaro wrote:
Check this out...
<code>
import pgdb
import time
print time.ctime()
db = pgdb.connect(user='test', host='localhost', database='test')
time.sleep(5)
db.cursor().execute('insert into time_test (datetime) values (CURRENT_TIMESTAMP)')
db.commit()
curs = db.cursor()
curs.execute('select datetime from time_test order by datetime desc limit 1')
row = curs.fetchone()
print row[0]
</code>
<output>
Fri Jun 10 17:27:21 2005
'2005-06-10 17:27:21.654897-05'
</output>
Notice the times are exactly the same instead of 5 sec difference.
What do you make of that? Some other replies to this thread seemed to indicate that this is expected and proper behavior.
This is wrong. It should not behave like that if it is to follow the SQL standard which *I* would expect and consider proper.
I don't think the SQL standard mandates that all evaluations of CURRENT_TIMESTAMP within a transaction should be the same. It does mandate that CURRENT_TIMESTAMP is only evaluated once in each SQL statement, so "CURRENT_TIMESTAMP=CURRENT_TIMESTAMP" should always be true in a WHERE statement. I don't think it's a bug if all timestamps in a transaction are the same though. It's really a bonus if we can view all of a transaction as taking place at the same time. (A bit like Piper Halliwell's time-freezing spell in "Charmed".)
The problem is that transactions should never start until the first transaction-initiating SQL statement takes place. (In SQL-92, all standard SQL statements are transaction-initiating except CONNECT, DISCONNECT, COMMIT, ROLLBACK, GET DIAGNOSTICS and most SET commands (SET DESCRIPTOR is the exception here).) Issuing BEGIN directly after CONNECT, ROLLBACK and COMMIT is in violation of the SQL standards.
A workaround for you could be to explicitly start a new transaction before the insert as PostgreSQL (but not the SQL standard) wants you to do. I suppose you can easily do that using e.g. db.rollback(). If you like, I guess you could do db.begin=db.rollback in the beginning of your code and then use db.begin().
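Incidentally, the lazy-BEGIN behaviour being argued for is observable in Python's own sqlite3 module, which (for data-modifying statements at least) only opens a transaction when the first such statement arrives and does not re-BEGIN after COMMIT:

```python
# sqlite3 defers BEGIN until the first data-modifying statement, and does not
# issue a fresh BEGIN right after COMMIT -- the behaviour proposed for pgdb.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE time_test (x INTEGER)')
conn.commit()

print(conn.in_transaction)                 # False: no premature BEGIN
conn.execute('INSERT INTO time_test VALUES (1)')
print(conn.in_transaction)                 # True: BEGIN happened lazily, here
conn.commit()
print(conn.in_transaction)                 # False again: COMMIT doesn't re-BEGIN
```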
Another option would be to investigate if any of the other PostgreSQL drivers have a more correct behaviour. The non-standard behaviour that you describe is obvious from the pgdb source. See: (Comments added by me.)
class pgdbCnx:

    def __init__(self, cnx):
        self.__cnx = cnx
        self.__cache = pgdbTypeCache(cnx)
        try:
            src = self.__cnx.source()
            src.execute("BEGIN")            # Ouch!
        except:
            raise OperationalError, "invalid connection."

    ...

    def commit(self):
        try:
            src = self.__cnx.source()
            src.execute("COMMIT")
            src.execute("BEGIN")            # Ouch!
        except:
            raise OperationalError, "can't commit."

    def rollback(self):
        try:
            src = self.__cnx.source()
            src.execute("ROLLBACK")
            src.execute("BEGIN")            # Ouch!
        except:
            raise OperationalError, "can't rollback."

    ....
This should be changed to something like this (untested):
class pgdbCnx:

    def __init__(self, cnx):
        self.__cnx = cnx
        self.__cache = pgdbTypeCache(cnx)
        self.inTxn = False                          # NEW
        try:
            src = self.__cnx.source()
            # No BEGIN here
        except:
            raise OperationalError, "invalid connection."

    def commit(self):
        try:
            src = self.__cnx.source()
            src.execute("COMMIT")
            self.inTxn = False                      # Changed
        except:
            raise OperationalError, "can't commit."

    def rollback(self):
        try:
            src = self.__cnx.source()
            src.execute("ROLLBACK")
            self.inTxn = False                      # Changed
        except:
            raise OperationalError, "can't rollback."

    def cursor(self):
        try:
            src = self.__cnx.source()
            return pgdbCursor(src, self.__cache, self)  # Added self
        except:
            raise pgOperationalError, "invalid connection."

    ....

class pgdbCursor:

    def __init__(self, src, cache, conn):           # Added conn
        self.__conn = conn                          # New
        self.__cache = cache
        self.__source = src

        self.description = None
        self.rowcount = -1
        self.arraysize = 1
        self.lastrowid = None

    .... (execute calls executemany) ....

    def executemany(self, operation, param_seq):
        self.description = None
        self.rowcount = -1

        # first try to execute all queries
        totrows = 0
        sql = "INIT"
        try:
            for params in param_seq:
                if params != None:
                    sql = _quoteparams(operation, params)
                else:
                    sql = operation

                if not self.__conn.inTxn:           # Added test
                    self.__source.execute('BEGIN')
                    self.__conn.inTxn = True

                rows = self.__source.execute(sql)
                if rows != None:  # true if __source is NOT a DQL
                    totrows = totrows + rows
                else:
                    self.rowcount = -1
I guess it would be even better if the executemany method checked that it was really a transaction-initiating SQL statement, but that makes things a bit slower and more complicated, especially as I suspect that the driver permits several SQL statements separated by semicolon in execute and executemany. We really don't want to add a SQL parser to pgdb. Making all statements transaction-initiating is at least much closer to standard behaviour than always starting transactions prematurely. I guess it will remove problems like the one I mentioned earlier (repeated below) in more than 99% of the cases.
This bug has implications far beyond timestamps. Imagine two transactions running with isolation level set to e.g. serializable. Transaction A updates the AMOUNT column in various rows of table X, and transaction B calculates the sum of all AMOUNTs in X.
Let's say they run over time like this, with | marking transaction start and > commit (N.B. ASCII art follows, you need a fixed font to view this):
....|--A-->.......|--A-->........
............|-B->.........|-B->..
This works as expected... The first B-transaction sums up AMOUNTs after the first A-transaction is done etc, but imagine what happens if transactions implicitly begin too early as with the current pgdb:
|-----A-->|---------A-->|-------
|------------B->|----------B->|-
This will cause B1 to sum up AMOUNTs before A1, and B2 will sum up AMOUNTs after A1, not after A2.
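The arithmetic of the diagrams can be modelled in a few lines (a toy model of "a transaction sees the commits made before its BEGIN" — real isolation involves locks and snapshots, but the visibility arithmetic is the same):

```python
# Committed history as (commit_time, amount); a transaction "sees" exactly the
# commits that happened before its BEGIN.
history = [(5, 100)]                  # transaction A commits amount 100 at t=5

def visible_sum(begin_time):
    return sum(amount for t, amount in history if t <= begin_time)

# B's SELECT runs at t=6 in both cases; only the BEGIN time differs.
premature = visible_sum(0)   # BEGIN fired at connect/commit time (pgdb's bug)
lazy = visible_sum(6)        # BEGIN fired at B's first own statement
print(premature, lazy)       # 0 100 -- the premature B sums up pre-A amounts
```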
- Follow-Ups:
- Re: DB API 2.0 and transactions
- From: Stuart Bishop
- References:
- DB API 2.0 and transactions
- From: Christopher J. Bottaro
- Re: DB API 2.0 and transactions
- From: Magnus Lycka
- Re: DB API 2.0 and transactions
- From: Christopher J. Bottaro
06, 2007 10:00 AM

Ruby implementations are a dime a dozen nowadays. There are already two implementations of Ruby for the JVM (JRuby and XRuby), and .NET is catching up as well. IronRuby has caused a lot of buzz in the past month, but until it's released in late July 2007, it's not known just how complete it'll be.

A compliant Ruby parser is a big part of a Ruby implementation, and using Ruby.NET's parser surely saves the IronRuby team a lot of work.
Since the last release we have added support for interoperability with other .NET languages, so that components developed using other .NET languages can conveniently use classes implemented using Ruby.NET and vice versa.

An example for this is shown with a Ruby class that's used in C# code. The Ruby class:
class Person
def init(name, age)
@name = name
@age = age
end
def print()
puts "#{@name} is #{@age}"
end
end
Person bruce = new Person();
bruce.init("Bruce", 42);
bruce.print();
We will soon be moving to a more traditional open source model of community contribution to our code base and will be calling for volunteer developers. If anyone has any experience in managing that kind of process, we'd be interested in your input.

In light of recent doubts about IronRuby, and the fact that over the past year many Ruby runtime developers have been hired to work on their projects (JRuby, XRuby, Rubinius, IronRuby), .NET Open Source developers with an interest in Ruby might want to look into this.
problem with from win32com.client import Dispatch
- From: muttu2244@xxxxxxxxx
- Date: 14 Nov 2005 04:27:57 -0800
Hi all
Am trying to read an html page using win32com in the following way.
from win32com.client import Dispatch
ie = Dispatch("InternetExplorer.Application")
ie.Navigate("")
doc = ie.Document
print doc.body.innerHTML
With this code I am easily able to read the mentioned page from the PythonWin interactive window, but if I write the same code in some *.py file, I am not able to read the page.
It throws the following error when I compile the file.
Traceback (most recent call last):
File
"C:\Python23\Lib\site-packages\pythonwin\pywin\framework\scriptutils.py",
line 310, in RunScript
exec codeObject in __main__.__dict__
File "C:\Python23\Lib\site-packages\Script1.py", line 14, in ?
ie.Quit()
File "C:\Python23\Lib\site-packages\win32com\client\__init__.py",
line 456, in __getattr__
return self._ApplyTypes_(*args)
File "C:\Python23\Lib\site-packages\win32com\client\__init__.py",
line 446, in _ApplyTypes_
return self._get_good_object_(
com_error: (-2147352567, 'Exception occurred.', (0, None, None, None, 0, -2147467259), None)
If anybody has the answer for this, please help me.
Thanks and regards
Yogi
- Follow-Ups:
- Re: problem with from win32com.client import Dispatch
- From: calfdog@xxxxxxxxx
Re: How to pass a struct to a function
- From: Keith Thompson <kst-u@xxxxxxx>
- Date: Wed, 28 Mar 2007 14:49:54 -0700
"Bill" <bill.warner@xxxxxxxxxxxx> writes:
On Mar 28, 9:48 am, Chris Dollin <chris.dol...@xxxxxx> wrote:
Bill wrote:
I am trying to pass a struct to a function. How would that best be
accomplished?
Assuming that the function has an argument of the right structure type,
write an expression evaluating to your struct -- the name of a variable
of that struct type is popular -- in the appropriate argument position
in a call to that function.
What am I missing?
Please don't quote signatures. Trim quoted material to what's
necessary for your followup to make sense to someone who hasn't seen
the parent article.
Here is what I have so far, won't even compile. And I am:
#include <stdio.h>
#include <string.h>
Your program doesn't use anything from <string.h>, so this #include
directive is unnecessary. But, as we'll see, it *should*, so yes,
you'll need this #include directive.
int AddEntry(struct entry);
At this point, you haven't declared a type "struct entry". Because
it's an incomplete struct type, there are some interesting details
about how this declaration is handled, but that's not relevant; this
is not what you want, and you'll need to fix it.
struct{
char fName[51];
char lName[51];
char Phone[13];
}Entry;
Now you declare a struct object, but you *still* haven't declared a
type "struct entry". In fact, you don't declare "struct entry" (other
than as an incomplete type) anywhere in your program. What you've
done here is (a) declared an anonymous struct type, and (b) declared a
single object of that type, named "Entry". If all you want is a
single object, that might be sensible thing to do (or you might as
well declare the members as individual variables). But since you want
to pass it as an argument to a function, you should give the type a
name when you declare it.
Change the above declaration to:
struct entry {
char fName[51];
char lName[51];
char Phone[13];
};
and place it *above* the function declaration, so AddEntry knows what
a "struct entry" is. (More precisely, so the compiler knows what a
"struct entry" is when it processes the function declaration.)
(There's another common style that uses a typedef to give a structure
type a one-word name, but I won't get into that here.)
This creates a *named* type, called "struct entry". Note that it only
declares the type; unlike your original declaration, it doesn't
declare a variable of that type. I made that change because the
object (note: I'm using the terms "object" and "variable"
interchangeably) doesn't need to be global (more precisely, at file
scope). You can declare it inside the main() function.
int main()
Ok, but "int main(void)" is more explicit, and is preferred.
{
int retCode;
Add here:
struct entry Entry;
Though you might pick a more descriptive name than "Entry". In a real
program, you're probably going to have more than one object of that
type; each one should have a name that describes that object.
Entry.fName = "Fred";
Entry.lName = "Flintstone";
Entry.Phone = "123-456-7890";
If the members were char* pointers, you could do this, but you can't
assign to an array. Read section 6 of the comp.lang.c FAQ,
<>. You can use the strcpy function here:
strcpy(Entry.fName, "Fred");
retCode = AddEntry(Entry);
You don't do anything with the value of retCode. That's ok in a small
sample program, but in a real program you'll probably want to do
something based on the result.
You should add "return 0;" before the closing brace of the main
function. It's not strictly required, but it's a good idea. You can
do more elaborate things if you want your program to report whether it
succeeded or failed.
}
//int AddEntry(char fName[51], char lName[51], char Phone[13])
It's best to avoid "//" comments when posting to Usenet. The C99
standard supports them, as do most exsting pre-C90 compilers (at least
in some mode), but they're not 100% portable. Also, Usenet software
commonly wraps long lines. Wrapping a "//" comment typically creates
a syntax error; a wrapped "/* ... */" comment typically is still a
valid comment.
int AddEntry(struct entry)
In a function declaration (without a body), you only need the types of
the parameters; the names are optional. In a function definition,
though, you have to provide names for the parameters, so the body of
the function can refer to them.
Let's call it "arg", for "argument":
int AddEntry(struct entry arg)
{
FILE *fp;
fp = fopen("AppData.dat", "a"); /* open file for writing */
You're opening the file for appending, not just writing. Either
mention that in the comment, or just drop the comment; any reader who
doesn't know what fopen() does is going to have bigger problems.
Comments like that are ok if you're trying to convince someone that
you know what you're doing, but they're not useful in providing actual
information to the reader.
You don't check whether the fopen() call succeeded. It returns a null
pointer (NULL) if it fails. Always check the result of fopen(), and
take some corrective action if it fails -- even if the corrective
action is to terminate the program with an error message.
fprintf(fp, "%s", fName); /* write some info */
fprintf(fp, "\t");
fprintf(fp, "%s", lName);
fprintf(fp, "\t");
fprintf(fp, "%s", Phone);
fprintf(fp, "\n");
The references to fName, lName, and Phone assume that they exist as
individual variables. They don't; they're members of a structure.
Now that that structure has been declared properly, you can refer to
them as arg.fName and so forth.
These six printf statements can be reduced to one:
fprintf(fp, "%s\t%s\t%s\n", arg.fName, arg.lName, arg.Phone);
fprintf() can fail. Combining the six printfs into one makes it
easier to check the result, and take some corrective action on
failure.
fclose(fp); /* close the file */
Another vacuous comment.
And fclose() can also fail. Consult the documentation for each of
these functions to find out how they indicate failure; it's different
for different functions.
return 1;
You always return the same value. If that's all you're going to do,
the function might as well not return a value at all (i.e., you can
declare it to return void rather than int). Or you can use the return
value to indicate to the caller whether the function succeeded or
failed. Decide how you want to do this, and *document* your
convention, for example in a comment associated with the function
declaration. One convention is to return 0 (false) for failure, 1
(true) for success. Another convention, since there are more ways to
fail than to succeed, is to return 0 for success, and some non-zero
value for failure; different values can indicate different kinds of
failure.
}
The values 51 and 13 are "magic numbers"; it's not at all clear what
they mean, or why you chose those particular values. Declare them as
constants, probably as preprocessor macros:
#define NAME_MAX 51
#define PHONE_MAX 13
Finally, you should learn to understand your compiler's diagnostic
messages (which you didn't show us). The details will vary depending
on which compiler you're using, but any decent compiler will print
messages that will tell an experienced programmer what the problem is,
though they may not always be obvious to a newbie. (They may not be
obvious to an experienced programmer if the compiler writer has
written poor messages, which does happen.)
*Before* you start correcting your code, compile it again, and pay
close attention to what your compiler tells you. If possible, set
options on your compiler to tell it to print lots of verbose warnings.
You'll probably find that your compiler's error and warning messages
will tell you just about everything about your code that I've just
told you, *if* you can understand how it's telling you. Take a look
at what your compiler tells you, and compare it to what I've told you
here.
Note that syntax errors can often confuse the compiler, and make some
of the following messages meaningless. A syntax error is, for
example, something like a missing semicolon or a mismatched
parenthesis. For other errors, like type mismatches, it's generally
easier for the compiler to recover and guess what you meant in the
following code. Fix the syntax errors first, so you can get better
messages for other errors. The very first error message the compiler
gives you is the most reliable; following messages may be the result
of the compiler's confusion.
(For gcc, "-ansi -pedantic -Wall -W -O3" is a good start. ("-O3"
enables optimization; a side effect is that the compiler performs more
analysis and can detect more problems in your code.) For other
compilers, check your documentation.)
--
Keith Thompson (The_Other_Keith) kst-u@xxxxxxx <>
San Diego Supercomputer Center <*> <>
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
- Follow-Ups:
- Re: How to pass a struct to a function
- From: Bill Pursell
- References:
- How to pass a struct to a function
- From: Bill
- Re: How to pass a struct to a function
- From: Chris Dollin
- Re: How to pass a struct to a function
- From: Bill
GETFSSTAT(2) OpenBSD Programmer's Manual GETFSSTAT(2)
NAME
getfsstat - get list of all mounted file systems
SYNOPSIS
#include <sys/param.h>
#include <sys/mount.h>
int
getfsstat(struct statfs *buf, size_t bufsize, int flags);
OpenBSD 2.6 June 9, 1993 2 | http://www.rocketaware.com/man/man2/getfsstat.2.htm | crawl-002 | en | refinedweb |
Re: Determining the Main Class
- From: Daniele Futtorovic <da.futt.news@xxxxxxxxxxxxxxx>
- Date: Mon, 11 Aug 2008 20:55:47 +0200
On 11/08/2008 19:54, Stefan Ram allegedly wrote:
Jason Cavett <jason.cavett@xxxxxxxxx> writes:
Is there a way to determine (maybe through reflection or something?)
where a Java application is being run from (what class contains the
main method)?
Examine the stack trace.
public class Main {
    public static void method() {
        System.out.println(java.util.Arrays.toString(
            java.lang.Thread.currentThread().getStackTrace()));
    }
    public static void main(final java.lang.String[] args) {
        Main.method();
    }
}

[java.lang.Thread.getStackTrace(Unknown Source), Main.method(Main.java:24), Main.main(Main.java:28)]
In the stack trace above, it is the last entry. This might
not always be so. Also, there might be other methods with
the signature of main in the stack trace. So, some care has
to be taken. Often, it should be the last method in the
stack trace with the signature of main.
The code above also assumes that the current thread is
the main thread.
I've investigated that a bit, trying to get the main Thread through
searching the ThreadGroups, and also via Thread.getAllStacktraces.
There's a problem with that approach. To wit, that if the main Thread
isn't *active*, you don't get it either way (at least that's what my
tests indicate).
I would venture to say that in a typical application, the main Thread isn't
active. So this approach might not work.
As for searching for defined classes with a main method, that's
pointless, since many classes can have a main defined.
As Mark said, it would work with a JAR (via its manifest). Ideally,
you'd have to know the command the java executable was started with to
deal with all cases...
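For what it's worth, here is a minimal sketch of the stack-trace idea applied across all live threads. The class and method names are mine, not from the thread, and it inherits the caveat just mentioned: once the main thread has terminated, it finds nothing.

```java
import java.util.Map;

public class MainClassFinder {

    /** Returns the class whose main() is the bottom frame of some
     *  live thread's stack, or null if no such thread exists. */
    static String findMainClass() {
        for (Map.Entry<Thread, StackTraceElement[]> e
                : Thread.getAllStackTraces().entrySet()) {
            StackTraceElement[] frames = e.getValue();
            if (frames.length == 0) {
                continue;
            }
            // getStackTrace() lists the most recent call first, so the
            // entry point of the thread is the *last* element.
            StackTraceElement bottom = frames[frames.length - 1];
            if ("main".equals(bottom.getMethodName())) {
                return bottom.getClassName();
            }
        }
        return null; // main thread already gone (or never existed)
    }

    public static void main(String[] args) {
        System.out.println(findMainClass());
    }
}
```

And as noted above, this can misfire if some other thread happens to be sitting in a method named main, so treat the result as a heuristic.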
--
DF.
- Follow-Ups:
- Re: Determining the Main Class
- From: Arne Vajhøj
- References:
- Determining the Main Class
- From: Jason Cavett
XML may be human readable, but writing code to read and manipulate it can be a tedious chore. The lazily efficient coder has no time for tedious chores, and thanks to XmlBeans and E4X, a lot of them can be eliminated.
Let's start with XmlBeans, now at version 2.0. XmlBeans originated within BEA and was donated to Apache. The world is full of XML/Java data binding tools; what makes XmlBeans different is that it is useful for many if not most classes of XML-related coding, from low-level node traversal over non-data elements (so you can dig out those comments programmatically) to abstracted, data-oriented manipulation.
Working with XmlBeans starts with an XML Schema. Let's take an example schema for a simple application, sites.xsd. You'll find it, with the rest of the examples, in the download for this article.
Our example schema specifies a "sites" element, which can contain a number of "site" elements, which in turn can contain any number of "rating" or "comment" elements, both of which have an email address attribute identifying the author of the rating or comment.
Get XmlBeans from the Apache Web site and install it. Remember to set the environment variable XMLBEANS_HOME to where you install it and add the $XMLBEANS_HOME/bin directory from the distribution to your path. This will make the XmlBeans commands available, most important of these being "scomp", the schema compiler.
Now if we run
scomp -out sites.jar sites.xsd
We get to see
Time to build schema type system: 1.185 seconds Time to generate code: 0.131 seconds Time to compile code: 1.179 seconds Compiled types to: sites.jar
Scomp has built a type system, generated Java code for it and compiled it. We only get a sites.jar file as output because unless told not to, scomp just generates the precompiled jar file. If you want to see the source of the code it generates, you can run
scomp -srconly -src srcdir sites.xsd
Now we can see what has been created. The generated classes are all in the com.example.sites.site package; this has been derived from the target namespace of the schema. This is defined in the opening lines of the schema file:
<?xml version="1.0" encoding="utf-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    targetNamespace="http://example.com/sites/site"
    xmlns:sites="http://example.com/sites/site">
If your schema doesn't have a target namespace, the generated classes will appear in a package called noNamespace. For the example, we have a SitesDocument class, and classes for each of the types defined in the schema, Sites, Site, Comment, Rating and Email. All of these generated classes extend the XmlBeans foundation class, XmlObject. We'll get back to that but for now we'll get straight to parsing an XML document...
import com.example.sites.site.*;
import java.io.*;
import org.apache.xmlbeans.*;

public class Example1 {
    public static void main(String[] args) {
        try {
            SitesDocument sd = SitesDocument.Factory.parse(new File("./sites.xml"));
            /* ......... */
        }
        catch (IOException e) {
            System.err.println("IOException:" + e);
        }
        catch (XmlException e) {
            System.err.println("XmlException:" + e);
        }
    }
}
...and that's it. Problems in parsing will throw an XmlException. We are ready to do some processing. All of the generated classes have a static Factory to allow for the creation of instances, either by parsing or programmatic creation. Here we've used the simplest parse method in the factory.
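The sites.xml file being parsed never appears in the article; a document matching the schema described earlier might look something like this (the namespace URI, prefix and all of the values are illustrative assumptions, not taken from the article):

```xml
<sit:sites xmlns:
  <sit:site src="http://www.example.com/">
    <sit:rating email="alice@example.com" rated="4" ratedon="2005-06-01"/>
    <sit:comment email="bob@example.com">Loads fast, nice layout.</sit:comment>
  </sit:site>
</sit:sites>
```

Only the element and attribute names follow the schema description; every value here is made up.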
Let's add some code to iterate through the site entries and print out the comments and ratings. First we need to get the root element of our document.
Sites sites = sd.getSites();
Sequences in XML Schemas are represented by arrays in XmlBeans, so we can take that array and iterate through it (note that we're using Java SE 5.0's new for construct):
for (Site s : sites.getSiteArray()) {
Elements and their attributes have methods generated for them to allow you to get and set their values; let's get the src attribute of our site and print it.
System.out.println(s.getSrc());
And in the same way, we can get the Rating and Comment arrays within the Site element and iterate through them.
    for (Rating r : s.getRatingArray()) {
        System.out.println(r.getEmail() + " rated the site "
            + r.getRated() + " on " + r.getRatedon());
    }
    for (Comment c : s.getCommentArray()) {
        System.out.println(c.getEmail() + " said "
            + c.getStringValue());
    }
}
1
Rick Simms - 03/11/05
I always get an IO exception - even if I include my sdk bin and xml bin in a -cp parameter - in your example I used
scomp -out sites.jat sites.xsd
and
scomp -cp C:\j2sdk1.4.2_09\bin;D:\xmlbeans-2.0.0\bin -out sites.jat sites.xsd
I always get -
Time to build schema type system: 1.057 seconds
Time to generate code: 0.17 seconds
java.io.IOException: CreateProcess: D:\XMLExamples\xmlbnat\javac @D:\DOCUME~1\dz
8fnk\LOCALS~1\Temp\javac62757 error=2
null
java.io.IOException: CreateProcess: D:\XMLExamples\xmlbnat\javac @D:\DOCUME~1\dz
8fnk\LOCALS~1\Temp\javac62757 error=2
at java.lang.Win32Process.create(Native Method)
at java.lang.Win32Process.<init>(Unknown Source)
at java.lang.Runtime.execInternal(Native Method)
at java.lang.Runtime.exec(Unknown Source)
at java.lang.Runtime.exec(Unknown Source)
at java.lang.Runtime.exec(Unknown Source)
at org.apache.xmlbeans.impl.tool.CodeGenUtil.externalCompile(CodeGenUtil
.java:229)
at org.apache.xmlbeans.impl.tool.SchemaCompiler.compile(SchemaCompiler.j
ava:1121)
at org.apache.xmlbeans.impl.tool.SchemaCompiler.main(SchemaCompiler.java
:367)
BUILD FAILED
-------------------------------------------------
I have been able to open the temp file - add spaces for example it has -dD:\DOCUME~1\dz8fnk\LOCALS~1\Temp\xbean5761.d\classes
I add spaces before or after switches file names etc and get it to at least compile with javac.
Any help would be appreciated - with this method I do not get the jar and when I try to create/run code it can not find the import for apache.
2
Gaurav Saxena - 24/07/06
I get the same exception while working with xmlbeans2.2.0
here is the exception trace
EarthLink to outsource call center jobs
Posted by
CodeWarrior
in
on January 11, 2004 at 12:58 PM
"Internet service provider EarthLink is laying off most of its call center employees and outsourcing the work to domestic and overseas companies, in an effort to cut costs.. "
--------------------SNIP--------------------------------------------------------
OK, this is not an RIAA story, or even a music story, I concede that, but is it important? I think yes. The reason is that it reflects what seems to be a never-ending trend to outsource our jobs in every sector possible. Many years ago, I worked in tech support, and I can tell you that the feeling of every company I have dealt with is that they HATE having to provide tech support, and if they could eliminate ALL tech support, they would. They see tech support as an outflow of money that gains no return (I know..it's stupid...but I guarantee you that's the real fact about the matter).
This "jobless recovery" nonsense has its fallacy highlighted by the enormous hemorrhage of jobs from this country, with no hope of them ever returning. We have no apparent source of a transfusion for this hemorrhage either, and I think the analogy here is quite apt. One cannot exist without blood, and I'll be damned if I can see how we as a country can continue to sustain this economy as more and more jobs are exported with a one-way ticket.
FULL ARTICLE
HERE
DeadMan2003
Date:
January 11, 2004 @ 1:16 PM
"Hi this is Earthlink support can I help
you?"
"Yes I have just got Earthlink installed and
it comes with this 'Earthlink Technofast'
cable modem and need help in setting it up"
"Technofast?" Sounds of rifling through
papers. "Is that what Earthlink supplied you
with?"
"Yes"
"Hmm. I don't seem to have anything here
about that model of modem. Are you sure it's
Earthlink?"
"Yes I am sure it has 'Earthlink Technofast'
written on it"
"Well I don't have anything here about that
model. Hold on. Let me check with my
supervisor" Sound of some crappy music plays
while you are put on hold for 10 minutes.
"Hello? Are you still there?"
"Yes I've been waiting for 10 minutes now at
$1 a minute!"
"Aah sorry about that. I'm afraid we cannot
find any information about that model of
modem. It must be something new. We have not
had any memo about it yet. Can you try
calling back next week? We are still setting
up the computer system here"
"What? A week? You must be kidding me? Can I
speak to the supervisor?"
"I'm sorry I cannot do that"
"Why not?"
"Well..." long pause "He does not speak good English"
"Eh? Why not?"
"He's from Bangladesh"
"Huh? Since when have US companies been
hiring non-English speaking staff for
English speaking customer services?"
"Since they started outsourcing their
support to other countries sir"
"What? Where the hell are you?"
"I'm in Bangladesh sir"
"What? Your shittin me right?"
"No sir"
"I don't believe this..." hangs up in
disgust and throws modem out the window.
mroop
Date:
January 11, 2004 @ 1:53 PM
"What? A week? You must be kidding me? Can I
speak to the supervisor?"
"I'm sorry I cannot do that"
"Why not?"
"Well..." long pause "He does not speak good English"
This scenario is ridiculous. Of course the
supervisors speak English. I have always
received quality service from India based
customer service reps. That being said, this
outsourcing trend is very disturbing. Our
middle class is shrinking and we are slowly
becoming a third world country.
captdunsel
Date:
January 11, 2004 @ 2:01 PM
Maybe you have received quality
service from India, but that Arescom modem I
got from MSN required I get tech support
from somewhere in Mexico and they were as
clueless as I was.
namelessone
Date:
January 11, 2004 @ 2:29 PM
I understand Dell (or was it Gateway?)
contracted a firm in India to do their tech
support... and ended up having to bring it
back this year due to the volume of
complaints of bad service.
Doing it cheaper does not necessarily equate
to being better. Sadly, there is that
ever-increasing tendency to try to make hay
at any price, and if they get complaints,
well, they made good money while it lasted.
Once again, bottom-line wins over
common-sense. Disgusting.
mroop
Date:
January 11, 2004 @ 2:35 PM
"be that you have received quality service
from India"
Fair enough, but to suggest the supervisors
can't speak English is still ridiculous.
"I understand Dell (or was it Gateway?)
contracted a firm in India to do their tech
support... and ended up having to bring it
back"
That makes me happy. May all the CEO's
outsourcing jobs for higher profit margins
and stock prices rot in hell for eternity.
CodeWarrior
Date:
January 11, 2004 @ 2:51 PM
Something consumers don't know is that at
some places...if a "real supervisor" is not
available...you really get just another
tech....assuming the role of a
supervisor...and, most often, even if you
are getting a "real supervisor"....they are
all operating out of the same playbook and
you will get the same answer.
One thing anyone who calls tech support has
to understand is this....techs (as a former
tech I know) are monitored, watched,
recorded, and their conversations reviewed
by QA ("quality assurance")...and really
gauged by low call times...so they are
stressed to begi with. As I have
said...these companies don't even want to
have tech support, and want to get buy with
the lowest number of live tech calls
possible, and have them all handled with
minimum call times. At an unnamed company I
worked at..there were big digital clocks
indicating average call wait times, and if
the times got too long...you got crap about
it...
Another thing is that, at its best, tech
support is a crappy job...and often, by the
time you get a call, the "customer" is
cursing you out because their problem didn't
get resolved (and the reason is usually do
to company guidelines that you must adhere
to and don't control)..so, I feel sorry for
these poor guys in India, dealing in not a
native language, with people who are already
pissed off and just demand someone bring a
new box out to them (usually not gonna
happen).
As for supervisors speaking Engish...I agree
w/ mroop that all call center supervisors
(or level two techs) probably do speak
English...but some of the accents that
people in India have, can make it difficult
to understand them.
I blame the companies for this outsourcing
nonsense. For example, Dell used to have
construction/assembly facilities in Round
Rock Texas...they moved these to China....
The allegiance of these corps is the bottom
line and paying the CEO...they have no
loyalty or fidelity to the citizens of the
USA....
CodeWarrior
Date:
January 11, 2004 @ 2:54 PM
typos...begi with= begin with
want to get buy = want to get by
mroop
Date:
January 11, 2004 @ 3:00 PM
I also managed a small call center for a
small insurance company. The reps take a lot
of crap and I was proud of my reps who
handled it with grace under pressure. In the
summer on Fridays I bought ice cream and all
the toppings so they could make their own
ice cream sundaes. : ) The rude ones got
sent packing.
purfus
Date:
January 11, 2004 @ 3:06 PM
Wow I bet this will do wonders for our
unemployment rate. Seems a bit absurd that
a firm would be allowed to grab kids fresh
out of high school and train them to know
nothing more than what they need to work for
them. Then suddenly drop them back into
society with no education and no more
prospects than working at McDonald's or maybe
someday being lucky enough to get another
job on the phone. It really seems to me
that that firm has hurt society more than
helped and operations like this should be
avoided in the future.
CodeWarrior
Date:
January 11, 2004 @ 3:09 PM
mroop...when I did supervision, we had
software that we could view the "after call"
time for each tech, their average call
times, just about everything they did...you
could literally sit there and watch dozens
of techs and do micromanagement and
evaluation of their performance on a second
by second basis (call times were measured to
the hundredth of a second)....
I can recall a call...will never forget
it...where we had a guy demanding to talk to
a supervisor (ME)...he called from LA, and
tried to tell me there was not a computer in
LA where he could make a Windows startup
floppy disk...
I suggested a library, and he claimed there
were no libraries near him...
he was demanding we just send him a new
computer....he refused to troubleshoot (this
was part of his legal agreement...i.e. the
agreement to troubleshoot on the phone)....I
remember telling him to "Have a nice day"
and as I was putting the phone down...a
torrent of the foulest obscenities you can
imagine.....
"so it goes..."- K. Vonnegut
CodeWarrior
Date:
January 11, 2004 @ 3:12 PM
oh yeah...and I can recall that about the
only break the techs had from the stress and
monotony in their cubicles...was downloading
music and movies and burning them to CDs
while they were doing tech support....
Occasionally, a box got audited and they
removed everything...but I remember the guy
that got busted had a KORN picture for his
desktop...Korn screensaver....and wore Korn
t-shirts.....
He wouldn't have been a more obvious target
for box audit if he had sat there with his
headset on mute, singing along with his
downloaded MP3s....
rjosborn
Date:
January 11, 2004 @ 4:24 PM
Believe it or not... There are more call
centers located off our shores than any of
us can realize all ready. Only recently,
have any companies started to announce that
it is happening.
mroop
Date:
January 11, 2004 @ 4:36 PM
"(call times were measured to the hundredth
of a second)...."
Holy cow. We didn't have any monitoring
software at all and the pc's were not online
either. It was a very basic operation.
CNN is doing a story right now. They showed
these guys:
compmore
Date:
January 11, 2004 @ 4:37 PM
Code I'm with you 100%. I worked at a call
center for MSN internet access. Microsoft
has no employees at any of its call
centers. It's all private companies
contracted by Microsoft. I can't tell you
the number of times I had to hang up on
abusive (not angry, abusive) customers but
we were told that we had to take the abuse
or risk getting fired. we were not allowed
to pass it to a supervisor unless the
customer specifically asked. In our center
supervisors seldom if ever got on the
phones. I got to play supervisor many
times. I remember being on a call and the
agent next to me was on a call and each of
our customers wanted a supervisor. We just
passed each other our headsets and took the
calls. this is the website I built a couple
years ago detailing how it really works.
it's a little outdated but is still very
accurate.
dave109100
Date:
January 11, 2004 @ 5:41 PM
For you Bush supporters out there, why
doesn't he do something about this? I really
want to know.
compmore
Date:
January 11, 2004 @ 5:48 PM
you honestly think a democrat or independent
would stop it?? It's an issue of corporate
greed, money, and special interests. doesn't
matter who's in office.
hamjay711
Date:
January 11, 2004 @ 7:23 PM
I actually work for an outsourcing company
for Earthlink. There are some things that
you people state that are just wrong.
Earthlink does provide minimal, but FREE,
tech support. Sups, real sups, are always
available, and problems do get solved. The
only BS thing that I can think of is that we
are monitered by call time, which is what
determines job security.
I don't like what Earthlink is doing, but
hey, I still have a job, I get $10 an hour,
and it keeps me happy. One thing that you
people do not realize is that many of the
techs you call in to know enough to fix your
computer, they just can't do anything about
it because the company doesn't support it.
And because your computer is hosed, your OS
is taking a crap, or you refuse to
upgrade/update is no excuse to blame it on
techs.... And I am a kid fresh out of High
School going to college, but I DID grow up
in the information age so I did self teach
myself computers.
compmore
Date:
January 11, 2004 @ 8:16 PM
hamjay you are right about many things. some
places do give you supervisors but many
don't. check my link above. but
supervisors can do no more for you than an
agent.
You are right that there are a lot of people
out there who screw up their OS and blame it
on the ISP software. question are you
allowed to tell a customer that they can set
up their earthlink account through DUN or
do you have to tell them to use the
software? unless earthlink has changed they
can go online without software.
Earthlink may have many good points (as well
as any callcenter) however the call center
principle hurts overall customer service no
matter what the companies tell you. As you
pointed out, call time restrictions along
with support boundaries restrict the service
a customer can get. Besides I still
maintain over all that many of the techs in
any tech support for internet access do not
know a whole lot about the internal workings
of a computer.
CodeWarrior
Date:
January 11, 2004 @ 9:08 PM
compmore....one of my funniest and truest
memories, was one time, I really pissed off
a customer...(everyone had to be on the
phone that day as we were short
handed)....they called back and got another
tech...demanded to ask for a supervisor, and
by that time, the calls had slacked a bit
and I was doing supervisor duties...he put
his hands up for me to take the call as an
escalated call...I recalled the person's
voice from earlier....they started bitching
about this rude tech they got earlier...I
asked what his name was (knowing it was
me)...and I told them...." That is really
odd, because this is the first complaint
I've ever had on him. He's one of our best
techs..." . I finally got their issue
resolved, but they never did know it was
me.....
true story!
W-B
Date:
January 11, 2004 @ 9:18 PM
And there is a relation to this RIAA
nonsense viz this outsourcing and the
"jobless recovery" -- without jobs, people
can't earn real money; without real money,
they can't spend on items like CD's or
DVD's, hence sales continue to go down, and
the multinational entertainment-media
complex (and their respective alphabet-soup
lobbies) will repeat their Big Lie of "it's
all piracy, stupid," and the vicious cycle
will repeat itself yet again, with more
calls for discriminatory, exclusionary "DRM"
technology that would be used AGAINST us
all, not to mention socialistic Big
Government interference in various
digital-based systems. In other words, it
isn't so much "free music" as free trade
that has brought CD sales plummeting.
compmore
Date:
January 11, 2004 @ 9:36 PM
Code. lol we could do a whole thread on
this if not a website. check out my site,
when I put it up Cyberrep management was so
pissed they threatened to fire instantly
anyone they saw who had that site on their
screens. they went crazy trying to figure
out who did it.
DeadMan2003
Date:
January 11, 2004 @ 10:08 PM
I must apologise for my parody. But after
all it was just that. A parody. I used to
work for ICL Sorbus support. Stuffed into my
little cubicle working strange shifts and
adhering to strict call times (We also had
those nasty led boards on the walls) with an
overseer on an elevated platform. We
supported Escom PC's initially (Anyone who
remembers them before they went belly up
will know how shite they were. Good riddance
to crap I say).
It was a horrible job and I was always
catching flack from the management. I had to
offload someone to a supervisor once. A mad
abusive Scotsman who made me blow a circuit.
It's not a nice job. Never again.
Justin42980
Date:
January 11, 2004 @ 10:13 PM
When companies outsource it's called a free
market economy, but when people who need
less expensive prescription drugs order from
Canada it's called illegal... Makes you
wonder who the government is actually
working for, themselves... They called it
"unsafe" to order from Canada, and we all
know how canadian drugs are unsafe
(sarcasm)....
compmore
Date:
January 11, 2004 @ 10:44 PM
great point Justin
inlivingcolour
Date:
January 11, 2004 @ 10:48 PM
What's crazy is I was almost hired at the San
Jose, Calif facility for Earthlink. Also, I
had to get tech help from earthlink. It
seemed kinda slow.
purfus
Date:
January 11, 2004 @ 11:23 PM
"I don't like what Earthlink is doing, but
hey, I still have a job, I get $10 an hour,
and it keeps me happy."
I think you missed the main point. They are
outsourcing jobs. That means you are less
likely to get 10 dollars an hour. Which,
unless my reasoning is completely
inverted, will make you unhappy.
Is it their right to do so? Of course, it is
supposed to be a free market. But as justin
pointed out the balance of freedoms is
dramatically shifted towards the firm and
from the consumer. Why? I have no idea.
But that does not negate the fact that
something should be done about it. Our
government is intended to maintain our
society. We will not maintain it, and certainly
not a free economy as it was
intended to be by its founders, if the
balance of power continues to shift as it
has.
purfus
Date:
January 11, 2004 @ 11:25 PM
Also this does have quite a bit to do with
the RIAA. The problems we as consumers have
with the RIAA are nothing more than a
symptom of a serious disorder in our society.
dave109100
Date:
January 12, 2004 @ 1:35 AM
"you honestly think a democrat or
indepentant would stop it?? It's an issue of
corporate greed, money, and special
intrests. doesn't matter who's in office."
true, but if this continues someone will
have to speak out against it to get votes.
I honestly just wanted to see what someone
would say.
"I don't like what Earthlink is doing, but
hey, I still have a job, I get $10 an hour,
and it keeps me happy."
In MN you wouldn't be able to afford much
sadly. If your happy thats cool, but i'd
rather have high paying jobs here.
purfus
Date:
January 12, 2004 @ 1:46 AM
10 bucks an hour is pretty damn good for the
area I live in. Most wages here are at or
within a dollar of minimum which is 6.25.
There are a lot of people here that rely
on those 10/hour telemarketing jobs and
pulling them out is only putting more people
in the unemployment line.
Litheon
Date:
January 12, 2004 @ 3:36 AM
Maybe we should just all leave this country
and go to India, Mexico, Bangladesh or
wherever. At least we'd have jobs.
spareme
Date:
January 12, 2004 @ 7:42 AM
---The problems we as consumers have with
the RIAA are nothing more than a symptom of
a serious disorder in our society.
Why yes, people who think they have the
right to steal anything digital they want.
tds67
Date:
January 12, 2004 @ 9:33 AM
Economists expected 150,000 new jobs to be
created last quarter, but only 1,000
materialized.
I saw "free trade" for what it was 12 years
ago...a corporate grab bag, the prize being
cheap, sweatshop labor in foreign countries
and the legitimizing of being disloyal to
the country that allowed these companies to
exist in the first place.
One particular argument for free trade that
was made 12 years ago was this: Since
offshore labor costs will be reduced (like
$2.00 parts and labor to make a pair of
tennis shoes in China, for example), the
American consumer will benefit from lower
costs. But after the NAFTA and GATT
treaties passed, the American consumer was
told "no, silly, the price of a product is
NOT determined by labor costs...it's
determined by the law of supply and demand"
(hence tennis shoes are still as expensive
as before these "free trade" treaties, even
though the cost of making them has been
reduced).
pumpgod
Date:
January 12, 2004 @ 10:02 AM
I guess when the CEOs and board members are
regularly replaced with someone from
India (Delphia can do this now), and their
bonuses get cut from it, then it will be
the end of the 2 year wonders making stupid
decisions like this. Of course if they don't
do it, one of the foreign subsidiaries of GE,
GM etc will come up with the idea. "Short
term business solutions create long term
problems for a company."
purfus
Date:
January 12, 2004 @ 12:43 PM
"Why yes, people who think they have the
right to steal anything digital they want.
"
I think it is a bit deeper than that. I
also see many more facets of the problem
relating to much more than copyright issues.
I personally do not feel that the people of
this country are the root of all the
problems. If they are, then it is time the
government and suppliers adjust to their
people. Do you seriously think they can
customize their consumer base to something
they want, while at the same time eliminating
the people's jobs? I ask this: How many
people have lost their jobs due to copyright
infringement? Now, how many people have
lost their jobs due to outsourcing and firms
tightening their belts for the sheer sake of
more profits?
purfus
Date:
January 12, 2004 @ 1:27 PM
Oh and while we are on the subject of
misguided perceptions of rights: What gives
producers the right to think they can use
powerful psychological influence to make
people buy their products at the highest
possible prices? Prices that are high enough
to break people's wallets and cause parents
to disappoint their children, who are
convinced due to the massive advertising
campaigns and other influences put on by
those firms. What gives them the right to
force us into a certain way of thought, then
strip us of cash simply because they can?
And don't tell me it's capitalism, because what
has been done goes far beyond capitalization.
tds67
Date:
January 12, 2004 @ 1:58 PM
I'll take that line of reasoning one step
further, purfus...do you remember the
controversy about the possibility of
subliminal advertising on TV and at the
movies? You know, the one where a picture
frame is shown fast enough for the
subconscious to pick up, but not the
conscious mind? Would this form of
influence be considered "marketing" and
"capitalism" by most people? I think
not...that smacks of mind
control/manipulation to me. So why should
TV and radio advertising be considered less
sinister than subliminal advertising?
Remye
Date:
January 16, 2004 @ 9:43 AM
I managed to piss off Gateway so bad a few
weeks ago they were considering canceling my
service agreement. I called for a problem
I've been having ever since I was given this
computer, and I couldn't understand the
m'f'er on the phone. I told him (her?) I
wanted to speak to someone I could
understand, and got transferred to some
OTHER department. I was on the phone for an
hour, and got nowhere, so I asked to speak
to a supervisor, preferably someone who
spoke English I could understand, w/o the
Indian accent (so thick I couldn't figure
out half of what was being said).
I was immediately transferred to yet ANOTHER
supervisor (who spoke perfect English btw)
who told me that if I was going to make
"racist slurs and comments" to the techs
that my contract would be canceled. I told
em fat chance, as my service contract is
part of a rather LARGE gov't contract for
vets. I've not heard anything else yet, but
the guy did suggest that I drive to Boston
(3 hours) for my tech support from now on.
Go figger. I'm actually getting a local
company to wipe this thing clean and buying
my own version of XP, w/o all the gateway
bullshit added in.
Point is, if they are going to outsource to
other countries, then they should be willing
to deal with customers who WANT tech
support, who have (somehow anyway) paid for
tech support, and want to be able to
understand the solutions that are presented,
crappy as they may be.
Since when did "profit" mean "a way to be
compensated for lack of service"???
Blog about technology, media and other interesting tidbits.
Luckily, from a comment on the QuickGraph CodeProject article, I found the Microsoft Research project called GLEE, which certainly looked a better choice. The limitation is that you can only use it in non-commercial applications.
Before continuing on our Method Visualizer, let's do a small sample with GLEE.
First, download and extract the GLEE assemblies.
Now fire up the IronPython shell from the same directory and add the necessary references.
>>> import clr
>>> clr.AddReference("Microsoft.GLEE")
>>> clr.AddReference("Microsoft.GLEE.Drawing")
>>> clr.AddReference("Microsoft.GLEE.GraphViewerGDI")
>>> clr.AddReference("System.Drawing")
>>> import Microsoft
As expected, the object model has a graph model which is the central point of reference.
>>> graph = Microsoft.Glee.Drawing.Graph("Sample Graph")
Once you have the graph object, which is like a drawing surface, you can start adding edges to it.
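As a quick aside before wiring up GLEE: the bookkeeping that AddEdge performs can be mimicked in plain Python. This is a minimal sketch for illustration only — no GLEE assemblies are needed, and the node names are just the ones we're about to use below:

```python
# Plain-Python sketch of what Graph.AddEdge keeps track of:
# a map from each node name to the list of nodes it points at.
adj = {}

def add_edge(source, target):
    # Record a directed edge and make sure both endpoints exist.
    adj.setdefault(source, []).append(target)
    adj.setdefault(target, [])

add_edge("PointA", "PointB")
add_edge("PointA", "PointC")
print(sorted(adj["PointA"]))  # ['PointB', 'PointC']
```

GLEE of course does far more than this (layout and rendering), but the underlying edge model is just this kind of directed adjacency.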
>>> graph.AddEdge.__doc__
'IEdge AddEdge(self, str source, str edgeLabel, str target)\r\nIEdge AddEdge(self, str source, str target)'
>>> graph.AddEdge("PointA", "PointB")
<Microsoft.Glee.Drawing.Edge object at 0x000000000000002B ["PointA" -> "PointB"[color="#000000ff",fontcolor="#000000ff",,]]>
>>> graph.AddEdge("PointA", "PointC")
<Microsoft.Glee.Drawing.Edge object at 0x000000000000002C ["PointA" -> "PointC"[color="#000000ff",fontcolor="#000000ff",,]]>
We have 3 points on the graph now. Point A is connected to both PointB and PointC. Now, we'll render it and generate a diagram of the graph.
>>> renderer = Microsoft.Glee.GraphViewerGdi.GraphRenderer(graph)
>>> renderer.CalculateLayout()
After this, we can either attach it to a control that comes with the download (see this in action in the sample), or we can render it as an image and save it to a file. Here, I use the second method.
To use the bitmap object, we need to import the System.Drawing namespace
>>> from System.Drawing import *
>>> from System.Drawing.Imaging import *
>>> bmp = Bitmap(graph.Width, graph.Height, System.Drawing.Imaging.PixelFormat.Format32bppArgb)
>>> renderer.Render(bmp)
>>> bmp.Save("IronGlee.png")
To add color, we need to find the node (by the name we used in AddEdge) and set its fill color.
>>> graph.FindNode("PointA").NodeAttribute.Fillcolor = Microsoft.Glee.Drawing.Color.LightGreen
>>> renderer = Microsoft.Glee.GraphViewerGdi.GraphRenderer(graph)
>>> renderer.Render(bmp)
>>> bmp.Save("IronGleeColored.png")
GLEE looked like an interesting project which, apart from its license, doesn't seem to have any issues. Next, we'll use GLEE to generate a method tree.
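The GLEE side is the easy half; the other half of a method visualizer is turning call information into edges. Here is a hedged sketch of that step — the call map below is hand-written purely for illustration (the real data would come from inspecting method bodies), and the helper name is my own. Flattening the map gives exactly the (source, target) pairs graph.AddEdge expects:

```python
# Hypothetical call map for illustration only:
# method name -> list of methods it calls.
calls = {
    "Main": ["LoadConfig", "Run"],
    "Run": ["Step"],
}

def call_edges(call_map):
    # Flatten the map into (caller, callee) pairs,
    # one pair per eventual graph.AddEdge(caller, callee) call.
    return [(caller, callee)
            for caller, callees in sorted(call_map.items())
            for callee in callees]

edges = call_edges(calls)
print(edges)  # [('Main', 'LoadConfig'), ('Main', 'Run'), ('Run', 'Step')]
```

Feeding each pair to graph.AddEdge would then produce the method tree for the renderer, the same way the PointA/PointB/PointC edges did above.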